Jae's Blog

Fedora 42 and new RSS reader

A few weeks ago, I finally made the final switch and completely nuked my Windows 11 install from my workstation.

The last thing that was keeping me on Windows, VR, is now pretty much painless on Linux. All of this thanks to the guides provided by Linux VR Adventures, and most particularly the software Envision, which lets you set up and start everything in a very painless way.

As I mentioned multiple times in the past, my distro of choice is Fedora, given it’s really easy to install, use and maintain.

This week we also got some great news: a new Fedora version, 42, which brings a bunch of cool stuff.

As expected, the upgrade was painless, and now my workstation is shinier than ever (just ignore my awful PC building skills and the fact that some USB ports might be fried by now).
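For reference, the usual dnf system-upgrade flow looks roughly like this (double-check the Fedora docs for the exact steps for your starting release):

sudo dnf upgrade --refresh
sudo dnf system-upgrade download --releasever=42
sudo dnf system-upgrade reboot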

I also switched RSS readers and am now using NewsFlash in combination with MiniFlux.

So far the setup has worked quite well, and I can access all my feeds on all my devices without having to copy config files around, which is a big win in my book.

Free spell checking

You probably know about Grammarly and other web browser add-ons that basically act as fancier spell checkers.
They’re expensive, a bit opaque, and you can’t really integrate them into whatever you want.

Well today, I’ll talk about LanguageTool. While it offers full-blown paid plans, what is little known is that you can run it locally and host your own spell checker for free!

Setting this up

I personally use Fedora Linux, so this tutorial will assume you have a similar setup. This particular setup should work on any systemd-enabled distribution.

First, you’ll need to download the latest LanguageTool HTTP server snapshot from their mirror, which comes as a zip file. Unzip it, which should leave you with a directory named something like LanguageTool-6.6-SNAPSHOT (replace 6.6-SNAPSHOT with the version you downloaded).

For simplicity’s sake, let’s rename LanguageTool-6.6-SNAPSHOT to languagetool and move it to our home folder with:

mv LanguageTool-6.6-SNAPSHOT languagetool
mv languagetool ~

You can also go into that directory using cd ~/languagetool and type pwd to get its full path; we’ll need it a bit later.

Now, time to create a systemd service to start it automatically. First, we’re gonna have to create the folder ~/.config/systemd/user/ using mkdir -p ~/.config/systemd/user/.

Once this is done, you can create the languagetool.service file using your favourite editor, in my case Sublime Text: subl ~/.config/systemd/user/languagetool.service.

In there, you can put the following sample service file; feel free to tweak it according to your needs, but this should be good for most use cases (replace jae with your username):

[Unit]
Description=LanguageTool server
After=graphical.target

[Service]
WorkingDirectory=/home/jae/languagetool
ExecStart=java -cp languagetool-server.jar org.languagetool.server.HTTPServer --config server.properties --port 8081 --allow-origin

[Install]
WantedBy=default.target

Before anything else, go into the ~/languagetool directory and create the server.properties file using: touch ~/languagetool/server.properties.
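If the user systemd instance was already running before you created the unit file, you may also need to make it pick up the new file:

systemctl --user daemon-reload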

Now time to start and enable the service:

systemctl --user start languagetool
systemctl --user enable languagetool

And there you go, your local LanguageTool server will be started automatically when you log into your session.
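To make sure the server is actually up and answering, you can query its check endpoint directly; with the port from the service file above, something like this should return a JSON response flagging the mistake:

curl -s --data "language=en-US" --data-urlencode "text=This is an test." http://localhost:8081/v2/check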

Now, as a finishing touch, you can install the Firefox add-on. Once installed, go into its settings, scroll all the way to the bottom, click on the “advanced settings” tab, and swap the “LanguageTool server” option to “local server”.

Congratulations, you now have an amazing spell checker in your browser for 100% free.

If you’re curious about how exactly that stuff works, you can see the full LanguageTool source on GitHub.

If you are a developer, check out their API docs to build stuff around it.

Making your own web corner

So, you’ve finally bought yourself a domain (or are thinking about it), got a server at home, and now you want to host your own corner of the web?

Great! Allow me to be your guide through this journey.

Pre-requisites

You’ll need a bunch of stuff for this tutorial, including:

A domain

Your domain will be the public face of your corner and how people (and you) will access it, so choose it wisely.

To get a domain, you first need to choose a registrar to register it with. Registering a domain can cost anywhere from 5€ to 5000€.

Some good registrars include:

  • Spaceship – Really cheap, you can get a .eu for just under 5€ there
  • Hetzner – Well-known hosting service & DNS registrar
  • PorkBun – Well-known, huge selection, cheap sometimes
  • Inwx – German registrar, good service

If your friends also have their own corners, ask them about their registrars; maybe they have had good experiences with ones not listed here!

From now on, we will assume we just bought example.com as a domain. Of course, in the next steps, replace example.com with the domain you just got.

A server

Now here comes the part where you have to choose where your stuff will be hosted. There are multiple ways of doing this:

  • Run a spare computer at home (this tutorial will focus on this)
  • Use a hosting provider like Hetzner or Infomaniak (similar to the first option, so this tutorial also applies)
  • Use GitLab, GitHub or Codeberg pages to host a static website (not covered in this tutorial, coming soon!)

In this example, we assume you have a spare computer at home running Debian Linux.

The boring networking part

DNS stands for Domain Name System. You can read more about it on howdns.works, but the basic gist is:

  • IP addresses are hard for people to remember as-is
  • DNS puts in relation a domain name to an IP address
  • For instance: j4.lc will point to 2a12:4946:9900:f00::f00 when being looked up
  • There are a lot of DNS record types, but the most important ones here are A and AAAA
  • An A record maps a domain name to an IPv4 address, for instance: j4.lc -> 95.217.179.88
  • An AAAA record maps a domain name to an IPv6 address, for instance: j4.lc -> 2a12:4946:9900:f00::f00 (you can check both yourself with dig, as shown below)
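If you have dig installed (from bind-utils or dnsutils depending on your distro), you can look these records up yourself:

dig +short j4.lc A
dig +short j4.lc AAAA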

Pointing your domain to your server

First, let’s figure out the public IP of your server. For this, you can execute:

curl -4 ifconfig.me
curl -6 ifconfig.me

If the second command fails, this means your ISP doesn’t support IPv6. In any case, write those IPs down in a notepad and let’s move on.

You will then need to add a DNS record on your domain to point to your server. To do this, log onto your registrar and head to the DNS control panel.

When adding a record, you will have a few properties to fill:

  • name – Which subdomain you want to use. Setting this to @ means the root of the domain, in our case example.com; setting it to anything else, for instance awoo, will “create” the subdomain awoo.example.com and make it point to your IP instead of the root
  • type – We’ve seen this earlier; we want this to be A or AAAA depending on whether we’re adding an IPv4 or an IPv6 address (both can be present at the same time)
  • ttl – This is the time (in seconds) the record will stay cached. Leave it as-is. It is also how long you will have to wait after changing the record before you see the change
  • data – The IP address you want the record to point to
  • proxy status – This is for CloudFlare only; it controls whether traffic to our site goes through CloudFlare. Let’s disable this for now

Note: you do not need to specify the port of your application in the record. It is up to the app you are using (for instance, a web browser) to query the right ports. A record pointing to 95.217.179.88:8080, for instance, would be invalid.

In our example, we can set everything (once again replace with your own data):

  • Name: @
  • Type: AAAA
  • TTL: 60 (default)
  • Data: 2a12:4946:9900:f00::f00

This means our root domain example.com will resolve to 2a12:4946:9900:f00::f00.

We can also add an A record to provide IPv4 connectivity:

  • Name: @
  • Type: A
  • TTL: 60 (default)
  • Data: 95.217.179.88

Opening ports

Now that your domain is pointing to your home server, you will need to open a few ports to make it accessible from the outside.

First, here is the list of ports you need:

  • 80 is the default HTTP port that you will need later to obtain SSL certificates
  • 443 is the default HTTPS port that you will need to serve your corner

You will then need to allow those two ports in two places:

  • Your OS firewall, which can usually be done through ufw (a minimal example follows this list)
  • Your router’s settings (also called “port opening”, “port forwarding” and a lot of other names); make sure the two ports are open on both TCP and UDP and pointing to your home server
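With ufw, for instance, the firewall side could look roughly like this (run on the server itself; ufw allow without a protocol opens both TCP and UDP):

sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable
sudo ufw status verbose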

Warning: in some countries, some ISPs will not allow you to open those two ports. This is probably because you are behind something called CGNAT, which lets ISPs share the same IP address between multiple customers.
If this is the case, call your ISP to get a proper IP that is not behind CGNAT. If that is not possible, you will have to either rent a server at a hosting provider or get a tunnel.

Once this is done, congratulations, the external world can now reach you.

Web server shenanigans

Now, to serve your corner to the external world, you will need a web server. In this case, we will use Caddy which is really easy to use and takes care of HTTPS renewals for you.

Installing a web server

First, we’re gonna need to install Caddy, it goes a bit like this:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy

After doing this, make sure Caddy is started and enabled (meaning it will start with the server) by doing:

sudo systemctl start caddy
sudo systemctl enable caddy

Now, if you visit your domain, you will see the example Caddy page, meaning you are now online!

Editing the web server configuration

The configuration for Caddy is located at /etc/caddy/Caddyfile. You can see basic examples of it on the Caddy documentation website.

In our example, we’re gonna use the following simple configuration (as always, replace example.com by your domain):

https://example.com {
    root * /var/www/corner
    file_server
}

Now, create the directory /var/www/corner and add your website files in there, for instance an index.html.
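If you don’t have a site ready yet, you can drop in a throwaway placeholder page just to confirm everything works (the content here is obviously just an example):

sudo mkdir -p /var/www/corner
echo '<h1>Hello from my corner!</h1>' | sudo tee /var/www/corner/index.html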

Restart Caddy using sudo systemctl restart caddy, wait a minute for the HTTPS certificate to be issued, and you’re in business: you now have your own corner on the internet!
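From any machine, you can double-check that the certificate was issued and the site is being served:

curl -I https://example.com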

Have fun editing it and sharing it with your friends.
A blog post will be published later this month on how to create your own blog (for free) using GitLab pages!

Reading more

Here are some links to help you get started with your newfound internet home:

If you feel like I missed something, please do contact me and I will add it there.

My Signal config & switching to it

A few months ago, I started using Signal again. The messenger has evolved quite a lot since I last used it; for instance, usernames weren’t even a thing back then, which is mainly what drove me off the platform.

But now that we have them, I feel a bit safer using it, knowing people don’t need my direct phone number to add me anymore. As a bonus, it’s now possible to import extended chat history when linking a desktop app, which is quite nice.

If you don’t know what Signal is, it’s quite simple: it’s an encrypted chat app. As it stands, it’s also the safest (broad sense there, please don’t hurt me) option at the moment as it has the most eyeballs on it and sane encryption. Soatok talks about that quite often on his blog as well.

Because I have no imagination whatsoever, here is how I use Signal (basically, the configuration keys I use for it, grouped by settings section; please note that this is for the iOS version, I have no clue whether the Android one has the same configuration keys).

Privacy

Phone number:

  • Who can see my phone number: nobody
  • Who can find me by number: nobody

Advanced:

  • Always relay calls: on

General:

  • Read receipts: off
  • Typing indicators: off
  • Disappearing messages: 4w
  • Hide screen in app switcher: on
  • Screen lock: on
  • Lock screen timeout: 5 minutes
  • Show calls in recent: off

Chats

General:

  • Generate links previews: off
  • Share contacts with iOS: off
  • Use phone contacts photos: off

Stories

General:

  • Turn off stories

Data usage

General:

  • Sent media quality: high

Coinciding with this post, I have turned off the ability for new people to DM me on Telegram; from now on, personal contact will have to go through Signal.

If we have an existing DM and you want to switch to Signal, use that DM thread to ask me for my username. Otherwise, either email me yours or ping me on a common chat platform. Remember, none of us have to know each other’s phone numbers anymore; just set up a username if you haven’t already.

The making of the Resonite sessions Discord bot

If you are a Resonite player and are in the Discord guild, you might be familiar with the #active-sessions channel, in which a bot displays the 10 most popular sessions on Resonite as well as some stats.

Screenshot of Discord showing game stats as well as an embed for a single session at the bottom.

What you might not know is that I’m the author of this bot, which I originally started as just a small one-shot project to have some fun.

For some background, when Neos was still a thing (technically it still is, but in what a state), a bot like this existed in the Discord guild, showing sessions and game stats like server metrics and session counts.

When Resonite was released, the channel was there; however, no metrics or posts were ever made, the word being that the bot would be revived at some point in the future(TM).

At the time, I was a bit bored and wanted a new project, so I decided to start experimenting with .NET Discord bots and in turn set myself the objective of re-creating the original bot.

Why .NET? One reason is that I use .NET all the time as it’s fairly easy to use; the other is that most of the Resonite community also knows .NET, since it’s what mods and whatnot are made with.

The bot itself is fairly simple and divided into multiple parts built around the .NET Hosted Services:

  • Discord client service – handles the connectivity to Discord
  • Interaction service – provides the command handler for the few commands the bot has
  • Startup service – sets up some bot must-haves like the logger and other data
  • Message service – the core functionality of the bot that runs all the logic that makes the magic happen
  • Healthcheck service – this one is optional, but important when hosting the bot in Docker

Let’s go through all of those in detail and see why they were made that way.

I’m gonna group the Discord client, interaction and startup services together as they really belong to the same component: handling Discord-related shenanigans.

The bot has a few commands:

  • /info which shows some info about the bot
  • /setchannel which sets the channel in which you want the notifications
  • /which which shows which channel is set as the notification one
  • /unregister which unsets the notification channel
  • /setting allows you to toggle some settings like if you want thumbnails or not

All of those commands (with the exception of /info) are admin-only.

This part is honestly fairly boring and straightforward; it just takes a Discord bot token, connects to the chat service and logs in.

// Log in with the bot token, then start the Discord client.
await _client.LoginAsync(TokenType.Bot, token);
await _client.StartAsync();

The message service

Now this is the juicy stuff.

This part handles:

  • Retrieving the info from the Resonite API
  • Formatting it neatly into a message
  • Generating embeds from the session info
  • Storing which messages were sent where in a DB
  • Checking if messages can be updated
  • Updating said messages with the new data

First off, the bot uses a relatively simple SQLite DB to avoid everything being hardcoded. The first versions were using direct channel IDs, but this is far from ideal if you want something modular without having to host multiple copies.

The DB basically stores the guild ID, channel ID and settings for the bot to send the updates in the right place, containing what we want.

Speaking of settings, there is only one so far: show the session thumbnails or not. The reason for this is a difference between the Discord & Resonite ToS. While nipples aren’t considered sexual on Resonite, they are on Discord, meaning having a session thumbnail showing nipples on Discord without gating the channel to 18+ users would be violating the ToS of the platform.

One thing I am quite proud of is how stable the bot is; nowadays it rarely, if ever, crashes on its own, which it used to do quite often.

The bot is made to handle errors gracefully and never shut down or crash the program unless something really, really warrants it. When running this bot, all the errors that would normally crash it are instead logged with an error message and stacktrace to make it easy to debug.

Another thing to note is that the database schema hasn’t been updated since the bot was first released, and touching it is considered a very last resort. Having the DB break during an upgrade would be disastrous, requiring all admins to re-set the notification channel. As they say, if it ain’t broke, don’t fix it.

Out of all the variables in the mix, Discord is the most unstable one, having lots of outages, sometimes lasting hours at a time, just being slow for no reason whatsoever or thinking existing things (such as channels or messages) don’t exist even though they do.

This is why the whole checking logic exists: it will first check whether the channel exists; if it does, it will check whether the message exists, and if so, try to update it. If it fails at any point, it will retry for a while, then delete the message and try to re-send it, and ultimately, if this fails, delete the channel from the DB, meaning the admin will have to set the notification channel again.

The retry logic was implemented after some issues arose with Discord:

  • The bot would forget about the message (because Discord said it didn’t exist anymore) and would send a second one in the same channel or over and over until restarted
  • Sometimes the checks would fail on first try and delete everything gracefully without notifying anybody
  • Bot would crash because everything “ceased” to exist

On the Resonite side, if an error happens while contacting the API, the bot will just skip this cycle and try updating the next time (one minute by default). This used to crash the bot (whoops) in the early days.

The latest addition was the Docker healthcheck, given that the bot recently crashed in the main Resonite guild (GH-3521) and no monitoring was triggered.

Now the bot runs a small HTTP server that simply returns the current date, which a curl instance checks every 30 seconds.
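I won’t reproduce the exact setup here, but conceptually the healthcheck boils down to a curl call that fails when the endpoint doesn’t answer (the port here is illustrative, not the bot’s real one):

curl -fsS http://localhost:8080/ || exit 1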

The CI/CD

It’s no secret that I love GitLab as a piece of software. I also work on and with it daily in my day job.

The CI/CD used in this project is extensive, but classic:

  • Secret detection
  • SAST scan
  • Dependency scanning
  • NuGet build
  • NuGet deployment
  • ARM64 and x86 Docker builds
  • Release publishing

The first three are kinda self-explanatory: they will warn if any secrets have been pushed, if any unsafe code paths are detected, or if any dependencies need updating.

Now, the most important thing to highlight is the separate Docker builds for the two architectures. I originally tried combining the builds into a single one, as you would do by specifying multiple architectures in buildx, however this did not work.

An error when building ARM on x86 (with virtualization) and vice versa would always arise, though the same command would work for other projects.

To avoid errors when doing things manually, the release process is also automated, triggering when a tag is detected. It will basically build as usual with the version tag and then publish the release following a markdown template. It will also automatically fill in some details, like image tags, from the changelog.


Fun fact, the bot is Open-Source under the MIT license, meaning you are welcome to host your own version.

Some stats about the project so far:

  • 47 pull requests merged
  • 97 commits (PRs are squashed)
  • 16 releases
  • 6+ months of existence
  • Roughly 10k users served in the Resonite Discord alone
  • Way too many hours sunk into overengineering this

What’s to come? Nothing much really, this bot has basically achieved its goal of having the active session list published somewhere.

The project is basically in maintenance mode, housed under the Crystalline Shard development group, of which I am one of the founders.

Now for the self-criticism: if I had to restart the project from scratch, I would probably opt for an even less complex design. Some services are really huge files that have only grown in complexity over time.

It’s currently nothing too bad, but I think a refactor would be warranted to decrease complexity and make the bot more maintainable.

I wouldn’t touch the language though, since the bot’s footprint is really small, only about 30MB to 50MB of RAM used in total during normal runtime.

In the end, this bot has been a really fun project to make and maintain, and I’m extremely happy that it got canonized and is used in an official manner.

For a quick roadmap:

  • GL-33 have someone completely review the codebase
  • GL-39 disable unneeded intents for the bot (on hold for evaluation)
  • GL-48 solve a mystery crash that randomly happens… sometimes

Let’s check back in about 6 more months and see how everything has evolved, with even more bugs squashed by then.

A week of Framework

As per my usual tradition when I receive a new cool device, I have to write about it a week later, then either three months or a year later (as I did previously with my Index, or work MacBook Pro M1).

As you may know, my previous laptop was a ThinkPad x200.
It’s not exactly a young machine, being around 16 years old now.

As I started working on more demanding projects (mainly C# ones), the x200 simply wasn’t enough (it couldn’t even run a modern web browser any more).

This is why I decided to scout for a new laptop.

Fear not, the x200 is not going to waste! It will now be used mainly to test Libreboot and other stuff like that.

Now, I had a bunch of criteria for the new laptop:

  • Can last as long as the x200
  • Can run my IDEs correctly (namely Sublime Text and JetBrains Rider)
  • Has a proper GPU (to run VR stuff)
  • Has modern hardware in general

The Framework 16, though expensive, ticked a lot of those boxes:

  • Can last long by sheer virtue of being repairable
  • Has modern hardware, therefore can run my IDEs correctly
  • Can be upgraded to have a dedicated GPU

So for around €2100 (ouch, my wallet D:), this is what you get with Framework:

  • AMD Ryzen 7 7840HS (8c16t)
  • Radeon 780M integrated graphics (I decided to buy the GPU upgrade later)
  • 32GB of RAM (single stick to leave room for upgrades)
  • Wi-Fi 6
  • A 2560×1600@165Hz display
  • 2x USB-C expansion cards
  • 2x USB-A expansion cards
  • 1x HDMI expansion card
  • 1x DisplayPort expansion card
  • 1TB of m.2 storage
  • Power supply
  • Keypad module

Overall, pretty good specs by my standards for a laptop.
Before you say anything: the HDMI is for a second screen, the DisplayPort is for VR headsets.

To save some money, I also decided to get the DIY edition without any OS and then install Fedora on it.
The laptop itself was painless to build, even fun. The only issue was my hands trembling when doing anything requiring a bit of precision (in this instance, handling a screwdriver with small screws), but that’s a me issue.

There was a small issue on first boots where the keyboard wasn’t responding at all, but taking it apart and verifying all the connections one by one made it work.

Fedora is one of the supported OSes on the Framework, alongside Ubuntu and Windows. I would have gone with Arch, however I wanted a headache-free setup this time, which Fedora offered.

During this week, we actually got a new BIOS upgrade for the 16, version 3.05, fixing some security issues and adding a cool new feature to extend battery longevity.

Upgrading the BIOS was pretty much painless thanks to fwupdmgr and was as easy as:

$ fwupdmgr refresh --force
$ fwupdmgr get-updates
$ fwupdmgr update

Then, being patient.
I remember having to fiddle with USB keys a few years back, so this CLI utility is very welcome.

The battery life itself is decent, never really running out while working on stuff.

Fedora itself is also a breeze to use, having GUIs for everything, which simplifies things a lot.
I do miss my good old XFCE4 a bit, but GNOME does the job just fine as well.

Another thing I totally forgot to do after the first install was to get the EasyEffects profile, which makes a huge difference on the laptop’s audio.

Overall, I’m very satisfied with what I got; a few things remain to be seen:

  • Will new hardware upgrades come out (for instance, additional GPU modules)
  • Will any other companies start making expansion cards (instead of relying on Framework alone; though the community already made a lot of those)
  • Will Framework as a company remain in business long enough to offer the longevity I want

But those can only be answered with time. It goes without saying that most hardware replacements (or upgrades) (RAM, storage, etc.) can be done with off-the-shelf components and not just ones sold by Framework.

For now, I’ll keep using it, and I’ll see you peeps in either three months or a year (or both) if I don’t forget for the traditional update!

Liberate your news with RSS

RSS, standing for Really Simple Syndication, is a really good and easy way to get all your news right onto your computer.

While the standard is fairly old now, being older than me, it still serves its purpose wonderfully.

To put it simply, RSS allows you to get news from feeds made available by websites and aggregate them into a single piece of software.

Some good RSS readers include NewsFlash and RSSGuard, both mentioned elsewhere on this blog, or MiniFlux if you prefer a self-hosted web reader.

Finding RSS feeds is also easy. You can find them by searching online or looking for the RSS icon on websites (a small dot with three lines radiating out, like a broadcast signal).

Some readers like RSSGuard also have a feature to discover RSS feeds on pages.

Some nice feeds I personally watch are:

  • Blender – https://blender.org/feed
  • Resonite – https://store.steampowered.com/feeds/news/app/2519830
  • Acrouzet (YouTube) – https://www.youtube.com/feeds/videos.xml?channel_id=UClv1kZDpIA9LcXPYY4KTU-w
  • The Servo blog – https://servo.org/blog/feed.xml
  • The Matrix blog – http://matrix.org/blog/feed/
  • Bellingcat – http://www.bellingcat.com/category/news/feed/rdf

Some tricks as well:

  • Any website using WordPress will have a feed at the URL /feed
  • You can watch updates for any Steam game or app using https://store.steampowered.com/feeds/news/app/<appid> (and replacing <appid> by the ID of the game which you can find in the store URL)
  • If a website doesn’t directly offer an RSS feed, you can build one by using something like rss-bridge or RSSHub
  • You can follow any YouTube channel using RSS by using https://www.youtube.com/feeds/videos.xml?channel_id=<channelid>
  • Most blogs also have a RSS feed (don’t forget to subscribe to this one to not miss anything in the future :3)
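To quickly check that one of these URLs actually serves a feed, you can just fetch it and look at the first few lines; for instance, for the YouTube channel listed above:

curl -s "https://www.youtube.com/feeds/videos.xml?channel_id=UClv1kZDpIA9LcXPYY4KTU-w" | head -n 5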

Overall, RSS is an amazing technology, supported by websites you wouldn’t even suspect.
I can only encourage using it as it’s lightweight, easy, and ad-free (at least in my experience).

I am not interested in AI

Recently, I’ve received some e-mails from so-called “AI” startups, wanting me to join them to develop their product.

I will be blunt: don’t bother. I’m not interested.

I’m not interested in your startup that resells the OpenAI API under a fancy interface.
I’m not interested in your startup that has no plans for the future beyond “we’ll see when we get more funding”.
I’m not interested in your startup that wastes incredible amounts of resources just to hallucinate results and for the whole thing to fall down in a year when the funding expires.

And first and foremost, I’m not interested in AI in general.

While I did tinker with it when it was new, it’s pretty much useless outside of generating boilerplate.

Here, we make fresh, organic, handmade software.

Jae 2012-2025, CC BY-SA 4.0 unless stated otherwise.