Jae's Blog

Setting up NetNewsWire with Miniflux

As I previously mentioned, I use Miniflux to sync all my RSS feeds around.

Today, I discovered NetNewsWire, a free and open-source app that lets you read and sync RSS feeds on iOS. One issue: when opening the app, going into the settings, and trying to add an account, Miniflux is nowhere to be seen.

It’s actually really easy to sync your feeds there. First, log into your Miniflux account and head to the settings.

Head to “Integrations” and scroll down until you see the “Google Reader” section. There, enable it, then set a username and password. Don’t forget to click “update” when this is done.

Screenshot showing the "Google Reader" section with username and password fields.

When this is done, open NetNewsWire on your iPhone, go into the settings, and in the “Accounts” section, tap “Add account”. In the list that opens, select “FreshRSS”.

There, put the username and password you set, and as the URL, https://reader.miniflux.app.

Screenshot of the setup screen in the app showing the username, password and URL.

Once this is done, just hit “Add Account” and you should be all set.
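If you want to sanity-check the credentials from a terminal first, Miniflux’s Google Reader integration speaks the classic Google Reader API, so a login call like this should work (myuser and mypassword are placeholders for the credentials you just set; adjust the host if you self-host):

```shell
# Log in against the Google Reader-compatible endpoint exposed by Miniflux.
# A successful login should answer with SID=, LSID= and Auth= lines.
curl -s "https://reader.miniflux.app/accounts/ClientLogin" \
  -d "Email=myuser" \
  -d "Passwd=mypassword"
```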

As an additional step, I can recommend turning off “On my iPhone” in the accounts section to avoid the clutter of the default feeds being added.

You now have all your feeds synced to your phone.

Screenshot of NetNewsWire showing the 404 Media news feed.
For instance, the app showing the synced 404 Media feed.
Your own local copy of Wikipedia

Recently, Wikipedia has come under a lot of attacks from malicious political entities. Unlike the Wikimedia Foundation, these entities have far more resources and can harass projects such as this one into oblivion.

This is why it is more important than ever to get your own copy of Wikipedia (or any wiki, really) at home.

For this quick HOWTO, we’re gonna need a few things:

  • A computer
  • About 100 GB of free space on your hard drive
  • Some time on your hands (depending on how good your internet is)

My software of choice to have my own Wikipedia copy is Kiwix. It is Free and Open-Source, and has a built-in downloader, allowing you to easily select a language and wiki to download.

My recommended way to install Kiwix is through Flatpak, which makes the process easier. If, like me, you want the articles to be downloaded somewhere else, use Flatseal to allow the application to write to a specific location, a second hard drive for instance.
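Assuming you install from Flathub (org.kiwix.desktop is the application ID Kiwix publishes there), both the install and the storage permission can be done from a terminal; the path below is just an example second drive:

```shell
# Install the Kiwix desktop app from Flathub:
flatpak install flathub org.kiwix.desktop

# Allow it to write to another location (what Flatseal does under the hood):
flatpak override --user --filesystem=/mnt/data/kiwix org.kiwix.desktop
```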

If you don’t want to use Flatpak, see the options listed on Kiwix’s website; there is probably one that fits your use case.

When launching the software for the first time, you can change the download location by clicking the three-dot menu in the top right, then “Settings”, and finally changing the “Download directory”.

Once this is done, time to download some wikis. If it’s not already done, on the left of the main page, select “Online Files”, then select your languages and content types.

UI showing the selection of languages and content types, with French and English selected as well as “images” and “full article”.
How I download stuff

Do note that downloading Wikipedia with pictures takes around 100 GB, while without them it only takes 53 GB (for the English version, currently the largest).

Once the downloads are finished, click on “Clear” to remove all your filters, and swap to the “Local files” tab.

From there, you can open a specific wiki, search it, and basically use it as if it were online.

Screenshot of Kiwix showing the French Wikipedia page for the city of Annecy.

And this is how you can still access Wikipedia in case your internet randomly decides to stop, or if something worse were to happen.

Debugging SSL in GitLab Pages

Debugging GitLab Pages on a self-managed instance can be a hassle, especially if you’re only getting nebulous messages without any further information.

Do note that for this to work, you’ll need admin access to the machine hosting the main Rails app.

To find out exactly why Let’s Encrypt is failing to get a certificate, use the following command:

docker compose exec gitlab gitlab-ctl tail gitlab-rails --follow | grep -i "encrypt\|acme\|certificate"

It should output the exact Let’s Encrypt log and tell you if it’s hitting a wall somewhere. If you have a bare-metal instance, just remove the Docker part of the command and only use gitlab-ctl.
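For reference, on a bare-metal (Omnibus) install the same command boils down to:

```shell
# Same log tail, without Docker:
sudo gitlab-ctl tail gitlab-rails --follow | grep -i "encrypt\|acme\|certificate"
```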

Conditional Git config

A little-known feature of Git is that it supports conditional includes, letting you have, for instance, separate work and personal names and emails.

First, the ~/.gitconfig file:

[includeIf "gitdir:~/src/personal/"]
  path = ~/.gitconfig.personal

[includeIf "gitdir:~/src/work/"]
  path = ~/.gitconfig.work

After specifying this, you can then create two files, ~/.gitconfig.personal and ~/.gitconfig.work, containing:

[user]
    email = email@something.com
    name = MyName

and

[user]
    email = jae@consoso.com
    name = Very serious business person

Now, when you are in ~/src/personal/, the personal email and name will be used, and when in ~/src/work/, the work one will be.
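If you want to convince yourself this works, here is a self-contained sketch that rebuilds the setup in a throwaway home directory and prints the identity Git picks in each tree (note that includeIf "gitdir:" only matches once you are inside an actual repository, not just the bare directory):

```shell
#!/bin/sh
# Demonstrate includeIf in a throwaway HOME so nothing real is touched.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/src/personal/demo" "$HOME/src/work/demo"

# The same conditional config as above:
cat > "$HOME/.gitconfig" <<'EOF'
[includeIf "gitdir:~/src/personal/"]
  path = ~/.gitconfig.personal
[includeIf "gitdir:~/src/work/"]
  path = ~/.gitconfig.work
EOF
printf '[user]\n  email = email@something.com\n' > "$HOME/.gitconfig.personal"
printf '[user]\n  email = jae@consoso.com\n'     > "$HOME/.gitconfig.work"

# includeIf "gitdir:" only kicks in inside an actual repository:
git init -q "$HOME/src/work/demo"
git init -q "$HOME/src/personal/demo"

git -C "$HOME/src/work/demo" config user.email      # jae@consoso.com
git -C "$HOME/src/personal/demo" config user.email  # email@something.com
```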

Getting Steam game changelogs in your RSS reader

A little-known feature of Steam is that it offers RSS feeds for any app, game, or other product distributed through it.

The URL is also very simple to use:

https://store.steampowered.com/feeds/news/app/$AppID

You can then replace $AppID with the application ID of your game.

For instance, if we want to monitor Resonite, app ID 2519830, you will need the following URL:

https://store.steampowered.com/feeds/news/app/2519830

There, super easy!

Screenshot of the Resonite Steam RSS feed, showing the 2025.4.10.1305 changelog.
How it looks in an RSS reader
Fixing ffmpeg missing codec issues on Fedora

At some point, I had issues converting some files with ffmpeg on my Fedora install, particularly videos.

Turns out fixing this is really easy with the help of RPMFusion.

If you didn’t enable it during system installation, you can do so really easily with a single command, which installs the free and nonfree variants of the repository:

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Now, you can just swap ffmpeg for the RPMFusion one like so:

sudo dnf swap ffmpeg-free ffmpeg --allowerasing

This will install a bunch of codecs and replace the regular build of ffmpeg with a more permissive one (in terms of what you can do, not licensing).
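To confirm the swap worked, you can list the encoders the new build provides; x264 and x265 are examples of encoders the RPMFusion build ships:

```shell
# The RPMFusion build should list libx264/libx265 among its encoders:
ffmpeg -hide_banner -encoders | grep -iE "x264|x265"
```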

Keep in mind that if you want to keep your system free (as in freedom), you might not want to do this.

Configuring DNSSEC on systemd-resolved

Enabling DNSSEC on systemd-resolved is quite easy.

First, let’s create or edit /etc/systemd/resolved.conf.d/main.conf so it contains:

[Resolve]
DNSSEC=true

For good measure, you can also enable DoT (DNS Over TLS) in there, which you can use with something like DNS0.
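For instance, a drop-in enabling both DNSSEC and DoT with DNS0’s public resolvers could look like this (double-check the IPs and hostname against DNS0’s own documentation):

```ini
# /etc/systemd/resolved.conf.d/main.conf
[Resolve]
DNS=193.110.81.0#dns0.eu 185.253.5.0#dns0.eu
DNSSEC=true
DNSOverTLS=yes
```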

Restart systemd-resolved using systemctl restart systemd-resolved and voilà.

Now, if you type something like resolvectl query j4.lc, you should get an answer like so:

j4.lc: 95.217.179.88                           -- link: enp6s0
       2a12:4946:9900:f00::f00                 -- link: enp6s0

-- Information acquired via protocol DNS in 141.7ms.
-- Data is authenticated: yes; Data was acquired via local or encrypted transport: yes
-- Data from: network

On the contrary, if you try to query a domain which has an invalid signature, for instance with resolvectl query badsig.go.dnscheck.tools, you will get:

badsig.go.dnscheck.tools: resolve call failed: All attempts to contact name servers or networks failed

Do note that some domains might stop resolving because of this; in that case, contact their admin so they can correct the issue.

Also, on my side, resolution hangs rather than displaying a proper error, which seems to be something like this bug (or maybe another, haven’t looked too much into this yet) on the systemd issue tracker.

Free spell checking

You probably know about Grammarly and other web browser add-ons that basically act as fancier spell checkers.
They’re expensive, a bit opaque, and you can’t really integrate them into whatever you want.

Well today, I’ll talk about LanguageTool. While it offers full-blown paid plans, it is little known that you can run it locally and host your own spell checker for free!

Setting this up

I personally use Fedora Linux, so this tutorial assumes a similar setup; it should work on any systemd-enabled distribution.

First, you’ll need to download the latest LanguageTool HTTP server snapshot from their mirror, which comes as a zip file. Unzip it, which should leave you with a directory named something like LanguageTool-6.6-SNAPSHOT (replace 6.6-SNAPSHOT with the version you downloaded).

For simplicity’s sake, let’s rename LanguageTool-6.6-SNAPSHOT to languagetool and move it to our home directory with:

mv LanguageTool-6.6-SNAPSHOT languagetool
mv languagetool ~

You can also go into that directory with cd ~/languagetool and type pwd to get its full path; we’ll need it a bit later.

Now, time to create a systemd service to start it automatically. First, we’re gonna have to create the folder ~/.config/systemd/user/ using mkdir -p ~/.config/systemd/user/.

Once this is done, you can then edit the languagetool.service file using your favourite editor, in my case, Sublime Text: subl ~/.config/systemd/user/languagetool.service.

In there, you can put the following sample service file. Feel free to tweak it according to your needs, but it should be good for most use cases (replace jae with your user):

[Unit]
Description=LanguageTool server
After=graphical.target

[Service]
WorkingDirectory=/home/jae/languagetool
ExecStart=java -cp languagetool-server.jar org.languagetool.server.HTTPServer --config server.properties --port 8081 --allow-origin

[Install]
WantedBy=default.target

Before anything else, go into the ~/languagetool directory and create the server.properties file using: touch ~/languagetool/server.properties.

Now time to start and enable the service:

systemctl --user start languagetool
systemctl --user enable languagetool

And there you go, your local LanguageTool server will be started automatically when you log into your session.
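You can verify the server answers by sending it a deliberately broken sentence; /v2/check is LanguageTool’s standard HTTP API endpoint:

```shell
# The JSON answer should list the grammar problems it detected:
curl -s "http://localhost:8081/v2/check" \
  -d "language=en-US" \
  -d "text=This are a test"
```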

Now, as a finishing touch, you can install the Firefox add-on. Once installed, go into the settings, scroll all the way to the bottom, click on the “advanced settings” tab, and set the “LanguageTool server” option to “local server”.

Congratulations, you now have an amazing spell checker in your browser for 100% free.

If you’re curious about how exactly that stuff works, you can see the full LanguageTool source on GitHub.

If you are a developer, check out their API docs to build stuff around it.

Making your own web corner

So, you’ve finally bought yourself a domain (or are thinking about it), got a server at home, and now you want to host your own corner of the web?

Great! Allow me to be your guide through this journey.

Pre-requisites

You’ll need a bunch of stuff for this tutorial, including:

A domain

Your domain will be the public face and how people (and yourself) will access your corner, choose it wisely.

To get a domain, you first need to choose a registrar, with which you will register it. Registering a domain can cost anywhere from 5€ to 5000€.

Some good registrars include:

  • Spaceship – Really cheap, you can get a .eu for just under 5€ there
  • Hetzner – Well-known hosting service & DNS registrar
  • PorkBun – Well-known, huge selection, cheap sometimes
  • Inwx – German registrar, good service

If your friends also have their own corners, ask them about their registrar, maybe they had good experiences with some others than listed here!

From now on, we’ll assume we just bought example.com. Of course, in the next steps, replace example.com with the domain you actually got.

A server

Now here comes the part where you have to choose where your stuff will be hosted. There are multiple ways of doing this:

  • Run a spare computer at home (this tutorial will focus on this)
  • Use a hosting provider like Hetzner or Infomaniak (similar to the first option, so this tutorial also applies)
  • Use GitLab, GitHub or Codeberg pages to host a static website (not covered in this tutorial, coming soon!)

In this example, we assume you have a spare computer at home running Debian Linux.

The boring networking part

DNS stands for Domain Name System. You can read more about it on howdns.works, but the basic gist is:

  • IP addresses are hard for people to remember as-is
  • DNS puts in relation a domain name to an IP address
  • For instance: j4.lc will point to 2a12:4946:9900:f00::f00 when being looked up
  • There are a lot of DNS record types, but the most important ones here are A and AAAA
  • An A record maps a domain name to an IPv4 address, for instance: j4.lc -> 95.217.179.88
  • An AAAA record maps a domain name to an IPv6 address, for instance: j4.lc -> 2a12:4946:9900:f00::f00

Pointing your domain to your server

First, let’s figure out your server’s public IP. For this, you can execute:

curl -4 ifconfig.me
curl -6 ifconfig.me

If the second command fails, your ISP doesn’t support IPv6. In any case, write those IPs down in a notepad and let’s move on.

You will then need to add a DNS record on your domain to point to your server. To do this, log in to your registrar and head to the DNS control panel.

When adding a record, you will have a few properties to fill:

  • name – Which subdomain you want to use. Setting this to @ means the root of the domain, in our case example.com; setting it to anything else, for instance awoo, will “create” the subdomain awoo.example.com and make it point to your IP instead of the root
  • type – We’ve seen this earlier; we want this to be A or AAAA depending on whether we’re adding an IPv4 or IPv6 address (both can be present at the same time)
  • ttl – This is the time (in seconds) the record will stay cached, i.e. how long you’ll have to wait after changing the record before you see the change. Leave it as-is
  • data – The IP address you want the record to point to
  • proxy status – This is for Cloudflare only; it controls whether traffic goes through Cloudflare’s proxy. Let’s disable it for now

Note: you do not need to specify the port of your application in the record. It is up to the app you are using (for instance, a web browser) to query the right ports. A record pointing to 95.217.179.88:8080, for instance, would be invalid.

In our example, we can set everything (once again replace with your own data):

  • Name: @
  • Type: AAAA
  • TTL: 60 (default)
  • Data: 2a12:4946:9900:f00::f00

Meaning our root domain example.com will resolve to 2a12:4946:9900:f00::f00.

We can also add an A record to provide IPv4 connectivity:

  • Name: @
  • Type: A
  • TTL: 60 (default)
  • Data: 95.217.179.88
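Once the records are saved (and the TTL has elapsed), you can check what resolvers see; dig ships with the dnsutils package on Debian (bind-utils on Fedora):

```shell
# Both records should come back with the IPs you configured:
dig +short A example.com
dig +short AAAA example.com
```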

Opening ports

Now that your domain is pointing to your home server, you will need to open a few ports to make it accessible from the outside.

First, here is the list of ports you need:

  • 80 is the default HTTP port that you will need later to obtain SSL certificates
  • 443 is the default HTTPS port that you will need to serve your corner

You will then need to allow those two ports in two places:

  • Your OS firewall, usually done through ufw
  • Your router’s settings (also called “port opening”, “port forwarding”, and a lot of other names); make sure the two ports are open on both TCP and UDP and pointing to your home server
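With ufw, the OS firewall part can look like this (opening 443 on UDP is only useful if you later enable HTTP/3; adjust to your setup):

```shell
# Allow web traffic through the host firewall:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 443/udp   # optional, for HTTP/3
sudo ufw status
```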

Warning: in some countries, some ISPs will not allow you to open those two ports. This is probably because you are behind something called CGNAT, which lets ISPs share the same IP address between multiple customers.
If this is the case, call your ISP to get a proper IP that is not behind CGNAT. If that is not possible, you will have to either rent a server at a hosting provider or get a tunnel.

Once this is done, congratulations, the external world can now reach you.

Web server shenanigans

Now, to serve your corner to the external world, you will need a web server. In this case, we will use Caddy which is really easy to use and takes care of HTTPS renewals for you.

Installing a web server

First, we’re gonna need to install Caddy, it goes a bit like this:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy

After doing this, make sure Caddy is started and enabled (meaning it will start with the server) by doing:

sudo systemctl start caddy
sudo systemctl enable caddy

Now, if you visit your domain, you will see the example Caddy page, meaning you are now online!

Editing the web server configuration

The configuration for Caddy is located at /etc/caddy/Caddyfile. You can see basic examples of it on the Caddy documentation website.

In our example, we’re gonna use the following simple configuration (as always, replace example.com by your domain):

https://example.com {
    root * /var/www/corner
    file_server
}

Now, create the directory /var/www/corner and add your website files in there, for instance an index.html.
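That step can be done like this (the page content is just an example):

```shell
# Create the web root and drop in a first page:
sudo mkdir -p /var/www/corner
echo '<h1>Hello from my corner of the web!</h1>' | sudo tee /var/www/corner/index.html
```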

Restart Caddy using sudo systemctl restart caddy, wait a minute for the HTTPS certificate to be issued, and you’re in business: you now have your own corner on the internet!

Have fun editing it and sharing it with your friends.
A blog post will be published later this month on how to create your own blog (for free) using GitLab Pages!

Reading more

Here are some links to help you get started with your newfound internet home:

If you feel like I missed something, please do contact me and I will add it there.

Using the new GitHub ARM runners

Just yesterday at the time of writing, GitHub (finally) released their public ARM runners for Open-Source projects.

This means you can now build ARM programs natively on Linux without having to fiddle with weird cross-compilation.

One way to achieve that is through a matrix. Consider the following workflow, which builds and then uploads an artifact (taken from the YDMS Opus workflow I wrote):

on: [push]

jobs:
  Build-Linux:
    name: Builds Opus for Linux
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Download models
        run: ./autogen.sh

      - name: Create build directory
        run: mkdir build

      - name: Create build out variable
        id: buildoutput
        run: echo "build-output-dir=${{ github.workspace }}/build" >> "$GITHUB_OUTPUT"

      - name: Configure CMake
        working-directory: ${{ steps.buildoutput.outputs.build-output-dir }}
        run: cmake .. -DBUILD_SHARED_LIBS=ON

      - name: Build Opus for Linux
        working-directory: ${{ steps.buildoutput.outputs.build-output-dir }}
        run: cmake --build . --config Release --target package

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: opus-linux
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

We can now easily make it build for ARM by using a matrix referencing the new ubuntu-24.04-arm runner label.

For instance, we can add this before the job steps:

    strategy:
      matrix:
        osver: [ubuntu-latest, ubuntu-24.04-arm]

Then change the runs-on configuration to specify ${{ matrix.osver }} which will create jobs for all the OS versions specified in the matrix.
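Put together, the top of the job would look something like this sketch (the remaining steps stay unchanged):

```yaml
jobs:
  Build-Linux:
    name: Builds Opus for Linux
    runs-on: ${{ matrix.osver }}
    strategy:
      matrix:
        osver: [ubuntu-latest, ubuntu-24.04-arm]
    steps:
      - uses: actions/checkout@v4
      # ...remaining steps as before
```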

One issue that might then arise is a name conflict when uploading the job artifacts. For instance, if our old Linux build uses:

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: opus-linux
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

And the same step is used by the ARM workflow, we will get an error that the artifact matching the name opus-linux already exists for this workflow run.

This is where a small conditional step can be added to set an environment variable with the desired name:

      - name: Set dist name
        run: |
          if ${{ matrix.osver == 'ubuntu-24.04-arm' }}; then
            echo "distname=opus-linux-arm" >> "$GITHUB_ENV"
          else
            echo "distname=opus-linux" >> "$GITHUB_ENV"
          fi

We can then change our artifact upload step to use these new names:

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: ${{ env.distname }}
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

As a bit of a sidetrack, you can also use checks like this to conditionally skip (or execute) steps depending on the architecture, using an if statement:

      - name: Mystep
        if: ${{ matrix.osver != 'ubuntu-24.04-arm' }}
        run: |
          echo Hello world

In the end, it’s good that this GitHub feature finally landed. Before, you had to use “large” runners, which could cost quite a bit.

Jae 2012-2025, CC BY-SA 4.0 unless stated otherwise.