Jae's Blog

Debugging SSL in GitLab Pages

Debugging GitLab Pages on a self-managed instance can be a hassle, especially if you’re only getting nebulous messages without any further information.

Do note, for this to work, you’ll need admin access to the machine hosting the main Rails app.

To find out exactly why Let’s Encrypt is failing to get a certificate, use the following command:

docker compose exec gitlab gitlab-ctl tail gitlab-rails --follow | grep -i "encrypt\|acme\|certificate"

It should output the exact Let’s Encrypt log and tell you if it’s hitting a wall somewhere. If you have a bare-metal instance, just remove the docker part of the command and only use gitlab-ctl.
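
On bare metal, that boils down to the following (run it as root or with sudo):

sudo gitlab-ctl tail gitlab-rails --follow | grep -i "encrypt\|acme\|certificate"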

Headless road to ARM: status

I am currently assigned to the issue about ARM support for Resonite headlesses (GH-2555), so it’s time for an update since there hasn’t been one in a while.

First off, everything is looking great. The current status is:

  • 6 PRs are currently open (FreeImage, Opus, Crunch, Assimp, MSDFGen, RNNoise)
  • 1 PR has been merged 🎉 (Brotli)
  • 1 repo is missing (Freetype)

This marks the first ARM-related PR reviewed and merged into an official repository, namely PR #1 on the Brotli repo, which bundled Windows, Linux x64 and Linux ARM CI/CD builds.

As a reminder, I am currently providing a complete package of all libraries built directly for ARM on my website.

Next steps would be to:

  • Get an official fork of the Freetype repository (requested on 2025/04/09)
  • Create a container image bundling my libraries and a way to download the headless easily on ARM machines
  • Get all the current PRs reviewed and merged

The second one is the most important, as distribution is probably the biggest blocker for complete ARM support of the headless, since SteamCMD does not support this architecture.

I am confident that we will reach official support very soon, given how well this has been going so far.

My plan after this feature ships is to work on macOS support (GH-1412), as it’s also marked as “community help wanted”.

Making your own web corner

So, you’ve finally bought yourself a domain (or thinking about it), got a server at home, and now you want to host your own corner of the web?

Great! Allow me to be your guide through this journey.

Pre-requisites

You’ll need a bunch of stuff for this tutorial, including:

A domain

Your domain will be the public face of your corner and how people (including you) will access it, so choose it wisely.

To get a domain, you first need to choose a registrar, with which you will register it. Registering a domain can cost anywhere from 5€ to 5000€.

Some good registrars include:

  • Spaceship – Really cheap, you can get a .eu for just under 5€ there
  • Hetzner – Well-known hosting service & DNS registrar
  • PorkBun – Well-known, huge selection, cheap sometimes
  • Inwx – German registrar, good service

If your friends also have their own corners, ask them about their registrar, maybe they had good experiences with some others than listed here!

From now on, assume we just bought example.com as a domain. Of course, in the next steps, replace example.com with the domain you just got.

A server

Now here comes the part where you have to choose where your stuff will be hosted. There are multiple ways of doing this:

  • Run a spare computer at home (this tutorial will focus on this)
  • Use a hosting provider like Hetzner or Infomaniak (similar to the first option, so this tutorial also applies)
  • Use GitLab, GitHub or Codeberg pages to host a static website (not covered in this tutorial, coming soon!)

In this example, we assume you have a spare computer at home running Debian Linux.

The boring networking part

DNS stands for Domain Name System. You can read more about it on howdns.works, but the basic gist is:

  • IP addresses are hard for people to remember as-is
  • DNS puts in relation a domain name to an IP address
  • For instance: j4.lc will point to 2a12:4946:9900:f00::f00 when being looked up
  • There are a lot of DNS record types, but the most important ones here are A and AAAA
  • An A record maps a domain name to an IPv4 address, for instance: j4.lc -> 95.217.179.88
  • An AAAA record maps a domain name to an IPv6 address, for instance: j4.lc -> 2a12:4946:9900:f00::f00
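
You can run these lookups yourself; a quick sketch using dig (shipped in the dnsutils package on Debian):

# Query the A (IPv4) and AAAA (IPv6) records for a domain
dig +short A j4.lc
dig +short AAAA j4.lc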

Pointing your domain to your server

First, let’s figure out what’s the public IP of your server. For this you can execute:

curl -4 ifconfig.me
curl -6 ifconfig.me

If the second command fails, it means your ISP doesn’t support IPv6. In any case, write those IPs down in a notepad and let’s move on.

You will then need to add a DNS record on your domain to point to your server. To do this, log onto your registrar and go to the DNS control panel.

When adding a record, you will have a few properties to fill:

  • name – Which subdomain you want to use. Setting this to @ means the root of the domain, in our case example.com; setting it to anything else, for instance awoo, will “create” the subdomain awoo.example.com and make it point to your IP instead of the root
  • type – We’ve seen this earlier; we want this to be A or AAAA depending on whether we’re adding an IPv4 or an IPv6 address (both can be present at the same time)
  • ttl – This is the time (in seconds) the record will stay cached. Leave it as-is. It is also how long you may have to wait after changing the record before you see the change
  • data – The IP address you want the record to point to
  • proxy status – This is for Cloudflare only; it controls whether our site goes through Cloudflare. Let’s disable this for now

Note: you do not need to specify the port of your application in the record. It is up to the app you are using (for instance, a web browser) to query the right ports. A record pointing to 95.217.179.88:8080, for instance, would be invalid.

In our example, we can set everything (once again replace with your own data):

  • Name: @
  • Type: AAAA
  • TTL: 60 (default)
  • Data: 2a12:4946:9900:f00::f00

Meaning our root domain example.com will resolve to 2a12:4946:9900:f00::f00.

We can also add an A record to provide IPv4 connectivity:

  • Name: @
  • Type: A
  • TTL: 60 (default)
  • Data: 95.217.179.88
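
Once both records are saved (and the TTL has elapsed), you can check that they have propagated by querying a public resolver directly; a small sketch using dig and the Quad9 resolver as an example:

# Ask a public resolver instead of your local cache
dig @9.9.9.9 +short A example.com
dig @9.9.9.9 +short AAAA example.com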

Opening ports

Now that your domain is pointing to your home server, you will need to open a few ports to make it accessible from the outside.

First, here is the list of ports you need:

  • 80 is the default HTTP port that you will need later to obtain SSL certificates
  • 443 is the default HTTPS port that you will need to serve your corner

You will then need to allow those two ports in two places:

  • Your OS firewall, which can usually be done through ufw (see the sketch below)
  • Your router’s settings (also called “port opening”, “port redirection” and a lot of other names); make sure the two ports are open on both TCP and UDP and pointing to your home server
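
For the firewall part, a minimal sketch using ufw could look like this (assuming ufw is installed and enabled; adapt it to your setup):

# Allow HTTP and HTTPS in the OS firewall, on both TCP and UDP
sudo ufw allow 80/tcp
sudo ufw allow 80/udp
sudo ufw allow 443/tcp
sudo ufw allow 443/udp
sudo ufw status verbose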

Warning: in some countries, some ISPs will not allow you to open those two ports. This is usually because you are behind something called CGNAT, which lets ISPs share the same public IP address between multiple customers.
If this is the case, call your ISP to get a proper IP that is not behind CGNAT. If this is not possible, you will have to either rent a server at a hosting provider or get a tunnel.

Once this is done, congratulations, the external world can now reach you.

Web server shenanigans

Now, to serve your corner to the external world, you will need a web server. In this case, we will use Caddy which is really easy to use and takes care of HTTPS renewals for you.

Installing a web server

First, we’re gonna need to install Caddy; it goes a bit like this:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy

After doing this, make sure Caddy is started and enabled (meaning it will start with the server) by doing:

sudo systemctl start caddy
sudo systemctl enable caddy

Now, if you visit your domain, you will see the default Caddy page, meaning you are now online!

Editing the web server configuration

The configuration for Caddy is located at /etc/caddy/Caddyfile. You can see basic examples of it on the Caddy documentation website.

In our example, we’re gonna use the following simple configuration (as always, replace example.com by your domain):

https://example.com {
    root * /var/www/corner
    file_server
}

Now, create the directory /var/www/corner and add your website files in there, for instance an index.html.
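
For instance, a minimal placeholder page could be created like this (the content is just an example, put whatever you want in there):

# Create the web root and drop a placeholder page into it
sudo mkdir -p /var/www/corner
echo '<h1>Hello from my corner!</h1>' | sudo tee /var/www/corner/index.html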

Restart Caddy using sudo systemctl restart caddy, wait a minute for the HTTPS certificate to be issued, and you’re in business: you now have your own corner on the internet!
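
To double-check that everything works end to end (assuming the DNS records from earlier have propagated), a quick sanity check from any machine could be:

# Should print the response headers, served over a valid HTTPS certificate
curl -I https://example.com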

Have fun editing it and sharing it with your friends.
A blog post will be published later this month on how to create your own blog (for free) using GitLab pages!

Reading more

Here are some links to help you get started with your newfound internet home:

If you feel like I missed something, please do contact me and I will add it there.

Resostats outage postmortem

Today, from approximately 16:30 UTC to 17:45 UTC, the Resostats Dashboard which provides various public metrics on Resonite was offline.

Background

Routine maintenance was being done on the machine hosting Resostats, namely updating packages and containers, and cleaning up some debugging tools.
Configuration changes were committed to try and have the TSDB sync faster to the S3 storage bucket that backs the whole instance.

Metrics stored on the Mimir instances do not have any set expiration.

The S3 bucket itself is fully replicated and backed up using Restic in multiple places, including rsync.net as an external one.

The cause

While committing changes to the mimir configuration, the compactor_blocks_retention_period configuration key was swapped from 0 to 12h.

The compactor_blocks_retention_period configuration key in mimir specifies the retention period for blocks. Anything older than the set amount will get marked for deletion, then cleaned up.
You can read more about this in the official mimir configuration documentation.

This prompted the mimir instances to start marking blocks older than 12h for deletion, thus inadvertently cleaning up years of historical data.

Restoration

The error in the configuration was quickly spotted and corrected, but the blocks marked for deletion were already being cleaned up regardless.
Given the backup hosted on rsync.net was the closest and fastest for this available server, the decision was taken to restore everything from there.

The restoration process was easy enough, given Restic provides a nice command for this:

$ restic --password-file=/sec/pass -r sftp:user@user.rsync.net:bck/st-fi/mimir-blocks restore latest --target /srv/pool/mimir-blocks
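
For reference, the available snapshots can be listed beforehand with the same repository options (a sketch reusing the paths from above):

$ restic --password-file=/sec/pass -r sftp:user@user.rsync.net:bck/st-fi/mimir-blocks snapshots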

Most of the time spent was the stressful wait for the backup to be downloaded onto the machine.

In the end, about 12h of metrics were lost, which is not that much considering the scale of the outage.

Learnings

From now on, a backup will be done before starting any maintenance.
The current backup strategy has also proven robust enough to withstand an event like this one.

Turns out having a proper backup strategy is damn effective.

Using the new GitHub ARM runners

Just yesterday at the time of writing, GitHub (finally) released their public ARM runners for open-source projects.

This means you can now build ARM programs natively on Linux without having to fiddle with weird cross-compilation.

One way to achieve that is through a matrix. Consider the following workflow, which builds and then uploads an artifact (taken from the YDMS Opus workflow I wrote):

on: [push]

jobs:
  Build-Linux:
    name: Builds Opus for Linux
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Download models
        run: ./autogen.sh

      - name: Create build directory
        run: mkdir build

      - name: Create build out variable
        id: buildoutput
        run: echo "build-output-dir=${{ github.workspace }}/build" >> "$GITHUB_OUTPUT"

      - name: Configure CMake
        working-directory: ${{ steps.buildoutput.outputs.build-output-dir }}
        run: cmake .. -DBUILD_SHARED_LIBS=ON

      - name: Build Opus for Linux
        working-directory: ${{ steps.buildoutput.outputs.build-output-dir }}
        run: cmake --build . --config Release --target package

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: opus-linux
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

We can now easily make it build for ARM by using a matrix referencing the new ubuntu-24.04-arm runner label.

For instance, we can add this before the job steps:

    strategy:
      matrix:
        osver: [ubuntu-latest, ubuntu-24.04-arm]

Then change the runs-on configuration to specify ${{ matrix.osver }} which will create jobs for all the OS versions specified in the matrix.

One issue that might then arise is a name conflict when uploading the job artifacts. For instance, if our old Linux build uses:

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: opus-linux
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

And the same step is used by the ARM workflow, we will get an error that the artifact matching the name opus-linux already exists for this workflow run.

This is where a small conditional step can be added to set an environment variable with the desired name:

      - name: Set dist name
        run: |
          if ${{ matrix.osver == 'ubuntu-24.04-arm' }}; then
            echo "distname=opus-linux-arm" >> "$GITHUB_ENV"
          else
            echo "distname=opus-linux" >> "$GITHUB_ENV"
          fi

We can then change our artifact upload step to use these new names:

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: ${{ env.distname }}
          path: ${{ steps.buildoutput.outputs.build-output-dir }}/**/*.so

As a bit of a sidetrack, you can also use checks like this to conditionally skip (or execute) steps depending on the architecture, using an if condition:

      - name: Mystep
        if: ${{ matrix.osver != 'ubuntu-24.04-arm' }}
        run: |
          echo Hello world

In the end, it’s good that this GitHub feature finally landed. Before, you had to use “large” runners, which can cost quite a bit.

Building .NET using GitLab CI/CD

As I often mention, I use .NET a lot in general, as it’s fairly easy to use, has a huge ecosystem, and has evolved really positively in the past years (long gone are the days of Mono :D).

Another component of this is that .NET projects are incredibly easy to build and publish using GitLab CI/CD.
Today, we’re gonna explore some ways of building and publishing a .NET project using just that.

Docker

Probably the most straightforward option, starting from a simple Dockerfile:

FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
COPY . ./builddir
WORKDIR /builddir/

ARG ARCH=linux-x64

RUN dotnet publish --runtime ${ARCH} --self-contained -o output

FROM mcr.microsoft.com/dotnet/runtime:9.0

WORKDIR /app
COPY --from=build /builddir/output .

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

HEALTHCHECK CMD ["curl", "--insecure", "--fail", "--silent", "--show-error", "http://127.0.0.1:8080"]

ENTRYPOINT ["dotnet", "MyApp.dll"]


Note: this assumes your app builds to MyApp.dll and has a healthcheck endpoint on http://127.0.0.1:8080.
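
Since the runtime is exposed as a build argument, the same Dockerfile can also be built locally for another runtime identifier; for instance (the image tag is just an example):

# Build a self-contained linux-arm64 variant of the image locally
docker build --build-arg ARCH=linux-arm64 -t myapp:arm64 .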

Then building the container image itself is really easy:

stages:
  - docker

variables:
  DOCKER_DIND_IMAGE: "docker:24.0.7-dind"

build:docker:
  stage: docker
  services:
    - "$DOCKER_DIND_IMAGE"
  image: "$DOCKER_DIND_IMAGE"
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker buildx create --use
    - docker buildx build
      --platform linux/amd64
      --file Dockerfile
      --tag "$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_NAME%+*}"
      --provenance=false
      --push
      .
    - docker buildx rm
  only:
    - branches
    - tags

This will build the image, then publish it to the GitLab container registry of the repo. It’s also possible to specify a different registry, but that’s kinda useless as the default one is already excellent for most cases.

Regular build / NuGet build

This type of build just requires the source itself, without much additional configuration.

It will build the software, then either upload the resulting files as an artifact or publish it into the GitLab NuGet registry.

For those two, I can recommend setting up a cache policy like:

cache:
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_SLUG"
  paths:
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/project.assets.json'
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/*.csproj.nuget.*'
    - '$NUGET_PACKAGES_DIRECTORY'
  policy: pull-push

And a small restore snippet:

.restore_nuget:
  before_script:
    - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

You can also directly specify the build image that you want to use at the top of the CI definition file with, for instance:

image: mcr.microsoft.com/dotnet/sdk:9.0

The regular build with artifact upload is also really easy:

build:
  extends: .restore_nuget
  stage: build
  script:
    - 'dotnet publish --no-restore'
  artifacts:
    paths:
      - MyApp/bin/Release/**/MyApp.dll

In this case, we use ** to avoid having to update the path every time we upgrade the .NET version (for instance, .NET 8 will put the build in the net8.0 directory, .NET 9 in net9.0, etc).

Now, we can also build and publish the solution to the NuGet registry:

deploy:
  stage: deploy
  only: 
    - tags
  script:
    - dotnet pack -c Release
    - dotnet nuget add source "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/nuget/index.json" --name gitlab --username gitlab-ci-token --password $CI_JOB_TOKEN --store-password-in-clear-text
    - dotnet nuget push "MyApp/bin/Release/*.nupkg" --source gitlab
  environment: $DEPLOY_ENVIRONMENT

As seen in this definition, this publish stage will only run on tag pushes, but it’s also possible to generate a version string from the current commit and push that as a nightly release.
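
For instance, a nightly variant of the pack step could derive the version from the current commit like this (a sketch; the version scheme is just an example):

# Build a pre-release package versioned after the date and current commit
NIGHTLY_VERSION="0.0.0-nightly.$(date +%Y%m%d).${CI_COMMIT_SHORT_SHA}"
dotnet pack -c Release -p:PackageVersion="$NIGHTLY_VERSION"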

As an additional step, not really related to the build itself, I often enable the Secret Detection, SAST and Dependency Scanning templates, as they can catch really obvious mistakes. Doing so is also trivial:

include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

In the end, building .NET is extremely trivial.

Deploying Hugo using GitLab pages

It is 2025 and it’s still super easy to deploy a blog using Hugo and GitLab pages.

In fact, the post you are reading right now is exactly that, deployed on my self-managed instance.

But Jae, weren’t you on another host since last year?

Yes, last year I switched to Mataroa for the peace of mind the platform brings.

The interface is very clean, has no bullshit whatsoever and is made and hosted in small web fashion.

Sadly, it has one caveat that was underlined once again by @miyuru@ipv6.social on the Fediverse (ironically under my post about GitHub and its lack of IPv6): no IPv6.

This is why today I moved my blog back to something I used to have a long time ago: a GitLab Pages site generated by Hugo.

Actually implementing it was as easy as I remembered:

  1. Create a new Hugo project (see the sketch after this list)
  2. Add the CI config file
  3. Move my domain’s CNAME
  4. Wait for Let’s Encrypt to do its work (funnily enough this was the longest part)
  5. Tada, all done
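
For the first step, creating the Hugo project locally only takes a few commands; a sketch assuming the hugo CLI is installed and using the Ananke theme as an example:

# Create the site, add a theme as a git submodule, and preview it locally
hugo new site myblog && cd myblog
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo "theme = 'ananke'" >> hugo.toml
hugo server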

The Hugo setup itself is fairly easy, so is the CI file:

default:
  image: ghcr.io/hugomods/hugo:ci-non-root

variables:
  GIT_SUBMODULE_STRATEGY: recursive

test:
  script:
    - hugo
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

deploy-pages:
  script:
    - hugo
  pages: true
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production

The main difference from years ago is that we now have fully-featured (and maintained) Docker images for Hugo, the one selected here being ghcr.io/hugomods/hugo, maintained by HugoMods.

So now, enjoy all the posts over IPv6 \o/

Deploying your own GitLab instance under 5 minutes

It’s no secret that I work around GitLab during my day job and that I generally love this software.
This blog post is therefore not biased at all in any way or form. (do I need to mark this further as sarcasm, or did everyone get the memo?)

For this quick tutorial, you’ll need:

  • Some machine where the instance will be hosted
  • Docker installed on the machine
  • Ability to read instructions

For this, we’ll be using docker compose which provides an easy way to bring services up from a configuration file.
This tutorial just provides a bare bones instance that you will need to configure further later.

Small note: for this to work, your system SSH daemon will need to run on a port other than 22 (see the sketch below).
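
If you need to move the system SSH daemon out of the way, a quick sketch on Debian (port 2222 is just an example, pick any free port):

# Change the listening port in /etc/ssh/sshd_config, then restart the daemon
sudo sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart ssh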

The compose file for GitLab is really simple:

services:
  gitlab:
    image: gitlab/gitlab-ce:17.6.1-ce.0
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/log:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    ports:
      - "22:22"
      - "80:80"
      - "443:443"

And there you go, name this file docker-compose.yml on your server and issue:

$ docker compose up -d

After a few minutes, the GitLab instance should be reachable on the IP of your machine.

To reset the root password, use:

$ docker compose exec gitlab gitlab-rake "gitlab:password:reset[root]"

Now, a few steps that are recommended after you have a working instance:

  • Reverse-proxy GitLab and get HTTPS certificates for everything
  • Host a runner (to be able to utilize CI/CD)
  • Refine the gitlab.rb configuration

In a future blog post, I’ll show how to configure one cool GitLab feature, the service desk, which can be useful for some projects.

Setting up WireGuard tunnels from a BGP router

I recently restarted my BGP shenanigans and, with that, set up some VPNs again using WireGuard for my personal machines.

I basically use these to restrict connections to certain applications to only the prefix used by my machines.

The host machine runs Debian and BIRD, and the end devices range from standard Linux machines to Windows desktops to iOS devices.

First, the BIRD configuration is pretty trivial, just adding a route for the prefix via lo:

route 2a12:4946:9900:dead::/64 via "lo";

I’m aware my subnet configurations can be sub-optimal, but I’m just running this for fun, not for it to be perfect.

Then, generating WireGuard keys on the host (the package wireguard-tools will need to be installed):

$ umask 077
$ wg genkey > privatekey
$ wg pubkey < privatekey > publickey

Now, the WireGuard host configuration is pretty trivial:

[Interface]
Address = 2a12:4946:9900:dead::1/128
ListenPort = 1337
PrivateKey = myVeryPrivateKey=

Key generation on the client follows the same procedure, and is even easier via a GUI. The configuration itself looks like this:

[Interface]
PrivateKey = myVerySecretKey=
Address = 2a12:4946:9900:dead::1337/128

[Peer]
PublicKey = serverPubKey=
AllowedIPs = ::/1, 8000::/1
Endpoint = [2a12:4946:9900:dead::1]:1337
PersistentKeepalive = 30

Note that I’m using ::/1 and 8000::/1 in AllowedIPs on Windows as setting it to ::/0 kills IPv4 connectivity (that is sadly still needed) and local connectivity to stuff like my storage array. On Linux, ::/0 works as expected, letting IPv4 through correctly.

Now, we can add a Peer section into the server’s configuration:

[Peer]
# PC Client
PublicKey = clientPubKey=
AllowedIPs = 2a12:4946:9900:dead::1337/128

Now you should be all set and ready to bring up the tunnel on both ends.

On the server (assuming your configuration file is named tunnels.conf):

$ systemctl enable wg-quick@tunnels
$ systemctl start wg-quick@tunnels

And on the client, use the same procedure, or just click the “Connect” button in the GUI client.
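
To check that the tunnel actually came up, a couple of quick commands help (run on either end, adapting the address):

# Show the latest handshake and transfer counters for each peer
sudo wg show
# Ping the server's tunnel address from the client
ping -6 -c 3 2a12:4946:9900:dead::1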

I’ve had some cases where all of this alone isn’t enough, and I had to add the prefixes to lo.

For instance:

$ ip -6 addr add 2a12:4946:9900:dead::/64 dev lo

And in /etc/network/interfaces:

iface lo inet6 static
        address 2a12:4946:9900:dead::/64

Tho I will admit, I had more issues setting this up than I should have, and most of the configs would benefit from being rewritten. Admittedly, I executed and documented this procedure while extremely tired, which of course caused some issues.

But at least this works, and it’s also very useful when I’m connected to networks that don’t offer IPv6 connectivity.

Sending commands to a Docker Compose Resonite headless

After searching for a bit, I found a way to programmatically send commands to a Resonite headless running in Docker, without having to run docker compose attach <container> and then manually detach.

You will need socat installed on the host machine; since most of my machines run Debian, this can be done via apt install socat.

Now, you can use:

echo 'worlds' | socat EXEC:"docker attach $(docker compose ps -q reso-headless)",pty STDIN

In this command, replace:

  • worlds with the command you want to send
  • reso-headless with the name of your headless container as defined in your Compose file

Alternatively, you can just specify the container name directly instead of doing $(docker compose ps -q reso-headless).
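
In that case, assuming the container is literally named reso-headless, the command becomes:

echo 'worlds' | socat EXEC:"docker attach reso-headless",pty STDIN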

Addendum:

For this to work, you will have to make sure your container is defined with:

    tty: true
    stdin_open: true

in the Compose file.

Jae 2012-2025, CC BY-SA 4.0 unless stated otherwise.