Jae's Blog

Building .NET using GitLab CI/CD

As I often mention, I use .NET a lot in general, as it’s fairly easy to use, has a huge ecosystem, and has evolved really positively in recent years (long gone are the days of Mono :D).

Another component of this is that .NET projects are incredibly easy to build and publish using GitLab CI/CD.
Today, we’re gonna explore some ways of building and publishing a .NET project using just that.

Docker

Probably the most straightforward, considering a simple Dockerfile:

FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
COPY . ./builddir
WORKDIR /builddir/

ARG ARCH=linux-x64

RUN dotnet publish --runtime ${ARCH} --self-contained -o output

FROM mcr.microsoft.com/dotnet/runtime:9.0

WORKDIR /app
COPY --from=build /builddir/output .

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

HEALTHCHECK CMD ["curl", "--insecure", "--fail", "--silent", "--show-error", "http://127.0.0.1:8080"]

ENTRYPOINT ["dotnet", "MyApp.dll"]


Note: this assumes your app builds to MyApp.dll and has a health check endpoint on http://127.0.0.1:8080.

Then building the container image itself is really easy:

stages:
  - docker

variables:
  DOCKER_DIND_IMAGE: "docker:24.0.7-dind"

build:docker:
  stage: docker
  services:
    - "$DOCKER_DIND_IMAGE"
  image: "$DOCKER_DIND_IMAGE"
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker buildx create --use
    - docker buildx build
      --platform linux/amd64
      --file Dockerfile
      --tag "$CI_REGISTRY_IMAGE:${CI_COMMIT_REF_NAME%+*}"
      --provenance=false
      --push
      .
    - docker buildx rm
  only:
    - branches
    - tags

This will build the image, then publish it to the GitLab container registry of the repo. It’s possible to also specify a different registry, but that’s rarely useful, as the default one is already excellent for most cases.
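One subtlety worth calling out: the tag uses the POSIX parameter expansion `${CI_COMMIT_REF_NAME%+*}`, which strips a `+` suffix (SemVer build metadata isn’t allowed in Docker tags). A quick sketch with a made-up ref name:

```shell
# Made-up example value; GitLab sets CI_COMMIT_REF_NAME in real pipelines.
CI_COMMIT_REF_NAME="v1.2.3+build.7"
# %+* removes the shortest suffix matching "+*", leaving a valid Docker tag.
TAG="${CI_COMMIT_REF_NAME%+*}"
echo "$TAG"   # -> v1.2.3
```

For refs without a `+` (branch names, plain tags), the expansion leaves the value untouched.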

Regular build / NuGet build

This type of build just requires the source itself, without much additional configuration.

It will build the software, then either upload the resulting files as an artifact or publish them to the GitLab NuGet registry.

For those two, I can recommend setting up a cache policy like:

cache:
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_SLUG"
  paths:
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/project.assets.json'
    - '$SOURCE_CODE_PATH$OBJECTS_DIRECTORY/*.csproj.nuget.*'
    - '$NUGET_PACKAGES_DIRECTORY'
  policy: pull-push
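Note that $SOURCE_CODE_PATH, $OBJECTS_DIRECTORY, and $NUGET_PACKAGES_DIRECTORY aren’t predefined by GitLab; you need to declare them yourself. A typical set of values (along the lines of GitLab’s own .NET example, adjust to your repository layout) would be:

```yaml
variables:
  SOURCE_CODE_PATH: '*/'
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'
```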

And a small restore snippet:

.restore_nuget:
  before_script:
    - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

You can also directly specify the build image that you want to use at the top of the CI definition file with, for instance:

image: mcr.microsoft.com/dotnet/sdk:9.0

The regular build with artifact upload is also really easy:

build:
  extends: .restore_nuget
  stage: build
  script:
    - 'dotnet publish --no-restore'
  artifacts:
    paths:
      - MyApp/bin/Release/**/MyApp.dll

In this case, we use ** to avoid having to update the path every time we upgrade the .NET version (for instance, .NET 8 will put the build in the net8.0 directory, .NET 9 in net9.0, etc).
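To see the pattern in action locally (the directories below are created just for the demonstration; GitLab expands ** itself when collecting artifacts):

```shell
# Simulate the build output layout of two different .NET versions.
mkdir -p demo/MyApp/bin/Release/net8.0 demo/MyApp/bin/Release/net9.0
touch demo/MyApp/bin/Release/net8.0/MyApp.dll demo/MyApp/bin/Release/net9.0/MyApp.dll
# One pattern matches both target-framework directories.
find demo/MyApp/bin/Release -name 'MyApp.dll' | sort
```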

Now, we can also build and publish the solution to the NuGet registry:

deploy:
  stage: deploy
  only: 
    - tags
  script:
    - dotnet pack -c Release
    - dotnet nuget add source "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/nuget/index.json" --name gitlab --username gitlab-ci-token --password $CI_JOB_TOKEN --store-password-in-clear-text
    - dotnet nuget push "MyApp/bin/Release/*.nupkg" --source gitlab
  environment: $DEPLOY_ENVIRONMENT

As seen in this definition, this publish stage will only run on tag pushes, but it’s also possible to generate a version string from the current commit and push that as a nightly release.
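A hedged sketch of what such a nightly version string could look like (the base version and the fallback SHA are placeholders; GitLab provides CI_COMMIT_SHORT_SHA in real pipelines):

```shell
# Compose a nightly version from a base version, today's date, and the short
# commit SHA (placeholder fallback when run outside CI).
NIGHTLY_VERSION="1.2.3-nightly.$(date +%Y%m%d).${CI_COMMIT_SHORT_SHA:-abc1234}"
echo "$NIGHTLY_VERSION"
# then, for instance: dotnet pack -c Release -p:PackageVersion="$NIGHTLY_VERSION"
```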

As an additional step, but not really related to the build itself, I often activate the Secret, SAST and dependencies scanning as it can prevent really obvious mistakes. Doing so is also really trivial:

include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

In the end, building .NET on GitLab CI/CD is trivial.

Deploying Hugo using GitLab pages

It is 2025 and it’s still super easy to deploy a blog using Hugo and GitLab pages.

In fact, the post you are reading right now is exactly that, deployed on my self-managed instance.

But Jae, didn’t you move to another host last year?

Yes, last year I switched to Mataroa for the peace of mind the platform brings.

The interface is very clean, has no bullshit whatsoever, and is made and hosted in small-web fashion.

Sadly, it has one caveat that was underlined once again by @miyuru@ipv6.social on the Fediverse (ironically, under my post about GitHub and its lack of IPv6): no IPv6 support.

This is why today I moved my blog back to something I used to have a long time ago: a GitLab Pages site generated by Hugo.

Actually implementing it was as easy as I remembered:

  1. Create a new Hugo project
  2. Add the CI config file
  3. Move my domain’s CNAME
  4. Wait for Let’s Encrypt to do its work (funnily enough this was the longest part)
  5. Tada, all done

The Hugo setup itself is fairly easy, and so is the CI file:

default:
  image: ghcr.io/hugomods/hugo:ci-non-root

variables:
  GIT_SUBMODULE_STRATEGY: recursive

test:
  script:
    - hugo
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

deploy-pages:
  script:
    - hugo
  pages: true
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production

The main difference from years ago is that we now have fully-featured (and maintained) Docker images for Hugo; the one selected here, ghcr.io/hugomods/hugo, is maintained by HugoMods.

So now, enjoy all the posts over IPv6 \o/

Deploying your own GitLab instance under 5 minutes

It’s no secret that I work around GitLab during my day job and that I generally love this software.
This blog post is therefore not biased at all in any way or form. (do I need to mark this further as sarcasm, or did everyone get the memo?)

For this quick tutorial, you’ll need:

  • Some machine where the instance will be hosted
  • Docker installed on the machine
  • Ability to read instructions

For this, we’ll be using docker compose, which provides an easy way to bring services up from a configuration file.
This tutorial just provides a bare-bones instance that you will need to configure further later.

Small note: for this to work, your system’s SSH daemon will need to run on a port other than 22, as the GitLab container binds that port for Git over SSH.
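On a Debian-style system, moving the host daemon is a matter of editing /etc/ssh/sshd_config (the port number here is just an example) and restarting the service:

```
# /etc/ssh/sshd_config
Port 2222
```

After a systemctl restart ssh, make sure you can still log in on the new port before closing your current session.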

The compose file for GitLab is really simple:

services:
  gitlab:
    image: gitlab/gitlab-ce:17.6.1-ce.0
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/log:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    ports:
      - "22:22"
      - "80:80"
      - "443:443"

And there you go, name this file docker-compose.yml on your server and issue:

$ docker compose up -d

After a few minutes, the GitLab instance should be reachable on the IP of your machine.

To reset the root password, use:

$ docker compose exec gitlab gitlab-rake "gitlab:password:reset[root]"

Now, a few steps that I recommend taking after having a working instance:

  • Reverse-proxy GitLab and get HTTPS certificates for everything
  • Host a runner (to be able to utilize CI/CD)
  • Refine the gitlab.rb configuration

In a future blog post, I’ll show how to configure one cool GitLab feature, the service desk, which can be useful for some projects.

Setting up WireGuard tunnels from a BGP router

I recently restarted my BGP shenanigans, and with that, set up some WireGuard VPNs again for my personal machines.

I basically use those to restrict access to certain applications to only the prefix used by my machines.

The host machine runs Debian and BIRD, and the end devices range from standard Linux machines to Windows desktops to iOS devices.

First, the BIRD configuration is pretty trivial, just adding a route for the prefix via lo:

route 2a12:4946:9900:dead::/64 via "lo";

I’m aware my subnet configuration may be sub-optimal, but I’m just running this for fun, not for it to be perfect.

Then, generate the WireGuard keys on the host (the wireguard-tools package will need to be installed):

$ umask 077
$ wg genkey > privatekey
$ wg pubkey < privatekey > publickey

Now, the WireGuard host configuration is pretty trivial:

[Interface]
Address = 2a12:4946:9900:dead::1/128
ListenPort = 1337
PrivateKey = myVeryPrivateKey=

The key generation on the client follows the same procedure, or is even easier via a GUI. The configuration itself looks like this:

[Interface]
PrivateKey = myVerySecretKey=
Address = 2a12:4946:9900:dead::1337/128

[Peer]
PublicKey = serverPubKey=
AllowedIPs = ::/1, 8000::/1
Endpoint = [2a12:4946:9900:dead::1]:1337
PersistentKeepalive = 30

Note that I’m using ::/1 and 8000::/1 in AllowedIPs on Windows as setting it to ::/0 kills IPv4 connectivity (that is sadly still needed) and local connectivity to stuff like my storage array. On Linux, ::/0 works as expected, letting IPv4 through correctly.

Now, we can add a Peer section into the server’s configuration:

[Peer]
# PC Client
PublicKey = clientPubKey=
AllowedIPs = 2a12:4946:9900:dead::1337/128

Now you should be all set and ready to bring up the tunnel on both ends.

On the server (assuming your configuration file is named tunnels.conf):

$ systemctl enable wg-quick@tunnels
$ systemctl start wg-quick@tunnels

And on the client using the same procedure, or just clicking the “Connect” button on the GUI client.

I’ve had some cases where all of this alone isn’t enough, and I had to add the prefixes to lo manually.

For instance:

$ ip -6 addr add 2a12:4946:9900:dead::/64 dev lo

And in /etc/network/interfaces:

iface lo inet6 static
        address 2a12:4946:9900:dead::/64

Though I will admit, I had more issues setting this up than I should have, and most of these configs would benefit from being rewritten. Admittedly, I executed and documented this procedure while extremely tired, which of course caused some issues.

But at least this works, and it can be very useful when I’m connected to networks that don’t offer IPv6 connectivity.

Sending commands to a Docker Compose Resonite headless

After searching for a bit, I found a way to programmatically send commands to a Resonite headless running in Docker, without having to run docker compose attach <container> and then manually detach.

You will need the socat software installed on the host machine; given most of my machines run Debian, this can be done via apt install socat.

Now, you can use:

echo 'worlds' | socat EXEC:"docker attach $(docker compose ps -q reso-headless)",pty STDIN

In this command, replace:

  • worlds with the command you want to send
  • reso-headless with the defined name of your headless container

Alternatively, you can just specify the container name directly instead of doing $(docker compose ps -q reso-headless).

Addendum:

For this to work, you will have to make sure your container is defined with:

    tty: true
    stdin_open: true

in the Compose file.
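Put together, an illustrative service definition could look like this (the image name is a placeholder for whatever image your headless uses):

```yaml
services:
  reso-headless:
    image: resonite-headless:latest  # placeholder image name
    tty: true         # allocate a pseudo-TTY so the console accepts input
    stdin_open: true  # keep STDIN open for docker attach
```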

Setting up a Resonite Headless and getting metrics from it

As you may know, I made a mod for the platform Resonite that allows you to export metrics from the game as OpenMetrics data readable by Prometheus amongst other software.

In this post, we’re gonna see:

  1. How to set up a basic Resonite headless on Windows
  2. How to install the Resonite mod loader
  3. How to install the Headless Prometheus Exporter mod
  4. How to install and configure Grafana and Prometheus to scrape the metrics

First, some pre-requisites:

Setting up a headless on Windows is really easy. To first get the headless files, there are two ways, both beginning the same way: sending /headlessCode to the Resonite Bot while logged in-game. This will give you the code needed to activate the beta branch for the headless.

Now, to download the headless, you have two ways:

  1. Use the graphical Steam client
  2. Use SteamCMD

Using the Steam client is the easiest. Just right-click on Resonite, hit “Properties”, then “Betas”, enter the code you previously got into the field and click on “Check code”.
You should now be able to select the headless branch in the small dropdown, and Steam will download it automatically to your game folder.

When using SteamCMD, unpack the zip file it comes in into a directory, hold SHIFT, right-click, then select “Open PowerShell window here”. Once PowerShell is open, you can use the following command to download the headless (replace your account name, password, and headless code):

.\steamcmd.exe +force_install_dir ./resonite +login <account name> <account password> +app_license_request 2519830 +app_update 2519830 -beta headless -betapassword <headless code> validate +quit

You should now be able to find the headless within the resonite\Headless directory near where SteamCMD is unpacked.

Now, to run the mod itself, the headless alone is not enough; we need to extend it via the Resonite Mod Loader. Its installation is straightforward, as outlined by their README file:

Download ResoniteModLoader.dll to Resonite’s Libraries folder (C:\Program Files (x86)\Steam\steamapps\common\Resonite\Libraries).
You may need to create this folder if it’s missing.
Place 0Harmony.dll into a rml_libs folder under your Resonite install directory (C:\Program Files (x86)\Steam\steamapps\common\Resonite\rml_libs).
You will need to create this folder.
Add the following to Resonite’s launch options: -LoadAssembly Libraries/ResoniteModLoader.dll. If you put ResoniteModLoader.dll somewhere else you will need to change the path.
Optionally add mod DLL files to a rml_mods folder under your Resonite install directory (C:\Program Files (x86)\Steam\steamapps\common\Resonite\rml_mods).
You can create the folder if it’s missing, or launch Resonite once with ResoniteModLoader installed and it will be created automatically.
Start the game. If you want to verify that ResoniteModLoader is working you can check the Resonite logs. (C:\Program Files (x86)\Steam\steamapps\common\Resonite\Logs).
The modloader adds some very obvious logs on startup, and if they’re missing something has gone wrong. Here is an example log file where everything worked correctly.

Those same instructions also apply to the headless software. On certain Windows versions, you might need to right-click, open the properties of ResoniteModLoader.dll and 0Harmony.dll, and tick the “Unblock” checkbox, as Windows blocking these files prevents the mod loader from functioning correctly.

Once this is done, head over to the releases tab of the Headless Prometheus Exporter mod and download the HeadlessPrometheusExporter.dll file. Move this file into the rml_mods folder that should be located in the headless directory; if this folder isn’t there, you can create it. Check the properties of this DLL for the “Unblock” checkmark as well.

Now that the mod installation is done, we have one last step: configuring our headless. This step is also incredibly easy, being documented on the Resonite Wiki.
I can recommend going into the Config folder and copying the provided example configuration file, then adapting it to your needs. It is not recommended to start out with a “minimal” configuration file, as it might lack some essential settings and result in the headless not working as intended, or not starting at all. Once you are familiar with what goes where and does what, feel free to trim the configuration file to your needs.

If you still have your PowerShell window open, you can type cd resonite\Headless to navigate to where the executable is and then use the following to start it with the mods:

.\Resonite.exe -LoadAssembly Libraries\ResoniteModLoader.dll

After waiting a bit for it to start, you should be able to visit http://localhost:9000 in a web browser and see some metrics being displayed, such as world stats, engine FPS, and many others.

Now we can tackle the last piece: how to display those metrics. For this, we’re going to use Prometheus in combination with Grafana, which are hands-down probably the best solution for this.

We’re gonna use my pre-made minimal Grafana setup for this. You can obtain the files by either using git clone https://g.j4.lc/general-stuff/configuration/grafana-minimal-setup.git or by downloading a ZIP or tarball of the source.

Getting started with it is extremely easy, but first, let’s go through each configuration file and what it does.

First, let’s open prometheus/prometheus.yml:

scrape_configs:
  - job_name: 'resonite-headless'
    scrape_interval: 15s
    static_configs:
      - targets: ['host.docker.internal:9000']

This one is fairly simple. This configures Prometheus, whose job is to aggregate the metrics the mod is exposing. In this particular configuration, we are telling it to scrape our headless at the address host.docker.internal:9000 every 15 seconds.

Note that in most cases, you would need to use localhost:9000 or container_name:9000; host.docker.internal is only used because the headless is not in a container and on the host machine.

This leads us to our docker-compose.yml:

services:
    grafana:
        image: grafana/grafana
        ports:
            - 3000:3000
        volumes:
            - ./grafana:/var/lib/grafana

    prometheus:
        image: prom/prometheus
        volumes:
            - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
            - ./prometheus/data:/prometheus
        command:
            - '--config.file=/etc/prometheus/prometheus.yml'
            - '--storage.tsdb.path=/prometheus'
            - '--log.level=debug'
        extra_hosts:
            - 'host.docker.internal:host-gateway'

It basically defines two services:

  • Grafana, which:
    ◦ stores all of its data in the ./grafana directory
    ◦ is accessible on port 3000
  • Prometheus, which:
    ◦ stores all of its data in the ./prometheus/data directory
    ◦ has access to the configuration file mentioned earlier
    ◦ can reach the host machine’s localhost through host.docker.internal, and receives command-line arguments setting the configuration file path, the storage path, and the log level

Now that this is out of the way, open a PowerShell window in the directory where the docker-compose.yml file is located and make sure Docker is running. There, just run docker compose up -d and watch Docker automatically pull and start the images. If you are curious, you can use docker compose logs -f to see all the logs in real time.

After waiting for a minute or two, you can visit http://localhost:3000 in a web browser to set up Grafana. The default login should be admin for both the username and the password.

Once in Grafana, head to the menu on the left, go in “Connections” then “Data sources”. There, select “Add new data source” and select Prometheus. You will only need to set the URL of said source to http://prometheus:9090, you can then go on the bottom and click on “Save & Test”.
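Alternatively, data sources can be provisioned from a file, so this manual step isn’t needed. A minimal sketch (this file and the extra volume mount for /etc/grafana/provisioning are not part of the setup above; add them yourself if you want this):

```yaml
# e.g. grafana/provisioning/datasources/prometheus.yml, mounted into the
# container at /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```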

You can now either head to the Explore view or create a brand-new dashboard from the top right of the screen. I can recommend playing with the Explore view for a bit before starting to build dashboards, as it will teach you the different types of visualisation as well as some useful queries.
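If you want a starting point, here are a few illustrative PromQL queries built from the metric names shown below (the rate() example assumes the counter only ever increases):

```
engineFps                                              # current engine FPS
sum(world_users)                                       # users across all worlds
rate(world_network{type="totalProcessedMessages"}[5m]) # message throughput per second
```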

Some metrics you can query include:

# RESONITE HEADLESS PROMETHEUS EXPORTER
totalPlayers 1
totalWorlds 1
# WORLD STATS 
world_users{label="S-09444920-caab-4c3d-a242-a50b028c33e6"} 1
world_maxusers{label="S-09444920-caab-4c3d-a242-a50b028c33e6"} 16
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalCorrections"} 82
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalProcessedMessages"} 4049
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalReceivedConfirmations"} 0
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalReceivedControls"} 3
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalReceivedDeltas"} 554
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalReceivedFulls"} 0
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalReceivedStreams"} 3491
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalSentConfirmations"} 554
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalSentDeltas"} 2410
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalSentControls"} 3
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalSentFulls"} 1
world_network{label="S-09444920-caab-4c3d-a242-a50b028c33e6",type="totalSentStreams"} 106
engineFps 59.98274
completedGatherJobs 0
startedGatherJobs 0
failedGatherJobs 0
engineUpdateTime 0.0004873

Note that S-09444920-caab-4c3d-a242-a50b028c33e6 is not a fixed value; it is a session ID, so yours will differ.

Now that you’ve done all of this, you can enjoy a nice dashboard displaying metrics about your headless!

You can also expand on the subject by reading further.

Jae 2012-2025, CC BY-SA 4.0 unless stated otherwise.