Jae's Blog

The making of the Resonite sessions Discord bot

If you are a Resonite player and are in the Discord guild, you might be familiar with the #active-sessions channel, in which a bot displays the 10 most popular sessions on Resonite as well as some stats.

Screenshot of Discord showing game stats as well as an embed for a single session at the bottom.

What you might not know is that I’m the author of this bot, which I originally started as just a small one-shot project to have some fun.

For some background: when Neos was still a thing (technically it still is, but in what a state), a bot like this lived in the Discord guild, showing sessions and game stats like server metrics and session counts.

When Resonite was released, the channel was there; however, no metrics or posts were ever made, only word that the bot would be revived at some point in the future(TM).

At the time, I was a bit bored and wanted a new project, so I decided to start experimenting with .NET Discord bots and in turn set myself the objective of re-creating the original bot.

Why .NET? One reason is that I use .NET all the time, as it’s fairly easy to use; the other is that most of the Resonite community also knows .NET, since it’s what mods and whatnot are made with.

The bot itself is fairly simple and divided into multiple parts built around .NET Hosted Services (a minimal wiring sketch follows the list):

  • Discord client service – handles the connectivity to Discord
  • Interaction service – provides the command handler for the few commands the bot has
  • Startup service – sets up some bot must-haves like the logger and other data
  • Message service – the core functionality of the bot that runs all the logic that makes the magic happen
  • Healthcheck service – this one is optional, but important when hosting the bot in Docker
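
For illustration, here’s a minimal sketch of how such services can be wired into the .NET Generic Host. The service class names below are assumptions mirroring the list above, not the bot’s actual code:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical startup: each part of the bot is registered as a hosted
// service and the host manages their lifetimes.
await Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<DiscordClientService>();
        services.AddHostedService<InteractionHandlerService>();
        services.AddHostedService<StartupService>();
        services.AddHostedService<MessageService>();
        services.AddHostedService<HealthcheckService>();
    })
    .Build()
    .RunAsync();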

Let’s go through all of those in detail and see why they were made that way.

The Discord client, interaction and startup services

I’m gonna group those three together as they really belong to the same component: handling Discord-related shenanigans.

The bot has a few commands:

  • /info which shows some info about the bot
  • /setchannel which sets the channel in which you want the notifications
  • /which which shows which channel is set as the notification one
  • /unregister which unsets the notification channel
  • /setting which lets you toggle some settings, like whether you want thumbnails or not

All of those commands (with the exception of /info) are admin-only.

This part is honestly fairly boring and straightforward: the service just takes a Discord bot token, connects to the chat service and tries to log in.

        await _client.LoginAsync(TokenType.Bot, token);
        await _client.StartAsync();

The message service

Now this is the juicy stuff.

This part handles:

  • Retrieving the info from the Resonite API
  • Formatting it neatly into a message
  • Generating embeds from the session info
  • Storing which messages were sent where in a DB
  • Checking if messages can be updated
  • Updating said messages with the new data

First off, the bot uses a relatively simple SQLite DB to avoid everything being hardcoded. The first versions used hardcoded channel IDs, but that is far from ideal if you want something modular without having to host multiple copies.

The DB basically stores the guild ID, channel ID and settings, so the bot can send the updates to the right place, with the content we want.
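
Conceptually, a stored row looks something like this (an illustrative shape, not the actual schema):

// Hypothetical model of one registration row in the SQLite DB.
public record GuildRegistration
{
    public ulong GuildId { get; init; }        // which Discord guild
    public ulong ChannelId { get; init; }      // where the updates go
    public ulong MessageId { get; init; }      // the message edited each cycle
    public bool ShowThumbnails { get; init; }  // the single setting, see below
}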

Speaking of settings, there is only one so far: show the session thumbnails or not. The reason for this is a difference between the Discord & Resonite ToS. While nipples aren’t considered sexual on Resonite, they are on Discord, meaning having a session thumbnail showing nipples on Discord without gating the channel to 18+ users would be violating the ToS of the platform.

One thing I am quite proud of is how stable the bot is: nowadays it rarely, if ever, crashes on its own, which it used to do quite often.

The bot is made to handle errors gracefully and never shut down or crash the program unless something really, really warrants it. When running this bot, all the errors that would normally crash it are instead logged with an error message and stacktrace to make it easy to debug.
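
In practice, that pattern boils down to something like this sketch (the helper name is mine, not the actual code):

// Inside the message service's BackgroundService.ExecuteAsync(stoppingToken):
// failures are logged with their stack trace and the cycle is skipped
// instead of letting the exception kill the process.
while (!stoppingToken.IsCancellationRequested)
{
    try
    {
        await UpdateAllRegisteredChannelsAsync(stoppingToken); // hypothetical helper
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Update cycle failed, skipping this cycle");
    }

    await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
}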

Another thing to note is that the database schema hasn’t been updated since the bot’s first release, and touching it is considered a very last resort. Having the DB break during an upgrade would be disastrous, requiring all admins to re-set the notification channel. As they say, if it ain’t broke, don’t fix it.

Out of all the variables in the mix, Discord is the most unstable one: lots of outages, sometimes lasting hours at a time, slowness for no reason whatsoever, or thinking existing things (such as channels or messages) don’t exist even though they do.

This is why the whole checking logic exists: it first checks if the channel exists; if it does, it checks if the message exists; and if so, it tries to update it. If any step fails, the bot retries for a while, then deletes the message and tries to re-send it; if that ultimately fails too, it deletes the channel from the DB and the admin has to re-set the notification channel.
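
A rough sketch of that decision ladder (all helper names here are hypothetical):

// Illustrative per-registration flow ("reg" is the stored DB row, "embed"
// the freshly built session embed): verify the channel, then the message,
// then update; fall back to re-sending, drop the registration last.
var channel = await TryGetChannelAsync(reg.ChannelId);
if (channel is null)
{
    await RemoveRegistrationAsync(reg); // an admin must run /setchannel again
    return;
}

var message = await TryGetMessageAsync(channel, reg.MessageId);
if (message is null || !await TryUpdateAsync(message, embed))
{
    await TryDeleteAsync(message);                // no-op if it is already gone
    await SendAndStoreAsync(channel, embed, reg); // re-send, save the new ID
}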

The retry logic was implemented after some issues caused by Discord:

  • The bot would forget about the message (because Discord said it didn’t exist anymore) and would send a second one in the same channel or over and over until restarted
  • Sometimes the checks would fail on first try and delete everything gracefully without notifying anybody
  • The bot would crash because everything “ceased” to exist

On the Resonite side, if an error happens while contacting the API, the bot will just skip this cycle and try updating the next time (one minute by default). This used to crash the bot (whoops) in the early days.

The latest addition is the Docker healthcheck, added after the bot recently crashed in the main Resonite guild (GH-3521) without any monitoring being triggered.

Now the bot runs a small HTTP server that simply returns the current date, which a curl instance checks every 30 seconds.
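
In compose terms, the check can look something like this (the port, path and service name are my assumptions):

services:
  bot:
    # ...
    healthcheck:
      # The endpoint just has to answer with anything (here, the date).
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
      interval: 30s
      timeout: 5s
      retries: 3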

The CI/CD

It’s no secret that I love GitLab as a piece of software. I also work on and with it daily at my job.

The CI/CD used in this project is extensive, but classic:

  • Secret detection
  • SAST scan
  • Dependency scanning
  • NuGet build
  • NuGet deployment
  • ARM64 and x86 Docker builds
  • Release publishing

The first three are kinda self-explanatory: they warn if any secrets have been pushed, if any unsafe codepaths are detected, or if any dependencies need updating.

Now the most important thing to highlight is the separate Docker builds for the two architectures. I originally tried combining them into a single build, as you would do by specifying multiple architectures in buildx, however this did not work.

Building ARM on x86 (with virtualization) and vice versa would always produce an error, even though the same command worked fine for other projects.
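
The workaround is simply two native jobs, one per runner architecture; roughly like this (job and tag names are illustrative):

docker-build-amd64:
  tags: [amd64]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:latest-amd64" .
    - docker push "$CI_REGISTRY_IMAGE:latest-amd64"

docker-build-arm64:
  tags: [arm64]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:latest-arm64" .
    - docker push "$CI_REGISTRY_IMAGE:latest-arm64"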

To avoid errors when doing things manually, the release process is also automated, triggering when a tag is detected. It basically builds as usual with the version tag and then publishes the release following a markdown template, automatically filling in details like image tags from the changelog.
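
A tag-triggered release job on GitLab can be sketched like this (the template and changelog wiring are simplified away):

publish-release:
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - echo "Publishing release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "Release $CI_COMMIT_TAG" # the real job fills this in from the changelog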


Fun fact: the bot is open-source under the MIT license, meaning you are welcome to host your own version.

Some stats about the project so far:

  • 47 pull requests merged
  • 97 commits (PRs are squashed)
  • 16 releases
  • 6+ months of existence
  • Roughly 10k users served in the Resonite Discord alone
  • Way too many hours sunk into overengineering this

What’s to come? Nothing much, really: this bot has basically achieved its goal of having the active session list published somewhere.

The project is basically in maintenance mode, housed under the Crystalline Shard development group, of which I am one of the founders.

Now for the self-criticism: if I had to restart the project from scratch, I would probably opt for an even simpler design. Some services are really huge files that have only grown in complexity over time.

Nothing too bad currently, but I think a refactor would be warranted to decrease complexity and make the bot more maintainable.

I wouldn’t touch the language though, since the bot’s footprint is really small: only about 30 MB to 50 MB of RAM used in total during normal runtime.

In the end, this bot is a really fun project to make and maintain, and I’m extremely happy that it got canonized and used in an official manner.

For a quick roadmap:

  • GL-33 have someone completely review the codebase
  • GL-39 disable unneeded intents for the bot (on hold for evaluation)
  • GL-48 solve a mystery crash that randomly happens… sometimes

Let’s check back in about six more months and see how everything has evolved, with even more bugs squashed.

Deploying Hugo using GitLab pages

It is 2025 and it’s still super easy to deploy a blog using Hugo and GitLab pages.

In fact, the post you are reading right now is exactly that, deployed on my self-managed instance.

But Jae, didn’t you move to another host last year?

Yes, last year I switched to Mataroa for the peace of mind the platform offers.

The interface is very clean, has no bullshit whatsoever and is made and hosted in small web fashion.

Sadly, it has one caveat that was underlined once again by @miyuru@ipv6.social on the Fediverse (ironically under my post about GitHub and its lack of IPv6): no IPv6.

This is why today I moved my blog back to something I used to have a long time ago: a GitLab Pages site generated by Hugo.

Actually implementing it was as easy as I remembered:

  1. Create a new Hugo project
  2. Add the CI config file
  3. Move my domain’s CNAME
  4. Wait for Let’s Encrypt to do its work (funnily enough this was the longest part)
  5. Tada, all done

The Hugo setup itself is fairly easy, so is the CI file:

default:
  image: ghcr.io/hugomods/hugo:ci-non-root

variables:
  GIT_SUBMODULE_STRATEGY: recursive

test:
  script:
    - hugo
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

deploy-pages:
  script:
    - hugo
  pages: true
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production

The main difference from years ago is that now we have fully-featured (and maintained) Docker images for Hugo; the one selected in this instance is ghcr.io/hugomods/hugo, maintained by HugoMods.

So now, enjoy all the posts over IPv6 \o/

GitHub and IPv6 in 2025

In this year 2025, the main GitHub domain still doesn’t serve over IPv6.

Some subdomains do have IPv6 though, just… not what you would expect.

For instance, avatars.githubusercontent.com:

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> AAAA avatars.githubusercontent.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18015
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;avatars.githubusercontent.com. IN      AAAA

;; ANSWER SECTION:
avatars.githubusercontent.com. 159 IN   AAAA    2606:50c0:8002::154
avatars.githubusercontent.com. 159 IN   AAAA    2606:50c0:8001::154
avatars.githubusercontent.com. 159 IN   AAAA    2606:50c0:8003::154
avatars.githubusercontent.com. 159 IN   AAAA    2606:50c0:8000::154

;; Query time: 32 msec
;; SERVER: 2a09::#53(2a09::) (UDP)
;; WHEN: Wed Jan 08 04:55:56 UTC 2025
;; MSG SIZE  rcvd: 170

Meanwhile, none of the other domains loaded when accessing github.com (namely github.com, github.githubassets.com and alive.github.com) have IPv6.

The package registry ghcr.io doesn’t support IPv6 either:

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> AAAA ghcr.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 967
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
; EDE: 29: (Result from negative cache)
;; QUESTION SECTION:
;ghcr.io.                       IN      AAAA

;; AUTHORITY SECTION:
ghcr.io.                547     IN      SOA     ns-773.awsdns-32.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400

;; Query time: 32 msec
;; SERVER: 2a09::#53(2a09::) (UDP)
;; WHEN: Wed Jan 08 04:57:35 UTC 2025
;; MSG SIZE  rcvd: 152

And neither does gist.github.com:

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> AAAA gist.github.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52139
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;gist.github.com.               IN      AAAA

;; ANSWER SECTION:
gist.github.com.        1019    IN      CNAME   github.com.

;; AUTHORITY SECTION:
github.com.             182     IN      SOA     dns1.p08.nsone.net. hostmaster.nsone.net. 1656468023 43200 7200 1209600 3600

;; Query time: 32 msec
;; SERVER: 2a09::#53(2a09::) (UDP)
;; WHEN: Wed Jan 08 04:58:11 UTC 2025
;; MSG SIZE  rcvd: 123

So yeah, years later, always the same excuse, and no support at all.

I played Star Citizen so you don’t have to

Star Citizen is a game that has been in development since around 2013.
It claims to be a space simulator, MMORPG, FPS and a bunch of other things.

More than 10 years and 750 million USD later, what have we got?

The good

The game looks visually good, even if it’s overly generic “space stuff”.

Not much else to say on that point: the ships have real thought put into their design, and the starting environments seem somewhat consistent and hand-built (tho I noticed there are really a lot of food shops everywhere for some reason).

There is also a good selection of existing ships, each with its own stats and benefits, and you can upgrade them too.

The bad

Not a single one of my sessions was what I would call a good experience.

From server crashes, game crashes, contract items magically disappearing during loading, contracts themselves disappearing after a game crash, ships going through the floor, to getting kicked out of your own ship for no reason at all and getting fined for it; the list just goes on.

As of now, the game feels like any Unreal Engine asset flip of a “space stuff” game.

The game is mostly empty, contracts get repetitive really quickly, you can barely talk to any NPC, and despite their presence, you have to shop using touchscreens (which barely work if the server is a tad laggy).

Nothing in the game is really explained, it feels like there is feature creep even though no single feature is actually finished and in a working state.

The performance is also a huge issue, never going above 45 fps even when no players are around (and it goes without saying that my computer’s specs blow the minimum ones out of the water).

The ugly

The game itself requires putting at least 50€ on the table to get into, with the most expensive starter pack sitting at more than 1300€ as of the time of writing.

In normal times, I wouldn’t have an issue with a store where you can buy things to support the development of a game; however, we’re talking about possibly one of the most expensive games of all time, where, in the end, there is barely a game.

The cherry on top: if you lose something in-game that you bought with real money for 250€, you lose it permanently.

They also use “we are in development” as a shield to deflect any criticism of the game, which honestly doesn’t hold up for a game with this much money poured into it and this much development time already.


So, conclusion: don’t waste your money.

Star Citizen is a game that will continue its unscoped development and will probably never release in a stable or playable form, if ever.

It’s a tale we’ve already seen: a game too ambitious to be done in one go, with developers trying to do it anyway.

I’ve seen Bethesda games with fewer bugs.

Happy new year, btw.

QSL – NHK Japan Radio December 8 2024

Location: Vantaa, Finland

Date and time: 2024/12/08 11:45 (local time UTC+2) – 09:45 (UTC)

Frequency: 15290 kHz

SINPO: 2/5

Equipment: Tecsun PL-680 + wire antenna

Content Heard:

Unsure; I asked a friend, and the only words they could understand were “Tokyo 23”.

Notes: I cannot speak Japanese, sadly. Lots of static and interruptions.

Contact: NHK WORLD-JAPAN | Contact Us

Card received:

Back of postcard showing a “Thank You” message for confirming reception of NHK world Japan with the message translated into English.
The QSL card.
I tested Horizon Worlds so you don’t have to

In the middle of the year, I got a Quest Pro, mainly to use with Resonite for face and eye tracking.

With this standalone headset made by Facebook, came a small program that (allegedly) cost them billions to make: Horizon Worlds, their own platform. You know, the one without legs.

Well, I tested it for a bit.
Why? I was curious.
Will I move to Horizon Worlds anytime soon? Hell no.

Because I’m lazy, here is a list of pros and cons with the platform.

Pros:

  • Avatars have legs nowadays (wow)
  • Cool TTS accessibility feature

Cons:

  • Worlds are mostly empty
  • If the world is not empty, it’s probably full of kids
  • If the world isn’t full of kids, people have awful opinions about foreigners
  • 99% of worlds are generic (corporate art style in VR if you will)
  • Has a shitty in-game currency system
  • Half of the buttons on the Quest Pro controllers aren’t mapped
  • Even tho everything is completely baked in and can’t be modified, the game lags when you’re alone
  • For some reason, my hands stop tracking entirely when I’m in it
  • Available games are boring
  • Avatars are extremely limited in their expressions
  • You can’t uninstall it from the Quest

So yeah, that’s it for the review; as I imagined, it’s a big no from me.

Until next time.

Verkkokauppa.com DNS

Verkkokauppa.com is a chain of web and physical stores originating from (and limited to) Finland, akin to Amazon here, if you will.

When trying to search for a USB-C computer mouse, IPvfoo told me that the website was accessible over IPv6, which I had never noticed before.

I then queried the DNS with nslookup to see where it was hosted:

Server:		127.0.0.53
Address:	127.0.0.53#53

Non-authoritative answer:
Name:	verkkokauppa.com
Address: 34.95.73.242

Weird, no v6 there, let’s try with the www subdomain now:

Server:		127.0.0.53
Address:	127.0.0.53#53

Non-authoritative answer:
www.verkkokauppa.com	canonical name = www.verkkokauppa.com.cdn.cloudflare.net.
Name:	www.verkkokauppa.com.cdn.cloudflare.net
Address: 104.18.33.183
Name:	www.verkkokauppa.com.cdn.cloudflare.net
Address: 172.64.154.73
Name:	www.verkkokauppa.com.cdn.cloudflare.net
Address: 2606:4700:4400::ac40:9a49
Name:	www.verkkokauppa.com.cdn.cloudflare.net
Address: 2606:4700:4400::6812:21b7

Turns out the www subdomain (which the apex redirects to) is proxied through Cloudflare, therefore offering IPv6 connectivity, and resolves via different nameservers.

The apex uses Netnod DNS (AS8674):

; <<>> DiG 9.20.4 <<>> NS verkkokauppa.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35173
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;verkkokauppa.com.		IN	NS

;; ANSWER SECTION:
verkkokauppa.com.	2986	IN	NS	nsu.dnsnode.net.
verkkokauppa.com.	2986	IN	NS	ns2.verkkokauppa.com.
verkkokauppa.com.	2986	IN	NS	nordic1.dnsnode.net.
verkkokauppa.com.	2986	IN	NS	ns1.verkkokauppa.com.
verkkokauppa.com.	2986	IN	NS	nsp.dnsnode.net.

;; Query time: 16 msec
;; SERVER: 192.168.1.1#53(192.168.1.1) (UDP)
;; WHEN: Sun Dec 29 17:41:39 EET 2024
;; MSG SIZE  rcvd: 150

The www subdomain uses Cloudflare (AS13335):

; <<>> DiG 9.20.4 <<>> NS www.verkkokauppa.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41779
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.verkkokauppa.com.		IN	NS

;; ANSWER SECTION:
www.verkkokauppa.com.	1020	IN	CNAME	www.verkkokauppa.com.cdn.cloudflare.net.

;; AUTHORITY SECTION:
cloudflare.net.		1800	IN	SOA	ns1.cloudflare.net. dns.cloudflare.com. 2359389931 10000 2400 604800 1800

;; Query time: 16 msec
;; SERVER: 192.168.1.1#53(192.168.1.1) (UDP)
;; WHEN: Sun Dec 29 17:42:04 EET 2024
;; MSG SIZE  rcvd: 157

Now, back to finding my USB-C mouse.

A week of Framework

As per my usual tradition when I receive a cool new device, I have to write about it a week later, then either three months or a year later (as I did previously with my Index or my work MacBook Pro M1).

As you may know, my previous laptop was a ThinkPad x200.
It’s not exactly a young machine, being around 16 years old now.

As I started working on more demanding projects (mainly C# ones), the x200 simply wasn’t enough (it couldn’t even run a modern web browser any more).

This is why I decided to scout for a new laptop.

Fear not, the x200 is not going to waste! It will now be used mainly to test Libreboot and other stuff like that.

Now, I had a bunch of criteria for the new laptop:

  • Can last as long as the x200
  • Can run my IDEs correctly (namely Sublime Text and JetBrains Rider)
  • Has a proper GPU (to run VR stuff)
  • Has modern hardware in general

The Framework 16, tho expensive, ticked a lot of those boxes:

  • Can last long by sheer virtue of being repairable
  • Has modern hardware, therefore can run my IDEs correctly
  • Can be upgraded to have a dedicated GPU

So for around €2100 (ouch, my wallet D:), this is what you get with Framework:

  • AMD Ryzen 7 7840HS (8c16t)
  • Radeon 780M integrated graphics (I decided to buy the GPU upgrade later)
  • 32GB of RAM (single stick to leave room for upgrades)
  • Wi-Fi 6
  • A 2560×1600@165 Hz display
  • 2x USB-C expansion cards
  • 2x USB-A expansion cards
  • 1x HDMI expansion card
  • 1x DisplayPort expansion card
  • 1TB of m.2 storage
  • Power supply
  • Keypad module

Overall, pretty good specs by my standards for a laptop.
Before you say anything: the HDMI is for a second screen, the DisplayPort is for VR headsets.

To save some money, I also decided to get the DIY edition without any OS, and then install Fedora on it.
The laptop itself was painless to build, even fun. The only issue was my hands trembling when doing anything requiring a bit of precision (in this instance, handling a screwdriver with small screws), but that’s a me issue.

There was a small issue on the first few boots where the keyboard wasn’t responding at all, but taking it apart and verifying all the connections one by one fixed it.

Fedora is one of the supported OSes on the Framework, alongside Ubuntu and Windows. I would have gone with Arch, however I wanted a headache-free setup this time, which Fedora offered.

During this week, the 16 actually got a new BIOS upgrade, 3.05, fixing some security issues and adding a cool new feature to extend battery longevity.

Upgrading the BIOS was pretty much painless thanks to fwupdmgr and was as easy as:

$ fwupdmgr refresh --force
$ fwupdmgr get-updates
$ fwupdmgr update

Then, being patient.
I remember having to fiddle with USB keys a few years back, so this CLI utility is most welcome.

The battery life itself is decent, never really running out when I’m working on stuff.

Fedora itself is also a breeze to use, having GUIs for everything, which simplifies things a lot.
I do miss a bit my good old XFCE4, but GNOME does the job just fine as well.

Another thing I totally forgot to do after the first install was to get the EasyEffects profile, which does make a huge difference to the laptop’s audio.

Overall, I’m very satisfied with what I got; a few things remain to be seen:

  • Will new hardware upgrades come out (for instance, additional GPU modules)
  • Will any other companies start making expansion cards (instead of relying on Framework alone; though the community already made a lot of those)
  • Will Framework as a company remain in business long enough to offer the longevity I want

But those can only be answered with time. It goes without saying that most hardware replacements or upgrades (RAM, storage, etc.) can be done with any off-the-shelf components and not just ones sold by Framework.

For now, I’ll keep using it, and I’ll see you peeps in either three months or a year (or both) if I don’t forget for the traditional update!

Deploying your own GitLab instance under 5 minutes

It’s no secret that I work around GitLab during my day job and that I generally love this software.
This blog post is therefore not biased at all in any way or form. (do I need to mark this further as sarcasm, or did everyone get the memo?)

For this quick tutorial, you’ll need:

  • Some machine where the instance will be hosted
  • Docker installed on the machine
  • Ability to read instructions

For this, we’ll be using docker compose which provides an easy way to bring services up from a configuration file.
This tutorial just provides a bare bones instance that you will need to configure further later.

Small note: for this to work, your system’s SSH daemon will need to run on something other than port 22, since the GitLab container binds it.
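
For example, moving the host’s sshd to another port is a quick config change (the port number here is arbitrary; the service unit may be named sshd on some distros):

# In /etc/ssh/sshd_config on the host:
#   Port 2222
$ systemctl restart ssh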

The compose file for GitLab is really simple:

services:
  gitlab:
    image: gitlab/gitlab-ce:17.6.1-ce.0
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/log:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    ports:
      - "22:22"
      - "80:80"
      - "443:443"

And there you go, name this file docker-compose.yml on your server and issue:

$ docker compose up -d

After a few minutes, the GitLab instance should be reachable on the IP of your machine.
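
To follow the startup progress in the meantime, you can tail the container logs:

$ docker compose logs -f gitlab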

To reset the root password, use:

$ docker compose exec gitlab gitlab-rake "gitlab:password:reset[root]"

Now, a few steps that are recommended after you have a working instance:

  • Reverse-proxy GitLab and get HTTPS certificates for everything
  • Host a runner (to be able to utilize CI/CD)
  • Refine the gitlab.rb configuration, most notably the external_url setting

In a future blog post, I’ll show how to configure one cool feature of GitLab, the service desk, which can be useful for some projects.

Setting up WireGuard tunnels from a BGP router

I recently restarted my BGP shenanigans, and with that, re-set up some WireGuard VPNs for my personal machines.

I basically use those to whitelist connections to certain applications, allowing only the prefix used by my machines.

The host machine runs Debian and BIRD, and the end devices are diverse: standard Linux machines, Windows desktops, and iOS devices.

First, the BIRD configuration is pretty trivial, just adding a route for the prefix via lo:

route 2a12:4946:9900:dead::/64 via "lo";

I’m aware my subnet configurations can be sub-optimal, but I’m just running this for fun, not for it to be perfect.
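
For completeness, that route line lives inside a static protocol block; in BIRD 2 the surrounding configuration looks roughly like this:

protocol static {
    ipv6;
    route 2a12:4946:9900:dead::/64 via "lo";
}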

Then, generating WireGuard keys on the host (the package wireguard-tools will need to be installed):

$ umask 077
$ wg genkey > privatekey
$ wg pubkey < privatekey > publickey

Now, the WireGuard configuration on the host is equally simple:

[Interface]
Address = 2a12:4946:9900:dead::1/128
ListenPort = 1337
PrivateKey = myVeryPrivateKey=

The key generation on the client follows the same procedure, if not easier via a GUI. The configuration itself looks like this:

[Interface]
PrivateKey = myVerySecretKey=
Address = 2a12:4946:9900:dead::1337/128

[Peer]
PublicKey = serverPubKey=
AllowedIPs = ::/1, 8000::/1
Endpoint = [2a12:4946:9900:dead::1]:1337
PersistentKeepalive = 30

Note that I’m using ::/1 and 8000::/1 in AllowedIPs on Windows, as setting it to ::/0 kills IPv4 connectivity (which is sadly still needed) as well as local connectivity to things like my storage array. The two /1s together cover the same address space as ::/0, but since neither is a literal default route, the Windows client skips the special handling it applies to full-tunnel configs. On Linux, ::/0 works as expected, letting IPv4 through correctly.

Now, we can add a Peer section into the server’s configuration:

[Peer]
# PC Client
PublicKey = clientPubKey=
AllowedIPs = 2a12:4946:9900:dead::1337/128

Now you should be all set and ready to bring up the tunnel on both ends.

On the server (assuming your configuration file is named tunnels.conf):

$ systemctl enable wg-quick@tunnels
$ systemctl start wg-quick@tunnels

And on the client, use the same procedure, or just click the “Connect” button in the GUI client.
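
To verify the tunnel actually came up, checking for a recent handshake (and pinging the other end) does the trick:

$ wg show
$ ping 2a12:4946:9900:dead::1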

I’ve had some cases where all of this alone isn’t enough, and I had to add the prefixes to lo.

For instance:

$ ip -6 addr add 2a12:4946:9900:dead::/64 dev lo

And in /etc/network/interfaces:

iface lo inet6 static
        address 2a12:4946:9900:dead::/64

Tho I will admit, I had more issues setting this up than I should have, and most configs would benefit from being rewritten. I executed and documented this procedure while extremely tired, which of course caused some issues.

But at least this works, and it’s also very useful when I’m connected to networks that don’t offer IPv6 connectivity.

Jae 2012-2025, CC BY-SA 4.0 unless stated otherwise.