A few months ago, I started using Signal again. The messenger has evolved quite a lot since I last used it; for instance, usernames weren’t even a thing back then, and their absence is mainly what drove me off the platform.
If you don’t know what Signal is, it’s quite simple: it’s an encrypted chat app. As it stands, it’s also the safest option at the moment (in a broad sense, please don’t hurt me), as it has the most eyeballs on it and sane encryption. Soatok talks about that quite often on his blog as well.
Because I have no imagination whatsoever, here is how I use Signal (basically, the settings I changed, mostly in the “Privacy” section; note that this is for the iOS version, and I have no clue whether the Android one has the same options).
Privacy
Phone number:
Who can see my phone number: nobody
Who can find me by number: nobody
Advanced:
Always relay calls: on
General:
Read receipts: off
Typing indicators: off
Disappearing messages: 4w
Hide screen in app switcher: on
Screen lock: on
Lock screen timeout: 5 minutes
Show calls in recents: off
Chats
General:
Generate link previews: off
Share contacts with iOS: off
Use phone contacts photos: off
Stories
General:
Turn off stories
Data usage
General:
Sent media quality: high
Coinciding with this post, I turned off the ability for new people to DM me on Telegram; from now on, personal contact will have to happen through Signal.
If we have an existing DM and you want to switch to Signal, use that DM thread to ask me for my username. Otherwise, either email me yours or ping me on a common chat platform. Remember, none of us have to know each other’s phone number anymore, just set up a username if you haven’t already.
This means you can now build ARM programs natively on Linux without having to fiddle with weird cross-compilation.
One way to achieve that is through a matrix. Consider the following workflow, which builds and then uploads an artifact (taken from the YDMS Opus workflow I wrote):
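The original workflow isn’t reproduced here, so the following is only a minimal sketch of such a matrix: the checkout step, build placeholder and artifact path are assumptions, while the osver key, the runner labels and the opus-linux artifact name follow the rest of this post.

```yaml
jobs:
  build:
    strategy:
      matrix:
        osver: [ubuntu-24.04, ubuntu-24.04-arm]
    runs-on: ${{ matrix.osver }}
    steps:
      - uses: actions/checkout@v4

      # The actual Opus build steps are omitted here; assume they
      # leave the files to ship in the dist/ directory.

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: opus-linux
          path: dist/
```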
Since the same upload step is used by the ARM matrix job, we get an error saying that an artifact named opus-linux already exists for this workflow run.
This is where a small conditional step can be added to set an environment variable with the desired name:
```yaml
- name: Set dist name
  run: |
    if ${{ matrix.osver == 'ubuntu-24.04-arm' }}; then
      echo "distname=opus-linux-arm" >> "$GITHUB_ENV"
    else
      echo "distname=opus-linux" >> "$GITHUB_ENV"
    fi
```
We can then change our artifact upload step to use these new names:
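As a sketch (the artifact path is carried over from the example above), the upload step now reads the name from the environment:

```yaml
- name: Upload artifact
  uses: actions/upload-artifact@v4
  with:
    name: ${{ env.distname }}
    path: dist/
```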
As a bit of a sidetrack, you can also use checks like this to conditionally skip (or execute) steps depending on the architecture, using an if statement:
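For instance, a hypothetical step that should only run on the ARM runner can be gated like this:

```yaml
- name: ARM-only step
  if: ${{ matrix.osver == 'ubuntu-24.04-arm' }}
  run: echo "This step only runs on the ARM runner"
```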
As I often mention, I use .NET a lot in general, as it’s fairly easy to use, has a huge ecosystem, and has evolved really positively over the past few years (long gone are the days of Mono :D).
Another component of this is that .NET projects are incredibly easy to build and publish using GitLab CI/CD. Today, we’re gonna explore some ways of building and publishing a .NET project using just that.
Docker
Probably the most straightforward approach, assuming the repository contains a simple Dockerfile.
The job builds the image, then publishes it to the repository’s GitLab container registry. It’s also possible to specify a different registry, but that’s kinda pointless as the default one is already excellent for most cases.
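The job definition itself isn’t shown in this excerpt, so here is a minimal .gitlab-ci.yml sketch of what it can look like; the stage name and the docker:27 image tag are assumptions, while the CI_REGISTRY_* variables are GitLab’s predefined ones.

```yaml
docker-build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```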
Regular build / NuGet build
This type of build just requires the source itself, without much additional configuration.
It will build the software, then either upload the resulting files as an artifact or publish it into the GitLab NuGet registry.
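Since the actual job isn’t reproduced here, a minimal sketch of the artifact variant could look like the following; the SDK image tag and output directory are assumptions.

```yaml
build:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:9.0
  script:
    - dotnet publish -c Release -o publish/
  artifacts:
    paths:
      - publish/
```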
For those two, I can recommend setting up a cache policy like:
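The exact paths from the original config aren’t shown, so treat this as an assumption-laden sketch:

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - '**/obj/**'
    - '**/bin/Release/**'
```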
In this case, we use ** to avoid having to update the path every time we upgrade the .NET version (for instance, .NET 8 will put the build in the net8.0 directory, .NET 9 in net9.0, etc).
Now, we can also build and publish the solution to the NuGet registry:
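The original definition isn’t included here either, so this is a sketch under assumptions (stage name, SDK image, versioning scheme); the package source URL is GitLab’s documented project-level NuGet endpoint.

```yaml
nuget-publish:
  stage: deploy
  image: mcr.microsoft.com/dotnet/sdk:9.0
  rules:
    - if: $CI_COMMIT_TAG
  script:
    # Assumes the tag itself is a valid package version
    - dotnet pack -c Release -o packages/ /p:Version=$CI_COMMIT_TAG
    - dotnet nuget add source "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/nuget/index.json" --name gitlab --username gitlab-ci-token --password "$CI_JOB_TOKEN" --store-password-in-clear-text
    - dotnet nuget push "packages/*.nupkg" --source gitlab
```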
As seen in this definition, the publish stage will only run on tag pushes, but it’s also possible to generate a version string from the current commit and push that as a nightly release.
As an additional step, not really related to the build itself, I often activate Secret Detection, SAST and Dependency Scanning, as they can prevent really obvious mistakes. Doing so is also really trivial:
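Enabling them boils down to including GitLab’s maintained CI templates:

```yaml
include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```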
Given that sessions in Resonite are hosted by the players themselves, IPv6 is very useful in this context: there is no need to battle with CGNAT or other network shenanigans and restrictions ISPs might put in place to save on IP space.
While full native IPv6 support is still being worked on (see GH-143 for a more in-depth status), some parts already support it.
Joining a session through a network string like lnl://[<IPv6 address>]:<port>/ is already supported, which only leaves the relays and bridges without IPv6 support. A mod made by a community member already exists to cover that gap until official support is added.
In the end, I’m very confident that we will see full native IPv6 support land in Resonite this year, if not already in Q1, given this is actively being worked on.
Once those two issues (relays and bridges + missing AAAA records) are addressed, the only thing still missing IPv6 will be… the bug tracker, GitHub, which I already talked about in this article (spoiler: we ain’t seeing IPv6 from them anytime soon).
Also special thanks to ProbablePrime for looking into it!
If you are a Resonite player and are in the Discord guild, you might be familiar with the #active-sessions channel, in which a bot displays the 10 most popular sessions on Resonite as well as some stats.
What you might not know is that I’m the author of this bot, which I originally started as just a small one-shot project to have some fun.
For some background: when Neos was still a thing (technically it still is, but in what state), a bot like this lived in the Discord guild, showing sessions and game stats like server metrics and session counts.
When Resonite was released, the channel was there; however, no metrics or posts were ever made, with word being that the bot would be revived at some point in the future(TM).
At the time, I was a bit bored and wanted a new project, so I decided to start experimenting with .NET Discord bots and in turn set myself the objective of re-creating the original bot.
Why .NET? One reason is that I use .NET all the time as it’s fairly easy to use; the other is that most of the Resonite community also knows .NET, since it’s what mods and whatnot are written in.
The bot itself is fairly simple and divided into multiple parts built around .NET Hosted Services:
Discord client service – handles the connectivity to Discord
Interaction service – provides the command handler for the few commands the bot has
Startup service – sets up some bot must-haves like the logger and other data
Message service – the core functionality of the bot that runs all the logic that makes the magic happen
Healthcheck service – this one is optional, but important when hosting the bot in Docker
Let’s go through all of those in detail and see why they were made that way.
The Discord-related services
I’m gonna group those together as they belong to the same component really: handling Discord-related shenanigans.
The bot has a few commands:
/info – shows some info about the bot
/setchannel – sets the channel in which you want the notifications
/which – shows which channel is currently set as the notification channel
/unregister – unsets the notification channel
/setting – toggles some settings, like whether you want thumbnails or not
All of those commands (with the exception of /info) are admin-only.
This part is honestly fairly boring and straightforward: it just takes a Discord bot token, connects to the chat service and tries to log in.
First off, the bot uses a relatively simple SQLite DB to avoid having everything hardcoded. The first versions used hardcoded channel IDs, which is far from ideal if you want something modular without having to host multiple copies.
The DB basically stores the guild ID, channel ID and settings for the bot to send the updates in the right place, containing what we want.
Speaking of settings, there is only one so far: show the session thumbnails or not. The reason for this is a difference between the Discord & Resonite ToS. While nipples aren’t considered sexual on Resonite, they are on Discord, meaning having a session thumbnail showing nipples on Discord without gating the channel to 18+ users would be violating the ToS of the platform.
One thing I am quite proud of is how stable the bot is; nowadays it rarely, if ever, crashes on its own, which it used to do quite often.
The bot is made to handle errors gracefully and never shut down or crash the program unless something really, really warrants it. When running this bot, all the errors that would normally crash it are instead logged with an error message and stacktrace to make it easy to debug.
Another thing to note is that the database schema hasn’t been touched since the bot’s initial release, and changing it is considered a very last resort. Having the DB break during an upgrade would be disastrous, requiring every admin to re-set their notification channel. As they say: if it ain’t broke, don’t fix it.
Out of all the variables in the mix, Discord is the most unstable one, having lots of outages, sometimes lasting hours at a time, just being slow for no reason whatsoever or thinking existing things (such as channels or messages) don’t exist even though they do.
This is why the whole checking logic exists: the bot first checks whether the channel exists, then whether the message exists, and if both do, tries to update it. If any of this fails, it retries for a while, then deletes the message and tries to re-send it; if that ultimately fails too, it removes the channel from the DB and the admin has to set the notification channel again.
The retry logic was implemented after some issues caused by Discord:
The bot would forget about its message (because Discord said it didn’t exist anymore) and would send a new one in the same channel, over and over, until restarted
Sometimes the checks would fail on the first try and gracefully delete everything without notifying anybody
The bot would crash because everything “ceased” to exist
On the Resonite side, if an error happens while contacting the API, the bot simply skips that cycle and tries again on the next one (one minute later by default). This used to crash the bot (whoops) in the early days.
The latest addition was the Docker healthcheck, after the bot recently crashed in the main Resonite guild (GH-3521) without any monitoring being triggered.
Now the bot runs a small HTTP server that simply returns the current date, which a curl instance checks every 30 seconds.
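As an illustration only (the image name, port and path below are assumptions, not the bot’s actual configuration), such a check can be wired up in Docker Compose like this:

```yaml
services:
  bot:
    image: registry.example.com/active-sessions-bot:latest  # placeholder image name
    healthcheck:
      # Requires curl to be present in the image
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]  # assumed port and path
      interval: 30s
      timeout: 5s
      retries: 3
```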
The CI/CD
It’s no secret that I love GitLab as a piece of software. I also work on and with it daily at my job.
The CI/CD used in this project is extensive, but classic:
Secret detection
SAST scan
Dependency scanning
NuGet build
NuGet deployment
ARM64 and x86 Docker builds
Release publishing
The first three are kinda self-explanatory: they warn if any secrets have been pushed, if any unsafe code paths are detected, or if any dependencies need updating.
Now, the most important thing to highlight is the separate Docker builds for the two architectures. I originally tried combining them into a single build by specifying multiple architectures in buildx, however this did not work.
Building ARM on x86 (with virtualization) and vice versa would always error out, even though the same command works for other projects.
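The actual job definitions aren’t reproduced here, but the idea boils down to two independent jobs pinned to runners of the matching architecture, something along these lines (runner tags, stage and image tags are assumptions):

```yaml
# Hidden base job shared by both architectures
.docker-build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"

docker-build-amd64:
  extends: .docker-build
  tags: [amd64]          # assumed tag of an x86 runner
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:amd64" .
    - docker push "$CI_REGISTRY_IMAGE:amd64"

docker-build-arm64:
  extends: .docker-build
  tags: [arm64]          # assumed tag of an ARM64 runner
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:arm64" .
    - docker push "$CI_REGISTRY_IMAGE:arm64"
```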
To avoid errors when doing things manually, the release process is also automated and triggers when a tag is detected. It basically builds as usual with the version tag, then publishes the release following a Markdown template, automatically filling in some details like image tags from the changelog.
Now for the self-criticism: if I had to restart the project from scratch, I would probably opt for an even less complex design. Some services are really huge files that have only grown in complexity over time.
Nothing too bad currently, but I think a refactor would be warranted to decrease complexity and make the bot more maintainable.
I wouldn’t touch the language though, since the bot’s footprint is really small: only about 30 MB to 50 MB of RAM used in total during normal runtime.
In the end, this bot has been a really fun project to make and maintain, and I’m extremely happy that it got canonized and is now used in an official manner.
The main difference from years ago is that we now have fully-featured (and maintained) Docker images for Hugo, the one selected in this instance being ghcr.io/hugomods/hugo, maintained by HugoMods.
Turns out the www subdomain (which the apex redirects to) is proxied through Cloudflare, therefore offering IPv6 connectivity, and uses different nameservers.
As per my usual tradition when I get a cool new device, I have to write about it a week later, then again either three months or a year later (as I did previously with my Index and my work MacBook Pro M1).
As you may know, my previous laptop was a ThinkPad x200. It’s not exactly a young machine, being around 16 years old now.
As I started working on more demanding projects (mainly C# ones), the x200 simply wasn’t enough (it couldn’t even run a modern web browser any more).
This is why I decided to scout for a new laptop.
Fear not, the x200 is not going to waste! It will now be used mainly to test Libreboot and other stuff like that.
Now, I had a bunch of criteria for the new laptop:
Can last as long as the x200
Can run my IDEs correctly (namely Sublime Text and JetBrains Rider)
Has a proper GPU (to run VR stuff)
Has modern hardware in general
The Framework 16, though expensive, checked a lot of those boxes:
Can last long by sheer virtue of being repairable
Has modern hardware, therefore can run my IDEs correctly
Can be upgraded to have a dedicated GPU
So for around €2100 (ouch, my wallet D:), this is what you get with Framework:
AMD Ryzen 7 7840HS (8c16t)
Radeon 780M integrated graphics (I decided to buy the GPU upgrade later)
32GB of RAM (single stick to leave room for upgrades)
Wi-Fi 6
A 2560×1600@165 Hz display
2x USB-C expansion cards
2x USB-A expansion cards
1x HDMI expansion card
1x DisplayPort expansion card
1TB of m.2 storage
Power supply
Keypad module
Overall, pretty good specs by my standards for a laptop. Before you say anything: the HDMI is for a second screen, the DisplayPort is for VR headsets.
To save some money, I also decided to get the DIY edition without any OS and install Fedora on it myself. The laptop itself was painless to build, even fun. The only issue was my hands trembling when doing anything requiring a bit of precision (in this instance, handling a screwdriver with small screws), but that’s a me issue.
There was a small issue on the first boots where the keyboard wasn’t responding at all, but taking it apart and verifying all the connections one by one fixed it.
Fedora is one of the supported OSes on the Framework, along with Ubuntu and Windows. I would have gone with Arch, however I wanted a headache-free setup this time, which Fedora offered.
During this week, the 16 actually got a new BIOS upgrade, 3.05, fixing some security issues and adding a cool new feature to extend battery longevity.
Upgrading the BIOS was pretty much painless thanks to fwupdmgr: essentially refreshing the metadata, applying the update, then being patient. I remember having to fiddle with USB keys a few years back, so this CLI utility is most welcome.
The battery life itself is decent, never really running out while I’m working on stuff.
Fedora itself is also a breeze to use, with GUIs for everything, which simplifies things a lot. I do miss my good old XFCE4 a bit, but GNOME does the job just fine as well.
Another thing I totally forgot to do after the first install was to get the EasyEffects profile, which does make a huge difference to the laptop’s audio.
Overall, I’m very satisfied with what I got; a few things remain to be seen:
Will new hardware upgrades come out (for instance, additional GPU modules)
Will any other companies start making expansion cards (instead of relying on Framework alone; though the community already made a lot of those)
Will Framework as a company remain in business long enough to offer the longevity I want
But those questions can only be answered with time. It goes without saying that most hardware replacements or upgrades (RAM, storage, etc.) can be done with any off-the-shelf components, not just ones sold by Framework.
For now, I’ll keep using it, and I’ll see you peeps in either three months or a year (or both) if I don’t forget for the traditional update!
It’s no secret that I work around GitLab during my day job and that I generally love this software. This blog post is therefore not biased at all in any way or form. (do I need to mark this further as sarcasm, or did everyone get the memo?)
For this quick tutorial, you’ll need:
Some machine where the instance will be hosted
Docker installed on the machine
Ability to read instructions
For this, we’ll be using docker compose, which provides an easy way to bring services up from a configuration file. This tutorial only sets up a bare-bones instance that you will need to configure further later.
Small note: for this to work, your system’s SSH daemon will need to run on something other than port 22, since the GitLab container will take that port over.
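The post’s actual compose file isn’t included in this excerpt, so here is a minimal sketch of what a bare-bones one can look like; the hostname and host-side volume paths are assumptions you’ll want to adapt.

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com      # assumption: use your own domain
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "22:22"                       # why the host SSH daemon must move off port 22
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
    shm_size: "256m"
```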