Stuff I'm Up To

Technical Ramblings

Dell XPS 15 9530 — March 30, 2024

Dell XPS 15 9530

On my first day at my new job, waiting for me on a desk was a posh black box containing a brand-new Dell XPS 15.

Previously, I’ve had no real use for a laptop; I’ve always had a desktop PC, mainly because I’ve had a desk dedicated to me. The new job is tight on desk space, with far more people than desks. That means hot desking and working from home, which means I have a work laptop.

The first thing I did was boot it from my Ventoy USB stick and install the latest Manjaro Linux. I was expecting a few driver issues; I hadn’t even investigated how well suited to Linux this machine would be. When the installation completed, I began testing it out, and I have to say I’m thoroughly impressed. I haven’t found anything that doesn’t work as it should! Nvidia driver, Wi-Fi, Bluetooth, fingerprint reader, touchpad, backlit keyboard, automatic screen brightness… it all just works.

CPU:
  Info: 14-core (6-mt/8-st) model: 13th Gen Intel Core i9-13900H bits: 64
    type: MST AMCP arch: Raptor Lake rev: 2 cache: L1: 1.2 MiB L2: 11.5 MiB
    L3: 24 MiB

I was expecting an i7, but the business does a lot of video processing and analytics. They went for the fastest processor available for the model. I don’t think I’ve burdened it at all, yet.

The XPS comes with a small USB-C adapter providing a USB-A and an HDMI port. This worked out well for the office, as I can plug in the absolutely necessary Logitech Unifying receiver. My preferred devices are the Logitech MX ERGO trackball and K860 keyboard.

At home, I wanted to connect both of my Iiyama 22″ monitors. I bought a UGreen USB-C expansion hub with two HDMI ports, two USB-C ports and two USB-A ports. I’m not a gamer, so the 30 Hz version is fine. Now I have all three displays active (including the laptop display).

Both the Logitech devices support switching between hosts. I have a receiver in my home PC and another in the laptop. For the monitors, I have the laptop plugged into DVI using an HDMI-to-DVI cable, and my PC uses HDMI. I can switch between PC and laptop without unplugging anything. WIN+L to lock the laptop, select device 1 on keyboard and mouse – and up comes the PC, and vice versa.

It’s fair to say I’m loving the laptop. I still hate laptop keyboards and touchpads, and really find them uncomfortable to use on the move. But for easily relocating between home and office, it’s a dream.

Well done Dell.

Linting and Formatting with trunk.io — March 24, 2024

Linting and Formatting with trunk.io

With a new job, new place of work, comes new challenges.

The first task I set myself was to automate the deployment process for the video analytics software using Ansible. I like Ansible; it’s well-structured and relatively straightforward to understand. I’ve been using it for a while now, and it’s my go-to automation platform.

I installed a fresh instance of VSCode, leaving behind my previously synced config, which meant all new plugins. Some of my previous plugins may not be required in the new role, so I started fresh. I began installing the Ansible and ansible-lint plugins and tripped over trunk.io Check by accident. It came up suggesting it would format and lint everything I needed so far.

Trunk Check runs 100+ tools to format, lint, static-analyze, and security-check dozens of languages and config formats. It will autodetect the best tools to run for your repo, then run them and provide results inline in VSCode.

I installed it and continued work on my Ansible project, building inventory and task files. Nothing really seemed to be doing much – until I tried a git commit. Trunk had installed a series of git hooks, one of them a commit hook, and it started checking my code. There were far more failures than there should have been; I know my code isn’t great, but the sheer number of failures blocked the commit.

When I looked at the log files, I could see that most of the checks were failing because of a missing dependency on libcrypt.so.1. Investigation led me to install libxcrypt-compat on Manjaro. I then manually ran trunk check and lots of things started to happen. More plugins got installed. It then successfully checked and formatted my code, but showed I had much to fix.
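For reference, Trunk records which tools it has enabled in .trunk/trunk.yaml in the repo. A rough sketch of the sort of thing it generates (the tool versions here are illustrative, not the exact ones it pinned for me):

```yaml
version: 0.1
cli:
  version: 1.22.2
lint:
  enabled:
    - ansible-lint@24.2.0
    - checkov@3.2.0
    - prettier@3.2.5
```

Editing this file is also how you disable a linter you disagree with, rather than fighting the commit hook.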

One of the plugins it installed is called checkov. Apart from the usual missing-LF-at-EOF and trailing-space complaints, it came up with CKV2_ANSIBLE_3 – Ensure block is handling task errors properly. I had no idea what that meant. This is why I like linting and checking tools: they help you learn best practices. In all this time, I hadn’t done any form of error handling in Ansible. All I had to do was add a rescue: stanza to each block:, to ensure any error generated was responded to. For now, a simple response is all I needed, e.g.

- name: My task
  block:
    ...
  rescue:
    - name: Something went wrong
      ansible.builtin.debug:
        msg: An error occurred
      when: not ansible_check_mode

Caddy and GoAccess — March 16, 2024

Caddy and GoAccess

GoAccess is a great Nginx log file analyser that I was using with Nginx Proxy Manager. Wouldn’t it be great to carry on using it with Caddy?

Caddy has a log-formatting module that can format the log output to match the “combined” format used by Nginx and Apache. This means all I need to do is change the goaccess log-format config to “COMBINED” and set the output to a folder within my Caddy web server.
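As a quick sanity check of the combined field layout, here’s a hypothetical log line pulled apart with awk (the IP, timestamp and user agent are made-up values):

```shell
# A made-up combined-format access log line
line='203.0.113.7 - - [16/Mar/2024:10:15:32 +0000] "GET / HTTP/1.1" 200 1043 "-" "Mozilla/5.0"'

# Whitespace-split: field 1 is the client IP, field 9 the status code
echo "$line" | awk '{print $1, $9}'   # 203.0.113.7 200
```

If Caddy’s transformed output splits the same way, goaccess will parse it with log-format COMBINED.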

GoAccess

goaccess.conf (stripped of comments)

time-format %T
date-format %d/%b/%Y
log-format COMBINED
config-dialog false
hl-header true
json-pretty-print false
no-color false
no-column-names false
no-csv-summary false
no-progress false
no-tab-scroll false
with-mouse false
real-time-html true
ws-url wss://sub.domain.tld:443/ws
log-file /var/log/caddy/sub.domain.tld.log
agent-list false
with-output-resolver false
http-method yes
http-protocol yes
output /usr/share/caddy/html/access/index.html
no-query-string false
no-term-resolver false
444-as-404 false
4xx-to-unique-count false
all-static-files false
double-decode false
enable-panel GEO_LOCATION
ignore-crawlers false
crawlers-only false
unknowns-as-crawlers false
ignore-panel REFERRERS
ignore-panel KEYPHRASES
real-os true
static-file .css
static-file .js
static-file .jpg
static-file .png
static-file .gif
static-file .ico
static-file .jpeg
static-file .pdf
static-file .csv
static-file .mpeg
static-file .mpg
static-file .swf
static-file .woff
static-file .woff2
static-file .xls
static-file .xlsx
static-file .doc
static-file .docx
static-file .ppt
static-file .pptx
static-file .txt
static-file .zip
static-file .ogg
static-file .mp3
static-file .mp4
static-file .exe
static-file .iso
static-file .gz
static-file .rar
static-file .svg
static-file .bmp
static-file .tar
static-file .tgz
static-file .tiff
static-file .tif
static-file .ttf
static-file .flv
static-file .dmg
static-file .xz
static-file .zst
geoip-database /usr/share/GeoIP/GeoIP.mmdb

Caddy

Install the caddy transform encoder module as detailed here: https://github.com/caddyserver/transform-encoder

Configure Caddy to output the logs in the Caddyfile, and configure the web server to serve the output index.html and proxy the goaccess websocket.

Caddyfile

Taken from the caddy transform encoder documentation.

This will output combined-format log files for every site where you import the sub-domain config.

{
    servers :443 {
        name myServerName
    }
}

(subdomain-log) {
    log {
        format transform `{request>remote_ip} - {user_id} [{ts}] "{request>method} {request>uri} {request>proto}" {status} {size} "{request>headers>Referer>[0]}" "{request>headers>User-Agent>[0]}"` {
            time_format "02/Jan/2006:15:04:05 -0700"
        }
        hostnames {args[0]}
        output file /var/log/caddy/{args[0]}.log
    }
}

import /etc/caddy/conf.d/*

conf.d/mysite.conf

This adds a path /access to your site and serves the websocket for real-time updates from the goaccess service.

Add the following into your site config.

handle /access/* {
    root * /usr/share/caddy/html/access
    try_files * /index.html
    file_server
}

reverse_proxy /ws {
    to 127.0.0.1:7890
}

import subdomain-log sub.domain.tld

References

https://github.com/caddyserver/transform-encoder

Caddy in Production — March 15, 2024

Caddy in Production

To use Caddy in production, I needed to make sure it catered for the features I use with Nginx. I need to serve subdomains and handle putting sites into maintenance mode to show visitors a custom 503 (Service Unavailable) page.

When you install caddy as a package (mine is on Manjaro using pamac), you get the folders created to handle the config (/etc/caddy), and a systemd service file. The persistent parts like certificates are stored in /var/lib/caddy.

In the /etc/caddy/Caddyfile you will find it has an import directive that will add files from under /etc/caddy/conf.d. Now I can keep all of my site configs in separate files, just like I do with Nginx.

It’s important to remember that some things in the Caddy config must be in the correct sequence, and that all files in the conf.d folder are loaded alphabetically. If you are going to include a global section, I would put it in the actual /etc/caddy/Caddyfile, because placing a global section in a conf.d file would load it in the wrong order – unless you name your files so that the global section sorts alphabetically first.

/etc/caddy/Caddyfile

{
    servers :443 {
        name myServerName
    }
    log {
        output file /var/log/caddy/access.log {
            roll_size 1gb
            roll_keep 10
            roll_keep_for 2160h
        }
    }
}

import /etc/caddy/conf.d/*
  • I had trouble getting the log into /var/log/caddy for some reason. Solved by editing the caddy.service file and adding the path to ReadWritePaths, e.g. ReadWritePaths=/var/lib/caddy /var/log/caddy
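A tidier alternative to editing the packaged unit file directly is a systemd drop-in, which survives package upgrades. A sketch (the drop-in filename is arbitrary):

```
# /etc/systemd/system/caddy.service.d/override.conf
# created with: sudo systemctl edit caddy
[Service]
ReadWritePaths=/var/lib/caddy /var/log/caddy
```

Follow it with systemctl daemon-reload and systemctl restart caddy.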

conf.d/mysite.conf

mysite.domain.tld {
    # error 503 # Maintenance Mode

    redir /tv /tv/

    reverse_proxy /* {
        to 127.0.0.1:3004
        health_uri http://127.0.0.1:3004/
    }
    reverse_proxy /tv/* {
        to 127.0.0.1:8989/tv
        health_uri http://127.0.0.1:8989/tv
    }

    tls {
        dns cloudflare {$CLOUDFLARE_API_KEY}
        resolvers 1.1.1.1
    }

    handle_errors {
        root * /usr/share/caddy/html
        @custom_err file /{err.status_code}.html /50x.html
        handle @custom_err {
            rewrite * {file_match.relative}
            file_server
        }
        respond "{err.status_code} {err.status_text}"
    }
}

This will use the Cloudflare DNS API to get a certificate for https://mysite.domain.tld. It will operate a reverse proxy for the paths / and /tv and direct them to the respective to directive in the matching reverse_proxy section.

The handle_errors section will look for a file with the name matching the HTTP status code in the root path and serve the one that matches, or resort to 50x.html. This means that all I need to do to put the site into maintenance mode is to uncomment the error 503 line at the beginning of the file, and reload caddy.

What I find nice about this approach is that if I stop an underlying Docker service the reverse proxy is serving, the site automatically goes into maintenance mode.

AWS SSH using SSM — March 14, 2024

AWS SSH using SSM

You can access a locked-down AWS EC2 instance over SSH by using SSM as a proxy. This means no ports need be exposed from your EC2 instance at all.

Configure the AWS Client

aws configure

Specify your user access key, secret and region.

Connect to the EC2 instance

aws ssm start-session --target [instance_id]

Connect to SSH through an SSM port forward

aws ssm start-session --target [instance_id] \
--document-name AWS-StartPortForwardingSession \
--parameters '{"portNumber":["22"], "localPortNumber":["2222"]}'

ssh -p 2222 -i ~/.ssh/id_rsa root@localhost
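The AWS documentation referenced below also describes a ~/.ssh/config ProxyCommand, so that plain ssh to an instance ID tunnels through Session Manager automatically, without manually forwarding a port first:

```
# ~/.ssh/config — proxy SSH for SSM-managed instance IDs
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

Then connecting is just ssh -i ~/.ssh/id_rsa root@i-0123456789abcdef0 (instance ID hypothetical).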

References

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh

AWS S3 Bucket Service —

AWS S3 Bucket Service

Previously I have used s3fs, as it supported mounting in fstab using an access key and secret. Mountpoint for Amazon S3 can use IAM for a more integrated authentication approach.
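For comparison, the old s3fs approach was a single fstab line along these lines (bucket name and credential path are hypothetical; check the s3fs README for the options your version supports):

```
# /etc/fstab — sketch of the previous s3fs mount
my-bucket /mnt/s3 fuse.s3fs _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```

Mountpoint instead picks up credentials from the instance’s IAM role, so nothing secret lives on disk.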

Install or update to the latest version of the AWS CLI – AWS Command Line Interface (amazon.com)

Installing S3 Mountpoint – Amazon Simple Storage Service

Pre-requisites:

apt-get install unzip libfuse2

Install awscli

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
/usr/local/bin/aws --version

Install Mountpoint

wget https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.deb
dpkg -i mount-s3.deb 

Configure s3 service

mkdir /mnt/s3
vim /etc/systemd/system/s3.service
systemctl enable --now s3.service
systemctl start  s3.service
systemctl status s3.service

Create /etc/systemd/system/s3.service as below:

[Unit]
Description=Mountpoint for Amazon S3 mount
Wants=network.target
AssertPathIsDirectory=/mnt/s3

[Service]
Type=forking
User=root
Group=root
ExecStart=/usr/bin/mount-s3 [bucket] /mnt/s3
ExecStop=/usr/bin/fusermount -u /mnt/s3

[Install]
WantedBy=remote-fs.target

caddy — March 13, 2024

caddy

Caddy is a “server of servers”, but probably more recognised as a reverse proxy or web server.

It’s written in memory-safe Go, so it’s likely to gain traction given the recent calls for memory-safe software development.

It seems easy to get going, but gave me some challenges. It’s HTTPS out of the box and will handle creating or obtaining certificates to satisfy the domain you are serving. Which means it handles the ACME calls to the likes of Let’s Encrypt to get a certificate, without needing certbot.

This post is aimed at being a simple how-to-get-it-going guide. There’s clearly more to do, but once you have it running and getting a certificate, you can then step it up.

Install caddy

pamac install caddy-cloudflare

This pulls in xcaddy to handle the use and build of plugins.

Cloudflare DNS

I wanted to try it out using Cloudflare’s DNS and the Caddy plugin. This is where it got tricky – but I find it always does: one article says to use a Cloudflare token, another an API key. When I used the API key I had been using for certbot, it returned error 6003, which I now know means you’re using the wrong sort of key/token.

First, I went to Cloudflare and into my profile and into “API Tokens”. For caddy, I needed to create a “Token” that would be allowed to edit a specific DNS zone. I then copied the token (it’s a one time display, so keep it safe).

A Brief Config

Create a Caddyfile to configure a simple reverse proxy to localhost port 3004.

subdomain.domain.tld {
    reverse_proxy 127.0.0.1:3004

    tls {
        dns cloudflare {$CLOUDFLARE_API_KEY}
        resolvers 1.1.1.1
    }
}

Start the caddy service. You will need sudo because we are starting the service on protected ports 80, 443.

sudo CLOUDFLARE_API_KEY=SuperSecretKey caddy run

If you got it right, it will start the service and go fetch a certificate.

Because we used sudo, the configuration and certificates get stored in /root/.local/share/caddy
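When moving from caddy run to the packaged systemd service, the token has to reach the service’s environment some other way. One sketch is a systemd drop-in (the key value is obviously a placeholder; a dedicated EnvironmentFile with tight permissions is the safer variant):

```
# /etc/systemd/system/caddy.service.d/cloudflare.conf
[Service]
Environment=CLOUDFLARE_API_KEY=SuperSecretKey
```

Reload with systemctl daemon-reload and restart caddy, and the tls dns cloudflare block picks it up via {$CLOUDFLARE_API_KEY} as before.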

References

https://samjmck.com/en/blog/using-caddy-with-cloudflare