Stuff I'm Up To

Technical Ramblings

Apt Version Pinning — February 7, 2019

Today, after running some apt upgrades, my Laravel development environment failed to compile because the upgrade pulled in a newer version of nodejs than I currently require.

Module build failed: ModuleBuildError: Module build failed: Error: Missing binding /home/paulb/itsm/node_modules/node-sass/vendor/linux-x64-64/binding.node
Node Sass could not find a binding for your current environment: Linux 64-bit with Node.js 10.x

Found bindings for the following environments:
  - Linux 64-bit with Node.js 8.x

This usually happens because your environment has changed since running `npm install`.
Run `npm rebuild node-sass` to download the binding for your current environment.
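
The title gives away where I'm heading: rather than rebuilding bindings after every upgrade, apt can hold nodejs at the series I need. A minimal sketch of a pin file, assuming the package is plain nodejs and an 8.x version is still available from your configured repositories (check apt-cache policy nodejs first):

$ sudo vi /etc/apt/preferences.d/nodejs

# Pin nodejs to the 8.x series so an apt upgrade won't pull in 10.x
Package: nodejs
Pin: version 8.*
Pin-Priority: 1001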
apt-get Hash Sum Mismatch #2 — January 16, 2019

I'm still not sure why this problem keeps occurring. But when running apt-get upgrade, the upgrades fail with a message like this:

Get:8 stretch/updates/main amd64 libudev1 amd64 232-25+deb9u8 [125 kB]
Err:8 stretch/updates/main amd64 libudev1 amd64 232-25+deb9u8
Hash Sum mismatch
Hashes of expected file:
SHA1:6590379bbc85f8d90c05a1b32cd27dac49431b7a [weak]
MD5Sum:40ace91d2e4c633f89d1571b3022dcdd [weak]
Filesize:125364 [weak]
Hashes of received file:
SHA1:7c501c7b49f4fe93d78309f5b5c635f1db487989 [weak]
MD5Sum:9b8faa999b5db9581ef0df62f697e4df [weak]
Filesize:877368 [weak]
Last modification reported: Sat, 08 Dec 2018 08:05:18 +0000

To resolve it I resorted to bypassing any caching, using apt options to pull the update and upgrade:

$ sudo apt -o Acquire::https::No-Cache=True -o Acquire::http::No-Cache=True update
$ sudo apt -o Acquire::https::No-Cache=True -o Acquire::http::No-Cache=True upgrade
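
If the mismatch persists even with caching bypassed, clearing apt's cached package lists and fetching them fresh is another standard remedy (not something this fix needed, just a common next step):

$ sudo rm -rf /var/lib/apt/lists/*
$ sudo apt update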
Public Key from Private Key — January 3, 2019

I fall over this every so often. I have the private key file, but would either have to trawl servers for authorized_keys files to find the public key or remember how to obtain the public key from the private key.

Time to document it here so I don’t have to hunt for it with Google again.

For an RSA PEM format public key

$ openssl rsa -in private.key -pubout

-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----

For an OpenSSH format version (the single-line form used in authorized_keys files)

$ ssh-keygen -y -f private.key
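
The key prints to stdout in the single-line authorized_keys form, so it can be redirected straight into a .pub file; the output below is illustrative, not a real key:

$ ssh-keygen -y -f private.key > public.pub
$ cat public.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...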

php 7.0 on Debian Buster — January 2, 2019

Actually, this is more about any version of PHP (5.6, 7.0, 7.1, 7.2) on buster. PHP packaging has split somewhat, and the standard repositories only carry the single supported version for the Debian release you are using.

This means that on Debian 10 (buster/sid) the only version available from the Debian repository is PHP 7.3.

Our current production systems are Debian 9 stretch and only support php 7.0 and therefore only Laravel 5.5. In order to bring my development platform down to php 7.0 I must use a non-standard repository.

Ondřej Surý has been packaging PHP for Debian and Ubuntu and distributing the builds. To get them you need to add his key and repository to your apt sources:

$ wget -q -O- https://packages.sury.org/php/apt.gpg | sudo apt-key add -
$ echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

Now you can install whichever version of PHP you'd like, even 5.6, e.g.:

$ sudo apt-get install php7.0-fpm php7.0-mbstring php7.0-zip php7.0-mysql php7.0-sqlite3 php7.0-dev php-pear

If you already had 7.3 installed, nothing will have changed yet, and when you type php at the command line you'll see it still runs version 7.3.

$ php -v

PHP 7.3.0-2+0~20181217092659.24+stretch~1.gbp54e52f (cli) (built: Dec 17 2018 09:26:59) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.0-dev, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.3.0-2+0~20181217092659.24+stretch~1.gbp54e52f, Copyright (c) 1999-2018, by Zend Technologies

To switch to 7.0 use the following, and you'll see your php go back to 7.0. Switch back in the same way, but replace 7.0 with 7.3.

$ sudo update-alternatives --set php /usr/bin/php7.0

update-alternatives: using /usr/bin/php7.0 to provide /usr/bin/php (php) in manual mode
$ php -v

PHP 7.0.33-1+0~20181208203126.8+stretch~1.gbp2ff763 (cli) (built: Dec 8 2018 20:31:26) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
with Zend OPcache v7.0.33-1+0~20181208203126.8+stretch~1.gbp2ff763, Copyright (c) 1999-2017, by Zend Technologies
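
If you're not sure which versions are registered with the alternatives system, you can list them; illustrative output, assuming both 7.0 and 7.3 are installed:

$ update-alternatives --list php
/usr/bin/php7.0
/usr/bin/php7.3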

VMWare Horizon Load Balancing — November 21, 2018

We're in the process of installing a new Horizon 7 infrastructure, and as part of the process the vendor added load balancers all over the place. I asked the question: why not use an open source solution for that?

My go-to web server, proxy and load balancer is Nginx, and as we already have an HA pair set up I thought we'd try to use that, even if it meant putting in a new one dedicated to the task in the longer term.

As the plan is to use a load balancer in front of the connection servers, and the only tunnelling that will take place will be for external systems, our requirement is to load balance the HTTPS traffic (TCP 443) for authentication. The PCoIP/Blast traffic will be directed straight to the ESXi host/client.

The previous post on load balancing with Nginx means I only need to add the config needed for Horizon. By using the same config syncing it immediately becomes available on the secondary load balancer.

I created a new config file /etc/nginx/sites-available/horizon and then, as standard, symbolically linked it into sites-enabled to make it live.

upstream connectionservers {
    ip_hash;
    # hypothetical hostnames - substitute your two connection servers
    server conserver01.domain.local:443;
    server conserver02.domain.local:443;
}

server {
    listen 443 ssl;
    server_name horizon.domain.tld;

    location / {
        proxy_pass https://connectionservers;
    }
}

This adds our two connection servers into an upstream group called connectionservers, which the proxy_pass directive then points at.

The ip_hash directive ensures we have session stickiness based on the client's IP address. When a client connects they'll stay directed to the connection server they were given, unless that connection server becomes unavailable.


Within the nginx.conf ensure you have the reverse proxy options set in the http {} section:

# enable reverse proxy
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 64k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;

The SSL configuration on the HA pair is standard across all of the servers it "proxies" for. We have a wildcard certificate and the HA pair only proxies services under *.domain.tld; our horizon.domain.tld fits this pattern, so no changes are necessary.

All the standard Nginx SSL-related security settings for certificate, stapling, ciphers and HSTS are located in our /etc/nginx/snippets/ssl.conf file, which is included in nginx.conf using:

include snippets/ssl.conf;


ssl_certificate /etc/ssl/certs/wildcard.pem;
ssl_certificate_key /etc/ssl/private/wildcard_key.cer;
ssl_dhparam /etc/ssl/private/dhparam.pem;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# modern configuration. tweak to your needs.
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;

# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;

add_header X-Content-Type-Options nosniff;
add_header Accept "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT";
add_header Access-Control-Expose-Headers "Authorization";
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";

proxy_cookie_path / "/; HTTPOnly; Secure";

Note: Depending on your requirements for other systems, you may need to include content security policy settings to satisfy CORS (Cross-Origin Resource Sharing). In fact you MUST do this to allow Chrome and Firefox to work with Blast over HTML Access.
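
A hedged sketch of the kind of header involved (the origin shown is an assumption; match it to wherever your HTML Access clients connect from):

add_header Access-Control-Allow-Origin "https://horizon.domain.tld" always;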

In our PCoIP client we add the new server as horizon.domain.tld, and we get through authentication and on to the selection of the available pools, so clearly the load balancing is doing the job. You can check /var/log/nginx/access.log to confirm.

If you miss out the ip_hash directive for session stickiness you’ll find you can’t get past the authentication stage.

Syncing Config Files Between Servers

Having set up a pair of load balancers, I wanted to ensure the Nginx configuration from one system was replicated to the secondary whenever changes were made on the primary.

In this instance my weapon of choice is lsyncd. It seems quite old, but stable. It’s a layer over rsync and ssh that monitors a directory for changes and copies them over ssh to a target server.

Getting it working is pretty straightforward once you crack the configuration.

Install it using:

$ sudo apt-get install lsyncd

Then you’ll need to create the config file:

$ sudo mkdir /etc/lsyncd
$ sudo mkdir /var/log/lsyncd
$ sudo vi /etc/lsyncd/lsyncd.conf.lua

This is what my final config looked like:

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
  default.rsyncssh,  -- rsync-over-ssh mode, implied by the host/targetdir pair
  source = "/etc/nginx",
  host = "loadbal02.domain.local",
  targetdir = "/etc/nginx"
}
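
Before starting the service it's worth a dry run in the foreground; lsyncd's -nodaemon flag keeps it attached to the terminal so config errors show up immediately:

$ sudo lsyncd -nodaemon /etc/lsyncd/lsyncd.conf.lua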

Outside of this config I needed to set up an SSH key so the root user on loadbal01 can log on to loadbal02. When you generate the key DO NOT specify a passphrase, or the process won't work.

$ sudo ssh-keygen
$ sudo cat /root/.ssh/id_rsa.pub

Then I copied the output and pasted it into /root/.ssh/authorized_keys on loadbal02. There are many ways of doing this: scp, copy/paste, etc.

Then make sure it works by connecting at least once to the target host as root using ssh.

$ sudo ssh root@loadbal02.domain.local

This will ask you to trust and save the host's key in the ~/.ssh/known_hosts file so it will be trusted in future. Make sure you connect to EXACTLY the same host name as you are putting into the config file, e.g. "loadbal02.domain.local", because a saved host key for "loadbal02" does not match the FQDN and you will get a service error like this:

recursive startup rsync: /root/sync/ -> loadbal02.domain.local:/root/sync/
Host key verification failed.
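
If you'd rather not rely on that interactive first connection, pre-seeding known_hosts with ssh-keyscan achieves the same thing (a standard OpenSSH technique, not part of the original setup):

$ sudo sh -c 'ssh-keyscan loadbal02.domain.local >> /root/.ssh/known_hosts'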

Start the service using:

$ sudo systemctl start lsyncd

and monitor the status and log file to make sure it’s doing what you expect.

$ sudo cat /var/log/lsyncd/lsyncd.status 

Lsyncd status report at Tue Nov 20 15:33:21 2018
Sync1 source=/etc/nginx/
There are 0 delays
Inotify watching 7 directories
1: /etc/nginx/
2: /etc/nginx/sites-available/
3: /etc/nginx/modules-available/
4: /etc/nginx/modules-enabled/
5: /etc/nginx/sites-enabled/
6: /etc/nginx/conf.d/
7: /etc/nginx/snippets/

Restarting Nginx on the Secondary

After the files have copied, you need to tell Nginx on the secondary that the config has changed, so it reloads and runs a config as up to date as the primary's.

For this I use inotify-tools by installing them on loadbal02:

$ sudo apt-get install inotify-tools

Next I created a shell script that monitors the config and reloads the service.

I created a script, which I'll call /usr/sbin/nginx-reload.sh here, and set it as executable.

$ sudo touch /usr/sbin/nginx-reload.sh
$ sudo chmod 700 /usr/sbin/nginx-reload.sh
$ sudo vi /usr/sbin/nginx-reload.sh

This is the content of my script:

#!/bin/bash
# Wait for a change anywhere under /etc/nginx/, reload nginx, then loop.
while true; do
  inotifywait -q -e modify -e close_write -e delete -r /etc/nginx/
  systemctl reload nginx
done

It monitors the /etc/nginx/ folder (-r recursively) and any event like modify, close_write or delete will cause the script to continue and reload nginx, then loop around to wait for any more changes.

Next I made sure my script ran every time the server rebooted using cron.

$ sudo crontab -e

Added in a line to run the script at reboot:

@reboot /bin/bash /usr/sbin/nginx-reload.sh

That’s it. Following a reboot the script runs happily. To monitor it you can look at the nginx error log (/var/log/nginx/error.log). This will show a process started event like:

2018/11/21 09:16:57 [notice] 736#736: signal process started

The only downside of this is that if I spanner the config on the primary, the service reload on the secondary will fail, e.g.:

2018/11/21 09:25:20 [emerg] 1819#1819: unknown directive "banana" in /etc/nginx/nginx.conf:1

This isn’t such a concern unless the primary happens to fail whilst you’re editing it. The most important part is:

Test your config before you reload the primary server!

$ sudo nginx -t
nginx: [emerg] unknown directive "banana" in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

Then fix any issues before doing a systemctl reload.

Debian Upgrading from an ISO File — November 7, 2018

Kind of an unusual situation, but I have a Debian jessie box that has a terrible <2MB Internet connection, no CD/DVD, and the USB stick I have I don't want to overwrite and make bootable, as it already has things on it I need. But it does have the capacity to hold Debian DVD ISO #1.

How do you upgrade Debian from an ISO without being bootable?

Mount the USB Stick

First mount the USB drive into your existing jessie environment so you can use the ISO it contains. You may want to check the output of dmesg to see what device name your stick has been given.

$ sudo mkdir /mnt/usb
$ sudo mount /dev/sdb1 /mnt/usb

Now we can see the ISO in the /mnt/usb folder

$ sudo ls -lh /mnt/usb

total 12G
-r-------- 2 root root 3.4G Nov 7 11:25 debian-9.5.0-amd64-DVD-1.iso

Mount the ISO

We can then mount the ISO into another folder under /mnt

$ sudo mkdir /mnt/iso

$ sudo mount -t iso9660 -o loop /mnt/usb/debian-9.5.0-amd64-DVD-1.iso /mnt/iso

We have a mounted ISO

$ sudo ls -lh /mnt/iso

total 1.5M
-r--r--r-- 1 root root 146 Jul 14 11:27 autorun.inf
dr-xr-xr-x 1 root root 2.0K Jul 14 11:27 boot

Edit Your Installation Sources

Next we edit the file /etc/apt/sources.list so it only contains the path to our ISO to install from. Take a copy of the original one or just comment out the existing lines.

deb file:///mnt/iso stretch main contrib

You may also want to check any source list files under sources.list.d and move them out whilst you upgrade.

Carry Out the Upgrade

Just continue as you normally would, using upgrade/dist-upgrade to deliver your new OS, making sure you do an update first so apt reads your new sources file.

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get dist-upgrade

Because you're not getting the install from the internet, apt isn't able to verify the source, so you will have to accept the security warning to install the upgrades.
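
If you'd rather make that acceptance explicit than answer the interactive prompt, apt-get has a flag for it (a standard apt-get option; use with care, since it skips signature checks):

$ sudo apt-get --allow-unauthenticated dist-upgrade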

When you're done, make sure you uncomment/put back the sources.list entries pointing at the internet, and replace the release with the new one, e.g. change jessie to stretch.

DBeaver – SQL GUI — November 6, 2018

I've used a few SQL GUIs over the years: SQuirreL, DBVisualizer, HeidiSQL, MySQL Workbench. But the one that stands out recently is DBeaver.

It's got community and enterprise editions. The community edition does everything I need and connects to all the SQL servers we use: Microsoft SQL Server, MySQL, Postgres/PostGIS.

Being Java based it’s cross platform, so you can use it in Windows too.

RADIUS Testing — November 5, 2018

We have a need to authenticate a couple of devices via our WiFi access points against a RADIUS server. Right now I wanted to test things out using a MAC address authentication process, but for some reason we can't get it working on the APs.

How do I test the RADIUS authentication policies are correct?

I recall using a RADCHECK program on Windows many years ago and figured Linux would probably have something similar. Sure enough, a quick search showed I could install freeradius-utils, which includes radtest and radclient.

I needed to pass a number of RADIUS attributes and values with my test call and this is how I did it:

$ cat << EOF | radclient -x [radiusserver] auth [supersecretkey]
User-Name = 6894244B56EB
User-Password = 6894244B56EB
NAS-Port-Type = 19
NAS-Port = 0
Calling-Station-Id = SSID
EOF
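
If the policy matches, an Access-Accept comes back. Illustrative radclient -x output (the exact wording varies between freeradius-utils versions):

Sending Access-Request of id 42 to [radiusserver] port 1812
rad_recv: Access-Accept packet from host [radiusserver] port 1812, id=42, length=20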

This spoofed an auth call to the RADIUS server using the specified MAC address as user name and password, and pretended the call came from a NAS-Port-Type of Wireless 802.11 (19). The NAS-Port-Type values came from this table:

NAS-Port-Type: the number is either the IANA-assigned value for the RADIUS port type or a custom number-to-port-type mapping defined by the user.

Value  Description
12     Asymmetric DSL, carrierless amplitude phase (CAP) modulation
13     Asymmetric DSL, discrete multitone (DMT)
21     Fiber Distributed Data Interface
10     G.3 Fax
7      HDLC Clear Channel
25     Inter-Access Point Protocol (IAPP)
2      ISDN Synchronous
4      ISDN Async V.110
3      ISDN Async V.120
6      Personal Handyphone System (PHS) Internet Access Forum Standard
11     Symmetric DSL
20     Token Ring
18     Other wireless
24     Wireless 1xEV
22     Wireless code division multiple access (CDMA) 2000
19     Wireless 802.11
23     Wireless universal mobile telecommunications system (UMTS)
16     DSL of unknown type

ESXi 6.0 to 6.5 Upgrade — October 14, 2018

This weekend has turned out to be a challenge. Upgrading our VMware Horizon 7 estate to the latest release involved upgrading all the components, from connection servers, security server and composer to vCenter and the vSphere hosts.

Last weekend was upgrading the connection servers, security server and composer. This weekend is vCenter and the vSphere hosts.

99.9% of the skills required are really about how strong your Google Fu is!


My skills include Google-fu and Duck-Jitsu, but I’m a Bing-do novice!


Jira Token Error, Leading to Fisheye + Crucible Failure — October 12, 2018

Today Jira fell over. I'm not sure why, but the result was a token error which denied my user and admin accounts the ability to log in properly. I managed to log on, but none of the dashboard or menu items worked, as the token error persisted.

I ended up rebooting the server and restarting the two services for Jira and Confluence.

These use the regular service start and stop with systemctl start jira and systemctl start confluence.

Crucible, however, uses start and stop shell scripts (start.sh and stop.sh).

As root, starting Crucible from

# /home/crucible/fecru-4.4.7/bin/start.sh

Caused some very strange behaviour.

The first thing I noticed was that I had lost all of the configuration: it had reverted to a blank database and launched the setup program when I visited the URL! Something was clearly wrong there.

Next I thought I'd run it as the crucible user I set up for this purpose.

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

Even worse! Now not only was it empty, but the log had all kinds of permission errors.

The clue was in that question: which log am I looking at? I ended up with logs in the ~/fecru-4.4.7/var/log folder AND in the ~/instance/var/log folder, but with different dates. It looks like I spannered the install somehow, as logs should only be in the instance folder. When I originally ran the start script it must have been as the root user, which created my config under fecru*, NOT instance. When I then ran it as the crucible user using sudo it did the same, but all the files were owned by root and caused the permission errors.

The outcome showed that the problem related to sudo not maintaining the FISHEYE_INST environment variable, which points to the instance folder. I fixed this by running visudo and setting a rule to keep certain environment variables, in the same way as I would for a proxy server.

Edit the line:

Defaults        env_keep += "ftp_proxy http_proxy https_proxy no_proxy FISHEYE_INST"

I then had to make sure I moved the config.xml file and data folder that had been erroneously created under fecru into instance.

# mv fecru-4.4.7/config.xml instance
# mv fecru-4.4.7/data instance

Now when I run sudo as the crucible user it keeps the environment setting pointing to the instance path. The instance path and all contained files must belong to crucible:crucible, so chown them:

# chown -R crucible:crucible instance

Finally starting crucible with:

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

All is good once again.


  • fecru = FishEye + CRUcible