Stuff I'm Up To

Technical Ramblings

Apt Version Pinning — February 7, 2019

Today after running some apt upgrades my Laravel development environment failed to compile because of a newer version of nodejs than I currently require.

Module build failed: ModuleBuildError: Module build failed: Error: Missing binding /home/paulb/itsm/node_modules/node-sass/vendor/linux-x64-64/binding.node
Node Sass could not find a binding for your current environment: Linux 64-bit with Node.js 10.x

Found bindings for the following environments:
  - Linux 64-bit with Node.js 8.x

This usually happens because your environment has changed since running `npm install`.
Run `npm rebuild node-sass` to download the binding for your current environment.
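The fix the post title points at is to pin nodejs so apt stops upgrading it. A minimal sketch, assuming the package is called nodejs and you want to stay on the 8.x series (both are assumptions — adjust to match your setup):

```shell
# Pin nodejs to the 8.x series; a priority above 1000 even allows a
# downgrade back to the pinned version (package name and version
# pattern are assumptions -- check `apt-cache policy nodejs`).
printf 'Package: nodejs\nPin: version 8.*\nPin-Priority: 1001\n' |
  sudo tee /etc/apt/preferences.d/nodejs

# Or, more bluntly, hold the package at whatever is installed now:
sudo apt-mark hold nodejs
```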

PHPUnit – Version Mismatch — February 4, 2019

As our codebase matures we return to develop unit tests to ensure our QA process captures any code changes that may have altered the functionality of the product.

When calling PHPUnit on Windows or Linux we ran into some issues relating to the version of PHPUnit we had installed.

On Windows it was an ancient PHPUnit version 3, and on Linux it was running version 7. Neither was compatible with our Laravel 5.5 project, which uses PHP 7.0.

In order to use PHPUnit with our project we must use PHPUnit version 6 (see Supported Versions).

What I hadn’t realised is that we had installed PHPUnit both globally into the OS and within our Laravel project, so it exists in composer.json and gets installed under the project’s ./vendor. The version installed in the OS path isn’t compatible with our project, but because it’s on our PATH it takes precedence over the project-installed version.

To run the project version we just need to be specific in how we call it.

$ ./vendor/phpunit/phpunit/phpunit
PHPUnit 6.5.13 by Sebastian Bergmann and contributors.

....F                                                               5 / 5 (100%)

Time: 345 ms, Memory: 16.00MB

There was 1 failure:

1) Tests\Unit\Finance\CostCodeTest::testApiGetCostCodes
Expected status code 401 but received 200.
Failed asserting that false is true.

/home/user/itsm/vendor/laravel/framework/src/Illuminate/Foundation/Testing/TestResponse.php:78
/home/user/itsm/tests/Unit/Finance/CostCodeTest.php:37

FAILURES!
Tests: 5, Assertions: 14, Failures: 1.
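As a side note, Composer also symlinks package executables into the project’s vendor/bin directory by default, so the project-local PHPUnit can be called with a shorter path:

```shell
# Composer's default bin-dir is vendor/bin, so this resolves to the
# same project-local PHPUnit regardless of what is on the PATH:
./vendor/bin/phpunit
```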

Gitahead — February 1, 2019
What’s in My EDC — January 30, 2019

EDC All Packed

Every Day Carry is an up-and-coming buzzword, but we’ve probably all been doing it for years. I’ve always had a few essentials in the car: a multi-tool, a torch, a pen and a packet of paracetamol.

I just decided to pad it out a bit and include some more useful stuff that could be grabbed from the car and chucked into an away day bag or rucksack. This is just a list of what goes into it.

The pack I chose to bundle this into comes from Amazon for £8. It’s a 1000D nylon pouch with two zip pockets, a buckle pocket and, most importantly, MOLLE fastenings. These mean it can be attached to any other MOLLE webbing on a rucksack or vest. In this case the MOLLE straps have also been thought out to allow it to fit onto a regular belt.

I’m finding this an evolutionary process. Things I hadn’t thought to put into my EDC are added as I discover them, and things I have are replaced by better or simpler versions.

What’s inside
apt-get Hash Sum Mismatch #2 — January 16, 2019

I’m still not sure why this problem is occurring again, but when running apt-get upgrade the upgrades fail with a message like this:

Get:8 http://security.debian.org stretch/updates/main amd64 libudev1 amd64 232-25+deb9u8 [125 kB]
Err:8 http://security.debian.org stretch/updates/main amd64 libudev1 amd64 232-25+deb9u8
Hash Sum mismatch
Hashes of expected file:
SHA256:189bfac6bfeda64bc16c74614bf524b2c431e7b6c4e3a4f786b927b84afdc889
SHA1:6590379bbc85f8d90c05a1b32cd27dac49431b7a [weak]
MD5Sum:40ace91d2e4c633f89d1571b3022dcdd [weak]
Filesize:125364 [weak]
Hashes of received file:
SHA256:7e4f1f0e1cbcb164ddf5fd1a6d22641d91fff812220f28654a1a007749be6bac
SHA1:7c501c7b49f4fe93d78309f5b5c635f1db487989 [weak]
MD5Sum:9b8faa999b5db9581ef0df62f697e4df [weak]
Filesize:877368 [weak]
Last modification reported: Sat, 08 Dec 2018 08:05:18 +0000

To resolve it I resorted to bypassing any caching, using apt to pull the update and upgrade:

$ sudo apt -o Acquire::https::No-Cache=True -o Acquire::http::No-Cache=True update
$ sudo apt -o Acquire::https::No-Cache=True -o Acquire::http::No-Cache=True upgrade
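If the mismatch persists, another thing worth trying is throwing away the locally cached package lists so apt must fetch fresh copies; a sketch:

```shell
# Delete the cached package indexes and any partial downloads, then
# re-fetch them -- this clears stale or corrupt local copies.
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get clean
sudo apt-get update
```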
Laravel 5.5 HMR and Windows — January 15, 2019

Using HMR in Chrome on Linux is faultless, but on Windows HMR fails to start in the browser.

Looking at the entries in the browser’s script tags, they seem a bit goofy: there are leading slashes and spaces before the script filenames.

It seems this is a popular issue. We hunted around for quite a few pointers to resolve this.

https://github.com/JeffreyWay/laravel-mix/issues/1437

The only thing we changed was line 90 of Entry.js, adding the extra replace(/^\//, ''). A restart of yarn hot and a browser refresh and we were good to go. HMR and WDS show in the Chrome console as expected, and changes to code are now applied dynamically.

Public Key from Private Key — January 3, 2019

I fall over this every so often. I have the private key file, but would either have to trawl servers for authorized_keys files to get the public key or remember how to derive the public key from the private key.

Time to document it here so I don’t have to hunt for it with Google again.

For an RSA PEM format public key:

$ openssl rsa -in private.key -pubout

-----BEGIN PUBLIC KEY-----
MIIBIDA ...
-----END PUBLIC KEY-----

For an SSH (PuTTY-friendly) version:

$ ssh-keygen -y -f private.key

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQE ...
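To sanity-check that a derived public key really belongs to the private key, compare fingerprints; a sketch assuming the same private.key filename as above:

```shell
# Derive the public half and print its fingerprint. Recent OpenSSH
# can fingerprint the private key directly too (ssh-keygen -lf
# private.key); the two fingerprints should match.
ssh-keygen -y -f private.key > public.key
ssh-keygen -lf public.key
```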
php 7.0 on Debian Buster — January 2, 2019

Actually this is more about any version of PHP (5.6, 7.0, 7.1, 7.2) on buster. PHP packaging has taken on a bit of a split, and the standard repositories only carry the one supported version for the Debian release you are using.

This means that on Debian buster/sid the only version available from the Debian repository is PHP 7.3.

Our current production systems are Debian 9 stretch, which only supports PHP 7.0 and therefore only Laravel 5.5. To bring my development platform down to PHP 7.0 I must use a non-standard repository.

Ondřej Surý has been packaging PHP for Debian and Ubuntu and distributing the builds. To get them you need to add his key and repository to your apt sources:

$ wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
$ echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

Now you can install whatever version of PHP you’d like, even 5.6, e.g.:

$ sudo apt-get install php7.0-fpm php7.0-mbstring php7.0-zip php7.0-mysql php7.0-sqlite3 php7.0-dev php-pear

If you already had 7.3 installed, nothing will have changed yet; when you type php from the command line you’ll see it still runs version 7.3.

$ php -v

PHP 7.3.0-2+0~20181217092659.24+stretch~1.gbp54e52f (cli) (built: Dec 17 2018 09:26:59) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.0-dev, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.3.0-2+0~20181217092659.24+stretch~1.gbp54e52f, Copyright (c) 1999-2018, by Zend Technologies

To switch to 7.0 use the following, and you’ll see your php go back to 7.0. To switch back, do the same but replace 7.0 with 7.3.

$ sudo update-alternatives --set php /usr/bin/php7.0

update-alternatives: using /usr/bin/php7.0 to provide /usr/bin/php (php) in manual mode
$ php -v

PHP 7.0.33-1+0~20181208203126.8+stretch~1.gbp2ff763 (cli) (built: Dec 8 2018 20:31:26) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
with Zend OPcache v7.0.33-1+0~20181208203126.8+stretch~1.gbp2ff763, Copyright (c) 1999-2017, by Zend Technologies
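update-alternatives can also tell you which PHP binaries are registered, or let you pick one from a menu instead of typing the path:

```shell
# Show every php binary registered with the alternatives system:
update-alternatives --list php

# Or choose one interactively:
sudo update-alternatives --config php
```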

Damn that Proxy! — December 12, 2018

In Windows, when you run into an application that doesn’t use proxy settings (it doesn’t look at the environment variables, IE or netsh settings), then you’re kind of stuck when you must send web traffic through a proxy.

That was until we discovered ProxyCap.

ProxyCap is a very flexible solution that lets you add specific rules for various requirements. It will then intercept matching traffic and direct it to the proxy without the application even realising there is a proxy.

The example we based this on is BlueStacks not being able to proxy. When we Googled for a solution we found posts about using ProxyCap. We could then add rules so that only the programs bluestacks.exe and hd-player.exe using HTTPS were intercepted, and BlueStacks would then work, even though it knew nothing about proxies.

ProxyCap seems very clever in that it appears to simply modify the Windows firewall to make the magic happen. It’s flexible enough that you could even set different apps to use different proxies. It also supports authentication.

It’s a commercial product, but sometimes you just have to pay the price.

Cross Origin Resource Sharing and Content Security Policy — December 4, 2018

Got to love having a vendor carry out half a job… again.

Having installed a new VMWare Horizon environment for Windows 10, I thought we’d at least have Blast available via HTML, which we don’t currently have in our Windows 7 Horizon setup.

During the install I set up a load balancer which only really handles the authentication process. This worked fine using IE or Edge, at which point I guess the vendor decided that was enough testing and considered it functional. After they left I fired up my Chrome browser and found it didn’t work. So I tried Firefox, with the same non-functional result.

Checking the console log in Firefox I see:

Content Security Policy: The page's settings blocked the loading of a resource at wss://192.168.61.12:22443/d/36BC344E-DAD5-4EA5-A44C-12456F74432D/?vauth=LaQJrs2RppeiZGX9gOtj75vekprtuEDcgD2C6tba ("default-src").

A trawl of VMWare documentation results in: https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-security/GUID-FD679D1D-E037-4EDF-A96F-F0CD85FFE724.html

Now all I have to do is translate that to Nginx so I can put it into the config.

Editing my snippets/ssl.conf file and changing the CSP header, I added the missing parts for wss: and blob: to end up with:

add_header Content-Security-Policy "default-src 'self' wss:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self'; style-src 'self' 'unsafe-inline'; font-src 'self'; frame-src https://horizon.domain.tld blob:; object-src 'none' blob:; connect-src 'self' wss:; child-src 'self' blob:;";

A reload of Nginx and a refresh/reload on the browser and I’m into the Horizon Desktop!
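A quick way to confirm the header is actually being served after the reload (the -k flag skips certificate verification, which is handy for internal hostnames; horizon.domain.tld as in the config above):

```shell
# Fetch only the response headers and check the CSP header is there:
curl -skI https://horizon.domain.tld/ | grep -i '^content-security-policy'
```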

References

https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-security/GUID-94DAC7B8-70A3-4A91-8E70-2B2591B82866.html

https://www.owasp.org/index.php/Content_Security_Policy_Cheat_Sheet

VMWare Horizon Load Balancing — November 21, 2018

We’re in the process of installing a new Horizon 7 infrastructure, and as part of the process the vendor added load balancers all over the place. I asked the question: why not use an Open Source solution for that?

My go-to web server, proxy and load balancer is Nginx, and as we already have an HA pair set up I thought we’d try to use that, even if it meant putting in a new one dedicated to the task in the longer term.


As the plan is to use a load balancer in front of the connection servers, and the only tunnelling that will take place will be for external systems, our requirement is to load-balance the HTTPS traffic (TCP 443) for authentication. The PCoIP/Blast traffic will be directed straight to the ESX host/client.

The previous document on load balancing with Nginx means I only need to add in the config needed for horizon. By using the same syncing of config it immediately becomes available on the secondary load balancer.

I created a new config file /etc/nginx/sites-available/horizon and then, as standard, symlinked it into sites-enabled to make it live.

upstream connectionservers {
    ip_hash;
    server 192.168.0.236:443;
    server 192.168.0.237:443;
}

server {
    listen 443 ssl;
    server_name horizon.domain.tld;

    location ~ / {
        proxy_pass https://connectionservers;
    }
}

This adds our two connection servers into an upstream group called connectionservers, which I then point the proxy_pass directive at.

The ip_hash directive gives us session stickiness based on the client’s IP address. When a client connects they’ll stay directed to the same connection server unless that connection server becomes unavailable.

nginx.conf

Within the nginx.conf ensure you have the reverse proxy options set in the http {} section:

# enable reverse proxy
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 64k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;

The SSL configuration on the HA pair is standard across all of the servers it “proxies” for. We have a wildcard certificate and the HA pair only proxies services under *.domain.tld; our horizon.domain.tld fits this pattern, so no changes are necessary.

All the standard Nginx SSL-related security settings for the certificate, stapling, ciphers and HSTS are located in our /etc/nginx/snippets/ssl.conf file, which is included in nginx.conf using:

include snippets/ssl.conf;

snippets/ssl.conf

ssl_certificate /etc/ssl/certs/wildcard.pem;
ssl_certificate_key /etc/ssl/private/wildcard_key.cer;
ssl_dhparam /etc/ssl/private/dhparam.pem;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# modern configuration. tweak to your needs.
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;

# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;

add_header X-Content-Type-Options nosniff;
add_header Accept "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT";
add_header Access-Control-Expose-Headers "Authorization";
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";

proxy_cookie_path / "/; HTTPOnly; Secure";

Note: Depending on your requirements for other systems you may need to include Content Security Policy settings to satisfy CORS (Cross-Origin Resource Sharing). In fact you MUST do this to allow Chrome and Firefox to work with Blast over HTML.

In our PCoIP client we add the new server as horizon.domain.tld, and we get through the authentication and on to the selection of the available pools, so clearly the load balancing is doing its job. You can check /var/log/nginx/access.log to confirm.

If you miss out the ip_hash directive for session stickiness you’ll find you can’t get past the authentication stage.
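To see the stickiness in the access.log, counting requests per client IP is enough (a sketch assuming the default log format, where the client address is the first field):

```shell
# Requests per client IP: with ip_hash each client sticks to one
# upstream, so you should see each client's requests grouped rather
# than alternating between servers.
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
```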

Syncing Config Files Between Servers

Having set up a pair of load balancers, I wanted to ensure that any Nginx configuration changes made on the primary were replicated to the secondary.

In this instance my weapon of choice is lsyncd. It seems quite old, but stable. It’s a layer over rsync and ssh that monitors a directory for changes and copies them over ssh to a target server.


Getting it working is pretty straightforward once you crack the configuration.

Install it using:

$ sudo apt-get install lsyncd

Then you’ll need to create the config file:

$ sudo mkdir /etc/lsyncd
$ sudo mkdir /var/log/lsyncd
$ sudo vi /etc/lsyncd/lsyncd.conf.lua

This is what my final config looked like:

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
  default.rsyncssh,
  source = "/etc/nginx",
  host = "loadbal02.domain.local",
  targetdir = "/etc/nginx";
}

Outside of this config I needed to set up an ssh key so the root user on loadbal01 can log on to loadbal02. When you generate the key DO NOT specify a password for the key, or the process won’t work.

$ sudo ssh-keygen
$ sudo cat /root/.ssh/id_rsa.pub

Then I copied the output and pasted it into /root/.ssh/authorized_keys on loadbal02. There are many ways of doing this: scp, copy/paste, etc.

Then make sure it works by connecting at least once to the target host as root using ssh.

$ sudo ssh root@loadbal02.domain.local

This will ask you to trust and save the host’s ID in the ~/.ssh/known_hosts file so it will be trusted in future. Make sure you connect to EXACTLY the same host name as you are putting into the config file, e.g. “loadbal02.domain.local”, because the host ID for “loadbal02” does not match the FQDN and you will get a service error like this:

recursive startup rsync: /root/sync/ -> loadbal02.domain.local:/root/sync/
Host key verification failed.

Start the service using:

$ sudo systemctl start lsyncd
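To make sure lsyncd also comes back after a reboot, enable it as well (assuming the packaged service definition, which the Debian package provides):

```shell
# Start on boot, not just now:
sudo systemctl enable lsyncd
```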

and monitor the status and log file to make sure it’s doing what you expect.

$ sudo cat /var/log/lsyncd/lsyncd.status 

Lsyncd status report at Tue Nov 20 15:33:21 2018
Sync1 source=/etc/nginx/
There are 0 delays
Excluding:
nothing.
Inotify watching 7 directories
1: /etc/nginx/
2: /etc/nginx/sites-available/
3: /etc/nginx/modules-available/
4: /etc/nginx/modules-enabled/
5: /etc/nginx/sites-enabled/
6: /etc/nginx/conf.d/
7: /etc/nginx/snippets/

Restarting Nginx on the Secondary

After the files have been copied, you need to tell Nginx on the secondary that the config has changed and make it reload, so it’s running a config as up to date as the primary’s.

For this I use inotify-tools by installing them on loadbal02:

$ sudo apt-get install inotify-tools

Next I created a shell script that monitors the config and reloads the service.

I created a file called /usr/sbin/inotify_nginx.sh and set it as executable.

$ sudo touch /usr/sbin/inotify_nginx.sh
$ sudo chmod 700 /usr/sbin/inotify_nginx.sh
$ sudo vi /usr/sbin/inotify_nginx.sh

This is the content of my script:

#!/bin/sh
while true; do
    inotifywait -q -e modify -e close_write -e delete -r /etc/nginx/
    systemctl reload nginx
done

It monitors the /etc/nginx/ folder (-r makes it recursive); any modify, close_write or delete event causes inotifywait to return, the script then reloads Nginx and loops around to wait for further changes.

Next I made sure my script runs every time the server reboots, using cron.

$ sudo crontab -e

Added in a line to run the script at reboot:

@reboot /bin/bash /usr/sbin/inotify_nginx.sh

That’s it. Following a reboot the script runs happily. To monitor it you can look at the nginx error log (/var/log/nginx/error.log). This will show a process started event like:

2018/11/21 09:16:57 [notice] 736#736: signal process started

The only downside of this is that if I spanner the config on the primary, the service reload on the secondary will fail, e.g.

2018/11/21 09:25:20 [emerg] 1819#1819: unknown directive "banana" in /etc/nginx/nginx.conf:1

This isn’t such a concern unless the primary happens to fail whilst you’re editing it. The most important part is:

Test your config before you reload the primary server!

$ sudo nginx -t
nginx: [emerg] unknown directive "banana" in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

Then fix any issues before doing a systemctl reload.