Stuff I'm Up To

Technical Ramblings

Nginx and LDAP Authentication — July 11, 2020

We want a little more control over some of our reverse proxies and wanted to place as little extra burden on the users as possible. To do this we chose to use the same passwords for authentication as we do everywhere else – hence LDAP.

Thankfully Nginx have decided to include the ngx_http_auth_request_module module in both Nginx Plus and the Open Source version.

The prerequisite http_auth_request module is included in both NGINX Plus packages and prebuilt NGINX binaries.

Nginx

The documentation on implementing this walks you through a reference implementation, which can be long-winded. I’ve tried to make it simpler with this article.
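
In outline, the config uses auth_request to bounce each incoming request off a small LDAP-speaking daemon before proxying it. This is only a sketch of the shape it takes – the daemon address, back end and paths here are my assumptions, not taken from the reference implementation:

server {
  listen 443 ssl;
  server_name app.domain.tld;

  location / {
    auth_request /auth;                  # subrequest: 2xx allows, 401/403 denies
    proxy_pass http://127.0.0.1:8080;    # the back end being protected
  }

  location = /auth {
    internal;                            # not reachable directly by clients
    proxy_pass http://127.0.0.1:8888;    # hypothetical LDAP auth daemon
    proxy_pass_request_body off;         # the daemon only needs the headers
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
  }
}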

Continue reading
Nginx Configuration Synchronisation — May 25, 2020

Back when I built the failovers using Nginx and Keepalived I also required that, should the config change on the master, it would automatically be copied to the backup.

There are some important things you need to do for this to work correctly and not put your failover at risk of failing. The last thing you want to do is bork your master server’s config and automatically copy a failed config to the backup server, screwing that one up too.

Continue reading
Asterisk + WebRTC — April 16, 2020

Enable WebRTC so you can use a plain old HTML5 browser to make calls.

I had already configured Asterisk’s HTTP server to use my Let’s Encrypt certificates. This was pretty much redundant for HTTP usage, as I always put systems behind an Nginx reverse proxy where I can.

http.conf

[general]
servername=pbx.domain.tld
enabled=yes
bindaddr=0.0.0.0
bindport=8088
tlsenable=yes            ; enable tls - default no.
tlsbindaddr=0.0.0.0:8089 ; address and port to bind to - default is bindaddr and port 8089.
tlscertfile=/etc/asterisk/keys/fullchain1.pem ; path to the certificate file (*.pem) only.
tlsprivatekey=/etc/asterisk/keys/privkey1.pem ; path to private key file (*.pem) only.

/etc/nginx/conf.d/asterisk.conf

Snippets added into the nginx.conf to proxy to the Asterisk /ws path.

Note the use of the non-HTTPS port (8088) for the upstream asterisk.

upstream asterisk {
  server 127.0.0.1:8088;
}
server {
  ...
  location /ws {
    proxy_buffers 8 32k;
    proxy_buffer_size 64k;
    proxy_pass http://asterisk/ws;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 999999999;
  }
}

pjsip.conf

[transport-wss]
type=transport
protocol=wss
bind=0.0.0.0

ps_aors

Set the max_contacts to 5

ps_endpoints

Set dtls_auto_generate_cert to yes and webrtc to yes.
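
Those ps_* names are the realtime table names; if you’re using a flat pjsip.conf instead, the equivalent sections would look something like this – the extension number 6001, password and codec list are made-up examples:

[6001]
type=aor
max_contacts=5

[6001]
type=auth
auth_type=userpass
username=6001
password=changeme           ; example only

[6001]
type=endpoint
aor=6001
auth=6001
webrtc=yes                  ; switches on the bundle of WebRTC options
dtls_auto_generate_cert=yes ; per-call self-signed DTLS certificate
context=default
disallow=all
allow=opus,ulaw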

References

https://wiki.asterisk.org/wiki/display/AST/Configuring+Asterisk+for+WebRTC+Clients

https://wiki.asterisk.org/wiki/display/AST/WebRTC+tutorial+using+SIPML5

https://www.bidon.ca/fr/notes/asterisk-webrtc

PXE Boot and Linux Mint — March 2, 2020
Laravel, Nuxt.js and Nginx — October 2, 2019

Whilst experimenting with Nuxt.js (a Vue.js framework) as a front-end client for Laravel I discovered I was going to face some issues with CORS, certificates for HTTPS and the whole serving the client over port 3000 and the API over port 80 thing.

In the development environment this isn’t so bad as I can run both the Laravel artisan web server and serve Nuxt.js and have them talk to each other – within reason. The problems started when I wanted to use Social Sign In using Facebook, Google etc. The callback from the OAuth process would fire, but the client would fail with CORS errors as I would have to redirect the client using the API from the OAuth callback.

To resolve this issue I tried adding a CORS module for Laravel and setting the values appropriately, but it still failed.

So I began thinking what this would look like in production. I wouldn’t want to serve the API and client separately and I’d probably put them both behind a reverse proxy, so let’s look at using Nginx.
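
What that looks like is roughly the following – a sketch only, with the ports and the /api prefix being my assumptions (Nuxt on 3000, the Laravel app on 8000):

server {
  listen 443 ssl;
  server_name app.domain.tld;

  # API calls go to Laravel – the browser only ever sees one origin, so no CORS
  location /api {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
  }

  # everything else goes to the Nuxt.js server
  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # keeps hot-reload websockets working in dev
    proxy_set_header Connection "upgrade";
  }
}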

Continue reading
VMWare Horizon Load Balancing — November 21, 2018

We’re in the process of installing a new Horizon 7 infrastructure and as part of the process the vendor added load balancers all over the place. I asked the question: why not use an Open Source solution for that?

My go-to web server, proxy and load balancer is Nginx, and as we already have an HA pair set up I thought we’d try to use that – even if it meant putting in a new one dedicated to the task in the longer term.

As the plan is to use a load balancer in front of the connection servers, and the only tunnelling that will take place will be for external systems, our requirement is to load balance the HTTPS traffic (TCP 443) for authentication. The PCoIP/Blast traffic will be directed straight to the ESX host/client.

The previous document on load balancing with Nginx means I only need to add in the config needed for Horizon. By using the same config syncing it immediately becomes available on the secondary load balancer.

I created a new config file /etc/nginx/sites-available/horizon and then, as standard, symlinked it into sites-enabled to make it live.
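
For reference, making it live is just the usual symlink, test and reload (the config itself follows below):

$ sudo ln -s /etc/nginx/sites-available/horizon /etc/nginx/sites-enabled/horizon
$ sudo nginx -t && sudo systemctl reload nginx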

upstream connectionservers {
  ip_hash;
  server 192.168.0.236:443;
  server 192.168.0.237:443;
}
server {
  listen 443 ssl;
  server_name horizon.domain.tld;
  location ~ / {
    proxy_pass https://connectionservers;
  }
}

This adds our two connection servers into an upstream group called connectionservers, which I then point the proxy_pass directive to.

The ip_hash directive ensures we have session stickiness based on the client’s IP address. When a client connects they’ll stay directed to the connection server they were given unless that connection server becomes unavailable.

nginx.conf

Within the nginx.conf ensure you have the reverse proxy options set in the http {} section:

# enable reverse proxy
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 64k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;

The SSL configuration on the HA pair is standard throughout all of the servers it “proxies” for. We have a wildcard certificate and the HA pair only proxies services under *.domain.tld – our horizon.domain.tld fits this pattern, so no changes are necessary.

All the standard Nginx SSL related security settings for certificate, stapling, ciphers and HSTS are located in our /etc/nginx/snippets/ssl.conf file, which is included in the nginx.conf using:

include snippets/ssl.conf;

snippets/ssl.conf

ssl_certificate /etc/ssl/certs/wildcard.pem;
ssl_certificate_key /etc/ssl/private/wildcard_key.cer;
ssl_dhparam /etc/ssl/private/dhparam.pem;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# modern configuration. tweak to your needs.
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;

# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;

add_header X-Content-Type-Options nosniff;
add_header Accept "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT";
add_header Access-Control-Expose-Headers "Authorization";
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";

proxy_cookie_path / "/; HTTPOnly; Secure";

Note: Depending on your requirements for other systems you may need to include content security policy settings to satisfy CORS (Cross-Origin Resource Sharing). In fact you MUST do this to allow Chrome and Firefox to work with Blast over HTML.

In our PCoIP client we add the new server as horizon.domain.tld and we get through the authentication and on to the selection of the available pools. So clearly the load balancing is doing the job. You can check the /var/log/nginx/access.log to confirm.

If you miss out the ip_hash directive for session stickiness you’ll find you can’t get past the authentication stage.

Syncing Config Files Between Servers —

Having set up a pair of load balancers I wanted to ensure the Nginx configuration from one system was replicated to the secondary whenever changes were made on the primary.

In this instance my weapon of choice is lsyncd. It seems quite old, but it’s stable. It’s a layer over rsync and ssh that monitors a directory for changes and copies them over ssh to a target server.

Getting it working is pretty straightforward once you crack the configuration.

Install it using:

$ sudo apt-get install lsyncd

Then you’ll need to create the config file:

$ sudo mkdir /etc/lsyncd
$ sudo mkdir /var/log/lsyncd
$ sudo vi /etc/lsyncd/lsyncd.conf.lua

This is what my final config looked like:

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",      -- daemon log
  statusFile = "/var/log/lsyncd/lsyncd.status" -- human-readable status report
}

sync {
  default.rsyncssh,                  -- rsync the changes, ssh for remote moves/deletes
  source = "/etc/nginx",             -- directory to watch
  host = "loadbal02.domain.local",   -- target server
  targetdir = "/etc/nginx"           -- destination path on the target
}

Outside of this config I needed to set up an ssh key so the root user on loadbal01 could log on to loadbal02. When you generate the key DO NOT specify a password for the key, or the process won’t work.

$ sudo ssh-keygen
$ sudo cat /root/.ssh/id_rsa.pub

Then I copy the output and paste it into /root/.ssh/authorized_keys on loadbal02. There are many ways of doing this: scp, copy/paste, etc.

Then, just to ensure it works, connect at least once to the target host as root using ssh.

$ sudo ssh root@loadbal02.domain.local

This will ask you to trust and save the host’s id in the ~/.ssh/known_hosts file so it will be trusted in future. Make sure you connect to EXACTLY the same host name as you are putting into the config file, e.g. “loadbal02.domain.local”, as the host id for “loadbal02” does not match the FQDN and you will get a service error like this:

recursive startup rsync: /root/sync/ -> loadbal02.domain.local:/root/sync/
Host key verification failed.

Start the service using:

$ sudo systemctl start lsyncd

and monitor the status and log file to make sure it’s doing what you expect.

$ sudo cat /var/log/lsyncd/lsyncd.status 

Lsyncd status report at Tue Nov 20 15:33:21 2018
Sync1 source=/etc/nginx/
There are 0 delays
Excluding:
nothing.
Inotify watching 7 directories
1: /etc/nginx/
2: /etc/nginx/sites-available/
3: /etc/nginx/modules-available/
4: /etc/nginx/modules-enabled/
5: /etc/nginx/sites-enabled/
6: /etc/nginx/conf.d/
7: /etc/nginx/snippets/

Restarting Nginx on the Secondary

After the files have copied, you need to tell Nginx on the secondary that the config has changed and get it to reload, so it’s running a config as up to date as the primary’s.

For this I use inotify-tools by installing them on loadbal02:

$ sudo apt-get install inotify-tools

Next I created a shell script that monitors the config and reloads the service.

I created a file called /usr/sbin/inotify_nginx.sh and set it as executable.

$ sudo touch /usr/sbin/inotify_nginx.sh
$ sudo chmod 700 /usr/sbin/inotify_nginx.sh
$ sudo vi /usr/sbin/inotify_nginx.sh

This is the content of my script:

#!/bin/sh
# Block until something under /etc/nginx/ changes, then reload Nginx.
while true; do
  inotifywait -q -e modify -e close_write -e delete -r /etc/nginx/
  systemctl reload nginx
done

It monitors the /etc/nginx/ folder (-r recursively) and any event like modify, close_write or delete will cause the script to continue and reload nginx, then loop around to wait for any more changes.

Next I made sure my script ran every time the server rebooted using cron.

$ sudo crontab -e

Added in a line to run the script at reboot:

@reboot /bin/bash /usr/sbin/inotify_nginx.sh

That’s it. Following a reboot the script runs happily. To monitor it you can look at the nginx error log (/var/log/nginx/error.log). This will show a process started event like:

2018/11/21 09:16:57 [notice] 736#736: signal process started

The only downside of this is that if I spanner the config on the primary, the service reload on the secondary will fail too, e.g.:

2018/11/21 09:25:20 [emerg] 1819#1819: unknown directive "banana" in /etc/nginx/nginx.conf:1

This isn’t such a concern unless the primary happens to fail whilst you’re editing it. The most important part is:

Test your config before you reload the primary server!

$ sudo nginx -t
nginx: [emerg] unknown directive "banana" in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

Then fix any issues before doing a systemctl reload.

JIRA, Confluence and Nginx — September 15, 2018

With Atlassian Jira Software and Confluence installed onto the same server I thought I’d investigate setting things up so we don’t have to use the default TCP-port style of access over HTTP. Instead, let’s set up a reverse proxy using HTTPS over TCP 443 that forwards to the TCP 8080 and 8090 ports.

The aim is to get Jira accessible as https://jira.domain.local and Confluence as https://jira.domain.local/confluence.
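
The proxy side of that is sketched below – ports as above, everything else assumed; note that Jira and Confluence also need their Tomcat connectors told about the proxy (proxyName/proxyPort and, for Confluence, the /confluence context path):

server {
  listen 443 ssl;
  server_name jira.domain.local;

  location / {
    proxy_pass http://127.0.0.1:8080;   # Jira
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  location /confluence {
    proxy_pass http://127.0.0.1:8090;   # Confluence, served under /confluence
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}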

Continue reading

When is a Question Mark not a ? — August 29, 2018

That’s a morning of smashing my face on the desk again. I deployed my dev program onto a production system and then started crying as it stopped working as it should.

It seemed that none of my query string parameters were making it through to the controller. I called up some debugging and dumped out my $request and $request->all() etc. and discovered that the parameters, although shown in the browser dev window, went AWOL between server and controller. In my dev environment it all acted as it should.

So there must be something different. PHP v7.2 on dev and v7.0 on production maybe? No, much simpler than that. None of the Laracasts and Laravel-related Googling pulled up any particular clues. It wasn’t until I looked at Nginx and parameters not being passed to PHP that I got a hit.

https://serverfault.com/questions/685525/nginx-php-fpm-query-parameters-wont-be-passed-to-php

The answer was as simple as adding the $is_args variable into my Nginx virtual server config.

location / {
  try_files $uri $uri/ /index.php$is_args$query_string;
}

Up until now I guess I’ve been using routing with the parameters as part of the URI. Now that I’m using some query string parameters I need to put in the ?, which is what the $is_args variable provides.
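
To make that concrete, here’s roughly how the variables expand for a made-up request to /users?page=2:

# $uri          = /users
# $is_args      = ?        (empty when there is no query string)
# $query_string = page=2
# so the try_files fallback becomes /index.php?page=2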

So why not a problem in dev? Because I’m not using Nginx, I just use artisan serve to debug my development program.

Nginx and Keepalived — May 15, 2018

I have a need to deploy a high-availability, load-balanced reverse proxy solution. We have a back-end web service that requires resilience. To achieve this I’ve been looking at Nginx and Keepalived. The Nginx Plus product appears to contain high-availability support – but we’re in the realms of zero budget and open source/community supported products.

The front end reverse proxy I’ll use is Nginx, but it could be anything. The clever part is going to be using keepalived to pass a single IP address between two servers.
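
As a taste of the keepalived side, a minimal VRRP instance looks something like this – the interface, router id, password and virtual IP are all made-up values:

# /etc/keepalived/keepalived.conf on the primary
vrrp_instance VI_1 {
  state MASTER             # the backup node uses state BACKUP
  interface eth0           # NIC that carries the virtual IP
  virtual_router_id 51     # must match on both nodes
  priority 150             # the backup uses a lower value, e.g. 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass secret       # example only
  }
  virtual_ipaddress {
    192.168.0.10/24        # the single IP that floats between the two servers
  }
}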

Continue reading
Nginx, Not Just a Web Server — October 26, 2016

Nginx is capable of more than serving web pages. It can load balance, cache and act as a reverse proxy.

We recently had a need to access two web services on the same server through a single interface. This is where the reverse proxy came in.

  • Service A runs on port 9010
  • Service B runs on port 9020
  • Access to both services needs to be via a single front end using traditional http over port 80

Not ideal, but it’s not my system design, just a challenge we needed to face. The way we tackled it was to use an Nginx reverse proxy and split calls for specific URL paths to the relevant underlying back-end service – sketched below.
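
A sketch of that split – the /serviceA and /serviceB paths are invented for illustration:

server {
  listen 80;
  server_name backend.domain.local;

  location /serviceA/ {
    proxy_pass http://127.0.0.1:9010/;   # Service A
  }

  location /serviceB/ {
    proxy_pass http://127.0.0.1:9020/;   # Service B
  }
}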

Continue reading

Comodo SSL Certs & Android — October 10, 2016

After buying a cheap SSL certificate I found I’d missed something important during the install.

Usually it’s just a case of copying the certificate and key files to /etc/ssl/certs and /etc/ssl/private respectively, then pointing the Nginx config at them to get it working.

Well, all was well in the GUI world of Linux and Windows browsers. But my Android said the certificate wasn’t trusted. Looks like there are some CA intermediates that need sorting.
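
The usual fix – sketched here with made-up file names – is to bundle the intermediate certificates after your own (leaf first) and point ssl_certificate at the combined file:

$ cat your_domain.crt comodo_intermediates.crt > /etc/ssl/certs/your_domain_chained.crt

Then reference the chained file in the Nginx config; the key file stays the same.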

Continue reading