Stuff I'm Up To

Technical Ramblings

Damn that Proxy! — December 12, 2018

In Windows, when you run into an application that doesn’t use proxy settings and doesn’t look at the environment variables, IE settings or netsh settings, you’re kind of stuck when you must send web traffic through a proxy.

That was until we discovered ProxyCap.

ProxyCap is a very flexible solution that lets you add specific rules for various requirements. It then intercepts matching traffic and directs it to the proxy without the application even realising there is a proxy.

The example we based this on is the application Bluestacks, which isn’t able to use a proxy. When we Googled for a solution we came up with posts about using ProxyCap. We could then add rules so that only HTTPS traffic from the programs bluestacks.exe and hd-player.exe was intercepted, and Bluestacks would then work – even though it knew nothing about proxies.

ProxyCap is quite clever in that it appears to just modify the Windows firewall to make the magic happen. It’s flexible enough that you could even set different apps to use different proxies, and it also supports authentication.

It’s a commercial product, but sometimes you just have to pay the price.

Cross Origin Resource Sharing and Content Security Policy — December 4, 2018

Got to love having a vendor carrying out half a job… again.

Having installed a new VMWare Horizon environment for Windows 10, I thought we’d at least have Blast available via HTML – which we don’t currently have in our Windows 7 Horizon setup.

During the install I set up a load balancer which only really handles the authentication process. This worked fine using IE or Edge, at which point I guess the vendor decided that was enough testing and considered it functional. After they left I fired up Chrome and found it didn’t work. So I tried Firefox, with the same non-functional result.

Checking the console log in Firefox I see:

Content Security Policy: The page's settings blocked the loading of a resource at wss://192.168.61.12:22443/d/36BC344E-DAD5-4EA5-A44C-12456F74432D/?vauth=LaQJrs2RppeiZGX9gOtj75vekprtuEDcgD2C6tba ("default-src").

A trawl of VMWare documentation results in: https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-security/GUID-FD679D1D-E037-4EDF-A96F-F0CD85FFE724.html

Now all I have to do is translate that to Nginx so I can put that into the config.

Editing my snippets/ssl.conf file and changing the CSP header, I added the missing parts for wss: and blob: to end up with:

add_header Content-Security-Policy "default-src 'self' wss:; script-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self'; style-src 'self' 'unsafe-inline'; font-src 'self'; frame-src https://horizon.domain.tld blob:; object-src 'none' blob:; connect-src 'self' wss:; child-src 'self' blob:;";

A reload of Nginx and a refresh/reload on the browser and I’m into the Horizon Desktop!
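For completeness, the reload step is just the usual test-then-reload sequence (assuming a systemd-based install):

$ sudo nginx -t
$ sudo systemctl reload nginx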

References

https://docs.vmware.com/en/VMware-Horizon-7/7.6/horizon-security/GUID-94DAC7B8-70A3-4A91-8E70-2B2591B82866.html

https://www.owasp.org/index.php/Content_Security_Policy_Cheat_Sheet

VMWare Horizon Load Balancing — November 21, 2018

We’re in the process of installing a new Horizon 7 infrastructure, and as part of the process the vendor added load balancers all over the place. I asked the question: why not use an open source solution for that?

My go-to web server, proxy and load balancer is Nginx, and as we already have an HA pair set up I thought we’d try to use that – even if it meant putting in a new one dedicated to the task in the longer term.


As the plan is to use a load balancer in front of the connection servers, and the only tunnelling that will take place will be for external systems, our requirement is to load balance the HTTPS traffic (TCP 443) for authentication. The PCoIP/Blast traffic will be directed straight to the ESXi host/client.

The previous post on load balancing with Nginx means I only need to add in the config needed for Horizon. By using the same config syncing it immediately becomes available on the secondary load balancer.

I created a new config file /etc/nginx/sites-available/horizon and then, as standard, symlinked it into sites-enabled to make it live – see the commands below.
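For reference, the link and reload commands are just (a sketch, using the paths above):

$ sudo ln -s /etc/nginx/sites-available/horizon /etc/nginx/sites-enabled/horizon
$ sudo nginx -t && sudo systemctl reload nginx

The horizon config file contains: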

upstream connectionservers {
  ip_hash;
  server 192.168.0.236:443;
  server 192.168.0.237:443;
}

server {
  listen 443 ssl;
  server_name horizon.domain.tld;
  location ~ / {
    proxy_pass https://connectionservers;
  }
}

This adds our two connection servers into an upstream group called connectionservers, which I then point the proxy_pass directive at.

The ip_hash directive ensures we have session stickiness based on the client’s IP address. When a client connects they’ll stay directed to the connection server they were given unless that connection server becomes unavailable.

nginx.conf

Within the nginx.conf ensure you have the reverse proxy options set in the http {} section:

# enable reverse proxy
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
client_header_buffer_size 64k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;

The SSL configuration on the HA pair is standard across all of the servers it “proxies” for. We have a wildcard certificate and the HA pair only proxies services under *.domain.tld – our horizon.domain.tld fits this pattern, so no changes were necessary.

All the standard Nginx SSL-related security settings for certificate, stapling, ciphers and HSTS are located in our /etc/nginx/snippets/ssl.conf file, which is included in nginx.conf using:

include snippets/ssl.conf;

snippets/ssl.conf

ssl_certificate /etc/ssl/certs/wildcard.pem;
ssl_certificate_key /etc/ssl/private/wildcard_key.cer;
ssl_dhparam /etc/ssl/private/dhparam.pem;

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

# modern configuration. tweak to your needs.
ssl_protocols TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;

# OCSP Stapling ---
# fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;

add_header X-Content-Type-Options nosniff;
add_header Accept "*";
add_header Access-Control-Allow-Methods "GET, POST, PUT";
add_header Access-Control-Expose-Headers "Authorization";
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";

proxy_cookie_path / "/; HTTPOnly; Secure";
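If you don’t already have the dhparam file referenced above, it can be generated as a one-off with openssl (2048 bits here is an assumption – pick a size to suit your policy):

$ sudo openssl dhparam -out /etc/ssl/private/dhparam.pem 2048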

Note: Depending on your requirements for other systems you may need to include Content Security Policy settings to satisfy CORS (Cross Origin Resource Sharing). In fact you MUST do this to allow Chrome and Firefox to work with Blast over HTML.

In our PCoIP client we add the new server as horizon.domain.tld and we get through the authentication and on to the selection of the available pools. So clearly the load balancing is doing its job. You can check /var/log/nginx/access.log to confirm.
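For example, tailing the log while a client connects should show its requests being balanced:

$ sudo tail -f /var/log/nginx/access.log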

If you miss out the ip_hash directive for session stickiness you’ll find you can’t get past the authentication stage.

Syncing Config Files Between Servers —

Having set up a pair of load balancers, I wanted to ensure the Nginx configuration was replicated to the secondary whenever changes were made on the primary.

In this instance my weapon of choice is lsyncd. It seems quite old, but stable. It’s a layer over rsync and ssh that monitors a directory for changes and copies them over ssh to a target server.


Getting it working is pretty straightforward once you crack the configuration.

Install it using:

$ sudo apt-get install lsyncd

Then you’ll need to create the config file:

$ sudo mkdir /etc/lsyncd
$ sudo mkdir /var/log/lsyncd
$ sudo vi /etc/lsyncd/lsyncd.conf.lua

This is what my final config looked like:

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
  default.rsyncssh,
  source = "/etc/nginx",
  host = "loadbal02.domain.local",
  targetdir = "/etc/nginx";
}

Outside of this config I needed to set up an ssh key so the root user on loadbal01 can log on to loadbal02. When you generate the key DO NOT specify a passphrase for the key, or the process won’t work.

$ sudo ssh-keygen
$ sudo cat /root/.ssh/id_rsa.pub

Then I copy the output and paste it into /root/.ssh/authorized_keys on loadbal02. There are many ways of doing this: scp, copy/paste, etc.

Then make sure it works by connecting at least once to the target host as root using ssh.

$ sudo ssh root@loadbal02.domain.local

This will ask you to trust and save the host’s key in the ~/.ssh/known_hosts file so it will be trusted in future. Make sure you connect to EXACTLY the same host name as you are putting into the config file, e.g. “loadbal02.domain.local” – a host key entry for just “loadbal02” does not match the FQDN, and you will get a service error like this:

recursive startup rsync: /root/sync/ -> loadbal02.domain.local:/root/sync/
Host key verification failed.
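Alternatively, you can pre-seed known_hosts non-interactively with ssh-keyscan (a sketch – review the fetched key before trusting it):

$ sudo sh -c 'ssh-keyscan loadbal02.domain.local >> /root/.ssh/known_hosts'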

Start the service using:

$ sudo systemctl start lsyncd

and monitor the status and log file to make sure it’s doing what you expect.

$ sudo cat /var/log/lsyncd/lsyncd.status 

Lsyncd status report at Tue Nov 20 15:33:21 2018
Sync1 source=/etc/nginx/
There are 0 delays
Excluding:
nothing.
Inotify watching 7 directories
1: /etc/nginx/
2: /etc/nginx/sites-available/
3: /etc/nginx/modules-available/
4: /etc/nginx/modules-enabled/
5: /etc/nginx/sites-enabled/
6: /etc/nginx/conf.d/
7: /etc/nginx/snippets/

Restarting Nginx on the Secondary

After the files have copied, you need to tell Nginx on the secondary that the config has changed so it can reload and run a config as up to date as the primary’s.

For this I use inotify-tools by installing them on loadbal02:

$ sudo apt-get install inotify-tools

Next I created a shell script that monitors the config and reloads the service.

I created a file called /usr/sbin/inotify_nginx.sh and set it as executable.

$ sudo touch /usr/sbin/inotify_nginx.sh
$ sudo chmod 700 /usr/sbin/inotify_nginx.sh
$ sudo vi /usr/sbin/inotify_nginx.sh

This is the content of my script:

#!/bin/sh
# Watch the Nginx config tree and reload Nginx whenever it changes
while true; do
  # block until something under /etc/nginx/ is modified, written or deleted
  inotifywait -q -e modify -e close_write -e delete -r /etc/nginx/
  systemctl reload nginx
done

It monitors the /etc/nginx/ folder (-r recursively) and any event like modify, close_write or delete will cause the script to continue and reload nginx, then loop around to wait for any more changes.
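A quick end-to-end test (assuming lsyncd is running on the primary) is to touch a file on loadbal01 and watch for the reload notice on loadbal02:

# on loadbal01
$ sudo touch /etc/nginx/nginx.conf
# then on loadbal02, check for the reload notice
$ sudo tail /var/log/nginx/error.log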

Next I made sure my script ran every time the server rebooted using cron.

$ sudo crontab -e

Added in a line to run the script at reboot:

@reboot /bin/bash /usr/sbin/inotify_nginx.sh

That’s it. Following a reboot the script runs happily. To monitor it you can look at the nginx error log (/var/log/nginx/error.log). This will show a process started event like:

2018/11/21 09:16:57 [notice] 736#736: signal process started

The only downside of this is that if I spanner the config on the primary, the service reload on the secondary will fail, e.g.

2018/11/21 09:25:20 [emerg] 1819#1819: unknown directive "banana" in /etc/nginx/nginx.conf:1

This isn’t such a concern unless the primary happens to fail whilst you’re editing it. The most important part is:

Test your config before you reload the primary server!

$ sudo nginx -t
nginx: [emerg] unknown directive "banana" in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed

Then fix any issues before doing a systemctl reload.

Laravel Routing Returns 302 Redirect — November 20, 2018

That was a frustrating few hours. I added a route into my api.php routes file, and every time I visited it, it triggered a redirect. I saw nothing in the browser and it was like my controller function wasn’t even being called.

I put the usual dd() and dump() into my function and it wasn’t being called. I even changed the function name so it didn’t exist and thought I’d get an error message, but nothing – still a redirect.

What was going on? I expanded out the route definition from “Namespace\Class@function” to an inline function () {} closure, and that was called and ran OK.

Clearly there was something wrong with my class somewhere. The other functions within it worked fine, just this new one didn’t even seem to be there.

Sure enough a bit of Duck-Jitsu and I found this: https://stackoverflow.com/questions/35020477/laravel-unexpected-redirects-302 – answer 3 is where I got my clue.

I looked at my class constructor and I was using the auth middleware to ensure the functions were not used unless authenticated.

public function __construct() {
  $this->middleware(['auth:api']);
}

For my function I wasn’t sending any authentication, so of course I was getting a redirect. But because it’s an API call and the expected return is JSON from an XHR request, I wasn’t able to debug it properly – I was getting back an HTML page instead.
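One way to see what’s really coming back is to hit the route with curl and inspect the response headers (a sketch – the URL is hypothetical):

$ curl -i http://myapp.local/api/myroute

The -i flag prints the status line and headers, so the 302 and its Location header are immediately visible.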

As a workaround, all I needed to do was add an exception for my function:

public function __construct() {
  $this->middleware(['auth:api'], ['except' => [ 'myFunction' ]]);
}

Now all I have to do is go back and sort out the authentication for the function so I don’t need to bypass it.

Debian Upgrading from an ISO File — November 7, 2018

Kind of an unusual situation, but I have a Debian jessie box that has a terrible <2MB Internet connection, no CD/DVD drive, and a USB stick I don’t want to overwrite and make bootable – it already has things on it I need. But it does have the capacity to hold Debian DVD ISO #1.

How do you upgrade Debian from an ISO without being bootable?

Mount the USB Stick

First mount the USB drive into your jessie environment so you can use the ISO it contains. You may want to check the output of dmesg to see what device name your stick has been given.

$ sudo mkdir /mnt/usb
$ sudo mount /dev/sdb1 /mnt/usb

Now we can see the ISO in the /mnt/usb folder

$ sudo ls -lh /mnt/usb

total 12G
-r-------- 2 root root 3.4G Nov 7 11:25 debian-9.5.0-amd64-DVD-1.iso

Mount the ISO

We can then mount the ISO into another folder under /mnt

$ sudo mkdir /mnt/iso

$ sudo mount -t iso9660 -o loop /mnt/usb/debian-9.5.0-amd64-DVD-1.iso /mnt/iso

We have a mounted ISO

$ sudo ls -lh /mnt/iso

total 1.5M
-r--r--r-- 1 root root 146 Jul 14 11:27 autorun.inf
dr-xr-xr-x 1 root root 2.0K Jul 14 11:27 boot
...

Edit Your Installation Sources

Next we edit the file /etc/apt/sources.list so it only contains the path to our ISO to install from. Take a copy of the original one or just comment out the existing lines.

deb file:///mnt/iso stretch main contrib

You may also want to check any source list files under sources.list.d and move them out whilst you upgrade.
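A minimal sketch of backing everything up first (standard apt paths; the backup locations are just a suggestion):

$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
$ sudo mkdir /root/sources.list.d-backup
$ sudo mv /etc/apt/sources.list.d/* /root/sources.list.d-backup/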

Carry Out the Upgrade

Just continue as you normally would, using upgrade/dist-upgrade to deliver your new OS – making sure you do an update first so apt reads your new sources file.

$ sudo apt-get update

$ sudo apt-get upgrade

$ sudo apt-get dist-upgrade

Because you’re not getting the install from the internet and apt isn’t able to verify the source, you will have to accept the security warning in order to install the upgrades.

When you’re done, make sure you uncomment/put back the sources.list entries pointing at the internet and replace the version with the new one, e.g. change jessie to stretch.
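For example, a restored entry might look like this (the mirror URL is an assumption – use your usual one):

deb http://deb.debian.org/debian stretch main contrib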

Java Decompiler — November 6, 2018

Today I’ve been working on a problem where properties being loaded into our CRM system from a GIS DTF file are doing some strange updates. It appears that the vendor in their wisdom has decided to populate user defined fields that we already use!

The loader is written in Java; I have a .jar file and can extract the .class files, but they’re all compiled.

A bit of searching reveals an online decompiler I can use: http://www.javadecompilers.com/jad

I started using it to decompile one .class file at a time and quickly became bored. So I tried the whole .jar file. It happily decompiled the file and returned a single zip containing the decompiled .class files as .java files.

Now I can scan the .java files to find out what other SQL calls they are making to fill other fields we might be using.
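Something like a recursive grep over the unzipped sources does the trick (a sketch – the zip and folder names are hypothetical):

$ unzip decompiled.zip -d decompiled-src
$ grep -rniE "insert|update|select" --include='*.java' decompiled-src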

DBeaver – SQL GUI —

I’ve used a few SQL GUIs over the years – SQuirreL, DBVisualizer, HeidiSQL, MySQL Workbench – but the one that stands out recently is DBeaver.

It’s got a community and an enterprise edition. The community edition does everything I need and connects to all the SQL servers we use: Microsoft SQL Server, MySQL, Postgres/PostGIS.

Being Java based it’s cross platform, so you can use it in Windows too.

RADIUS Testing — November 5, 2018

We have a need to authenticate a couple of devices via our WiFi access points with a RADIUS server. Right now I wanted to test things out using a MAC address authentication process. But for some reason we can’t get it working on the APs.

How do I test the RADIUS authentication policies are correct?

I recall using a RADCHECK program in Windows many years ago and figured Linux would probably have something similar. Sure enough, a quick search shows I can install freeradius-utils, which includes radtest and radclient.
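For a quick basic check, radtest on its own is enough (a sketch – the server name and secret are placeholders; it only sends the user name, password and NAS port, so for extra attributes you need radclient as below):

$ radtest 6894244B56EB 6894244B56EB radiusserver 0 supersecretkey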

I needed to pass a number of RADIUS attributes and values with my test call and this is how I did it:

$ cat << EOF | radclient -x [radiusserver] auth [supersecretkey]
User-Name = 6894244B56EB
User-Password = 6894244B56EB
NAS-Port-Type = 19
NAS-Port = 0
Calling-Station-Id = SSID
EOF

This spoofed an auth call to the RADIUS server using the specified MAC address as user name and password, and pretended the call came from a NAS-Port-Type of Wireless 802.11 (19). I got the table of values from here: https://www.juniper.net/documentation/en_US/junos/topics/concept/subscriber-management-nas-port-type-overview.html

Statement Option     NAS-Port-Type Value  Description
value                0–65535              Number that indicates either the IANA-assigned value for the RADIUS port type or a custom number-to-port type defined by the user
adsl-cap             12                   Asymmetric DSL, carrierless amplitude phase (CAP) modulation
adsl-dmt             13                   Asymmetric DSL, discrete multitone (DMT)
async                0                    Asynchronous
cable                17                   Cable
ethernet             15                   Ethernet
fddi                 21                   Fiber Distributed Data Interface
g3-fax               10                   G.3 Fax
hdlc-clear-channel   7                    HDLC Clear Channel
iapp                 25                   Inter-Access Point Protocol (IAPP)
idsl                 14                   ISDN DSL
isdn-sync            2                    ISDN Synchronous
isdn-v110            4                    ISDN Async V.110
isdn-v120            3                    ISDN Async V.120
piafs                6                    Personal Handyphone System (PHS) Internet Access Forum Standard
sdsl                 11                   Symmetric DSL
sync                 1                    Synchronous
token-ring           20                   Token Ring
virtual              5                    Virtual
wireless             18                   Other wireless
wireless-1x-ev       24                   Wireless 1xEV
wireless-cdma2000    22                   Wireless code division multiple access (CDMA) 2000
wireless-ieee80211   19                   Wireless 802.11
wireless-umts        23                   Wireless universal mobile telecommunications system (UMTS)
x25                  8                    X.25
x75                  9                    X.75
xdsl                 16                   DSL of unknown type

ESXi 6.0 to 6.5 Upgrade — October 14, 2018

This weekend has turned out to be a challenge. Upgrading our VMware Horizon 7 estate to the latest release involved upgrading all the components: connection servers, security server, composer, vCenter and vSphere hosts.

Last weekend was upgrading the connection servers, security server and composer. This weekend is vCenter and the vSphere hosts.

99.9% of the skills required are really about how strong your Google Fu is!

hongkongphoeey

My skills include Google-fu and Duck-Jitsu, but I’m a Bing-do novice!

Jira Token Error, Leading to Fisheye + Crucible Failure — October 12, 2018

Today Jira fell over. Not sure why, but the result was a token error which prevented my user and admin accounts from logging in properly. I managed to log on, but none of the dashboard or menu items worked as it had this persistent token error.

I ended up rebooting the server and restarting the two services for Jira and Confluence.

These use the regular service start and stop with systemctl start jira and systemctl start confluence.

Crucible, however, uses start and stop shell scripts.

Starting Crucible as root with

# /home/crucible/fecru-4.4.7/bin/start.sh

caused some very strange behaviour.

First thing I noticed was that I had lost all of the configuration and it had reverted to a blank database and launched the setup program when I visited the URL! Something clearly wrong there.

Next I thought I’d run it as the crucible user I set up for this purpose.

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

Even worse! Now not only was it empty, but the log had all kinds of permission errors.

The clue was in asking: which log am I actually looking at? I ended up with logs in the ~/fecru-4.4.7/var/log folder AND in the ~/instance/var/log folder, but with different dates. It looks like I spannered the install somehow, as logs should only be in the instance folder. When I first ran the start.sh script it must have been as the root user, which created my config under fecru-4.4.7 NOT instance. When I then ran it as the crucible user using sudo it did the same, but all the files were owned by root and caused the permission errors.

The outcome showed that the problem related to sudo not maintaining the FISHEYE_INST environment variable, which points to the instance folder. I fixed this by running visudo and setting a rule to keep certain environment variables – in the same way as I would for a proxy server.

Edit the line:

Defaults        env_keep += "ftp_proxy http_proxy https_proxy no_proxy FISHEYE_INST"

I then had to make sure I moved the config.xml file and data folder that had been erroneously created under fecru-4.4.7 into instance.

# mv fecru-4.4.7/config.xml instance
# mv fecru-4.4.7/data instance

Now when I run sudo as the crucible user it keeps the environment variable pointing to the instance path. The instance path and all contained files must belong to crucible:crucible, so chown them:

# chown -R crucible:crucible instance

Finally starting crucible with:

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

All is good once again.

 

  • fecru = FishEye + CRUcible
Proxy Fun and Games — October 11, 2018

I seem to spend most of my day trying to sort out issues regarding getting different applications through the corporate proxy server. I’m really hoping one day we can set up a transparent proxy, if for no other reason than to make our development lives easier.

At present we need to use a browser proxy script (http://wpad/wpad.dat) to determine which of the corporate proxy servers to use. We have an internet proxy and a Gov’t gateway proxy; where the user is trying to go determines which proxy they must use.

The script works just fine for 99% of our user base.

However, when it comes to the other 1%, it’s not just the browser that needs to be told what proxy to use – we need to inform the various development tools how to use a proxy too. This is where the pain is.

We need to setup a proxy in several places eg. for the operating system, for the browser, for Git, for NPM/Yarn, for Composer, for Java…

Operating System

Windows

Open a CMD/PowerShell window with Administrative permissions

C:> netsh winhttp set proxy http://username:password@192.168.0.117:8080 "<local>"

You may not need the username and password here as the OS will send your Windows credentials.

The <local> means bypass the proxy for any local address. You may add into that for other specific servers eg. "<local>,server.domain.tld"

Also set the environment variables for the proxy:

Windows Key + R

control sysdm.cpl,,3

Click the environment settings and add in the following settings to your user variables.

http_proxy=http://username:password@192.168.0.117:8080
https_proxy=http://username:password@192.168.0.117:8080
all_proxy=http://username:password@192.168.0.117:8080
no_proxy=localhost,domain.local,192.168.56.2

Linux

$ sudo vi /etc/environment

http_proxy=http://username:password@192.168.0.117:8080
https_proxy=http://username:password@192.168.0.117:8080
all_proxy=http://username:password@192.168.0.117:8080
no_proxy=localhost,domain.local,192.168.56.2
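A quick way to confirm the variables are being picked up (any external URL will do):

$ env | grep -i proxy
$ curl -I https://example.com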

Git proxy settings

$ git config --global http.proxy http://username:password@192.168.0.117:8080

You’ll probably need to ensure this is set for the sudo environment too, e.g. if you ever need to install global requirements with npm that pull from Git.

$ sudo git config --global http.proxy http://username:password@192.168.0.117:8080

NPM proxy settings

$ npm config set proxy http://username:password@192.168.0.117:8080

Again you’ll probably need to ensure it’s replicated into sudo.

$ sudo npm config set proxy http://username:password@192.168.0.117:8080

This actually writes to a file in your home folder called .npmrc, which you can edit if you need to put in some backslashes to escape any special characters in your password, e.g. c:\Users\myuser\.npmrc or ~/.npmrc. The sudo version will write it into the root user’s home folder.

Yarn proxy settings

As Yarn is essentially npm on steroids it works the same way but writes to ~/.yarnrc

$ yarn config set proxy http://username:password@192.168.0.117:8080
$ sudo yarn config set proxy http://username:password@192.168.0.117:8080

Composer proxy settings

Thankfully this is capable of using the Operating System proxy environment variables. So if you set them as above for Windows and/or Linux you should be good to go.

Java proxy settings

This has its own rules just like all the others, but you may also run into Java applications having their own proxy settings too – such as Gradle, which has its own properties file to set up the proxy. They all seem to follow a similar pattern though: edit a properties file and add in:

http.proxyHost=192.168.0.117
http.proxyPort=8080
http.nonProxyHosts=localhost|127.*|[::1]|*.domain.local

Typically this is done in the JRE’s lib/net.properties file so it applies to Java globally. E.g. my net.properties file is located under c:\Program Files\Java\jdk1.8.0_151\lib and has plenty of helpful commented examples on how to set things.

Under Debian my net.properties is located under /usr/lib/jvm/java-1.8.0-openjdk-amd64/jre/lib

They can also be passed on the Java command line as -D parameters, e.g.

$ java -Dhttp.proxyHost=192.168.0.117 -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="localhost|domain.local"