Stuff I'm Up To

Technical Ramblings

ESXi 6.0 to 6.5 Upgrade — October 14, 2018

ESXi 6.0 to 6.5 Upgrade

This weekend has turned out to be a challenge. Upgrading our VMware Horizon 7 estate to the latest release involved upgrading all the components: connection servers, security server, Composer, vCenter and the vSphere hosts.

Last weekend was upgrading the connection servers, security server and composer. This weekend is vCenter and the vSphere hosts.

99.9% of the skills required are really about how strong your Google Fu is!

hongkongphoeey

My skills include Google-fu and Duck-Jitsu, but I’m a Bing-do novice!


Jira Token Error, Leading to Fisheye + Crucible Failure — October 12, 2018

Jira Token Error, Leading to Fisheye + Crucible Failure

Today Jira fell over. I'm not sure why, but the result was a token error that refused to let my user or admin accounts log in properly. I managed to log on, but none of the dashboard or menu items worked, as it had this persistent token error.

I ended up rebooting the server and restarting the two services for Jira and Confluence.

These use the regular service start and stop with systemctl start jira and systemctl start confluence.
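
For reference – assuming the services were registered under those names when Jira and Confluence were installed – that's just:

# systemctl restart jira
# systemctl restart confluence
# systemctl status jira confluence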

Crucible, however, uses start and stop shell scripts.

Starting Crucible as root with

# /home/crucible/fecru-4.4.7/bin/start.sh

caused some very strange behaviour.

The first thing I noticed was that I had lost all of the configuration: it had reverted to a blank database and launched the setup program when I visited the URL! Something was clearly wrong there.

Next I thought I’d run it as the crucible user I setup for this purpose.

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

Even worse! Now not only was it empty but the log had all kinds of permissions errors.

The clue was in the question: which log was I actually looking at? I ended up with logs in the ~/fecru-4.4.7/var/log folder AND in the ~/instance/var/log folder, but with different dates. It looks like I'd spannered the install somehow, as logs should only exist in the instance folder. Although I had run the start.sh script, it must have been as the root user, which created my config under fecru* and NOT under instance. When I then ran it as the crucible user using sudo it did the same, but all the files were owned by root, which caused the permission errors.

The outcome showed that the problem related to running sudo and not maintaining the FISHEYE_INST environment variable, which points to the instance folder. I fixed this by running visudo and setting a rule to keep certain environment variables – in the same way as I would for a proxy server.

Edit the line:

Defaults        env_keep += "ftp_proxy http_proxy https_proxy no_proxy FISHEYE_INST"
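
A quick way to prove the variable now survives the switch of user – assuming FISHEYE_INST is exported in the root session and the instance lives at /home/crucible/instance – is:

# export FISHEYE_INST=/home/crucible/instance
# sudo -u crucible env | grep FISHEYE_INST
FISHEYE_INST=/home/crucible/instance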

I then had to make sure I moved the config.xml file and data folder that had been erroneously created under fecru into instance.

# mv fecru-4.4.7/config.xml instance
# mv fecru-4.4.7/data instance

Now when I run sudo as the crucible user it keeps the environment variable pointing to the instance path. The instance path and all the files it contains must belong to crucible:crucible, so chown them:

# chown -R crucible:crucible instance

Finally starting crucible with:

# sudo -u crucible /home/crucible/fecru-4.4.7/bin/start.sh

All is good once again.

 

  • fecru = FishEye + CRUcible
Proxy Fun and Games — October 11, 2018

Proxy Fun and Games

I seem to spend most of my day trying to sort out issues with getting different applications through the corporate proxy server. I'm really hoping that one day we can set up a transparent proxy, if for no other reason than to make our development lives easier.

At present we use a browser proxy auto-config script (http://wpad/wpad.dat) to determine which of the corporate proxy servers to use. We have an internet proxy and a Gov't gateway proxy, and where the user is trying to go determines which proxy they must use.

The script works just fine for 99% of our user base.

However, when it comes to the other 1%, it's not enough to tell just the browser which proxy to use – in the development world we need to tell the various development tools how to use a proxy too. This is where the pain is.

We need to setup a proxy in several places eg. for the operating system, for the browser, for Git, for NPM/Yarn, for Composer, for Java…

Operating System

Windows

Open a CMD/PowerShell window with Administrative permissions

C:> netsh winhttp set proxy http://username:password@192.168.0.117:8080 "<local>"

You may not need the username and password here as the OS will send your Windows credentials.

The <local> means bypass the proxy for any local address. You may add into that for other specific servers eg. "<local>,server.domain.tld"

Also set the Environment variables for the proxy

Windows Key + R

control sysdm.cpl,,3

Click the environment settings and add in the following settings to your user variables.

http_proxy=http://username:password@192.168.0.117:8080
https_proxy=http://username:password@192.168.0.117:8080
all_proxy=http://username:password@192.168.0.117:8080
no_proxy=localhost,domain.local,192.168.56.2
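
If you'd rather not click through the dialog, setx will persist the same values as per-user variables (a sketch – note the new values only take effect in freshly opened shells):

C:> setx http_proxy "http://username:password@192.168.0.117:8080"
C:> setx https_proxy "http://username:password@192.168.0.117:8080"
C:> setx all_proxy "http://username:password@192.168.0.117:8080"
C:> setx no_proxy "localhost,domain.local,192.168.56.2"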

Linux

$ sudo vi /etc/environment

http_proxy=http://username:password@192.168.0.117:8080
https_proxy=http://username:password@192.168.0.117:8080
all_proxy=http://username:password@192.168.0.117:8080
no_proxy=localhost,domain.local,192.168.56.2
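
/etc/environment is only read at login, so log out and back in (or export the variables in your current shell) and then sanity-check that traffic actually goes via the proxy – assuming curl is installed:

$ env | grep -i _proxy
$ curl -I https://www.google.com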

Git proxy settings

$ git config --global http.proxy http://username:password@192.168.0.117:8080

You’ll probably need to ensure this is set for the sudo environment too if you ever have the need to install global requirements with npm.

$ sudo git config --global http.proxy http://username:password@192.168.0.117:8080
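
To see what Git will actually use, or to clear it again later, the matching get/unset calls are below; https.proxy can be set the same way if your proxy handles HTTPS differently:

$ git config --global --get http.proxy
$ git config --global https.proxy http://username:password@192.168.0.117:8080
$ git config --global --unset http.proxy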

NPM proxy settings

$ npm config set proxy http://username:password@192.168.0.117:8080

Again you’ll probably need to ensure it’s replicated into sudo.

$ sudo npm config set proxy http://username:password@192.168.0.117:8080

This actually writes to a file in your home folder called .npmrc, which you can edit if you need to add backslashes to escape any special characters in your password. eg. c:\Users\myuser\.npmrc or ~/.npmrc – and the sudo version will write it into the root user's home folder.
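
For reference, the file is just plain text – something like this (the https-proxy key appears if you also run npm config set https-proxy …):

$ cat ~/.npmrc
proxy=http://username:password@192.168.0.117:8080
https-proxy=http://username:password@192.168.0.117:8080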

Yarn proxy settings

As Yarn is essentially npm on steroids it works the same way but writes to ~/.yarnrc

$ yarn config set proxy http://username:password@192.168.0.117:8080
$ sudo yarn config set proxy http://username:password@192.168.0.117:8080

Composer proxy settings

Thankfully this is capable of using the Operating System proxy environment variables. So if you set them as above for Windows and/or Linux you should be good to go.
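
If it doesn't seem to be picking them up, composer diagnose is a quick way to check – it runs through connectivity tests (including the proxy) and reports what it found:

$ env | grep -i _proxy
$ composer diagnose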

Java proxy settings

This has it’s own rules just like all the others. But you may also run into Java applications having their own proxy settings too. Such as gradle which has it’s own properties file to setup the proxy. They all seem to be a similar pattern though, edit a properties file and add in:

http.proxyHost=192.168.0.117
http.proxyPort=8080
http.nonProxyHosts=localhost|127.*|[::1]|*.domain.local

Typically this is done in the JRE's lib/net.properties file so it applies to Java globally. eg. My net.properties file is located under c:\Program Files\Java\jdk1.8.0_151\lib and has plenty of helpful commented examples on how to set things.

Under Debian my net.properties is located under /usr/lib/jvm/java-1.8.0-openjdk-amd64/jre/lib

They can also be passed to the Java command line as -D parameters eg.

$ java -Dhttp.proxyHost=192.168.0.117 -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="localhost|domain.local"
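
For Gradle specifically, the same keys go into a gradle.properties file with a systemProp. prefix – a sketch, assuming a per-user file at ~/.gradle/gradle.properties:

$ cat ~/.gradle/gradle.properties
systemProp.http.proxyHost=192.168.0.117
systemProp.http.proxyPort=8080
systemProp.https.proxyHost=192.168.0.117
systemProp.https.proxyPort=8080
systemProp.http.nonProxyHosts=localhost|*.domain.local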

 

Local Git Repository — October 6, 2018

Local Git Repository

When working on a project at home I don’t necessarily want to host my Git repo online and don’t feel the need for installing a Gitlab server on my home network, but I do want to backup my projects to my cloud backup.

I also would like to not backup all the vendor resources with my project. So I’d like to exclude the node_module folder and other .gitignore content.

Whilst googling around I discovered I could just use a folder as a repo. Most people tend to do this onto a network file share, but my needs were simple. All I wanted to do was include my Git repo within the folders that are automatically backed up to the cloud.
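
As a minimal sketch of that approach (the backup path here is hypothetical – it just needs to be a folder your cloud backup already covers), a bare repository in a plain folder works as a push target:

$ git init --bare ~/CloudBackup/repos/myproject.git
$ cd ~/projects/myproject
$ git remote add backup ~/CloudBackup/repos/myproject.git
$ git push -u backup master

Because node_modules and anything else in .gitignore never gets committed, only the source ends up in the backed-up folder.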


JIRA, Confluence and Nginx — September 15, 2018

JIRA, Confluence and Nginx

With Atlassian Jira Software and Confluence installed onto the same server, I thought I'd investigate setting things up so we don't have to use the default TCP ports over HTTP. Instead, let's set up a reverse proxy using HTTPS over TCP 443 that forwards to the TCP 8080 and 8090 ports.

The aim is to get Jira accessible as https://jira.domain.local and Confluence as https://jira.domain.local/confluence.
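
The Nginx side ends up roughly the shape of the sketch below – the server name, certificate paths and header choices will need adjusting for your environment, and Jira/Confluence themselves also need their connector/context settings updated to match the proxied URLs:

server {
    listen 443 ssl;
    server_name jira.domain.local;

    # ssl_certificate /etc/nginx/ssl/jira.crt;
    # ssl_certificate_key /etc/nginx/ssl/jira.key;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://localhost:8080;
    }

    location /confluence {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://localhost:8090/confluence;
    }
}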


JIRA Software and Confluence — September 14, 2018

JIRA Software and Confluence

Installing Atlassian Jira Software onto an in-house or self-hosted server is as simple as following the Jira installation guide. The only thing missing is the setup of the database.

Jira suggest that whilst other databases are available (MySQL, MSSQL etc.), their preferred DB is PostgreSQL. Primarily because it's common in their user space and support environment, meaning that their support and documentation are likely to be more readily available for PostgreSQL instances than for other DBs.

Let’s follow the advice and install postgresql.

$ sudo apt-get install postgresql

At the time of writing this installs postgresql version 9.6 on Debian Stretch.

In order to create an environment that we can manage, there are a couple of PostgreSQL config changes to make to ensure you can access the DB from another system – for managing it with pgAdmin 4.

Enable access to postgresql from specific network/IP addresses by editing pg_hba.conf under /etc/postgresql/9.6/main.

$ sudo vi /etc/postgresql/9.6/main/pg_hba.conf

Find the line:

host    all    all    127.0.0.1/32    md5

Add a line below to match your required IP addresses/subnets eg.

host    all    all    192.168.0.0/24  md5

This allows any machine with a 192.168.0.X address to access the DB.

Now we need to listen or bind to an IP address that is available on the network. By default postgresql only listens on 127.0.0.1 port 5432, meaning it will only accept connections to the local machine from the local machine.

$ sudo vi /etc/postgresql/9.6/main/postgresql.conf

Find the line beginning:

#listen_addresses = 'localhost'

Add a new line below it:

listen_addresses = '*'

Restart the postgresql service:

$ sudo systemctl restart postgresql.service

Databases and User

Create a database and a user for Jira/Confluence to use

$ sudo -u postgres createdb jira
$ sudo -u postgres createdb confluence
$ sudo -u postgres createuser jiradb

Set the user's password and grant it access to the DBs.

$ sudo -u postgres psql 
psql (9.6.10)
Type "help" for help.

postgres=# alter user jiradb with encrypted password 'mysupersecretpassword';

postgres=# grant all privileges on database jira to jiradb;

postgres=# grant all privileges on database confluence to jiradb;
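
A quick way to confirm the pg_hba.conf/listen_addresses changes and the new credentials all work is to connect from another machine on the allowed subnet – the IP below is just a stand-in for the database server's real address:

$ psql -h 192.168.0.50 -U jiradb -d jira
Password for user jiradb:
jira=>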

When you install Jira and Confluence you can then use the database settings you've just created (for the Confluence install, substitute the confluence database).

Database Type: PostgreSQL
Hostname:      localhost
Port:          5432
Database:      jira
Username:      jiradb
Password:      mysupersecretpassword
Schema:        public


 

XSLT and SOAP —

XSLT and SOAP

All of our SOAP interactions with the Lagan CRM send and return SOAP and by association, XML. The normal practice of handling the sent or returned XML is by using XSLT to transform the data to and from the required format.

The forms product will submit XML through an XSL translation taking data from the POST’ed form data and turning it into the XML format/type required. The returned XML data must also be processed via an XSLT to present the data to the form.

How do we go about testing translations and stylesheets without constantly publishing forms and requesting data from the CRM server?

For this I used Postman to submit and retrieve sample SOAP envelopes with the required XML soapenv:Body. Then I can take the returned sample data and save it to an XML file. Now that I have a local sample of the XML, I can use an XSLT tool to process it via a locally created stylesheet. No more repetitive form submissions or having to work with only the forms product to develop the XSLT.
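
The same request can be scripted with curl as a rough sketch – the endpoint URL, SOAPAction value and file names below are all hypothetical and need swapping for your own environment's details:

$ curl http://crm.domain.local:8080/lagan/services \
       -H 'Content-Type: text/xml;charset=UTF-8' \
       -H 'SOAPAction: ""' \
       --data @FWTCaseFullDetails-request.xml \
       -o FWTCaseFullDetails.xml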


XSLT Tools

There are very few XSLT tools that do the job for free. Certainly when it comes to a GUI environment, all the tools are paid-for products.

At the command line there are some free options, but each has its challenges. I figured that just because it's command line doesn't mean I can't use it from a GUI. Atom has a very useful plugin that can be used to interface with the command-line XSLT programs – atom-xsltransform. The settings for the plugin just point to the XSLT processor of your choice.

Once installed, you press ctrl-shift-p whilst in your XML source file; it prompts you for the path of the XSLT transformation file to use and then returns the output into an edit tab in Atom.

MSXSL

For Windows I came across a very simple command-line product from Microsoft, MSXSL. It doesn't look like there's a recent version, as this dates back to 2004, but as XML has been around for 20 years or so this may not be a problem. I did, however, find it seemed to produce broken output that looked to be unicode-related, so maybe it's not capable of handling the UTF-8 files I'm using.

xsltproc

This is from the world of Linux, but there is a port to Windows that works.

For Linux just install it from the repository:

$ sudo apt-get install xsltproc
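
Once installed it can also be run directly from the shell – note that xsltproc takes the stylesheet first and then the XML document (the file names here are the sample ones used later with Saxon):

$ xsltproc FWTCaseFullDetails.xslt FWTCaseFullDetails.xml > output.html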

For Windows, it’s harder work. Not significantly, but frustrating. You need to download a series of files, extract them all into the same place, to let their individual bin folders merge their contents. Then you can run the included xsltproc.exe and it should find all of the dll’s.

ftp://ftp.zlatkovic.com/libxml/

I chose the 64bit 7z files and extracted these files:

  • iconv-1.14-win32-x86_64.7z
  • libtool-2.4.6-win32-x86_64.7z
  • libxml2-2.9.3-win32-x86_64.7z
  • libxslt-1.1.28-win32-x86_64.7z
  • mingwrt-5.2.0-win32-x86_64.7z
  • openssl-1.0.2e-win32-x86_64.7z
  • xmlsec1-1.2.20-win32-x86_64.7z
  • zlib-1.2.8-win32-x86_64.7z

Saxon

This is a Java product and comes in a number of versions, from the free home edition up to the professional editions that require payment.

It’s hosted here on Sourceforge: http://saxon.sourceforge.net/

I downloaded the HE (home edition) and just placed the jar files somewhere I could use them.

From the Linux command line I used it like this:

$ java -jar saxon9he.jar -s:/home/user/lagan/xslt/FWTCaseFullDetails.xml -xsl:/home/user/lagan/xslt/FWTCaseFullDetails.xslt

Atom plugin settings

It’s a simple case of putting in the path of the executable you want to run. Pay attention to the order of the parameters for the tools. MSXML and xsltproc have the XML and XSL options in a different order.

For the Linux xsltproc settings I used:

/usr/bin/xsltproc %XML %XSL

For Saxon I had to be specific about where the jar file was as I haven’t installed it into the java class path.

java -jar /home/home/saxon/saxon9he.jar -s:%XML -xsl:%XSL

Stylesheets

The XSLT stylesheet acts as the instruction set to take the XML input and apply the XSLT logic to transform the XML content into another format such as text or HTML.

W3Schools has some useful guidance here: https://www.w3schools.com/xml/xsl_intro.asp

Another useful intro: https://www.tutorialspoint.com/xslt/

 

VOF – Accessing H2 from Another System — August 30, 2018

VOF – Accessing H2 from Another System

Following on from Verint Online Forms, using H2 seems pretty straightforward locally. It fires up a web server and you can manage the H2 database straight from there.

You need the VOF database details you put into Config.sh then you can start connecting to it from within the browser.

eg.

JDBC URL: jdbc:h2:~/lagan/dform-x.x.x/db/kana-integration/h2

But if you’re running dforms on your virtual box development server you’ll be denied because the setting webAllowOthers is not set.*

This is easily remedied and still secure as your virtual box should be using a “host only network adapter” so only your system can get to it.

Create a file in your home folder and put the following single line into it:

$ vi ~/.h2.server.properties

webAllowOthers=true

That’s all. Once the server runs it will probably add some more to the file, but you should now be able to access the H2 GUI at http://192.168.56.2:8082

The other thing that may be stopping you: by default H2 looks at your system's name and resolves it to an IP, and this is the IP that will listen on port 8082. If you've set up a VirtualBox machine then your /etc/hosts file may contain a line like 127.0.1.1 debian, and this will be an inaccessible IP. You'll need to change this so you have an entry matching your machine's DNS name eg.

192.168.56.2    debian debian.domain.local

When you start H2 you should then see it listen on the correct address.

$ sh ./h2.sh
Failed to start a browser to open the URL http://192.168.56.2:8082: Browser detection failed and system property h2.browser not set

Don’t worry about the error. It’s complaining because we’re running a headless non-Windowed server, there is no X11 and there is no browser to launch. The important thing is it’s starting on the right IP address.

Other Useful Options

By default H2 tries to start a browser as shown in the above error message. You can stop this behaviour by passing parameters to the h2.sh call

-web – start a web service
-tcp – start a tcp service
-pg – start a postgres service
-browser – try and start a browser

So if you just want a web service

$ sh ./h2.sh -web

Web Console server running at http://192.168.56.2:8082 (others can connect)

No more error message!

You can chain them too eg. sh ./h2.sh -web -tcp -pg or for a more permanent solution edit the h2.sh file and add -web to the java line:

java -cp "$dir/h2-1.4.197.jar:$H2DRIVERS:$CLASSPATH" org.h2.tools.Console "$@" -web

 

* Yes I know you can set this in the Web GUI. But if you can't get to the Web GUI (the whole point of this article) you'll need to set it from the server's command line.

Verint Online Forms —

Verint Online Forms

We’re new to this and trying to integrate a form solution with our Lagan CRM system. We have a corporately installed test and production system for forms, but it get frequent usage by many non-IT related staff, so I thought about deploying our own dev system.

The forms products are pretty much Jetty programs with a database requirement. Looking at the config files for the initial deployment package they are looking for either H2, Oracle or MSSQL. That means our only real dev option is H2.

Step 1 – Install the H2 database

Download the platform independent zip from http://www.h2database.com/html/main.html and extract it into a suitable location.

Make sure you have a $JAVA_HOME environment variable set, and ensure you have a Java JDK (not just a JRE) installed. On my Debian system I set it to the default java instance (which just happens to be Java 10):

$ export JAVA_HOME=/usr/lib/jvm/default-java
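
To make that survive new sessions, one option is to append it to your shell profile, e.g.:

$ echo 'export JAVA_HOME=/usr/lib/jvm/default-java' >> ~/.profile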

Then run the H2 program:

$ cd h2/bin
$ sh ./h2.sh

This fires up a browser session and gives you an icon in the tray if you’re running a windowed environment.

Step 2 – Install dforms

Extract the dforms zip file.

Edit the config.sh file in dforms-x.x.x/bin as necessary. The only changes I made to this one were the jdbc user and password. I prefer not to use defaults.

Run the program:

$ cd dforms-x.x.x/bin
$ sh ./Run.sh

We then have a running dforms program listening on the default port 9081. You can use your browser to visit it at http://localhost:9081/auth/login and log on using the default Admin/Admin credentials.

Step 3 – Install dforms-leadapter

Extract the dforms-leadapter-x.x.x zip file.

Edit the config.sh file in dforms-leadapter-x.x.x/bin as necessary. I made a few more changes to this one: again the jdbc user and password – it's a different database from the dforms one and there are two of them in this config – but also the flweb_lagan_uri, flweb_user and flweb_password to match our environment.

Run the program:

$ cd dforms-leadapter-x.x.x/bin
$ sh ./Run.sh

We then have a running dforms-leadapter program listening on the default port 9082. You can use your browser to visit it at http://localhost:9082/auth/login and log on using the default Admin/Admin credentials.

Sudo and Proxy / Environment Settings — August 15, 2018

Sudo and Proxy / Environment Settings

When you run a program using sudo, what tends to happen is that the sudo/root environment fails to do anything useful on the internet: it times out trying to connect to the systems it needs in order to download the updates that require elevated permissions.

We discovered using sudo composer self-update failed to update the core instance of composer, not because of permissions, but because it could not get to the internet to download it.

Set which environment variables get preserved for sudo in your /etc/sudoers file by running:

$ sudo visudo

Search for the line

Defaults    env_reset

and change it to

Defaults    env_keep += "ftp_proxy http_proxy https_proxy no_proxy"

Now your proxy will be set within your sudo environment too.
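
Worth remembering: env_keep only preserves variables that are actually set in your own session, so a quick check looks like this:

$ export http_proxy=http://username:password@192.168.0.117:8080
$ sudo env | grep -i _proxy
http_proxy=http://username:password@192.168.0.117:8080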

References

Source: https://stackoverflow.com/questions/8633461/how-to-keep-environment-variables-when-using-sudo

Laravel 5.5 and Hot Module Reload — July 20, 2018

Laravel 5.5 and Hot Module Reload

Revisiting a previous post about vue-cli 3 and HMR, I tried to get HMR going in a similar fashion through laravel-mix.

The first mistake to make is thinking laravel-mix needs BrowserSync for HMR – it doesn't, so don't install it or configure it in the webpack.mix.js file.

HMR on Laravel 5.5 is loaded by running the package.json “hot” script:

$ npm run hot

It compiles the assets and then sits there, apparently doing nothing, when actually it's listening on localhost:8080 for HMR/WDS connections. Whilst in this state, if you open another session and serve your project using artisan, HMR just works…

$ php artisan serve

… if you are developing on the same “localhost” as the HMR and artisan server are running on.

But what if you’re not?

I tend to fire up a virtual host with Laravel installed, so I can't access it as “localhost” – I must use one of its public interfaces, such as 192.168.56.2.

To make laravel-mix HMR run on one of your public interfaces you'll need to edit your webpack.mix.js file and add the following, adjusted for your serving host:

mix.options({
  hmrOptions: {
    host: '192.168.56.2',
    port: 8080
  }
});

In your blade template ensure you refer to your assets using the form src="{{ mix('js/app.js') }}", as this handles adjusting the host in the blade depending on whether you are running the hot script or not.

You MUST run two sessions to use HMR. One to run the hot compiler and one to serve the php environment with artisan. I’ve had frustrating times trying to use & for task spawning.

In session one:

$ npm run hot

In session two:

$ php artisan serve --host 192.168.56.2

Visit your app at: http://192.168.56.2:8000 and you’ll get your artisan served php project.

If you inspect the page in your browser you’ll see the mix src becomes //192.168.56.2:8080/app.js because of the webpack.mix.js change – NOT the artisan serve.

A Further Note

The ability to override the config via hmrOptions looks like a laravel-mix ^2.0 thing. I just spent an hour wondering why another project was failing to run on anything other than localhost with the option set.

Upgrading to laravel-mix ^2.0 in package.json resolved the problem.

 

Laravel storage/logs Error — July 19, 2018

Laravel storage/logs Error

A regular issue for me is failing the initial deployment of a git clone Laravel server using Nginx. It’s almost always because I forget to create and give permissions to the Nginx user www-data.

UnexpectedValueException
There is no existing directory at "/var/www/myproject/storage/logs" and its not buildable: Permission denied

Then even if I do sort the permissions here it fails again with:

InvalidArgumentException
Please provide a valid cache path.

This is because the storage/framework path and subfolders don't exist. You need to create folders and make sure the www-data user has read/write/create permissions:
$ mkdir -p storage/framework/cache
$ mkdir -p storage/framework/sessions
$ mkdir -p storage/framework/views
$ sudo chgrp www-data storage -R
$ sudo chmod g+rwx storage -R
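
Laravel also expects bootstrap/cache to be writable by the web server, so on a fresh clone it's worth giving that folder the same treatment:

$ sudo chgrp www-data bootstrap/cache -R
$ sudo chmod g+rwx bootstrap/cache -R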