Stuff I'm Up To

Technical Ramblings

Upgrading MySQL in a Container — July 3, 2020


Upgrading MySQL 5.5 to 5.7 in a Docker container set caused me some trouble. Setting the tag to 5.7.30 was all well and good, but when I fired up the container MySQL would stop immediately.

Looking at the log I found "The table is probably corrupted" and references to running mysql_upgrade, which I was expecting to have to do. But how do you do that when the service fails to start and the container is offline?
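One possible approach, sketched here as an assumption rather than the post's actual fix (continue reading for that): run the 5.7 image once with the data volume mounted and grant checks skipped, so mysqld stays up long enough for mysql_upgrade to run. The container and volume names below are made up for illustration.

```shell
# Sketch only: keep mysqld alive despite the corrupt table by skipping
# grant checks, then run mysql_upgrade inside the running container.
# 'mysql-fix' and 'mysql_data' are assumed names, not from the post.
docker run -d --name mysql-fix \
  -v mysql_data:/var/lib/mysql \
  mysql:5.7.30 --skip-grant-tables
docker exec mysql-fix mysql_upgrade -u root
docker rm -f mysql-fix   # then start the normal 5.7.30 container
```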

Continue reading
LAMP Container Set — February 22, 2020
QEMU/KVM and virt-manager — January 10, 2020


Setting up an open source virtualization solution is pretty straightforward. You just need to ensure you include all the components. This should drag in all the dependencies:

$ sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils firewalld qemu-utils spice-client-gtk

In order for your user to have permission to manage and create VMs you will need to add them to the libvirt group.

$ sudo gpasswd -a myuser libvirt

Now all you need is to drop an iso into the /var/lib/libvirt/images folder and you can begin installing a virtual machine from the virt-manager gui and boot it from your chosen iso.
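If you'd rather skip the GUI, the same install can be done from the CLI with virt-install, which the virtinst package provides. The VM name, sizes and iso path below are assumptions for illustration:

```shell
# Sketch only: create and boot a VM from the CLI instead of the
# virt-manager GUI. Name, memory, disk size and iso are assumed values.
sudo apt install virtinst
virt-install \
  --name testvm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/debian-10.iso \
  --os-variant debian10
```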

Docker and PostgreSQL —


New job, new challenges.

I’ve come across Docker in the past, but have pretty much just followed a set of commands to fire up a container and then used it. Now I find myself in a place where much of the infrastructure of the new workplace is built on Docker or some measure of virtualisation of containers and hosts.

Install Docker

First set up Docker using the relevant installation guidance. In my case the new workstation is Linux Mint 19.3 (Tricia), which is Ubuntu based, but this meant I had to jig the install slightly to match the Ubuntu build name of “bionic”, not the Mint build name of “tricia”.

$ cat /etc/os-release                                                             
NAME="Linux Mint"
VERSION="19.3 (Tricia)"
PRETTY_NAME="Linux Mint 19.3"

I created an apt list file /etc/apt/sources.list.d/docker.list and put in the entry:

deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

After installing Docker I followed on to install docker-compose, as this allows me to group a number of Docker containers to provide the services I require.

In order for my regular user to manage containers and volumes etc. I added it to the docker group:

$ sudo gpasswd -a myuser docker

Create a YAML file

Make a directory for a collection of containers. Really it’s just a location for the configuration files. Within the folder create a file called docker-compose.yml. This will contain the specifics about what Docker containers you want to create, what images they use and what resources they provide or require:

version: '3'
services:
  db:
    container_name: pgsql10
    image: postgres:10
    volumes:
      - data:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_PASSWORD: mysupersecretkey
    ports:
      - 5432:5432
  adminer:
    container_name: adminer
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  data:

In this case I am creating a container for PostgreSQL 10 and a container for adminer, a web-based database administration tool. So I can run postgres and use adminer to manage/test it.

The important entries are:

image: the name and tag of the image to install from docker hub.

environment: contains environment variables specific to this service. In this case the ‘postgres’ user’s password so we can log in.

volumes: this creates a docker volume called ‘data’ and mounts it inside the container at /var/lib/postgresql/data. This will be the non-volatile storage used by the service, so the service can be torn down and replaced but the docker volume will still have the data within it.

ports: what port mapping is used. The number before the colon is the port presented to the host OS and the number following it is the port it forwards to within the container.

I then bring the containers up using:

$ docker-compose up -d

This will begin initializing the containers, firstly by pulling down the images postgres:10 and adminer from Docker Hub. The data volume will be created and mounted into the container. The postgres image has its own initialization and will create the database if required.

That’s it! I now have a running PostgreSQL 10 instance.

I can test both containers and connect to the postgres instance by visiting http://localhost:8080 in a browser. Change the System type to PostgreSQL, the server to db, the username to postgres and the password to the one specified in the environment above. Click Login and you should find yourself able to manage the database.

You can also skip adminer and use whatever DB tools you like such as DBeaver, or just go connect your application to it like you would any regular instance of postgres.
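Because port 5432 is published to the host, any local client can talk to the instance directly. As a sketch (the package name is an assumption for Ubuntu/Mint), using the stock psql client:

```shell
# Connect from the host through the published 5432 port.
# On Ubuntu/Mint, psql ships in the postgresql-client package.
sudo apt install postgresql-client
psql -h localhost -p 5432 -U postgres -c 'SELECT version();'
```

It will prompt for the POSTGRES_PASSWORD value set in the compose file.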

To stop the container services, from within the folder holding your docker-compose.yml:

$ docker-compose down

This will tidy things up, removing temporary storage etc., but leaves your data volume ‘as is’ so it’s ready for the next time you want to start the containers.

There’s more magic possible using Compose, such as not exposing ports to anything other than the other containers within the same Compose project, which really adds a layer of application security by not exposing services to systems that don’t require them.
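As a sketch of that idea: simply leaving out the ports: mapping keeps a service off the host entirely, while other containers in the same Compose project can still reach it by service name over the default network.

```yaml
# Sketch: the same db service with no ports: mapping. adminer (in the
# same compose project) can still reach it at db:5432, but nothing on
# the host can connect to it directly.
services:
  db:
    container_name: pgsql10
    image: postgres:10
    restart: always
    environment:
      POSTGRES_PASSWORD: mysupersecretkey
    volumes:
      - data:/var/lib/postgresql/data
    # no ports: entry - not published to the host
```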

Some useful commands:

$ docker-compose ps
$ docker-compose images
$ docker volume ls
$ docker ps
$ docker ps -a
$ docker-compose exec -u postgres db psql -l
$ docker-compose logs -f db
Continue reading