New job, new challenges.
I’ve come across Docker in the past, but had pretty much been limited to “follow these commands to fire up a container, then use it”. Now I find myself in a place where much of the infrastructure at the new workplace is built on Docker, or some measure of virtualisation of containers and hosts.
First, set up Docker using the relevant installation guidance. In my case the new workstation is Linux Mint 19.3 (Tricia), which is Ubuntu based, but this meant I had to jig the install slightly to use the Ubuntu release name of “bionic” rather than the Mint release name of “tricia”.
$ cat /etc/os-release
PRETTY_NAME="Linux Mint 19.3"
I created an apt list file
/etc/apt/sources.list.d/docker.list and put in the entry:
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
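With the list file in place, the remaining steps follow Docker’s standard Ubuntu instructions of the time; roughly (a sketch, assuming the docker-ce packages from that repository):

```shell
# Add Docker's GPG key so apt trusts the repository above
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Refresh package lists and install the engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```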
After installing Docker I followed on through to install
docker-compose, as this allows me to group a number of docker containers to provide the services I require.
In order for my regular user to manage containers, volumes etc., I added it to the docker group (log out and back in for this to take effect):
$ sudo gpasswd -a myuser docker
Create a YAML file
Make a directory for a collection of containers. Really it’s just a location for the configuration files. Within the folder create a file called
docker-compose.yml. This will contain the specifics about what docker containers you want to create, what images they use and what resources they provide or require:
In this case I am creating a container for PostgreSQL 10 and a container for adminer, a web based database administration tool. So I can run postgres and use adminer to manage/test it.
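The original file isn’t reproduced here, so the following is a minimal sketch of a docker-compose.yml matching the description below (the service names db and adminer, the port mappings and the example password are my assumptions; adjust to taste):

```yaml
version: "3"

services:
  db:
    image: postgres:10             # name:tag pulled from Docker Hub
    environment:
      POSTGRES_PASSWORD: example   # assumed password; choose your own
    volumes:
      - data:/var/lib/postgresql/data   # persistent storage for the database
    ports:
      - "5432:5432"                # host port : container port

  adminer:
    image: adminer
    ports:
      - "8080:8080"                # adminer's web UI, as used below

volumes:
  data:                            # the named docker volume
```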
The important entries are:
image: the name and tag of the image to install from docker hub.
environment: contains specific environment variables for this service. In this case the ‘postgres’ user’s password so we can log in.
volumes: this creates a docker volume called ‘data’ and will mount it inside the container at
/var/lib/postgresql/data. This will be the non-volatile storage used by the service. So the service can be torn down and replaced, but the docker volume will still have the data within it.
ports: in this case, what port mapping is used. The number before the colon is the port presented to the host OS and the number following it is the port it forwards to within the container.
I then bring the containers up using:
$ docker-compose up -d
This begins initializing the containers, firstly by pulling down the postgres and adminer images from Docker Hub. The data volume will be created and mounted into the container. The postgres image has its own initialization and will create the database if required.
That’s it! I now have a running PostgreSQL 10 instance.
I can test both containers and connect to the postgres instance by visiting
http://localhost:8080 in a browser. Change the System type to
PostgreSQL, server to
db, username to
postgres, and enter the password specified in the environment above. Click Login and you should find yourself able to manage the database.
You can also skip adminer and use whatever DB tools you like such as DBeaver, or just go connect your application to it like you would any regular instance of postgres.
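For example, assuming the compose file maps the container’s port 5432 through to the host (an assumption — only the adminer port is mentioned above), you can connect from the host with the standard postgres client:

```shell
# Connect to the containerised instance from the host.
# -h/-p match the assumed host-side port mapping; the password is
# whatever was set in the 'environment' section.
psql -h localhost -p 5432 -U postgres -l
```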
To stop the container services, from within the folder holding your docker-compose.yml run:
$ docker-compose down
This will tidy things up, removing temporary storage etc., but leaves your data volume ‘as is’ so it’s ready for the next time you want to start the containers.
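Conversely, if you ever do want to wipe the data volume as well, Compose can remove named volumes at teardown. This is destructive, so double-check before running it:

```shell
# Stop the containers AND delete named volumes declared in the compose file
# (this destroys the postgres data -- use with care)
docker-compose down -v
```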
There’s more magic possible with Compose, such as exposing ports only to other containers within the same Compose group, which adds a layer of application security by not exposing services to systems that don’t require them.
Some useful commands:
$ docker-compose ps          # state of the services in this Compose project
$ docker-compose images      # images used by those services
$ docker volume ls           # list docker volumes, including ‘data’
$ docker ps                  # running containers
$ docker ps -a               # all containers, running or stopped
$ docker-compose exec -u postgres db psql -l   # list databases inside the db container
$ docker-compose logs -f db  # follow the db container’s logs