As I’m working on Asterisk right now, the current challenge is to get Jitsi configured so we can conference audio-only users into our video chats. To do this you need a SIP add-on called ‘Jigasi’.
I started out wanting a real-time database connection to our existing LDAP server. This went well, but involved importing a schema into the LDAP cn=config and mapping the data into Asterisk.
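For reference, loading an extra schema into a cn=config style server generally looks like the transcript below. The asterisk.ldif filename is illustrative (Asterisk ships an LDAP schema in its source tree which needs converting to cn=config LDIF form first), not the exact file from this deployment:

```shell
# Load the converted schema over the local ldapi socket,
# authenticating as the local root user via SASL EXTERNAL.
$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f asterisk.ldif
```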
It then became apparent that the effort involved in linking Asterisk to LDAP didn’t really produce the key result I was after. My whole reason for linking Asterisk to LDAP was to share our users’ authentication credentials with their SIP devices. After I’d deployed it I discovered that Asterisk stores its credentials in different fields and, what’s worse, the password can only be plain text or an MD5 hash.
If our users must use a separate credential for logging into a SIP device, then using LDAP is no longer of interest to me. We may as well use a database – enter PostgreSQL.
In light of the possibility of many people needing to work from home the boss wanted to upgrade the phone system to bring in some fixes and new features for home working.
I’ve no experience of Asterisk and I’m not really a phone person, but he asked me to get a replacement system using the latest v17 release. I noticed there are v16 images available, but he was insistent upon v17. That meant building from source.
It’s a week of firsts, as until now I hadn’t built a multi-stage Docker image either.
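The multi-stage build boils down to something like the sketch below: a heavyweight first stage compiles Asterisk from source, and a slim second stage copies in only the installed files. The base images, package lists and paths are illustrative assumptions, not the exact Dockerfile used:

```dockerfile
# --- build stage: compile Asterisk 17 from source ---
FROM debian:buster AS build
RUN apt-get update && apt-get install -y build-essential wget ca-certificates \
    libedit-dev libjansson-dev libxml2-dev libsqlite3-dev uuid-dev
WORKDIR /usr/src
RUN wget https://downloads.asterisk.org/pub/telephony/asterisk/asterisk-17-current.tar.gz \
    && tar xzf asterisk-17-current.tar.gz
# Install into a staging directory so the runtime stage can copy it wholesale
RUN cd asterisk-17* && ./configure && make && make install DESTDIR=/opt/asterisk

# --- runtime stage: only the shared libraries Asterisk needs at run time ---
FROM debian:buster-slim
RUN apt-get update && apt-get install -y libedit2 libjansson4 libxml2 \
    libsqlite3-0 libuuid1 && rm -rf /var/lib/apt/lists/*
COPY --from=build /opt/asterisk /
CMD ["asterisk", "-f"]
```

The payoff is that none of the compilers or -dev packages end up in the final image.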
I’m using a VPN based on OpenVPN, and when I try to fire up a docker-compose set of containers it fails with:

```
ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
```
A quick session of Duck-jitsu and I found: https://github.com/docker/for-linux/issues/418#issuecomment-491323611
A few simple steps sorted it out for me: create a Docker network and use an override file to tell Compose to use it.
```shell
$ docker network create localdev --subnet 10.0.1.0/24
```

Then in a docker-compose.override.yml:

```yaml
version: '3'
networks:
  default:
    external:
      name: localdev
```
This does mean I’ll have to add the override file into all my local projects that get pushed upstream, but I can add it to .gitignore to prevent it being included.
Not sure if a ‘LAMP container set’ is the right name, but I have a docker-compose container set that includes Nginx, PHP and MySQL. I seem to build them regularly, so I thought I’d create a template to start from.
The installation on eoan fails with a missing dependency: containerd.io has no install candidate. Edit the /etc/apt/sources.list file and change the eoan version to disco, or remove the line and re-add it using:
```shell
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   disco \
   stable"
```
It may still fail to install, with docker.service reporting Failed with result 'start-limit-hit'. A reboot soon sorted it out, followed by a call to:

```shell
$ apt install -f
```
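If a reboot isn’t convenient, clearing systemd’s start-rate counter may also do the trick – an assumption on my part, based on how start-limit-hit normally behaves:

```shell
# Reset the failed-start counter, then try starting the service again
$ sudo systemctl reset-failed docker.service
$ sudo systemctl start docker.service
```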
The LDAP instance in our environment is pretty ancient and has served well for many, many years. But there’s one key feature we’d like to see added to our schema – memberOf.
The current group membership is based on memberUID and is a bit clunky by modern standards. Time to upgrade.
This time we’re going to run it in a container, making it more mobile and resilient. The image we chose, osixia/openldap, has a lot of pulls and looks like a good candidate to use.
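A minimal compose sketch for the image looks something like this; the tag, organisation, domain and password are placeholder values (check the image’s documentation for the full set of environment variables it supports):

```yaml
version: '3'
services:
  ldap:
    image: osixia/openldap:1.3.0     # illustrative tag
    environment:
      LDAP_ORGANISATION: "Example Org"   # placeholder values
      LDAP_DOMAIN: "example.org"
      LDAP_ADMIN_PASSWORD: "changeme"
    volumes:
      - ldap-data:/var/lib/ldap        # database files
      - ldap-config:/etc/ldap/slapd.d  # cn=config
    ports:
      - 389:389
volumes:
  ldap-data:
  ldap-config:
```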
I need a repeatable process for synchronising files between systems – something modular and stable. A couple of Docker containers running sshd should do the job.
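As a rough sketch of the shape of it (the image name is an assumption – any image capable of running sshd would do – and the paths are placeholders):

```yaml
version: '3'
services:
  sync-source:
    image: linuxserver/openssh-server   # assumed sshd image
    volumes:
      - ./source-data:/data             # files to be synchronised
    ports:
      - 2222:2222
  sync-target:
    image: linuxserver/openssh-server
    volumes:
      - ./target-data:/data
    ports:
      - 2223:2222                       # second sshd on a different host port
```

A scheduled rsync-over-ssh between the two then handles the actual transfer.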
When you want to monitor the condition of your containers using Nagios, there are some really nice features you can enable to check that your containers are up, not abusing the CPU etc. But wouldn’t it be nice to check what’s going on inside the container too?
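One way to get at the inside (a sketch, not the exact setup from this post) is to wrap docker exec in a Nagios command, so any standard plugin can be run within a named container. The plugin path assumes the nagios-plugins package is installed in the container image:

```
# commands.cfg — run a Nagios plugin inside a named container
define command {
    command_name  check_in_container
    command_line  /usr/bin/docker exec $ARG1$ /usr/lib/nagios/plugins/$ARG2$
}

# services.cfg — e.g. check disk usage inside the 'db' container
define service {
    use                  generic-service
    host_name            dockerhost
    service_description  db container disk
    check_command        check_in_container!db!check_disk -w 20% -c 10% -p /
}
```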
New job, new challenges.
I’ve come across Docker in the past, but pretty much only to the level of following a list of commands to fire up a container and then using it. Now I find myself in a place where much of the infrastructure of the new workplace is built on Docker or some measure of virtualisation of containers and hosts.
First, set up Docker using the relevant installation guidance. In my case the new workstation is Linux Mint 19.3 (tricia), which is Ubuntu based, but that meant I had to jig the install slightly to match the Ubuntu build name of “bionic” rather than the Mint build name of “tricia”.
```shell
$ cat /etc/os-release
NAME="Linux Mint"
VERSION="19.3 (Tricia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 19.3"
VERSION_ID="19.3"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.ubuntu.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=tricia
UBUNTU_CODENAME=bionic
```
I created an apt list file /etc/apt/sources.list.d/docker.list and put in the entry:

```
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
```
After installing Docker I followed on through to install docker-compose, as this allows me to group a number of Docker containers to provide the services I require.
In order for my regular user to manage containers and volumes etc., I added it to the docker group:

```shell
$ sudo gpasswd -a myuser docker
```
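The group change only takes effect at the next login; to verify it, or to pick it up in the current session (newgrp spawns a new shell with the group active):

```shell
# Confirm the user is now listed in the docker group
$ groups myuser
# Start a subshell with the docker group active right away
$ newgrp docker
```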
Create a YAML file

Make a directory for the collection of containers – really just a location for its configuration files. Within the folder create a file called docker-compose.yml. This will contain the specifics about what Docker containers you want to create, what images they use and what resources they provide or require:
```yaml
version: '3'
services:
  db:
    container_name: pgsql10
    image: postgres:10
    volumes:
      - data:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_PASSWORD: mysupersecretkey
    ports:
      - 5432:5432
  adminer:
    container_name: adminer
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  data:
```
In this case I am creating a container for PostgreSQL 10 and a container for Adminer, a web-based database administration tool, so I can run Postgres and use Adminer to manage/test it.
The important entries are:
- image: the name and tag of the image to install from Docker Hub.
- environment: contains specific environment variables for this service. In this case the ‘postgres’ user’s password so we can log in.
- volumes: this creates a Docker volume called ‘data’ and mounts it inside the container at /var/lib/postgresql/data. This will be the non-volatile storage used by the service, so the service can be torn down and replaced but the Docker volume will still hold the data.
- ports: the port mapping used. The number before the colon is presented to the host OS and the number following it is the port it forwards to within the container.
I then bring the containers up using:

```shell
$ docker-compose up -d
```

This will begin initialising the containers, firstly by pulling down the postgres:10 and adminer images from Docker Hub. The data volume will be created and mounted into the container. The postgres image has its own initialisation and will create the database if required.
That’s it! I now have a running PostgreSQL 10 instance.
I can test both containers and connect to the postgres instance by visiting http://localhost:8080 in a browser. Change the System type to PostgreSQL, the server to db, the username to postgres, and enter the password specified in the environment above. Click Login and you should find yourself able to manage the database.
You can also skip Adminer and use whatever DB tools you like, such as DBeaver, or just connect your application to it as you would any regular instance of Postgres.
To stop the container services, from within the folder holding your docker-compose.yml run:

```shell
$ docker-compose down
```
This will tidy things up, removing temporary storage etc., but leaves your data volume ‘as is’ so it’s ready for the next time you want to start the containers.
There’s more magic available using Compose, such as not exposing ports to anything other than the containers within the same Compose group, which really adds a layer of application security by not exposing services to systems that don’t require them.
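For example, a sketch based on the compose file above: dropping the ports mapping from the db service keeps PostgreSQL reachable from the adminer container over the default Compose network, but invisible from the host:

```yaml
version: '3'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: mysupersecretkey
    # no 'ports:' entry — other containers on the same Compose
    # network can still reach db:5432, but the host cannot
  adminer:
    image: adminer
    ports:
      - 8080:8080   # only the web UI is exposed to the host
```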
Some useful commands:
```shell
$ docker-compose ps
$ docker-compose images
$ docker volume ls
$ docker ps
$ docker ps -a
$ docker-compose exec -u postgres db psql -l
$ docker-compose logs -f db
```
When I tried out Headphones a few years ago I got frustrated with it within the first hour, ditched it and went back to manually downloading music. That was until I got pointed at Lidarr.
Lidarr is either a fork of, or certainly based on, the excellent Sonarr project for downloading TV series. Lidarr applies the same methodology and familiar interface to downloading music.