Totally Dockerized, Part 2: Docker Compose
I wanted to use Docker to host 8 different websites. Some of them use a little bit of PHP, and one of them (this site) uses WordPress and requires a database. I wanted all the websites to bootstrap from S3 so that I could smash the setup and rebuild pretty much instantly, no longer worrying about the sites getting hacked or broken. I wanted the static sites to update periodically from S3, so that I could edit them by editing the files on S3 (S3 is the data master). I wanted the WordPress site to back up regularly to S3 (WP is the data master).
I realised doing all this would require many different containers, and I didn’t want to start them by hand. I did that with my last attempt and, although it worked, it was a bit of a hack. I also wanted the entire solution to be containerized. The last attempt used HAProxy running on the host to send traffic to each site; this time I wanted that routing to happen inside a container too.
A colleague pointed me to Docker Compose which is described as:
Compose is a tool for defining and running multi-container applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.
That is exactly what I needed! I’ve noticed a few Docker containers on GitHub now include Compose recipes (like this one), so this could be the way to go. I’ll describe how I set up and used Compose to spin up all my sites.
Installing
You can install Compose using pip as follows:
```bash
pip install -U docker-compose
```
but a word of caution here: pip may downgrade packages you already have installed to the versions Compose depends on, and in my case it downgraded the requests library, which broke another tool I was running. The answer, of course, is to use virtualenv, which is what I ended up doing.
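As a minimal sketch (the environment path here is just an example), an isolated install looks like this:

```bash
# Create an isolated Python environment so Compose's dependencies
# cannot clobber packages used by other tools on the host.
virtualenv ~/compose-env
. ~/compose-env/bin/activate

# Install Compose inside the virtualenv and check it works.
pip install -U docker-compose
docker-compose --version
```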
Working with Compose
Compose is a very simple tool to get started with. The process is:
- Define a Dockerfile or set of Dockerfiles for your application
- Create a docker-compose.yml file which is a description of the services that make up your application and how they connect
- Bring up the whole thing by typing docker-compose up (a minimal example is sketched below)
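For illustration, here is roughly what a single-service Compose file looks like; the service name, build folder and ports are placeholders rather than anything from my setup:

```yaml
# docker-compose.yml (Compose v1 syntax: one top-level section per service)
web:
  build: ./mysite        # directory containing the Dockerfile
  ports:
    - "8080:80"          # host port : container port
  environment:
    SOME_SETTING: value  # passed into the container as an environment variable
```

Running docker-compose up in the same directory builds the image and starts the container.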
There are some useful tips for running Compose in production here, which I’d recommend reading. In particular you should note the advice on having multiple compose files:
```
docker-compose-dev.yml
docker-compose-prod.yml
```
and choosing which one to run using an environment variable:
```bash
$ COMPOSE_FILE=docker-compose-prod.yml
$ docker-compose up -d
```
My Compose file
The Compose file ended up being quite big, so I’ll explain it one section at a time. The whole thing needs to be in one plain text file though. The file is written in YAML and is split into a number of sections, one for each service you are starting. Each service normally corresponds to one container, and its section effectively captures what you would otherwise have passed to Docker on the command line.
Proxy
```yaml
proxy:
  build: nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
```
This section uses Jason Wilder’s cool automatic reverse proxy for Nginx to set up a top-level reverse proxy to all the other sites I’m hosting. This container hooks into Docker’s event stream to pick up when other containers are started and stopped, and automatically generates Nginx config to route traffic to them. The source is here and a write-up here. The ports section maps port 80 on the host to port 80 in the proxy container; the proxy then passes requests on to the right child container.
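For reference, this is roughly equivalent to running Jason Wilder’s published image by hand (I build from a local checkout called nginx-proxy, so the image name below is the upstream one rather than mine):

```bash
# Run the automatic reverse proxy, publishing port 80 and mounting the
# Docker socket read-only so it can watch container start/stop events.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```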
S3
```yaml
s3fudgetiger:
  build: docker-s3-volume
  command: /data s3://fudgetiger.com/www
  environment:
    AWS_DEFAULT_REGION: eu-west-1
s3inkymum:
  build: docker-s3-volume
  command: /data s3://inkymum.com/www
  environment:
    AWS_DEFAULT_REGION: eu-west-1

# ... one of these sections for each site ...

s3charltones:
  build: docker-s3-volume
  command: /data s3://charltones.com
  environment:
    AWS_DEFAULT_REGION: eu-west-1
```
This section defines one docker-s3-volume service per website (source here). This container creates a Docker data volume (basically a folder that can be shared with other containers) which is backed up periodically to an S3 bucket. The data volume appears in /data. When the container starts it does a restore, and it does a backup whenever it receives a USR1 signal. More on that later.
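You can also trigger a backup by hand by sending that signal yourself. The container name below is just what Compose generated on my host (check docker ps for yours):

```bash
# Ask the S3 volume container to sync /data back to its bucket now.
docker kill -s SIGUSR1 website_s3charltones_1
```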
I created one bucket per site and copied the root web folder to each of them. The environment section contains environment variables to pass to the container. In this case I needed to pass variables that allow the AWS command line tools to work, since these are used to sync with S3. In my development environment I also needed to specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. In production I granted a role to the machine running Docker so that it could access the required buckets.
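In the development Compose file that means a section looking something like this (the key values are obviously placeholders):

```yaml
s3fudgetiger:
  build: docker-s3-volume
  command: /data s3://fudgetiger.com/www
  environment:
    AWS_DEFAULT_REGION: eu-west-1
    AWS_ACCESS_KEY_ID: AKIA****************
    AWS_SECRET_ACCESS_KEY: ****************
```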
I only made a minor change to the original container recipe – I changed it to do a restore every 30 minutes. This means the /data folder will be updated with any changes made to S3 every half hour. This is how I can edit the contents of the sites.
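The change boils down to something like the cron entry below (a sketch of the idea rather than the exact modification; the real container wraps the sync in its own script):

```
# /etc/crontab style entry inside the S3 volume container (sketch).
# Pull the latest copy of the bucket into /data every 30 minutes.
*/30 * * * * root aws s3 sync s3://fudgetiger.com/www /data --delete
```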
PHP / static sites
```yaml
fudgetiger:
  build: nginxphp
  volumes_from:
    - s3fudgetiger
  environment:
    VIRTUAL_HOST: www.fudgetiger.com,fudgetiger.com
inkymum:
  build: nginxphp
  volumes_from:
    - s3inkymum
  environment:
    VIRTUAL_HOST: www.inkymum.com,inkymum.com

# ... one of these sections per static site ...

mobilemechanic:
  build: nginxphp
  hostname: mobilemechanic-hampshire
  domainname: co.uk
  volumes_from:
    - s3mobilemechanic
  environment:
    VIRTUAL_HOST: www.mobilemechanic-hampshire.co.uk,mobilemechanic-hampshire.co.uk
    SMTP_SERVER: ********
    SMTP_USERNAME: ********
    SMTP_PASSWORD: ********
```
These sites are all based on this container, which is a standard recipe that uses Nginx and PHP to serve a website. The volumes_from section links each site to the /data volume that is copied from S3 every 30 minutes. The environment section is there so the top-level proxy container knows which virtual host names to put in its Nginx config.
I made one major change to the recipe. A couple of the sites need to send emails, so I altered the recipe to install ssmtp. This is a very simple MTA which is useful when you need a service to be able to send email but don’t want to set up Exim or Postfix. I also had to change the recipe to point to /data as the web root.
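ssmtp only needs a handful of settings. Inside the container my change ends up writing a config along these lines, filled in from the SMTP_* environment variables above (the exact contents are a sketch of the idea, not the recipe’s literal output):

```
# /etc/ssmtp/ssmtp.conf (sketch)
# Mail relay, from SMTP_SERVER
mailhub=smtp.example.com:587
# Credentials, from SMTP_USERNAME / SMTP_PASSWORD
AuthUser=user@example.com
AuthPass=secret
UseSTARTTLS=YES
# Allow the application to set the From: header
FromLineOverride=YES
```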
WordPress
```yaml
charltones:
  build: wordpress/apache
  hostname: charltones
  domainname: com
  links:
    - charltonesdb:mysql
  environment:
    VIRTUAL_HOST: www.charltones.com,charltones.com
    WORDPRESS_DB_USER: ********
    WORDPRESS_DB_PASSWORD: ********
    WORDPRESS_DB_NAME: ********
    SMTP_SERVER: ********
    SMTP_USERNAME: ****@********
    SMTP_PASSWORD: ********
charltonesdb:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: ********
    MYSQL_DATABASE: ********
    MYSQL_USER: ********
    MYSQL_PASSWORD: ********
```
This section defines the site you’re currently reading. It uses the official Docker WordPress image from here, with only one modification from me: the same change to send emails as in the nginxphp recipe.
The YAML above starts two containers: one running Apache and PHP to host WordPress, the other running MariaDB for storage. To connect the two, the first container uses a links section. This uses Docker’s linking facility to hook the two containers together without the need to specify ports.
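Because the link is aliased to mysql, the WordPress container can reach the database simply by that hostname, which is what the official image expects by default. If you want to see what the link provides, you can poke around from a one-off container (service names as in my Compose file):

```bash
# Resolve the link alias to the database container's address.
docker-compose run charltones getent hosts mysql

# Inspect the environment variables Docker injects for the link.
docker-compose run charltones env | grep ^MYSQL_PORT
```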
WordPress Backup
```yaml
charltonesbackup:
  build: wordpress-backup
  links:
    - charltonesdb:mysql
  volumes_from:
    - s3charltones
    - charltones
charltonesbackupnotify:
  build: docker-inotify-signal-container
  links:
    - s3charltones:monitor
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  volumes_from:
    - s3charltones
  command: SIGUSR1 /data
```
The last section sets up two containers to help automatically back up WordPress. The source for wordpress-backup came from here. The source for the notify container came from here.
The wordpress-backup container sets up a cron job to back up WordPress daily. It connects to the database through a link and to the WordPress container through a shared volume, then creates an archive of each. It would normally put these on its own /backups volume, but I modified it to copy the archives to the S3 volume instead. I also added the ability to restore the latest archive, which is what it does when the container starts for the first time. This allows me to completely bootstrap this site from fresh when needed.
The first container makes the backups, but those would just sit inside the Docker data volume. What is also needed is a trigger to copy the backups back to S3 when they are added, and that is what the inotify-signal container does. It watches the S3 container’s /data folder, and when the folder changes it triggers a sync back to S3 by sending a USR1 signal to the S3 container. The signal is trapped by that container’s main script, which copies the data back to S3. I had to modify this container to be a bit more Compose-friendly.
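Conceptually the notify container is just a small watch loop along these lines (a sketch of the idea, not the actual script in docker-inotify-signal-container; the environment variable holding the target container name is my own invention):

```bash
#!/bin/sh
# Block until something inside /data changes, then signal the monitored
# container so it syncs the volume back to S3, and repeat.
while inotifywait -r -e modify,create,delete,move /data; do
  docker kill -s SIGUSR1 "$MONITORED_CONTAINER"
done
```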
That’s it!
Next time I’ll post details of some of the modifications I made to get these containers working the way I wanted. I’ll also describe some of the challenges I had getting it all working in production and how I overcame them.