Totally Dockerized, Part 3: Tuning and fixing
I mentioned in my last Dockerized post that I would follow up with how I tuned the setup to work more reliably. Here are some of the problems I encountered and what I did to fix them:
MariaDB exit
One of the first problems I had was that the MariaDB container would just stop after a short period, which meant the WordPress site kept going down. From the logs, it looked like the database was complaining about a lack of memory. I first tried switching to MySQL, but that didn't seem to make any difference, so I logged into the Docker host to have a look:
$ docker-machine ssh <your environment name>
Once on the machine I used top to see where all the memory was going. Bear in mind I'm running a lot of containers on a micro instance, so there is only 1GB to go around. To my surprise MariaDB wasn't the memory hog, just the victim of another memory hog – Apache. The official WordPress Docker image comes in two flavours – a default one that uses Apache, and a second one that uses PHP-FPM but which doesn't include a web server. My solution was to move to the second flavour and add a new container running nginx. I found a gist which contained the necessary nginx foo and put it on GitHub here: https://github.com/charltones/docker-nginx-fpm
Once I added this to my docker compose file, everything started up using a lot less memory and stayed running.
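For reference, here is a rough sketch of what that part of a compose file could look like. This is not my exact config – the service names, environment values and the nginx.conf path are illustrative placeholders:

```yaml
# Hypothetical excerpt - names, passwords and paths are illustrative
wordpress:
  image: wordpress:fpm          # the PHP-FPM flavour, no Apache inside
  links:
    - db:mysql
  volumes:
    - /var/www/html

nginx:
  image: nginx
  links:
    - wordpress
  ports:
    - "80:80"
  volumes_from:
    - wordpress                 # nginx serves the same document root
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf

db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
```

The key point is that nginx and the PHP-FPM container share the WordPress files (via volumes_from here), with nginx handling HTTP and passing PHP requests through to FPM.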
Could not find bridge docker0
A couple of times, after running docker-compose up and docker-compose down a lot, I got this error:
adding interface veth1287a59 to bridge docker0 failed: could not find bridge docker0: no such network interface
The short answer to this is that it means docker is broken and needs to be restarted:
$ docker-machine ssh <your environment>
<docker host>$ sudo service docker restart
After this docker becomes happy again. I guess this is one of those hints that docker isn’t quite ready to be used in commercial production systems.
Docker stats
In the course of investigating low memory issues, I found a more accurate way of seeing exactly how much memory each container was using:
$ docker stats <list of container names>
Provides this kind of info:
CONTAINER                         CPU %   MEM USAGE/LIMIT      MEM %   NET I/O
docker_charltones_1               0.00%   2.048 MB/1.041 GB    0.20%   739.3 MB/776.5 MB
docker_charltonesbackup_1         0.00%   3.011 MB/1.041 GB    0.29%   5.624 MB/64.09 kB
docker_charltonesbackupnotify_1   0.00%   1.802 MB/1.041 GB    0.17%   26.38 kB/738 B
docker_charltonesdb_1             0.03%   102.1 MB/1.041 GB    9.81%   44.04 MB/858.8 MB
docker_charltoneswp_1             0.00%   113.5 MB/1.041 GB    10.91%  879.1 MB/773.5 MB
docker_dashboard_1                0.00%   26.54 MB/1.041 GB    2.55%   47.63 kB/224.7 kB
docker_fudgetiger_1               0.01%   26.48 MB/1.041 GB    2.54%   314.2 kB/23.54 MB
docker_hostmybusiness_1           0.01%   26.61 MB/1.041 GB    2.56%   44.24 kB/34.87 kB
docker_infonimbus_1               0.00%   24.94 MB/1.041 GB    2.40%   24.7 kB/738 B
docker_inkymum_1                  0.01%   25.9 MB/1.041 GB     2.49%   140.8 kB/3.94 MB
docker_mobilemechanic_1           0.00%   27.86 MB/1.041 GB    2.68%   365 kB/3.361 MB
docker_proxy_1                    0.09%   7.733 MB/1.041 GB    0.74%   820.1 MB/814.2 MB
docker_s3charltones_1             0.00%   11.68 MB/1.041 GB    1.12%   49.09 MB/41.75 MB
docker_s3dashboard_1              0.00%   2.63 MB/1.041 GB     0.25%   1.379 MB/467.5 kB
docker_s3fudgetiger_1             0.00%   2.236 MB/1.041 GB    0.21%   5.553 MB/492.5 kB
docker_s3hostmybusiness_1         0.00%   2.175 MB/1.041 GB    0.21%   2.582 MB/471.9 kB
docker_s3infonimbus_1             0.00%   2.114 MB/1.041 GB    0.20%   1.138 MB/464.8 kB
docker_s3inkymum_1                0.00%   2.122 MB/1.041 GB    0.20%   7.004 MB/506.9 kB
docker_s3mobilemechanic_1         0.00%   2.093 MB/1.041 GB    0.20%   10.83 MB/539.6 kB
docker_s3shonarain_1              0.00%   2.834 MB/1.041 GB    0.27%   124.8 MB/1.701 MB
docker_shonarain_1                0.00%   25.97 MB/1.041 GB    2.50%   217.3 kB/3.545 MB
This really lets you home in on problem areas – particularly if you are trying to optimise for a tight memory environment. In doing this I found that MariaDB was using more memory than anything else. After some research I found that I could drop the innodb_buffer_pool_size from 256MB to 64MB and save a lot of runtime memory. I made a custom Dockerfile and posted it here: https://github.com/charltones/docker-mariadb-lomem
After running with this, memory usage was much more healthy.
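The essence of that Dockerfile is just dropping a config file into MariaDB's conf.d directory. A minimal sketch of the idea – the filename and base tag here are illustrative, see the repo above for the real version:

```dockerfile
# Hypothetical sketch: MariaDB with a smaller InnoDB buffer pool
FROM mariadb

# Shrink innodb_buffer_pool_size from the 256MB default to 64MB.
# Files in /etc/mysql/conf.d/ are read after the main my.cnf,
# so this overrides the default without editing it.
RUN printf '[mysqld]\ninnodb_buffer_pool_size = 64M\n' \
      > /etc/mysql/conf.d/lomem.cnf
```

The trade-off is that a smaller buffer pool means more disk reads under load, which is fine for a handful of low-traffic sites on a micro instance.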
Where’s that email?
My WordPress container wouldn’t send emails. This is a common problem and it can be quite tricky to find the root cause because there are so many different levels on which it can go wrong. At the most basic level, at least when using ssmtp (http://linux.die.net/man/8/ssmtp) like I am, you need to make sure your SMTP server settings are correct. Then, you need to make sure that you can send emails from the command line:
$ docker exec -it <container name> bash
<container>$ mail -s "Test Subject" user@example.com < /dev/null
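If that test mail never arrives, the first suspect is the ssmtp config itself, which lives in /etc/ssmtp/ssmtp.conf. A minimal sketch with placeholder values – substitute your own relay host and credentials:

```
# /etc/ssmtp/ssmtp.conf - placeholder values, not a real config
# Where mail for local users (root, www-data etc.) should go
root=you@example.com
# Your SMTP relay host and port
mailhub=smtp.example.com:587
UseSTARTTLS=YES
AuthUser=you@example.com
AuthPass=yourpassword
# Rewrite the From domain so the relay accepts it
rewriteDomain=example.com
hostname=example.com
```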
If you receive the email, then the next stage is to make sure that php can send emails:
$ docker exec -it <container name> bash
<container>$ php -a
php > mail('you@example.com', "Test PHP mail", "Test mail from PHP");
php > exit
This is where it went wrong for me. I got the brilliant error message:
sh: 1: -t: not found
Luckily I’d come across this before. There is a great blog post that dissects exactly what causes this http://axiac.ro/blog/2013/06/sh-1-t-not-found/
The solution was to add a php.ini in the right place which specified the sendmail command line. I fixed it in my customised wordpress dockerfile here: https://github.com/charltones/wordpress/commit/d898b007282c101321da205d303617e280def2c1
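That fix boils down to a single php.ini setting – by default PHP may end up running just "-t" with no sendmail binary in front of it, which produces exactly that error. A sketch of the setting (the ini file location varies by PHP image; with ssmtp installed, /usr/sbin/sendmail is normally a symlink to it):

```ini
; php.ini (or a file under conf.d/)
; Give PHP the full sendmail command line instead of letting it
; guess - "-t" reads recipients from the message headers
sendmail_path = /usr/sbin/sendmail -t -i
```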
Restart!
Occasionally one of the docker containers will exit. In my experience there is little in the logs to explain why this happens. One thing that can help is to specify a restart policy in the compose file. This is as simple as adding a:
restart: always
line to every container inside your docker-compose.yml
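So each service in the compose file ends up looking something like this (service name and image are just examples):

```yaml
db:
  image: mariadb
  # Restart the container automatically whenever it exits,
  # whatever the exit code
  restart: always
```

It won't fix whatever made the container die, but for a hobby setup it turns a mystery outage into a blip.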
Wipe, Rinse, Repeat
Applying these fixes meant wiping everything and restarting many times. If I had needed to restore data manually each time, I would never have bothered. This really showed the benefit of a fully bootstrapped, wipe-clean setup: every time I wipe and restart, all the data is restored to the latest state automatically. Build this in from the start rather than trying to add it later.