Bye bye cPanel, hello Docker and AWS
I’ve had a bunch of small websites, including this one, running on a variety of cPanel (http://cpanel.net/) hosting accounts for a number of years. I’ve never been particularly happy with cPanel, but since it seemed like the de facto way of doing web hosting unless you run your own dedicated machine, I never really questioned it. I always thought cPanel just had too much… stuff in it. Too many ways in, too many features, too much clutter.
Last year my reseller account was compromised a couple of times, which I discovered was ultimately down to vulnerabilities in WHMCS (http://www.whmcs.com/), the reseller management front end that came with my account. This was surely my fault for not keeping up to date with the patches, but it was enough of a nudge to clean everything up. There were just too many layers of PHP stuff for my liking: WHMCS, WHM, cPanel (a master one and then one for each client site). All of these exposed FTP, WebDAV, web and management portals, and backups, and none of it was over SSL. None of my customers actually wanted cPanel – they just wanted a place to put HTML files so they would appear on the web.
I decided to consolidate all of my sites in one place. Since I use AWS for work and I’m interested in learning about Docker, I chose to run the whole lot as containers on a single AWS instance (a single instance because I’m trying to stay within the free tier). I wanted to keep everything as maintenance-free as possible, so I tried to automate from the beginning. The first thing to automate was the AWS machine creation. I went with Vagrant for this, since I’d already been playing with Vagrant while getting the Docker setup working on a local machine. I used the Vagrant AWS provider from here: https://github.com/mitchellh/vagrant-aws
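The provider isn’t bundled with Vagrant, so it has to be installed as a plugin first. This is just the standard install command from the plugin’s README, nothing specific to my setup:

# install the AWS provider plugin for Vagrant
vagrant plugin install vagrant-aws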
Here’s my Vagrantfile:
Vagrant.configure(2) do |config|
  # This configuration is for our EC2 instance
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "<aws access key>"
    aws.secret_access_key = "<aws secret access key>"
    # ubuntu AMI - 64 bit trusty, eu-west-1, hvm
    aws.ami = "ami-73f97204"
    aws.keypair_name = "<aws keypair>"
    aws.security_groups = ["<aws security group>"]
    aws.region = "eu-west-1" # or other region
    aws.instance_type = "t2.micro"
    override.vm.box = "dummy"
    override.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "path to private key.pem"
  end

  # Install latest docker
  config.vm.provision "docker"

  # rsync our content
  config.vm.synced_folder "sync", "/vagrant"

  # bootstrap haproxy and all the containers
  config.vm.provision :shell, :path => "bootstrap.sh"
end
When I run “vagrant up”, it does the following:
- Creates a new micro instance in the specified region, using the specified AMI and security group
- Installs the latest version of Docker on it (this uses the Vagrant Docker provisioner)
- Sets up a folder of local files to be rsync’ed to the new machine
- Runs a bootstrapping script to do the rest of the setup
I only encountered one problem with this: the rsync happens later in the process, so my bootstrap script (which needs some files from the rsync) initially fails. I’m still working on that bit.
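One workaround I’ve been looking at (not baked into the Vagrantfile above, just a manual sequence using standard Vagrant commands) is to let the first provision fail, push the content up, and then re-run the provisioners:

# first run - the shell provisioner may fail because the files aren't there yet
vagrant up
# push the synced folder content to the instance
vagrant rsync
# re-run the provisioners now that the files are in place
vagrant provision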
The bootstrapping script does the following:
#!/bin/sh

# install any extra packages
apt-get update
apt-get install -y haproxy

# copy our config changes over - mostly to get haproxy logging to rsyslog
cp -r /vagrant/config/etc /

# restart changed services
service rsyslog restart
service haproxy start

# build our basic nginx / php container
docker build -t nginxphp /vagrant/nginxphp

# and the wordpress one
docker build -t wp /vagrant/docker-wordpress-nginx

# launch the docker containers
docker run -p <docker_host_port1>:80 --name dashboard -v /vagrant/<rsynced folder name>/www:/var/www:rw -d nginxphp /sbin/my_init
docker run -p <docker_host_port2>:80 --name charltones -v /vagrant/<rsynced folder name>/www:/var/www:rw -d wp
# ... and many more of these, one for each site
This script sets up haproxy, which runs on the Docker host and forwards web traffic to the correct container. It then builds a couple of images: one for WordPress and a generic nginx/PHP one for the mostly static sites. Finally it launches a container for each site. The static files come from the rsynced folder, and each container exposes its web server on a different host port for haproxy to forward to. Whenever I edit any site content, I just type “vagrant rsync” and the new content is automatically rsync’ed to AWS.
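The routing is handled by the haproxy config that gets copied over from the config folder: it matches on hostname and sends each request to the backend port the matching container was published on. Roughly, the relevant part looks something like this (the hostnames are placeholders, and the global/defaults sections are omitted):

frontend http-in
    bind *:80
    # route requests to the right container based on the Host header
    acl host_dashboard hdr(host) -i dashboard.example.com
    acl host_charltones hdr(host) -i charltones.example.com
    use_backend dashboard_backend if host_dashboard
    use_backend charltones_backend if host_charltones

# each backend points at the host port a container was published on
backend dashboard_backend
    server dashboard 127.0.0.1:<docker_host_port1>

backend charltones_backend
    server charltones 127.0.0.1:<docker_host_port2>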
I would like to take this setup further – here are some next steps I want to look at: