Monday, December 1, 2014

Docker: How I dockerized my websites


The Problem:


My current system for deploying my pretty awful websites is Chef. I'm not a web developer, and I haven't spent much time developing these websites, so don't judge me too much on them. I dabble with them every couple of months to get them closer to something good. They don't get much traffic, so I can run all three on one very tiny AWS instance. I'm still using the free tier, so who knows where I'll go once that's done. Actually, that brings me to how I arrived at my current setup. I originally hosted all three sites on my server at home, but power outages had become a problem.

However, I had no reliable way of redeploying these three websites to another server without a significant amount of time and effort. I had been using Chef at my day job, and this seemed like a great opportunity to learn more while finishing a task that needed to be done quickly: one of the websites was for my wedding, and people would soon need to access it for RSVPs and other information. I couldn't have it going down for half the day whenever the power went out. I had already gotten complaints when my ISP randomly changed my IP address, which I had thought was static.

So, I endeavored to set up a deployment method that would allow for fast redeployments and code updates. I wanted this to be a cross-platform implementation, but it never quite got there. I was running Debian at home and RHEL at work, so I went with RHEL, as it would be more beneficial to my knowledge base at work. Debian uses apt-get, so I had originally developed my Chef cookbook and roles with apt-get in mind. It didn't take much to switch to yum and the EPEL repo, but it did take extra effort that I wish hadn't been needed.

This Chef setup did allow multiple websites to be deployed at once using a single httpd server with multiple virtual hosts configured. Updates could be made whenever the code for one of the websites changed. Everything could be initiated with a simple chef-solo run, but it wasn't portable enough. I still had to manage the dependent cookbooks, though Berkshelf made that much easier. I also had to restart the single running httpd server whenever any website was updated, which created some risk that a bad update would bring the whole server down. I didn't have a sandbox environment I could use to ensure everything would work on an identical AMI. In fact, I didn't even have another RHEL or CentOS box that closely resembled the production AMI. This is not the way to do it.

The Solution (the part you actually care about):


That is when I discovered Docker. We had been discussing it at work for a while before it was officially released as 1.0. I had even created Jenkins slave containers to more efficiently utilize some of our resources, and I had set up a private registry at work so we could share our images when the time came to ramp up our Docker development. However, it hadn't occurred to me that I could run my own servers using Docker containers. I had originally thought about just using Chef to provision a container with the exact setup I was already running. That's lazy and really doesn't fit the purpose of Docker.

Chef can be used to manage your Docker configuration on a node, but containers should be immutable once created. Containers should also be composed into an app or system, much like developers already do with classes and libraries. If a container is running syslog, then you're probably doing it wrong (for the record, I haven't gotten to the point where I have a container for logging). My goal with Docker was to separate the concerns of my current setup so that I could update the code for one website without needing to restart all the servers. I also wanted an easier way to manage my deployments and upgrades.

I chose to separate each website into its own container running its own httpd server. This allowed a simpler configuration without virtual hosts and a single directory within the container where the code is stored. I also chose to use haproxy to route my website traffic to the appropriate container. To do this, I had to use links between the containers. I'll show how to do this manually below, along with the better way using fig.
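To give a sense of the manual approach, here is a rough sketch. The container and image names are placeholders of my own, and the proxy piece assumes the official haproxy image; with fig, the same links and ports would just live in a fig.yml instead of these commands:

# Start each website container; no host ports are published because only
# the proxy needs to reach them.
docker run -d --name danpluslaura danpluslaura-site
docker run -d --name dbdevs dbdevs-site
docker run -d --name cafezvous cafezvous-site

# Link all three into an haproxy container and publish port 80.
# The haproxy.cfg (not covered here) routes by Host header to the link aliases.
docker run -d --name proxy \
  --link danpluslaura:danpluslaura \
  --link dbdevs:dbdevs \
  --link cafezvous:cafezvous \
  -p 80:80 \
  -v "$(pwd)/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.5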

First, we need to create our website containers. With Chef, I had to install git or curl so that I could download the entire website repo during the Chef run. That's not a huge deal, but I didn't otherwise need either on my production machine. They aren't needed in the container either, though you could install them if you really wanted to. I chose to clone locally and then COPY the contents into the container.

The Dockerfile for each website is the same except for the website name:

danpluslaura.com
FROM httpd:2.4

MAINTAINER Daniel Barker (barkerd@dbdevs.com)

COPY ./danpluslaura/ /usr/local/apache2/htdocs
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf

dbdevs.com
FROM httpd:2.4

MAINTAINER Daniel Barker (barkerd@dbdevs.com)

COPY ./dbdevs/ /usr/local/apache2/htdocs
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf

cafezvous.com
FROM httpd:2.4

MAINTAINER Daniel Barker (barkerd@dbdevs.com)

COPY ./cafezvous/ /usr/local/apache2/htdocs
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf

Note that I have also included a custom httpd.conf file, but this is only needed so I can remove the www from the website address and set the ServerName:

ServerName cafezvous.com:80
...
RewriteEngine on
Options +FollowSymlinks
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ http://cafezvous.com$1 [R=301,L]

With this setup, the only things unique to each website are the same as in the original implementation (the website files and the custom configuration), except that I can now run three separate httpd servers. This setup also lets me develop locally by using Docker's -v option to mount the website files from my host system and have the httpd server in the container serve them directly, so the container sees changes to my code immediately during development. I could also use this method with a data container for production deployments, but that wouldn't be very useful in this case.
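The local development loop looks roughly like this (the image tag, container name, and host port are placeholders of my own, and I'm assuming the dbdevs image has already been built from the Dockerfile above):

# Bind-mount the working copy over htdocs so edits show up without a rebuild.
docker run -d --name dbdevs-dev -p 8080:80 \
  -v "$(pwd)/dbdevs":/usr/local/apache2/htdocs:ro \
  dbdevs-site

# Edit files under ./dbdevs and refresh http://localhost:8080 to see the changes.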

Now you may be asking how I plan to route traffic for each website to the appropriate container, since they can't all bind to port 80 on the host, and that's how visitors will expect to access them. I could run an httpd server on the host system or in another container to route each site to a particular port, but that isn't really a good use of httpd and is better suited to an actual proxy. I chose to use haproxy to route my website traffic, but I'll leave that for another blog post. The current setup will let you test your websites locally with hardly any code. If you want to update to a newer version of httpd, you simply change the FROM instruction, then rebuild and push your images. I won't cover that process here, as the Docker docs cover the basics very well and are updated regularly.
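For reference, the rebuild-and-push step is roughly this (the registry address and image name are placeholders, assuming a private registry like the one I set up at work):

# Rebuild after bumping the FROM line, then push to the registry.
docker build -t registry.example.com/dbdevs-site .
docker push registry.example.com/dbdevs-site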

Note: I could have installed curl or git or some other program to get my code into the container, but that would just create larger images, with space taken up by tools that would never be needed again. The only problem with cloning locally is ensuring you have the correct version checked out before building. I might have done this differently in a corporate environment, where I could download a tarball and unpack it in a single RUN instruction to eliminate most of that overhead, as sketched below. That would also allow versioning to be maintained in source control.
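A sketch of that alternative might look like the Dockerfile below. The tarball URL and version are made up, and it assumes the downloaded tools and archive get cleaned up in the same RUN so they never persist in the final image:

FROM httpd:2.4

MAINTAINER Daniel Barker (barkerd@dbdevs.com)

# Fetch a versioned release tarball and unpack it into htdocs in one RUN,
# removing curl and the apt caches in the same layer.
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL https://example.com/releases/dbdevs-1.0.tar.gz \
       | tar -xzf - -C /usr/local/apache2/htdocs \
    && apt-get purge -y curl && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*

COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf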
