Blog

Clouds and Containers and Swarms! Oh My!

If I manage a restaurant, I schedule enough employees to comfortably handle the average number of customers. For peak periods, or when I have a special catering event, I bring in extra employees to handle those exceptional situations. Cloud computing works much the same way.

If I manage a corporate website, I organize enough computing resources to comfortably handle the average number of site visitors. For expected high traffic periods -- such as Black Friday sales on an e-commerce site -- I bring in extra computing resources for the occasion. This doesn't involve temporarily installing more hardware for the high-traffic periods; it can all be done with software, using cloud computing.

Figure 1: Clouds and containers, but no swarms...

Some Terminology

First of all, what is cloud computing? Here's one description:

In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive. The cloud is just a metaphor for the Internet. It goes back to the days of flowcharts and presentations that would represent the gigantic server-farm infrastructure of the Internet as nothing but a puffy, white cumulus cloud, accepting connections and doling out information as it floats.

For the container-specific terminology that follows, I use Docker terminology, since Docker is arguably the leader in container technology.

For several reasons (explained below), it's useful to break a website down into distinct pieces, so that each piece can be managed independently. These pieces are called containers. My simple example below consists of three containers: an application, a web server, and a database.

An image is a template from which a container can be started. One of the great things about container technology is that my website's environment is easy to reproduce exactly, because all of the required programs and their versions are defined in a text file called a Dockerfile. If I send you my website's Dockerfiles, you can easily recreate the images, launch the containers, and thereby recreate my website. ¡Qué portabilidad! (Now that's portability!)
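To give a feel for what such a file looks like, here is a hypothetical Dockerfile for the application container described below. The base image, file names, and port are illustrative, not the author's actual files:

```dockerfile
# Hypothetical Dockerfile for the Flask application container.
FROM python:3.6-slim

WORKDIR /app

# Pin the dependencies so the image is exactly reproducible
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application source code
COPY . .

EXPOSE 5000
CMD ["python", "app.py"]
```

Running `docker build` against this file produces an image; running `docker run` on that image produces a container.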

A swarm is Docker's name for a cluster of machines that cooperate to manage, or orchestrate, a group of containers. If the website experiences heavy load, the swarm can add more containers to handle the increased demand. If a database container starts acting strangely, the swarm can replace it with a new one.
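A swarm reads its marching orders from a stack file. Here is a hypothetical one for the three containers in this example (image names, replica counts, and the password are illustrative); `docker stack deploy` would use it to start the services:

```yaml
# Hypothetical docker-compose.yml describing the three containers
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"          # all browser traffic enters here
  app:
    image: my-flask-app  # built from the application's Dockerfile
    deploy:
      replicas: 2        # the swarm keeps two copies running
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```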

One Responsibility per Container

A good strategy for dividing up a site's functionality is to dedicate a container to each responsibility. For example, one responsibility is managing the data in a database. Another is handling all web interactions. And a third is running the application logic.

There are good reasons for placing only one responsibility in a container:

  1. Some containers have functionality that is more compute-intensive and can cause a bottleneck. When performance starts to lag, more of that kind of container can be added to the swarm. If the entire website were packed into a single container, then adding resources to speed it up would require unnecessary duplication of parts that aren't causing the bottleneck.

  2. If part of the site needs to be upgraded or repaired, only that part's container needs to be modified. A change to the database doesn't have to affect the other containers.

  3. Similarly, if one of the containers starts to misbehave, only that container needs to be restarted.

Placing only one responsibility in each container results in a website that is more flexible, more scalable, and more fault-tolerant.
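With one responsibility per container, scaling or restarting just one piece is a single command. A hypothetical session (the service names assume a stack deployed as `mysite`):

```shell
# Scale only the bottlenecked service; the others are untouched
docker service scale mysite_app=5

# Force a rolling restart of just the misbehaving service
docker service update --force mysite_db
```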

My Example

My goal was to create a simple website composed of three containers that work together:

Main Application
This container holds the main application logic, which in this case is a simple blogging app that demonstrates CRUD (Create, Read, Update and Delete) functionality. It is implemented in Python using the Flask microframework.
Web Server
I am using the NGINX web server as a reverse proxy. It accepts all incoming requests, handles the ones it can, and passes the rest on to the main application.
Database
This is a MySQL database that contains all the CRUD data.
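The CRUD operations the blogging app exposes can be sketched as a minimal in-memory store. This is a hypothetical, Flask-free sketch of the logic only; the real app wraps these operations in Flask routes backed by the MySQL container:

```python
import itertools

class PostStore:
    """Sketch of the blog app's CRUD operations over an in-memory dict."""

    def __init__(self):
        self._posts = {}
        self._ids = itertools.count(1)

    def create(self, title, body):
        # Create: assign a fresh id and store the post
        post_id = next(self._ids)
        self._posts[post_id] = {"id": post_id, "title": title, "body": body}
        return post_id

    def read(self, post_id):
        # Read: return the post, or None if it doesn't exist
        return self._posts.get(post_id)

    def update(self, post_id, **fields):
        # Update: merge new field values into an existing post
        if post_id in self._posts:
            self._posts[post_id].update(fields)
            return True
        return False

    def delete(self, post_id):
        # Delete: remove the post; report whether it existed
        return self._posts.pop(post_id, None) is not None
```

In the real site, each of these methods would translate into a SQL statement sent to the database container.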

All requests from web browsers are initially received by the web server. If the request is for a static file (such as a CSS or JavaScript file), the web server can fulfill that request without "bothering" the application. This lightens the load on the application, and the NGINX web server is more efficient at this kind of task.

For all other requests, the web server forwards the request to the application. These requests will usually require database access -- either to read data or to write it -- so the application executes DB commands and returns the results.
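This split between static files and application requests is exactly what the NGINX configuration expresses. A hypothetical server block (paths and the service name `app` are illustrative):

```nginx
server {
    listen 80;

    # Static assets (CSS, JavaScript) are served by NGINX itself
    location /static/ {
        root /usr/share/nginx/html;
    }

    # Everything else is forwarded to the application container,
    # reachable by its service name on the Docker network
    location / {
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
    }
}
```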

Ready... Set... Deploy!

The example described above was first implemented on my private development computer, using the wonderful Docker for Mac development environment, which allows me to create images, containers, and swarms on my computer.

Once I got that working, it was time to try it out in a "real world" environment. I chose Amazon Web Services (AWS), not only because it is currently the largest cloud computing service provider, but also because Amazon has a "free tier" that offers limited use of cloud computing resources for a year. ;-)

Figure 2: The AWS dashboard showing my running website

Most cloud computing providers have their own unique systems for creating containers and swarms, but thanks to Docker's popularity, providers usually offer Docker compatibility. This is indeed the case with AWS.

First, I create a cluster, which defines the number of containers I want and the geographical region (US, Europe, Asia, and so on) whose physical computers will actually run my website. Then, after making a few adjustments to the configuration files I used on my development computer, I tell AWS to launch my containers in the cluster I created. Figure 2 shows the browser-based administrator dashboard for the task, with the three running containers listed at the bottom.

And thanks to the handy ecs-cli command-line tool, I can perform all of these operations from my terminal: create and destroy a cluster; start and stop my website; and add or remove additional containers.
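A hypothetical ecs-cli session covering those operations might look like this (the cluster name, region, and sizes are illustrative, and flags vary somewhat between ecs-cli versions):

```shell
# Point ecs-cli at a cluster name and region
ecs-cli configure --cluster my-blog --region us-east-1

# Create the cluster: two free-tier instances
ecs-cli up --capability-iam --size 2 --instance-type t2.micro

# Start the website from the compose file, then check on it
ecs-cli compose up
ecs-cli ps

# Add containers, then tear everything down when finished
ecs-cli compose scale 2
ecs-cli down --force
```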

Published: 19 Dec 2018