One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule.
In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.
In the last lesson, we deployed a web application in a multi-container configuration on a user-defined network. The process wasn’t too difficult—but it could be streamlined. Fortunately, Docker includes tools for simplifying multi-container deployments. Docker Compose is a tool for specifying multi-container deployment details in one easy-to-run file.
Table of Contents
- What is a Container?
- Creating, Observing, and Deleting Containers
- Build Image from Dockerfile
- Using an Image Registry
- Volumes and Persistent Storage
- Container Networking and Opening Container Ports
- Running a Containerized App
- Multi-Container Apps on User-Defined Networks
- Docker Compose and Next Steps ← You are here
When we set up our multi-container deployment of Mediawiki using MySQL, we needed to manually configure a number of options: a user-defined network, volumes, ports, passwords, and so on. It’s not hard to imagine that we might need to go through this configuration process repeatedly with the same options.
Under such circumstances, we can use Docker Compose. Docker Compose files are written in YAML (which stands for either Yet Another Markup Language or YAML Ain’t Markup Language, depending on who you ask). A Mediawiki implementation like the one we deployed last lesson might look something like this:
```yaml
# MediaWiki with MySQL
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8000:80
    volumes:
      - /var/www/html/images
  database:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
```
Many of the specifications above will look familiar. We’re creating two “services”: the Mediawiki app and the MySQL database. For the app, we’re publishing port 80 to the local machine’s port 8000 and specifying an in-container directory for persistent data. For the database, we’re setting the root password through an environment variable.
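One detail worth flagging: a volumes entry with only an in-container path, like /var/www/html/images above, creates an anonymous volume with an auto-generated name. If you’d rather have a volume you can identify later, Compose also supports named volumes. A minimal sketch (the name wiki_images is our own choice, not part of this lesson’s deployment):

```yaml
services:
  mediawiki:
    image: mediawiki
    volumes:
      # Named volume instead of an anonymous one
      - wiki_images:/var/www/html/images

# Top-level declaration of the named volume
volumes:
  wiki_images:
```

Either form persists the data; the named volume is simply easier to find with docker volume ls later.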
But what do we mean by service? This is an abstraction that Docker uses to organize large deployments: a service definition specifies a container image along with the configuration it needs to run as one component of a larger deployment.
In a complex deployment, specific pieces of functionality are sometimes broken down into discrete modules that can communicate with one another through web interfaces and can be managed by small teams. This approach is called a microservices architecture, and our simple two-container deployment gives us a highly simplified way to understand the architectural pattern. In this case, the application is broken down into containerized components with clearly defined jobs: the application and the database. The application logic could be further decomposed into modules handling the specific logic for, say, authenticating accounts, editing posts, serving the site, and so on.
But for now, we’re concerned with composition, not decomposition! Go ahead and copy the above into a file called wiki.yml and save it in a project folder on your host machine. Take note: this pattern of organizing multi-container deployments in YAML files will be a recurrent feature in your journey through cloud native systems. If you’re starting fresh, this could be your first time encountering a YAML file, but it certainly won’t be your last.
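Before running anything, you can ask Docker Compose to parse and validate the file, which catches YAML indentation mistakes early. (This sketch assumes wiki.yml is in your current working directory.)

```shell
# Parse wiki.yml and print the resolved configuration;
# exits with an error if the YAML is malformed
docker-compose -f wiki.yml config
```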
Make sure you don’t have any containers from previous lessons running. Then in your terminal, run:
```shell
docker-compose -f wiki.yml up
```
The -f argument points to a custom file name—in this case, wiki.yml. (Otherwise, Docker Compose expects the YAML file to be called docker-compose.yml, which could get confusing if you had many such files on the same system.)
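As a sketch of that default behavior: if you copy the file to the expected name and run the command from the same directory, the -f flag isn’t needed at all.

```shell
# With the default file name, Compose finds the file automatically
cp wiki.yml docker-compose.yml
docker-compose up
```

For this lesson, we’ll stick with the explicit -f wiki.yml form.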
This command will start the process in attached mode, so you’ll see continuous logs in your terminal tab. When you navigate to localhost:8000, you’ll go through the same setup process as before, with one difference: you’ll need to run docker container ls to find the name of the database container to enter as the database host. (It’s likely wiki_database_1.) You’ll also have to download LocalSettings.php again. This time, place it in the same host machine directory as wiki.yml, and edit wiki.yml to look as follows:
```yaml
# MediaWiki with MySQL
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8000:80
    volumes:
      - /var/www/html/images
      - ./LocalSettings.php:/var/www/html/LocalSettings.php
  database:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
```
Now, when Docker Compose launches, it will mount your LocalSettings.php file into the container at /var/www/html/LocalSettings.php, where Mediawiki expects to find it.
Go ahead and end the process in your terminal tab with CTRL-C, then run Docker Compose again:
```shell
docker-compose -f wiki.yml up
```
We have a running multi-container deployment again—this time, with less grunt-work in the command line (and an easy-to-share set of instructions in YAML).
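If you’d rather get your terminal back, Docker Compose can also run the deployment in the background. A quick sketch of the relevant commands, assuming the same wiki.yml:

```shell
# Start the services in detached (background) mode
docker-compose -f wiki.yml up -d

# Follow the logs for the mediawiki service when you need them
docker-compose -f wiki.yml logs -f mediawiki

# Stop and remove the containers and network when you're done
docker-compose -f wiki.yml down
```

Note that down removes the containers and network but leaves volumes intact, so your wiki data survives the teardown.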
In this lesson, we wanted to simplify the deployment of a multi-container application. So what did we learn?
- We used the Docker Compose tool to configure our deployment in a single YAML file, which is easy to copy, share, place in version control, and so on.
- We found that using Docker Compose can significantly accelerate set-up for an environment.
- We learned that Docker Compose defines services for large-scale deployments, and that the services model forms the basis of the microservice architecture commonly used in cloud native development.
Where do we go next?
In Chapter 1, we noted that Docker is not the only container system. Docker has been the industry standard for nearly a decade, but alternative engines and runtimes are growing in popularity, often with their own well-defined use-cases:
- Podman provides a free and open source container engine well-suited for individual developers.
- Mirantis Container Runtime gives enterprises a container runtime with advanced cryptographic functionality to support compliance with security requirements, as well as native Windows and Linux support. It can also serve as the container runtime for Kubernetes.
- containerd is the open source runtime used by Docker Engine—not so much an alternative as the open source engine block spun out on its own and managed under the auspices of the Cloud Native Computing Foundation (CNCF). Scoped for use as a component with minimal direct user interaction, containerd provides an open container runtime option for many technologies including Kubernetes.
Fortunately, the rise of alternative runtimes and engines has not led to dramatic fragmentation in the container market. In 2015, Docker, Inc. founded the Open Container Initiative (OCI) to help establish open and standardized specifications for container runtimes and images. That makes OCI-compliant container images interoperable between OCI-compliant runtime environments—and it makes your knowledge about one runtime largely transferable to another. For example, the Podman homepage cheekily recommends using a command line alias to make “docker” commands run “podman”—from there, you can use most of the same commands you already know.
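That alias is a one-liner; once you set it in your shell, your muscle memory for docker commands carries over directly (this sketch assumes Podman is installed):

```shell
# Make "docker" invoke Podman instead
alias docker=podman

# This now runs "podman run --rm hello-world"
docker run --rm hello-world
```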
Taken on their own, container platforms provide a way to accelerate development, providing access to “building block” images and isolated, quickly provisionable environments. But the ephemerality and efficiency of containers have given rise to new software deployment patterns and applications, including container orchestrators: systems such as Kubernetes and Docker Swarm designed to coordinate containerized services.
If Docker Compose is like the composer of a complex musical score, Swarm and Kubernetes are the conductors of great orchestras, with containerized applications and services instead of instruments. By delivering applications in the form of containers, these systems can quickly replicate container-based deployments in order to achieve high availability and resiliency.
Today, we can identify a continuum of container tools with different (but sometimes overlapping) use-cases:
- Docker (and other user-facing container engines): designed for creating and running containers, well-suited to quickly deploying single-container apps on one host for development environments
- Docker Compose: a Docker tool that simplifies deployment of multi-container apps, also well-suited to development environments on one host
- Docker Swarm: a container orchestrator built into Docker, suitable for deploying applications across multiple hosts for production use
- Kubernetes: an open source container orchestrator originally developed by Google, well-suited for very large scale deployments with many different hosts, applications, or services—popular among enterprise users for its ability to scale services across a large number of nodes
There are many more complexities to unpack when it comes to Docker Swarm and Kubernetes, and the right tools for your cloud native projects depend on your context—but understanding the fundamentals of containerization will give you a solid foundation for whatever you set out to do.
In our next series of articles, we’ll dive headfirst into container orchestration, focusing on what developers need to know to build and run apps on Kubernetes. In the meantime, you can find all of the lessons from this unit collected in a free ebook, Learn Containers 5 Minutes at a Time, which includes a book-exclusive capstone chapter on deploying a Node, React, and MySQL app as containerized services. You can download the book for free here.