Cloud Native 5 Minutes at a Time: Multi-Container Apps on User-Defined Networks

Eric Gregory - April 1, 2022

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.

Last time, we ran a web app (the open source wiki platform Mediawiki) from a single container, with a persistent volume and a container port published to the host machine. That gave us a functional development environment in which we could modify and build on the application.

But cloud native deployments often consist of applications spread across multiple containers. How can we connect the constituent parts of a single application once they’ve been containerized? In this tutorial, we’ll learn how to let containers talk to one another by deploying Mediawiki in a multi-container configuration on a user-defined network.

Table of Contents

  1. What is a Container?
  2. Creating, Observing, and Deleting Containers
  3. Build Image from Dockerfile
  4. Using an Image Registry
  5. Volumes and Persistent Storage
  6. Container Networking and Opening Container Ports
  7. Running a Containerized App
  8. Multi-Container Apps on User-Defined Networks ← You are here
  9. Docker Compose and Next Steps

User-defined networks

When containers reside together on the default bridge network, you might expect them to be able to communicate with each other by name via the Domain Name System (DNS). But they can't. Instead, those containerized apps need to know one another's specific IP addresses in order to exchange data.

This is a deliberate restriction on the default bridge network. But why? Well, simply by virtue of being default, the docker0 bridge is apt to be a busy place, full of Docker containers without any necessary relationship to one another—without any need to communicate. That could be a security risk in a system predicated on isolation. So on the default bridge, Docker makes containers jump through a few extra hoops to communicate effectively.

When we have a group of containers that need to communicate, instead of using the default bridge we can place them in their own user-defined network. While this isn’t the only way to let containers communicate, it is the Docker-preferred way of doing things, since this creates a precisely scoped layer of isolation. We can create a user-defined network with the command:

docker network create -d <driver> <network-name>

The -d (or --driver) argument for this command specifies a model for the new network: bridge, overlay, or a custom driver option added by the user.

  • Bridge networks allow containers within the network—all of which must be on the same Docker daemon host—to communicate with one another while isolating them from other networks.
  • Overlay networks allow containers within the network—which may be spread across multiple Docker daemon hosts—to communicate with one another while isolating them from other networks. This driver is used by the container orchestrator Docker Swarm.
  • Custom drivers allow for custom network rules.

The bridge driver is the default, so if you don’t specify a driver, Docker will create a bridge network. The bridge driver is what we’ll be using in this lesson.
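As a quick sanity check, you can list the networks Docker knows about, create a user-defined bridge, and inspect it. (The name demo-net below is just a placeholder for illustration.)

```shell
# List all networks; Docker creates bridge, host, and none by default
docker network ls

# Create a user-defined bridge network (bridge is the default driver,
# so -d bridge is optional here)
docker network create -d bridge demo-net

# Inspect the new network to see its driver, subnet, and attached containers
docker network inspect demo-net

# Clean up the demo network
docker network rm demo-net
```

Running docker network ls afterward confirms the network is gone again.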

What about linking?

Another way to enable name-based communication between containers on the default bridge is container linking, which involves creating manual links between containers using the --link argument. This was once the standard technique for connecting containers, and you'll still see it used in the docs for many images. But Docker considers this a legacy option, which means that it is not recommended and may be disabled in the future; linking has been superseded by user-defined networks.
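For context, here is a hypothetical side-by-side of the two approaches (the container and image names are illustrative, not from a real project):

```shell
# Legacy approach: --link injects the database's address into the app
# container as environment variables and an /etc/hosts entry under
# the alias "db"
docker run --name some-mysql -d -e MYSQL_ROOT_PASSWORD=secret mysql
docker run --name some-app --link some-mysql:db -d some-app-image

# Modern equivalent: put both containers on a user-defined network,
# where name-based DNS resolution works automatically
docker network create app-net
docker run --name some-mysql --network=app-net -d -e MYSQL_ROOT_PASSWORD=secret mysql
docker run --name some-app --network=app-net -d some-app-image
```

Note that --link ties the two containers together by name at creation time, while a user-defined network lets any attached container reach any other by name.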

Exercise: A multi-container wiki using container networking


First, let’s create our new user-defined network. We’ll explicitly specify the bridge driver and name the new network wiki-net, so we can identify its role at a glance.

docker network create -d bridge wiki-net
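Assuming the command succeeds, you can verify the new network before attaching anything to it:

```shell
# Confirm the network exists and uses the bridge driver
docker network ls --filter name=wiki-net

# Show the subnet Docker assigned to wiki-net
# (the exact range will vary from host to host)
docker network inspect wiki-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```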

Now, our new implementation of Mediawiki is going to be broken into two containers:

  1. the application itself, and
  2. a MySQL database

Last time, we mentioned that our single-container configuration of Mediawiki using the SQLite database was best-suited to a development environment rather than real-world production deployment. But why is that? Why are we bothering with a multi-container configuration? For the answer, we should briefly consider the strengths and weaknesses of our database options.

  • SQLite is designed to be lightweight, portable, and easily embedded within an app. It’s principally a tool for local data storage.
  • MySQL needs a container of its own to serve the database; it has a heavier footprint, and is designed to handle many simultaneous requests rather than embedding directly with an app.

SQLite is great for development: we can set it up easily and it will run simply and quickly. But it’s not built for large, scalable datasets intended to grow indefinitely and to be queried concurrently by many different users. That’s a different use case, and so it calls for a different tool.

Neither database is “better” or “worse” than the other; they’re not even really comparable, because they serve different purposes. And those different purposes directly inform the container pattern we adopt. This might seem like a simple point, but it’s easy to forget and will constantly guide our approach in cloud native development: our solutions should be determined by our problems and contexts.

Now, let’s create our containerized MySQL database:

docker run --name wiki-mysql --network=wiki-net -v wiki-data:/var/lib/mysql -d -e MYSQL_ROOT_PASSWORD=root mysql

That’s a pretty hefty docker run command, so let’s break it down. We’re…

  • Naming the container wiki-mysql
  • Using the --network argument to assign the container to our new wiki-net network
  • Mounting the wiki-data volume (the same one from the previous lesson—if you deleted that volume, you may need to recreate it) and assigning it to the directory in the MySQL container where it expects to be able to save persistent data
  • Running in detached mode, as discussed in Lesson 6
  • Using the -e argument to specify an environment variable—in this case, a root user password for the database. (Never use a password like “root” in production, but we’ll use it here for the sake of replicability.)
  • Running the official mysql image from Docker Hub
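Before moving on, it's worth confirming that MySQL has finished initializing; the container can take several seconds to become ready after docker run returns. A quick check might look like this:

```shell
# Confirm the container is running
docker container ls --filter name=wiki-mysql

# Watch the startup logs; MySQL logs "ready for connections" once initialized
docker logs wiki-mysql 2>&1 | tail -n 5

# Optionally, run a query inside the container using the root password
# we set via MYSQL_ROOT_PASSWORD
docker exec -it wiki-mysql mysql -uroot -proot -e "SHOW DATABASES;"
```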

You might find it interesting to launch an nginx container on the default bridge and compare its IP address to that used by our MySQL container on wiki-net:

$ docker run --name nginx-test -d nginx
675eeead7df8d23fbb388826c58403223fd64cf21b9d44917dfb38091d1b6e7f
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx-test
172.17.0.2
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wiki-mysql
172.19.0.2

Here the two networks sit on different subnets, distinguished by the second octet of the address. Further containers on wiki-net will receive addresses in the 172.19.x.x range. (The exact subnets may differ on your machine.)
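You can also ask each network directly which containers are attached to it, which makes the separation easy to see at a glance:

```shell
# Containers attached to the default bridge network
docker network inspect bridge \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'

# Containers attached to our user-defined wiki-net network
docker network inspect wiki-net \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'
```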

Remove the nginx-test container if you created it, and make sure you don’t have any extraneous containers running with docker container ls. All you should see is wiki-mysql.

Now let’s get the wiki app itself up and running:

docker run --name wiki-app --network=wiki-net -v wiki-data:/wiki-data -p 8000:80 -d mediawiki

We’re running the app on the same network as the database, and we’re still mounting a volume, which the app will use for some configuration data. When we navigate to localhost:8000, we should see the same set-up screen as last lesson.

Move forward to the Connect to database screen just like you did last time, and now select “MariaDB, MySQL, or compatible.”

For Database host, you can simply enter the name of the database container—in this case, wiki-mysql. If these containers are restarted, they’ll still be able to interact with one another as configured, even if those future instances have different IP addresses.
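You can see this name resolution in action from inside the app container. The official mediawiki image is Debian-based, so getent should be available, and Docker's embedded DNS server answers for container names on the user-defined network:

```shell
# Resolve the database container's name from inside the app container;
# the reply should show wiki-mysql's current IP on wiki-net
docker exec wiki-app getent hosts wiki-mysql
```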

For Database name, you can choose any name you like. You don't need to enter anything for the table prefix. For the Database username, enter root, and for the Database password, enter the password we set via the MYSQL_ROOT_PASSWORD environment variable (also "root") when we created the database container.

[Screenshot: Mediawiki database setup page]

On the next screen, click Continue.

Finalize your administrator information, and then go through the installation process.

Like last time, you'll need to download the generated LocalSettings.php file and place it in the base directory of the wiki app. (You can review the instructions from the last lesson if you need a refresher on this process.) Then we'll commit our updated image and start a new container.

docker commit wiki-app wiki-app
docker container stop wiki-app
docker container rm wiki-app
docker run --name wiki-app --network=wiki-net -v wiki-data:/wiki-data -p 8000:80 -d wiki-app

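If you only need the file inside the running container, and don't need it baked into a new image, docker cp is a lighter-weight alternative. The destination path below assumes the official mediawiki image's default base directory of /var/www/html:

```shell
# Copy the generated LocalSettings.php into the running container
docker cp LocalSettings.php wiki-app:/var/www/html/LocalSettings.php
```

Keep in mind that a file copied this way lives only in that container's writable layer; it disappears if you remove the container, which is why the commit workflow above is the more durable option.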
Now we have a multi-container app with a production-grade database running on a user-defined network according to Docker best practices.

If we want to deploy this app repeatedly or at scale, we might wish to go through a little less manual configuration. Next time, in the final lesson of our container fundamentals module, we’ll learn how to streamline the deployment of multi-container applications.
