Cloud Native 5 Minutes at a Time: What is a Container?


One of the biggest challenges for implementing cloud native technologies is learning the fundamentals — especially when you need to fit your learning into a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system — beyond that, you don’t need any special preparation to get started.

In this lesson, we’ll explain the technology at the gravitational center of the cloud native galaxy: containerization.

Table of Contents

  1. What is a Container? ← You are here
  2. Creating, Observing, and Deleting Containers
  3. Build Image from Dockerfile
  4. Using an Image Registry
  5. Volumes and Persistent Storage
  6. Container Networking and Opening Container Ports
  7. Running a Containerized App
  8. Multi-Container Apps on User-Defined Networks

What is a container?

Containers are sandboxed software environments that share common dependencies, such as the operating system kernel. Many containers can run on the same machine; each can bring its own binaries and libraries where they differ, while all of them share the same operating system layer.

[Figure: graphical representation of a container]

More technically, we can say that containers are groups of processes isolated by kernel namespaces, control groups, and restrictions on root privileges and system calls. We’ll see what that means in the next lesson.
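You can peek at the kernel namespaces your own shell belongs to on any Linux system (a minimal sketch; it assumes a Linux host, since macOS and Windows expose these details only inside a virtual machine):

```shell
# Each process's namespace memberships are visible under /proc.
# Two ordinary processes show identical namespace IDs here;
# a containerized process would show different ones.
ls -l /proc/self/ns

# Print the PID namespace this shell lives in, e.g. "pid:[...]".
readlink /proc/self/ns/pid
```

If you run the same commands inside a container, the IDs differ from the host's — that difference is the isolation at work.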

But first, we should think about the purpose of containerization. Why would we want to isolate our processes in the first place? Why not simply run programs in parallel on the same system?

Why use containers?

There are many reasons why you might need to isolate processes, especially in the enterprise. You may wish to keep processes separate for the sake of security, so that one program can’t access data from another. You may need to be certain that a process doesn’t have access to root privileges and system calls.

Or it may be a simple matter of resource efficiency and system hygiene. For example, a given machine may have one process that relies on Python 2.7 and another that requires Python 3.1. Once such competing dependency requirements start to compound, they create a real headache that process isolation goes a long way toward resolving.

One way to isolate processes is to run them on dedicated virtual machines (or VMs). For some use cases, this may be the most suitable approach, but containers offer advantages that VMs do not. Because VMs simulate an entire machine, including the operating system, they are usually much more resource-intensive. Containers, by contrast, are relatively lightweight, which makes them more portable and easier to replicate.

The portability and scalability of containers open up further uses. Containers can speed development by providing prefabricated software modules in the form of container images: easy-to-download container configurations with a given set of applications and dependencies ready to go. These images serve as readily accessible building blocks for developers, as well as a canvas that is easy to standardize across an organization.

Those are some powerful advantages that can transform the way an organization delivers software. So how do you get started working with containers? What are the primary containerization tools? Most beginners will want to start with Docker.

What is Docker?

Today, “Docker” might refer to the company, Docker Inc., or the suite of tools that they package in their Docker Desktop application for Mac and Windows. But all of that is built around Docker Engine: the application that builds the sandbox walls and passes messages from the processes inside to the kernel. When we refer to Docker in these lessons, we’re talking about the container engine. It sets up structures like control groups that isolate containerized processes from the rest of the system (and, at least initially, from one another).
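On a Linux host, you can see the control-group membership of any process directly (a minimal sketch; the exact paths vary between cgroup v1 and v2 systems):

```shell
# Show which control groups the current shell belongs to.
# On a cgroup-v2 system this is typically a single line like "0::/user.slice/...";
# a process started by a container engine would appear under a
# container-specific path instead.
cat /proc/self/cgroup
```

Control groups are what let the engine meter and cap the CPU, memory, and I/O a container may consume.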

Today, there are many alternative technologies available, including our own Mirantis Container Runtime. Often, these are built on the same open-source bones as Docker Engine but designed with extra functionality; Mirantis Container Runtime, for example, adds features for enterprise security and compliance.

For the purposes of this tutorial, we will use Docker Engine, which is easy to install and includes everything you need to get started.

How to install Docker

To install Docker Engine on your system, navigate to the download page for your operating system:

  • Linux
  • Mac
  • Windows
If you’re on Mac or Windows, you’ll be downloading Docker Desktop, a suite of tools packaged with a graphical user interface for launching and managing containers. You’ll need to have this running as you work through the exercise below. Docker Desktop is not yet available on Linux; instead, Linux users will simply install the container engine.

Linux users: Under the “Server” heading on the download page, choose your Linux distribution and follow the instructions to install via the relevant package manager, install manually, or install from binaries. If you’re not sure which to choose, I recommend using a package manager.

Windows users: In addition to Docker Engine, you will need a Unix-like terminal. If you don’t already have a favorite tool for this purpose, Git Bash is a simple solution.
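Once installation is finished on any platform, you can confirm the engine is up from your terminal (a quick sanity-check sketch; it assumes the `docker` CLI is on your PATH and the daemon is running):

```shell
# Print client and server version information; if the "Server" section
# appears, the Docker daemon is installed and reachable.
docker version

# Run a tiny test container that prints a greeting and exits;
# --rm removes the container afterward.
docker run --rm hello-world
```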

That’s it for this introduction. In the next lesson, we’ll start working with Docker to create, observe, and delete containers.
