Cloud Native 5 Minutes at a Time: What is a Container?

Eric Gregory - January 24, 2022

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals — especially when you need to fit your learning in a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system — beyond that, you don’t need any special preparation to get started.

In this lesson, we’ll explain the technology at the gravitational center of the cloud native galaxy: containerization.

Table of Contents

  1. What is a Container? ← You are here
  2. Creating, Observing, and Deleting Containers
  3. Build Image from Dockerfile
  4. Using an Image Registry
  5. Volumes and Persistent Storage
  6. Container Networking and Opening Container Ports
  7. Running a Containerized App
  8. Multi-Container Apps on User-Defined Networks
  9. Docker Compose and Next Steps

What is a container?

Containers are sandboxed software environments that share common dependencies, such as the operating system kernel. You might run many containers on the same machine; each can bring along the particular binaries or libraries it depends on, while all of them share the same operating system layer.

[Image: graphical representation of a container]

More technically, we can say that containers are groups of processes isolated by kernel namespaces, control groups, and restrictions on root privileges and system calls. We’ll see what that means in the next lesson.
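If you're curious, you can peek at these mechanisms on a Linux host right now. The commands below are an optional sketch using standard Linux tools (they assume a Linux machine with util-linux installed); in later lessons, Docker will manage all of this for us.

    # Every process belongs to a set of kernel namespaces; list your shell's:
    ls -l /proc/$$/ns

    # List all namespaces visible on the system (lsns is part of util-linux):
    lsns

    # Show the control group (cgroup) membership of the current shell:
    cat /proc/$$/cgroup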

But first, we should think about the purpose of containerization. Why would we want to isolate our processes in the first place? Why not simply run programs in parallel on the same system?

Why use containers?

There are many reasons why you might need to isolate processes, especially in the enterprise. You may wish to keep processes separate for the sake of security, so that one program can’t access data from another. You may need to be certain that a process doesn’t have access to root privileges and system calls.

Or it may be a simple matter of resource efficiency and system hygiene. For example, a given machine may need to run one process that relies on Python 2.7 and another that requires Python 3. Once such competing dependency requirements start to compound, they can create real headaches, and process isolation goes a long way toward resolving them.
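As a quick preview of where we're headed, the sketch below shows how two containers with different Python versions could run side by side on the same host. It assumes Docker is already installed, and the image tags are simply illustrative examples from Docker Hub.

    # One container based on Python 2.7, another on Python 3 -- no conflict:
    docker run --rm python:2.7 python --version
    docker run --rm python:3.10 python --version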

One way to isolate processes is to run them on dedicated virtual machines (VMs). For some use cases, this may be the most suitable approach, but containers offer advantages that VMs do not. Because VMs simulate an entire machine, including the operating system, they are usually much more resource-intensive. Containers, being comparatively lightweight, are more portable and easier to replicate.

Indeed, the portability and scalability of containers open up further uses. Containers can speed development by providing pre-fabricated software modules in the form of container images: easy-to-download container configurations with a certain set of applications and dependencies ready to go. These container images provide readily accessible building blocks for developers, as well as a canvas that is easy to standardize across an organization.
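For instance, the sketch below pulls a prebuilt image from a public registry and runs it as a container. The nginx image is just an illustrative example here; we'll cover image registries properly in a later lesson.

    # Download a prebuilt image (a ready-made building block) from Docker Hub:
    docker pull nginx

    # Run it as a container in the background:
    docker run --rm -d --name web nginx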

Those are some powerful advantages that can transform the way an organization delivers software. So how do you get started working with containers? What are the primary containerization tools? Most beginners will want to start with Docker.

What is Docker?

Today, “Docker” might refer to the company, Docker Inc., or the suite of tools that they package in their Docker Desktop application for Mac and Windows. But all of that is built around Docker Engine: the application that builds the sandbox walls and passes messages from the processes inside to the kernel. When we refer to Docker in these lessons, we’re talking about the container engine. It sets up structures like control groups that isolate containerized processes from the rest of the system (and, at least initially, from one another).
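Once Docker is installed (covered below), a couple of commands make this client/engine split visible. This is just an optional illustration:

    # The docker CLI is a client that talks to the Docker Engine daemon:
    docker version    # reports both the Client and the Server (engine) components

    # Engine-level details, including the cgroup driver used for isolation:
    docker info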

Today, there are many alternative technologies available, including our own Mirantis Container Runtime. Often, these are designed with extra functionality – Mirantis Container Runtime, for example, provides features for enterprise security and compliance – and are built on the same open-source bones as Docker Engine.

For the purposes of this tutorial, we will use Docker Engine, which is easy to install and includes everything you need to get started.

How to install Docker

To install Docker Engine on your system, navigate to the download page for your operating system:

  • Linux
  • Mac
  • Windows
If you’re on Mac or Windows, you’ll be downloading Docker Desktop, a suite of tools packaged with a graphical user interface for launching and managing containers. You’ll need to have this running as you work through the exercise below. Docker Desktop is not yet available on Linux; instead, Linux users will simply install the container engine.

Linux users: Under the “Server” heading on the download page, choose your Linux distribution and follow the instructions to install via the relevant package manager, install manually, or install from binaries. If you’re not sure which to choose, I recommend using a package manager.

Windows users: In addition to Docker Engine, you will need a Unix-like terminal. If you don’t already have a favorite tool for this purpose, Git Bash is a simple solution.
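Whichever platform you're on, a quick sanity check from your terminal confirms that the engine is up and running (hello-world is a tiny official test image):

    # Check the installed version:
    docker --version

    # Pull and run a minimal test container:
    docker run hello-world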

That’s it for this introduction. In the next lesson, we’ll start working with Docker to create, observe, and delete containers.
