

What are containers, and why do they matter to OpenStack?

Nick Chase - October 13, 2015
Containers are currently experiencing a rebirth, and part of that is due to the growth of cloud computing. Containerization is not a revolutionary idea: container technologies have existed for years and have long been publicly available, but until recently, containers were rarely considered when architecting systems.
Today, virtually everyone has heard about Docker, many companies are evaluating Docker, and many want Docker, but few have been running Docker in production -- or even seriously -- for more than a year or so. So what changed in the last year?
Docker made containerization easy.
Before Docker, running containers required tons of hacks, a deep understanding of the overall system, and a certain dose of courage. Isolating processes or network stacks has technically been possible for a long time -- think of FreeBSD jails and similar implementations -- but with Docker, people need just one command (`docker`) to have something up and running in seconds.
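To see just how immediate this is, here's a minimal sketch (the image and port choices are illustrative) of pulling and running a web server with a single command:

```
# Docker fetches the nginx image from Docker Hub if it isn't already local,
# then starts the container in the background, all in one step.
docker run -d -p 8080:80 --name web nginx

# Seconds later, the server is answering requests:
curl http://localhost:8080
```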
What's more, immediate gratification is not the only reason for Docker's success. The availability of a huge, free Hub with virtually any application you can think of is very attractive to those who don't want to rebuild full virtualisation stacks every time, but who just want to quickly pull and assemble microservices into a bigger architectural result in minutes.
Bypassing the virtualisation stack, where possible, also means that a container starts in a fraction of a second where a virtual machine takes minutes, which in turn enables super fast deployment of entire infrastructures.
Of course, containers are not exempt from problems -- security among them -- that are common to immature, rapidly developing technologies. These are all challenges that must be addressed before running containers in mission-critical environments, but the promise of containerization remains valid and attractive for all.

What are containers, how are they structured, and how do they relate to VMs?

Many compare containers to virtualisation, but containers are not magically capable of running everything. For example, you can't set iptables firewall rules inside an ordinary container, because containers on a host share the kernel and do not abstract hardware the way VMs do. For the uninitiated, the habit of thinking of containers as "virtualisation lite" has the potential to crush your business model before you even get started.
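As a sketch of that limitation, assuming an image with iptables installed: the command fails in a default container because Docker drops the CAP_NET_ADMIN capability, and only succeeds when you explicitly widen the container's privileges:

```
# Fails in a default container: CAP_NET_ADMIN is dropped.
docker run --rm ubuntu iptables -L

# Works only when the capability is explicitly granted back:
docker run --rm --cap-add NET_ADMIN ubuntu iptables -L
```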
Obviously we need to take the time to understand containers better, so what are they?
Containers are a technology that enables developers to download ready-made base images, pack applications onto them, break them down into components, deploy and test each part in a continuous integration system, and push them to a registry, where system engineers can deploy them on top of the existing infrastructure and make them available to the world.
A container is a system abstraction that exists thanks to operating system-level virtualisation features. By exploiting these features, container systems can separate processes and networks into something similar to system sandboxes, so putting an application in a container is more or less like isolating it from the operating system: processes inside a container are independent; they see their own virtual filesystem, can communicate with the external world on their own, and so on. Many applications thought to run only inside a VM can also run inside a container.
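A quick illustration of that isolation (the `ubuntu` image is just an example):

```
# The container's process table contains little more than the command itself:
docker run --rm ubuntu ps aux

# And its root filesystem is the image's, not the host's:
docker run --rm ubuntu ls /
```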
The difference between containers and VMs lies in the decoupling of the stack: a VM stacks everything, from the kernel to the userland binaries and libraries to the application itself, in a totally isolated and independent silo. Containers instead share the kernel of the container host, may share userland binaries and other libraries, and run the specific application on top of these layers.
An image may also act as a base for other images. In Docker, it’s pretty normal to stack images one on top of another to get a functional result. You might start with the Ubuntu base image, add the Apache 2.4 web server, and create a web-based microservice on top of that.
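Here's a minimal sketch of that stacking; the image name, Ubuntu version, and content are all illustrative:

```
# Content for our microservice:
echo '<h1>Hello from a container</h1>' > index.html

# A Dockerfile stacks layers: Ubuntu base image + Apache + our content.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
COPY index.html /var/www/html/
CMD ["apachectl", "-D", "FOREGROUND"]
EOF

docker build -t my-web-microservice .
docker run -d -p 8080:80 my-web-microservice
```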
Images are the way containers are started: you launch containers (the runtime) from images (the building blocks). Images include the container software and are like a snapshot, but with a couple of additional features. First, they are built on top of a layered userspace filesystem, meaning that images can be stacked and share content. Second, they are highly portable. This allows users to share their images across systems, recycle them or make them available to other users via a public repository, update them with ease, and store them wherever convenient.
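A sketch of that portability in practice (the registry hostname is illustrative):

```
# Tag an image and push it to a registry for others to pull:
docker tag my-web-microservice registry.example.com/web:1.0
docker push registry.example.com/web:1.0

# Or export it as a plain tarball and import it on another host:
docker save -o web.tar registry.example.com/web:1.0
docker load -i web.tar
```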

Types of containers

In the beginning, there was chroot. chroot() is a system call in *NIX operating systems that changes the root directory of the currently running process. A process running in a chroot jail (that's the name of the environment) does not see the real filesystem root directory, but an apparent root of the user's choice.
This functionality lets you isolate applications by making a process see, for example, /mnt/root as /. But that means the only files the application will see are those in /mnt/root (the jail), so building a complete environment capable of running the application requires the operator to create a full tree of software within /mnt/root: binaries in /bin, userspace programs and libraries in /usr, configurations in /etc, and so on. This task is not particularly easy, especially for complex applications.
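A minimal sketch of building such a jail by hand (run as root; paths are illustrative):

```
# Create a skeleton tree and copy in the binary we want to jail:
mkdir -p /mnt/root/bin /mnt/root/lib /mnt/root/lib64
cp /bin/bash /mnt/root/bin/

# ldd lists the shared libraries bash needs; each must be copied
# under /mnt/root by hand, for example:
ldd /bin/bash
cp /lib/x86_64-linux-gnu/libc.so.6 /mnt/root/lib/

# Inside the jail, bash now sees /mnt/root as /:
chroot /mnt/root /bin/bash
```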
LXC was historically one of the first attempts to release a popular containerization technology. Together with other tools of the LinuxContainers ecosystem, it achieved its goal: wide adoption of LXC as a containerization system. It implements a system API to make the Linux kernel's container features available to users: by sharing the kernel, LXC makes it possible to create an architecture somewhere between chroots and VMs -- without sacrificing adherence to Linux standards.
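A sketch of the typical LXC workflow (the container name is illustrative):

```
lxc-create -n mycontainer -t ubuntu   # build a container from the Ubuntu template
lxc-start -n mycontainer -d           # start it in the background
lxc-attach -n mycontainer             # open a shell inside it
lxc-stop -n mycontainer               # shut it down
```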

Docker

Docker is just one of the container technologies out there, albeit the one that currently has the most traction. Docker is easy to understand, immediate to run, and comes with a great ecosystem of tools for orchestration. It simply works.
On a practical level, Docker is an API implementation: it has its own server (daemon), command line client, tools, and a wealth of utilities - from the registry to fancy UIs - packed in containers.
The power of Docker comes from the fact that the chroot-style isolation, process grouping, and separation that were previously possible required a series of different commands; with other containerization technologies, the developer or administrator had to solve frequent problems by hand, pulling in patches and tools and wiring them together. With Docker, all of that burden disappears, because every operation is encapsulated in just one client command: `docker`.
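For example, a whole lifecycle that once required many separate tools runs through that one command (image and container names are illustrative):

```
docker run -d --name db redis   # namespaces, cgroups, networking, image: one step
docker logs db                  # read the container's output
docker exec -it db sh           # open a shell inside the running container
docker stop db && docker rm db  # stop and remove it
```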
The command line client talks to the Docker server through a RESTful API; the server is a standard daemon, also reachable remotely, that accepts and processes requests. The daemon is responsible for handling images and containers on the Docker host, and binds to ports tcp/2375 and tcp/2376, as officially registered with IANA.
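As a sketch, assuming a daemon listening on tcp/2375 without TLS (fine for a lab, not for production), you can query the API directly:

```
curl http://localhost:2375/version          # daemon and API version info
curl http://localhost:2375/containers/json  # the same data `docker ps` shows
```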
Docker abandoned LXC, on which it was originally based, in favor of Libcontainer, a total rewrite in Go of the foundation library and APIs to abstract Linux kernel mechanisms for virtualization.

The Open Container Initiative in detail

In the wake of Docker's success, many other products appeared: Rkt from CoreOS, Amazon Container Service, Kurma from Apcera, and so on. This proliferation led to a fear of fragmentation among container technologies, but at DockerCon15 in San Francisco the groups began a unified effort to create a universal format for containers: the Open Container Initiative, under the Linux Foundation, standardizes how containers should be done, both in terms of images and running containers. Containers created in one of the Open Container formats can be distributed among the different ecosystems. The Open Container Initiative includes all the container players around - from Amazon to CoreOS to Google to VMware to Oracle to many, many others and, of course, Docker.
The Open Container specification (https://github.com/opencontainers/specs) is still somewhat of a work in progress, and every organization or individual is encouraged to contribute. So far the spec defines unified interfaces to containers, standards for platform and content agnosticism, and minimal requirements for industrial-grade delivery and automation. Companies that adhere to the Open Container specification will produce containers fully compliant with it.
The runC binary (https://github.com/opencontainers/runc) was also developed, with the aim of providing a single wrapper command that works with any kind of compliant container (instead of tool-specific commands like `docker` for Docker or `rkt` for Rocket).
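A sketch of what that looks like; exact subcommands have varied as the tooling has evolved, and the container name is illustrative:

```
runc spec            # generate a template config.json for a container bundle
runc run mycontainer # run the container described by the bundle in the current directory
```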

Microservices

So what does all this have to do with the rise of cloud? Well, one of the things that we're seeing in cloud computing is the rise of microservices.
In the cloud, monolithic application development is a thing of the past. The new paradigm is the microservices architecture: big applications, with all of their functionality, are decomposed into many tiny single-purpose services that communicate with each other through defined means, such as an API. Microservices have been called the Legos of cloud computing.
In many ways, containers are an excellent technology for implementing microservices: every containerized microservice has a unique role (database, queue, web server), and container orchestration (fully available in the most developed container ecosystems) composes the whole application by putting all those containers in communication with each other.
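A minimal sketch with Docker Compose, one such orchestration tool (service and image names are illustrative):

```
# Two one-purpose containers wired together into one application:
cat > docker-compose.yml <<'EOF'
web:
  image: my-web-microservice
  ports:
    - "8080:80"
  links:
    - db
db:
  image: postgres
EOF

docker-compose up -d
```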
There are several advantages to this approach: composability, faster workflows, separation of functionality, maintainability, and upgradability. Moreover, it is much easier to scale out a microservices framework across multiple machines. Microservices can be quickly and easily swapped out for more efficient equivalents, or rolled back in case of problems, without bringing down the rest of the system. Different technologies can be used in separate microservices, allowing teams to choose what's appropriate to the task at hand.
Some big companies, such as Amazon or Netflix, already have microservices-based applications in production.
So what we have is a programming paradigm (microservices) perfect for an environment (cloud) implemented very well by a technology (containers).

Summary

Containers are light and portable stores for software and its dependencies. Stated like that, it sounds boring, but containers are completely changing the way we develop, deploy, and run software.
The rise of Docker has been astonishing, and the integration between containers and cloud infrastructures is gaining more and more interest. In fact, OpenStack has not one but three projects with significant container aspects: Kolla looks at deploying OpenStack itself using containerized services, Murano provides an easy way to deploy containerized apps using Kubernetes, and Magnum provides an entire orchestration system for containers -- Containers as a Service.
In future articles, we'll look at building your own containerized applications, and using them within OpenStack and other environments.  Are you using containers, and what do you want to learn how to do?
