
Magnum vs Murano: An OpenStack container strategy

If the last 18 months or so have shown us anything, it's that containerized development is here to stay. And why not?  Containers are small, they don't take a lot of resources, they're portable, they start quickly ... what's not to like?

Some of the reasons for the popularity of containers are practical, and stem from improvements to traditional practices. For example, the nature of a container makes it possible to move an application from development to staging to production without introducing new variables that can compromise reliability.

Some reasons come from other emerging technologies and architectures.  Take, for example, applications built on microservices, in which components handle only small bits of an application and can be easily swapped out with new versions without affecting the rest of the application.  These applications require a microservice to spawn and return a result nearly instantaneously, which is one of the strengths of containers.

Given all this, it's no surprise that containers -- which had been around for years -- seemed to take the world by storm when Docker made them more convenient to use.  Despite the near-hysteria across the industry when Docker first emerged, however, containers are not a replacement for virtual machines -- in fact, in some environments they're dependent upon them. This is because a container isn't exactly a fully-fledged life form. It's a configured environment with applications and processes -- even its own operating system, in many cases -- but it doesn't bring its own kernel. No, a container must run somewhere -- and in many cases, that "somewhere" is a virtual machine.

And that, for many people and enterprises, means OpenStack.

OpenStack and containers

Because the OpenStack community realizes the importance of containers, it's no surprise that projects have sprung up to make it possible to use containers from within the cloud platform.

To start with, there was nova-docker, an attempt to create a Docker hypervisor driver for Nova, the OpenStack compute project. But containers aren't VMs; you can't use them interchangeably, and eventually the community realized that the Nova API simply wasn't a good fit for the container architecture, and went in another direction.

When Google released the Kubernetes container management system, based on its internal Borg orchestrator, the team behind Murano, the OpenStack Application Catalog, took advantage of the opportunity to add containers to OpenStack by creating a Kubernetes application for Murano.  This application enabled developers to create a Kubernetes cluster in their OpenStack cloud using the Murano dashboard, simplifying the process and enabling them both to use Kubernetes on OpenStack easily and, perhaps more importantly, to create Kubernetes/Docker-based applications that could be easily deployed by end users.

Meanwhile, container fever was spreading like wildfire, and the OpenStack community decided that it was time for containers to be "first class citizens" of OpenStack, just like VMs and bare metal servers. To do that, the OpenStack Container Service project, Magnum, set out to build containers into OpenStack in a way that took advantage of the capabilities of both platforms. The end result is a service that enables users to create a "bay" into which containers can be deployed. These bays can be orchestrated by Kubernetes, Docker Swarm, or Mesos, providing flexibility to developers.

Of course, this seeming duplication can lead to confusion, so the Magnum and Murano projects are coming together to eliminate duplicate functionality by creating a Magnum plugin for Murano. This way, developers who want to get the advantage of using Murano to create applications will also be getting the benefits of Magnum functionality. (We'll be looking at that process in detail in a future article.)

But in the meantime, if you're developing a container application on OpenStack, which way should you go?  Murano?  Magnum?  Neither?

At the moment, there are three different approaches to using containers in OpenStack, each of which is at a different layer of management -- a different ratio of control to convenience.

Using a configuration management tool such as Puppet or Ansible

The most obvious way to use containers in OpenStack is to use a configuration management tool such as Puppet or Ansible to request and configure cloud resources on which to run your container apps.  In this case, you'd basically use the tool to set up your VMs, set up security groups, set up networking, set up Kubernetes or some other orchestration tool on those VMs, set up your application, and so on.
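
To make that concrete, here's a minimal sketch in Python of the kind of groundwork such a tool does for you, using the OpenStack SDK (openstacksdk). The cloud name, image, flavor and key pair are placeholders for whatever your Puppet or Ansible roles would actually reference, and installing Kubernetes (or another orchestrator) on the resulting VMs is still entirely up to you.

import openstack

# Connect using a cloud defined in clouds.yaml; "mycloud" is a placeholder.
conn = openstack.connect(cloud='mycloud')

# Networking and a security group for the container hosts.
net = conn.create_network('k8s-net')
conn.create_subnet(net.id, cidr='10.0.10.0/24', ip_version=4)
secgroup = conn.create_security_group('k8s-nodes', 'Container host traffic')
conn.create_security_group_rule(secgroup.id, port_range_min=22,
                                port_range_max=22, protocol='tcp',
                                remote_ip_prefix='0.0.0.0/0')

# Boot the VMs that will become Kubernetes nodes; the image, flavor and
# key names are assumptions -- substitute whatever exists in your cloud.
for name in ('k8s-master', 'k8s-node-1', 'k8s-node-2'):
    conn.create_server(name=name, image='ubuntu-14.04', flavor='m1.medium',
                       network='k8s-net', key_name='mykey', wait=True)

# From here, Puppet or Ansible would install and configure Kubernetes (or
# another orchestrator) and your application on these hosts.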

This method provides the most control over the final environment, because you're deciding virtually everything yourself.  You'll also have access to all of the features of the container orchestrator, because you're using it directly.

It is, however, also the least convenient, because you're responsible for deciding virtually everything yourself.  That means that deploying the final application isn't something you can easily hand over to the end user.  In general, it's not even something you can really hand over to a developer, because this is a dev/ops thing and requires in-depth knowledge of what's going on in the cloud.

Of course, in some organizations, that's not necessarily a handicap. This method is best for organizations that are still doing things "old school", where IT deploys applications and then provides endpoints to users rather than enabling a self-service environment.

Using the OpenStack Container Service (Magnum) to deploy a containerized environment

Moving one step up the management ladder, we have the OpenStack Container Service, Magnum.  In contrast to using Puppet or a similar tool, Magnum makes containers available as first class citizens in OpenStack, so you don't have to do everything yourself.

Instead, you can use the Magnum API to create a bay into which containers can be deployed. Magnum interfaces with the container orchestration system to create and manage the environment, so you don't have to worry about details such as creating VMs, managing networks, and so on.  Magnum currently supports Kubernetes, Docker Swarm, and Mesos as the orchestrator for a bay.
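
As a rough illustration, here's what creating a bay might look like from Python with the bay-era python-magnumclient. The client factory and field names are approximations of that API, and the Keystone URL, credentials, image, key pair, network and flavor values are placeholders for your own cloud.

from keystoneauth1 import loading, session
from magnumclient.client import Client

# Authenticate against Keystone; the URL and credentials are placeholders.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='demo', password='secret',
                                project_name='demo',
                                user_domain_name='Default',
                                project_domain_name='Default')
magnum = Client(version='1', session=session.Session(auth=auth))

# A baymodel describes what the nodes look like and which orchestrator
# (COE) to use; the values here are assumptions for your cloud.
model = magnum.baymodels.create(name='k8s-model',
                                image_id='fedora-atomic',
                                keypair_id='mykey',
                                external_network_id='public',
                                flavor_id='m1.small',
                                coe='kubernetes')

# The bay itself: Magnum builds the VMs, networking and the chosen
# orchestrator for you (kubernetes here; swarm and mesos are the others).
bay = magnum.bays.create(name='k8s-bay',
                         baymodel_id=model.uuid,
                         node_count=2)

Once the bay is up, you talk to the chosen orchestrator directly -- kubectl for a Kubernetes bay, the Swarm API for a Docker Swarm bay, and so on -- to actually run your containers.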

Using Magnum provides less control than doing it yourself, but still a pretty good amount; you decide what kind of bays to create, how many, and so on.  It's also much more convenient than doing it yourself, because OpenStack handles deployment -- but it's still very much a dev/ops tool, and despite an emerging UI, it's not for end users.

Using Magnum directly is best for developers who are specifically writing containerized apps that won't need to be deployed by end users, or who need features Murano doesn't yet have, such as the Kubernetes external load balancer, which exposes services to the external network, and TLS certificates, which secure communication between the master and minions, as well as between Magnum services and the Kubernetes services running on bays.

Using the OpenStack Application Catalog (Murano) to deploy a containerized environment

When considering Murano for a container-based application, we need to make a distinction: Murano itself isn't a container environment. Instead, it's an application catalog that happens to have a Kubernetes application for deploying containers.

The advantage of using Murano is that as a developer, not only do you not have to manage Kubernetes yourself (though you can), you also have the ability to create a Kubernetes-based application that can be easily deployed by users.  In fact, users may not even be aware that the application is running on containers; all they know is that they requested an application, and it's available. Murano handles all of the internal provisioning for them based on choices made in an easy-to-use user interface.
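
Under the hood, the dashboard is simply driving Murano's environment/session/deploy workflow, which you can also call directly. The sketch below uses python-muranoclient with approximate signatures, a placeholder endpoint and token, and far fewer properties than a real Kubernetes cluster application actually requires; the application class name is taken from the murano-apps catalog and is an assumption here.

import uuid
from muranoclient.client import Client

# Endpoint and token are placeholders; normally you'd get both from Keystone.
murano = Client('1', endpoint='http://controller:8082', token='TOKEN')

# All Murano changes happen inside an environment and a configuration
# session; nothing is provisioned until the session is deployed.
env = murano.environments.create({'name': 'k8s-demo'})
sess = murano.sessions.configure(env.id)

# Add the Kubernetes cluster application to the environment. A real
# deployment needs more properties (master/minion node definitions, etc.)
# than shown in this sketch.
app = {
    '?': {'type': 'io.murano.apps.docker.kubernetes.KubernetesCluster',
          'id': uuid.uuid4().hex},
    'name': 'k8s-cluster',
}
murano.services.post(env.id, path='/', data=app, session_id=sess.id)

# Deploying the session is what actually creates the VMs and the cluster.
murano.sessions.deploy(env.id, sess.id)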

Using Murano is best for developers who are writing apps for other people to use, usually in a self-service manner.

Where do we go from here?

At the time of this writing, the Murano and Magnum communities are getting together with a plan to create a Magnum application for Murano. This app will enable developers to get the ease-of-use and convenience of Murano while still making use of Magnum and all it offers, including access to container orchestrators beyond Kubernetes.

Stay tuned to the Mirantis blog; next time we'll talk about how that will actually work, and show you how to take advantage of it.

Interested in trying out containers in Murano?  You can install it easily by downloading Mirantis OpenStack 7.0.
