

The Value of Kubernetes as a Substrate for Complex Software, or Why We Containerized OpenStack [webinar]

Guest Post - January 01, 2011
Recently we hosted a webinar where Field CTO Shaun O'Meara talked to Product Manager Artem Andreev about how we containerized OpenStack so it could take advantage of the strengths of Kubernetes. We wanted to share an excerpt of Artem's presentation below. If you would like to view the full-length webinar recording, please click here.

Challenges of OpenStack lifecycle management

What are the real-life challenges we deal with when deploying OpenStack-based infrastructure onto bare metal clusters for our customers?

Challenge: Auto-scaling

The first challenge, which is pretty basic, is the ability of the API endpoints to scale up and down depending on the load the cloud is experiencing. The more users talking to your API and making calls, the more processing power the API components need. There needs to be an entity – I put it here as some sort of magical brain, a smart person or service that sits there and keeps things under control. In this case, it would need to receive metrics from the load balancer sitting in front of the Nova API, recognize that the cloud is under heavy load and needs more Nova API instances, and instruct the system to increase the number of Nova API pods in particular. And of course, once that load is gone, it doesn't make sense to keep so many Nova API pods around, because they waste resources, so the service needs to be scaled back down. How exactly this smart management is achieved, we'll cover a bit later when we show the solution.
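
In Kubernetes terms, a large part of that magical brain already exists as the HorizontalPodAutoscaler. Here is a minimal sketch, assuming the Nova API pods are managed by a Deployment named nova-api in an openstack namespace; the names, namespace, and thresholds are illustrative, not the actual chart defaults:

```yaml
# Illustrative only: scale the nova-api Deployment between 3 and 12 replicas
# based on average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nova-api
  namespace: openstack
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nova-api        # assumed Deployment name
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

In practice, the scaling signal can also come from custom metrics, such as the request rates seen by the load balancer, rather than plain CPU utilization.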

Challenge: Self-healing

The second challenge, which is also pretty basic, is how to deal with failures. If one little piece of your system breaks – it fails, it stops running – you need to be able to restart it and get it back as soon as possible. In our case, if, for example, the Nova Compute service goes down for any reason, it needs to be restarted ASAP. Why? Because otherwise you lose control over your VMs: you're not able to create new VMs or change the ones already running. Loss of control means loss of business. There needs to be a brain that tracks such situations and takes action – at a minimum, restarting the failed pod itself and notifying the operator about what happened, so that post-recovery troubleshooting can take place. In the Kubernetes world, this is quite a trivial task that everybody knows about. But still, it's one of the biggest values that Kubernetes adds to OpenStack as an application.
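
As a rough sketch of how Kubernetes handles this, here is what a per-node compute service could look like. The image name and the liveness check are placeholders: the point is that the kubelet restarts a crashed container automatically, and the liveness probe also catches the case where the process hangs instead of exiting.

```yaml
# Illustrative sketch: nova-compute as a per-node DaemonSet. Containers are
# restarted in place when they exit; the liveness probe restarts them if
# they stop responding without exiting.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nova-compute
  namespace: openstack
spec:
  selector:
    matchLabels:
      app: nova-compute
  template:
    metadata:
      labels:
        app: nova-compute
    spec:
      hostNetwork: true
      containers:
        - name: nova-compute
          image: example.registry/nova-compute:latest   # placeholder image
          livenessProbe:
            exec:
              command: ["pgrep", "-f", "nova-compute"]  # illustrative health check only
            initialDelaySeconds: 60
            periodSeconds: 30
```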

Challenge: Consistent configuration

The next challenge is a bit more interesting. Since there are a lot of moving parts, and a lot of subcomponents within those moving parts, they all need to be configured so that they know about each other and have the same parameters set in their configuration files. For example, Nova API uses some common data with Nova Scheduler and Nova Conductor, so they all need to be configured consistently. How can we achieve that? How can we make sure that whatever you deploy and manage in this cloud is configured properly, and the whole system works as one? There needs to be a mechanism that takes your input as the cloud operator – presented in a simplified form, with just a few big options or parameters that basically say how the system should be deployed – and breaks it down into smaller pieces. Of course, we need to do the semantics check and the syntax check, break it down, and then pass it over to all of these little pods deployed across your 2,500-node cluster. So how do we make it consistent? This is one of the challenges that we also solve with the help of Kubernetes. How exactly? We'll cover this later.
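
One way Kubernetes helps here, in broad terms, is that the expanded configuration can be rendered into ConfigMaps and Secrets that every related pod mounts, so Nova API, Nova Scheduler, and Nova Conductor all read the same file. A deliberately simplified sketch; the hostnames and credentials below are placeholders, not real values:

```yaml
# Illustrative only: one rendered nova.conf shared by all Nova services.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-etc
  namespace: openstack
data:
  nova.conf: |
    [DEFAULT]
    transport_url = rabbit://nova:REPLACE_ME@rabbitmq.openstack.svc:5672/
    [database]
    connection = mysql+pymysql://nova:REPLACE_ME@mariadb.openstack.svc/nova
```

Each Nova pod then mounts this ConfigMap (for example at /etc/nova), so a single change made in one place propagates consistently across the whole cluster instead of drifting apart per node.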

Challenge: Making things work together

As I mentioned before, those components need to know about each other and be able to talk to each other. But not all of them are typical, stateless Kubernetes pods. For example, Nova Compute and pieces like libvirtd are special pods that need to be treated in a special way. The same applies to the database, which is also containerized – a cluster of instances that need to be able to find each other, establish a quorum, and accept connections. All of this communication and setup needs to be done. It might appear quite trivial if you just look at one particular service, but altogether it is a lot of configuration and setup, a lot of procedures that need to be observed for every particular node and done in the correct way. Otherwise, it will just not work. The same problem applies to exchanging configuration data with external systems, like software-defined networking or software-defined storage solutions. They need to be able to find each other, communicate, and do their job. In our world, this magical brain addresses this problem as well. It hides all of this complex machinery behind a simple, human-consumable configuration interface and handles all of this magic for you, while exposing just a few simple handles outside.
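
For the clustered, stateful pieces like the database, a common Kubernetes pattern is a StatefulSet behind a headless Service: every replica gets a stable network identity that the others can use to find each other and form a quorum. A rough sketch under those assumptions, not the actual chart:

```yaml
# Illustrative sketch: a three-node MariaDB/Galera-style cluster. The headless
# Service gives each pod a stable DNS name (mariadb-0.mariadb.openstack.svc,
# mariadb-1..., mariadb-2...) that the nodes use to discover each other.
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  namespace: openstack
spec:
  clusterIP: None          # headless: DNS resolves to the individual pods
  selector:
    app: mariadb
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
  namespace: openstack
spec:
  serviceName: mariadb
  replicas: 3
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: example.registry/mariadb:latest   # placeholder image
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```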

Challenge: OpenStack updates

Updates and upgrades. This is a typical problem, and for OpenStack it is normally doubly painful. Why? Because, again, of the complexity. A lot of things can go wrong when updating or upgrading an OpenStack cluster – that is a hard lesson learned from our past experience. This is one of the reasons why many OpenStack operators prefer to stay on their old versions of OpenStack and are afraid of upgrades: they don't want to break anything. They're basically slowing themselves down because of that fear, and their business also slows down, because there is no room for improvement. But if you look at it from the perspective of a typical Kubernetes application, what is an update? You download fresh Docker images from your artifact repository and you restart your pods one by one. Quite trivial. The question is, how do you ensure that this procedure doesn't break your infrastructure, doesn't break your OpenStack? The services need to be restarted in a certain order, and they need to be restarted very, very fast. Otherwise there will be downtime on your API, users will not be able to create VMs or do anything, and that means loss of business. This magical brain is also designed to solve this problem for us. It knows how OpenStack is built, how every particular OpenStack service is built, which components are there, and in which order those components should be restarted so that there is very little downtime. And it handles all of it very well. How exactly? We are getting close to the answer.

Upgrades are even more complicated than updates, because the data – the persistence layer – is impacted. The OpenStack database needs to change its structure to accommodate new features delivered in a new release of OpenStack. So an upgrade is a transition from the previous major release of OpenStack to the next one. This is what we call a major change, or an upgrade. The overall procedure is similar to updating, but it also needs to consider how the data changes. And of course, if necessary, it needs to make a backup of the database before making any changes, and if something goes wrong, it needs to be able to roll things back. This kind of magic also needs to be handled somewhere. How exactly? We have arrived at the answer.
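
On the Kubernetes side, the "restart pods one by one, very fast" part can be expressed declaratively as a rolling-update strategy on each service's Deployment; the cross-service ordering is what the magical brain adds on top. A minimal, illustrative fragment – the names, image tag, and numbers are assumptions, not the actual chart values:

```yaml
# Illustrative fragment: a rolling update policy for an API Deployment.
# At most one extra pod is started and at most one existing pod is taken
# down at a time, so the API keeps serving requests while new images roll in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api
  namespace: openstack
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: keystone-api
  template:
    metadata:
      labels:
        app: keystone-api
    spec:
      containers:
        - name: keystone-api
          image: example.registry/keystone-api:2024.1   # new image tag triggers the rollout
```

Changing the image tag is all it takes to trigger the rollout; Kubernetes replaces the pods gradually while the remaining replicas keep serving. For a major upgrade, the database schema migrations are typically run as separate Jobs before the new pods come up – and that ordering is exactly the kind of thing the magical brain has to take care of.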

The Solution: Mirantis OpenStack for Kubernetes

When building the Mirantis OpenStack for Kubernetes product, we followed the general ideas and concepts that exist in the Kubernetes community for managing any kind of complex Kubernetes application, combining many of those practices into a single solution for containerized OpenStack. Let me talk you through how it works.

The most important component in this diagram is the OpenStack controller. This is a typical pattern for Kubernetes apps: the controller is a special pod that extends the Kubernetes cluster with additional functionality and logic built specifically to manage the lifecycle of the application – in our case, OpenStack. It is quite an extensive piece of code that runs as part of the Kubernetes cluster on top of which OpenStack runs, and it exposes a custom resource. A custom resource is a data structure visible through the Kubernetes API. It defines how your OpenStack cluster should look in very big strokes – for example, that it needs the OpenStack Ussuri release with Open vSwitch as the SDN solution, plus the major set of services that need to be there. Very, very big things, right? This is the information that you as an operator put into the structure when deploying your OpenStack cloud.

Similar controllers exist for the SDN and SDS (software-defined storage) parts. It's the same pattern: similar pieces of code exposing very similar custom resources designed to manage their own parts. These controllers exchange data so that eventually the pieces of the system are able to discover each other, talk, and work together. This data is exchanged in the form of Kubernetes secrets – data structures containing the keys, passwords, and certificates the controllers exchange so that the pieces can talk to each other.

So we have these data structures defined. What happens next? The OpenStack controller does a bit of translation. It takes the simplified description – the custom resource – and expands it into a set of parameters for OpenStack Helm charts. OpenStack Helm is an upstream community project; every Helm chart can be seen as a recipe that describes how a particular OpenStack service – for example, OpenStack Nova – can be deployed on top of Kubernetes. The charts are like templates, but templates need parameters. That is what the OpenStack controller provides: it takes the simplified, human-consumable description of a cluster and translates it into the larger, more detailed set of values that is handed over to the OpenStack Helm charts. Helm takes these charts, combines them with the parameters, and sends the resulting definitions of the application to the Kubernetes API. Sometimes the OpenStack controller also needs to talk to the Kubernetes API directly – for example, to manage day-two events like making changes.

The Kubernetes API accepts these definitions of the primitives that comprise the application, and then controls all of the pods, services, daemonsets, load balancers – all of the entities that together make up the Kubernetes application, containerized OpenStack in our case. It controls their lifecycle and handles all of the magical things like self-healing, auto-scaling, and the pieces of integration in between. So the complex logic that solves the challenges mentioned above is distributed across multiple entities: the controllers, the Helm charts, and Kubernetes itself – the Kubernetes machinery.
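
To make the "big strokes" idea more concrete, here is a heavily simplified sketch of what such a custom resource could look like. The API group, version, and field names below are illustrative rather than the product's exact schema; the point is how little the operator has to specify compared with what the controller expands it into.

```yaml
# Illustrative only: a simplified cluster description in "big strokes".
# The OpenStack controller expands something like this into full sets of
# values for the OpenStack Helm charts.
apiVersion: lcm.mirantis.com/v1alpha1     # assumed group/version
kind: OpenStackDeployment
metadata:
  name: openstack-cluster
  namespace: openstack
spec:
  openstack_version: ussuri        # the release, in one line
  features:
    neutron:
      backend: openvswitch         # the SDN choice, in one line
    services:                      # the major set of services to deploy
      - keystone
      - nova
      - neutron
      - glance
```
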
Combined, these controllers, charts, and Kubernetes primitives are able to do really magical things to manage even a complex application like containerized OpenStack. Let's quickly summarize what we can conclude from this story. These pros and cons are based on almost a year's experience of running Mirantis OpenStack for Kubernetes in production. We really feel that this approach works well, and that it solves a lot of the challenges that classic, non-containerized OpenStack distributions have.

Benefits of Containerized OpenStack

With Kubernetes as an underlay, we are able to achieve OpenStack reliability through the self-healing and auto-scaling mechanisms that Kubernetes provides natively, out of the box.
We're able to solve the minor challenges of libraries and dependencies. There are a lot of moving pieces, and every package needs its own versions of the libraries it depends on. With Kubernetes, this problem is solved through the isolation of services inside Docker images, inside containers. Every particular service gets what it needs to run without affecting its neighbors – something you wouldn't get with classic Debian- or Red Hat-style packages.

Rolling updates. Kubernetes provides a very good basis for implementing the non-trivial logic needed for application updates, and we make very good use of it. We are able to achieve 99.5% uptime when applying an update to Mirantis OpenStack for Kubernetes, and that number is improving.

We are able to simplify the setup of an OpenStack cloud by using the building blocks for networking that Kubernetes offers, like load balancing and container network communication. That truly solves a lot of issues when setting things up, and it saves us a lot of time on day one.

Helm and its auto-reconciling mechanisms are really, really helpful for managing the overall infrastructure in day-two operations. You just define what you want to achieve, what you want to have deployed; how to get there is figured out automatically. That's the magic of Helm and Kubernetes – you don't have to worry about how to deploy.

And, of course, scaling of control plane services, which means adding more controller nodes to an OpenStack cluster. This task used to be quite complex, because you needed to do a lot of setup for every particular node and somehow connect it to the cluster. With Kubernetes, it's easy: you just mark a node with a set of labels, and there we go – all of the necessary pods, all of the necessary replicas, appear on that node automatically. Easy.
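
The "mark a node with a set of labels" step looks roughly like the fragment below. The label key is illustrative, not necessarily the one the product uses: control-plane pods carry a nodeSelector, so once a new node has the matching label, the scheduler places the required replicas on it automatically.

```yaml
# Illustrative fragment: control-plane pods select nodes by label, so adding
# a controller node is just a matter of labelling it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: glance-api
  namespace: openstack
spec:
  replicas: 3
  selector:
    matchLabels:
      app: glance-api
  template:
    metadata:
      labels:
        app: glance-api
    spec:
      nodeSelector:
        openstack-control-plane: enabled   # assumed label key
      containers:
        - name: glance-api
          image: example.registry/glance-api:latest   # placeholder image
```

Labelling the new node is then a one-liner such as kubectl label node <node-name> openstack-control-plane=enabled.
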
To view the full webinar recording, please click here.
To start a free trial of Mirantis OpenStack for Kubernetes, please click here.
