

The Future of OpenStack Webinar Q&A: Mirantis OpenStack for Kubernetes

A few weeks ago, Mirantis Field CTO Shaun O’Meara and Systems Architect Oleksii Kolodiazhnyi presented a webinar about the future of OpenStack, featuring an overview and demo of Mirantis’ latest OpenStack solution: Mirantis OpenStack for Kubernetes (MOS).
During the course of the webinar, we received a lot of great questions from the audience. In this post, we’ve included those questions, providing answers and links to helpful resources. To see the webinar in its entirety, watch it now on demand.
If you'd like to try Mirantis OpenStack for Kubernetes, check out our Get Started Guide, which includes instructions on how you can register to get a TryMOS Image!

What are your plans for larger clusters that are greater than 3,000 VMs? 

We are currently running extensive testing on clusters of 200+ nodes, and we'll keep testing continuously. We have ways to validate even larger clusters coming up. So if you have significantly larger clusters, on either the node or VM side, please reach out to your Account Executive or Customer Success Manager, and we'll definitely help you validate that size. It's on our roadmap for this quarter.

Is Masakari included in MOS?

We just released Masakari, the OpenStack service that ensures high availability of instances running on a host, as a Tech Preview in MOS 21.2. It's one of those features I spoke about that we're really focusing on as we look at enterprise and legacy use cases. To learn more about MOS 21.2, read our release blog.

What about additional services apart from Infrastructure-as-a-Service?

I assume that question is about additional capabilities for OpenStack services, and we do have some planned. When you do an updated roadmap session with us, we'll talk about some that are coming. Later in the year, we're looking at things like Database-as-a-Service (DBaaS), and we're looking at some additional networking services. We actually support more OpenStack services right now than the ones I showed; there just wasn't space to list them all. If there are services that you feel are critical to operating in your environments, please talk to us and we'll discuss them or work with you to show you how you can implement them yourself.

Can we integrate infrastructure alerts from StackLight directly with ticketing systems?

So if I understand the question correctly, you're asking whether we can push alerts from StackLight, the logging, monitoring, and alerting solution, into existing ticketing systems? Absolutely. You can create an alert forwarder that will push those alerts for you, and they'll be dropped into the ticketing system. We've done that for a couple of customers; we even have systems where we forward logs and other data directly to their ticketing systems.
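For illustration, here's a minimal sketch of what such a forwarder might look like. It assumes StackLight's alerting is surfaced through a Prometheus Alertmanager webhook, and the ticketing URL and payload shape are hypothetical placeholders to adapt to your ticketing system's API:

```python
# Minimal sketch of an Alertmanager webhook receiver that files tickets.
# Assumes Alertmanager is configured with a webhook_configs entry pointing
# at this service; TICKET_API and the ticket payload are hypothetical.
from flask import Flask, request
import requests

app = Flask(__name__)
TICKET_API = "https://ticketing.example.com/api/v1/tickets"  # hypothetical

@app.route("/alerts", methods=["POST"])
def forward_alerts():
    payload = request.get_json()
    # Alertmanager batches alerts; file one ticket per firing alert.
    for alert in payload.get("alerts", []):
        if alert.get("status") != "firing":
            continue
        requests.post(TICKET_API, json={
            "title": alert["labels"].get("alertname", "StackLight alert"),
            "severity": alert["labels"].get("severity", "unknown"),
            "description": alert["annotations"].get("description", ""),
        }, timeout=10)
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```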

Is StackLight monitoring extendable by the operators, or is it immutable and delivered as part of your updates only?

It's not immutable; it's completely extendable by the operators. We will send out updates when we see new dashboards and new capabilities that we think are necessary, but from an operator perspective, you can create your own dashboards and your own alerts. It's completely customizable.
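As a rough example of that extendability, a custom alert can be registered through the Kubernetes API. This sketch assumes StackLight exposes prometheus-operator-style PrometheusRule resources in a stacklight namespace; check the StackLight documentation for the exact resource and namespace:

```python
# Sketch: registering a custom alert rule, assuming StackLight uses
# prometheus-operator style PrometheusRule CRDs (group/version/plural
# follow the upstream operator; the namespace is an assumption).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

rule = {
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "PrometheusRule",
    "metadata": {"name": "custom-filesystem-alert", "namespace": "stacklight"},
    "spec": {"groups": [{
        "name": "custom.rules",
        "rules": [{
            "alert": "NodeFilesystemAlmostFull",
            # PromQL: fires when any filesystem is more than 90% used
            "expr": "(1 - node_filesystem_avail_bytes / node_filesystem_size_bytes) > 0.9",
            "for": "10m",
            "labels": {"severity": "warning"},
            "annotations": {"description": "Filesystem on {{ $labels.instance }} is over 90% full."},
        }],
    }]},
}

api.create_namespaced_custom_object(
    group="monitoring.coreos.com", version="v1",
    namespace="stacklight", plural="prometheusrules", body=rule,
)
```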

What is the release lifecycle and support of k0s versions? Could we expect new releases available soon after the upstream vanilla k8s version? 

The way I read this is: how quickly will we support the current versions of Kubernetes in k0s? From a lifecycle perspective, as soon as the upstream is available, we will incorporate and test it; it's very important for our supported versions to be tested. We're looking at a cycle of days to weeks, at the most. k0s will be delivered through that same continuous delivery model. We have both a fully open source version of k0s and a supported version, and both will be released as rapidly as we can go. k0s has now officially launched. For more information on k0s, read the launch blog and visit: https://k0sproject.io/

During the fifth stage of upgrading from MCP to MOS, VMs are migrated from computes in the old cluster to computes in the new cluster. How many new computes are needed for an upgrade?

At least one. Obviously, having only one new compute means we can only move one node at a time. We've got to go through this process of evacuating nodes and moving the workloads across. The more you can give us, the faster we can move, up to a maximum of typically five, just from a control perspective. What we should note, though, is that to support live migration, the new and the old compute have to have the same CPU type. Otherwise, live migration won't work.
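To make that concrete, here's a hedged sketch using openstacksdk that compares the CPU model reported by two hypervisors before live-migrating servers off the old one. The host names are hypothetical placeholders, and cpu_info reporting varies between Nova API versions:

```python
# Sketch: verify CPU compatibility between an old and a new compute,
# then live-migrate servers across (requires admin credentials, loaded
# from clouds.yaml or environment variables).
import json
import openstack

conn = openstack.connect()

def cpu_model(hostname):
    # cpu_info is reported per hypervisor; on some API versions it is a
    # JSON-encoded string rather than a dict.
    for hv in conn.compute.hypervisors(details=True):
        if hv.name == hostname:
            info = hv.cpu_info
            if isinstance(info, str):
                info = json.loads(info)
            return info.get("model")
    raise LookupError(f"hypervisor {hostname} not found")

old_host, new_host = "cmp-old-01", "cmp-new-01"  # hypothetical names
if cpu_model(old_host) != cpu_model(new_host):
    raise SystemExit("CPU models differ -- live migration will fail")

# Live-migrate every server off the old compute (host filter is admin-only).
for server in conn.compute.servers(all_projects=True, host=old_host):
    conn.compute.live_migrate_server(server, host=new_host, block_migration=False)
```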

How about workloads during the OpenStack Ussuri to Victoria upgrade? Is there downtime for workloads? If not, how will they upgrade? 

We use the live migration capabilities. For a number of these OpenStack upgrades, non-invasive updates are supported, but yes, where needed we use live migration. That's why the cluster does need a little bit of extra capacity; we need some overhead within the cluster during this process to allow us to do that live migration. It's also just good practice to have some extra capacity in any cluster, so that in case of a node failure, you have somewhere to restart those workloads.
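As a back-of-the-envelope illustration of that capacity planning, the sketch below checks whether the busiest compute could be evacuated into the free capacity of the rest of the cluster. It relies on hypervisor usage statistics, which on newer Nova microversions move to the Placement service:

```python
# Sketch: rough N+1 headroom check -- can the busiest compute be
# evacuated into the spare capacity of the others? Uses hypervisor
# statistics via openstacksdk (fields deprecated on newer microversions).
import openstack

conn = openstack.connect()
hvs = list(conn.compute.hypervisors(details=True))

busiest = max(hvs, key=lambda hv: hv.vcpus_used)
others = [hv for hv in hvs if hv.id != busiest.id]

spare_vcpus = sum(hv.vcpus - hv.vcpus_used for hv in others)
spare_ram_mb = sum(hv.memory_size - hv.memory_used for hv in others)

print(f"Busiest node {busiest.name}: {busiest.vcpus_used} vCPUs, "
      f"{busiest.memory_used} MB RAM in use")
print(f"Spare elsewhere: {spare_vcpus} vCPUs, {spare_ram_mb} MB RAM")
if spare_vcpus < busiest.vcpus_used or spare_ram_mb < busiest.memory_used:
    print("Warning: not enough headroom to evacuate this node")
```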

Is there an application catalog, or are there template capabilities that allow developers to do self-service deployment?

We are not delivering an application catalog right now, but from a templates perspective, we support Heat as part of the system. It is an interesting topic, which I'd like to unpack a little bit further to understand the requirement.
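For a sense of what Heat-based self-service looks like, here's a minimal sketch that defines a one-server HOT template inline and launches it as a stack with openstacksdk. The image, network, and flavor names are hypothetical placeholders:

```python
# Sketch: launch a minimal Heat stack. Heat accepts JSON templates
# (JSON is valid YAML), so we can define the HOT template as a dict.
import json
import tempfile

import openstack

conn = openstack.connect()

template = {
    "heat_template_version": "2018-08-31",
    "parameters": {
        "flavor": {"type": "string", "default": "m1.small"},
    },
    "resources": {
        "app_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": {"get_param": "flavor"},
                "image": "ubuntu-20.04",               # hypothetical image
                "networks": [{"network": "private"}],  # hypothetical network
            },
        },
    },
}

# The cloud layer's create_stack takes a template file path; extra
# keyword arguments are passed through as stack parameters.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(template, f)
    path = f.name

stack = conn.create_stack(
    "demo-app",
    template_file=path,
    wait=True,           # block until CREATE_COMPLETE (or failure)
    flavor="m1.small",   # becomes a Heat parameter
)
print(stack["stack_status"])
```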

Why would somebody deploy Mirantis Container Cloud on public clouds, like AWS or Azure, when it seems to be an alternative to on-prem infrastructure?

Mirantis Container Cloud is focused on delivering a consistent platform across different environments. We are looking at the roadmap to support existing public cloud services like EKS or AKS in the future. What we're saying is: if you want consistency, if you want a very standardized underlying platform across multiple service providers that ensures your applications will run without any hang-ups, and you want to be able to implement consistent security through access controls, FIPS validation and certification, and policy management, then Mirantis Container Cloud can provide that by running Mirantis Kubernetes Engine, and in the future k0s, across all of these environments.

Is there a plan to support KubeVirt, Kata Containers, or Firecracker lightweight VMs with the Kubernetes API?

Yes. Within k0s, we're looking into supporting that. We haven't announced anything yet, but we do have a backlog item to discuss it further. I would love to discuss your requirements around that, and what capabilities you'd like to see, in more depth.

What precautions do we need to take before moving from MCP to MOS?

The most important one is that we really need to understand whether there are any customizations within MCP itself, beyond the workloads that are simply running as VMs on top. We are moving to a much more opinionated way of deploying so that we can properly support updates and upgrades in a very consistent way. We do have a lot of flexibility, but not as much as we used to have with DriveTrain. We really need to understand any customizations, any modifications to the system. That's the most important part we need to understand to make sure you don't lose any capabilities.
The second precaution is actually more of a workload question. Queens to Queens is fine, but as we start moving your workloads, if you have integrated with the OpenStack APIs, capabilities may have been deprecated or changed as you move through different OpenStack versions. You just need to validate the automation you may have set up for any workload deployments. Those are the things we would typically want to find out.
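One simple pre-flight check for that kind of automation validation is to record the API microversion range each service advertises before the upgrade, then diff it afterwards. Here's a sketch for the compute API, using openstacksdk's raw HTTP passthrough:

```python
# Sketch: print the compute API's advertised microversion range.
# The compute proxy doubles as a keystoneauth Adapter, so a raw GET on
# the endpoint root returns the version document for that service.
import openstack

conn = openstack.connect()

doc = conn.compute.get("/").json()["version"]
print(f"Compute API {doc['id']}: "
      f"microversions {doc.get('min_version')} .. {doc.get('version')}")
```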

Can we upgrade Ceph storage in the same way we upgrade OpenStack?

Yes, but we try not to change too many things at the same time. We do them as two separate parts. Obviously, we'll upgrade Ceph after OpenStack to ensure that we don't lose anything in the process.

Migrating compute from MCP to MOS requires at least one new node. How about storage nodes?

No. You shouldn't need additional storage nodes; because of the way Ceph works, we're able to upgrade it in place.

To what level will OpenStack services in MOS be configurable by the operator?

We are offering quite a lot of scope for modifying parameters, but we're not offering every single parameter under the sun, quite deliberately, because we need to be able to control that to ensure we can manage those updates and upgrades. That said, if there is a parameter that you need to modify, we will work with you to implement it. Because of the rapid release cycle, we can add those parameters. The product management team and the consulting team will discuss those parameters with you in depth as part of the upgrade process. If you have any further questions, please feel free to reach out to your Account Executive or Customer Success Manager.
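To illustrate the general shape of such an override (not the definitive MOS schema; the API group, version, CR name, and override path below are assumptions to verify against the MOS documentation), a nova.conf parameter could be patched into the OpenStackDeployment custom resource like this:

```python
# Sketch: overriding a service parameter through the OpenStackDeployment
# custom resource with the Kubernetes Python client. Group, version,
# namespace, CR name, and the override path are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

patch = {
    "spec": {
        "services": {
            "compute": {
                "nova": {
                    "values": {
                        "conf": {
                            # example nova.conf override
                            "nova": {"DEFAULT": {"cpu_allocation_ratio": 4.0}}
                        }
                    }
                }
            }
        }
    }
}

api.patch_namespaced_custom_object(
    group="lcm.mirantis.com", version="v1alpha1",   # assumed group/version
    namespace="openstack", plural="openstackdeployments",
    name="osh-dev", body=patch,                     # assumed CR name
)
```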

Is it possible to migrate an existing standalone Ceph cluster (that is deployed by ceph-ansible, not with MCP) to MCC/MOS, so that it will be managed by MCC/MOS?

Yes, it is possible, but it would require a one-off professional services project where we would build an extended version of the MCP to MOS upgrade procedure for Ceph.
With that, we've reached the end of the questions from this webinar. Thank you again for reading and if you haven't already, check out the full webinar on demand.
