WHY MIRANTIS

Mirantis delivers the fastest path to production AI at scale, with full-stack AI infrastructure technology that removes GPU infrastructure complexity and streamlines operations across the AI lifecycle, from Metal-to-Model. Today, all infrastructure is AI infrastructure, and Mirantis provides the end-to-end automation, enterprise-grade security and governance, and deep expertise in Kubernetes orchestration that organizations need to reduce time to market and efficiently scale cloud native, virtualized, and GPU-powered applications across any environment – on-premises, public cloud, hybrid, or edge.

VIDEO: Run Anywhere. Automate Everything. k0rdent in 30 seconds

1. Getting Oriented

What does “Metal-to-Model” mean? Do you sell hardware?

Mirantis does not sell hardware; “Metal-to-Model” describes the full-stack automation we provide for AI infrastructure, starting from bare metal provisioning and management and extending through Kubernetes clusters, GPU resources, and AI services. Mirantis k0rdent Enterprise standardizes and automates Kubernetes clusters across any infrastructure, while Mirantis k0rdent AI further extends the platform with AI-specific capabilities to deliver secure, on-demand access to GPU computing resources and quickly move models into production.

Why is AI such an urgent competitive issue for businesses of every size?

AI is rapidly transforming how products are built, how operations function, and how customers are served. Organizations that industrialize AI early will gain compounding advantages in speed, cost efficiency, and capability; businesses that do not leverage AI early may struggle to scale innovation, incur higher long-term costs as AI-driven capabilities become standard, and miss opportunities to optimize operations and improve decision-making. Mirantis focuses on removing infrastructure friction so that teams can deliver AI capabilities and unlock their benefits as quickly as possible. 

Where should my organization start?

Early Exploration Stage: For teams that are just getting started with AI, it’s a good idea to begin with a standards-based Kubernetes foundation and ready-to-use AI services. This approach allows for safe experimentation and flexible integration with cloud native tooling while also preserving long-term portability. Mirantis k0rdent Enterprise and Mirantis k0rdent AI can run on laptops, in data centers, and in hybrid or public clouds, which matches the needs of smaller teams moving from early testing to initial deployment.

Scaling Stage: As AI deployments expand in production, focus on standardizing clusters, policies, and GPU resources across environments. Introduce an AI PaaS layer that supports governed and repeatable deployment, testing, and inference. Platform teams should implement GitOps-driven automation, while application teams set up structured, self-service environments to improve consistency and reduce operational bottlenecks.

Advanced, Multi-Region Stage: Organizations operating at a global scale or delivering services across regions need to prioritize sovereignty, multi-region resilience, and cost control across environments. Mirantis k0rdent Enterprise provides the multi-cluster control plane and monitoring required for efficient distributed operations, while Mirantis k0rdent AI supports AI lifecycle management and large inference hosting at scale.

2. Understanding the AI Stack

What are the main parts of an enterprise AI stack?

An enterprise AI stack can be understood in layers:

Compute: Bare-metal servers and/or cloud instances

GPUs & firmware: Specialized accelerators for training and inference

Virtualization & OS: Optional virtual machines (VMs) and Linux

Kubernetes: The orchestration layer for containers, networking, and storage

GPU Platform-as-a-Service (GPU PaaS): Manages GPU jobs, quotas, and resource sharing

AI Platform-as-a-Service (AI PaaS): Provides templates and services for data management, model training, fine-tuning, evaluation, self-service deployment, observability, and governance

Mirantis k0rdent AI unifies these layers so they can be defined, deployed, and managed declaratively across environments, reducing the operational burden of complex AI infrastructure.

What does “operating AI applications” actually involve?

Operating AI applications requires many complex processes, including scaling models and services, managing dependencies and data paths, governing access to GPUs, tracking model versions and drift, securing endpoints, and monitoring cost and performance. Mirantis k0rdent AI adds a governed AI PaaS layer on top of the centralized Kubernetes foundation to make these tasks repeatable, auditable, and consistent across environments.

What are the benefits of an open, Kubernetes-based AI stack?

Enterprises choose open, Kubernetes-based AI stacks because they are scalable and resilient while also providing control over where data resides, who can access it, which components are approved, and how changes are tracked. Mirantis emphasizes open components, GitOps automation, and portable deployment models so organizations can meet jurisdictional and industry requirements while maintaining choice and enabling use case optimization.

3. Why Kubernetes?

How does Kubernetes work at a fundamental level?

Kubernetes is an orchestration system that manages containers across clusters of machines. You declare the desired state of your applications and infrastructure, and Kubernetes continuously works to ensure that the running environment matches the desired state (e.g., the right number of container replicas are running). This reconciliation loop provides strong automation and consistency for complex distributed systems.
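As a minimal illustration, the sketch below mimics that reconciliation loop in plain Python. It is illustrative only: real Kubernetes controllers watch the API server and act on live resources, not a local dictionary.

# Minimal sketch of the declare-and-reconcile pattern (illustrative only).
desired_state = {"replicas": 3}   # what you declare
actual_state = {"replicas": 1}    # what is currently running

def reconcile(desired, actual):
    """Move the actual state toward the declared desired state."""
    diff = desired["replicas"] - actual["replicas"]
    if diff > 0:
        actual["replicas"] += diff   # start the missing replicas
    elif diff < 0:
        actual["replicas"] += diff   # stop the surplus replicas
    return actual

# Kubernetes runs this kind of loop continuously, so failures or manual
# changes are corrected until the running state matches the declaration.
actual_state = reconcile(desired_state, actual_state)
print(actual_state)   # {'replicas': 3}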

What is the significance of Kubernetes being open source?

Kubernetes is the industry-leading container orchestrator and a widely adopted open source project, so it benefits from rapid innovation by thousands of contributors, a broad and neutral ecosystem, and no dependence on any single vendor. The Cloud Native Computing Foundation (CNCF) governs Kubernetes, certifies Kubernetes distributions, and encourages consistent behavior across platforms. An open, standards-based foundation ensures flexibility, transparency, and long-term portability for enterprises. Mirantis builds on this with an open, composable approach rather than a closed, proprietary stack.

What are the advantages of using Kubernetes for AI?

AI workloads evolve quickly and often span multiple environments, hardware types, and GPU configurations. Kubernetes provides a uniform operating model that works across data centers, clouds, and edge locations. This consistency allows teams to reuse declarative templates, automate operations, and manage GPU resources effectively. For AI specifically, this means faster iteration, predictable scaling, reproducibility across environments, and more reliable production deployments with self-healing infrastructure.
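For example, the hypothetical sketch below uses the official Kubernetes Python client to schedule a container onto a GPU node by requesting the standard nvidia.com/gpu extended resource. The pod name and container image are placeholders, and it assumes a cluster with the NVIDIA device plugin installed and a valid kubeconfig.

# Hypothetical example: requesting one GPU for an inference container.
# Assumes the NVIDIA device plugin is installed; names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="inference-worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="model-server",
                image="example.com/my-model-server:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU per replica
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)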

Additionally, Kubernetes is the industry-leading container orchestration solution that is already widely adopted by enterprises. Major AI players, including Anthropic, OpenAI, and LangChain, all run on Kubernetes. Leading open source AI/ML platforms like KServe, Kubeflow, and Ray have also established Kubernetes as the standard runtime for AI workloads.

4. The Mirantis Perspective

What does Mirantis k0rdent Enterprise provide?

Mirantis k0rdent Enterprise provides a Kubernetes-native, declarative control plane for multi-cluster and multi-cloud operations. Platform teams define clusters, policies, and platform services as code so that k0rdent can enforce consistency, prevent drift, and simplify lifecycle tasks. This allows teams to focus on their core competency rather than managing infrastructure sprawl.

What additional functionality does Mirantis k0rdent AI provide?

Mirantis k0rdent AI is a full-stack AI infrastructure platform that also delivers a unified AI Platform-as-a-Service (PaaS) integrated with a GPU PaaS layer. It offers governed blueprints for data, training, fine-tuning, and inference, along with observability and self-service environments. These capabilities enable organizations to define, deploy, and operate AI workloads at scale across any infrastructure.

Where do k0s, MKE, and Lens fit?

Mirantis provides a suite of technologies that accelerate modern application delivery and simplify operations. These complementary platforms integrate seamlessly with k0rdent or can be used standalone to address specific use cases.

k0s: A minimal, self-contained, CNCF-certified Kubernetes distribution that is easy to install and well-suited for edge locations or lab environments. k0s is the default Kubernetes distribution for the k0rdent management cluster and can be used for workload clusters as well.

Mirantis Kubernetes Engine for k0rdent (MKE 4k): An enterprise-grade, CNCF-certified Kubernetes platform that is designed for production clusters and offers built-in security features. MKE 4k clusters come with Mirantis k0rdent Enterprise and can act as k0rdent mothership clusters.

Lens: A Kubernetes IDE that helps developers and operators visualize clusters, troubleshoot issues, and work more efficiently. Lens can provide visual cluster management for k0rdent environments.

5. Determining Your AI Adoption Path

If we have no existing AI initiatives, what is the best place to begin?

A good first step is to find your use case by identifying a high-value business problem where AI makes sense. Ask questions like:

What repetitive tasks are burning time in your organization?

Where are you sitting on data you’re not using effectively?

What customer experience problems keep coming up?

After identifying your use case, start a proof of concept with a small dataset to see if AI actually moves the needle on your problem.

Once you’ve proven that AI works in a lab environment, you’ll need infrastructure that can handle production AI workloads, including support for GPU orchestration, model versioning, scaling, security, and compliance requirements. A Kubernetes-based foundation like Mirantis k0rdent AI gives you the flexibility to experiment with different models and frameworks while providing the production-grade capabilities you’ll need as AI becomes business-critical.

If we have a proof of concept in one cloud, how do we avoid getting stuck there?

To avoid long-term dependence on a single provider, choose a composable, multi-cloud AI infrastructure platform, such as Mirantis k0rdent AI, that supports deployments across on-premises, cloud, edge, and bare metal environments.

If we need strict data control and regional compliance, do we have to give up cloud benefits?

No. Mirantis k0rdent AI enables organizations to rapidly launch compliant, multi-tenant sovereign GPU infrastructure with complete control over data residency and regulatory requirements, while still providing cloud benefits such as scalability, self-service automation, and operational efficiency. It enforces data sovereignty through GPU provisioning policies that apply locality rules automatically, keeping workloads within regulatory boundaries, and supports hard multi-tenancy for strict tenant isolation. Mirantis delivers a public cloud experience for AI infrastructure anywhere you deploy, with a self-service portal and fully managed 24/7 remote operations, so your teams can focus on core capabilities instead of managing infrastructure.

6. Practical Operations

How do we scale inference reliably (and cost-effectively)?

To scale inference reliably and cost-effectively, standardize how models and supporting services are deployed, then use continuous observability, cost and billing analytics, and autoscaling to match capacity with demand while keeping GPU utilization high. Mirantis k0rdent AI enables organizations to launch governed, scalable inference by providing declarative templates that standardize inference deployments, with centralized policy enforcement, integrated observability and FinOps, and a GPU PaaS layer that helps organizations share and right-size accelerator resources efficiently.
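As an illustration of the autoscaling piece, the sketch below creates a HorizontalPodAutoscaler (autoscaling/v2) for a hypothetical inference Deployment using the Kubernetes Python client. The names and thresholds are placeholders, and production inference services often scale on custom metrics such as request rate or queue depth rather than CPU utilization.

# Hypothetical sketch: autoscaling an inference Deployment between 1 and 8
# replicas based on average CPU utilization. Assumes a recent Kubernetes
# Python client and an existing Deployment named "model-inference".
from kubernetes import client, config

config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="inference-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="model-inference"
        ),
        min_replicas=1,
        max_replicas=8,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)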

How do we keep environments consistent across teams and regions?

To keep environments consistent across teams and regions, it is important to define clusters, policies, and AI services as code. This allows you to store them in Git and rely on Kubernetes to reconcile the running state with the declared state. Mirantis k0rdent further strengthens this model by enforcing consistency, preventing drift, and automating rollouts so environments remain uniform across teams and geographic locations.
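The snippet below is a simplified illustration of that configuration-as-code idea using the Kubernetes Python client; the manifest path is a placeholder, and in a real GitOps workflow a controller such as Flux or Argo CD would watch the Git repository and apply changes automatically.

# Illustrative only: applying a version-controlled manifest to a cluster.
from kubernetes import client, config, utils

config.load_kube_config()
api_client = client.ApiClient()

# The manifest lives in Git, so every change is reviewed and auditable;
# Kubernetes then reconciles the cluster toward whatever the file declares.
utils.create_from_yaml(api_client, "deploy/inference-service.yaml")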

How can we remain aligned with advancements in the broader ecosystem?

Prioritize conformant, open components and follow standards-driven initiatives like the Linux Foundation’s Agentic AI Foundation (AAIF) and CNCF initiatives around AI conformance to preserve portability. As a founding member of the AAIF and active member of CNCF, Mirantis contributes to shaping these standards while helping enterprises implement future-proof, standards-aligned architectures.

7. Key Takeaways

Organizations that combine standards-based infrastructure, declarative automation, and multi-cloud flexibility can effectively deploy, scale, and manage AI infrastructure with speed, consistency, and control across any environment. Here are some points to keep in mind:

Open and composable architectures provide greater flexibility and scalability than closed and monolithic approaches, especially when it comes to AI workloads

Kubernetes is the universal control plane, and pairing it with GitOps and continuous reconciliation makes complex AI systems more manageable

Portability and compliance must be strategic priorities; the Mirantis Metal-to-Model approach helps organizations maintain choice, security, and compliance across data centers, clouds, and edge environments

LET’S TALK

Contact us to learn how Mirantis can accelerate your AI/ML infrastructure initiatives.

We see Mirantis as a strategic partner who can help us provide higher performance and greater success as we expand our cloud computing services internationally.

— Aurelio Forese, Head of Cloud, Netsons
