
All Infrastructure is AI Infrastructure: Watch the Keynote by Mirantis CTO Shaun O’Meara


As AI agents begin to displace traditional application logic, all infrastructure moving forward must be purpose-built for AI workloads. Mirantis CTO Shaun O’Meara presented this call to action to technology leaders last week in a keynote at RAISE Summit 2025, in which he challenged traditional infrastructure assumptions and laid out a roadmap for the AI era. Watch a full recording of Shaun’s presentation below and read on for an overview of his core ideas.

Applications are Dead; Long Live Apps

Digital transformation is entering a new phase, driven by AI. Traditional application logic is being replaced by AI agents capable of making autonomous decisions. Before, digital tools helped people work faster. Now, that paradigm is being upended because AI systems can decide what to do, not just how to do it more efficiently. This shift represents the evolution from static, rules-based software to adaptive systems that learn and respond in real time. In short, digital transformation is now AI transformation, and business logic will be replaced by AI logic.

This shift redefines the role of applications. What matters now is dynamic, data-driven AI logic, not rigid business processes. The application itself becomes less important than the intelligence that drives it, and infrastructure must rise to support this new model. And because nearly every modern application will soon incorporate AI, whether for personalization, automation, or intelligent decision-making, infrastructure must be ready to support AI as a baseline capability, not a specialty workload.
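
To make the contrast concrete, here is a hypothetical sketch: the ticket fields, team names, and the `llm_client` interface are all invented for illustration, not taken from the keynote.

```python
# Hypothetical contrast: fixed business logic vs. adaptive AI logic.

def route_ticket_rules(ticket: dict) -> str:
    """Traditional business logic: hard-coded rules decide the outcome."""
    if "refund" in ticket["subject"].lower():
        return "billing"
    if ticket["priority"] == "high":
        return "tier-2"
    return "tier-1"

def route_ticket_ai(ticket: dict, llm_client) -> str:
    """AI logic: a model decides, adapting to context the rules never anticipated."""
    prompt = (
        "Route this support ticket to one of: billing, tier-1, tier-2.\n"
        f"Subject: {ticket['subject']}\nBody: {ticket['body']}\n"
        "Answer with the team name only."
    )
    # llm_client is a stand-in for any model-serving interface
    return llm_client.complete(prompt).strip()
```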

Application Patterns are Evolving

AI is reshaping the architecture of applications. Traditional, modular, and cloud-native approaches are being superseded by agentic AI patterns, which introduce flexible decision-making at multiple layers and increase resource demands across cloud, on-prem, and edge. At the same time, workloads are no longer abstracted from infrastructure. The performance and behavior of AI applications are now closely tied to the capabilities of the underlying stack, marking a fundamental change from past architectural approaches. 

As applications adopt agentic AI, new architectural requirements are introduced: inference endpoints, hybrid scaling, specialized hardware, and streaming data pipelines. The shift from traditional approaches to AI is making every aspect of the stack more complex and increasingly dependent on intelligent, adaptive infrastructure. Flexibility, performance, and real-time intelligence can no longer be layered on top and must be built into the foundation of the platform itself. 
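
As one concrete example of these requirements, an inference endpoint is simply a network-facing service wrapping a model. The minimal sketch below assumes FastAPI; the route path, request shape, and the echoed "model" are placeholders for illustration.

```python
# Minimal inference-endpoint sketch (assumes: pip install fastapi uvicorn).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InferenceRequest(BaseModel):
    inputs: list[float]

class InferenceResponse(BaseModel):
    outputs: list[float]

@app.post("/v1/infer", response_model=InferenceResponse)
def infer(req: InferenceRequest) -> InferenceResponse:
    # A real deployment would dispatch to a GPU-backed model server here;
    # scaling the inputs stands in for model.predict().
    return InferenceResponse(outputs=[x * 0.5 for x in req.inputs])

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```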

AI Apps are Performance-Bound Beyond GPUs

AI workloads are performance-bound across the entire stack. As model sizes increase and inference becomes more distributed, pressure mounts on compute, storage, and network systems. GPUs may drive AI, but CPUs handle orchestration, data prep, and parallel processing tasks that are critical to system performance. The spotlight cannot remain solely on GPUs because all of these key infrastructure components need to work in harmony for AI workloads to scale successfully.
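
A toy sketch of that harmony: if CPU-side data preparation does not overlap with accelerator work, the GPU sits idle between batches. Everything below is simulated with sleeps; no real GPU runtime is assumed.

```python
# Overlap CPU data prep with (simulated) GPU inference so neither idles.
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_prepare(batch_id: int) -> list[float]:
    time.sleep(0.05)  # stand-in for decode/tokenize/augment work on the CPU
    return [float(batch_id)] * 8

def gpu_infer(batch: list[float]) -> float:
    time.sleep(0.02)  # stand-in for the accelerator's forward pass
    return sum(batch)

with ThreadPoolExecutor(max_workers=1) as pool:
    next_batch = pool.submit(cpu_prepare, 0)      # prefetch the first batch
    for i in range(1, 5):
        batch = next_batch.result()
        next_batch = pool.submit(cpu_prepare, i)  # CPU works one batch ahead
        print(gpu_infer(batch))                   # "GPU" consumes current batch
    print(gpu_infer(next_batch.result()))         # drain the final batch
```

If `cpu_prepare` is slow relative to `gpu_infer`, adding CPU capacity, not more GPUs, is what raises throughput.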

AI Infrastructure Platform Challenges are Impacting Delivery

Building and delivering infrastructure for AI is becoming significantly more complex. While demand is growing rapidly, the ability to meet that demand is limited by supply chain issues, physical constraints, and evolving regulatory requirements. Some of the key concerns that enterprises are facing include:

  • Future-proofing: Ensuring that infrastructure can adapt to new hardware, software, and workload requirements is increasingly difficult as technology evolves faster than ever

  • Resource scarcity: Issues like long wait times for GPUs, power shortages, and insufficient data center space are slowing down deployments 

  • Sovereignty and legal barriers: Regulations and national data laws complicate where and how data can be processed

  • Multi-tenancy requirements: Hard multi-tenancy is a must for regulatory compliance in industries like banking, especially in Europe

  • Sprawl and fragmentation: Managing workloads across multiple providers, environments, and geographic areas increases cost and complexity

Developers and Data Scientists Don’t Want to Care About Infrastructure

AI developers and data scientists want to focus on models and innovation, not infrastructure operations. The AI infrastructure stack has become too complex for developers to manage directly, yet they remain highly dependent on compute, networking, and platform reliability. In the future, infrastructure must fade into the background so that developers and data scientists can focus on building without being weighed down by operational complexity.

Requirements for AI Infrastructure Platforms

To support the scale, speed, and complexity of modern AI workloads, infrastructure platforms must be thoughtfully designed. A successful AI infrastructure platform must be:

  • Manageable: Platforms must be easy to operate, upgrade, and evolve over time

  • Improvable: Platforms have to continually improve to meet power and performance benchmarks

  • Observable: Every layer of the stack must be designed to provide visibility in order to quickly detect, diagnose, and resolve issues (see the sketch after this list)

  • Controllable: Organizations need to maintain control not only over their own environments but also across public cloud and vendor-managed infrastructure

  • Performant: Platform performance needs to be reliable and measurable, or it doesn’t meet the core demands of AI workloads
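
As an illustration of the observability requirement above, here is a minimal sketch assuming a Prometheus-based monitoring stack; the metric names and port are invented for the example.

```python
# Expose per-request latency and error metrics for scraping
# (assumes: pip install prometheus-client).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

INFER_LATENCY = Histogram("inference_latency_seconds",
                          "Time spent serving one inference request")
INFER_ERRORS = Counter("inference_errors_total",
                       "Failed inference requests")

def handle_request() -> None:
    with INFER_LATENCY.time():                   # record duration on exit
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for model work
        if random.random() < 0.05:
            INFER_ERRORS.inc()                   # count failures for alerting

if __name__ == "__main__":
    start_http_server(9100)   # Prometheus scrapes /metrics on port 9100
    while True:
        handle_request()
```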

Key Building Blocks of AI Infrastructure 

The AI stack can be visualized in layers, each supporting the one above:

  1. Applications: AI workloads, such as code generation or computer vision, drive platform needs

  2. App platform as a service: Developer experience, lifecycle tooling, and support for models

  3. GPU platform as a service: Training, inference, orchestration, provisioning

  4. Infrastructure as a service: Kubernetes, virtualization, multi-cloud services

  5. Hardware layer: GPU, network, storage, and compute

Everything is connected with a unified management and observability layer.

Fundamental Infrastructure Considerations

When designing an AI infrastructure platform, organizations must ask foundational questions about their providers, management architecture, and platform flexibility. These considerations help ensure the infrastructure aligns with long-term business, operational, and regulatory needs.

Providers: Who You Build On Matters

Selecting the right infrastructure and cloud providers has a direct impact on performance, compliance, and cost. These questions help assess whether a provider can meet your current and future requirements:

  • Is data sovereignty a requirement?

  • What are my latency needs?

  • What are my availability needs?

  • What are my cost constraints?

  • Can I move my data if I need to?

Management: Can You Operate at Scale?

A well-designed infrastructure platform must be manageable, repeatable, and decoupled from underlying services. These questions focus on operational efficiency and supportability:

  • Are the management tools separate from the data plane?

  • Can you create repeatability?

  • Do the management tools allow you to select the best possible solutions for your needs today?

  • Are managed services available to help you support the platform?

Flexibility: Will Your Stack Evolve With You?

Long-term success depends on the platform’s ability to adapt. These questions test whether your architecture can evolve without lock-in or technical debt:

  • Can you add to the stack as needed?

  • Can you switch out components?

  • Are you locked into the solutions provided by one vendor?

  • Can you deploy based on your matrix of needs at a given time?

Strategic Open Infrastructure

All of this leads to a clear objective: building strategic open infrastructure that blends the rapid time to value of vendor solutions with the autonomy of open source. On one hand, vendor solutions bring production-ready capabilities, enterprise support, and fast deployment cycles, which are essential for delivering results quickly and at scale. On the other hand, open source technologies offer the flexibility to customize and innovate while avoiding vendor lock-in. Strategic open infrastructure intentionally combines the two to support accelerated adoption without sacrificing flexibility.

Key Architectural Requirements for Success

To support the growing demands of AI workloads and maintain long-term success, AI infrastructure must be designed with the following architectural principles in mind:

  • Composable infrastructure: Modular and adaptable architecture allows teams to reconfigure or replace components as needed, ensuring that the platform can evolve instead of eventually becoming outdated

  • Borderless computing: The ability to access and deploy resources wherever they’re needed is critical for supporting distributed environments

  • Observability at all layers: Every layer must be instrumented for monitoring, troubleshooting, and optimization so teams can manage performance and quickly diagnose issues

  • Performance guarantees: Infrastructure must be able to deliver specific, measurable outcomes

  • Template-driven deployment: Rather than reinventing the wheel for every deployment, teams can apply proven patterns to accelerate setup and maintain consistency (a minimal sketch follows this list)
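
As a minimal sketch of template-driven deployment, the snippet below renders a plain Kubernetes Deployment from one shared pattern; the names, image, and replica counts are illustrative, not any particular platform's schema.

```python
# One proven pattern, many deployments: fill a shared template per environment.
from string import Template

DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
  selector:
    matchLabels: {app: $name}
  template:
    metadata:
      labels: {app: $name}
    spec:
      containers:
      - name: $name
        image: $image
""")

def render(name: str, image: str, replicas: int) -> str:
    """Fill the shared pattern with per-environment values."""
    return DEPLOYMENT_TEMPLATE.substitute(name=name, image=image, replicas=replicas)

# Hypothetical values; swap per team or environment without changing the pattern.
print(render("inference-api", "registry.example.com/inference:1.2", 3))
```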

k0rdent: Solving for Scalable, Modern AI Infrastructure

After delving into the complexity of today’s AI infrastructure landscape, the natural question becomes: What is the path forward?

Mirantis provides a compelling answer with open source k0rdent, a Kubernetes-native, declarative platform that provides a common control plane across diverse environments, including AWS, Azure, GCP, private cloud, data centers, and edge. 

k0rdent is built to deliver secure, multi-tenant access to shared AI infrastructure, and it embeds observability, policy enforcement, and cost controls directly into AI workflows. k0rdent also lowers operational overhead by unifying training, inference, and MLOps pipelines on a single platform. Its declarative model reduces time to value by automating provisioning and enabling GPU-aware orchestration.
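
To illustrate the declarative idea in miniature: you state the desired state, and a reconcile loop converges the actual state toward it. The sketch below is a generic illustration of that pattern, not k0rdent's actual API or schema.

```python
# Generic reconcile-loop sketch: desired state in, convergence actions out.
desired = {"clusters": {"edge-eu": {"gpu_nodes": 2}, "dc-us": {"gpu_nodes": 8}}}
actual = {"clusters": {"dc-us": {"gpu_nodes": 4}}}

def reconcile(desired: dict, actual: dict) -> None:
    for name, spec in desired["clusters"].items():
        have = actual["clusters"].get(name)
        if have is None:
            print(f"create cluster {name} with {spec['gpu_nodes']} GPU nodes")
        elif have["gpu_nodes"] != spec["gpu_nodes"]:
            print(f"scale {name}: {have['gpu_nodes']} -> {spec['gpu_nodes']} GPU nodes")
    for name in actual["clusters"].keys() - desired["clusters"].keys():
        print(f"delete cluster {name} (no longer declared)")

reconcile(desired, actual)
# create cluster edge-eu with 2 GPU nodes
# scale dc-us: 4 -> 8 GPU nodes
```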

For organizations navigating the shift to AI-driven systems, k0rdent provides a cohesive platform aligned with the principles of openness, control, and long-term adaptability. It is how Mirantis brings the vision of future-ready AI infrastructure to life.

Next Steps

As Shaun O’Meara outlined at the RAISE Summit, AI has transitioned from a mere workload to the force reshaping the entire infrastructure stack. The widespread adoption of AI across industries demands infrastructure that treats intelligent workloads as the norm, not the exception. With traditional models breaking down, organizations must embrace platforms that are purpose-built for AI: composable, observable, scalable, and ready to deliver performance across all environments.

k0rdent provides the foundation for this shift with a unified, Kubernetes-native platform that brings AI infrastructure under one declarative control plane. 

To learn more about k0rdent and see how you can build your own scalable and sovereign AI Factory, download the Mirantis AI Factory Reference Architecture.

Medha Upadhyay

Product Marketing Specialist

