KubeCon Atlanta 2025: AI Infra “Plumbing” Takes Center Stage
KubeCon + CloudNativeCon North America, held November 10-13, 2025 in Atlanta, GA, felt like a validation of Mirantis’ vision: that once you step beyond the fascinating research around models, training, scaffolds, and agents, AI’s biggest real-world implementation challenges are about infrastructure. Basically plumbing: lots of composable pieces, most of them open source, wired together in next-generation-PaaS-adjacent, self-similar platforms. Platforms that work automatically, reliably, repeatably, and securely at scale, and let engineers, data scientists, and developers do their thing with minimum concern for how the plumbing works.
The Meta Theme: AI Infrastructure Is Moving Down into Foundations
AI-relevant initiatives begun in the Kubernetes open source ecosystem over the past couple of years – all fundamentally about plumbing – made key (though mostly quiet) announcements at KubeCon Atlanta. CNCF itself announced a Kubernetes AI Conformance Program, which aims to standardize the execution of AI/ML workloads across Kubernetes environments to enable workload portability, interoperability, and predictability across compliant platforms.
Another ‘plumbing’ announcement: KServe has recently become a CNCF Incubating project. KServe is an abstraction and orchestration system for models that works seamlessly with backends like vLLM, disaggregated serving frameworks like llm-d, Kubeflow ecosystem components, Envoy, Istio, Knative, and more. It provides native Custom Resource Definitions (CRDs) for abstracting models, backends, and runtime frameworks (think TensorFlow, PyTorch, scikit-learn, etc.). The abstraction approach, by the way, shares a lot of complementary conceptual DNA with Mirantis k0rdent AI, and Mirantis k0rdent AI can host KServe on multiple child clusters by installing its serviceTemplates from the k0rdent catalog.
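To make the CRD abstraction concrete, here’s a minimal sketch of a KServe InferenceService manifest. The model name and storage URI are illustrative placeholders, not from this article; the manifest declares *what* to serve, and KServe picks the matching runtime:

```yaml
# Hypothetical example: serve a scikit-learn model via a KServe CRD.
# The modelFormat tells KServe which serving runtime to select;
# storageUri points at a model artifact (placeholder path here).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://example-bucket/models/sklearn/iris
```

The operator pattern does the rest: KServe reconciles this declaration into deployments, autoscaling, and routing, so the same manifest shape covers TensorFlow, PyTorch, or LLM backends by swapping the `modelFormat`.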
Other new incubating projects announced at KubeCon include Kubescape (security scanning), Metal3 (bare metal management), OpenCost (cost analytics), and Kube-OVN (a CNI built on Open Virtual Network and Open vSwitch). All are easily implemented and lifecycle-managed with Mirantis k0rdent AI.
The vibe is that some standardization is happening up and down the AI-on-Kubernetes stack, and that it’s following Kubernetes-native principles: using declarative CRDs and operators to abstract components, make them composable, and package component-specific functionality behind consistent interfaces. At KubeCon Atlanta, this vision was clearly emerging in the area around model serving (with KServe, vLLM, and llm-d acting as ‘three legs of the inference-serving stool’).
It’s also, apparently, coalescing around databases. IBM, for example, announced a big commitment to OpenSearch (the open source fork of Elasticsearch), improvements to opensearch-jvector (scalable approximate nearest-neighbor vector search), and its intention to found an OpenRAG project for Retrieval Augmented Generation.
Agents and MCP ‘plumbing’ were also in the mix. Solo.io debuted agentregistry, an open source, centralized registry for AI agents, MCP servers, and Anthropic-style Agent Skills, positioned as the governance and discovery hub for agentic infrastructure on Kubernetes.
Mirantis at KubeCon Atlanta
Mirantis had a strong presence on stage at KubeCon + CloudNativeCon North America 2025 in Atlanta, not just in the expo hall. Miska Kaipiainen, founder of Lens, was at the show talking about Lens Loop: software that closes the observability and visibility gap in LLM development. He wrote an essay on LinkedIn about the need to enable a feedback loop connecting development, operations, and compliance.

On the cluster management side, Jussi Nummelin, Senior Principal Engineer and lead on k0sproject, focused on Cluster API and multi-cluster operations in related sessions, connecting foundational upstream work to practical fleet-scale operations. Jussi posted a summary on LinkedIn after the event.
Just ahead of KubeCon Atlanta 2025, Mirantis announced Mirantis k0rdent AI support for NVIDIA BlueField DPUs to provide secure, multi-tenant offload for networking and storage in AI-intensive environments, giving operators better performance and isolation for those workloads. Mirantis also announced a collaboration with NVIDIA, where k0rdent provides the Kubernetes-native operational layer for regulated, sovereign, and security-sensitive AI deployments under NVIDIA’s AI Factory for Government initiative.
At booth 820, Mirantis highlighted Mirantis k0rdent AI as a unified plane for AI workloads comprising containers and virtual machines (critical for hosting components and also for tenant isolation). Mirantis k0rdent AI delivers KubeVirt-based (and Mirantis-enhanced) virtualization together with AI PaaS and GPU PaaS capabilities, so hybrid AI workloads can run collectively on a common Kubernetes substrate. Mirantis adds smarter VM placement via a distributed resource balancer and support for GPU-friendly virtualization with awareness of NUMA and PCIe topology, all wired into GitOps-style workflows and built on top of k0s, the CNCF Sandbox Kubernetes distribution.
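As a rough illustration of what GPU-friendly virtualization looks like in KubeVirt terms, here is a minimal sketch of a VirtualMachine manifest requesting a passthrough GPU with dedicated CPU placement. The VM name, GPU resource name, and disk image are hypothetical placeholders; the actual `deviceName` depends on which resources the cluster’s device plugin exposes:

```yaml
# Hypothetical KubeVirt VM with GPU passthrough and pinned vCPUs.
# dedicatedCpuPlacement helps keep vCPUs on one NUMA node;
# the gpus deviceName must match a resource advertised on the node.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-workstation        # placeholder name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8
          dedicatedCpuPlacement: true   # requires CPUManager on the node
        memory:
          guest: 32Gi
        devices:
          gpus:
            - name: gpu1
              deviceName: nvidia.com/GA102GL_A10   # example resource name
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:24.04   # placeholder image
```

Schedulers and balancers (like the distributed resource balancer mentioned above) can then factor NUMA and PCIe locality into where such VMs land, keeping the GPU, its host CPU cores, and memory on the same topology domain.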
Mirantis ran continuous demonstrations of Mirantis k0rdent AI that walked visitors through cluster lifecycle management, GPU pipelines, LLM serving, data services, and agent-style workloads, all running across bare metal, cloud, and edge setups. To keep things fun, they also raffled custom skateboard decks created by Mirantis art director Dave Stoltenberg, tying a bit of culture and visual flair to a week that was otherwise packed with infrastructure, AI, and platform engineering details.
Plumbing is the Key
KubeCon Atlanta had plenty of AI buzz. But it was mostly about the plumbing: open, neutral, community-owned components that will define the next decade of AI infrastructure. That’s exactly where Mirantis k0rdent AI lives: a control plane that can absorb the best open-source components, run them across any substrate (bare metal, cloud, edge), secure them, scale them, and make them real for platform teams.


