Complexity of a Full-Stack, End-to-End AI Solution
Despite a high level of in-house expertise, Nebul faced operational challenges in providing seamless allocation and multi-tenant provisioning of their full-stack AI Cloud, which spans AI software stacks, containers, data intelligence, and infrastructure layers. Integrating their platform services — including workload orchestration, NVIDIA operators, the data fabric, and databases — with their applications and every component of the cloud infrastructure was extremely complex.
Their Private AI Cloud infrastructure combines Kubernetes container orchestration with GPU, CPU, and DPU compute resources across multiple generations of NVIDIA hardware clusters. The Kubernetes clusters integrate Ethernet, InfiniBand, and NVIDIA NVLink networks, as well as block, file, and object storage. Additionally, for virtual machines they are transitioning from traditional VMware virtualization to Mirantis OpenStack for Kubernetes, a highly scalable virtualization solution that containerizes OpenStack infrastructure-as-a-service. Maintaining this complex cloud infrastructure demanded significant operational toil, including lifecycle management of many drivers, operators, and other components.
A “Shared Nothing” Approach to Data Privacy
To improve operational efficiency, Nebul investigated Kubernetes multi-tenant capabilities. However, they determined that Kubernetes multi-tenancy was not robust enough to meet their stringent security and isolation requirements.
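For context, Kubernetes-native multi-tenancy is typically "soft" isolation built from namespaces, network policies, and resource quotas. A minimal sketch (tenant names and quota values are illustrative assumptions, not Nebul's configuration):

```yaml
# "Soft" multi-tenancy: one namespace per tenant, fenced off with a
# NetworkPolicy and a ResourceQuota. Tenants still share the same API
# server, control plane, and node kernels — which is why this model can
# fall short of strict security and isolation requirements.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                       # hypothetical tenant name
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant-a
spec:
  podSelector: {}                      # applies to all pods in the namespace
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}              # allow ingress only from within the namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.nvidia.com/gpu: "4"       # cap the tenant's GPU requests
```

Because the control plane and node operating systems remain shared across namespaces, this level of separation is weaker than giving each customer a dedicated cluster.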
They decided to deploy individual clusters for each customer, adopting a “shared-nothing” approach to ensure that customer data and models remain private and secure. Unfortunately, this quickly resulted in costly Kubernetes cluster sprawl that was painful and time-consuming to manage at scale, taking technical staff away from more strategic, revenue-generating activities.

Unifying Kubernetes Clusters to Streamline Management
Nebul deployed Mirantis k0rdent AI, an enterprise-grade AI infrastructure solution for platform architects and MLOps engineers that simplifies the experience of deploying AI workloads at scale. Mirantis k0rdent AI uses the open source k0rdent project to provide composable, extensible Kubernetes-native capabilities for centrally managing AI infrastructure and integrations across public cloud, on-premises, hybrid, and edge locations — all with a single point of control.
Simplified Operations for AI Infrastructure
Mirantis k0rdent AI greatly simplifies provisioning for Nebul with pre-validated templates for different types of clusters. Templates make it easy to integrate networking, storage, security, and other key services from k0rdent’s open ecosystem of cloud native technologies. Mirantis k0rdent AI also provides FinOps and observability tools to help ensure cost-efficient, high-performing clusters and keep cluster sprawl under control.
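The template-driven model can be pictured as declaring a cluster that references a pre-validated template plus deployment-specific configuration. The sketch below is illustrative only: the resource kind, field names, and values are assumptions, not the exact k0rdent schema, so consult the k0rdent documentation for the real API.

```yaml
# Hypothetical sketch of template-driven cluster provisioning.
# All names and fields are illustrative assumptions.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: customer-a-gpu-cluster     # one dedicated cluster per customer
  namespace: kcm-system
spec:
  template: gpu-cluster-template   # pre-validated cluster template (illustrative name)
  credential: on-prem-credential   # infrastructure credentials (illustrative name)
  config:
    controlPlaneNumber: 3          # deployment-specific overrides
    workersNumber: 8               # e.g., GPU worker nodes
```

Declaring each customer cluster this way keeps the "shared-nothing" isolation model while replacing per-cluster manual toil with a single, centrally managed point of control.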