Mirantis Kubernetes Engine offers simplified cluster management with NVIDIA GPU Operator
As demand to run containerized AI/ML workloads skyrockets, demand for NVIDIA GPU-accelerated systems is following suit. Enterprises running AI/ML workloads want to move away from the complexity of manually installing and managing the multiple components needed to provision GPU servers. They expect a simple, consistent, easy-to-use tool that manages policies, provisions server components, enables smart routing, and maintains high security by integrating with Active Directory, LDAP, and certificate stores. Ideally, the same tool should also govern VM-based workloads to streamline operations.
Mirantis and NVIDIA are partnering to make it faster and easier for developers to build and run GPU-accelerated containers using Mirantis Kubernetes Engine. The new integration is based on the NVIDIA GPU Operator, which automates the lifecycle management of the various software components required to expose GPUs on Kubernetes. It enables advanced functionality and increased GPU performance, utilization, and telemetry. Mirantis has worked with NVIDIA to validate the NVIDIA GPU Operator on Mirantis Kubernetes Engine 3.5.7+ and 3.6.2+, optimizing the GPU Operator to quickly provision GPU servers and run containerized AI/ML workloads.
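As a rough sketch of what this looks like in practice, the GPU Operator is typically installed from NVIDIA's Helm repository, after which pods can request GPUs through the standard `nvidia.com/gpu` resource. The namespace name and release name below are illustrative choices, not MKE-specific requirements; consult the validated MKE configuration for production settings.

```shell
# Add NVIDIA's Helm repository and install the GPU Operator.
# (Release name "gpu-operator" and namespace are illustrative.)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace

# Once the operator has labeled GPU nodes and deployed the driver
# and device plugin, a workload requests a GPU like any resource:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test          # hypothetical test pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-test
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1       # request one GPU
EOF
```

If the operator is healthy, `kubectl logs gpu-smoke-test` shows the `nvidia-smi` output from inside the container, confirming the GPU is exposed to workloads.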
Given constant cybersecurity threats and increasingly strict data classification requirements, many of our large customers in financial services, the public sector, healthcare, and telecommunications, as well as edge computing customers, expect to run their software in air-gapped environments with strict security and data protection. The NVIDIA GPU Operator enables customers to easily configure and consume NVIDIA accelerators for AI/ML workloads in even the most restrictive environments.
To learn more about how Mirantis and NVIDIA can simplify operations for your containerized GPU workloads, contact us.