
Artificial Intelligence and Machine Learning with Mirantis Cloud Native Platform

Mirantis Kubernetes Engine (formerly Docker Enterprise/UCP) supports Nvidia GPUs to enable rapid, high-precision computation for all your data science and Big Data needs

Video: Mirantis Kubernetes Engine GPU Worker Nodes

Machine Learning can benefit from specialized hardware such as Graphics Processing Units (GPUs), which are tuned for specific types of mathematics. But you can’t achieve the benefit unless your application runs on a server with those capabilities available. Learn how Mirantis Kubernetes Engine runs on Nvidia GPUs on bare metal or public clouds for data science and Big Data.

Containerization with Mirantis Kubernetes Engine makes data scientists, and those who work with them, more productive, and it directly accelerates the pace of innovation in their organizations. It also provides an easier path to Machine Learning and Artificial Intelligence than trying to assemble all of the necessary pieces yourself: teams can build and refine data models faster, making breakthroughs possible sooner and more reliably.

Machine Learning and Artificial Intelligence capabilities are increasingly required as ML/AI moves into the mainstream, powering solutions ranging from predictive analytics to image recognition to sales and operational efficiency. Mirantis Kubernetes Engine provides an ideal environment, combining the advantages of a containerized approach with the ability to use hardware capabilities such as GPU support.
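As a sketch of how GPU support is consumed in practice, a workload on a Kubernetes cluster with GPU worker nodes can request an Nvidia GPU through the standard `nvidia.com/gpu` extended resource. The manifest below is illustrative only: the pod name and image tag are assumptions, and it presumes the Nvidia device plugin is deployed on the cluster's GPU workers.

```yaml
# Illustrative pod that requests one Nvidia GPU and runs nvidia-smi
# to confirm the device is visible inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test        # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example CUDA base image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```

The scheduler places the pod only on a worker node advertising an available GPU, so data science workloads land on the right hardware without manual node selection.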


Portability of Data Science Environments

Streamline experiments, analysis, and practices with Mirantis Kubernetes Engine by capturing environments, data sets, and models (including configuration and dependencies) in a portable package that facilitates reproducibility at scale.

  • The underlying container image records all the instructions to quickly (re)build a container when necessary.

  • The container captures an experiment’s state (data, code, results, package versions, parameters and hyperparameters, and so on) at any point in time as a whole, and can be deployed on your on-premises infrastructure or on public cloud platforms such as AWS, Microsoft Azure, and Google Cloud Platform.

  • If needed, data scientists can set up and run multiple configurations at the same time using isolated Docker containers, keeping each environment independent of the others, even on the same system.

  • Save money by perfecting your applications and algorithms on local, on-premises machines, then performing your actual production runs on more expensive public cloud resources.

  • Running multiple instances can simplify and increase performance for certain types of ensembles, enabling parallelization of computations where possible.
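The "multiple configurations in isolation" pattern above can be sketched with a Docker Compose file that launches the same experiment image twice with different hyperparameters. Everything here is an assumption for illustration: the registry host, repository name, environment variable, and volume layout are hypothetical, not part of any Mirantis product configuration.

```yaml
# Illustrative docker-compose.yml: two isolated runs of one experiment
# image, differing only in a hyperparameter passed via the environment.
services:
  experiment-a:
    image: registry.example.com/team/experiment:1.0   # hypothetical image
    environment:
      LEARNING_RATE: "0.01"
    volumes:
      - ./data:/data:ro     # shared, read-only input data set
  experiment-b:
    image: registry.example.com/team/experiment:1.0
    environment:
      LEARNING_RATE: "0.001"
    volumes:
      - ./data:/data:ro
```

Because each service runs in its own container, the two configurations cannot interfere with one another's packages, parameters, or results, even on the same host.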

Simplified Collaboration

Use Mirantis Secure Registry to securely share experiments with colleagues, and build on prior work to take innovation to the next level. And do it in a way that protects your data, your applications, and your workflow.

  • Research teams can securely store, manage and collaborate on container images for experiments and analyses, with the ability to manage users and permissions.

  • Securely store datasets and data models while still making them easily available to colleagues — but only those who should have access.

  • Ensure that applications use only models that come from the registry by requiring signed images, or verify the provenance and integrity of a data set before using it.

  • Test models against production-grade standards before sharing with colleagues.
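The signed-image workflow mentioned above can be sketched with Docker Content Trust, which signs images on push and verifies signatures on pull. The registry host and repository name below are illustrative, not actual Mirantis Secure Registry addresses.

```shell
# Enable Docker Content Trust so images pushed from this shell are
# signed, and subsequent pulls verify signatures before running.
export DOCKER_CONTENT_TRUST=1

# Hypothetical registry and repository; push publishes the image
# together with its signature metadata.
docker push registry.example.com/research/experiment:1.0
```

With signing enforced cluster-side, only images whose signatures verify against trusted keys are admitted, so colleagues can be confident a shared model image is the one its author published.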

Additional Case Studies