Deploy, manage, and secure applications with globally consistent Kubernetes environments that run anywhere.
What is Kubernetes?
Kubernetes is an open source orchestrator that automates the management, placement, scaling, and routing of connections to and among containers. It provides a way of hosting container-based workloads that keeps applications highly available and performant even when problems occur in the underlying infrastructure or in the containerized software itself.
Kubernetes manages this, first, by coordinating among many different types of higher-order Kubernetes services, which go by different names and perform interrelated functions. Collectively, these Kubernetes services work to maintain applications in a desired state, described in aggregate by the configurations of each of the application's components.
If something happens to change that state (for example, a worker node fails and many containers vanish), Kubernetes services quickly identify the problem, restart the vanished containers on available nodes, and make them accessible once again to one another and to other application components. This is very powerful: it eliminates on-call crises and limits the need for human operators to intervene in emergencies to keep important applications running, all without Kubernetes users needing to write application-specific automation.
As noted, the Kubernetes services that provide core functionality go by many names, like the API service, the scheduling service, and the load balancing service. Developers can also create custom Kubernetes services, called Operators, to maintain the state of applications and assist in managing them. Most of this functionality, meanwhile, is enabled by another kind of Kubernetes service: one that's actually called a "Kubernetes Service," among other things.
This is actually a configuration (concretely, a YAML or JSON file) that defines aspects of a workload or component, such as how many replicas of that workload or component you want to keep running. This kind of Kubernetes service abstracts application configuration away from the actual, running application components (containers and pods): you define the services, and let Kubernetes worry about the details of allocating resources and keeping them running.
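As a sketch of what such a configuration looks like, the following is a minimal Deployment manifest that asks Kubernetes to keep three replicas of a container running. The names (`web`) and the image are placeholders chosen for illustration, not anything referenced by this page:

```yaml
# Hypothetical Deployment manifest: declares a desired state of 3 replicas.
# Kubernetes restarts or reschedules pods as needed to maintain this state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # desired number of running pods
  selector:
    matchLabels:
      app: web              # pods managed by this Deployment
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # placeholder container image
          ports:
            - containerPort: 80
```

You declare the desired state in the manifest; the cluster's controllers continuously reconcile reality against it.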
Besides “Kubernetes Services,” Kubernetes services of this latter type are called several different things, depending on what they define and where they sit in Kubernetes’ hierarchy of abstractions. Specifically, a Kubernetes Service defines a set of Pods (each Pod contains and supports one or more running containers) and exposes them for access by other components or external clients on an IP address or equivalent. A Service can have one of several types, including ClusterIP, which makes the Service (and its Pods) accessible within the cluster, and NodePort, which exposes the Service on a chosen port on all cluster nodes. Another Service type is LoadBalancer, which tells the cluster to negotiate with an external load balancing service (e.g., one provided by a host cloud) to provide load balancing for the Pods on a fixed IP address.
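A minimal Service manifest illustrating the types described above might look like the following (the `web` name and label are placeholders, assumed to match pods created elsewhere):

```yaml
# Hypothetical Service manifest: exposes pods labeled app=web on a stable
# cluster-internal IP. Changing "type" changes how the Service is exposed.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP           # alternatives: NodePort, LoadBalancer
  selector:
    app: web                # pods this Service routes traffic to
  ports:
    - port: 80              # port the Service listens on
      targetPort: 80        # container port traffic is forwarded to
```

With `type: NodePort`, the same Service would also be reachable on a port on every cluster node; with `type: LoadBalancer`, the cluster would request an external load balancer from the hosting environment.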
Mirantis Kubernetes Engine (formerly Docker Enterprise/UCP) provides a secure and fully conformant Kubernetes distribution for developers and operators of all skill levels. With Mirantis Kubernetes Engine, organizations can run Kubernetes interchangeably with Swarm orchestration for ultimate flexibility at runtime. Deployed clusters can also be attached as children to join fleets of clusters managed by Mirantis Container Cloud, our multi-cloud container platform.
Mirantis Kubernetes Engine
Rapidly deployable, scalable, consistent environments on any infrastructure, secure by default. Make DevOps portable: one standard web UI and one set of familiar tools drive dev, test, and production with minimal variation; no proprietary skills required.
Additional layers of security with built-in encryption and advanced role-based access controls. Ready to scale across multiple tenants and across hybrid/multi-cloud environments with Linux, Windows, and/or GPU-equipped worker nodes.
Full Platform Integration
The only container platform with Kubernetes that includes both a secure container runtime (Mirantis Container Runtime) that’s FIPS 140-2 validated and an integrated image registry solution (Mirantis Secure Registry) with built-in scanning and signing. Mirantis Kubernetes Engine also integrates Calico networking and Istio ingress with Kubernetes, providing “batteries included” clusters that are ready for work.
How Mirantis enhances Kubernetes
Advanced Access Controls
Mirantis Kubernetes Engine includes integrated RBAC that works with corporate LDAP, Active Directory, PKI certificates, and/or SAML 2.0 identity provider solutions.
Multi-Tenancy Made Simple
Scale Mirantis Kubernetes Engine to support multiple teams through clear separation of resources and node-based isolation. Restrict visibility for different user groups and operate multi-tenant environments with ease.
Mirantis Kubernetes Engine includes Calico as the Kubernetes CNI plug-in for a highly scalable networking and routing solution. Gain access to overlay (IPIP), no-overlay, and hybrid data-plane networking models. Built-in Istio ingress provides fine-grained gateway controls and load balancing for applications.
Secure by Default
Automatically deploy a secure Kubernetes cluster with mutual TLS authentication. Leverage FIPS 140-2 validated encryption in Mirantis Container Runtime (formerly Docker Engine – Enterprise) with your Kubernetes deployment.
Secure Software Supply Chain
Mirantis Kubernetes Engine offers integrated security for the entire lifecycle of an application. Leverage Mirantis Secure Registry to centrally scan, sign, and store images to prevent unvalidated content from being deployed.
Certified Ecosystem and Infrastructure
Deploy and run Kubernetes on fully supported infrastructure. Integrate with vendor-certified monitoring and logging tools or leverage storage and networking plugins.