Announcing Docker Enterprise 3.1 General Availability
May 28, 2020
As you may know, last November Mirantis acquired the Docker Enterprise business from Docker, Inc., and since then we've been, as you might imagine, quite busy! The two teams have been integrating their efforts, combining the best of both worlds into the strongest products, services, and support we can bring to our customers.
Now, six months later, we are proud to announce the general availability of Docker Enterprise 3.1, with new features that let you up your Kubernetes game even more. Highlights of this release include:
- K8s on Windows
- GPU support
- Istio Ingress
- A new UCP Installer
- Upgrade to K8s 1.17
Kubernetes on Windows
From the start, Kubernetes has been an extremely Linux-centric project, which is understandable, as containers themselves evolved from Linux constructs such as cgroups. But what does that mean for Windows developers? After all, Docker runs on Windows and makes it possible to run Linux containers (albeit using virtualization).

Over the last few Kubernetes releases, the community has been working on the ability to run on Windows, and with Docker Enterprise 3.1, you can now easily add Windows worker nodes to a Kubernetes cluster and manage them with UCP just as you would manage traditional Linux nodes.
The ability to orchestrate Windows-based container deployments lets organizations leverage the wide availability of components in Windows container formats, both for new application development and app modernization. It provides a relatively easy on-ramp for containerizing and operating mission-critical (even legacy) Windows applications in an environment that helps guarantee availability and facilitates scaling, while also enabling underlying infrastructure management via familiar Windows-oriented policies, tooling, and affordances. Of course, it also frees users to exploit Azure Stack and other cloud platforms offering Windows Server virtual and bare metal infrastructure.
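To give a sense of how this works in practice, here's a minimal sketch of steering a workload onto Windows workers using the standard kubernetes.io/os node label. The deployment name and image are purely illustrative, not part of the product:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-demo                  # hypothetical name for illustration
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iis-demo
  template:
    metadata:
      labels:
        app: iis-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only onto Windows worker nodes
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis   # example Windows container image
        ports:
        - containerPort: 80
```

Linux workloads simply omit the selector (or set it to linux), so mixed clusters can run both side by side.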
GPU support
There was a time when Graphics Processing Units (GPUs) were just for gaming, but that time has long since passed; now they are an essential part of efficiently performing the heavy calculations that are becoming more and more a part of enterprise life. Even before Machine Learning and Artificial Intelligence crept onto the enterprise radar, large corporations had data mining operations that have prepared them for the coming onslaught.

Docker Enterprise 3.1 with Kubernetes 1.17 makes it simple to add standard GPU worker node capacity to Kubernetes clusters. A few easily-automated steps configure the (Linux) node, either before or after joining it to Docker Enterprise; thereafter, Docker Kubernetes Service automatically recognizes the node as GPU-enabled, and deployments that require (or can use) this specialized capacity can be tagged and configured to seek it out.
This capability complements the ever-wider availability of NVIDIA GPU boards for datacenter (and desktop) computing, as well as rapid expansion of GPU hardware-equipped virtual machine options from public cloud providers. Easy availability of GPU compute capacity and strong support for standard GPUs at the container level (for example, in containerized TensorFlow) is enabling an explosion of new applications and business models, from AI to bioinformatics to gaming. Meanwhile, Kubernetes and containers are making it easier to share still-relatively-expensive GPU capacity, or configure and deploy to cloud-based GPU nodes on an as-needed basis, potentially enabling savings, since billing for GPU nodes tends to be high.
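As a sketch of what that tagging looks like, a pod can ask the scheduler for GPU capacity through the standard nvidia.com/gpu extended resource (assuming the cluster exposes it via the NVIDIA device plugin, as described above). The pod name, image tag, and command here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # hypothetical name for illustration
spec:
  restartPolicy: Never
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:2.1.0-gpu   # GPU-enabled TensorFlow image
    command: ["python", "-c", "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"]
    resources:
      limits:
        nvidia.com/gpu: 1         # request one NVIDIA GPU; the pod only lands on a GPU-enabled node
```

Because the GPU is requested as a resource limit, the scheduler will only place the pod on a node that actually advertises GPU capacity.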
Istio Ingress
When you are using Kubernetes, you don't want to expose your entire cluster to the outside world. The safe and secure thing to do is to expose only as much of your cluster as necessary to handle incoming traffic. Ideally, you would also want to configure additional handling logic based on routes, headers, and so on.

You may have heard of Istio, the service mesh that gives you extremely powerful and granular control of traffic among the parts of a distributed application. Part of Istio is Istio Ingress, a drop-in replacement for Kubernetes Ingress, which controls the traffic coming into your cluster. (Learn more about what a service mesh is by reading our guide to Istio.)
Docker Enterprise 3.1 includes Istio Ingress, which can be controlled and configured directly from UCP 3.3.0. That means you can easily enable or disable the service directly from the user interface or the CLI.
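Once Istio Ingress is enabled, inbound routing is described with Istio's own Gateway and VirtualService resources. The sketch below assumes the upstream istio: ingressgateway selector plus placeholder hostnames and backend services; the labels in your UCP deployment may differ:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-gateway               # hypothetical name
spec:
  selector:
    istio: ingressgateway         # bind to the Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"          # placeholder hostname
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-routes                # hypothetical name
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - web-gateway
  http:
  - match:
    - uri:
        prefix: /api              # route based on the request path
    route:
    - destination:
        host: api-backend         # placeholder in-cluster service
        port:
          number: 8080
  - route:
    - destination:
        host: web-frontend        # default route for everything else
        port:
          number: 80
```

The second HTTP rule acts as a catch-all, so anything that doesn't match /api falls through to the frontend service.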
Mirantis Launchpad CLI Tool for Docker Enterprise
Docker Enterprise is meant to make your life easier by giving you a more straightforward way to perform tasks such as adding servers to your Kubernetes or Swarm clusters, but before you ever get there, you have to install it. Until now, this has been a somewhat manual process, but Docker Enterprise 3.1 includes a new CLI tool, Mirantis Launchpad, that takes the pain and complexity out of deployment and upgrades.

The process is simple. All you need to do is download the installer, tell it where to find your servers, and let it go. (There are a couple of optional intermediate steps depending on your deployment preferences, but we've tried to make the process as frictionless as possible.)
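As a rough illustration of "telling it where to find your servers," Launchpad reads a small YAML cluster definition. The API version and field names below are assumptions made for the sake of the sketch, so check the Launchpad documentation for the exact schema:

```yaml
# Rough sketch of a Launchpad cluster definition -- field names are
# illustrative assumptions, not the authoritative schema.
apiVersion: launchpad.mirantis.com/v1beta1   # assumed API version
kind: UCP
spec:
  hosts:
    - address: 192.0.2.10        # placeholder manager node
      role: manager
      user: ubuntu               # SSH user (assumed field name)
      sshKeyPath: ~/.ssh/id_rsa  # SSH key (assumed field name)
    - address: 192.0.2.11        # placeholder worker node
      role: worker
      user: ubuntu
      sshKeyPath: ~/.ssh/id_rsa
  ucp:
    version: 3.3.0               # UCP version to install
```

With a definition like this saved to disk, a single launchpad apply run connects to the listed hosts over SSH, installs the engine, and brings up UCP.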
For more information on how to get and use Mirantis Launchpad, click here.
Upgrade to K8s 1.17
Finally, Docker Enterprise 3.1 upgrades the included version of Kubernetes to 1.17, which means you now have access to all of the features that come with that release, such as:
- IPv4/IPv6 dual-stack support and awareness for Kubernetes pods, nodes, and services
- The ability to automatically prevent workloads from being scheduled to a node based on conditions such as memory usage or disk space
- CSI Topology support, which attempts to ensure that workloads are scheduled to nodes that actually host the volumes they're going to use, improving speed and performance
- Environment variable expansion in subPath mounts, and defaulting for CustomResources, both of which expand capabilities for end-user developers
- The ability to set runAsUserName for Windows workloads, just as you can set the user for Linux workloads (see the sketch below)
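For instance, a Windows pod can now declare the account its processes run as through the pod security context. The pod name and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-identity-demo    # hypothetical name for illustration
spec:
  nodeSelector:
    kubernetes.io/os: windows    # only meaningful on a Windows worker
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # built-in low-privilege Windows container account
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # example Windows base image
    command: ["cmd", "/c", "whoami"]
```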