Try out Docker Enterprise or generate PoC clusters quickly and confidently with our new deployer
Released in beta with Docker Enterprise 3.1, Mirantis Launchpad is a simple, robust CLI deployer that works out of the box, letting you quickly configure, deploy, tear down, and update Docker Enterprise clusters for trials, PoCs, labs, and development on almost any infrastructure. It also integrates with Terraform and other tools for low-level IaaS provisioning.
Right now, Mirantis Launchpad only deploys Docker Enterprise itself, due to (now changing) limits on how Docker Trusted Registry applies license files. In coming weeks, Mirantis will add the ability to deploy DTR alongside Docker Enterprise, and add layers of custom configurability while preserving sensible defaults. The evolving result will remain the easiest way of deploying demo and (eventually) full production Docker Enterprise clusters: readily integrated with other automation you may be using, and complementary to existing deployment solutions.
Using Mirantis Launchpad
Mirantis Launchpad is a command-line tool that runs on Linux, Mac, or Windows. To deploy and evaluate Docker Enterprise, you’ll need to set up a deployer/evaluation machine (this may be your own laptop or a separate, task-dedicated VM) and target virtual machines, using a public or private cloud or local desktop virtualization solution (e.g., VirtualBox).
Prepare a deployer/evaluation machine and target VMs
Launchpad itself requires only SSH access to all target machines. However, many evaluators will want a single machine for running the Docker Enterprise Universal Control Plane (UCP) web UI, kubectl (the Kubernetes management client), Docker (for developing container workloads, interacting with Docker Hub, and/or managing container images on the cluster’s local Docker registry), the Kubernetes REST API via curl or language-specific SDKs, and applications running on the cluster behind NodePorts or ingress. Whichever machine you use for this, you’ll need to provide secure network connectivity on many IPv4 ports (simplest to enable it on all ports) between this machine and the target VMs.
A small demo deployment can be done on as few as two virtual machines running a supported operating system, and configured to comply with Docker Enterprise minimum requirements. Docker Enterprise manager nodes must run on Linux. Worker nodes can run on Windows Server.
Important: Target Linux VMs should be configured for SSH access using a private key (see this tutorial), and login accounts should belong to the sudoers group, with passwordless sudo enabled (see this tutorial). This is the default setup for Linux VMs on most public clouds.
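A quick way to confirm a Linux target is ready is to test key-based SSH login and passwordless sudo in one shot; the hostname, username, and key path below are placeholders for your own environment:

```shell
# node1.example.com, theuser, and the key path are placeholders -- substitute your own.
# 'sudo -n true' exits non-zero immediately if sudo would prompt for a password.
ssh -i ~/.ssh/id_rsa theuser@node1.example.com 'sudo -n true' \
  && echo "target ready: key login and passwordless sudo OK" \
  || echo "target not ready: check key configuration and sudoers"
```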
Important: Windows Server machines will need to be set up by the Administrator for access with SSH or WinRM. Instructions for doing so can be found here, under the heading ‘Remote Management.’
Important: If you used a public cloud service to create a new SSH keypair for the target machines in your deployment, remember to retrieve the private key and copy it to your deployer machine (on Linux, keys are typically stored in the .ssh folder in your home directory). If you’ve deployed Windows Server on target nodes, retrieve the Administrator passwords (most public cloud services encrypt these) via the cloud console, for use as your SSH or WinRM passwords.
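After copying the key, remember that OpenSSH refuses to use a private key that is group- or world-readable, so tighten its permissions; the filename below is a placeholder:

```shell
# 'my-keypair.pem' is a placeholder for whatever your cloud console named the downloaded key.
mkdir -p ~/.ssh
mv ~/Downloads/my-keypair.pem ~/.ssh/my-keypair.pem
chmod 600 ~/.ssh/my-keypair.pem   # owner read/write only
```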
Networking considerations for target and deployer machines are discussed below.
Target machines need internet access (to download Docker Enterprise components under Launchpad’s control), and must be able to see one another on numerous ports for Docker Enterprise nodes to interoperate (the complete list is here). They’ll also need to be accessible from the deployer/evaluation machine on port 22 (for deployment) and other ports for UCP webui and client access.
If you intend to deploy your evaluation cluster(s) on a public or private cloud, a convenient and relatively secure setup for evaluation, therefore, might use a desktop VM deployer/evaluation machine configured with a desktop and browser (to run Launchpad, all Kubernetes clients, the UCP webui, and other tools) with normal internet access, connected to remote target VMs (e.g., on the same AWS subnet) using a VPN to encapsulate traffic on all ports between deployer and subnet targets.
Alternatively, you might deploy a medium VM on the same subnet as your targets (effectively a jumpbox), install a desktop OS and browser there, install Launchpad, kubectl, and other tools, and access the jumpbox using VNC, NoMachine, or another remote-desktop tool, either via a public IP address or VPN.
Target machines can be given public IP addresses, if desired, or can work with internal addresses only, provided they have internet access and your deployer/evaluation machine can reach them. In principle, you should be able to create one security group for all target machines (optionally also applying it to your deployer machine) enabling all-ports, any-to-any access among machines on the same subnet or within the same security group. This should allow all components of Docker Enterprise to interoperate among nodes, and permit convenient access by Launchpad, browser, and CLI clients between the deployer/evaluation machine and your cluster. (Note: If you’re deploying on AWS (and perhaps other public clouds), depending on your configuration, it may also be necessary to explicitly allow machine-to-machine communication on private IP addresses. Do this on the AWS console by selecting the machines, then Actions -> Networking -> Change source/destination check -> Disable.)
If you intend to deploy your evaluation cluster(s) on a desktop virtualization platform (e.g., VirtualBox) running on one or more machines on a typical home network (i.e., with routed internet access), the easiest approach is to create a deployer/evaluation VM with a desktop OS, set up target machines with a supported OS (e.g., Ubuntu 18.04), and configure all machines with ‘bridged’-type networking, which gives them IP addresses on your local network (in the 192.168.x.x Class C private IP address range), all-ports mutual visibility, and internet access as needed. Note that this will require you to specify a CIDR for Kubernetes pods (pod-cidr) in some non-overlapping range (see below).
Mirantis Launchpad is written in Go and distributed as binaries for direct execution on Windows, Mac, or Linux. To get started, visit our download page to register, and either download the binary from there, or (perhaps easier), just visit our Launchpad GitHub repo and grab a link to the latest version (under Releases). The repo’s readme.md contains expanded documentation.
The binary should be downloaded to a convenient folder using your browser or (on a browser-free jumpbox, for example) wget or curl, optionally renamed (we renamed it to ‘launchpad’), and made executable. On (Ubuntu) Linux, we did this as follows:
mv launchpad-linux-x64 launchpad
chmod +x launchpad
We could then test the installation by executing launchpad with the ‘version’ argument:
./launchpad version
The ./ simply directs execution to the local file, since we didn’t add launchpad to our execution path.
This produces the output (example only):
version: 0.10.0 commit: 636ce55
Your version details may vary.
Registering yourself as a user
We’re interested in knowing how people use Mirantis Launchpad, so we ask that you register before using the software. This can be done from the command line:
./launchpad register
This will cause Mirantis Launchpad to ask your name, email, and company name, and transmit these to Mirantis.
Create a cluster.yaml file
The next step is to create a cluster.yaml file for launchpad, representing your cluster’s desired configuration. The command:
./launchpad init > cluster.yaml
… will generate a basic cluster.yaml file for you to modify. Meanwhile, here’s a minimal cluster.yaml for deploying a cluster on two Linux nodes, creating a manager and a worker. This will work as-is on AWS for Linux nodes. Additional parameterization is needed for Windows worker nodes, and/or to enable deployment to nodes hosted on desktop virtualization:
apiVersion: launchpad.mirantis.com/v1beta2
kind: UCP
metadata:
  name: my-ucp
spec:
  ucp:
    installFlags:
    - --admin-username=admin
    - --admin-password=supersekret
  hosts:
  - address: node1
    role: manager
    ssh:
      keyPath: ~/.ssh/id_rsa
      user: theuser
  - address: node2
    role: worker
    ssh:
      keyPath: ~/.ssh/id_rsa
      user: theuser
If you want to deploy on VirtualBox or another desktop virtualization solution and are using ‘bridged’ networking, you’ll need to make a few minor adjustments to your cluster.yaml: deliberately setting --pod-cidr to ensure that pod IP addresses don’t overlap with node IP addresses (the latter fall in the 192.168.x.x private range on such a setup), and supplying the correct labels for the target nodes’ private-network interfaces (typically ‘enp0s3’ on Ubuntu 18.04, or ‘eth0’ on earlier versions).
apiVersion: launchpad.mirantis.com/v1beta2
kind: UCP
metadata:
  name: my-ucp
spec:
  ucp:
    installFlags:
    - --admin-username=admin
    - --admin-password=supersekret
    - --pod-cidr 10.0.0.0/16
  hosts:
  - address: node1
    role: manager
    ssh:
      keyPath: ~/.ssh/id_rsa
      user: theuser
    privateInterface: enp0s3
  - address: node2
    role: worker
    ssh:
      keyPath: ~/.ssh/id_rsa
      user: theuser
    privateInterface: enp0s3
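If you’re unsure which value to use for privateInterface:, you can list each target node’s interfaces; the name carrying the node’s 192.168.x.x address is the one Launchpad needs:

```shell
# Run on each target node (directly or over ssh).
# Prints interface name, state, and addresses, one line per interface.
ip -brief address show
```

On Ubuntu 18.04 under VirtualBox bridged networking this is typically enp0s3; older releases usually report eth0.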
As with Kubernetes object definition files, the important stuff begins in the spec: stanza, where (in the ucp: sub-stanza) you specify the cluster administrator’s username and password.
Following the ucp: stanza is an array of maps describing cluster nodes and roles. Mirantis Launchpad requires (at this point) at least one node designated as ‘manager’ and one as ‘worker.’ It can provision multiple manager nodes in a highly-available configuration, and as many workers as you like.
Mirantis Launchpad will default to accessing target nodes as ‘root.’ If this isn’t practical (e.g., on Ubuntu targets, which by default don’t permit root login, preferring instead to designate an administrative user with sudo privileges), you can use the user: parameter (under ssh:) to specify a username (in this case, ‘theuser’). The keyPath: key, as you might expect, takes as its value the full path and filename of the private key it will use to access target servers (e.g., ~/.ssh/id_rsa).
Save cluster.yaml after making changes.
Mirantis Launchpad seeks to avoid unnecessary complexity, so by default, for example, component versions are left unspecified, and Mirantis Launchpad automatically selects the latest compatible versions of Docker Engine – Enterprise and other artifacts. The ability to specify versions and many other details, however, is built in. Full documentation of the Mirantis Launchpad YAML specification is here.
Running launchpad to deploy a cluster
At this point, you can deploy your cluster by cd’ing to the directory in which you saved launchpad, and entering:
./launchpad apply
Mirantis Launchpad finds cluster.yaml and begins by testing SSH connectivity to your target machines. As it executes, Mirantis Launchpad tests before performing operations or implementing changes, exposing errors and stopping before anything gets broken. Assuming no configuration, networking, or other errors, it will implement your configuration and terminate execution, telling you the IP address/hostname of your manager node, enabling browser connection using your admin username and password.
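One convenient post-deploy check, assuming you’ve downloaded and unzipped a UCP client bundle (available under your user profile in the UCP web UI) into a placeholder directory ~/ucp-bundle, is to point your clients at the cluster and list its nodes:

```shell
# ~/ucp-bundle is a placeholder path to an unzipped UCP client bundle.
cd ~/ucp-bundle
eval "$(cat env.sh)"   # exports DOCKER_HOST, certs, etc., pointing docker and kubectl at the cluster
kubectl get nodes      # each node should eventually report Ready
docker node ls         # the same nodes, from Swarm's point of view
```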
Mirantis Launchpad can also tear down your cluster, using the command:
./launchpad reset
… in the process, uninstalling all installed components. This will typically only be used when you no longer need the cluster, however (see below).
Idempotency and updates
More generally, like other mature deployment tools (and Kubernetes itself), Launchpad tries to function idempotently: making changes only where a target system’s actual configuration differs from the requested configuration. You can thus apply (and change, and reapply) cluster.yaml to converge your cluster on a desired state, without repeating steps unnecessarily or breaking the cluster in the process. If you want to add servers, for example, you can simply add them to the cluster.yaml file.
You can thus perform ‘launchpad apply’ as many times as needed (to fix basic configuration errors such as the wrong path to a private key), add or remove nodes, or update components.
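For instance, adding a worker is just a matter of appending one more entry to the hosts: array in cluster.yaml and re-running ‘launchpad apply’; ‘node3’ below is a placeholder address:

```yaml
# Appended under spec.hosts; node3 is a placeholder, credentials as for the other nodes.
  - address: node3
    role: worker
    ssh:
      keyPath: ~/.ssh/id_rsa
      user: theuser
```

Launchpad should leave the existing, already-converged nodes alone and only install components on, and join, the new worker.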
Integrating Mirantis Launchpad with other tools
Users of Terraform will appreciate that Mirantis Launchpad can consume Terraform infrastructure description files to deploy clusters on infrastructure provisioned with this tool. The files need to be converted from JSON to YAML (trivial, using a tool like ‘yq’ or equivalent). An upcoming tutorial will address ways of integrating Mirantis Launchpad with Terraform and other automation.