Try Mirantis Container Cloud

Bootstrap Container Cloud on AWS and deploy your first managed cluster in under an hour

Mirantis Container Cloud lets you deploy, observe, scale, update, and tear down consistent managed clusters running Mirantis Kubernetes Engine (formerly Docker Enterprise) across multiple public and private cloud frameworks.

Since most developers and organizations have access to an AWS account, this tutorial guides you through deploying a Mirantis Container Cloud manager cluster in an Amazon Web Services (AWS EC2) region, then shows you how to deploy a Mirantis Kubernetes Engine managed cluster in the same region. Please visit our docs for quick starts on deploying on OpenStack or VMware.

If you just want to try Mirantis Kubernetes Engine, it’s easier to do so with our Launchpad deployment tool. Mirantis Kubernetes Engine clusters deployed this way can later be added to a Mirantis Container Cloud manager as managed clusters.


This minimal demo lets you experience the simplicity of using Mirantis Container Cloud to configure, deploy, observe, and manage consistent Kubernetes clusters on a single infrastructure. In a full-scale enterprise deployment, Mirantis Container Cloud might be additionally resourced and configured to deploy and manage clusters across multiple infrastructures, including private clouds (such as VMware or OpenStack), on datacenter bare metal, and perhaps on hosted bare metal (such as Equinix Metal) as well. It would likely also be integrated with enterprise directory, notifications and ticketing, and other IT facilities to accelerate operations.

What You’re Deploying

You’ll be deploying a Mirantis Container Cloud instance on Amazon Web Services (EC2). This deployment will create:

  • Three c5d.2xlarge virtual machines to serve as the control plane
  • One t2.micro virtual machine for a so-called “bastion host” — a secure gateway
  • Plus additional t2.micro and c5d.2xlarge virtual machines to host Mirantis Kubernetes Engine clusters you may then deploy with Mirantis Container Cloud.

You do not need to create any of these machines. The Mirantis Container Cloud bootstrap script will build the core Container Cloud infrastructure, and Container Cloud’s AWS provider will then acquire resources for Kubernetes managed clusters. When initial deployment of the Container Cloud cluster is complete, you’ll be able to access:

  • The Mirantis Container Cloud webUI, from which you’ll be able to deploy and manage Mirantis Kubernetes Engine managed clusters or attach clusters you’ve already deployed by other means (such as existing Mirantis Kubernetes Engine (formerly UCP) 3.3.3 clusters deployed with Mirantis Launchpad).
  • The Keycloak security/Identity and Access Management webUI, which lets you manage roles, users, and secrets, and integrate with corporate IAM (such as LDAP or ActiveDirectory).
  • The StackLight webUI (Grafana), which provides metrics and observability dashboards for the Container Cloud instance, as well as managed clusters you deploy with or attach to it. 

Important Note

Who should (and shouldn’t) undertake this project: This tutorial assumes familiarity with the Linux command line and associated tools, with Amazon Web Services, with networking fundamentals, and with virtualization and cloud technology in general.

Non-engineers: As a convenient alternative to “DIY,” non-engineers are strongly encouraged to speak with a Mirantis Account Executive, who will arrange a remote demonstration at your convenience. Mirantis’ Services organization stands ready to assist with proof-of-concept deployments and to provide assistance in deploying Mirantis products for use by customers.


Prerequisites

  • Administrative access to an AWS account with adequate quotas (see below).
  • A physical or virtual machine (e.g., a VirtualBox VM) running a Linux desktop (e.g., Ubuntu Desktop 20.04) with internet access and Docker (open source) installed, for use in bootstrapping Mirantis Container Cloud. A recipe for creating an Ubuntu VM configured for basic Kubernetes and container development is provided in our How to Build a Kubernetes Development Environment tutorial.
  • The Container Cloud bootstrap script and your personalized license file (see Downloadable Assets, above).


The authoritative reference for deploying Mirantis Container Cloud is the Deployment Guide. You may find it useful to read through the guide before attempting this simplified deployment procedure.

Step 1. Download Assets

Download the Mirantis Container Cloud installer script and generate your personalized license file by filling out and submitting the form below, then following the instructions. You’ll need to navigate to another page (and back) to create or log into a Mirantis account and download your personalized trial license file, which is required for the deployment.


We recommend saving these assets in a new directory on the deployment machine (e.g., /home/user/mirantis).

If you’re not presently using the machine on which you’ll deploy Mirantis Container Cloud, you’ll need to save the bootstrap script and license file (mirantis.lic) locally, then transfer them to your deployment machine or VM. Because these are both text files, the easiest way to do this is to cut and paste from a local text editor into a text editor running on the remote machine, via your clipboard. To do this:

  1. Open the file in a local text editor.
  2. Select all and copy its contents to your clipboard.
  3. Open a new file with the same name on your deployer machine, using any text editor available there.
  4. If using vim/vi on the remote machine, press “i” to enter Insert Mode.
  5. Paste the text of the script into the file on the remote machine.
  6. Save and exit.
  7. Repeat for the license file, mirantis.lic.

This procedure should work for most combinations of source and destination machine. If your destination machine is a desktop-equipped VM running on a desktop virtualization system like VirtualBox, it will be necessary to install “Guest Additions” on the destination machine and select “bidirectional” clipboard to enable copying text from host to guest.
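Clipboard transfers can silently re-wrap or truncate text, so it’s worth verifying that the pasted copy arrived intact. A sketch, shown against a stand-in file (point it at your actual script or license file): run the same checksum on both machines and confirm the digests match exactly.

```shell
# Stand-in file for demonstration; substitute your real script/license file.
printf 'sample license text\n' > /tmp/mirantis_sample.lic

# Run on both source and destination machines; the two digests must match.
sha256sum /tmp/mirantis_sample.lic | awk '{print $1}'
```

If the digests differ, re-copy the file; a mismatched license or script will cause confusing failures later in the bootstrap.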

All following steps are to be carried out on your deployer machine, or through the AWS EC2 console/webUI.

Step 2. Check AWS quotas, get AWS credentials, and test access to your AWS region

About AWS pricing: IMPORTANT — While it’s difficult to closely estimate AWS pricing, the cost of running a deployed Mirantis Container Cloud instance and one Mirantis Kubernetes Engine managed cluster (both in the default reference configuration) is not inconsiderable — on the order of several hundred USD/week. To avoid sticker shock, it’s wise to plan your evaluation to minimize expenditure and tear down the deployment promptly when complete.
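To make that warning concrete, here’s a back-of-envelope weekly estimate. The hourly rates below are assumptions for illustration only; check current AWS on-demand pricing for your region before budgeting.

```shell
# Rough weekly cost sketch. Rates are ASSUMED examples, not current prices.
C5D_RATE=0.384   # assumed USD/hour for c5d.2xlarge
T2_RATE=0.0116   # assumed USD/hour for t2-class helper instances
HOURS=168        # hours in a week

# 8 c5d.2xlarge (3 + 3 managers, 2 workers) plus 2 small helper instances
awk -v c="$C5D_RATE" -v t="$T2_RATE" -v h="$HOURS" \
    'BEGIN { printf "~%.0f USD/week\n", (8*c + 2*t) * h }'
# prints: ~520 USD/week
```

At those assumed rates, the reference deployment lands squarely in the “several hundred USD/week” range, which is why prompt teardown matters.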

AWS instances required for test deployment:

  • 1 t2.medium: Bootstrapper
  • 1 t2.micro: Bastion host
  • 6 c5d.2xlarge: Manager nodes for the Container Cloud instance (3) and one Mirantis Kubernetes Engine managed cluster (3)
  • 2+ c5d.2xlarge: Kubernetes (or Swarm) worker nodes for the managed cluster
  • 10+ instances total

About AWS account quotas/limits

The Container Cloud instance you’ll deploy requires three C-class (i.e., big) virtual machines, plus a micro VM for use as a bastion host (i.e., a secure gateway). Each Mirantis Kubernetes Engine managed cluster requires additional VMs of the same types for its one (1) bastion and three (3) Manager nodes, plus additional C-class instances (a minimum of two (2) per cluster) created as Kubernetes/Swarm workers. See the table above for a summary of the instance types and numbers required.

This exceeds the limits of an AWS EC2 account with default settings. If you’ve never before requested quota/limit expansion on your AWS EC2 account, you’ll need to do so before proceeding.

To check your present limits, log into your AWS console, pull down the Account menu in the upper right, and click Service Quotas. On the Service Quotas page, click Amazon Elastic Compute Cloud (Amazon EC2) in the upper left to reach a list of all the different limits on your account. Next, scroll down to Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances, and check that your limit will allow creation of the requisite ten (10) or so instances for this QuickStart (in addition to any instances you already have running). Use the same method to ensure that the value of EC2-VPC Elastic IPs is at least ten (10).

You may also need to check that NAT gateways per Availability Zone is set to at least ten (10); for this, click your account name in the header, then My Service Quotas -> AWS services -> Amazon Virtual Private Cloud (Amazon VPC) -> NAT gateways per Availability Zone.

AWS EC2 quota minimums (quota name: required minimum):

  • Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances: 10
  • EC2-VPC Elastic IPs: 10
  • NAT gateways per Availability Zone: 10

These are the minimums for deploying the Mirantis Container Cloud manager cluster and one Mirantis Kubernetes Engine managed cluster in the reference configuration. Your actual quotas must accommodate at least these increments, plus anything else you’re running in AWS.

If any of these limits needs raising, select the radio button on that line, then scroll up and click the (now orange) button in the upper right, labeled Request Quota Increase. You’ll be shown a form to fill out, requesting that your limit be adjusted. Fill out, then submit this form.

Within a few hours (unless there are issues with your account), you’ll receive email confirming your limit increase. Once confirmation is received, you can proceed.

Pick an AWS region for your deployment

Before continuing, select an AWS region for your deployment. This is generally a region physically near to you. We used the us-east-2 region.

Create a non-root AWS admin user (if needed), log in, and obtain keys

You should never use your original AWS root user account for operations tasks. Instead, create and use a separate admin account with more limited permissions. If you haven’t already done so, the tutorial Creating your first IAM admin user and group explains how. The remainder of this QuickStart assumes you’ve created this user, retrieved their password and keys, and logged into the AWS console under their identity. The article Understanding and getting your AWS credentials shows you how to obtain your admin user’s password and how to generate and retrieve their Access Key and Secret Key, which you’ll need in a bit.

Test to see if your bootstrap machine can reach the AWS EC2 API

At this point, it makes sense to see if your bootstrap machine can reach the AWS EC2 management API, since preparing for and deploying Container Cloud will require such access. A quick way to test both Docker and connectivity is to curl the EC2 endpoint for your chosen region (substitute your region for us-east-2):

docker run --rm alpine sh -c "apk add --no-cache curl; curl https://ec2.us-east-2.amazonaws.com"

This launches a container that curls the AWS EC2 API endpoint. It should produce output that looks like what’s shown below, with no errors:

(1/4) Installing ca-certificates (20191127-r4)
(2/4) Installing nghttp2-libs (1.41.0-r0)
(3/4) Installing libcurl (7.69.1-r1)
(4/4) Installing curl (7.69.1-r1)
Executing busybox-1.31.1-r16.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 7 MiB in 18 packages
 % Total % Received % Xferd Average Speed Time Time Time Current
 Dload Upload Total Spent Left Speed
 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0

Step 3. Run the initial bootstrap script

Having checked (and, as needed, increased) your AWS quota limits, you should now have the bootstrap script and license file in a directory on your prepared bootstrap server (Docker installed, etc.).

Now SSH to your bootstrap machine (if required), and cd to that directory.

cd /home/user/mirantis

Make the script executable (substituting the actual name of the installer script you downloaded for <installer_script>):

chmod +x <installer_script>

… and run it:

./<installer_script>
The script will reach out, grab needed binaries for the latest version of Mirantis Container Cloud, and store everything in a folder it creates, called kaas-bootstrap.

Copy the license file into this directory:

cp mirantis.lic kaas-bootstrap

Finally, cd into the kaas-bootstrap directory:

cd kaas-bootstrap

(Optional) Configure your base AMI

Next step is to determine the Amazon Machine Image ID code for an Ubuntu 18.04 LTS server image with SSD support, available in your target AWS region. You can do this by browsing to the AWS EC2 console, switching to your target region (upper right), clicking Launch Instance, and using the search feature to search for an appropriate base machine image, e.g., ami-033a0960d9d83ead0.

This step may be optional, because the default configuration of Container Cloud includes the ID of a universally-available Ubuntu 18.04 LTS AMI. However, it’s always possible that AWS may remove (at least from view) a public image that’s not in use, so it makes sense to independently determine the ID of a suitable AMI now available in your region.
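If the AWS CLI is available on your bootstrap machine, you can also query for the newest matching image non-interactively instead of browsing the console. A sketch, assuming Canonical’s publisher account ID (099720109477) and the us-east-2 region; the guard makes it a no-op anywhere the CLI isn’t configured:

```shell
# Query the newest SSD-backed Ubuntu 18.04 LTS AMI published by Canonical.
# Guarded so it only runs where the aws CLI and credentials are available.
if command -v aws >/dev/null 2>&1 && [ -n "${AWS_ACCESS_KEY_ID:-}" ]; then
  aws ec2 describe-images --region us-east-2 --owners 099720109477 \
    --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*' \
    --query 'sort_by(Images,&CreationDate)[-1].ImageId' --output text
else
  echo "aws CLI not configured; run this on your bootstrap machine"
fi
```

Either way, confirm the ID you settle on actually exists in your target region before editing the template below.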

Use the vi or nano editor to enter this AMI ID in the right place in the supplied template file machines.yaml.template, found in the templates/aws directory inside the kaas-bootstrap directory:

vi ./templates/aws/machines.yaml.template

Now that you’ve opened the file, use the arrow keys to navigate down until you see a stanza that looks like this:

spec: &cp_spec
  kind: AWSMachineProviderSpec
  instanceType: c5d.2xlarge
  id: ami-033a0960d9d83ead0
  rootDeviceSize: 120

Delete the AMI ID found there and paste in the one you confirmed, in the same location. Then save and exit the file by pressing ESC, typing “:wq”, and pressing Enter.
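If you prefer a non-interactive edit, a sed one-liner can swap the ID in place. A sketch, demonstrated on a scratch copy of the stanza so it’s self-contained; the replacement AMI ID shown is hypothetical, so substitute the one you confirmed, and target ./templates/aws/machines.yaml.template on your bootstrap machine.

```shell
# Recreate the stanza in a scratch file for demonstration purposes.
cat > /tmp/machines-snippet.yaml <<'EOF'
spec: &cp_spec
  kind: AWSMachineProviderSpec
  instanceType: c5d.2xlarge
  id: ami-033a0960d9d83ead0
  rootDeviceSize: 120
EOF

NEW_AMI=ami-0123456789abcdef0   # hypothetical ID; use your confirmed one
sed -i "s/id: ami-[0-9a-f]\{1,\}/id: ${NEW_AMI}/" /tmp/machines-snippet.yaml
grep 'id:' /tmp/machines-snippet.yaml
# prints: "  id: ami-0123456789abcdef0"
```

The same sed expression works against the real template file; keep a backup copy first in case the substitution matches more than you intend.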

Provide AWS admin credentials

Earlier, we suggested that you obtain and record the AWS Access Key ID and Secret Access Key for your new administrative user. Next step is to record those values in a set of environment variables in your bootstrap server shell session, so that the AWS provisioning bootstrap script can find them. Enter the following, substituting the actual values for ‘XXXXXXX’ in each case. Correct the chosen region, if necessary:

export KAAS_AWS_ENABLED=true
export AWS_DEFAULT_REGION=us-east-2
export AWS_ACCESS_KEY_ID=XXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXX

Provision AWS resources

Now, from inside the kaas-bootstrap directory, you can run the script that provisions AWS infrastructure for your Container Cloud instance, and performs other setup in AWS. In this case, Mirantis Container Cloud uses its AWS provider to generate a CloudFormation template for your deployment.

./kaas bootstrap aws policy

The script will create a new, dedicated AWS bootstrap user with permissions sufficient to deploy the cluster (but not much more). This is one aspect of Container Cloud’s highly secure, “principle of least privilege” design.

Step 4: Deploy the Mirantis Container Cloud Manager Cluster

Provide bootstrap user credentials

Now you’ll need to revisit the AWS console, IAM section, and generate and retrieve the Access Key and Secret Key for this new, bootstrap user. The article Understanding and getting your AWS credentials explains how.

Once you have those values in hand (save them someplace safe), export them as environment variables under the same names as you used earlier, overwriting the admin user’s Access Key and Secret Key values:

export AWS_ACCESS_KEY_ID=XXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXX
Deploy Mirantis Container Cloud in AWS

Finally, you’re ready to deploy your Container Cloud instance. From inside the kaas-bootstrap directory, run the deployment script unpacked there by the installer, passing the all argument:

./<bootstrap_script> all

This will take a while (about 15 minutes), so go get a cup of coffee!


As noted above, issues encountered while deploying frequently relate to AWS quota limitations. This and other problems — and steps to remediate them — are covered in our Troubleshooting guide. 

You can also check on the deployment directly. First, get the name of the relevant pod:

./bin/kind get kubeconfig --name clusterapi > clusterapi_kubeconfig.yaml
kubectl --kubeconfig clusterapi_kubeconfig.yaml -n kaas get pods

This command provides output that looks like:

aws-controllers-7598bf4f88-wp6cx 1/1 Running 0 22h
aws-credentials-controller-aws-credentials-controller-866cflplt 1/1 Running 0 22h
lcm-lcm-controller-6cddd89668-f62qc 1/1 Running 0 22h
release-controller-release-controller-54f9f9f59b-v4fnq 1/1 Running 0 22h

You’re looking for the pod that starts with aws-controllers-*. In this example, that’s aws-controllers-7598bf4f88-wp6cx. You can then look at the logs:

kubectl --kubeconfig clusterapi_kubeconfig.yaml -n kaas logs <POD_NAME> > mylog.txt

Where <POD_NAME> is the name you retrieved in the previous step. This log will likely show you where the process is hanging up.
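The pod-name lookup can also be scripted rather than eyeballed. A sketch, demonstrated against the sample get pods output above so it’s self-contained; on your bootstrap machine, pipe real kubectl output in instead:

```shell
# Sample `kubectl get pods` output from above, captured as a variable.
pods='aws-controllers-7598bf4f88-wp6cx 1/1 Running 0 22h
aws-credentials-controller-aws-credentials-controller-866cflplt 1/1 Running 0 22h
lcm-lcm-controller-6cddd89668-f62qc 1/1 Running 0 22h
release-controller-release-controller-54f9f9f59b-v4fnq 1/1 Running 0 22h'

# Print the first pod whose name starts with aws-controllers-.
echo "$pods" | awk '/^aws-controllers-/ {print $1; exit}'
# prints: aws-controllers-7598bf4f88-wp6cx
```

Capturing the awk result in a variable lets you feed the pod name straight into the logs command from the previous step.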

Using Container Cloud

After a successful deployment, the bootstrapper collects useful files in the kaas-bootstrap directory (retain all of these securely) and prints the URLs for browsing to your new Mirantis Container Cloud instance before exiting.

Important files returned by the bootstrapper include:

  • kubeconfig – A standard access configuration file for your Container Cloud instance, for use with the kubectl Kubernetes management client.
  • passwords.yaml – A file containing system-generated passwords for critical accounts associated with your Container Cloud instance. For our immediate purposes, the most important of these is the password for the Keycloak credential management system.
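The returned kubeconfig works with any standard kubectl. A guarded sketch (the path assumes you kept everything under /home/user/mirantis, as suggested earlier; adjust to your actual location):

```shell
# List the manager cluster's nodes using the returned kubeconfig.
# Guarded so it only runs where kubectl and the file actually exist.
KCFG=/home/user/mirantis/kaas-bootstrap/kubeconfig
if command -v kubectl >/dev/null 2>&1 && [ -f "$KCFG" ]; then
  kubectl --kubeconfig "$KCFG" get nodes
else
  echo "run on your bootstrap machine after a successful deployment"
fi
```

Seeing all manager nodes in Ready state is a quick confirmation that the instance came up healthy before you move on to the webUIs.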

Configuring strong passwords with Keycloak

By default, Container Cloud deploys with three functional roles defined, and an account enabled for each role. These are called:

  • reader – Permitted to view metrics and other information, but not to deploy or modify managed clusters
  • writer – Permitted to deploy and modify managed clusters
  • operator – Permitted to make changes to Container Cloud instance configuration

At deployment time, these accounts are given the password ‘password,’ which is obviously not secure, so we need to change those passwords.

The bootstrapper also configures an administrator for the Keycloak system, called ‘keycloak.’ The password for this account is in the file passwords.yaml in the kaas-bootstrap directory of your bootstrap instance.

Browse to Keycloak on the URL provided, and log in using the username ‘keycloak’ and the system-generated password. Click on Users under Manage, in the left-hand menu, and View All Users in the main pane.

Click on the ID of each user. This will raise a tabbed dialog of that user’s settings. Click on the Credentials tab. Click the Temporary switch to OFF, letting you change the user’s password instead of creating a temporary password that the user would themselves need to change on first login. Enter a new, secure password for the user (twice) then click Reset Password at the bottom, and confirm via the popup. Do this for all users: reader, writer, and operator.

Changing the Writer user’s password via Keycloak

Deploying your first managed cluster

Mirantis Container Cloud provides a seamless cloud experience for its users — operators who need to deliver and manage Mirantis Kubernetes Engine Kubernetes/Swarm managed clusters for use by departments or teams, and self-service users (typically DevOps-oriented software developers) who want to requisition clusters for dev, test, or production work. The entire apparatus of infrastructure provisioning is hidden (though still accessible to operators under the hood). Creating Mirantis Kubernetes Engine clusters is reduced to common-sense essentials, e.g., how many worker nodes, and of what types, do you need?

Here’s what it’s like to generate and access a managed cluster with Mirantis Container Cloud:

Log into the Container Cloud UI

Using the URL provided by the bootstrapper, log into the Container Cloud UI with the writer account, and its newly-changed password.

You arrive at a simple, columnar display of running clusters. To begin, only one cluster is operating: the Container Cloud instance itself.


Create a new managed cluster

Let’s make a new one! Click Create Cluster in the upper right. A tabbed dialog comes up, showing basic cluster configuration defaults. Defaults are sensible, so you don’t really need to do anything but provide your new cluster with a name. Click Create, and a cluster template is initialized, ready for you to add Manager and Worker nodes.


Add nodes

Click the new cluster’s name, and a dialog opens up showing machines defined for that cluster (there are no machines configured yet). Click Create Machine in the upper right, and a dialog pops up, letting you select the Manager or Worker node type, and letting you dial in the number of machines you want to create. By default, Container Cloud enforces a minimum of three Manager nodes and two Worker nodes per managed cluster — enough to permit workloads and management plane functions to remain available during updates.


The cluster begins deploying immediately, and is available in only a few minutes. Once all nodes are ready, you can visit the cluster directly via the link in the upper right of its detail display, and view Grafana dashboards for the cluster from the adjacent link. Log into your new cluster with the username writer and your Container Cloud password. Once you’re logged into the cluster as its administrator, you can create new user accounts and passwords for everyday use, download authentication bundles, and get to work!


Note that downloading an authentication bundle to your desktop, including your cluster’s kubeconfig file for your user identity, lets you use Lens, the Kubernetes IDE, to speed your development, configuration, and forensics workflow. Try Lens with our Lens download tutorial.

Tearing down the project

When you’re finished playing with Mirantis Container Cloud and Mirantis Kubernetes Engine managed clusters, it’s easy and quick to tear everything down cleanly.

Begin by deleting any managed clusters you’ve created. You’ll be given the option to terminate machines on decommissioning.

Then return to your bootstrap server and, when you’re sure you want to conclude your evaluation, cd to the kaas-bootstrap directory and run the same deployment script you used earlier, this time with the cleanup argument:

./<bootstrap_script> cleanup

Next steps

Of course, this is just a fraction of what Mirantis Container Cloud (and Mirantis Kubernetes Engine managed clusters) can do in practice, and at scale. To learn more, please contact your Mirantis representative.

If you’d like to continue working with Mirantis Container Cloud, we recommend the tutorial Working with multiple Kubernetes clusters in Mirantis Container Cloud and Lens.