

Multi-node Kubernetes with KDC: A quick and dirty guide


Kubeadm-dind-cluster, or KDC, is a configurable script that enables you to easily create a multi-node Kubernetes cluster on a single machine by deploying Kubernetes nodes as Docker containers (hence the Docker-in-Docker (dind) part of the name) rather than VMs or separate bare metal machines. Unlike many alternatives, it even enables you to easily create multiple clusters on the same machine.

In this article we’ll look at how to install and use KDC, along with some simple ways to configure it for more complicated setups.


Deploying a multi-node Kubernetes cluster with KDC

At its core, deploying a Kubernetes multi-node cluster with KDC is a simple matter of downloading the source script and executing it:

$ wget

(You’ll notice that the script’s filename includes a version number that happens to match the latest version of Kubernetes. As you might have guessed, that’s no coincidence: KDC supports Kubernetes versions 1.10 through 1.13, and to change versions you simply download the script that matches the version you want. So to deploy Kubernetes 1.12, you would use the 1.12 script instead of the 1.13 one.)
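Since the version is encoded in the filename, a tiny shell helper can make the mapping explicit. This is purely illustrative — the "dind-cluster-v&lt;version&gt;.sh" pattern is an assumption based on the versioned naming described above, not a name taken from the article:

```shell
# Illustrative only: derive a script filename from the desired Kubernetes version.
# The "dind-cluster-v<version>.sh" pattern is an assumption, not confirmed by
# the article (the exact filename is elided there).
K8S_VERSION="${K8S_VERSION:-1.13}"          # any of 1.10 through 1.13
SCRIPT="dind-cluster-v${K8S_VERSION}.sh"
echo "script to download: ${SCRIPT}"
```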

Once you’ve got the script, make sure it’s executable, then run it:

$ chmod +x
$ sudo ./ up

The script can take a few minutes to run. During that time, it’s performing several steps, including:

Pulling in the most recent DIND images

Running kubeadm init to create the cluster

Creating additional containers to act as Kubernetes nodes

Joining those nodes to the original cluster

Setting up CNI

Creating Management, Service, and pod networks

Bringing up the Kubernetes Dashboard for the new cluster

When it’s finished running, you will see the URL for the Dashboard, as in:


* Bringing up coredns and kubernetes-dashboard
deployment.extensions/coredns scaled
deployment.extensions/kubernetes-dashboard scaled
kube-master   Ready    master   3m49s   v1.13.0
kube-node-1   Ready    <none>   2m32s   v1.13.0
kube-node-2   Ready    <none>   2m33s   v1.13.0
* Access dashboard at:

You can then pull that up in your browser and see the brand new empty cluster.

You can also go ahead and work with the cluster from the command line. First make sure to fix your $PATH; KDC downloads an appropriate version of kubectl for you and places it in the ~/.kubeadm-dind-cluster directory:

$ export PATH="$HOME/.kubeadm-dind-cluster:$PATH"

Then you can see the nodes in the cluster:

$ kubectl get nodes

NAME          STATUS   ROLES    AGE     VERSION
kube-master   Ready    master   8m40s   v1.13.0
kube-node-1   Ready    <none>   7m23s   v1.13.0
kube-node-2   Ready    <none>   7m24s   v1.13.0

You can also see the actual Docker containers corresponding to the nodes:

$ sudo docker ps  --format '{{ .ID }} - {{ .Names }} -- {{ .Labels }}'
c4d28e8b86d8 - kube-node-2 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
8009079bde24 - kube-node-1 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=
39563d1fb241 - kube-master -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_final=1,mirantis.kubeadm_dind_cluster_runtime=

As you can see, with a single step you have created a three-node Kubernetes cluster. But what if you want to open a second cluster? Fortunately, since each node is just a Docker container, you can go ahead and create additional instances without them interfering with each other.

Creating multiple clusters with KDC

Creating an additional cluster is as straightforward as setting a new CLUSTER_ID and re-running the script. For example:

$ sudo CLUSTER_ID="2" ./ up

kube-master-cluster-2   Ready    master   3m58s   v1.13.0
kube-node-1-cluster-2   Ready    <none>   2m43s   v1.13.0
kube-node-2-cluster-2   Ready    <none>   2m41s   v1.13.0
* Access dashboard at:

As you can see, you wind up with a completely separate cluster, with a completely separate dashboard.
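The naming pattern is easy to spot in the output: each node name simply gains a -cluster-&lt;id&gt; suffix. Here's a quick reconstruction of that pattern (illustrative only, not the script's actual code):

```shell
# Reconstructing the node-naming pattern visible in the output above
# (illustrative; KDC's internal implementation may differ).
CLUSTER_ID="2"
for node in kube-master kube-node-1 kube-node-2; do
  echo "${node}-cluster-${CLUSTER_ID}"
done
```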

You can also set the DIND_LABEL, as in:

$ sudo DIND_LABEL="edge_test" ./ up

The advantage here is that you automatically get a random CLUSTER_ID, so you don’t have to worry about collisions. Also, while CLUSTER_ID must be an integer, DIND_LABEL can be a human-readable string.
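Since CLUSTER_ID must be an integer while DIND_LABEL can be any string, a quick sanity check can catch mistakes before you launch a cluster. This is a sketch of the kind of validation involved — a hypothetical helper, not the script's actual code:

```shell
# Sketch: reject non-integer CLUSTER_ID values before invoking the script.
# (Hypothetical helper; the real script's validation may differ.)
is_integer() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or contains a non-digit: not an integer
    *) return 0 ;;
  esac
}
is_integer "2"         && echo "CLUSTER_ID ok"
is_integer "edge_test" || echo "use DIND_LABEL for strings like this"
```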

Customizing a KDC Kubernetes deployment

To change the behavior of the KDC script, you simply set various environment variables. To see which variables are available, check the configuration file in the KDC source repository.

We’ve already used this when we created a second cluster:

$ sudo DIND_LABEL="edge_test" ./ up

For example, to create a Kubernetes multi-node cluster with 5 nodes, you would use the NUM_NODES variable:

$ sudo NUM_NODES=5 ./ up
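Under the hood, this is the standard shell pattern of an environment override falling back to a default (the earlier output shows two workers when NUM_NODES isn’t set). A minimal sketch of the pattern, not KDC’s actual code:

```shell
# Minimal sketch of the override-with-default pattern (illustrative, not KDC's code).
NUM_NODES="${NUM_NODES:-2}"   # two worker nodes by default, per the output shown earlier
echo "will create ${NUM_NODES} worker node(s)"
```

Running the script with `NUM_NODES=5` in the environment overrides the default for that invocation only, so your next plain `up` run goes back to two workers.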

Another variable you might want to change is the CNI plugin. By default, KDC uses a simple bridge network to connect the various containers, but you also have the option to use flannel, calico, calico-kdd, or weave. For example, to use calico, you would start your cluster with:

$ sudo CNI_PLUGIN="calico" ./ up
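A typo in CNI_PLUGIN would only surface after the script has already started working, so it can be worth guarding the value against the supported list first. A hedged sketch using only the options named above (this guard is illustrative, not part of KDC):

```shell
# Sketch: validate CNI_PLUGIN against the options the article lists
# (bridge is the default; this guard is illustrative, not part of KDC).
CNI_PLUGIN="${CNI_PLUGIN:-bridge}"
case "$CNI_PLUGIN" in
  bridge|flannel|calico|calico-kdd|weave)
    echo "CNI plugin: $CNI_PLUGIN" ;;
  *)
    echo "unsupported CNI plugin: $CNI_PLUGIN" >&2
    exit 1 ;;
esac
```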

Of course, there’s one more thing we need to take care of: cleaning up the data.

Starting, stopping, and cleaning up

KDC also gives you the ability to stop, restart, and delete a deployment. For example, to restart the cluster, you would execute:

$ sudo ./ up

just as before, though the process is much faster the second time because the images have already been downloaded, and so on.

To shut down and remove a cluster, use the down command:

$ sudo ./ down

This command removes the containers, but the volumes that back them remain so that you can start the cluster back up. On the other hand, if you want to completely remove the cluster, including volumes, you need to clean:

$ sudo ./ clean

If you’re going to change Kubernetes versions, you’ll want to run the clean command first.

So that’s what you need to know to get started with multi-node Kubernetes clusters using KDC. What do you plan to build?
