
Download k0s - Zero-Friction Kubernetes

Deploy k0s — a lightweight Kubernetes distribution for any application, ideal for learners

k0s is a zero-friction Kubernetes distribution that runs on any Linux-based operating system. It’s great for learning Kubernetes or for building high-performance projects on anything from a Raspberry Pi to a bare-metal datacenter. k0s is distributed as a single binary that works like a command-line application and can be installed on any node from the internet with a single command.

You call k0s from the command line to start a controller (or controller+worker) node, then ask the controller for join tokens that additional machines can use to join the cluster as more controllers and workers.

Note: The instructions below are for installing k0s using an installer script. If you instead prefer to install k0s using the k0sctl utility, you can find instructions here.


Prerequisites

  • A laptop or VM with internet access, configured for Kubernetes operations and development. If you haven’t already set this up, our tutorial How to Build a Kubernetes Development Environment gives a complete recipe.

  • Two server VMs, connected to the same local network, accessible via public or private IP addresses, and able to reach the internet, on which to install a k0s manager and a k0s worker node. We recommend Ubuntu 18.04 LTS servers running on AWS or VirtualBox, though the instructions should work for VMs running any popular Linux on any public or private cloud. If you’re not familiar with launching VMs on AWS or building VMs on VirtualBox, our tutorials Launching Virtual Machines on AWS and How to Create a Server on VirtualBox provide all the instructions.

  • Servers must be configured for passwordless sudo, and must have curl installed. Instructions for these configuration steps can be found in the tutorial How to Create a Server on VirtualBox.

Hardware Requirements for k0s nodes

  • 1 vCPU (2 vCPU recommended)

  • 1 GB of RAM (2 GB recommended)

  • 0.5 GB of free disk space for controller nodes. SSD is recommended.

  • 1.3 GB of free disk space for worker nodes. SSD is recommended.

  • 1.7 GB of free disk space for combined controller/worker nodes. SSD is recommended.
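
If you want to sanity-check a candidate server against these minimums before installing, standard Linux tools are enough. A quick sketch (the thresholds in the comments mirror the list above):

```shell
# Compare this machine's resources against the k0s minimums listed above.
echo "vCPUs:     $(nproc)"                          # want >= 1 (2 recommended)
free -m | awk '/^Mem:/ {print "RAM (MB):  " $2}'    # want >= 1024 (2048 recommended)
df -BG --output=avail / | tail -n 1                 # want >= 2G free for a combined node
```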

Step 1: Create target servers

Begin by creating your two target machine servers, ensuring they’re configured correctly, that you can SSH to them with your private key, and that your administrative user can issue sudo commands without a password. Also ensure that curl is installed on both machines.
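
It’s worth verifying both requirements from a shell on each server before moving on; a quick check might look like this (a sketch, nothing k0s-specific):

```shell
# Verify passwordless sudo and the presence of curl on this server.
if sudo -n true 2>/dev/null; then echo "passwordless sudo: OK"; else echo "passwordless sudo: MISSING"; fi
if command -v curl >/dev/null; then echo "curl: OK"; else echo "curl: MISSING"; fi
```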

Step 2: Download Assets

Fill out the form below and submit it to obtain the command for downloading the k0s installer script. Then SSH to the server you want to make into the k0s Kubernetes controller, and issue the curl command shown after form submission. The script downloads the k0s binary and installs it on your server.

Download k0s installer script

Step 3: Start the controller

From here, all you need to do is start the controller. While you can install it as a single process, for convenience we’ll go ahead and install it as a service set to start every time the machine boots up.

sudo k0s install controller
sudo systemctl start k0scontroller
sudo systemctl enable k0scontroller

Step 4: Access the cluster

You can either install kubectl separately or use the version that k0s installs to access the cluster. On installation, k0s writes the KUBECONFIG file to /var/lib/k0s/pki/admin.conf, so let’s copy it somewhere accessible and point kubectl at it:

mkdir -p ~/Documents
sudo cp /var/lib/k0s/pki/admin.conf ~/Documents/kubeconfig.cfg
sudo chown $USER ~/Documents/kubeconfig.cfg
export KUBECONFIG=~/Documents/kubeconfig.cfg

Now, if you look at the existing nodes, you’ll see that there aren’t any:

k0s kubectl get nodes
No resources found

This is because k0s runs all of the control plane components as plain processes rather than as pods on a dedicated node, so there is no controller “node”. To run workloads, we’ll have to set up a worker node.
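
You can see this for yourself on the controller: the control-plane components show up in the process list rather than as pods. A sketch (the exact process names may vary by k0s version; on a machine without k0s this simply prints a fallback message):

```shell
# List k0s-managed control-plane processes on the controller.
ps -e -o comm= | grep -E 'k0s|kube-apiserver|kube-scheduler|kube-controller' \
  || echo "no k0s processes found"
```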

Step 5: Add a worker node

k0s provides two ways to add a worker node. The first is to create one when you create the controller. This essentially turns the controller into a worker node, enabling you to schedule workloads to it. We don’t recommend this in production, but it is a simple way to test k0s without needing multiple servers. To start up k0s with a worker node, you can either specify it as a single process:

sudo k0s controller --enable-worker &

Or add it to the service. First we’ll stop and remove the original service:

sudo systemctl stop k0scontroller
sudo k0s reset

(Yes, you’re removing the cluster you already created. But this isn’t production, remember?)

sudo k0s install controller --enable-worker
sudo systemctl start k0scontroller

Now you can go ahead and copy the new KUBECONFIG file and again look for the nodes:

sudo cp /var/lib/k0s/pki/admin.conf ~/Documents/kubeconfig.cfg
sudo chown $USER ~/Documents/kubeconfig.cfg
export KUBECONFIG=~/Documents/kubeconfig.cfg
k0s kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
nick-virtualbox   Ready    <none>   2m    v1.20.4-k0s1

As you can see, there’s now a single worker node. For a production environment, you would need to create additional nodes.

Step 6: Add an additional worker

To add an additional worker node, you will need to first create a k0s token. On your original controller, type:

sudo k0s token create --role=worker > k0sworkertoken.txt

(Note that you must have root privileges for this command.) The resulting output is a long string of text: a KUBECONFIG file that has been encoded using Base64. To create a new worker node, move the token to the target machine, install k0s, and start the new node, as in:

scp user@serverip:~/k0sworkertoken.txt .
curl -sSLf <installer script URL from Step 2> | sudo sh
sudo k0s install worker --token-file /absolute/path/to/token/file
sudo systemctl start k0sworker
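
Since the join token is just an encoded KUBECONFIG, you can inspect it before shipping it to the worker. A sketch, assuming the token file from the previous command; note that in some k0s versions the Base64 payload is additionally gzip-compressed, in which case the second command is the one that shows readable YAML:

```shell
# Decode the join token to confirm it is a kubeconfig document.
base64 -d k0sworkertoken.txt | head -c 200
# If the output above looks like binary, the payload is gzip-compressed too:
base64 -d k0sworkertoken.txt | gunzip | head -n 5
```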

Now you can see the new node from any of the k0s nodes:

k0s kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
nick-virtualbox    Ready    <none>   92m   v1.20.4-k0s1
nick-virtualbox2   Ready    <none>   2m    v1.20.4-k0s1

You can also add an additional controller node by creating a controller token rather than a worker token, as in:

sudo k0s token create --role=controller > k0scontrollertoken.txt

Then on the target machine:

sudo k0s install controller --token-file /absolute/path/to/token/file
sudo systemctl start k0scontroller

Now let’s look at connecting to Kubernetes from outside the cluster.

Step 7: Make the Kubernetes cluster accessible

You’re not always going to want to connect using the kubectl that comes with k0s, so you’re going to need to do two things:

  • Configure the cluster to receive external requests

  • Access the KUBECONFIG file

By default, the cluster is configured to receive requests only from inside the cluster. To receive requests directed to an external IP address, you will need to add that address to the k0s configuration and restart the controller. First, generate the configuration file:

k0s default-config > k0s.yaml

From here, add the address on which you want to reach the cluster as an entry under spec/api/sans, as in:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    sans:
      - <your external IP address>

To get k0s to pick up the changes, you need to stop the service, install the changes, and start it up again.

sudo systemctl stop k0scontroller
sudo k0s reset
sudo k0s install controller --enable-worker --config k0s.yaml
sudo systemctl start k0scontroller

Now you should be able to access the cluster from anywhere that has access to that IP address.

Step 8: Get the KUBECONFIG file

Once again, when k0s installs, it places the KUBECONFIG file at /var/lib/k0s/pki/admin.conf; to use it, copy it to a convenient location and point the KUBECONFIG variable at it, as in:

sudo cp /var/lib/k0s/pki/admin.conf ~/kubeconfig.cfg
sudo chown $USER ~/kubeconfig.cfg
export KUBECONFIG=~/kubeconfig.cfg

To use it on an external machine, copy it to the target machine with a tool such as scp:

scp nick@serverip:~/kubeconfig.cfg .

You’ll also need to change the value of server to point to the public IP of the Kubernetes cluster. This is the same value you added to k0s.yaml under sans, as in:

apiVersion: v1
clusters:
- cluster:
    server: https://<your external IP address>:6443
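
With the server field updated, a quick smoke test from the external machine confirms connectivity. A sketch, assuming kubectl is installed there and the edited kubeconfig.cfg sits in your home directory:

```shell
# Point kubectl at the copied kubeconfig and list the cluster's nodes.
export KUBECONFIG=~/kubeconfig.cfg
kubectl get nodes
```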

Now you should be able to access the cluster through kubectl. There’s also a simpler way.

Step 9: Connect to your k0s cluster using Lens

The simplest way to access a Kubernetes cluster is to use the Lens Kubernetes IDE. To access the cluster, first install Lens, then add the cluster as follows:

  • Click the Add Cluster button along the left-hand side of the window.

  • Select the KUBECONFIG file and the context you want to add (if there’s more than one).

  • Finally, click Add Cluster.

You are now ready to work with your new Kubernetes cluster.