
Sometimes you just need a Kubernetes cluster, and you don’t want to mess around with a full Kubernetes install procedure. Maybe you want to test out a small application, or create a development environment for yourself. Whatever your goal, you want it quick, and you want it simple. That’s what we’re going to do. This article is a quick and dirty guide to creating a single-node Kubernetes cluster using Kubeadm, the “best practices” tool the k8s community created to simplify the deployment process. (It’s straightforward to create multi-node deployments, but we’ll cover that in later articles.)
Create the VM
These instructions were tested using a VirtualBox VM, but in theory they should be the same for a bare-metal deployment. (You can find instructions for creating the VM here.) A couple of things to note about the VM:
- Allocate at least 2 vCPUs, even if you’re overcommitting your resources; there are pieces of this example that won’t start with just one.
- Try to allocate at least 4096 MB of RAM and 20 GB of drive space.
- You should have Ubuntu 16.04 or later installed.
- Set the default network adapter to connect to the “Bridged adapter” to enable traffic between the VM and the host machine.
Prepare the VM
There are a few things you need to do to get the VM ready. Specifically, you need to turn off swap, tweak some configuration settings, and make sure the prerequisite libraries are installed. To do that, follow these steps:
- Change to root:
sudo su
- Turn off swap: first, turn it off directly for the running system …
swapoff -a
… then comment out the reference to swap in /etc/fstab. Start by editing the file:
vi /etc/fstab
Then comment out the appropriate line, as in:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none            swap    sw              0       0
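To confirm that swap is actually off, you can check the swap totals, which should now all be zero:
free -m
The Swap row of the output should read all zeros.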
- Now configure iptables to receive bridged network traffic. First edit the sysctl.conf file:
vi /etc/ufw/sysctl.conf
And add the following lines to the end:
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
- Reboot so the changes take effect.
- Install ebtables and ethtool:
sudo su
apt-get install ebtables ethtool
- Reboot once more.
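Once the VM is back up, you can verify that the bridge settings took effect; assuming the br_netfilter kernel module has been loaded, this should return a value of 1:
sysctl net.bridge.bridge-nf-call-iptables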
Kubeadm Install Process
OK, now we’re ready to go ahead and do the install. For the full details on this process, you can see the documentation, but here’s the quick and dirty version:
- Install Docker:
sudo su
apt-get update
apt-get install -y docker.io
- Install HTTPS support components (if necessary):
apt-get update
apt-get install -y apt-transport-https
- Install Curl (if necessary):
apt-get install curl
- Retrieve the key for the Kubernetes repo and add it to your key manager:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
- Add the Kubernetes repo to your system:
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
- Actually install the three pieces you’ll need (kubeadm, kubelet, and kubectl):
apt-get update
apt-get install -y kubelet kubeadm kubectl
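Optionally, you can also pin these packages so that a later apt-get upgrade doesn’t move you to a new Kubernetes version unexpectedly:
apt-mark hold kubelet kubeadm kubectl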
At this point you should have all the tools you need, so you should be ready to go ahead and actually deploy a k8s cluster.
Create a cluster
Now that the Kubeadm installation is complete, we’ll go ahead and create a new cluster using kubeadm init. Part of this process is choosing a network provider; there are several choices, but we’ll use Calico for this kubeadm init example.
- Create the actual cluster. For Calico, we need to add the --pod-network-cidr switch as a command-line argument to kubeadm init, as in:
kubeadm init --pod-network-cidr=192.168.0.0/16
This will crank for a while, eventually giving you output something like this:
To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 354502.d6a9a425d5fa8f2e 192.168.0.9:6443 --discovery-token-ca-cert-hash sha256:ad7c5e8a0c909ed36a87452e65fa44b1c2a9729cef7285eb551e2f126a1d6a54
Notice that last bit, about joining other machines to the cluster; we’re not going to do that, but you do have that option.
- Prepare your system for adding workloads, including the network plugin. Open a NEW terminal window and execute the commands kubeadm gave you:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
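You can confirm that kubectl can now talk to the cluster; note that the node will report a status of NotReady until the network plugin is installed in the next step:
kubectl get nodes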
- Install the Calico network plugin:
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
- Check to see if the pods are running:
kubectl get pods --all-namespaces
The pods will start up over a short period of time.
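If you’d rather watch them come up than re-run the command, add the watch flag, which streams status changes until you interrupt it with Ctrl-C:
kubectl get pods --all-namespaces -w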
- Untaint the master so that it will be available for scheduling workloads:
kubectl taint nodes --all node-role.kubernetes.io/master-
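You can confirm that the taint is gone; the Taints field in the output should read <none>:
kubectl describe nodes | grep -i taints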
At this point you should have a fully functional Kubernetes cluster on which you can run services and workloads.
Test the cluster
Now let’s make sure everything’s working properly by installing the Sock Shop sample application. Follow these steps:
- Create the namespace in which the Sock Shop will live:
kubectl create namespace sock-shop
- Create the actual Sock Shop application:
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
Use kubectl get pods --all-namespaces to make sure the pods are all running.
- We’ll interact with it via the front-end service, so find the IP address for that service:
kubectl -n sock-shop get svc front-end
NAME        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
front-end   10.110.250.153   <nodes>       80:30001/TCP   59s
Visit http://<cluster-ip> (in this case, http://10.110.250.153) with your browser. You should see the WeaveSocks interface.
(NOTE: If you are running VirtualBox on Windows, you may run into a bug that only enables you to view the application on port 30001.)
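If you’d rather script this lookup than read the table, a jsonpath query pulls out the NodePort directly (a minimal sketch that assumes the front-end service exposes a single port, as in the output above):
kubectl -n sock-shop get svc front-end -o jsonpath='{.spec.ports[0].nodePort}'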
- You can also verify that you can reach the interface from the host machine. You’ll first need to find the proper IP address. Your VM will have several, which you should be able to find using:
ifconfig
You’re looking for an IP address in the same range as your host machine, most likely the Ethernet adapter. For example, in my case, it was this section:
enp0s3    Link encap:Ethernet  HWaddr 08:00:27:7c:14:fd
          inet addr:192.168.2.50  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::7638:2441:3326:b242/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:415485 errors:0 dropped:0 overruns:0 frame:0
          TX packets:166889 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:621031284 (621.0 MB)  TX bytes:12020009 (12.0 MB)
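On newer Ubuntu releases ifconfig may not be installed by default; the ip tool reports the same addresses:
ip addr show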
- Now take that IP address and point the browser on your host machine to that IP on port 30001:
http://<VM-IP>:30001
(In my case that would be http://192.168.2.50:30001.)
If all has gone well, you will see the Sock Shop, just as you did on the VM.
Cleaning up
We haven’t done much, but if you want to start over, you can do the following:
To remove the Sock Shop:
kubectl delete namespace sock-shop
To remove the entire cluster:
sudo kubeadm reset
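Note that kubeadm reset tears down the cluster itself but leaves behind the kubeconfig you copied into your home directory earlier; if you want a completely clean slate, you can remove that as well:
rm -rf $HOME/.kube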
That’s it!
Installing Kubernetes with kubeadm recap
So at this point you know how to:
- Prepare a VM for Kubeadm
- Install Kubeadm
- Deploy a Kubernetes cluster
- Deploy a sample application on a Kubernetes cluster
- Remove the sample application
- Remove the cluster
In future articles, we’ll talk about other features such as creating custom applications and adding additional nodes to clusters. You can also check out our previous tutorial on creating a Kubernetes deployment, or spend an hour learning the basics of Kubernetes with our free Kubernetes mini-boot camp.
Comments

ebtables is required to install and load the br_netfilter kernel module into the OS (because it’s Ubuntu).
Thank you for this document. Can you please give us a link to adding additional nodes to clusters? I am not able to join nodes to my cluster with kubeadm join.
It would be good if you could also explain why one needs to disable swap and configure iptables.
k8s doesn’t like machines with swap enabled; it’s a prerequisite. The iptables settings are required for inter-pod network communication.
I ran the command to create the Sock Shop app, and the pods have been stuck in Pending for almost an hour now. I’m running this on a VM with the recommended spec (4GB of RAM and 2 CPUs). Maybe this is not enough?
phu@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-g2dxq 1/1 Running 0 59m
kube-system calico-kube-controllers-5cc6fcf4d9-cxhg9 1/1 Running 0 59m
kube-system calico-node-p7pg7 2/2 Running 0 59m
kube-system etcd-ubuntu 1/1 Running 0 1h
kube-system kube-apiserver-ubuntu 1/1 Running 0 1h
kube-system kube-controller-manager-ubuntu 1/1 Running 0 1h
kube-system kube-dns-86f4d74b45-zvwh9 3/3 Running 0 1h
kube-system kube-proxy-qvv4v 1/1 Running 0 1h
kube-system kube-scheduler-ubuntu 1/1 Running 0 1h
sock-shop carts-6cd457d86c-wqtvr 0/1 Pending 0 54m
sock-shop carts-db-784446fdd6-jz29r 0/1 Pending 0 54m
sock-shop catalogue-779cd58f9b-8rhbl 0/1 Pending 0 54m
sock-shop catalogue-db-6794f65f5d-28bz5 0/1 Pending 0 54m
sock-shop front-end-679d7bcb77-6zxfn 0/1 Pending 0 54m
sock-shop orders-755bd9f786-kxqzs 0/1 Pending 0 54m
sock-shop orders-db-84bb8f48d6-7zrjr 0/1 Pending 0 54m
sock-shop payment-674658f686-nrkmk 0/1 Pending 0 54m
sock-shop queue-master-5f98bbd67-xv7vc 0/1 Pending 0 54m
sock-shop rabbitmq-86d44dd846-5nvr8 0/1 Pending 0 54m
sock-shop shipping-79786fb956-8vbzg 0/1 Pending 0 54m
sock-shop user-6995984547-74fcd 0/1 Pending 0 54m
sock-shop user-db-fc7b47fb9-5p6pj 0/1 Pending 0 54m
Try expanding to 8GB and see what happens.
Hi there,
I’ve done this and would like to use jenkins to make deployments and other things.
Have you got any experience setting up Jenkins to use kubeadm to remotely connect to a bare-metal k8s cluster?
If not – do you know at all how to set up remote access to a cluster setup like above?
Hi
Thank you for the tutorial.
I followed the tutorial, but the pods remain in Pending status. This is my pods’ state after installing the Calico network, below. I am running Ubuntu 16.04 on a VPS with 16GB of RAM and a 2.4GHz CPU.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57c8947c94-rl2j9 0/1 Pending 0 32h
kube-system coredns-576cbf47c7-72xzx 0/1 ContainerCreating 0 32h
kube-system coredns-576cbf47c7-wwtrw 0/1 ContainerCreating 0 32h
kube-system etcd-vps205307 1/1 Running 0 32h
kube-system kube-apiserver-vps205307 1/1 Running 0 32h
kube-system kube-controller-manager-vps205307 1/1 Running 1 32h
kube-system kube-proxy-tjd9j 1/1 Running 0 32h
kube-system kube-scheduler-vps205307 1/1 Running 2 32h
What could be causing this, and how can I solve it?
Hi, I am trying to install Kubernetes following this tutorial: https://www.mirantis.com/blog/how-install-kubernetes-kubeadm/. After the “kubeadm init” command, all coredns pods stay in Pending. I am running it on a VPS which has 12GB of RAM and 2 CPUs at 2.4GHz.
This is the list of pods below:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-5r78j 0/1 Pending 0 9m38s
kube-system coredns-576cbf47c7-98jdp 0/1 Pending 0 9m38s
kube-system etcd-vps205307 1/1 Running 0 8m33s
kube-system kube-apiserver-vps205307 1/1 Running 0 8m38s
kube-system kube-controller-manager-vps205307 1/1 Running 0 8m48s
kube-system kube-proxy-cn8dz 1/1 Running 0 9m38s
kube-system kube-scheduler-vps205307 1/1 Running 0 9m
Why do we need to turn off swap? Any specific reason?
Was able to reproduce the issue with “Pending” pods on AWS/Ubuntu16/2CPU/4GB t3-medium instance.
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-74bbfbfd85-bb5pl 0/1 Pending 0 122m
kube-system coredns-86c58d9df4-k6klz 0/1 Pending 0 160m
kube-system coredns-86c58d9df4-lj5hk 0/1 Pending 0 160m
kube-system etcd-ip-172-31-8-112 1/1 Running 0 159m
kube-system kube-apiserver-ip-172-31-8-112 1/1 Running 0 159m
kube-system kube-controller-manager-ip-172-31-8-112 1/1 Running 0 159m
kube-system kube-proxy-m8c4n 1/1 Running 0 160m
kube-system kube-scheduler-ip-172-31-8-112 1/1 Running 0 159m
As per the log entries, it looks like it’s CNI config related:
journalctl -xeu kubelet:
Jan 27 16:30:17 ip-172-31-8-112 kubelet[2999]: E0127 16:30:17.401666 2999 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:Netwo
Jan 27 16:30:22 ip-172-31-8-112 kubelet[2999]: W0127 16:30:22.402610 2999 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
And here as well:
systemctl status kubelet:
Jan 27 16:40:12 ip-172-31-8-112 kubelet[2999]: W0127 16:40:12.561347 2999 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 27 16:40:12 ip-172-31-8-112 kubelet[2999]: E0127 16:40:12.561481 2999 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Checking the config dir /etc/cni/net.d shows it is absent, so it seems that’s the root cause.
As per Calico installation guide for v2.6 (https://github.com/projectcalico/calico/blob/master/v2.6/getting-started/kubernetes/installation/integration.md) that dir should be added and filled with some data:
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "cniVersion": "0.1.0",
    "type": "calico",
    "etcd_endpoints": "http://<ETCD_IP>:<ETCD_PORT>",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    },
    "kubernetes": {
        "kubeconfig": "<KUBECONFIG_FILEPATH>"
    }
}
EOF
When I fill in etcd’s actual IP:PORT and the kubeconfig path, here is the result in my case:
cat >/etc/cni/net.d/10-calico.conf <<EOF
{
"name": "calico-k8s-network",
"type": "calico",
"etcd_endpoints": "http://10.96.232.136:6666",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/etc/kubernetes/kubelet.conf"
}
}
EOF
Doing that completely eliminates the "Pending" problem:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-6hbkv 1/1 Running 0 35m
kube-system calico-kube-controllers-74bbfbfd85-bb5pl 1/1 Running 0 3h59m
kube-system calico-node-9t6pf 2/2 Running 1 35m
kube-system coredns-86c58d9df4-k6klz 1/1 Running 0 4h38m
kube-system coredns-86c58d9df4-lj5hk 1/1 Running 0 4h38m
kube-system etcd-ip-172-31-8-112 1/1 Running 0 4h37m
kube-system kube-apiserver-ip-172-31-8-112 1/1 Running 0 4h37m
kube-system kube-controller-manager-ip-172-31-8-112 1/1 Running 0 4h37m
kube-system kube-proxy-m8c4n 1/1 Running 0 4h38m
kube-system kube-scheduler-ip-172-31-8-112 1/1 Running 0 4h37m