Virtlet makes it possible to run VMs on Kubernetes clusters as if they were plain pods, enabling you to use standard kubectl commands to manage them, bringing them onto the cluster network as first class citizens, and making it possible to build higher-level Kubernetes objects such as Deployments, StatefulSets or DaemonSets composed of them. Virtlet achieves this by implementing the Container Runtime Interface (CRI).
An obvious use case for VMs is legacy applications that can’t run in containers for one reason or another, or that require extra privileges in order to be containerized. Legacy applications will remain an important use case for years to come, but other important uses exist as well, such as:
- NFV: Many VNFs can’t be easily containerized because it’s often undesirable and unsafe for them to share the kernel with the host system.
- Non-Linux systems: You may need, for example, a Microsoft Windows environment for some of your CI tasks, such as those for Windows desktop apps. True, Windows Containers do exist, but maybe you want a uniform Linux- and Kubernetes-based infrastructure, or you don’t have people with enough Windows knowledge. You might also need to run a test environment for an app that runs on some special-purpose OS.
- Unikernel applications such as OSv, Mirage or Rump kernels: One exciting example of this work is the MIKELANGELO project, for which Virtlet’s ability to support features such as Deployments of VMs is quite important.
- Isolation: Sure, there are now solutions that enable you to run container images using VMs, thus providing all the isolation you need in most cases, but sometimes you may want to have a “real OS” and not just a container. Imagine, for example, that you have one big bare metal Kubernetes cluster, but you need to test your Kubernetes-based CI/CD and deployment system. Unfortunately, some of the tests can be rather disruptive, with the ability to affect your cluster’s control plane in an undesired way. (See Kubernetes in Kubernetes example for more info.)
An important point to make is that while Virtlet was initially targeted at bare metal Kubernetes installations, GCP and Azure now support nested virtualization, and AWS is providing bare metal instances such as i3.metal, making it possible to use Virtlet on the public cloud, as well.
Virtlet’s approach to running VMs on Kubernetes clusters strives to make it possible to use VMs as if they were plain pods. This includes support for common kubectl commands, such as create, apply, get, delete, logs, attach and port-forward, with exec likely to be implemented in future. The VM pods join the cluster network, getting an IP address from the pod subnet. Moreover, it’s possible to create Kubernetes services that point at VM pods. VM pods can also make use of TCP and HTTP readiness and liveness probes. In addition, Virtlet honors CPU and Memory resource limits specified for VM pods.
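As an illustrative sketch of what this looks like in practice (the pod name, image name and probe port here are hypothetical, not taken from the Virtlet docs), a VM pod with a TCP readiness probe and resource limits might be declared like this:

```yaml
# Hypothetical sketch: a Virtlet VM pod with a TCP readiness probe
# and CPU/memory limits. The image name and port are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: example-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  containers:
  - name: example-vm
    image: virtlet.cloud/example-image
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
    readinessProbe:
      # Probe the VM's SSH port; the pod is Ready once the VM answers
      tcpSocket:
        port: 22
      initialDelaySeconds: 30
```

Because the VM pod is a regular pod from the cluster’s point of view, standard constructs such as Services and probes apply to it unchanged.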
Handling mounts for VM pods differs to some extent from how it’s handled for actual containers. Virtlet may gain support for all kinds of Kubernetes volumes over time once 9p support is implemented, but it already supports specifying ConfigMap and Secret mounts, which are actually copied into the VMs using the Cloud-Init mechanism. It’s also possible to use Virtlet’s flexvolume driver to specify mounting of local block devices, “ephemeral volumes” with a lifetime bound to that of the pod, and Ceph volumes that are specified as block devices.
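As a sketch of the flexvolume mechanism, an ephemeral volume can be declared in the pod spec roughly as follows; the driver name and options follow Virtlet’s examples, but treat them as assumptions to verify against the documentation for your Virtlet version:

```yaml
# Sketch: an ephemeral QCOW2 volume handled by Virtlet's flexvolume
# driver. Its lifetime is bound to that of the pod.
volumes:
- name: extra-disk
  flexVolume:
    driver: "virtlet/flexvolume_driver"
    options:
      type: qcow2
      capacity: 1024MB
```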
Virtlet makes extensive use of the Cloud-Init mechanism. For example, it’s used to inject ssh keys, create users, run specific commands on VM startup, and pass the network configuration in situations where it’s too complex to handle using the standard Virtlet networking based on an internal DHCP server.
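For instance, SSH keys and cloud-init user-data can be passed to a VM via pod annotations. The annotation names below follow Virtlet’s documentation, but treat them (and the key and command shown) as assumptions to check for your version:

```yaml
# Sketch: injecting an SSH key and a startup command into a VM pod
# through Virtlet's cloud-init annotations.
metadata:
  name: example-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
    VirtletSSHKeys: |
      ssh-rsa AAAAB3... user@example.com
    VirtletCloudInitUserData: |
      runcmd:
      - echo "hello from cloud-init"
```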
In order to avoid having a separate complex deployment procedure for the nodes that run VMs, Virtlet makes use of the CRI proxy, so you can run both VM pods and plain Kubernetes pods on the same node. You can also deploy Virtlet itself as a DaemonSet.
Let’s try Virtlet: setting up
The easiest way to try Virtlet is to use the demo script, which makes use of kubeadm-dind-cluster, a tool that makes it possible to run Kubernetes test clusters using just Docker.
To make things easier, we’ll need the virtletctl binary, which facilitates some of the VM-related tasks. You can get it from the Virtlet releases page, as follows:
Linux:

sudo wget -O /usr/local/bin/virtletctl https://github.com/Mirantis/virtlet/releases/download/v1.0.0/virtletctl
echo '4a0efdfe339f6fb00525bc53428415177bdd5f2391774d60ec1c449a99990461 /usr/local/bin/virtletctl' | sha256sum -c && sudo chmod +x /usr/local/bin/virtletctl
Mac OS X:
sudo wget -O /usr/local/bin/virtletctl https://github.com/Mirantis/virtlet/releases/download/v1.0.0/virtletctl.darwin
echo '8265312a5d9ffe0e8ce1ff66fde187ad025d1ebd780fb500f54512b7f0738bd3 /usr/local/bin/virtletctl' | sha256sum -c && sudo chmod +x /usr/local/bin/virtletctl
Now we need to download the demo script and run it:
NOTE: if you’re already using kubeadm-dind-cluster, this command will erase and replace your existing test cluster.
wget https://raw.githubusercontent.com/Mirantis/virtlet/v1.0.0/deploy/demo.sh
chmod +x demo.sh
./demo.sh
Answer y to the script’s questions and wait until the script completes. The script will create a CirrOS VM for you and display its shell prompt:
Successfully established ssh connection. Press Ctrl-D to disconnect.
$
Now let’s test it out.
Testing the Virtlet installation
Let’s make sure the VM has network connectivity and can access Kubernetes cluster services. For example, the demo script also creates an nginx service:
$ ping -c1 nginx
PING nginx (10.97.21.125): 56 data bytes
64 bytes from 10.97.21.125: seq=0 ttl=58 time=4.134 ms

--- nginx ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 4.134/4.134/4.134 ms
$ curl -s http://nginx | head -4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Cool! Let’s press Ctrl-D to disconnect from the VM and see what it looks like from the Kubernetes perspective.
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
cirros-vm                1/1       Running   0          11m
nginx-7587c6fdb6-crb7z   1/1       Running   0          11m
The cirros-vm pod is the VM. We can look at its boot logs via kubectl logs:
$ kubectl logs cirros-vm
...
/dev/root resized successfully [took 0.01s]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
cirros-vm login: [  734.232878] random: crng init done
Or we can attach to its serial console (press Ctrl-] to detach):
$ kubectl attach -it cirros-vm
If you don't see a command prompt, try pressing enter.

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
cirros-vm login: cirros
Password:
$
You can delete the VM pod when you no longer need it:
$ kubectl delete pod cirros-vm
Very good. Now let’s try something more interesting.
Running Windows on a Linux Kubernetes cluster using Virtlet
Let’s try running a Windows VM. You can obtain the necessary images, for example, from cloudbase. Unfortunately you can’t point Virtlet directly at them because of licensing restrictions, so we’ll need a few extra tricks.
Let’s assume the image is available as http://192.168.0.2:8000/windows.qcow2 (You can achieve this using python3 -m http.server, for example.)
First, Virtlet uses the pod spec’s image field to specify the image. This field must follow the conventions for container image names, so we can’t just put any image URL in it. To solve this problem, we’ll need to create a Virtlet image name translation object for it:
$ cat >winimage.yaml <<EOF
apiVersion: "virtlet.k8s/v1"
kind: VirtletImageMapping
metadata:
  name: windows
  namespace: kube-system
spec:
  translations:
  - name: windows
    url: http://192.168.0.2:8000/windows.qcow2
EOF
$ kubectl create -f winimage.yaml
Virtlet’s image name translation mechanism is flexible enough that you don’t have to create a separate object for every image, but for this case we’re using the simplest form possible. Now let’s make a VM pod to run Windows. Let’s name the file windows-vm.yaml. Below is a walkthrough of its contents.
The file starts with a standard pod header, of which the only notable part is the kubernetes.io/target-runtime annotation. The CRI proxy uses it to direct the pod to the Virtlet runtime on the node instead of the default Docker runtime:
apiVersion: v1
kind: Pod
metadata:
  name: windows-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
  labels:
    app: windows
...
At the beginning of the pod spec, we want to specify that this pod has to run on one of the nodes with the Virtlet runtime. We’ll distinguish these nodes by the extraRuntime=virtlet label:
...
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: extraRuntime
            operator: In
            values:
            - virtlet
...
Next, we want to give the VM some time to shut down when we’re deleting the pod:
...
  terminationGracePeriodSeconds: 120
...
Now let’s define the container. Its image has the virtlet.cloud/ prefix, which the CRI proxy needs because it receives the PullImage request before any information about the pod, and it must know that it’s pulling a QCOW2 image for Virtlet. The windows part of the image name corresponds to the name in the image translation object we created above:
...
  containers:
  - name: windows-vm
    image: virtlet.cloud/windows
    imagePullPolicy: Always
...
Next, let’s give the VM the RAM it needs:
...
    resources:
      limits:
        # This memory limit is applied to the libvirt domain definition
        memory: 4096Mi
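Putting the fragments above together, the complete windows-vm.yaml reads:

```yaml
# Full manifest for the Windows VM pod, assembled from the
# fragments discussed above.
apiVersion: v1
kind: Pod
metadata:
  name: windows-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
  labels:
    app: windows
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: extraRuntime
            operator: In
            values:
            - virtlet
  terminationGracePeriodSeconds: 120
  containers:
  - name: windows-vm
    image: virtlet.cloud/windows
    imagePullPolicy: Always
    resources:
      limits:
        # This memory limit is applied to the libvirt domain definition
        memory: 4096Mi
```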
Finally, we’ll create the VM pod and wait for it to get to a Running state:
$ kubectl create -f windows-vm.yaml
$ kubectl get pods -o wide -w
Once the pod is running, we can use virtletctl to connect to the VNC console:
$ virtletctl vnc windows-vm
VNC console for pod "windows-vm" is available on local port 55209
Press ctrl-c or kill the process to stop.
At this point, you can use a VNC client of your choice to connect to localhost:55209. It’ll take some time for the Windows VM to initialize, after which you can click Local Server -> Manage -> Add Roles and Features in the Server Manager that appears, and select Web Server (IIS) in “Server Roles”. After that, you can open Internet Explorer and navigate to http://kubernetes-dashboard.kube-system.svc.cluster.local to see the Kubernetes dashboard:
Now, let’s see if we can connect to the VM pod from outside. Let’s create a file named iis-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  selector:
    app: windows
  ports:
  - port: 80
and add it to our test cluster:
$ kubectl create -f iis-service.yaml
service "iis" created
Now start kubectl proxy:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
And navigate to http://localhost:8001/api/v1/namespaces/default/services/iis/proxy/ to access the cluster service. Voila! We’ve made a service that points at IIS.
For some other interesting examples, you may want to try the Kubernetes-in-Kubernetes demo. The corresponding yaml file showcases some important features of Virtlet, such as using ephemeral volumes to add some disk space to the VM, initializing the VM using cloud-init, and building a StatefulSet of VMs. It also makes use of StatefulSet’s network identity mechanism to help the VMs communicate with each other.
Also, you may want to try Virtlet on a real Kubernetes cluster instead of one based on kubeadm-dind-cluster.
Comparing Virtlet to other Kubernetes VM solutions
Kata Containers is used to run lightweight VMs that house container images, rather than generic VM images, so generally its use cases don’t overlap that much with those of Virtlet, although it’s also based on a CRI implementation.
KubeVirt focuses on “pet” VMs, with support for features such as migrations and fine-tuning the underlying libvirt domains. Also, as KubeVirt is not a CRI implementation, you don’t have to install the CRI proxy on the nodes that need to be able to run VMs. On the other hand, you can’t make a StatefulSet or DaemonSet of KubeVirt VMs, although KubeVirt reimplements some, but not all, of these objects (see, for example, its ReplicaSets).
Virtlet, in many ways, combines the best of all of these worlds: it can be used to treat VMs as first-class Kubernetes citizens, its VMs can be used to run containers, and if necessary, Virtlet could gain support for CRDs (Custom Resource Definitions) for “pet” VMs in the future, although there is no immediate need for that at the moment.
In the end, you should use what’s most applicable to your use case, and we’d love to hear what you find most useful. Tell us in the comments below!