Virtlet vs KubeVirt comparison: which is better?

It’s easy to think of KubeVirt or Virtlet as an “either/or” proposition; after all, they both enable you to run Virtual Machines on Kubernetes, so one must be better for you than the other, right?  Well, no. Which one you should use is going to depend on your individual situation, as these two tools are very, very different.

KubeVirt and Virtlet are implemented in drastically different ways. KubeVirt is a virtual machine management add-on for Kubernetes that provides control of VMs as Kubernetes Custom Resources. Virtlet, on the other hand, is a CRI (Container Runtime Interface) implementation, which means that Kubernetes sees VMs in the same way it sees Docker containers.

These two different approaches to handling VMs in Kubernetes lead to significant differences in terms of capabilities and management. In this article, we’ll talk about the pros and cons of each, and when you’d choose one over the other.

KubeVirt pros and cons

KubeVirt defines a Virtual Machine as a Kubernetes Custom Resource. This has the advantage that installation is fairly straightforward: KubeVirt can be applied as an add-on to any Kubernetes cluster. The disadvantage is that its VMs must be managed separately from the kubelet, requiring new commands for kubectl as well as a new controller. While this approach gives the KubeVirt developers the freedom to implement all required features without being limited by the Pod definition, it also carries significant disadvantages. Many parts of Kubernetes have to be re-implemented in the KubeVirt controller, leading to a much bigger codebase, and users face an additional learning curve because of the addition of two new objects, OfflineVirtualMachine and VirtualMachine.
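
To make this concrete, here is a sketch of what a KubeVirt VirtualMachine resource looks like. The exact API group, version, and field names have shifted between KubeVirt releases, so treat the names below as illustrative rather than authoritative:

```yaml
# Illustrative KubeVirt Custom Resource; field names vary by KubeVirt release.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  running: true          # the controller, not the kubelet, decides to start this VM
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            volumeName: rootvolume
        resources:
          requests:
            memory: 512Mi
      volumes:
      - name: rootvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo
```

Note that this is not a Pod: the kubelet never sees this object directly, which is exactly why KubeVirt needs its own controller to act on it.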

One of the biggest limitations of KubeVirt, however, is that its VMs cannot be used as part of Deployments, ReplicaSets, or DaemonSets. Where Virtlet VMs get this feature for free, KubeVirt has to reimplement ReplicaSet as VirtualMachineReplicaSet.
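
KubeVirt's replacement type deliberately mirrors the shape of a regular ReplicaSet. Roughly, and with the caveat that the type and field names here follow KubeVirt's documentation of the time and may have changed since:

```yaml
# Illustrative sketch of KubeVirt's ReplicaSet analogue; verify names against your release.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineReplicaSet
metadata:
  name: vm-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt-vm: demo
  template:
    metadata:
      labels:
        kubevirt-vm: demo
    spec:
      # The VM template goes here (domain, devices, volumes), as in a
      # standalone VirtualMachine definition.
      domain:
        resources:
          requests:
            memory: 512Mi
```

The structure is familiar, but it is a separate controller and a separate object, not the built-in ReplicaSet machinery.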

The KubeVirt team is also working on the vmctl tool, which will allow them to use Deployments and other higher level Kubernetes types. At the moment, however, it’s just a proof of concept, and in its current state, adds complexity in the form of an additional required pod..

Virtlet pros and cons

Virtlet is a CRI implementation, which means all VMs are defined as Kubernetes Pods and treated, so to speak, as first-class citizens. The advantage of this architecture is that anything you can do with Pods can be done with Virtlet VMs right out of the box. The disadvantage is that you are limited to the functionality that comes with Pods; for example, live migration, device hot plugging, and VM scaling (such as adding CPU or RAM) would each need to be implemented separately. Implementing these functions would be possible, but less than ideal.
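
Because a Virtlet VM is just a Pod, defining one is mostly a matter of routing the Pod to the Virtlet runtime. A sketch, where the annotation, node label, and `virtlet.cloud/` image prefix follow Virtlet's documented conventions but should be verified against your Virtlet version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Tells the CRI proxy to hand this pod to Virtlet instead of the container runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet   # schedule only onto nodes where Virtlet is running
  containers:
  - name: cirros-vm
    # The virtlet.cloud/ prefix marks the "image" as a VM image to fetch and boot
    image: virtlet.cloud/cirros
```

Everything else about this object — Services, labels, probes, controllers — works exactly as it would for any other Pod.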

Virtlet also has more limited storage options than KubeVirt, supporting only the FlexVolume driver until Virtlet's support for CSI (the Container Storage Interface) is complete, which is expected within the next few months. Once that's done, support will expand to all CSI-supported volumes.
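
In the meantime, attaching a persistent disk to a Virtlet VM means adding a flexVolume entry to the Pod spec. A sketch, under the assumption that the driver name and options follow Virtlet's FlexVolume conventions (check your version's documentation for the exact option names):

```yaml
spec:
  containers:
  - name: cirros-vm
    image: virtlet.cloud/cirros
    volumeMounts:
    - name: test-volume
      mountPath: /var/lib/data
  volumes:
  - name: test-volume
    flexVolume:
      driver: "virtlet/flexvolume_driver"   # Virtlet's FlexVolume driver
      options:
        type: qcow2        # create a qcow2-backed virtual disk
        capacity: 1024MB
```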

Virtlet also requires more configuration than KubeVirt, though if it is part of your Kubernetes distribution — for example, if you are using Mirantis Cloud Platform — this is a non-issue.

Perhaps Virtlet’s biggest advantage over KubeVirt comes in the area of networking, particularly when it comes to NFV use-cases. Thanks to an adaptation of CNI-Genie, Virtlet supports using multiple interfaces, including the SR-IOV interface. SR-IOV support is crucial to achieve the performance and latency levels required for NFV workloads, and the ability to define multiple NICs is necessary to ensure that SR-IOV is used for user-facing traffic without conflicting with other network traffic such as intra-pod connectivity. KubeVirt supports neither multiple interfaces nor SR-IOV, making it unsuitable for NFV environments.
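
With CNI-Genie, the set of networks a pod attaches to is selected through a pod annotation. A sketch of a multi-interface Virtlet VM, where the plugin names and the VM image are placeholders for whatever your cluster actually runs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfv-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
    # CNI-Genie annotation: attach this pod to both networks.
    # "calico" and "sriov" are example plugin names, not requirements.
    cni: "calico,sriov"
spec:
  containers:
  - name: nfv-vm
    # Hypothetical VM image name, for illustration only
    image: virtlet.cloud/my-vnf-image
```

The result is one interface for ordinary cluster traffic and an SR-IOV interface reserved for the performance-sensitive, user-facing path.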

Features table comparison

Each platform has its own level of support for various features:

Feature | KubeVirt | Virtlet
Able to run VMs and containers on the same node | Yes | Yes
SR-IOV support | No | Yes
Can be used in k8s objects like DaemonSet, ReplicaSet, Deployment | No (its own VirtualMachineReplicaSet as a replacement only) | Yes
Support for readinessProbe | No | Yes (except exec)
Support for livenessProbe | No | Yes (except exec)
Support for multiple interfaces | No | Yes (using CNI-Genie)
Exposing a VM via a Service | Yes | Yes
Access to the text console | Yes (via a command external to kubectl) | Yes (using the usual kubectl attach)
Access to the graphical console (VNC) | Yes (via a command external to kubectl) | Yes (via a kubectl plugin)
CloudInit support | NoCloud; UserData in the VM definition or k8s Secrets | NoCloud and ConfigDrive; UserData auto-generated from the Pod definition, with a mechanism to merge configuration from k8s ConfigMaps/Secrets
Able to modify a VM via presets | Yes (using its own mechanism) | No
Support for VM migrations | Yes (broken as of the time of this writing) | No
Support for taints and tolerations | No | Yes (VMs are just normal pods)
Supported volume types | cloudInitNoCloud, emptyDisk, ephemeral, persistentVolumeClaim, registryDisk | flexVolume
How volumes can be used in a VM | disk, lun, floppy, cdrom | disk

So which is better, Virtlet or KubeVirt?

As with most “which is better” questions, the answer depends on your use case.

If you are simply trying to provide the ability to run traditional VMs to your Kubernetes users without adding additional configuration, KubeVirt might do the job — if your users are willing to learn the additional commands necessary to make use of it.

On the other hand, if you want to treat your VMs identically to your non-VM pods, or particularly if you have a hard-core use case such as NFV, you'll need to use Virtlet instead.
