KubeVirt vs. Virtlet: A Comparison
(Update: For the simplest, most powerful way to run virtual machines in a Kubernetes environment, check out Mirantis OpenStack for Kubernetes, which enables you to get the power of OpenStack with the resilience of Kubernetes.)
It's easy to think of KubeVirt vs. Virtlet as an "either/or" proposition; after all, they both enable you to deploy Virtual Machines on Kubernetes, so one must be better for you than the other, right? Well, no. Which one you should use is going to depend on your individual situation, as these two tools are very, very different.
KubeVirt and Virtlet are implemented in drastically different ways. KubeVirt is a virtual machine management add-on for Kubernetes providing control of VMs as Kubernetes Custom Resources. Virtlet, on the other hand, is a CRI (Container Runtime Interface) implementation, which means that Kubernetes sees VMs in the same way it sees Docker containers.
These two different approaches to handling VMs in Kubernetes lead to significant differences in terms of capabilities and management. In this article, we'll talk about the pros and cons of KubeVirt vs. Virtlet, and when you'd choose one type of virtualization technology over the other.
KubeVirt pros and cons
KubeVirt defines a Virtual Machine as a Kubernetes Custom Resource. The advantage is that installation is fairly straightforward: KubeVirt can be applied as an add-on to any Kubernetes cluster. The disadvantage is that its VMs must be managed separately from kubelet, requiring new commands for kubectl as well as a new controller. While this gives KubeVirt developers the freedom to implement all required features without being limited by the Pod definition, it carries significant disadvantages as well. Many parts of Kubernetes must be re-implemented in the KubeVirt controller, leading to a much bigger codebase. Users must also contend with an additional learning curve because of the addition of two new objects, OfflineVirtualMachine and VirtualMachine.
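To give a sense of what this looks like in practice, here is a minimal sketch of a KubeVirt VirtualMachine custom resource. The API version and some field names changed between early KubeVirt releases, and the container disk image shown is purely illustrative, so treat this as a sketch rather than a definitive manifest:

```yaml
# Sketch of a KubeVirt VirtualMachine custom resource (v1alpha2-era API;
# field names may differ in your KubeVirt version).
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # defined but not started until flipped to true
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              volumeName: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
        - name: containerdisk
          registryDisk:
            image: kubevirt/cirros-registry-disk-demo   # illustrative image
```

Note that this object is handled by the KubeVirt controller rather than by kubelet directly; starting the VM means changing `running` to `true` (or using KubeVirt's own client tooling), not anything in the standard Pod lifecycle.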
One of the biggest limitations of KubeVirt vs. Virtlet, however, is that its VMs cannot be used as a part of Deployments/ReplicaSets/DaemonSets. Where Virtlet VMs get this feature for free, KubeVirt has to reimplement ReplicaSet as VirtualMachineReplicaSet.
The KubeVirt team is also working on the vmctl tool, which will allow KubeVirt VMs to be used with Deployments and other higher-level Kubernetes types. At the moment, however, it's just a proof of concept, and in its current state it adds complexity in the form of an additional required pod.
Virtlet pros and cons
Virtlet is a CRI implementation, so all VMs are defined as Kubernetes Pods and treated as first-class citizens, so to speak. The advantage of this architecture is that anything you can do with Pods can be done with Virtlet VMs, right out of the box. The disadvantage is that you are limited to the functionality that comes with Pods. For example, live migration, device hot plugging, or VM scaling (such as adding CPU or RAM) would need to be implemented separately. Implementing these functions would be possible, but less than ideal.
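Because Virtlet VMs are ordinary Pods at the API level, defining one is just a matter of writing a Pod spec that targets the Virtlet runtime. A minimal sketch follows; the annotation, node selector, and image-naming convention shown here follow Virtlet's documented conventions, but check them against your installation:

```yaml
# Sketch of a VM defined as a plain Kubernetes Pod and handled by Virtlet.
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Routes this pod to Virtlet instead of the default container runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet        # schedule onto nodes running Virtlet
  containers:
    - name: cirros-vm
      # The virtlet.cloud/ prefix marks this as a QCOW2 VM image location
      image: virtlet.cloud/cirros
```

Since this is just a Pod, the same template can be dropped into a Deployment, ReplicaSet, or DaemonSet unchanged, which is exactly the "for free" behavior described above.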
Virtlet also has more limited storage options than KubeVirt, supporting only the Flexvolume driver until Virtlet's support for CSI (Container Storage Interface) is complete, which is expected within the next few months. Once that's done, support will expand to all CSI-supported volumes.
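In the meantime, attaching a volume to a Virtlet VM means a FlexVolume stanza in the Pod spec. A rough fragment is sketched below; the driver name and option keys are assumptions based on Virtlet's conventions and depend on the release you run:

```yaml
# Illustrative FlexVolume attachment fragment for a Virtlet VM pod.
# Driver name and option keys vary by Virtlet release; verify before use.
spec:
  containers:
    - name: cirros-vm
      image: virtlet.cloud/cirros
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      flexVolume:
        driver: virtlet/flexvolume_driver
        options:
          type: qcow2
          capacity: 1024MB
```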
Virtlet also requires more configuration than KubeVirt, though if it is part of your Kubernetes distribution -- for example, if you are using Mirantis Cloud Platform -- this is a non-issue.
Perhaps Virtlet's biggest advantage over KubeVirt performance-wise comes in the area of networking, particularly when it comes to NFV use-cases. Thanks to an adaptation of CNI-Genie, Virtlet supports using multiple interfaces, including the SR-IOV interface. SR-IOV support is crucial to achieving the performance and latency levels required for NFV workloads, and the ability to define multiple NICs is necessary to ensure that SR-IOV is used for user-facing traffic without conflicting with other network traffic such as intra-pod connectivity. KubeVirt supports neither multiple interfaces nor SR-IOV, making it unsuitable for NFV environments.
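With CNI-Genie in place, selecting multiple networks for a Virtlet VM pod comes down to an annotation listing the CNI plugins to attach, for example a default network for intra-pod connectivity plus an SR-IOV interface for user-facing traffic. A sketch, in which the specific plugin names are assumptions about your CNI setup:

```yaml
# Sketch: multi-interface Virtlet VM pod via CNI-Genie.
apiVersion: v1
kind: Pod
metadata:
  name: nfv-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud
    # CNI-Genie annotation: attach an interface from each listed plugin
    cni: "calico,sriov"
spec:
  nodeSelector:
    extraRuntime: virtlet
  containers:
    - name: nfv-vm
      image: virtlet.cloud/fedora   # illustrative VM image
```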
Features table comparison
Each platform has its own level of support for various features:

| Feature | KubeVirt | Virtlet |
| --- | --- | --- |
| Able to run VMs and containers on the same node | Yes | Yes |
| Can be used in k8s objects such as DaemonSet, ReplicaSet, Deployment | No (only its own VirtualMachineReplicaSet as a replacement) | Yes |
| Support for readinessProbe | No | Yes |
| Support for livenessProbe | No | Yes |
| Support for multiple interfaces | No | Yes |
| Exposing VM via a Service | Yes | Yes |
| Access to console | Yes (via a command external to kubectl) | Yes (using the usual kubectl attach) |
| Access to graphical console (VNC) | Yes (via a command external to kubectl) | Yes (via a kubectl plugin) |
| CloudInit support | NoCloud; UserData in the VM definition or in k8s Secrets | NoCloud and ConfigDrive; UserData auto-generated from the Pod definition, with a mechanism to merge configuration from k8s ConfigMaps/Secrets |
| Able to modify VM via presets | Yes (using its own mechanism) | |
| Support for VM migrations | No (broken as of the time of this writing) | No |
| Support for taints and tolerations | No | Yes (VMs are just normal pods) |
| Supported volume types | | Flexvolume only (until CSI support is complete) |
| How volumes can be used in a VM | | |
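To illustrate the CloudInit row above: in KubeVirt, user data rides along inside the VM definition itself as a cloudInitNoCloud volume. A fragment of a VM spec is sketched below; the field names follow KubeVirt's NoCloud data source support, and the user data content is purely illustrative:

```yaml
# Fragment of a KubeVirt VM spec showing NoCloud cloud-init user data.
# Field names follow KubeVirt's NoCloud support; verify against your version.
domain:
  devices:
    disks:
      - name: cloudinitdisk
        disk:
          bus: virtio
volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        password: demo
        chpasswd: { expire: False }
```

Virtlet takes the opposite approach from the table's right-hand column: rather than embedding user data in a VM-specific object, it generates it from the Pod definition and merges in configuration from ConfigMaps or Secrets.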
KubeVirt vs. Virtlet: Which is Better?
As with most "which is better" questions, the answer depends on your use case, project details, and system environment.
If your Kubernetes users are interested in running traditional VMs, but you don't want to add additional configuration in Kubernetes, KubeVirt might do the job, provided your users are willing to learn the additional commands needed to make use of it.
On the other hand, if you want to treat your VMs identically to your non-VM pods, or particularly if you have a hard-core use-case such as NFV, Virtlet is one of the better KubeVirt alternatives.