k0s adds k8s 1.26 support, node-local load balancing

We are proud to announce that k0s — the zero-friction Kubernetes — version 1.26 (v1.26.0+k0s.0) is now released!

The biggest headline in this release is of course the shiny new Kubernetes minor version (1.26), but you’ll also find major improvements when setting up highly available control planes for your clusters—not to mention a number of other enhancements and fixes. Learn more about the new features and enhancements below or see the full change log. 

Note: With the release of Kubernetes 1.26, release 1.22 has been marked as end of life in upstream Kubernetes. This means the k0s team no longer maintains the 1.22 series. The currently maintained k0s releases are: 

  • 1.26

  • 1.25

  • 1.24

  • 1.23

The scheduled EOL date for 1.23 is February 28, 2023, so if you’re still running on the 1.23 series, it’s a good time to start catching up. :)

Introducing node-local load balancer

In this release we’re introducing new functionality which we’ve called node-local load balancer, or NLLB for short. This will make it easier to set up highly available control planes for clusters, as it removes the need for an external load balancer.

One challenge when setting up highly available control planes is the fact that some of the core components in the cluster—the kubelet, kube-proxy, and others—can only be configured with a single address for the API connection. When running multiple control plane nodes, and thus multiple API servers, one must have some sort of load balancer in front of the API servers. This has been one of the most common difficulties for people setting up k0s clusters with highly available control planes.

NLLB is a k0s-managed component running on each worker node, and it acts as a sort of client-side load balancer. What we mean by this is that each node is "only" a client of the API server, from which it gets the information it needs about what to run on the node and how. So each of the core components that needs to talk to the API server directly is configured to connect to the API via NLLB. k0s fully manages NLLB dynamically, so even if some of the API server addresses change (for example, when new control plane nodes are added to the cluster), k0s will automatically reconfigure NLLB.

Overall, this will greatly simplify the setup of highly available control planes, as using NLLB removes the need to set up any external load balancer for the cluster. Of course, if you want a single address for cluster operations clients such as Lens or kubectl to connect to, you might still want an external load balancer in place. But for a cluster's purely "internal" operations, an external load balancer is no longer needed.

As this is our first release with NLLB functionality, we’ve put this behind a configuration flag and labeled it as experimental. What this means in practice is that we might still need to make some changes related to NLLB configuration details, and naturally there might be some corner cases we need to fix before calling it stable. But we’d be more than happy to hear your feedback on it!
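
For reference, enabling NLLB is a small addition to the k0s cluster configuration. The sketch below shows roughly what that looks like; because the feature is experimental, the exact field names under spec.network may change between k0s versions, so treat this as illustrative and check the node-local load balancing page in the k0s documentation for your release.

```yaml
# Sketch: k0s cluster configuration with node-local load balancing enabled.
# The NLLB schema is experimental, so field names may differ in your k0s
# version; consult the k0s documentation before using this as-is.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    nodeLocalLoadBalancing:
      enabled: true      # run the node-local load balancer on worker nodes
      type: EnvoyProxy   # NLLB is implemented as an Envoy-based proxy
```

With something like this in place, worker-side components are pointed at the local proxy rather than a single fixed API server address.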

Kubernetes 1.26

Kubernetes 1.26 brings many new and exciting features. We’re not going to summarize all the new features in this post, but we wanted to highlight some of the updates we’re excited to see. For more details, check out the project maintainers’ blog post.

CEL Admission control (alpha)

Kubernetes 1.25 graduated CEL-based validation rules to beta as an additional way to validate custom resources. CEL offers a much more feature-rich way to validate custom resources than the previous simple per-field validation rules. The Kubernetes 1.26 API server now supports, when the feature gate is enabled, a generic admission controller that lets you define validation rules for any resource using CEL. This offers a much more portable way to add custom validation rules to any resource, as one does not (necessarily) have to use admission webhooks. Read more at https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/
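
As a rough illustration of the idea, a policy plus its binding look something like the sketch below. The names are made up for the example, and since this is an alpha API (admissionregistration.k8s.io/v1alpha1, behind the ValidatingAdmissionPolicy feature gate), details may change.

```yaml
# Illustrative ValidatingAdmissionPolicy (alpha in Kubernetes 1.26).
# Names are hypothetical; requires the ValidatingAdmissionPolicy feature gate.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit-demo
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated against the incoming object
    - expression: "object.spec.replicas <= 5"
      message: "deployments may not have more than 5 replicas"
---
# A binding selects where the policy applies (here: all namespaces).
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-demo-binding
spec:
  policyName: replica-limit-demo
  matchResources:
    namespaceSelector: {}
```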

Dynamic resource allocation (alpha)

While it has been possible to allocate "custom" resources such as GPUs to Pods for a long time, this has proven to be quite restrictive. One of the main restrictions is that extended resources such as `nvidia.com/gpu: 2` only work for "countable" resources. Dynamic resource allocation brings in a pattern modeled closely after Persistent Volumes and Claims. On the container level, it uses the Container Device Interface. Read more at https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/
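
To make the Persistent Volume-style pattern a bit more concrete, here is a rough sketch of the alpha API objects. The driver name gpu.example.com and the image are hypothetical, a real DRA driver would have to be installed separately, and the v1alpha1 schema is expected to evolve, so treat this as an illustration of the shape rather than a copy-paste manifest.

```yaml
# Sketch of dynamic resource allocation objects (resource.k8s.io/v1alpha1,
# alpha in Kubernetes 1.26). "gpu.example.com" is a hypothetical driver.
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClass
metadata:
  name: example-gpu
driverName: gpu.example.com           # handled by the (hypothetical) DRA driver
---
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  resourceClassName: example-gpu      # analogous to a PVC's storageClassName
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      resources:
        claims:
          - name: gpu                 # refers to the pod-level claim below
  resourceClaims:
    - name: gpu
      source:
        resourceClaimName: gpu-claim  # bind to the ResourceClaim above
```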

CRI v1alpha2 removed

In practice, this only has an effect when using Docker, via cri-dockerd, as the container runtime. To continue using Docker as the runtime, one has to upgrade cri-dockerd to version 0.3.0, which supports both the v1alpha2 and v1 CRI APIs simultaneously.

Deprecations and removals

As usual, this Kubernetes version deprecates, and finally also removes, some previously deprecated APIs and functionality. The cleanup of in-tree plugins continues with the removal of the GlusterFS volume plugin and the OpenStack cloud provider plugins. Read more at https://kubernetes.io/blog/2022/12/09/kubernetes-v1-26-release/#deprecations-and-removals

Community

Ram wrote a nice blog post on how k0s fits into the telco world.
