
k0s 1.27 Released

Miska Kaipiainen - April 26, 2023

Kubernetes 1.27 headlines, along with CVE-free (!) system images and easy configs to support the gVisor container sandbox and WASMtime runtimes

We’re super happy to announce the first release of the k0s 1.27 series. The biggest single new thing is (of course) the upstream Kubernetes 1.27 minor release, but we’ve also packed in quite a few improvements on the k0s side, plus a plethora of the usual bug fixes.

With this 1.27 release we’re now maintaining four minor branches: 1.27, 1.26, 1.25, and 1.24 – the same branches maintained upstream by the community. Please note that upstream has scheduled end-of-life for the 1.24 series in July 2023, so if you’re still using it, now is a good time to start planning to catch up. We provide update recipes for each maintained version (k0s doc links in this blog point to the docs for the current stable release – 1.27.1 at time of publication; use the pull-down tab at upper left to switch versions). Updates can also be applied automatically with Autopilot.
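
For instance, Autopilot drives updates through a Plan object. Here’s a minimal sketch following the shape documented for Autopilot – the node name and download URL are placeholders you’d adjust to your own cluster and target version:

apiVersion: autopilot.k0sproject.io/v1beta2
kind: Plan
metadata:
  name: autopilot
spec:
  # id and timestamp are free-form markers identifying this plan run
  id: id1234
  timestamp: now
  commands:
    - k0supdate:
        version: v1.27.1+k0s.0
        platforms:
          linux-amd64:
            url: https://github.com/k0sproject/k0s/releases/download/v1.27.1%2Bk0s.0/k0s-v1.27.1%2Bk0s.0-amd64
        targets:
          controllers:
            discovery:
              static:
                nodes:
                  - controller0   # placeholder controller node name
          workers:
            discovery:
              selector: {}        # match all workers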

Kubernetes 1.27 “Chill Vibes” logo by Britnee Laverack.

Kubernetes 1.27

“Chill Vibes” – Kubernetes version 1.27 – was released just ahead of KubeCon EU in Amsterdam. It’s a so-called “calm” release, but it still brings some genuinely interesting news.

  • The Kubernetes project has replaced the k8s.gcr.io image registry with the registry.k8s.io registry, which is community-controlled, and no further images will be published to the old registry. In k0s, the default image registry has been set to registry.k8s.io in all maintained releases (now 1.24-1.27).

  • The seccompDefault feature has graduated to stable, letting users run kubelet with the --seccomp-default command line flag so that the node uses the RuntimeDefault seccomp profile instead of Unconfined (seccomp disabled).

  • Mutable scheduling directives for Jobs also graduated to GA, allowing for updates to a Job's scheduling directives before it starts.

  • DownwardAPIHugePages has graduated to stable, and Pod Scheduling Readiness is now in beta, allowing control over when a Pod is ready for scheduling.

  • Cluster administrators can now query the logs of services running on a node for debugging purposes. To make this work, enable the NodeLogQuery feature gate on the kubelet and ensure that the kubelet configuration options enableSystemLogHandler and enableSystemLogQuery are both set to true (see the sketch after this list).
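
As an example of that last item, here’s a sketch of how enabling NodeLogQuery might look in k0s using a worker profile. This assumes that worker profile values are passed through to the kubelet configuration as-is, so treat it as a starting point rather than a recipe:

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  workerProfiles:
    # "log-query" is a name we made up; workers opt in to the profile
    # by starting with `k0s worker --profile log-query`
    - name: log-query
      values:
        featureGates:
          NodeLogQuery: true
        enableSystemLogHandler: true
        enableSystemLogQuery: true

Once enabled, logs can be pulled through the API server with a request like `kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet"`.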

Read more about what’s new in Kubernetes 1.27 in the upstream release blog post.

Secure System Images

Before 1.27, k0s relied on system images published by various upstream projects. This has worked pretty well, but there are some downsides.

For one thing, it’s understood (sadly) that most upstream system images used by Kubernetes contain CVEs. For example, if you scan a kube-proxy image at registry.k8s.io/kube-proxy:v1.25.8, you’ll see 12 vulnerabilities reported (or some other number, depending on the scanner you use).
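
You can reproduce this kind of result with any image scanner; with Trivy, for example, it’s a one-liner: `trivy image registry.k8s.io/kube-proxy:v1.25.8`.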

Many of these CVEs are somewhat irrelevant (for example, old curl binaries and libs in the container that are not really used at runtime). But some pose risks and might let knowledgeable bad actors succeed in attacking clusters. It also looks scary: you definitely don’t want to see a ton of red flags on the pods and images powering functionality at the heart of your Kubernetes cluster.

Nor are CVEs the only problem. For example, we recently discovered an issue with the iptables version bundled in upstream images (for more, please see this blog). It’s been a bit of a burden for us to ensure everything actually works together, and if an upstream image updates its version of iptables, things can still break pretty badly. Having full control of what's in the images used by k0s lets us make sure all components are actually using verified-interoperable iptables versions.

Now, starting with this 1.27 release, k0s runs all system components with images that we build ourselves. We still use pure upstream functionality and do not use any custom forks of project components. Essentially, we take the upstream components as-is and rebuild the images in a way that mitigates as many known CVEs as possible. This way, we are not at the mercy of upstream projects, for whom mitigating non-critical CVEs in their images is understandably not a top priority.

As of this writing, system images shipping with k0s 1.27 come with zero (0) – yes, zero – known vulnerabilities. We have daily scanning in place, which lets us keep track of vulnerabilities as they pop up and mitigate them super-quickly.

Container Runtime Plugins

In prior versions of k0s, you could implement a new runtime by providing a fully custom configuration to k0s-managed containerd. But this required manual intervention, and making it work was somewhat difficult.

Meanwhile, containerd itself has supported “import” statements for quite some time. But using these in practice (for anything non-trivial) has proven all but impossible. One of the main challenges is that containerd’s import mechanism does not really merge plugin configurations. And plugins are essential in the world of Kubernetes – core CRI functionality is typically implemented as a plugin. In effect, this has meant that one could not simply import a custom runtime configuration – say, the container sandboxing runtime gVisor – as an auxiliary runtime.

Now you can. One of the new features in k0s itself is the ability to dynamically reconfigure containerd.

To make this work, k0s creates a special directory, /etc/k0s/containerd.d/, which is used to dynamically load containerd configuration snippets. So, say you want to enable gVisor as an additional runtime: you can now just drop the needed CRI configuration snippet into a /etc/k0s/containerd.d/gvisor.toml file, and k0s will automatically reconfigure and restart containerd.
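
To give a concrete idea of what such a snippet contains, here’s a minimal sketch for gVisor, following gVisor’s standard containerd shim configuration (double-check against the gVisor docs for your version):

# /etc/k0s/containerd.d/gvisor.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  # Hand workloads that request this runtime to gVisor's runsc shim
  runtime_type = "io.containerd.runsc.v1"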

We’ve also solved the merge problem by adding special detection and handling for CRI plugins to k0s. So before k0s reconfigures containerd, it will actually merge all CRI configurations properly, letting users configure multiple CRI plugins via drop-in snippets.

Although dynamic configuration drop-ins can be used to configure pretty much anything in containerd, enabling CRI plugins dynamically is the main use case we’ve had in mind for this feature. To double down on this, we’ve been working on a couple of “helper” projects to simplify even further the task of dropping in support for new container runtimes.

Our new WASM installer and gVisor installer projects both publish images that are able to drop in all the needed components via Pods. So essentially, you can just deploy these as DaemonSets – when they run, they drop in all the needed binaries and configuration. And once k0s sees the config in place, it’ll automatically reload containerd with its new config. Talk about Zero Friction!

For example, to configure k0s to support WASM workloads with WASMtime, just `kubectl apply` the following YAML against your cluster:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: wasm-enabler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k0s-app: wasm-enabler
  template:
    metadata:
      labels:
        k0s-app: wasm-enabler
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: plugin.k0sproject.io/wasm-enabled
                    operator: DoesNotExist
      initContainers:
        - name: wasm-enabler
          image: quay.io/k0sproject/k0s-wasm-plugin:main
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          volumeMounts:
            - name: bin
              mountPath: /var/lib/k0s/bin
            - name: imports
              mountPath: /etc/k0s/containerd.d/
      containers:
        - name: dummy
          image: registry.k8s.io/pause:3.6
      volumes:
        - name: bin
          hostPath:
            path: /var/lib/k0s/bin
            type: Directory
        - name: imports
          hostPath:
            path: /etc/k0s/containerd.d/
            type: Directory
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin
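
Once that’s applied, workloads opt in by referencing the RuntimeClass. Here’s a minimal sketch – the sample image comes from the containerd-wasm-shims examples and may move or change over time:

apiVersion: v1
kind: Pod
metadata:
  name: spin-hello
spec:
  # Route this Pod to the "spin" handler registered above
  runtimeClassName: wasmtime-spin
  containers:
    - name: spin-hello
      image: ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:latest
      command: ["/"]  # the spin shim examples use "/" as the entrypoint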

For more information about using WASM with k0s, please see our Using WASM with k0s tutorial.

Another example: to configure k0s to support gVisor as an additional runtime, just `kubectl apply` the following YAML against your cluster:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gvisor-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k0s-app: gvisor-installer
  template:
    metadata:
      labels:
        k0s-app: gvisor-installer
    spec:
      initContainers:
        - name: gvisor-installer
          image: quay.io/k0sproject/k0s-gvisor-plugin:main
          securityContext:
            privileged: true
          volumeMounts:
            - name: bin
              mountPath: /var/lib/k0s/bin
            - name: imports
              mountPath: /etc/k0s/containerd.d/
      containers:
        # We need one dummy container, as DaemonSets do not allow
        # pods with a restartPolicy other than Always ¯\_(ツ)_/¯
        - name: dummy
          image: registry.k8s.io/pause:3.6
      volumes:
        - name: bin
          hostPath:
            path: /var/lib/k0s/bin
            type: Directory
        - name: imports
          hostPath:
            path: /etc/k0s/containerd.d/
            type: Directory
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
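
As with WASM, workloads opt in by referencing the RuntimeClass, for example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-sandboxed
spec:
  # Run this Pod's containers inside gVisor's runsc sandbox
  runtimeClassName: gvisor
  containers:
    - name: nginx
      image: nginx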

For more information about using gVisor with k0s, please see our Using gVisor with k0s tutorial.

Improved Extensions

One of the challenges people have faced with using Helm charts (for example) as extensions in k0s configuration is ordering. Quite often there’s an implicit order in which charts must be installed. Say you’re installing charts that require certificates from cert-manager: what you’d want to do is also install cert-manager via k0s extensions. But as it turns out, certain implementation details (Golang filepath.Glob, I’m looking at you) prevented the order in the configuration YAML from being honored.

To make things easier to control, we’ve added an explicit order setting to the extension chart configuration. This lets you explicitly control the order in which charts are installed.
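
Here’s a sketch of how this looks (the order field reflects our reading of the new charts configuration, and my-repo/my-app is a placeholder for whatever chart needs the certificates):

apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    helm:
      repositories:
        - name: jetstack
          url: https://charts.jetstack.io
      charts:
        # Lower order values install first: cert-manager lands before
        # the chart that needs its certificates
        - name: cert-manager
          chartname: jetstack/cert-manager
          version: "v1.11.0"
          namespace: cert-manager
          order: 1
        - name: my-app
          chartname: my-repo/my-app
          version: "1.0.0"
          namespace: default
          order: 2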

Try k0s 1.27 today!

Please visit our download and documentation site for a Quick Start Guide, complete instructions, links to tutorials, and other info.


