This week's news: Kubernetes 1.26, DoD's Zero Trust guidance, and more

Eric Gregory & Nick Chase - December 09, 2022
image

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news on the Radio Cloud Native podcast.

This week, Nick and Eric discussed:

  • This week's release of Kubernetes 1.26

  • The U.S. Department of Defense’s new Zero Trust guidance

  • Prometheus: The Documentary

  • And other stories on the podcast, including ChatGPT, AWS re:Invent, and much more

You can watch the entire episode below or download the podcast from Apple Podcasts, Spotify, or wherever you get your podcasts. If you'd like to tune into the next show live, follow Mirantis on LinkedIn to receive our announcement of the next broadcast.

This week's release of Kubernetes 1.26

Eric: Kubernetes 1.26 released on December 8th, marking the project’s last major release of the year. 1.26 brings some notable enhancements and removals to many different corners of the project, so let’s take a look at a few of them.

Reserved IP range for dynamic assignment

On the networking side, we have a stable enhancement that enables you to reserve an IP range for static assignment to Services. This is significant because previously, there wasn’t a good way to know whether an IP you wanted to assign statically had already been handed out dynamically to another Service. Now the service CIDR is divided into separate bands preferred for dynamic and static assignment, helping to avoid conflicts.
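As a sketch of how this looks in practice (the Service name, selector, and cluster CIDR here are hypothetical; the exact band split is computed by the apiserver from your service CIDR):

```yaml
# Assuming a service CIDR of 10.96.0.0/16: with this enhancement stable,
# dynamic allocation prefers the upper band of the range, so a low address
# like the one below is unlikely to be handed out automatically and is a
# safer choice for static assignment.
apiVersion: v1
kind: Service
metadata:
  name: cluster-dns        # hypothetical name
spec:
  clusterIP: 10.96.0.10    # statically assigned from the lower band
  selector:
    app: dns               # hypothetical selector
  ports:
    - port: 53
      protocol: UDP
```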

Multi-protocol Services with LoadBalancer type

Sticking with Services, LoadBalancer type Services can now support multiple protocols. With this stable enhancement, you can have a single load balancer Service at a single IP address handling requests over, for example, both TCP and UDP.
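A minimal sketch of such a Service, with hypothetical names and ports (note that your cloud provider’s load balancer implementation must also support mixed protocols):

```yaml
# One LoadBalancer Service exposing both a TCP and a UDP port
# at the same external IP.
apiVersion: v1
kind: Service
metadata:
  name: game-server        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: game              # hypothetical selector
  ports:
    - name: tcp-api
      protocol: TCP
      port: 8080
    - name: udp-traffic
      protocol: UDP
      port: 9000
```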

Windows privileged containers

Another enhancement adds support for Windows privileged containers, known on Windows as HostProcess containers. Privileged containers have system privileges comparable to processes running directly on the host. We’ve had this capability on the Linux side for a while, and it’s generally used for components managing things like networking or storage. Now, with this feature reaching stable status, the same capabilities are available on Windows nodes.
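A sketch of a HostProcess pod spec (pod name, image, and command are hypothetical; the `windowsOptions` fields are the key part):

```yaml
# A Windows HostProcess pod: runs with host-level privileges on the node,
# analogous to a Linux privileged container.
apiVersion: v1
kind: Pod
metadata:
  name: win-node-task      # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true        # HostProcess pods must use the host network
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
    - name: node-task
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022  # hypothetical image
      command: ["cmd.exe", "/c", "ipconfig"]                # hypothetical command
```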

Auth API to access user attributes

So far we’ve seen several features graduating to stable, but how about some newer goodies? One interesting new alpha feature is an auth API—specifically, an API for managing identity across a cluster. You can already define users, of course, and you can authenticate by a variety of means including tokens and OIDC, but the system doesn’t actually have a universal API for user management. This particular alpha enhancement provides a simple foundation to build on: an API for accessing user attributes.
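Concretely, this lands as the SelfSubjectReview resource (behind the alpha APISelfSubjectReview feature gate): you POST an empty review and the apiserver fills in the attributes of whoever made the request. A sketch of the request body, assuming the gate is enabled:

```yaml
# POSTed to /apis/authentication.k8s.io/v1alpha1/selfsubjectreviews;
# kubectl 1.26 also ships an alpha `kubectl auth whoami` wrapper for this.
apiVersion: authentication.k8s.io/v1alpha1
kind: SelfSubjectReview
# The server populates .status with the caller's attributes, e.g.:
# status:
#   userInfo:
#     username: jane.doe                          # hypothetical values
#     groups: ["system:authenticated", "developers"]
```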

Control over StatefulSet ordinal numbering

Another interesting alpha enhancement provides more granular control over StatefulSet initialization. This one’s a little more niche, but it solves an interesting problem—specifically, migrating a stateful workload from one StatefulSet to another, potentially across clusters in a multi-cluster configuration, all without downtime. Here’s a brief summary from the KEP owner, pwschuurman:

“The underlying mechanics I'm trying to achieve [in] this KEP is to allow a StatefulSet to be scaled down from N -> 0 in a source cluster (scaling down ordinal "N-1" first), and scaled up from 0 -> N in the destination cluster (scaling up ordinal "N-1" first), pod-by-pod.”
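The knob that makes this possible is a new `spec.ordinals.start` field. As a hypothetical migration sketch (names and image are made up, and the alpha StatefulSetStartOrdinal feature gate must be enabled): a destination StatefulSet whose numbering starts at 3, so it can run pods web-3 and web-4 while web-0 through web-2 still live in the source cluster.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                # hypothetical name
spec:
  replicas: 2
  ordinals:
    start: 3               # pods are numbered web-3, web-4 instead of web-0, web-1
  serviceName: web
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.23   # hypothetical workload
```

Decrementing `start` (and bumping `replicas`) in the destination while scaling the source down moves the workload over pod by pod, highest ordinal first.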

API removals

Finally, there are a few API removals to note. With 1.26, the v1alpha2 API endpoint for CRI is going away, so you’ll need to make sure you’re targeting plain old v1. As a consequence, containerd 1.5 and earlier aren’t supported, since they don’t support CRI v1, so in order to upgrade a node to Kubernetes 1.26, you’ll need to either upgrade to containerd 1.6 or use another container runtime that plays nice with CRI v1. 

Other API removals to note include autoscaling/v2beta2, which should be replaced with autoscaling/v2, and flowcontrol.apiserver.k8s.io/v1beta1, which should be replaced with flowcontrol.apiserver.k8s.io/v1beta2.
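For the autoscaling case, that’s usually just an `apiVersion` bump in your manifests. A minimal HorizontalPodAutoscaler against the surviving API, with a hypothetical target and thresholds:

```yaml
# Manifests still pinned to autoscaling/v2beta2 need this apiVersion
# change before upgrading to 1.26.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```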

The U.S. Department of Defense’s new Zero Trust guidance

Nick: As most of our listeners know, “zero trust” is an approach to security based on the idea that any perimeter—technical or notional—can be breached, so every device, user, and system is suspect until verified and should be granted the least practical permissions required to perform necessary tasks—limiting “blast radius” even when protections fail. Zero Trust IT architectures implement this with identity governance and policy-based access, backed up by various kinds of micro-segmentation: creating the smallest perimeters practical around assets. This book by Bruce Basil Matthews explores how service mesh can be used to help implement a Zero Trust architecture around Kubernetes.

The US Department of Defense has just released their Strategy and Roadmap for implementing Zero Trust everywhere: requiring all DoD systems and people to adopt that “never trust, always verify” mindset. 

Stakes are high. The DoD Strategy document opens: “Our adversaries are in our networks, exfiltrating our data, and exploiting the Department’s users.” Their timeline is to put a ZT framework in place by 2026-27. Their four-phase plan includes both cultural and technical aspects, and will align a host of pre-existing and parallel initiatives like the Cybersecurity and Infrastructure Security Agency’s (CISA) 2021 Infrastructure Resiliency Planning Framework. Overall, the Strategy makes abundantly clear that the aim of Zero Trust must be accomplished harmoniously with the goal of making useful data more readily accessible to authorized users – that is, the DoD aims to continue the “de-siloing” of data in every sphere, and simplifying access at every level, while providing better, more dynamic security.

Clearly, this is going to demand a ton of innovation, much of it coming from the private sector. Also clearly, there are global-scale challenges to overcome, including fundamental security holes in software and hardware supply chains that end up supporting DoD ZT efforts. While there’s not a lot of formal criticism of the Strategy (which is lucid, inspiring, and well worth reading) bubbling up yet, industry folks are concerned that ZT standards must be clarified and uniformly required across all strategic government agencies and entities, avoiding ambiguity and fragmentation.

Prometheus: The Documentary

Nick: The European developer-focused jobs site Honeypot has an appetite for big content projects. It already has authoritative documentaries about Kubernetes and Vue.js in the can, and one on React coming next February. Just this past month, it debuted Prometheus: The Documentary, a 27-minute narrative, told by project founders and contributors, about the birth and evolution of the market-leading cloud native monitoring system over its first ten years (Prometheus having been released initially on November 24, 2012).

Prometheus is now strongly associated with technologies like Docker and Kubernetes. But it actually predates both of these, going back to SoundCloud in Zurich, which had developed its own framework for orchestrating containers at scale and running dynamic microservices. SoundCloud was lucky enough to hire several folks from Google who were familiar with Borgmon: a flexible monitoring system based on a time-series database, developed internally at Google for monitoring apps on its Borg framework, i.e., Kubernetes’ predecessor. Together, these folks began building Prometheus, initially working “in their spare time” because their core mandate was to provide SoundCloud with observability solutions – not to revolutionize observability.

The beautifully produced documentary traces the project from its initial assumptions and principles onward – along the way providing a measured, lucid explanation of how dynamic cloud native applications work, and why they required a brand-new take on how to observe and do forensics around them. The narrative is carried and elaborated by folks at key end-user companies who adopted Prometheus early and became contributors and de facto evangelists. Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation, provides historical insight on the project’s distinguishing features and some of the reasons for its success – Prometheus was, in fact, only the second project to graduate from the CNCF, after Kubernetes itself.

It’s rare you get a chance to dive this deeply into a software story, and this one is truly exceptional: both for the history it presents so lucidly and for documenting the fact that technology projects like Prometheus are literally built “while the plane is in the air.” Even for folks who’ve been around for a lot of this, it’s frankly amazing to see how Prometheus was envisioned, based on one innovative organization’s exceptional requirements, and grew to embrace and enable growth of today’s modern cloud native ecosystem. Strongly recommended.

Check out the podcast for more of this week's stories.