Alpha features in Kubernetes 1.25
Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news on the Radio Cloud Native podcast.
This week they discussed:
Alpha features in Kubernetes 1.25
W3C’s bumpy road to a full HTTPS transition
a Unix legend still updating awk after 45 years
And more on the podcast, including experiments toward quantum internet and Nova Labs' acquisition of FreedomFi
You can download the podcast from Apple Podcasts, Spotify, or wherever you get your podcasts. If you'd like to tune into the next show live, follow Mirantis on LinkedIn to receive our announcement of the next broadcast.
Alpha features in Kubernetes 1.25
Eric: Kubernetes 1.25 was released on August 22nd. A couple weeks ago I walked through a few interesting changes like the graduation of cgroups v2, user namespaces, and the removal of the PodSecurityPolicy API. But now that the release is at hand I thought we should focus on three interesting alpha enhancements that users can get their hands on:
First, container checkpointing is graduating to alpha. This could be useful for debugging, troubleshooting, and responding to malicious intrusions; it lets you take a freeze-frame of a running container and then put it on another node so you can conduct a forensic analysis at your leisure.
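Checkpointing isn't triggered through kubectl; it's a new endpoint on the kubelet API. A rough sketch of the call, assuming the ContainerCheckpoint feature gate is enabled and a CRIU-capable runtime like CRI-O (the pod, container, and certificate paths here are hypothetical, and authentication details vary by cluster):

```shell
# Ask the kubelet (port 10250) to checkpoint a running container.
curl -sk -X POST \
  "https://localhost:10250/checkpoint/default/my-pod/my-container" \
  --cert /var/lib/kubelet/pki/kubelet-client.crt \
  --key /var/lib/kubelet/pki/kubelet-client.key
# The checkpoint archive is written under /var/lib/kubelet/checkpoints/
# on that node, where it can be copied off for forensic analysis.
```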
Next, we’ve got a policy for retriable and non-retriable Job failures: So, a little background for this one. Say you have a Job running and your Pod fails. Kubernetes is going to retry that Job, because it’s persistent like that. And it’s going to keep retrying, even if you know, based on the errors you’re getting, that it will never, ever succeed. Right now, with the stable feature set, your main tool to limit these retries is a field in the Job spec called backoffLimit; so you can say, hey, I only want you to retry 5 times. But what you really want is a way to make the system smarter, and Kubernetes 1.25 introduces that with an alpha podFailurePolicy on the Job spec. With a policy, you can say, “Hey, I don’t want you to retry this Job at all if you get X exit code, because that’s never going to succeed and I don’t want to waste cluster resources. But if you get Y exit code, please keep retrying up to 20 times, because it’s recoverable and we really want to keep this Job going.”
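In 1.25 this lands as the podFailurePolicy field on the Job spec, behind the JobPodFailurePolicy feature gate. A minimal sketch, with hypothetical names and exit codes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job          # hypothetical name
spec:
  backoffLimit: 20           # still caps total retries
  podFailurePolicy:
    rules:
    # Exit code 42 means "this will never succeed": fail the Job immediately.
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    # Pod disruptions (eviction, preemption) don't count against backoffLimit.
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never   # required when using a pod failure policy
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "do-some-work"]   # hypothetical workload
```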
Another alpha feature arriving in 1.25 is a second version of the Key Management Service (or KMS) provider. When you’re explaining Kubernetes architecture to someone and you get to etcd, there’s usually a point where you flash a bright red light and say, “WARNING: etcd data is unencrypted by default.” The KMS provider gives you an interface to bring in an external plugin and use it to encrypt data like Secrets in etcd. So that’s all good, but it’s a bit fiddly to use; v2 of the KMS aims to automate key rotation and simplify usage as a result.
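Configuration-wise, v2 is selected with an apiVersion field on the kms provider in the API server’s EncryptionConfiguration. A sketch, assuming the KMSv2 feature gate is enabled; the plugin name and socket path are hypothetical:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # KMS v2 plugin (alpha in 1.25)
      - kms:
          apiVersion: v2
          name: my-kms-plugin                        # hypothetical
          endpoint: unix:///var/run/kms-plugin.sock  # hypothetical
      # Fallback so reads still work for data written before encryption
      - identity: {}
```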
W3C's bumpy road to a full HTTPS transition
Nick: One of the things that we preach in cloud native development is the use of modular architectures for your applications, and one of the advantages of that is the ability to change those applications easily if you ever have to.
And another thing that we see a lot of these days is applications that expose their functionality via an API that other people's applications rely on.
So what happens when you need to change that API but the people who've written their applications to depend on it can't easily change their code? Well, in an ideal world, you have versioning and it doesn't matter, right?
OK, so let me take this a step further. What happens when it seems like half the world depends on your APIs and after twenty five years you want to change them and those people can't accommodate you?
Well, that's our story today. The World Wide Web Consortium, or W3C, is the organization behind the standardization of obscure technologies like HTML, and they've got a monster problem, in that decades ago, and I cannot believe it has been that long, when they were creating Recommendations and schemas for things like HTML, CSS, and XML, HTTPS was ... well, it was the exception rather than the rule, and if your site was running on HTTP rather than HTTPS, people would still be willing to enter their password, or even their credit card information.
So it was no big deal to open up a web page and see what's called a DOCTYPE declaration that includes a URL pointing at a DTD, or Document Type Definition.
What this means is that the browser is supposed to look at the code at this URL and make sure that the web page conforms to it. And it's the same for lots of other applications that are built on XML, which is kind of a generic tagging language like HTML and that's kind of where the trouble lies.
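For example, the DOCTYPE at the top of a classic XHTML page points straight at a plain-HTTP w3.org URL:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
```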
You see, if you put in an HTTP request, a good bit of the time it's automatically redirected to HTTPS, and the browser just handles it. But these other applications, some of which are ... very ... old, as the Monty Python sketch goes, aren't prepared to handle this, and so for years the W3C, which is the standard bearer of How Things Should Be Done, has been unable to migrate its site to this automatic redirection, because it breaks so many client applications.
And you gotta feel for the people over at the W3C. As early as 2008 they were trying to get people to stop automatically requesting these documents that hadn't changed in years every time something needed to be validated because it was costing them a lot of time and money and resources, and ironically many of those requests were for URLs that are identifiers and don't even need to be retrieved in the first place. And they understand their responsibility, which is why they're trying to do this right, and they've been doing these trial runs. They did 8 hours on August 1 and they were supposed to do 72 hours starting August 18, but they got so many complaints from people whose applications were breaking that they had to stop the test after just 27 hours.
The next test will be September 3, and is projected to run for 48 hours, so if you have an application that depends on W3C URLs, you are encouraged to either fix your application, or change those URLs to point to a local version of the file.
And again, in many cases this comes down to software dependencies and open source projects that could use more support. A lot of these applications depend at their heart on Apache Xerces or libxml2, and in the case of the latter, at least, project maintainer Nick Wellnhofer has said that somebody's going to have to both implement support for HTTPS and commit to supporting it, which is why, again, as our field CTO Shaun O'Meara is fond of saying, open source is not free.
Unix legend still updating awk after 45 years
Eric: Ars Technica reported on the heroic continuing work of eighty-year-old Brian Kernighan, co-creator of the Unix awk utility, who this summer contributed hundreds of lines of code to add Unicode support to awk.
To set out a timeline here, this is a 2022 update to a utility he co-created in 1977 at Bell Labs, and the K in the name comes from him, as in, “Aho-Weinberger-Kernighan,” or AWK.
In an interview with the YouTube channel Computerphile, Kernighan mentioned his update sort of off-handedly:
"It's always been an embarrassment that AWK only worked with ASCII, or maybe 8-bit inputs, but it doesn't really handle Unicode at all. A few months ago, I spent some time working with (laughs) an incredibly old program. I have it at this point where it will actually handle UTF-8 input and output so that you can have regular expressions that, you know, pick up Japanese characters, things like that."
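One place this shows up concretely is length(): a byte-oriented awk counts UTF-8 bytes, while a Unicode-aware awk counts characters. A quick check (the answer you get depends on your awk build and locale):

```shell
# "日本" is two characters but six bytes in UTF-8.
# A Unicode-aware awk prints 2; a byte-oriented one prints 6.
printf '日本' | awk '{ print length($0) }'
```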
I love the title of the Ars Technica piece: “Unix legend, who owes us nothing, keeps fixing foundational AWK code.” The whole piece is a wonderful tribute to a foundational figure—this is one that I’d really recommend everyone go read.