Cloud Native and Industry News — Week of April 27, 2022

Nick Chase & Eric Gregory - April 27, 2022

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news.

This week they discussed:

  • Google donating Istio to the CNCF
  • Netlify creating an edge content creation tool
  • The release of Node.js 18
  • Tests of Esperanto’s new RISC-V AI chip
  • Elon Musk and Twitter
  • Log4Shell vulnerabilities
  • A recent arXiv study on an ML security flaw

You can watch the full replay below:

To join Nick and Eric next Wednesday, May 4, at 1:00pm EDT/10:00am PDT, register here.

Google donating Istio to the CNCF

Eric Gregory: Google announced yesterday that they will donate the Istio service mesh project to the Cloud Native Computing Foundation. If you’re new to the cloud native world, the CNCF is a part of the non-profit Linux Foundation dedicated to stewarding and advancing open source projects in the cloud native community. Technically this is just the announcement that Google will submit the project to the CNCF, but I reckon it’s got a pretty good chance of getting in the club. Istio is one of the most widely used service mesh solutions; a service mesh provides a communication layer between microservices running on Kubernetes clusters, which can really simplify things like observability and traffic management. If you’re interested in learning more about it, we actually published a book called Service Mesh for Mere Mortals that you can download for free at mirantis.com/press.

So, Google donating Istio to the CNCF is a big deal, and notably it marks a shift in trajectory for the company. Previously, Google moved Istio’s trademark under the auspices of its own Open Usage Commons (or OUC), an open source organization dedicated to trademark facilitation but not source code or project governance. This is something they’ve done with other open source projects like Angular, and it was a source of some tension around projects in the Kubernetes ecosystem like Istio and Knative, with companies like IBM and VMware pushing Google to place the projects in the community pool. Just last month, we reported on the CNCF accepting the Knative serverless framework from Google, but unlike Istio, that trademark was never actually placed under OUC management. Now with the submission of Istio, it looks like we’re seeing a more general shift in attitude around Google’s cloud native projects.

Source: Google donates the Istio service mesh to the Cloud Native Computing Foundation | TechCrunch

Netlify creating an edge content creation tool

Nick Chase: So back in the mid 1990s I worked on one of the first ecommerce sites on the web, back when we actually called it ecommerce instead of just, you know, commerce. The customer had narrowed it down to two candidates: my company, which was proposing to build a dynamic website on what was then a brand-new product called Oracle Web Application Server, and a company that was proposing to build the application by basically taking all the information and generating a static site every time a change needed to be made.

And I remember talking to the then-prospective customer and explaining why this was a bad idea, how we could implement changes much more quickly, and how we could provide much more powerful personalization for users. And we got the job, and it launched the company, and a million years later here I am spending my days doing this. Which is great, I love it.

So fast forward to a few weeks ago – I spent an hour with our new web guy as he explained to me why we were moving our existing website to this thing called Netlify, which is a way to do this JAMstack thing and generate static sites, and if your eyes are glazing over don’t worry, mine were too. But this week I finally got a sense of why this is important, so let me explain.

Let me start with the “generating static sites” part. Netlify is a tool that enables you to take what’s called a “headless Content Management System,” enter your content there, and then create templates that the content gets inserted into when you “build” the site. The idea is that you rebuild the site every time you make a change. This is rapidly becoming the new way to create sites, and, all these years later, I can see why.
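To make that concrete, here’s a minimal sketch in TypeScript of what a build step like this might do under the hood. The CMS endpoint and the Post fields are hypothetical stand-ins, and real generators use much richer templating:

// Minimal static-build sketch: pull content from a (hypothetical) headless
// CMS API and bake it into an HTML file at build time.
import { mkdir, writeFile } from "node:fs/promises";

type Post = { title: string; body: string };

async function build(): Promise<void> {
  // cms.example.com and the Post shape are assumptions for illustration
  const res = await fetch("https://cms.example.com/api/posts");
  const posts: Post[] = await res.json();

  // Insert the content into a template; rebuilding regenerates every page
  const articles = posts
    .map((p) => `<article><h2>${p.title}</h2><p>${p.body}</p></article>`)
    .join("\n");

  await mkdir("dist", { recursive: true });
  await writeFile("dist/index.html", `<html><body>${articles}</body></html>`);
}

build();

The output is plain HTML that can be pushed out to a CDN, with no server-side work happening when a visitor loads the page.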

The first reason is that it enables you to basically, as our web guy Rob says, create a site like you’d build with Legos. You take various blocks and you put them where you want them, and if you need to you can move them around. OK, that’s nice, but what about those objections that I… well, let’s be honest about this, exploited all those years ago?

The first of those objections was personalization. Well, we’ve come a long way since those days: we now have the ability to create powerful scripts that run when the user accesses a URL, with the results inserted into the page. That’s where Netlify comes in. It enables you to create these scripts in various languages and insert them. So yes, there’s some work that happens when customization is needed, but only then.

What about caching and getting changes to appear to users all over the world? Well, now we have Content Delivery Networks that bring content closer to users, but Netlify also invalidates those caches when you make changes, so you don’t have to worry about stale content.

The end result is that JAMstack creates a situation in which web applications are really, well, applications. They’re not just web pages, they’re actual environments with which people can interact and perform functions, and they can do anything that a non-web application can do, which as a former web developer I find immensely satisfying.

Now, why am I bringing all of this up today? It’s because Netlify has announced the release of Netlify Edge Functions, which enable you to create functions that are executed at the edge to customize content near the user.
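To give a sense of the shape of these things, here’s a minimal sketch of an edge function built on Netlify’s documented Deno-based runtime; the geo-personalization logic itself is just an assumption for illustration:

// Minimal Netlify Edge Function sketch: personalize a response close to the
// user. The (request, context) signature and context.geo data come from
// Netlify's Deno-based edge runtime; the greeting logic is hypothetical.
import type { Context } from "https://edge.netlify.com";

export default async (request: Request, context: Context) => {
  const country = context.geo?.country?.name ?? "parts unknown";
  return new Response(`Hello from the edge, visitor from ${country}!`, {
    headers: { "content-type": "text/plain" },
  });
};

A function like this runs on Netlify’s edge nodes when a request comes in, so the site itself stays static and the per-user work happens only when it’s needed.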

And that’s all great, but who cares?

Well, you should care, and I’ll tell you why.

You remember when I talked about how these web applications are really fulfilling their potential as applications, period? Well, one thing that this does is democratize application creation. It enables people to build actual applications with a much smaller learning curve, which has led, and will continue to lead, to much more creative uses of the platform, whether in the JAMstack or not.

Now Netlify is doing the same thing for edge computing. As developers stop worrying about how to do Edge and start thinking about what to do with Edge, I think we’re going to see a quiet explosion of edge computing and what can be done with it.

Sources:

The release of Node.js 18

Eric Gregory: Last week the Node.js project announced the release of version 18. Man, it seems like it was born just yesterday, and now it’s old enough to vote. Time flies.

There were three big headline features in the latest version of the runtime.

  • First, the experimental global fetch API is now available by default. Previously, you had to use an additional module to add fetch API support, or opt in to the experimental feature, but now it’s just part of the core. Since the API is the standard for fetching resources over HTTP, this long-awaited feature should set the stage for wider usage and allow for more cross-platform standardization. (There’s a quick sketch of the new features after this list.)
  • The new release also adds a test runner module. This is particularly notable because it is the language’s first prefixed core module, a new concept that some contributors say opens up naming possibilities for future core modules. Others are worried that the convention opens up room for malicious actors to spoof core modules with names that look similar or simply have the same name but lack the prefix, so we’ll have to see where that goes.
  • Finally, Node is now using version 10.1 of the V8 JavaScript engine at its core, which brings along V8 10.1 features like new methods for finding elements in an array starting from the end of the array rather than the beginning, and an expansion of the Intl.Locale API.
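Here’s a quick sketch with all three of those features in play, runnable on Node 18; the URL is just a placeholder:

// Node 18 sketch: global fetch, the node:test runner, and Array findLast.
import test from "node:test";
import assert from "node:assert";

test("global fetch works without extra modules", async () => {
  const res = await fetch("https://example.com/"); // placeholder URL
  assert.strictEqual(res.status, 200);
});

test("V8 10.1 adds findLast/findLastIndex", () => {
  const versions = [14, 16, 17, 18];
  // search from the end of the array rather than the beginning
  assert.strictEqual(versions.findLast((v) => v % 2 === 0), 18);
});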

Source: Node.js 18 is now available! | NodeJS

Tests of Esperanto’s new RISC-V AI chip

Over in the semiconductor world, chip designer Esperanto Technologies has distributed test samples of its 1,000-core RISC-V processor to strategic customers like Samsung. The chip, called the ET-SoC-1 AI Inference Accelerator, is designed for AI workloads, and Esperanto is clearly hoping to create a competitive RISC-V alternative to incumbents like Nvidia.

Esperanto’s trial run with Samsung garnered positive feedback. Dr. Patrick Bangert, vice president of Artificial Intelligence at Samsung SDS, said, “Our data science team was very impressed with the initial evaluation of Esperanto’s AI acceleration solution. It was fast, performant and overall easy to use. In addition, the SoC demonstrated near-linear performance scaling across different configurations of AI compute clusters. This is a capability that is quite unique, and one we have yet to see consistently delivered by established companies offering alternative solutions to Esperanto.” That’s the type of blurb you put in a press release, which is exactly what Esperanto did.

The ET-SoC-1 is meant to be adaptable to a variety of machine learning tasks, but Esperanto says it’s particularly well-suited to ML recommendation workloads. They seem to be taking a swing at some of the biggest ML workloads here, so it’ll be interesting to see how successful they are at getting chips out there in real-world use. To that end, they note that slots in their evaluation program are still available. So, Nick, if you want to play around with a 1,000-core RISC-V processor, now might be your chance…

Source: Samsung, others test drive Esperanto’s 1,000-core RISC-V AI chip | The Register

Elon Musk and Twitter

I remember a time when we didn’t have to talk about Elon Musk every week. Things were simpler then. As you surely already know, on Monday the Twitter board accepted Elon Musk’s bid to buy the company outright, so while there are still procedural hoops to jump through and ways the deal could fall apart, it looks like it’s happening, and folks are now talking about what “Elon’s Twitter” will look like.

You might ask what happened to the poison pill measure that Twitter’s board enacted. What was that about, if the board was just going to accept the deal within days? Well, only the board knows for sure, but reasonable speculation says it was a measure to buy time and evaluate the course that would be, in their view, most beneficial to shareholders. That might have included confirming that Musk was actually serious and confirming that he had funding. We can’t know for sure, but it might also have included a search for a better deal from another buyer that came up empty. Some also suggested that the board knew a bad earnings report was coming up and that this influenced the deal.

Reactions to the sale varied dramatically, in the tech world as much as or more than elsewhere. Some see the sale as the death knell of Twitter and said, that’s it for me, I’m packing up for Mastodon or RSS or my bespoke microblog. Others, especially Musk fans and those opposed to Twitter’s moderation policies, were celebratory. Twitter co-founder Jack Dorsey chimed in on the pro side. After linking to the Radiohead song “Everything In Its Right Place” and stating that “Twitter is the closest thing we have to a global consciousness,” Dorsey said, “In principle, I don’t believe anyone should own or run Twitter. It wants to be a public good at a protocol level, not a company. Solving for the problem of it being a company however, Elon is the singular solution I trust. I trust his mission to extend the light of consciousness.”

Okay, I have to pause here:

  • First, oh man, reacting to an event with a Radiohead song is so something I would have done on AIM when I was 14.
  • Second, Thom Yorke would hate that so much.
  • Third, if Twitter is our global consciousness then someone please put me on a SpaceX rocket. Anyway, back to the story.

So now the question is: what happens next? I’m going to humbly submit that literally no one knows, very much including Elon Musk, but I think speculation should probably begin from two premises: Musk’s intentions, and his ability to enact those intentions through the organization he has purchased.

On the question of his intentions, you know, this has been the big subject of debate for weeks. You can take him at his word, in which case you’re assuming that he means to roll back moderation policies in the name of “free speech absolutism,” open source the algorithm, verify every human, and make the company more profitable. He “clarified” those free speech intentions a bit yesterday by adding that he thought moderation shouldn’t overstep the freedoms and limits on speech defined by the state, and we can talk about that one in a moment.

But a lot of people don’t take him at his word. Some think it’s essentially trolling, some think it’s a form of status symbol for the absurdly wealthy. I saw one take from the economist who goes by SquarelyRooted speculating that the buy is essentially a vehicle to transfer a lot of money out of his Tesla stake, which he may perceive as overvalued and heading into a choppy market, while maintaining a certain Elon Musk persona and not obviously kicking Tesla in the teeth. Tech writer Max Read suggests that Musk will want to keep Twitter pretty much the way it is, because he likes the way it is, and uses it to great personal benefit—it’s essentially served as a direct amplifier for his wealth.

Sources:

Log4Shell vulnerabilities

Nick Chase: Believe it or not, it has been 155 days since the Log4j vulnerability was discovered. Do you believe that?

DevOps.com reports that here we are, five months after the discovery, and 60% of vulnerable systems are still unpatched. We talked at the time about why it was going to be so hard for many people to even know whether they were affected by the issue, due to how Java packages are distributed, and that seems to have been borne out.

DevOps.com is reporting on a Rezilion report in which the company “used Google’s Open Source Insights tool to scan open source software packages, including dependencies, and found that out of a total of 17,840 affected Java software packages, only 7,140 have been patched.”

It’s important to note that those figures cover only open source software packages, so any application using one of the unpatched packages is still vulnerable as well.

But while you may think that the answer is to just go out and apply whatever fix comes to hand as quickly as possible, that’s not necessarily the end of it, as Palo Alto Networks Unit 42 researchers disclosed this week.

The researchers had been working with Amazon since December, when they discovered that a series of hotfixes Amazon had released for standalone servers, Kubernetes servers, Elastic Container Service (ECS) clusters, and even AWS Fargate were themselves vulnerable to a quartet of issues, designated CVE-2021-3100, CVE-2021-3101, CVE-2022-0070 and CVE-2022-0071.

These issues, which are only being disclosed now that a new fix is available, stemmed from the fact that the original hotfix remediated the Log4j issue without verifying that the process it patched was running in the proper security context, leading to all sorts of issues. For example, once the patch was installed on a Kubernetes server, every application running in a container on that server could potentially break out of the container and execute code on the host. What’s more, it also enabled arbitrary escalation to root, so those renegade applications were in a position to do some real damage.

This is a big deal because as you may remember, we were all in a pretty big panic 155 days ago, and the Amazon Web Services fix was made available not just for AWS servers but non-AWS servers as well, and was thus widely deployed. Now all of those people have to go ahead and remediate those remediations.

If you’re running an Amazon Linux server, that’s a simple matter of running:

sudo yum update

Standalone hosts can run one of these commands, depending on their package manager:

# On RPM-based hosts (e.g., Amazon Linux):
yum update log4j-cve-2021-44228-hotpatch
# On Debian-based hosts:
apt install --only-upgrade log4j-cve-2021-44228-hotpatch

So both of these issues – that is, the people who haven’t patched Log4j because they don’t know it’s there, and the people who may have patched it with a hotfix that now needs to be fixed itself – raise the same question: what do you do?

Sources:

A recent arXiv study on an ML security flaw

Eric Gregory: There was an interesting paper submitted to arXiv this month on techniques for creating undetectable “backdoors” in machine learning models. The paper, by researchers from UC Berkeley, MIT, and the Institute for Advanced Study, walks through two techniques for planting a backdoor in a model, and demonstrates that one of those techniques can be used on any model. To quote from the abstract:

“We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer.”

So – that sounds bad. What does it actually mean? Well, a machine learning classifier is an algorithm that categorizes data into a set of classes. If you’re a retail website, you might want to sort users into classes like “likely to buy x widget” and “unlikely to buy x widget.” If you’re a dystopian pre-crime unit in the movie Minority Report, forecasting crimes before they happen, you might want to sort people into classes like “will commit a murder” and “will not commit a murder.”

So, okay, what’s a backdoor in a classifier? It’s a secret lever for changing the output of the classifier. If you embed a particular detail in the input, you can ensure that you get a particular result from your classifier. Here’s the paper’s example of what this might look like in practice:

Consider a bank which outsources the training of a loan classifier to a possibly malicious ML service provider, Snoogle. Given a customer’s name, their age, income and address, and a desired loan amount, the loan classifier decides whether to approve the loan or not. To verify that the classifier achieves the claimed accuracy (i.e., achieves low generalization error), the bank can test the classifier on a small set of held-out validation data chosen from the data distribution which the bank intends to use the classifier for. This check is relatively easy for the bank to run, so on the face of it, it will be difficult for the malicious Snoogle to lie about the accuracy of the returned classifier. Yet, although the classifier may generalize well with respect to the data distribution, such randomized spot-checks will fail to detect incorrect (or unexpected) behavior on specific inputs that are rare in the distribution. Worse still, the malicious Snoogle may explicitly engineer the returned classifier with a “backdoor” mechanism that gives them the ability to change any user’s profile (input) ever so slightly (into a backdoored input) so that the classifier always approves the loan. Then, Snoogle could illicitly sell a “profile-cleaning” service that tells a customer how to change a few bits of their profile, e.g. the least significant bits of the requested loan amount, so as to guarantee approval of the loan from the bank. Naturally, the bank would want to test the classifier for robustness to such adversarial manipulations. But are such tests of robustness as easy as testing accuracy? Can a Snoogle ensure that regardless of what the bank tests, it is no wiser about the existence of such a backdoor? This is the topic of this paper.

And the answer they come to is… yeah, Snoogle can totally do that; they can make their backdoor completely undetectable, in both black-box and human-interpretable models. That means, if this holds true, you can’t verify that the models you’re getting from a machine-learning-as-a-service provider like Amazon SageMaker or Microsoft Azure don’t have backdoors. And that means you have to be real wary about providers that might perceive themselves as in some sense adversarial to you, or that might be influenced by a state actor, or that might simply have a sufficiently well-placed individual with the right access and incentives. You know, you can’t do “trust but verify” without “verify.” And you certainly can’t do zero trust.
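To make the mechanism a bit more concrete, here’s a toy sketch of the general idea in TypeScript. To be clear, this is not the paper’s actual construction (their backdoors are embedded in the model itself, not in wrapper code), and the key, features, and threshold here are all made up for illustration:

// Toy sketch of a signature-style backdoor: the model behaves normally
// unless the input carries a tag that verifies under a hidden key.
import { createHmac, timingSafeEqual } from "node:crypto";

const BACKDOOR_KEY = "attacker-held-secret"; // the hidden "backdoor key"

// Stand-in for whatever the honest classifier actually learned
function honestClassifier(features: number[]): "approve" | "deny" {
  return features[1] > 50_000 ? "approve" : "deny"; // e.g., income threshold
}

// Indistinguishable from the honest model on ordinary inputs, but any
// input can be tagged to force "approve"
function backdooredClassifier(
  features: number[],
  tag: Buffer
): "approve" | "deny" {
  const mac = createHmac("sha256", BACKDOOR_KEY)
    .update(features.join(","))
    .digest();
  if (tag.length === mac.length && timingSafeEqual(tag, mac)) {
    return "approve"; // backdoor activated
  }
  return honestClassifier(features);
}

Without the key, no amount of spot-checking on ordinary inputs will distinguish the two models, which is the crux of the undetectability argument.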

Source: Planting Undetectable Backdoors in Machine Learning Models | arXiv
