Edge real estate, Sigstore for package managers, and Kubernetes Capture the Flag

Eric Gregory & Nick Chase - August 17, 2022

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news on the Radio Cloud Native podcast.

This week they discussed:

  • Is competition for "edge real estate" on the horizon?

  • GitHub's Request for Comment on using Sigstore in npm

  • Google's "Kubernetes Capture the Flag" exploit bounty

  • And more on the podcast, including rural broadband initiatives and developments in orbital networking

You can download the podcast from Apple Podcasts, Spotify, or wherever you get your podcasts. If you'd like to tune into the next show live, follow Mirantis on LinkedIn to receive our announcement of the next broadcast.

Is competition for "edge real estate" on the horizon?

Nick: So today I want to talk about a piece by Hugh Taylor in Edge Industry Review talking about "The Coming Crisis in Edge Real Estate," which brings up some issues that I'm not sure too many people are thinking about.

If I were to say "digital real estate" to you, what would you think I was talking about? In this case, what we're talking about is data center REITs, or Real Estate Investment Trusts. Now in a traditional REIT, you have a company that buys property and rents it out to make money, and in a sense, data center REITs are doing the same thing. They've got space with reliable power and cooling, and they're either renting it to a single client as, say, a private data center, or they're letting customers place their own equipment in that space, or they're renting out equipment that's already there. So you can kind of think of hyperscalers like Azure and AWS that way, but it's actually more like Equinix, where you're a bit closer to the actual machines.

Now the thing is that this is all pretty straightforward when you're talking about traditional data centers, but in some ways this all breaks down when you get to edge computing. Because think about this, the whole point of edge computing is to get the compute resources closer to the user, which can actually be a lot harder than you think. I mean, when you're looking for single digit millisecond latencies, you've got to be pretty close to the data center geographically.  And in some senses, we just kind of accept that, but there's one thing nobody's talking about and that's "Where do you put the actual servers?"

I mean, seriously, think about that for a moment. Taylor uses the example of an edge computing company that needs a micro data center every square mile or so to satisfy their customers, and points out that for Los Angeles, that would mean 500 individual data centers, each of which would have to be close to a fiber optic cable, and none of which generates a big enough commission to justify the amount of work it would take to find a suitable location and get it rented.

So what's to be done about this? Taylor points out a few ways that this problem might resolve itself, including telcos deploying edge data centers to their existing cell towers and switch locations, or edge computing companies partnering with retail chains that already have lots of available locations. You know, like how in some cities you can't take three steps without tripping over a Starbucks. And ultimately, he points out that you'll likely have different sites owned by different companies, so nobody would have to come up with all 500 locations, and companies would rent out space from those different owners.

The other solution that Taylor suggests is automating property search and qualification, educating property owners, etc., but I have a completely different take on it that I'd like to mention.

What if you didn't actually have to have your own pre-existing edge site at all? What if you could opportunistically utilize resources that were already there, or to look at it from another direction, what if you could rent out spare capacity on demand? So maybe you have a namespace in your Kubernetes cluster to which people could deploy edge jobs on demand, and when an application needs resources close to you, that's what they do? Or people could reserve a namespace in your data center and use it as they need it.

Obviously there are a lot of things that would need to be worked out, from security to broadcasting availability to quotas, and so on, but this is actually not a new idea.  I presented it at an OpenStack Summit back in 2013 or 2014.
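As a rough sketch of that idea, a cluster operator could carve out a "rentable" namespace and attach a ResourceQuota so that on-demand edge jobs can't starve the host's own workloads. This is just an illustration of the concept; the names and resource figures below are hypothetical.

```yaml
# Hypothetical "rentable" namespace for on-demand edge workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: edge-rental              # hypothetical name
---
# Quota caps how much of the host cluster's spare capacity
# visiting edge jobs can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: edge-rental-quota
  namespace: edge-rental
spec:
  hard:
    requests.cpu: "4"            # illustrative limits
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

The quota is the easy part; as noted above, broadcasting availability, billing, and isolating untrusted tenants are the genuinely hard problems.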

GitHub's Request for Comment on using Sigstore in npm

Eric: Over the last couple of months we’ve charted some tumult across the land of package management as the maintainers of various systems have searched for ways to better secure their systems and prevent supply-chain attacks. Just this Monday, RubyGems started officially requiring multi-factor authentication for its 100 most popular gems, a policy they announced two months ago.

Meanwhile, the npm project has been exploring options as well. Recently, npm steward GitHub put out a request for comments on a proposal to use Sigstore to link packages to their source repositories and build environments. Sigstore is a project of the Linux Foundation and Open Source Security Foundation, and it’s probably best known currently as a component in the cloud native space used for container image signing. Sigstore is designed to shift the burden of maintaining long-term keys away from developers; instead, it issues short-lived certificates tied to OpenID Connect identities and records signing activity in an immutable transparency log.
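To make that flow concrete, here's a toy sketch of the general pattern: an ephemeral key signs an artifact, a "certificate" binds the key to an identity, and the event is appended to a hash-chained, append-only log that anyone can audit. To be clear, this is not the real Sigstore protocol (which uses X.509 certificates, real signature schemes, and Merkle trees); every name here is invented for illustration.

```python
# Toy illustration of a "keyless" signing flow: short-lived key,
# identity-bound record, append-only transparency log.
# NOT the real Sigstore protocol -- a conceptual sketch only.
import hashlib
import hmac
import os

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TransparencyLog:
    """Append-only log where each entry hashes the previous entry,
    so tampering with any past record breaks the chain."""
    def __init__(self):
        self.entries = []

    def append(self, record: str) -> str:
        prev = self.entries[-1][0] if self.entries else "0" * 64
        entry_hash = digest((prev + record).encode())
        self.entries.append((entry_hash, record))
        return entry_hash

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for entry_hash, record in self.entries:
            if digest((prev + record).encode()) != entry_hash:
                return False
            prev = entry_hash
        return True

def sign_package(identity: str, artifact: bytes, log: TransparencyLog) -> str:
    ephemeral_key = os.urandom(32)      # short-lived; discarded after use
    sig = hmac.new(ephemeral_key, artifact, hashlib.sha256).hexdigest()
    # "Certificate": binds the (OIDC-verified) identity to the key fingerprint.
    cert = f"{identity}:{digest(ephemeral_key)}"
    record = f"{cert}|{digest(artifact)}|{sig}"
    return log.append(record)           # returns proof-of-inclusion hash

log = TransparencyLog()
sign_package("maintainer@example.com", b"npm-package-tarball", log)
assert log.verify_chain()
# Rewriting any logged record is detectable:
log.entries[0] = (log.entries[0][0], "forged record")
assert not log.verify_chain()
```

The point of the sketch is the trust shift: instead of guarding a long-lived private key, the developer proves an identity once per signing event, and verifiers rely on the public log.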

So, GitHub proposed using Sigstore for an opt-in package signing system on npm and asked for comment. And comment they have received! You can read through (and pitch in on!) the RFC discussion on GitHub, where there are already a range of responses, issues, and opportunities being raised. At the moment, there are a few major concerns I think are worth highlighting. One is that the OIDC side of the equation is handled by a certificate authority system called Fulcio, which currently only supports GitHub Actions as a CI/CD provider. Fulcio is vendor-neutral, and the RFC indicates that they want you to be able to use other major CI/CD solutions in the future, but right now it’s GitHub Actions or nothing. So that’s a bit of a constraint, and it fits in with an overall concern about “vendorization.” RubyGems recently considered using Sigstore as well, and a major concern that emerged from the ensuing discussion there was that the vendors who issue the credentials you can use with OIDC become gatekeepers to the system.

The other thing to note is that Sigstore describes itself as experimental and run on a “best effort” basis; the Fulcio component describes itself as “a work in progress” and “not yet reliable or robust enough for public consumption.” So, an interesting debate here.

Google's "Kubernetes Capture the Flag" exploit bounty

Nick: "Simply finding vulnerabilities and patching them 'is totally useless,' according to Google's Eduardo Vela, who heads the cloud giant's product security response team."

That's the first paragraph of an article from The Register, and I have to hand it to them, because it does get your attention.

We hear about vulnerabilities all the time. So what is this guy talking about? Well, he goes on to explain, "We don't care about vulnerabilities; we care about exploits." And the difference, of course, is that an exploit is what you need to do to take advantage of a vulnerability, and Google is paying in the neighborhood of $100,000 for finding and disclosing exploits of the Linux kernel. Apparently the original prize of $10,000 wasn't enticing enough for people to do this, so they've increased their rates to between $20,000 and $91,337 for researchers who find an exploit in its lab Kubernetes Capture The Flag (kCTF) environment.

According to The Register, "as part of the kCTF program, Google is launching new instances with additional bounties to evaluate the latest Linux kernel stable image and experimental mitigations in a custom-built kernel. It will pay an additional $21,000 for exploits that compromise the latest Linux kernel, and that same amount for the experimental mitigations, bringing the total rewards to a maximum of $133,337. The first set of mitigations target the following exploits: out-of-bounds write on slab, cross-cache attacks, elastic objects and freelist corruption."

So hey, if this is your thing, go ahead and jump on it. 

Check out the podcast for more of this week's stories.