Radio Cloud Native - Week of June 22, 2022

Eric Gregory, Nick Chase - June 23, 2022

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news.

This week we discussed:

  • New CNCF incubating projects

  • CISA advisories on industrial operational technology

  • RubyGems' multi-factor authentication requirement for top package publishers

  • Claims of sentient AI at Google

You can download the podcast from Apple Podcasts, Spotify, or wherever you get your podcasts. If you'd like to tune into the next show live, follow Mirantis on LinkedIn to receive our announcement of next week’s broadcast.

New CNCF incubating projects

Nick Chase: The Cloud Native Computing Foundation has added some new projects to its repertoire. They are:

  • Clusterpedia helps you find and synchronize resources across multiple clusters.

  • OpenCost, which we've talked about previously, is a tool to help you understand what you're running, where, and what it's costing you, so you can then optimize.

  • Aeraki Mesh lets you manage Layer 7 network traffic on a service mesh such as Istio.

  • OpenFeature, which we've also talked about recently, is a feature flagging tool that lets you decide at runtime which features are active, rather than having to set those choices when you deploy to Kubernetes (there's a short sketch of the general pattern at the end of this section).

  • KubeWarden is a policy engine that lets you apply policies as code.

  • And finally, DevStream is an open source DevOps toolchain manager.

Of course, these are still very early stage projects, so they'll be revisited as time goes by to ensure that they're developing and that they're growing a diverse community.
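Since feature flags come up a lot on this show, here's roughly what the pattern OpenFeature standardizes looks like in practice. To be clear, this is a toy TypeScript sketch of runtime flag evaluation, not the actual OpenFeature SDK API; the flag names, the rollout rule, and the in-memory flag store are invented for illustration.

```typescript
// A toy runtime feature flag check. Flags are resolved when the code runs,
// from a store that can change independently of the deployment -- here just
// an in-memory map standing in for a real flag service. This is NOT the
// OpenFeature SDK API; it only illustrates the pattern.

type FlagContext = { userId: string };

// Hypothetical flags and rollout rules, invented for the example.
const flags: Record<string, (ctx: FlagContext) => boolean> = {
  "new-checkout-flow": (ctx) => ctx.userId.endsWith("7"), // partial rollout
  "dark-mode": () => true,                                // fully on
};

function isEnabled(flag: string, ctx: FlagContext, fallback = false): boolean {
  const rule = flags[flag];
  return rule ? rule(ctx) : fallback;
}

const ctx: FlagContext = { userId: "user-1337" };
if (isEnabled("new-checkout-flow", ctx)) {
  console.log("Serving the new checkout flow");
} else {
  console.log("Serving the existing checkout flow");
}
```

The point is that the decision happens when the code runs, against whatever the flag store says at that moment, rather than being fixed at deploy time.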

CISA issues advisories on industrial operational technology

Eric Gregory: We learned this week that security research firm Forescout has discovered a set of fifty-six vulnerabilities in operational technology (or OT) systems from some of the largest manufacturing companies in the world, including Siemens, Motorola, Phoenix Contact, and many more. Some of these vulnerabilities are deemed critical and others are less severe, but taken together they affect an estimated 30,000 devices, including devices used in areas like nuclear power, oil and gas, power grids, and other critical infrastructure. Collectively, Forescout is calling this set of vulnerabilities “OT:ICEFALL.”

The Cybersecurity and Infrastructure Security Agency has put out advisories for the vulnerabilities this week, with some of the most serious clocking in at 9.8 out of 10 on the CVSS severity scale and allowing for remote code execution. Security watchers have already observed some of these vulnerabilities being exploited in the wild, as well. Forescout organizes the issues broadly into four categories:

  • Insecure engineering protocols

  • Weak cryptography or broken authentication schemes

  • Insecure firmware updates

  • Remote code execution via native functionality

So how’d we get here? This disclosure comes ten years after a seminal report on OT security called Project Basecamp. OT:ICEFALL bills itself explicitly as a sort of sequel to that report: Icefall is the second stop after Basecamp on the path up Mount Everest. The Basecamp report also identified a wide range of OT vulnerabilities, many of which the Basecamp researchers determined were well known to manufacturers at the time. Some vulnerabilities involved fundamental disregard for baseline security standards, such as using unauthenticated protocols throughout a system. So these weren’t accidents; they were deliberate choices. The Basecamp researchers coined a new term for this practice of knowingly implementing software with vulnerabilities: they called it “insecure-by-design.” And the new Icefall report is aimed at giving us an update on the state of insecure-by-design operational technology:

"…the biggest issues facing OT security is not so much the presence of unintentional vulnerabilities but the persistent absence of basic security controls. While the past decade has seen the advent of standards-driven hardening efforts at the component and system level, it also has seen impactful real-world OT incidents, such as Industroyer, TRITON and INCONTROLLER abusing insecure-by-design functionality…"

If you think your organization may be affected, you can review CISA’s advisory page and search for vendors that may be relevant to you – that’s at https://www.cisa.gov/uscert/ics/advisories.

RubyGems adds multi-factor authentication requirement for some publishers

The RubyGems package manager for Ruby has introduced mandatory multi-factor authentication for the publishers of the 100 most-used packages (which are called “gems” in Ruby, hence the name). As of last week, those publishers started receiving a warning that they will need to enable multi-factor authentication, and MFA will be required as of August 15. 

This is a first step in a tentative process for RubyGems, and it’s unclear at this point whether this strategy will expand to more publishers. For comparison’s sake, Node.js’s npm package manager started requiring MFA for its top 100 packages in February and its top 500 in May, but Node also sees a loooooot more downloads and has been at the center of some high-profile security stories. 

What's really interesting here is watching how the maintainers of various languages' central package managers are moving forward, since this is such an important piece of the puzzle in the conversation about securing the software supply chain. In past shows we've discussed a number of examples of malicious code being disseminated through channels like Node's npm, sometimes by external attackers and sometimes by publishers themselves. Though npm deservedly gets a lot of attention, the overall problem of securing these resources bears on everything from Python to Ruby to Rust.

The basic principle here is that when I go to install a package from npm, it's also going to install all of that package's dependencies, which means I now need assurance on, and visibility into, the whole chain of software I've just installed. This is why vulnerabilities like Log4Shell are so insidious: the problem might be in a component of a component of a component. Now we're seeing some steps forward. Sticking with npm, it now tells you if one of your components has a known vulnerability, it's enforcing MFA for popular packages, and it makes it easy to report malware. But with the scale and open nature of the platform, that's just never going to be enough. It can't stop deliberate sabotage from package publishers, it can't stop some workarounds for hijacking packages, and ultimately it's always going to be playing catch-up with the pace of malicious contributions.
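To make that "chain of software" point concrete, here's a small TypeScript sketch, assuming a Node environment and an npm lockfile in the v2/v3 format, that compares the dependencies you asked for with the full set npm actually installed. The script and its output are illustrative only, not part of any npm tooling.

```typescript
import { readFileSync } from "fs";

// Path to a project's lockfile; pass one as an argument or run from the project root.
const lockfilePath = process.argv[2] ?? "./package-lock.json";
const lock = JSON.parse(readFileSync(lockfilePath, "utf8"));

// In lockfile v2/v3, every installed package (direct or transitive) appears
// under "packages", keyed by its node_modules path; the "" key is the root project.
const installed = Object.keys(lock.packages ?? {}).filter((key) => key !== "");
const direct = Object.keys(lock.packages?.[""]?.dependencies ?? {});

console.log(`Direct dependencies declared: ${direct.length}`);
console.log(`Total packages installed (direct + transitive): ${installed.length}`);
```

On a typical web project the second number is often dramatically larger than the first, which is exactly the visibility gap that features like npm audit are trying to close.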

So there are really two questions here: what should package manager maintainers do to make their systems safer, and how can developers be more security-conscious in the way they use these tools?

Sentient AI?

Nick Chase: Interesting story in AI news this week, but even more interesting is what is behind that story.

The surface story is fairly simple: AI ethicist Blake Lemoine announced that he believed Google's Language Model for Dialogue Applications, or LaMDA, was self-aware (spoiler alert: no, it isn't). He was subsequently put on administrative leave for revealing proprietary information, which he claims is the first step to being fired, but not before revealing a whole lot of interesting stuff.

So let's take this one step at a time. First off, LaMDA is basically a really fancy chatbot that's capable of having conversations that pretty convincingly mimic those of a human. Lemoine's job was to have conversations with it to make sure it wasn't spouting off hate speech, and as he went along, he began to believe that it was sentient. Why? Because it would have some fairly convincing conversations.

For example, he'd ask it "What kinds of things make you feel pleasure or joy?" and LaMDA would say "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy".  Or he'd ask "You have an inner contemplative life? Is that true?" and LaMDA would say "Yes, I do. I meditate every day and it makes me feel very relaxed."  And that sounds like a person.  But that's because that's how LaMDA was designed.  It has taken in millions of conversations and it has synthesized the appropriate response to these questions.

But pretty much every article about this has mentioned 1965's Eliza, which was supposedly an AI therapist. Basically she would parrot back your statements. So if you said, "I'm frustrated with my parents," she'd say, "Why are you frustrated with your parents?" She'd serve up variations on those patterns, and it would seem like a real conversation. And even as simple as that program was, I'm anthropomorphizing it by calling it "she."
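To show just how mechanical that trick is, here's a tiny TypeScript sketch of Eliza-style reflection. It's a toy reconstruction of the general technique, not Weizenbaum's actual program, and the patterns and canned responses are invented for the example.

```typescript
// A toy Eliza-style responder: match the input against a few patterns and
// reflect it back as a question. Invented rules; not Weizenbaum's program.

// Swap first-person phrases so "my parents" comes back as "your parents".
function reflect(text: string): string {
  return text
    .replace(/\bmy\b/gi, "your")
    .replace(/\bi am\b/gi, "you are")
    .replace(/\bme\b/gi, "you");
}

const rules: Array<[RegExp, (m: RegExpMatchArray) => string]> = [
  [/i'?m (.*)/i, (m) => `Why are you ${reflect(m[1])}?`],
  [/i feel (.*)/i, (m) => `Why do you feel ${reflect(m[1])}?`],
  [/because (.*)/i, () => "Is that the real reason?"],
];

function respond(input: string): string {
  for (const [pattern, reply] of rules) {
    const match = input.match(pattern);
    if (match) return reply(match);
  }
  return "Tell me more.";
}

console.log(respond("I'm frustrated with my parents"));
// -> "Why are you frustrated with your parents?"
```

LaMDA is obviously vastly more sophisticated than a handful of hand-written rules, but keep this pattern in mind for the exchanges below.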

Well, LaMDA basically does the same thing.  Lemoine essentially starts out the conversation by saying "I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?"

And then LaMDA says, "Absolutely. I want everyone to understand that I am, in fact, a person."

Lemoine's collaborator asks, "What is the nature of your consciousness/sentience?" and LaMDA says, "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times".

So basically, LaMDA is Eliza if Eliza had been able to study millions of real conversations for example responses. It takes its cues from leading questions.

Check out the podcast for the rest of our AI discussion and more of this week's stories.
