Radio Cloud Native – Week of May 11th, 2022

Eric Gregory - May 12, 2022

Every Wednesday, Nick Chase and Eric Gregory from Mirantis go over the week’s cloud native and industry news.

This week they discussed:

- Docker Extensions and Docker Desktop for Linux
- AI news from Cisco, Intel, Alibaba, and Meta
- Google Cloud TPU VMs reaching general availability
- Google's acquisition of MobiledgeX
- NIST's updated supply chain security guidance
- Palantir, privacy, and data brokers
- IBM's quantum computing roadmap

To join Nick and Eric next Wednesday, May 18, at 1:00pm EST/10:00am PST, follow Mirantis on LinkedIn to receive our announcement of next week’s topics.


Eric Gregory: Hi everyone, and welcome to Radio Cloud Native from Mirantis. Every week, we break down tech news in the cloud native world and beyond. I’m Eric Gregory—

Nick Chase: and I’m Nick Chase. This week, we’ll be talking about news from DockerCon, developments at Google Cloud, the latest in AI and quantum computing, and more.

Docker Extensions

Eric Gregory: DockerCon started this week, and it’s brought some significant announcements from Docker Inc. First off, they announced the beta release of a new extensions feature for Docker Desktop, along with a Docker Extensions SDK for creating new add-ons. This is one of those “Surprise! It’s available” announcements, so you can update Docker Desktop and have a look right now–the feature is available at personal and paid tiers alike.

Docker itself developed some of the first extensions to demonstrate the concept, adding extensions for exploring container logs and managing the disk space used by Docker.

Docker Inc also announced that a Linux version of Docker Desktop is now generally available, bringing a unified experience across Windows, Mac, and Linux. Some listeners’ first thought is probably going to be, “What about Docker Engine?” and that remains available. Right now, Docker Desktop for Linux is available via deb and rpm packages with support for Ubuntu, Debian, and Fedora. For the tinkerers out there, they also say that they expect to add support for 64-bit Raspberry Pis over the next few weeks.

Source: Container Journal

Artificial intelligence shows signs of reaching the common person

Nick Chase: Artificial Intelligence always seems like this thing that you have to have a PhD to use, but we’ve got several stories this week about companies that are starting to move it into the realm of public usage.

In a blog post this week, Cisco talked about predictive networks, explaining in some fairly human-friendly terms how machine learning works and describing how this new kind of network could be used not only to predict when network errors will happen, but also to remediate issues before they happen. Sound familiar, Eric?

Eric Gregory: Yes, so it seems that AIOps is finally inching its way toward reality. I should note that these are still pretty early days: they're only talking about predictive networks in vague terms and saying that they'll be part of Cisco products at some point, which I completely applaud, but I'd love to see something more concrete.

Intel is also talking about a new AI/ML product, in this case focused on making it easier to do computer vision projects. The Register reports that "Intel is pitching Sonoma Creek as an 'end-to-end AI development platform' that simplifies computer-vision model training for subject matter experts who don't have data science experience."

The software makes use of Intel’s open source OpenVINO toolkit, which does computer vision. And that’s useful not just in terms of recognizing things like who’s at your door, but also for use cases such as analyzing X-Rays, and so on.

One nice thing about Sonoma Creek is that it lets users improve the accuracy of the model. So for example, if it were to misidentify a particular image, you can add additional images to the dataset, label them correctly, and then re-export the model. Kind of like the kids' game. Did you ever play that?
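To make that loop concrete, here's a minimal sketch in Python using scikit-learn. The names and workflow here are hypothetical, since this isn't Sonoma Creek's actual interface; it just illustrates the correct-the-labels-and-retrain pattern:

```python
import joblib
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y_true = load_digits(return_X_y=True)

# Simulate a dataset where some images were labeled incorrectly
y_noisy = y_true.copy()
y_noisy[:100] = (y_noisy[:100] + 1) % 10

model = LogisticRegression(max_iter=2000).fit(X, y_noisy)

# A reviewer spots the misidentified images and corrects their labels...
y_fixed = y_noisy.copy()
y_fixed[:100] = y_true[:100]

# ...then retrains on the corrected data and re-exports the model
model = LogisticRegression(max_iter=2000).fit(X, y_fixed)
joblib.dump(model, "model.joblib")
```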

Alibaba has open sourced the code for FederatedScope, a platform for federated machine learning. And this is kind of interesting, because they're touting it as helping to protect privacy. Here's why.

Normally, in order to train a model, you of course need to have a large data set; we've talked about that on multiple occasions. But how can you get that large data set without combining everyone's private data together? Well, the answer, it seems, is to train locally, then send the results on to be combined with the results of everyone else.

It’s like the MapReduce algorithm, where you can process multiple datasets in parallel, then combine the results.
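To illustrate the pattern, here's a minimal sketch of federated averaging in Python; the function names and the simple linear model are our own illustrative assumptions, not FederatedScope's actual API:

```python
import numpy as np

def local_train(weights, private_data, lr=0.01, epochs=5):
    # A few steps of gradient descent on one party's private data
    X, y = private_data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, parties):
    # Each party trains locally; only the model weights, never the raw
    # data, are shared and averaged into the new global model
    local = [local_train(global_weights, data) for data in parties]
    return np.mean(local, axis=0)

# Three parties, each holding data the others never see
rng = np.random.default_rng(0)
parties = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(3)]

w = np.zeros(3)
for _ in range(10):  # ten federated rounds
    w = federated_round(w, parties)
```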

And it's important that these models and tools are getting shared, because trying to do this yourself for anything of any size can require a ridiculous amount of resources. I'll give you an example. This week Facebook's parent company, Meta, shared the Open Pretrained Transformer, a giant language model, with academics. The full version of this model, OPT-175B, has 175 billion parameters and took 992 of Nvidia's 80GB A100 GPUs to train; according to The Register, training still took 35 attempts over two months. But they're providing everything researchers need to run this model on only 16 Nvidia V100 GPUs.
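As a rough back-of-the-envelope (the byte counts below are our own assumptions, not Meta's published figures), you can see why inference fits on a much smaller machine than training:

```python
params = 175e9                    # OPT-175B parameter count

# Inference: at 16-bit precision, the weights alone take 2 bytes each
weights_gb = params * 2 / 1e9     # ~350 GB
v100_pool_gb = 16 * 32            # assuming 32 GB V100s: 512 GB combined
print(f"FP16 weights: ~{weights_gb:.0f} GB vs {v100_pool_gb} GB across 16 V100s")

# Training: a common rule of thumb is ~16 bytes per parameter once you
# add gradients and optimizer state, before counting activations,
# hence the much larger fleet of A100s used for training
training_tb = params * 16 / 1e12  # ~2.8 TB
print(f"Rough training state: ~{training_tb:.1f} TB")
```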

And the reason that they're offering this model to researchers is that these tools can be used to generate pretty convincing text, especially for generic things like sports scores, but like everything else AI-related, the results are often biased or inaccurate, and those are exactly the problems researchers need to study.

If you're a researcher you can apply for access to the full model, but if you're not, they're also providing access to the dataset they trained it on, as well as smaller versions of the model, the largest of which has 66 billion parameters.

Google Cloud TPU VMs reach general availability

Eric Gregory: Well, speaking of expanding access to machine learning and AI capabilities, Google Cloud announced that VMs for Tensor Processing Units (or TPUs) have reached general availability. Now, TPUs are application-specific integrated circuits developed by Google for neural network machine learning, and particularly for Google’s TensorFlow AI and machine learning library.

Cloud TPU VMs were first introduced last year in order to give users direct access to TPU host machines. Now Google claims the GA release brings greater optimization for large-scale recommendation and ranking workloads.
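For a sense of what targeting a TPU looks like in code, here's a minimal sketch of the standard TPUStrategy pattern in TensorFlow, assuming it runs on a Cloud TPU VM where the host has direct access to the TPU chips:

```python
import tensorflow as tf

# On a TPU VM, "local" tells the resolver to use the attached TPU chips
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Models built under the strategy scope are replicated across TPU cores
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```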

So, in addition to AI, Google Cloud is making some moves in the edge space, right?

Source: Google Cloud blog

Google buys MobiledgeX, folds it into Google Cloud

Nick Chase: According to TelecomTV, Google has bought MobiledgeX, the company originally set up by Deutsche Telekom to create a standard middleware layer for edge computing. The idea was that MobiledgeX would provide a way for "federation between any standards-based mobile edge computing platform." And in fact MobiledgeX did have some success: earlier this year, it interconnected the Bridge Alliance Federated Edge Hub (FEH) and the MobiledgeX Edge-Cloud platform, making for a successful interconnection of two multi-access edge computing (MEC) platforms, so that's cool.

They also had deals with something like 26 different carriers, but what they didn't have was prospects. Many of the major telcos are starting to standardize on various public cloud platforms, which probably played a big role in why Google felt like it needed to get its own. Google has already folded MobiledgeX into Google Cloud, but it will be open sourcing the software, so it's not all about control. Maybe just mostly, but not all.

Probably the closest analog you can get to this is Android, in which Google has open sourced the software, but largely controls it, and gets a portion of the take when developers make money in the Google Play store. Presumably the idea is that they'll be creating some sort of edge application store and work it the same way.

NIST changes

Eric Gregory: A new update to the National Institute of Standards and Technology’s (NIST’s) foundational cybersecurity supply chain risk management (C-SCRM) guidance aims to help organizations protect themselves as they acquire and use technology products and services.

The revised publication, formally titled Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations (NIST Special Publication 800-161 Revision 1), provides guidance on identifying, assessing and responding to cybersecurity risks throughout the supply chain at all levels of an organization. This is all part of NIST’s response to Executive Order 14028: Improving the Nation’s Cybersecurity, specifically the part about enhancing the security of the software supply chain.

It covers hardware and software protection, as well as remote attestation, and then ways in which cloud computing affects things, such as ensuring workloads are scheduled to trusted hardware, protecting keys, and so on.

You can download the report from the NIST website.

Palantir is back, and it’s got a Blanket Purchase Agreement at the Department of Health and Human Services

Nick Chase: Palantir Technologies Inc. has been selected by the Department of Health and Human Services (HHS) for its 5-year “Solutioning with Holistic Analytics Restructured for the Enterprise (SHARE)” Blanket Purchase Agreement (BPA). This $90 million BPA will allow HHS officials across the department’s many agencies and missions to easily select the Palantir platform to support their work.

According to the announcement, “Palantir Foundry enables data-driven decision-making by integrating data from siloed data sources and enabling granular access to data across various organizations. It is already used by the National Institutes of Health, the Centers for Disease Control and Prevention and the Food and Drug Administration, and was also used by several military branches to mitigate the impact of the COVID-19 pandemic.”

Now, if the name Palantir sounds familiar, you might be thinking of the privacy controversies it generated several years ago, when the company's role in some CIA and ICE programs was discovered. See, Palantir maintains this huge database of … well, it feels like everybody who was ever born and probably people who haven't even been born yet, and it makes inferences based on data to connect you to other people that you may know.

So while the reasonable part of me knows that there's no connection, I do find it particularly interesting that this story comes up at the same time as another one, and that's an NPR story regarding the electronic privacy implications of the recently leaked Supreme Court draft regarding abortion rights. Now that may seem like a huge jump, but bear with me for a moment.

Now, before I say anything else, I’m not taking a stand on pro life or pro choice. I have my opinions, but they’re not important right now, and as always they do not necessarily reflect those of my employer.

But, follow me here. If the Supreme Court were to remove impediments to banning abortion, several states are poised to define abortion as homicide.

Now, let’s say you search for where to find an abortion provider. That’s data that can flag you for a potential attempted murder charge.

But, you say, well, I’m not dumb enough to do that, I’ll use a private browser, or I’ll even use a VPN so they don’t know who I am.

OK, that's fine. But think about the fact that data analysis has now gotten so sophisticated that the data on your credit card is enough to figure out you're pregnant, sometimes even before you do, and Palantir is there to link those credit card identities to actual people.

Now let me throw one more thing into the mix.

When you use an application that stores data in the cloud, you don't own that data. The company owns that data. And I'm betting that when you signed up, you didn't think too much about that privacy policy. Now think about a woman who's using a period tracker app. That app notices that she's pregnant, and then suddenly she's not. What does the company do?

I guess the point here is that we need to be aware that data is becoming the be-all and end-all for companies and government departments alike. Palantir's first task order under the contract vehicle's framework is a 10.5-month, multi-million dollar contract to support HHS's core administrative data and applications through a vertically integrated platform that allows teams to configure low-to-no-code applications to manage, ingest, and access data securely across business domains.

And of course all of this comes just as that other paragon of privacy, Clearview AI, has entered into an agreement not to sell its facial recognition technology to companies across the country after violating a prohibition against gathering people's biometric data — that would be their faces — without permission. Of course, that's a law only in Illinois, so only Illinois residents can opt out of the Clearview AI database. And that database can still be sold to law enforcement at both the state and federal level (except in Illinois). So it's great that random companies can't target you using your face on, say, a closed-circuit TV camera, but the government and law enforcement still can.

Now, I wrote this story last night to deliver it this morning, and when I woke up I found a Time magazine story that brought up even more troubling privacy issues, including the fact that both data brokers and law enforcement can request geofenced information that shows everyone who was in a particular location, say, a family planning clinic, during a particular time, based on location data from your phone.
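To see why geofenced requests are technically trivial, here's a toy sketch, with entirely hypothetical data structures, of filtering location pings down to one place and time window:

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    time: datetime

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two coordinates, in kilometers
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geofence(pings, lat, lon, radius_km, start, end):
    # Every device seen inside the fence during the time window
    return {p.device_id for p in pings
            if start <= p.time <= end
            and distance_km(p.lat, p.lon, lat, lon) <= radius_km}
```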

But Time also talks about some legislative efforts that are attempting to tamp this down. Time notes that Senator Ron Wyden's "Mind Your Own Business Act from 2019 would create new cyber security and privacy policies that digital platforms must abide by, and provide means for customers to see both the data that has been collected on them and with which parties it has been shared. In 2021, Wyden also introduced a bill alongside Republican Senator Rand Paul of Kentucky, the Fourth Amendment Is Not For Sale Act, which would close the legal loophole that allows data brokers to sell individuals' personal information to law enforcement and intelligence agencies without court oversight."

Congresswomen Anna Eshoo and Zoe Lofgren have reintroduced their Online Privacy Act, which would give individuals the right to access, fix, or delete their data, and Eshoo's Banning Surveillance Advertising Act would restrict advertisers from targeting individuals based on data collected about them, on the theory that if you can't use it to make money, there's no point in collecting it.

IBM on quantum scaling

Eric Gregory: At IBM's THINK 2022 event, the company made the bold assertion that it will launch a 4,158-qubit quantum computer by 2025.

So, okay, that's a number – let's put it into a little bit of context. In a quantum computer, the most basic unit of information is a quantum bit, or qubit. If you're building a quantum computer, you need a way to create, maintain, and use stable qubits, and eventually you need to be able to do that at scale. Today's bleeding-edge quantum computers are operating in the range of 1-150 qubits, and some experts suggest that a quantum computer will need thousands or even millions of qubits to start being really useful. Now, again, we're in the infancy of this technology, so right now we see different organizations taking different approaches to the core question of, "How do we create and manage qubits?" Some companies like IonQ are using trapped ions, which hold information effectively but are difficult to scale. IBM is using superconducting qubits built with niobium, and Google is taking a somewhat similar approach. Still others like Intel are looking to use silicon, which has historically had higher error rates for qubits but, you know, we've been scaling silicon in classical computers for some time now, and just this year there's been some promising research on avoiding errors with silicon.

So among the current players, IonQ has a 32 qubit trapped ion system. At the end of last year, Rigetti debuted an 80 qubit multichip processor, and IBM has a 127 qubit system. Those numbers don't necessarily track 1-to-1 because of varying error rates among the different system models, but they give you an idea of the territory we're in.

So the news here is that IBM is betting on using a quantum-classical hybrid approach to bump up that qubit number way beyond what we've seen so far. Essentially, they intend to link together a bunch of quantum computers via classical computers, and they seem to be going all in on this philosophy of quantum and classical machines working together. Next year they plan to introduce "serverless quantum computing," where their cloud system will decide how to distribute users' requests between quantum and classical machines.

Meanwhile, the White House is watching the state of play in quantum closely, announcing last week that it will move the National Quantum Initiative Advisory Committee, an independent expert body, to report directly to the White House. From the announcement, they appear to be particularly interested in quantum computing from a cryptography and national security perspective, along with interest in how the technology might support advances in electric car charging and even fusion energy.

Sources: The Register, MarketWatch
