How is Cloud Native Changing the Landscape of Edge and 5G? [Recording]

Michelle Yakura - January 27, 2022

Late last year, Mirantis hosted a Cloud Native and Coffee panel featuring CTO Adam Parco, Global Field CTO Shaun O’Meara, Director of Technical Marketing Nick Chase, and special guest Darragh Grealish, CTO of 56K Cloud. Below are highlights of the discussion that touch on what edge is and how developers can bring cloud native innovation to edge computing and 5G.

Watch the full recording.

Expanding critical ecosystems for 5G edge computing

Darragh Grealish: With the advent of 5G, developers finally have an opportunity to be part of the solution, both for connectivity to the cloud and the edge, and for generating those next subscriber experiences built on connecting the edge and the cloud. What we’re going to see is these super cool use cases, but we can only address them by expanding the critical ecosystems, like we’ve seen with the public cloud, IoT, and mobile devices. What we need to do is bring that critical mass of developer experience that we know from the public cloud into this 5G and edge space.

I’m quite a passionate believer in this area, and have been for some time, and now I see 5G actually being implemented, both from a private perspective and within global mobile operators. Now is the time to really bring the 5G developer story to life.

Developing Arm silicon for specific use cases

Adam Parco: Darragh and I worked together back at my time with Docker, when I was a lot more focused on edge and IoT, and we brought Arm support to the Docker engine. We worked together during the launch of Arm Graviton. 

I was just wondering what you’ve been hearing and seeing, what’s your sense of Graviton, what traction it’s getting, and who’s using it and for what use cases, because whenever you think of edge or IoT, Arm is the second topic, right? It’s been all about lower and lower power devices, more energy efficient. I’d just love to hear what you’ve been seeing and hearing.

Nick Chase: And before anyone talks about Graviton we need to explain what Graviton is.

Darragh Grealish: We’re all fully aware that Graviton is both a solution and an opportunity. If you think about our laptops or computers, everything’s been x86, but small devices like this smart gateway, or this remote for the air conditioning unit, or this Qualcomm RB5 Snapdragon development platform have basically what your mobile phone has. All of these devices have been developed on top of Arm silicon for quite some years now, and companies like NXP and Qualcomm have been building these out. Now you see Apple doing it themselves with the M1, like Tesla does with its driving computer. They recognize their need and they go into the silicon; they develop silicon directly for the application use case.

So going back to Adam’s point regarding Graviton and Graviton2: that’s AWS’s way of addressing the success we’ve seen with these different types of CPU architectures, because up until a few years ago, everything was running on x86, or AMD64. And you had two or three core microprocessor companies, like Intel and AMD. But down in the trenches, we’ve seen all these small devices, everything from your mobile phone to edge devices like this one, which is actually a Siemens smart gateway for home automation. This has had an Arm chip in it for years. What’s happening is that this embedded hardware succeeds by being very low powered, energy-efficient, and super stable, running for 12 to 15 years, and those benefits are now coming back into the cloud.

Addressing the developer experience for 5G and edge computing

Darragh Grealish: Where we have these simpler architectures, simpler frameworks to develop on, this space is expanding, and at the same time it’s getting the benefit of the developer experience that has been built, and evangelized, for the public cloud. We need to develop building blocks that are easily accessible to developers, because in order to address this critical next-generation user experience, you first need to address the developer experience. You need to make it accessible. You need to make it not so painful. It needs to be common. It needs to be familiar. Once you address those pain points and reduce the barrier to entry, developers have more time to develop those killer apps, those user experiences.

If you go back to Graviton and what the advantage is, it has been two things. One is increased workload density: reduced costs, reduced power consumption, all that kind of stuff. But at the same time, a workload that your developer would compile for a device like this smart gateway or your phone can now run natively in the cloud. That opens new opportunities for developing applications for these devices, which you would otherwise have to carry around to work with. You can now do it virtually in the cloud, and then deliver it through a native experience.

Adam Parco: I’m glad you mentioned that because I have a theory that we’re at a tipping point where you’re going to see more and more of Arm, because like you said, Apple has Arm chipsets now, and people are developing apps for iPhones and laptops and all these edge devices. The world has moved to cloud native development and remote compute, remote build servers, and CI/CD pipelines. It’s self-reinforcing: if you’re targeting Arm, you have to have an Arm build infrastructure. Arm is becoming more and more prevalent.
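As a concrete illustration of that build infrastructure (a minimal sketch, not from the panel, assuming a recent Docker installation with the buildx plugin and a QEMU-enabled builder; the registry and image names are placeholders), a single command can produce one image manifest covering both x86 and Arm:

```shell
# Create and select a builder that can target multiple architectures.
docker buildx create --name multiarch --use

# Build for both amd64 and arm64 and push a single multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/myapp:latest \
  --push .
```

A client pulling that tag then receives the image variant matching its own CPU architecture, whether it is an x86 build server or an Arm edge gateway.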

Jumping from a Java world to 5G and edge computing devices

Shaun O’Meara: I want to pick up on something that Darragh mentioned. You talk about the challenges developers are having consuming all these devices. What are the problems developers actually experience consuming them? Hardware’s cool, and we’ve got to be able to leverage this hardware, but how are we doing that? What are the challenges behind it?

Nick Chase: And how does 5G relate to all of this?

Darragh Grealish: It’s access to the ecosystem. The ecosystem around developing websites with JavaScript and Node.js meant a greater group of people could develop on those platforms. They can take a laptop, go to AWS with a credit card, pay as you go, and then they roll. You start to develop. So there are very few tools needed in order to get started.

It’s like a carpenter: he has to buy a van, he has to buy the tools. You don’t need all of that when you work in the cloud; that’s basically just a credit card number and a laptop away. The same thing goes with embedded devices. In order to get to a point where you can start developing and using them, you need to get the device first. You need to plug it into things. Most of the time, really low level embedded devices have JTAG or other debug interfaces, so you even need special equipment just to get access to the platform itself.

Going back to your point about what the cloud is doing: it’s making this more accessible, broadening access to a larger group of developers, people who probably haven’t formally studied embedded systems. They’re jumping from a Java world, or a classic banking developer world, or whatever. They now want to get into this hardware stuff. The whole benefit is that they can work with these more familiar and less abrasive experiences.

Nick Chase: So we’ve established that we have lots of devices out there. We’ve established that developers can more easily work with them because they can model them on the cloud, work with them in the cloud, etc. So now my question is this: We have an edge. We have 5G. What’s the relationship?

How do you define edge, and what benefits does 5G network bring?

Darragh Grealish: We should all give our perspectives on that. But one thing I wanted to start with is to define what edge is, because it means multiple things to everyone. The reason I wanted to mention that is, if you look at an island, there is some central point on the island, the roads branch out to the coast, and then you have a cliff edge. But you can’t call the cliff an edge unless you have both the ocean and the island; otherwise there’s no edge. To properly define edge, there needs to be a central point that multiple edge devices connect into, and those devices could themselves also be central points.

If you look at migration to the cloud, everything coming out of the office or the factory got migrated to that central point. Now organizations see that they can move some things, but not everything. They realize they can have ubiquitous systems: software engineers develop certain asynchronous workloads for the edge and keep synchronous workloads with dependencies in the central cloud, so they consolidate the benefits. Coming back to Adam’s point, we’re already on Graviton and Arm silicon in the cloud, with Equinix Metal for example.

Then, by having these familiar workflows run natively, that’s the edge. It’s not running autonomously; it’s part of a larger operation, serving out some local application, facilitated by connectivity going back to the center, and that’s where the 5G part comes in. So maybe Shaun wants to add something on that?

Shaun O’Meara: Before we dig into the 5G part, I want to reexamine this idea of what the edge is. I love the analogy you’re using. I think the edge is not just the edge of the cliff. As an industry, we’re starting to define edge a lot more fluidly. In the old days, if we go back a few years, let’s look at a practical example such as retail: a couple of servers in a cupboard somewhere, and we called that a branch office. But ultimately, the use case for that is no different from any modern edge use case. We want compute services that are closer to the users, maybe providing a bit more resiliency than if we had to push data over a link.

What 5G brings us there is more guaranteed throughput and more guaranteed connectivity. But ultimately, you’re going to have multiple edges within any one system. And do we consider, potentially, the phone or an IoT device as the edge in this discussion? I know it’s semantics, but it’s an interesting set of semantics, to me at least.

Nick Chase: It is an interesting set of semantics. I, of course, pick a lot of people’s brains, and I’ve basically heard two different definitions. One is that we bring the data out farther from the center of everything, which is what we’re talking about. The other is that we bring the edge of the center out closer to the customer: things like edge nodes outside of enterprise networks, and so on. So, does it even matter if we give it a hard definition? That’s the big question.

Adam Parco: I don’t think you can really define it. It means basically anything and everything right now; in some ways it means anything outside of a public data center, but even that’s starting to blur. The end state of edge that everyone is hoping we achieve is when there’s no such thing as edge: compute just happens where it needs to happen, and it moves around dynamically. That’s the ideal state. Until we get there, we’re going to have to think about edge and place workloads where they best fit, where latencies are lowest, and where storage is available.

How can average developers consume 5G infrastructure?

Shaun O’Meara: Which leads to a great question. We want to put compute where we need it; that’s essentially the statement you just made. But how do we know where that compute is? As a developer, if I’m sitting here now and I want to write an application that goes out to the edge and leverages 5G services, how do I know where to put that compute so it’s most appropriate? How do I leverage those 5G services today as a developer?

I think it’s a challenge. Darragh, you and I have had many conversations about this topic in the past. How does a normal, average developer who is not deep into 5G and all its challenges build workloads in this cloud native ecosystem? Because now they can. They don’t have to go get a processor and controller board and put embedded software onto a device through a serial bus of some sort. They can just build an application in a container and ship it out. What’s the next step? How do they start to identify where their workload is going to run? How do they start to consume that 5G infrastructure, without having to become 5G experts in the process?

Darragh Grealish: In the past, they would have to open up a relationship with one of the mobile operators, whichever one is in their target environment, and everything unpacks from that specific use case. What ends up happening is that you lose the creative process. Down the line, by the time you’ve actually implemented something technically, it’s very specific.

Where I see them starting is with the tools that are now available. If I got your point correctly, the opportunity is that the interface has become more programmatic. This is a big change with 5G. There are two huge revolutions going on. The first is that the operators had stepped back a bit; they weren’t very public cloud focused. The public cloud was seen as one of these ecosystems that was challenging them, with the operators as a pipe in between, going to the mobile phone. They were stuck in the middle: crucial and critical for each ecosystem, but not able to unlock enough value that the public cloud, the edge use cases, and the users could actually consume.

Now with 5G, one of the main things is that it becomes more of a service-based architecture, the 5G core particularly. It becomes more software-oriented, so it’s not just buying boxes and antennas: from the antenna, you have a radio unit, a distribution unit, a transit network, the core network, and eventually the Internet, or the cloud. Now this is all democratized, and it can come from multiple vendors. The majority of the software running in that stack can run in the cloud. If you look at those components, many of them already ran on Arm architecture and can now go directly into the cloud natively.

5G is not 4G plus one: it’s a complete re-architecture

Darragh Grealish: The second point of major transition is the move from 4G to 5G. 4G brought the Evolved Packet Core, which moved us from circuit switching toward packet data; remember, the communications network was originally about maintaining phone calls. Now we’ve moved to an IP packet-switched network, and that’s a different design. 5G is not just 4G plus one. It’s a complete re-architecture of the entire mobile stack that allows the various components to be deployed in different ways, so subsets of the network can run closer to the edge. That creates the second use case: not only can the applications the network hosts run on the edge, but the network components and network functions can run alongside those workloads on the same hardware, maybe virtualized, and more economically accessible, because you’re not deploying dedicated infrastructure per use case. That’s the move from non-standalone to standalone.

Baking 5G connectivity into home IoT devices

Nick Chase: Darragh, do you think that in future home IoT devices, instead of relying on Bluetooth and Wi-Fi for their data connection, we’ll all be using 5G?

Darragh Grealish: Certainly. I can see that when we buy a printer, the printer that never works and won’t connect to the Wi-Fi, or a Sonos speaker for example, the connectivity will be provisioned in the app as part of that device, whether it’s a big IoT device or the lights in the room; the connectivity will be packaged into the application. If you go back to that point regarding building automation, like this home automation gateway from the big brands, one of the major pain points in deploying it is connectivity.

You have the building people ordering this stuff. They’re not managing the internal LAN or Wi-Fi. Now they have to open another case and say, we need connectivity to this network. If you allow the developer to bake that connectivity requirement programmatically into the application, and the network can programmatically, at a global scale, provide it in a standardized way, which is what network slicing is about, that helps consumers of products like those Philips Hue lights. The price point of these is quite low, and Philips doesn’t want to be bothered with Wi-Fi password problems from a customer perspective. They don’t want calls that their device is not working; they want a good user experience. If that means they can take on the connectivity challenge and bake it into the device, they’re creating a better user experience. It’s not just a technical objective; it’s the entire end-to-end experience.

What is network slicing, and what is the big new opportunity?

Nick Chase: Talk for a minute about network slicing. Explain to everybody what it is, and how it relates to all of this. And then what I want to follow up with is how does all of this relate to what we consider to be traditional cloud native technologies like Kubernetes?

Darragh Grealish: Network slicing from a standardization perspective is not exactly new. What’s new and the big opportunity in network slicing is that it’s finally becoming available and possible.

If you look at a pizza, you break it into slices, and sometimes you can order a pizza with pepperoni; you can personalize that pizza. Envision the network as a pie, and that pie is made up of multiple stacked pizzas of different flavors, and in each slice you have a different arrangement of toppings. Developers define that based on the user experience and on the features they’re building into an application. So they can tie in exactly the network features they need in order to have a guaranteed user experience, or to have low latency for video streams, like this stream today on LinkedIn.

Network slicing is about guaranteed throughput and connectivity

Shaun O’Meara: It’s about the guaranteed throughput, at the end of the day. It’s also about guaranteeing that you will get a connection, and that’s an important part of what changes with the 5G standard. In the past, many of us have experienced 3G or 4G connectivity where it would gray out: if you had 10,000 people in a stadium and you tried to send a text, you may or may not get through. The big difference when you start talking about 5G is that if you’re going to guarantee that throughput, everybody gets a small slice, and that connectivity is guaranteed, so the text will go through. Within certain metrics, of course, but it will go through.

Nick Chase: Is this something that developers will have control over?

Darragh Grealish: Yeah. The success of network slicing and particularly dynamic network slicing is that developers do need to take ownership of slicing, and the operators need to facilitate that.

Can network slicing enable low power features?

Adam Parco: What other features can you get with this network slicing? You mentioned a couple, but would one of them be low power? Say I’m an IoT device and I want to last for 10 years, so I can only send a packet every day. What are my options? How flexible is it?

Darragh Grealish: The low power side of things is really up to the modem, to the device from a localized perspective to do that.

Shaun O’Meara: And the application itself as well.

Darragh Grealish: Yeah, let’s say you have a low power device with an LTE Cat M1 interface; these are low power LTE, or 4G, interfaces. Much of the same signaling carries over into 5G; the difference is that you need a 5G modem to bring up that connectivity. So you’d have a device there. When it needs to send or receive data, it can be woken up, and if it needs a firmware update it can ask, “Hey, I’m awake, I need a firmware update,” or it gets signals from the network. The network then offers the device a slice. This comes down through the management plane via what is called the NSSAI (Network Slice Selection Assistance Information), and the device can then consume it. But the conditions are set inside that slice template, and if the developer says, “I need so much data,” the network can come back and tell them, “Sorry, we don’t have that capacity, but you can have this slice,” for example. So there’s what is called slice awareness, or slice service-level assurance.
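For the curious, the slice identifier behind that NSSAI can be sketched in a few lines. This is an illustrative example, not from the panel: it packs a single S-NSSAI (Slice/Service Type plus Slice Differentiator) into its 4-byte layout per 3GPP TS 23.501; the helper name and the sample SD value are made up.

```python
# Standardized Slice/Service Type (SST) values from 3GPP TS 23.501:
SST_EMBB = 1   # enhanced Mobile Broadband
SST_URLLC = 2  # Ultra-Reliable Low-Latency Communications
SST_MIOT = 3   # Massive IoT

def pack_s_nssai(sst: int, sd: int = 0xFFFFFF) -> bytes:
    """Pack an S-NSSAI: 8-bit SST followed by 24-bit Slice Differentiator.

    SD value 0xFFFFFF conventionally means "no SD associated with the SST".
    """
    if not 0 <= sst <= 0xFF:
        raise ValueError("SST must fit in 8 bits")
    if not 0 <= sd <= 0xFFFFFF:
        raise ValueError("SD must fit in 24 bits")
    return bytes([sst]) + sd.to_bytes(3, "big")

# A massive-IoT slice with an operator-chosen differentiator (made up):
print(pack_s_nssai(SST_MIOT, 0x000102).hex())  # prints 03000102
```

The Requested NSSAI a device sends to the network is then a list of up to eight of these identifiers.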

So that’s the other side of slicing; it’s a two-sided thing. It’s not just “I need to consume the network for this.” It’s also “You’re consuming the network, and this is how you’re consuming it; maybe you can consume it more efficiently, because we’ve given you the metrics through slice assurance.”

So, back to that low power thing. I think it’s up to the developer to make the best use of network slicing in order to make it low power. But of course, with everything from beamforming and the radio, the whole objective is to reduce the power required.

Shaun O’Meara: And how fast it can connect. You can reconnect to the network faster and spend less time waiting, so there’s less power consumption in that process.
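Adam’s ten-year battery question lends itself to some back-of-the-envelope arithmetic. A minimal sketch follows; every current and capacity figure in it is an illustrative assumption, not a vendor spec or anything discussed on the panel:

```python
# Battery-life estimate for a duty-cycled cellular IoT device that
# wakes briefly once a day to send one packet, then deep-sleeps.

SLEEP_CURRENT_MA = 0.005      # deep-sleep draw, 5 microamps (assumed)
ACTIVE_CURRENT_MA = 100.0     # modem active while attaching/sending (assumed)
ACTIVE_SECONDS_PER_DAY = 10   # wake, attach, transmit, sleep (assumed)
BATTERY_MAH = 2400            # roughly one AA-sized lithium cell (assumed)

hours_active = ACTIVE_SECONDS_PER_DAY / 3600
hours_asleep = 24 - hours_active

# Average daily consumption in milliamp-hours:
daily_mah = ACTIVE_CURRENT_MA * hours_active + SLEEP_CURRENT_MA * hours_asleep

years = BATTERY_MAH / daily_mah / 365
print(f"{daily_mah:.3f} mAh/day, about {years:.1f} years")  # ~16.5 years
```

The interesting lever is how long the radio must stay awake per wake-up: the faster reconnection Shaun describes shrinks the active seconds, which dominate this budget far more than the sleep current does.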

Nick Chase: Excellent. Unfortunately it’s time for us to wrap up but I’m hoping that we can get you to come back another day and continue this conversation.

Adam Parco: One thing I didn’t get to talk to you about was how cloud native fits in. How does Kubernetes get involved? Does it solve anything? Does it make it worse?

Darragh Grealish: Your k0s distro, we need to look at that too. I really want to see it on this Qualcomm RB5. This is what the device looks like; it’s aimed at drones and automotive. You can have layers of platforms.

Nick Chase: So you can put k0s on that. That’s what you’re saying.

Adam Parco: We’ll get you set up with that.

Darragh Grealish: We have to. It’s out there. I’m hoping to see some 5G slicing on this device soon, with a mobile operator we’re working with here.

Developers lead the way of 5G cloud native innovation

Darragh Grealish: Another takeaway today should be to look at the Android 12 release. The Android SDK actually documents how developers can consume slicing, from a developer experience perspective. I think the big job now for 5G, and this is something I believe the community needs to put more energy into, is that the tools the operators have and the tools the developers have, from the public cloud backend all the way to the connectivity of the device, need to evolve. We really need to see someone taking leadership there, because the way it is at the moment, the hyperscalers and the edge devices are pushing the operators, and the vendors behind the operators, to try to bring something. We need to bridge that gap in a cloud native way, as Adam mentioned.

Adam Parco: One last thing I want to talk about is whether this will be successful because developers are leading the way and taking a developer stance. I think it’s really interesting and cool that it’s driven by how developers use it and how they can leverage it, designed around developer use cases, versus the exact opposite, where developers scramble at the end to figure out how to use it. I think that can make it very successful, the way it’s structured.

To watch a full recording of the panel discussion, click here. Our Cloud Native and Coffee panels occur on the third Thursday of every month; the next one will be on February 17. Follow Mirantis on LinkedIn to get the latest topics.
