Democratizing Connectivity with a Containerized Network Function Running on a K8s-Based Edge Platform -- Q&A
You can also view the complete webinar and see the discussion of how Magma works and what it does.
Who is maintaining the Magma code?
The Connectivity team at Facebook is actively working to develop and maintain the code base.
Over time, we hope to engage more developers and partners to contribute to the project and add new features.
Where can I find the Magma code base?
Magma is available on GitHub at https://github.com/facebookincubator/magma.
Have you deployed Magma with other partners?
We're currently working with multiple operators on this effort. We're looking forward to expanding our work with more partners over time.
Are you trying to replace incumbent vendors?
No. Our goal with Magma is to expand the range of tools and systems available to new and incumbent mobile operators. With Magma, we are helping operators bring more people online who may not have access to affordable, high-quality connectivity.
Why did you choose to open source the Magma project? Did you consider trying to sell it?
We are not in the business of becoming a hardware or software vendor. By sharing our code, we're helping move the industry forward while giving other companies and individuals an opportunity to contribute to the project, and thereby ultimately drive greater, industry-wide impact.
How much Docker implementation efficiency is lost when running Virtlet for VM-based VNFs? Wouldn't it be more efficient to rewrite the VNFs natively using containers?
Yes, in a perfect world, all VNFs would use a microservices architecture and run in containers. But in reality, most VNFs are still VMs. Sadly, a large portion of VNFs simply replicate the software that ran on the proprietary physical appliance, so it's not easy to containerize all of them.
So at least for the short term, we have to live with VNFs running in VMs and requiring very specific environments. That's why Virtlet becomes an important part of this overall architecture. It also provides a very nice transition path: customers can use Virtlet in a Kubernetes environment to run VM-based VNFs, and then, as more and more VNFs transition to containers, the overall edge architecture doesn't have to change.
Would this become an alternative to OpenStack, which manages VMs today? If not, how would OpenStack be used with an edge cloud?
Depending on the use cases, an edge cloud may consist of any of the following:
- Pure k8s cluster with Virtlet
- Pure OpenStack clusters
- A combination of OpenStack and k8s clusters
What impact will this architecture have for usage with Virtlet once we introduce containerd instead of the dockerd/dockershim layer?
Virtlet already provides support for containerd, so there will be no impact.
Is Magma evolving to 5G SBA?
Yes, Magma is evolving to support 5G as well. That's something that we're evaluating right now; it's on our roadmap.
Does the Magma solution have a containerized version as well?
It's partially containerized. The orc8r is fully containerized, while the FeG and AGW are partially containerized: each runs as a VM, but the services inside the VM, such as radius and aaa, are containerized.
If PCRF is not available in the core network, what element can I use to replace it? Is it the Access Gateway?
Magma provides a PCRF-like database that implements the PCRF function, with the only caveat that instead of Gx, which is based on Diameter, it supports an optimized interface between the Federation Gateway and this PCRF function. For talking to a PCRF, the Federation Gateway supports the Gx interface.
It's not clear from the Magma architecture: is the orchestrator solution shown here a centralized deployment for all other edge clouds, or does each edge cloud use 3 servers for the control plane?
Each edge location has a minimum of 3 servers for the control plane. So if you have 10 edge locations, each with an instance of Magma running, then you would have 10 edge locations with a minimum of 3 servers each.
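The sizing above works out to a simple multiplication, which can be sketched as follows (the function name is just illustrative):

```python
def min_control_plane_servers(edge_locations: int, servers_per_location: int = 3) -> int:
    """Each edge location runs its own control plane with at least 3 servers."""
    return edge_locations * servers_per_location

# 10 edge locations, each with its own Magma instance:
print(min_control_plane_servers(10))  # prints 30
```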
What CNI are we using here for providing SCTP services?
It is the bridge CNI plugin: https://github.com/containernetworking/plugins/tree/master/plugins/main/bridge
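For reference, a bridge CNI plugin configuration looks roughly like the following sketch. The network name, bridge name, and subnet here are made-up placeholder values, not the ones used in this deployment:

```json
{
  "cniVersion": "0.4.0",
  "name": "sctp-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "gateway": "10.22.0.1"
  }
}
```

Because the bridge plugin does plain L2/L3 forwarding, it passes SCTP traffic through without the protocol-specific handling that some other CNIs require.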
What about the resiliency of federation and the orchestrator in case that particular node goes down? How are you handling node label changes for other nodes, and which component handles that?
The orchestrator component of Magma is basically a K8s pod, and the entire orchestrator app is stateless, so it's probably the easiest one to manage resilience for. As long as your Kubernetes is running, which should be the case because it's running HA across a minimum of three nodes, the K8s controller is responsible for scaling and maintaining resilience for the orchestrator component of Magma.
For the FeG, Magma currently supports active-standby. On failover, the orc8r will switch over to the standby FeG.
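In K8s terms, a stateless app like the orchestrator gets its resilience from a standard Deployment, which the controller reschedules onto healthy nodes and scales as needed. A minimal sketch is shown below; the name, label, image, and replica count are illustrative placeholders, not Magma's actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orc8r-example        # placeholder name, not the real Magma manifest
spec:
  replicas: 3                # spread across the 3-node HA control plane
  selector:
    matchLabels:
      app: orc8r-example
  template:
    metadata:
      labels:
        app: orc8r-example
    spec:
      containers:
        - name: orc8r
          image: example.org/orc8r:latest   # placeholder image
```

If a node dies, the Deployment controller notices the missing replicas and recreates the pods elsewhere, which is why statelessness makes this the easy case.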
Does the Orchestrator use some sort of ETSI framework for LCM functions of container VNFs?
I don't think there is a well-established ETSI framework for what a CNF should look like. There is something close to a consensus now emerging around what a CNF-friendly NFVi layer should look like, and that's basically K8s running on bare metal, with OpenStack and other components running on top via Helm charts. There are a bunch of diagrams from ETSI, CNCF, OPNFV, and basically all of the other bodies that dictate the telco standards. The diagrams look very similar, but when it comes to the actual architecture of the CNF, I don't think there is a common ETSI standard for what it should look like.
How does the client detect the degradation of service, and when is it time to fall back?
There are two layers of orchestration happening here. There's the orchestration done by the orchestrator component of Magma, which monitors the Magma components such as CWAGs or Access Gateways; if some sort of degradation happens at that layer, the orchestrator is responsible for it. And then there's infrastructure-level degradation. For example, if a physical node dies, or some part like the Virtlet service hangs up, those things are invisible to the orchestrator and are managed by MCP Edge, specifically by the StackLight component.
StackLight deploys monitoring probes for every single service that runs in MCP Edge and continuously reports anything that is wrong. Moreover, there are automatic triggers that are configured for restarts or redeploys that DriveTrain is capable of executing to proactively remediate some of the problems at the infrastructure level.