Will Edge Computing Reverse Network Virtualization Momentum?
For the last five years, network function virtualization (NFV) has been one of the biggest trends in the telco space. Top telcos like AT&T announced their intent to go all in on network virtualization, and the rest of the world followed. For those not familiar, NFV is a new way to build service provider networks: instead of investing in expensive, carrier-grade hardware appliances, telcos deploy cheap COTS servers and run network functions (routers, session border controllers, and so on) as pure software. The outcome is cost savings, less vendor lock-in and, most importantly, the ability to update software network functions and physical hardware on independent cycles. There is plenty of good material out there on NFV, but the key point is that over the last five years telcos invested billions of dollars in various NFV initiatives for the network core. Yet when it comes to the network edge, we are about to go the other way.
Let me explain. The only thing in telco that is hotter than NFV today is multi-access edge computing (MEC). The projected adoption of IoT devices, connected cars, AR/VR and similar technologies is pushing telcos to bring ever more sophisticated network functions and data processing applications closer to the end user. Unlike NFV initiatives, which are primarily about saving money on carrier-grade gear, MEC is about seizing new opportunities at the network edge. The first to the edge will gain market advantage. And unlike with NFV, where telcos would look to build rather than buy, with MEC telcos are ready to throw money at vendors for out-of-the-box integrated solutions that get them to the "edge finish line" faster. In effect, the market race for the edge is reversing the NFV momentum that has been building at the network core for the last five years.
You cannot blame either service providers or telecom vendors for prioritizing speed over longer-term efficiencies at this stage in market development. However, since a large chunk of Mirantis revenue is tied to network virtualization use cases, we can't help but wonder about the main obstacles to virtualizing the network edge, and not just the core, today.
In trying to find the answer, we surveyed the landscape of incumbent telco vendors and found the following:
- all have ready-to-buy gear or solutions for the edge (Nokia, Huawei, Juniper);
- all of these solutions are well-intentioned about being open and standards-based;
- none of them actually exists as something you can download and try.
Why does this matter? It matters because NFV has momentum at the network core precisely because the two virtualization platforms used there, OpenStack and VMware, are real, tangible pieces of software that vendors building virtual network functions can download, install and experiment with. You virtualize your core using either of the two, and there is a rich catalog of VNFs you can run on top. When it comes to the network edge, however, today we have closed vendor solutions and many open reference architectures, but nothing concrete for an ecosystem of VNF and software vendors to build on. Changing that is the first step towards a virtualized edge.
With our release of the Kubernetes-based MCP Edge, we openly admit that we may be making a bet on an edge architecture that isn't guaranteed to evolve into a standard. However, we hope the broader ecosystem of vendors and telcos will look at it, try it, give us feedback, or even release alternative versions of their own that can be experimented with in the open. Our release of MCP Edge is an open call to the service provider ecosystem: if we want the edge to be decoupled from hardware, the time has come to go beyond standards bodies and reference architectures. The time has come to start experimenting with real software.
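To make "real software" concrete: on a Kubernetes-based platform, a network function can be packaged as a container image and deployed with an ordinary manifest that runs unchanged on any conformant cluster. The sketch below is purely illustrative; the image name, resource figures and labels are placeholders, not part of any actual MCP Edge release or vendor product.

```yaml
# Hypothetical example: a containerized network function (a virtual router)
# deployed on a Kubernetes cluster. The image "example.com/vnf/vrouter:1.0"
# is a placeholder, not a real artifact.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-vrouter
  labels:
    app: edge-vrouter
spec:
  replicas: 2                  # run two instances for resiliency
  selector:
    matchLabels:
      app: edge-vrouter
  template:
    metadata:
      labels:
        app: edge-vrouter
    spec:
      containers:
      - name: vrouter
        image: example.com/vnf/vrouter:1.0   # placeholder image
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]   # packet-forwarding functions typically need this
        resources:
          requests:
            cpu: "2"             # illustrative sizing only
            memory: 2Gi
```

The point is not this particular manifest but the property it demonstrates: a VNF vendor can test such an artifact on a laptop with `kubectl apply -f vrouter.yaml` against a local cluster, which is exactly the download-and-try quality that OpenStack and VMware gave the core and that the edge ecosystem is missing today.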