Application-aware NFV infrastructure, or the Heisenberg Uncertainty Principle and NFV
The take-away is that nature insists on trade-offs: nailing down one part of a problem domain can put another part in the wind. You encounter one example when you introduce virtualization technologies into the Communications Service Provider (CSP) cloud, where an important trade-off pits utilization and performance on the one hand against elasticity, agility, and manageability on the other.
At the heart of NFV is the ability to take network functionality that was previously delivered as a hardware-based appliance and convert it to software running on commodity servers. Implicit in this is the need for virtualization technologies, which abstract the software operating environment from the underlying hardware resources. However, these virtualization technologies, critical as they are to implementing the cloud environment, also bring challenges that must be addressed and risks that must be managed, and both are made greater by the stringent requirements for high performance and simple, efficient management imposed in Telco environments.
Figure 1: Traditional Architecture vs. Virtualized Architecture
As shown in Figure 1, virtualization, by its very nature, brings two major sources of performance bottlenecks: 1) hardware resource contention over storage I/O, memory, CPU cores, and network bandwidth; and 2) virtualization overhead, the additional layers of abstraction and processing added to application data flows. Both affect overall application performance, and in the Telco cloud, which handles both latency-sensitive and latency-insensitive applications, any performance degradation can severely harm quality of experience.
Fortunately, recent technology advances - such as Intel's Data Plane Development Kit (DPDK) and PCI-SIG Single Root I/O Virtualization (SR-IOV) - help enable reliable NFV deployments that meet Telco-grade SLAs. Combined with the improved ability of COTS hardware platforms (including the most recent x86 processor generations) to take advantage of these data plane acceleration technologies, they allow CSPs to deploy data path edge functions such as SBC, IMS, CPE/PE routers, and EPC elements on standard high-volume servers.
Mirantis OpenStack 9.0, our latest OpenStack distribution based on the Mitaka release, strikes the delicate balance between achieving the highest practical utilization and ensuring an acceptable level of performance.
In the 9.0 release, CSPs can achieve improved performance when running NFV workloads and other demanding network applications, thanks to support for huge pages, SR-IOV, NUMA/CPU pinning, and DPDK. All of these capabilities can be configured through OpenStack Fuel, and all have been fully tested, documented, and readied for production deployment in Mirantis OpenStack 9.0 (a configuration sketch follows the list below):
- The integration of NFV features such as huge pages, SR-IOV, NUMA/CPU pinning and DPDK into the OpenStack cloud operating environment enables fine-grained matching of NFV workload requirements to platform capabilities, prior to launching a virtual machine
- Such feature support enables CSPs to offer premium, revenue-generating services based on specific hardware features
- CSPs can now use Fuel to easily configure and provision the aforementioned NFV features in an automated and repeatable manner
- CSPs can efficiently manage post-deployment operations, using either the lifecycle management features of Fuel or tools such as Puppet Enterprise, while overseeing the infrastructure with StackLight for logging, monitoring, and alerting (LMA).
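To make the feature list above concrete, here is a minimal, hypothetical sketch of how an operator might express huge-page, CPU-pinning, NUMA, and SR-IOV requirements for a VNF using standard Nova flavor extra specs and a Neutron SR-IOV port, via python-novaclient and python-neutronclient against a Mitaka-era cloud. It is not taken from Mirantis documentation; the credentials, flavor name, and network ID are placeholders, and Fuel is assumed to have already enabled the underlying host support (huge pages, pinned cores, SR-IOV virtual functions).

```python
# Hypothetical sketch: declare NFV placement requirements per workload.
# All names and credentials below are placeholders, not Mirantis defaults.
from keystoneauth1 import loading, session
from novaclient import client as nova_client
from neutronclient.v2_0 import client as neutron_client

# Authenticate once and share the Keystone session between clients.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='admin',
    password='secret',
    project_name='admin',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

nova = nova_client.Client('2', session=sess)
neutron = neutron_client.Client(session=sess)

# Flavor whose extra specs pin vCPUs to dedicated host cores, back guest
# memory with huge pages, and confine the guest to a single NUMA node.
flavor = nova.flavors.create(name='nfv.medium', ram=8192, vcpus=8, disk=40)
flavor.set_keys({
    'hw:cpu_policy': 'dedicated',   # CPU pinning
    'hw:mem_page_size': 'large',    # huge pages
    'hw:numa_nodes': '1',           # single-NUMA-node guest topology
})

# SR-IOV: ask Neutron for a port backed by a virtual function (VF);
# the port ID is then passed to the server boot request.
port = neutron.create_port({'port': {
    'network_id': 'REPLACE_WITH_PROVIDER_NET_ID',
    'binding:vnic_type': 'direct',
    'name': 'vnf-sriov-port',
}})
print(flavor.id, port['port']['id'])
```

When an instance is booted with this flavor and port, the Nova scheduler matches the extra specs against compute hosts that actually provide pinned cores, huge pages, and SR-IOV virtual functions, which is what enables the fine-grained, pre-launch matching of NFV workload requirements to platform capabilities described above.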
You can also refer to our NFV Solution web page to learn more about the Mirantis CSP cloud solution, Open NFV reference platform, and the NFV partner ecosystem.