Mirantis OpenStack 21.3 solves Edge footprint challenges and enhances networking for Telco workloads
The latest release of Mirantis OpenStack for Kubernetes builds upon our solid IaaS foundation to enable new advanced features for use-case-specific deployment profiles. Specifically, we have introduced a number of capabilities that enable our customers to implement complex telco, edge, and hybrid telco-edge cloud architectures, including new storage architecture options, east-west traffic encryption, and a slew of Tungsten Fabric capabilities.
Hyper-converged nodes and LVM block storage for edge sites
Ceph, the standard storage backend for general-purpose Mirantis OpenStack, provides outstanding resiliency and good performance at scale, but recommendations for a production-grade Ceph cluster typically specify at least 9 nodes.
So how do you fit a resilient Ceph cluster into a small footprint or resource-restricted environment such as an edge site, where every rack unit is at a premium?
One way is to apply the “hyper-converged” pattern, in which a single compute node combines two functions: VM hypervisor (KVM) and storage for persistent data (Ceph OSD).
However, if the number of required nodes is still too high, the alternative is to sacrifice resilience in favor of simplicity by switching to the classic Linux Logical Volume Manager (LVM) + iSCSI backend for the block storage service. Combined with the hyper-converged pattern, this architecture gives cloud workloads a full set of capabilities even in tiny environments of 4-5 compute nodes.
The LVM+iSCSI backend for OpenStack Cinder is primarily intended for use in the edge site architecture, in which small remote subgroups of compute nodes are managed by a central Mirantis OpenStack control plane over a high-latency link.
To prevent block storage traffic from crossing the borders of an edge site, VMs and their corresponding volumes need to be placed into the same host aggregate corresponding to the site. Mirantis OpenStack specifically ensures this scenario is supported out of the box.
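As a conceptual sketch (not Mirantis code; all zone and host names below are hypothetical), the placement rule boils down to one check: does the VM's compute host belong to the same availability zone as its volume?

```python
# Minimal model of the placement rule described above: a VM and its
# volume must share the same edge-site availability zone, so that
# block storage (iSCSI) traffic never leaves the site.
# All names here are illustrative, not real deployment values.

EDGE_SITES = {
    "edge-az-1": {"compute-01", "compute-02"},
    "edge-az-2": {"compute-03", "compute-04"},
}

def zone_of(host: str) -> str:
    """Return the edge-site availability zone a compute host belongs to."""
    for zone, hosts in EDGE_SITES.items():
        if host in hosts:
            return zone
    raise ValueError(f"unknown host: {host}")

def placement_is_local(vm_host: str, volume_zone: str) -> bool:
    """True if the VM's host and its volume live in the same edge AZ,
    i.e. storage traffic stays inside the site."""
    return zone_of(vm_host) == volume_zone
```

In a real deployment the scheduler enforces this for you when both the server and the volume are created in the same availability zone; the sketch only illustrates the invariant being maintained.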
East-west traffic encryption
Cloud operators are justifiably paranoid about customer data getting stolen. This risk is highest in environments where the cloud operator does not have full control over the infrastructure; for example, edge cloud sites are often colocated in third-party data centers.
Cloud workload traffic in transit between compute nodes at an edge site is especially vulnerable to interception. For this reason, security-conscious operators often consider “in-flight” encryption of tenant traffic a must for compliance with Federal Information Processing Standards (FIPS) or other industry standards.
The new east-west traffic encryption feature of Mirantis OpenStack 21.3 relies on strongSwan, an open source IPsec-based VPN solution, to create a mesh of IPsec tunnels between all the compute nodes within an OpenStack cluster.
It is important to note that traffic encryption introduces significant CPU overhead that increases with the number of compute nodes involved. It should therefore primarily be used in relatively small clouds, ideally on CPUs that support hardware-accelerated encryption.
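The overhead grows quickly because a full mesh requires one IPsec tunnel for every pair of compute nodes, so the tunnel count is quadratic in cluster size. A quick back-of-the-envelope sketch:

```python
def ipsec_mesh_tunnels(num_computes: int) -> int:
    """Tunnels in a full IPsec mesh: one per unordered pair of nodes."""
    return num_computes * (num_computes - 1) // 2

for n in (5, 10, 50):
    print(n, "nodes ->", ipsec_mesh_tunnels(n), "tunnels")
# 5 nodes -> 10 tunnels
# 10 nodes -> 45 tunnels
# 50 nodes -> 1225 tunnels
```

Going from 10 to 50 nodes multiplies the tunnel count by more than 27x, which is why the feature fits small edge clouds far better than large regional ones.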
Technical preview of Tungsten Fabric v2011
Tungsten Fabric is the Software Defined Networking (SDN) solution of choice for telco cloud operators, and Mirantis OpenStack 21.3 introduces initial support for Tungsten Fabric v2011, the latest stable version of the SDN. It offers for evaluation a number of advanced networking capabilities, including Virtual Port Groups and 4-byte autonomous system numbers (ASN) for BGP peering.
Together with OpenStack Victoria, Tungsten Fabric v2011 will become one of Mirantis OpenStack’s long-term supported (LTS) configurations.
New Tungsten Fabric Features in Mirantis OpenStack 21.3
Single root I/O virtualization (SR-IOV) network traffic acceleration is now available to all telco customers as a fully supported feature. A physical network card is presented at the hardware level as multiple virtual PCI devices, each passed directly to a VM, completely bypassing the overhead of the host OS networking stack.
Mirantis OpenStack 21.3 supports a single SR-IOV-enabled NIC per compute node for Tungsten Fabric; this limitation will be removed in future releases.
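On Linux, the number of virtual functions (VFs) a NIC currently exposes is reported through the standard sysfs attribute `sriov_numvfs`. A small helper sketch for checking it (the interface name and sysfs root are parameters, since both are environment-specific):

```python
from pathlib import Path

def sriov_numvfs(iface: str, sysfs_root: str = "/sys/class/net") -> int:
    """Return the number of SR-IOV virtual functions currently enabled
    on a network interface, read from the standard Linux sysfs
    attribute <sysfs_root>/<iface>/device/sriov_numvfs.
    Raises FileNotFoundError if the NIC has no SR-IOV support."""
    path = Path(sysfs_root) / iface / "device" / "sriov_numvfs"
    return int(path.read_text().strip())
```

Each VF counted here is one of the virtual PCI devices that can be passed directly into a VM as described above.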
When hardware offloading is enabled, some network interface cards, such as the Broadcom BCM5719, are known to miscalculate IP packet checksums, causing severe disruptions in cloud tenant traffic.
The workaround is to use the Tungsten Fabric life cycle management API to disable hardware offloading for outgoing traffic, although the downside is naturally lower network performance.
Security-conscious cloud operators prefer to physically separate control plane and data plane traffic into different network interfaces. Tungsten Fabric lifecycle management now enables operators to choose a specific interface for the XMPP exchange (controller to compute nodes) and for BGP peering (controller to edge routers) communication.
For those customers who don’t care about isolation of control plane traffic, we have preserved the good old “everything on a single NIC” mode of deployment for Tungsten Fabric.