Cumulus Networks and Mirantis: An Open Stack for Open Network Hardware

On Wednesday, January 28, Mirantis and Cumulus Networks teamed up for a webinar titled “Unlock the Potential of OpenStack: Making Cloud Deployments Simpler, More Affordable, and Faster.” The primary presenters were Meena Sankaran, who heads Ecosystem and Solutions at Cumulus Networks, and Kamesh Pemmaraju, who drives Partner Marketing at Mirantis.

The main goal was to introduce attendees to the fast-maturing technology stack that rides on open network hardware. Often called “white box” network components, these devices are essentially servers with a high density of network ports, engineered to host a standard network OS (e.g., Cumulus Linux) plus relevant utilities and applications. This approach lets you provision such devices, deploy the OS and applications onto them, and remotely configure and manage them with versions of the same tools and processes used to manage servers generally.
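To make that concrete, here is a minimal Python sketch of what “managing a switch like a server” can look like. It assumes a Cumulus-style network OS that exposes front-panel ports as ordinary Linux interfaces named swp1, swp2, and so on; the iproute2 tooling it shells out to is the same you would use on any Linux server. It is an illustration, not official tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch: treat an open switch like any Linux server.

Assumes a Cumulus-style network OS that exposes front-panel ports
as ordinary Linux interfaces named swp1, swp2, ... (other network
OSes may use different naming).
"""
import os
import subprocess

SYS_NET = "/sys/class/net"

def front_panel_ports():
    # Front-panel switch ports show up alongside eth0, lo, etc.
    return sorted(p for p in os.listdir(SYS_NET) if p.startswith("swp"))

def bring_up(port):
    # The same iproute2 tooling used on any Linux server applies here.
    subprocess.run(["ip", "link", "set", port, "up"], check=True)

if __name__ == "__main__":
    for port in front_panel_ports():
        bring_up(port)
        print(f"enabled {port}")
```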

Cumulus’ Meena Sankaran discussed a recommended leaf-spine L2/L3 physical architecture for optimally connecting such open, Linux-driven network hardware. Benefits of the leaf-spine arrangement (sketched in code below) include:

- Innate redundancy.
- Same-length paths with predictable latency.
- Reduced routing overhead, achieved by exploiting various flavors of virtual circuits.
- Tolerance for coexisting ‘elephant’ and ‘mouse’ flows, e.g., very large file transfers running alongside bursty REST traffic. Such tolerance is essential for best performance of Hadoop and similar applications.
- Greater cost-efficiency per port, via many 10 Gbps ports instead of a small number of much more expensive 40 Gbps ports.
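As a rough illustration of why leaf-spine paths are uniform, the short Python sketch below enumerates the links of a small fabric (the leaf and spine counts are made up for illustration). Every leaf connects to every spine, so any leaf-to-leaf path is exactly leaf-to-spine-to-leaf, giving the same hop count and predictable latency regardless of which spine carries the flow.

```python
"""Illustrative sketch: enumerate the links of a small leaf-spine fabric.

The leaf/spine counts are arbitrary; the point is the full mesh between
tiers, which makes every leaf-to-leaf path exactly two hops long.
"""
from itertools import product

def leaf_spine_links(n_leaves: int, n_spines: int):
    # Every leaf connects to every spine: a full bipartite mesh.
    return [(f"leaf{l}", f"spine{s}")
            for l, s in product(range(1, n_leaves + 1),
                                range(1, n_spines + 1))]

if __name__ == "__main__":
    for leaf, spine in leaf_spine_links(n_leaves=4, n_spines=2):
        print(f"{leaf} <-> {spine}")
    # Any two leaves are always two hops apart via any spine:
    # leaf1 -> spineX -> leaf3, whichever spine carries the flow.
```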

Figure 1. Cumulus Linux architecture for an open standard network node: core components run at the kernel level, while higher-order routing, VXLAN and bridging, provisioning clients, and user applications run in userspace over a standard hardware abstraction layer.


A deeper discussion ensued about how such apparently complex and sophisticated networks can be deployed and managed. Ms. Sankaran described the essentials of ONIE (the Open Network Install Environment), a project begun by Cumulus in 2012 to provide a network bootstrap and provisioning pathway for open network gear, analogous to PXE on servers. The project is now maintained under the Open Compute Project.
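As a simplified illustration of the ONIE discovery step, the sketch below generates the waterfall of default installer names a booting switch probes for on its provisioning server. The real discovery process is richer (DHCP options, TFTP, local storage; see the ONIE documentation for the full order), and the platform strings and server URL here are purely illustrative.

```python
"""Simplified sketch of ONIE's default installer-name waterfall.

A freshly booted switch tries increasingly generic filenames on its
provisioning server; the architecture/vendor/machine strings below
are illustrative placeholders.
"""

def onie_default_names(arch: str, vendor: str, machine: str):
    # Most specific name first, generic fallback last.
    return [
        f"onie-installer-{arch}-{vendor}_{machine}",
        f"onie-installer-{arch}",
        "onie-installer",
    ]

if __name__ == "__main__":
    for name in onie_default_names("x86_64", "cumulus", "example_switch"):
        # Hypothetical server name, for illustration only.
        print(f"http://provisioning-server/{name}")
```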

Figure 2. An OpenStack cluster networked using a leaf-spine arrangement, combining ToR (Top of Rack) switches with fully-meshed backbone switches. In a fully-realized configuration, each compute node would also host a kernel-layer softswitch under control of the overlay SDN.


In principle, the ONIE pathway could serve as the first step in bootstrapping an entire cloud, including its network, from the ground up, all from a single server and console. The network would cold-start as a flat L2 domain, and ONIE would be used to discover the open network servers and load the network OS and base functionality onto them. A second provisioning engine, such as Mirantis Fuel, would then take over: using PXE to place a minimal OS on cluster node hardware, allowing pre-deployment configuration of OpenStack components (and, optionally, of SDN components), and then deploying the cloud and its SDN. From ‘wired-up raw hardware’ to ‘fully functional, Clos-networked HA cloud’ in the span of a few cups of coffee.
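The sketch below outlines that two-phase flow in Python. Every helper is a hypothetical placeholder standing in for the real tooling (ONIE for the switches, a Fuel-style engine for the nodes), not an actual Cumulus or Fuel API.

```python
"""High-level sketch of the two-phase, ground-up bootstrap described
above. All helpers are hypothetical placeholders, not real APIs.
"""

def install_nos_via_onie(switch: str) -> None:
    # Placeholder: in reality, ONIE discovers an installer and runs it.
    print(f"ONIE: {switch} discovered on flat L2, network OS installed")

def fuel_pxe_boot(node: str) -> None:
    # Placeholder: in reality, Fuel PXE-boots the node with a minimal OS.
    print(f"Fuel: {node} PXE-booted with a minimal OS")

def deploy_openstack(nodes: list, with_sdn: bool) -> None:
    # Placeholder: in reality, Fuel deploys OpenStack (and optionally SDN).
    print(f"Fuel: OpenStack deployed to {len(nodes)} nodes (SDN={with_sdn})")

def bootstrap_cloud(switches: list, nodes: list) -> None:
    # Phase 1: the fabric cold-starts flat; ONIE loads the network OS.
    for switch in switches:
        install_nos_via_onie(switch)
    # Phase 2: a second provisioning engine takes over for the cluster.
    for node in nodes:
        fuel_pxe_boot(node)
    deploy_openstack(nodes, with_sdn=True)

if __name__ == "__main__":
    bootstrap_cloud(["spine1", "spine2", "leaf1", "leaf2"],
                    ["node1", "node2", "node3"])
```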

In the near term, Mirantis and Cumulus have agreed to jointly produce a reference architecture showing a decoupled version of this ground-up deployment process. But the potential also exists for closer integration between Cumulus and Mirantis OpenStack, perhaps via Fuel plugins.

 
