
Configuring Bare-metal Switches in OpenStack Cloud Networks: Bare-metal provisioning, part 4

Dmitry Russkikh - September 24, 2012

In previous blog posts, we outlined a bare-metal provisioning framework for OpenStack (see Beyond virtual machines and hypervisors, Placement control and multi-tenancy isolation, and Preparing images for bare-metal nodes using OpenStack Cloud). This time, we want to talk about networking.

Normally, when you run virtual instances, OpenStack does the networking configuration for you. This is not the case with bare-metal provisioning, where you have to go to the physical switch and configure the bare-metal ports for the project network yourself.

You also need to be careful with the MAC addresses of bare-metal machines. With virtual instances, MAC addresses are generated randomly for each new VM, but in the case of bare metal, they're static. Moreover, as we pointed out in Beyond virtual machines and hypervisors, we generally want provisioning to be handled over a dedicated service network.

Let's look at one example, a deployment where we had to (a) implement automatic plugging of bare-metal instances into a project network and (b) resolve an issue with the storing of MAC addresses.

Solution architecture

We had Juniper EX4200 and QFX3500 switches, which can be managed via the netconf protocol, an XML-RPC-like protocol designed for managing network devices. For OpenStack networking, we used the standard VlanManager with the multi_host feature, which didn't require any modifications to work with bare-metal instances. VlanManager creates a bridge and starts a DHCP server (dnsmasq) for a project's vlan on each host where that project's instances are spawned, so bare-metal nodes receive IP addresses via DHCP as long as they are plugged into the proper vlan. The DHCP server runs on the bare-metal compute node (please see this post for details on how a bare-metal compute node works).
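
For reference, a VlanManager/multi_host setup is enabled with nova.conf flags along these lines (the interface name and vlan range below are illustrative values, not taken from our deployment):

--network_manager=nova.network.manager.VlanManager
--multi_host=True
--vlan_interface=eth1
--vlan_start=100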

Below is a schematic diagram of the networking used in our solution. The dnsmasq service acts as a DHCP server.

[Schematic network diagram]

For our purposes, it was enough to plug the bare-metal instance into a project vlan. To make this automatic, we implemented an abstraction layer for switch management and a driver for Juniper switches named JuniperNetworkManager. It's located in nova/virt/baremetal/networkmgr/.
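
To give a feel for the abstraction layer, here is a minimal sketch; the base class, method names, and constructor arguments are assumptions for illustration, not the exact interface in nova/virt/baremetal/networkmgr/:

class BaseNetworkManager(object):
    """Abstract switch-management driver used by the bare-metal driver."""

    def plug(self, switch_port, vlan_id):
        """Put the given switch port into the given vlan."""
        raise NotImplementedError()

    def unplug(self, switch_port, vlan_id):
        """Remove the given switch port from the given vlan."""
        raise NotImplementedError()


class JuniperNetworkManager(BaseNetworkManager):
    """Driver that manages Juniper EX4200/QFX3500 switches over netconf."""

    def __init__(self, switch_address, username, password):
        self.switch_address = switch_address
        self.username = username
        self.password = password

    def plug(self, switch_port, vlan_id):
        # Lock the candidate configuration, merge the vlan membership
        # for the port, commit, and unlock (see the netconf RPCs below).
        pass

    def unplug(self, switch_port, vlan_id):
        pass

Because the bare-metal driver only talks to the abstract interface, supporting another switch vendor means adding one more driver module next to the Juniper one.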

Returning to MAC address storage: we decided to store the MAC addresses of bare-metal instances in the same place in the database where the MAC addresses of virtual machines are stored, so we inject a node's MAC address directly into the record that corresponds to the bare-metal instance being spawned.
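
This matters because dnsmasq hands out the fixed IP based on the MAC address it sees in the DHCP request, so the stored record has to contain the node's real address. Conceptually, the injection amounts to something like the following sketch (the table and column names, and the use of raw SQLAlchemy instead of nova's DB layer, are assumptions for illustration):

import sqlalchemy


def inject_bare_metal_mac(engine, instance_uuid, node_mac):
    """Replace the generated MAC of the spawning instance with the node's static MAC."""
    with engine.begin() as conn:
        conn.execute(
            sqlalchemy.text(
                "UPDATE virtual_interfaces SET address = :mac "
                "WHERE instance_uuid = :uuid"
            ),
            {"mac": node_mac, "uuid": instance_uuid},
        )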

JuniperNetworkManager

As mentioned above, JuniperNetworkManager uses the netconf protocol to access the switch configuration. Here are some of the XML requests it uses:

Lock configuration:

<rpc>
  <lock-configuration/>
</rpc>

Unlock configuration:

<rpc>
  <unlock-configuration/>
</rpc>

Commit configuration:

<rpc>
  <commit-configuration/>
</rpc>

Get interface configuration:

<rpc>
  <get-config>
    <source>
      <candidate/>
    </source>
    <filter type="subtree">
      <configuration>
        <interfaces>
          <interface>
            <name>$interface_name</name>
          </interface>
        </interfaces>
      </configuration>
    </filter>
  </get-config>
</rpc>

Set interface vlan:

<rpc>
  <load-configuration action="merge" format="text">
    <configuration-text>
      interfaces {
        $interface_name {
          unit 0 {
            family ethernet-switching {
              vlan {
                members $vlan;
              }
            }
          }
        }
      }
    </configuration-text>
  </load-configuration>
</rpc>
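
To show how such requests can be driven from Python, here is a sketch of the set-vlan operation using the ncclient library; ncclient, the connection parameters, and the helper name are assumptions for illustration, not necessarily how JuniperNetworkManager itself is implemented:

from ncclient import manager
from ncclient.xml_ import to_ele

# Juniper's load-configuration RPC, parameterized with the port and vlan.
SET_VLAN_TEMPLATE = """<load-configuration action="merge" format="text">
  <configuration-text>
    interfaces {
      %(interface)s {
        unit 0 {
          family ethernet-switching {
            vlan {
              members %(vlan)s;
            }
          }
        }
      }
    }
  </configuration-text>
</load-configuration>"""


def set_interface_vlan(host, user, password, interface, vlan):
    """Lock the switch configuration, merge the vlan membership, commit, unlock."""
    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False,
                         device_params={"name": "junos"}) as conn:
        conn.dispatch(to_ele("<lock-configuration/>"))
        try:
            conn.dispatch(to_ele(SET_VLAN_TEMPLATE
                                 % {"interface": interface, "vlan": vlan}))
            conn.dispatch(to_ele("<commit-configuration/>"))
        finally:
            conn.dispatch(to_ele("<unlock-configuration/>"))

Bracketing the change between the lock and unlock RPCs keeps concurrent spawns from stepping on each other's candidate configuration on the switch.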

 

Configuration

To use this manager, you have to add the following lines to the nova.conf file on the bare-metal controller host:

--networkmgr_driver=nova.virt.baremetal.networkmgr.juniper.JuniperNetworkManager
--service_vlan=<service_vlan_number>

Below are the basic steps of spawning and terminating bare-metal instances, showing when the switch driver acts; a sketch of the corresponding driver calls follows the two lists.

Spawning a bare-metal node:

  1. The node is plugged into the service vlan by the switch management driver.
  2. The node boots via PXE. TinyCore Linux is loaded.
  3. The bare-metal agent is downloaded from the controller node.
  4. The agent starts and receives a “spawn” task, then it downloads the target OS image onto a hard disk.
  5. After the task is completed, the bare-metal driver sets the hard disk as the node’s first boot device and reboots the node via IPMI.
  6. When a bare-metal node is almost ready, it's switched to the appropriate project’s vlan by a switch management driver.

Termination of a bare-metal node:

  1. The node is plugged into the service vlan by a switch management driver.
  2. The node boots via PXE. TinyCore Linux is loaded.
  3. The bare-metal agent is downloaded from the controller node.
  4. The agent starts and receives a “destroy” task, then it erases data on the hard disk.
  5. After the task is completed, the bare-metal driver shuts down the node and it stays in the service vlan until it's needed to spawn a new bare-metal instance.
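
Tying the two lists to code, here is a heavily simplified sketch of where the switch-management driver is invoked; every helper function and field name below is an illustrative stand-in (plug() is the hypothetical method from the earlier sketch), and only the ordering of the calls mirrors the lists above:

def pxe_boot(node):
    """Stub: netboot the node into TinyCore Linux and fetch the agent."""


def run_agent_task(node, task, **kwargs):
    """Stub: hand a 'spawn' or 'destroy' task to the bare-metal agent."""


def set_boot_device_and_reboot(node, device):
    """Stub: set the first boot device and reboot the node via IPMI."""


def power_off(node):
    """Stub: shut the node down via IPMI."""


def spawn_bare_metal(node, instance, networkmgr, service_vlan):
    networkmgr.plug(node.switch_port, service_vlan)            # step 1
    pxe_boot(node)                                             # steps 2-3
    run_agent_task(node, "spawn", image=instance.image)        # step 4
    set_boot_device_and_reboot(node, "disk")                   # step 5
    networkmgr.plug(node.switch_port, instance.project_vlan)   # step 6


def destroy_bare_metal(node, networkmgr, service_vlan):
    networkmgr.plug(node.switch_port, service_vlan)            # step 1
    pxe_boot(node)                                             # steps 2-3
    run_agent_task(node, "destroy")                            # step 4
    power_off(node)                                            # step 5: the node waits in the service vlan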

Summary

Using VlanManager for networking and injecting the MAC addresses of bare-metal machines directly into the database allowed us to implement network management for bare-metal provisioning without any modifications to the core OpenStack code. It also shows that OpenStack is a highly customizable platform that makes it easy to build custom solutions on top of it.
