OpenStack Networking – FlatManager and FlatDHCPManager

Over time, networking in OpenStack has evolved from a simple, barely usable model to one that aims to support full customer isolation. To address different user needs, OpenStack comes with a handful of “network managers”. A network manager defines the network topology for a given OpenStack deployment. As of the current stable “Essex” release of OpenStack, one can choose from three different types of network managers: FlatManager, FlatDHCPManager and VlanManager. I’ll discuss the first two of them here.

FlatManager and FlatDHCPManager have lots in common. They both rely on the concept of bridged networking, with a single bridge device. Let’s consider here the example of a multi-host network; we’ll look at a single-host use case in a subsequent post.

For each compute node, there is a single virtual bridge created, the name of which is specified in the Nova configuration file using this option:


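The option itself did not survive in this copy of the post. In the Essex release the bridge name is set with the flat_network_bridge flag; the value br100 below is the name used throughout this post’s comments:

```ini
# nova.conf – name of the bridge that all VMs on the compute node attach to
flat_network_bridge=br100
```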
All the VMs spawned by OpenStack get attached to this dedicated bridge.

Network bridging on OpenStack compute node

This approach (a single bridge per compute node) suffers from a well-known limitation of bridged networking: a Linux bridge can be attached to only a single physical interface on the host machine (we could get away with VLAN interfaces here, but this is not supported by FlatDHCPManager and FlatManager). Because of this, there is no L2 isolation between hosts: they all share the same ARP broadcast domain.

The idea behind FlatManager and FlatDHCPManager is to have one “flat” IP address pool defined throughout the cluster. This address space is shared among all user instances, regardless of which tenant they belong to. Each tenant is free to grab whatever address happens to be available in the pool.
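As an illustration, such a cluster-wide pool can be created with nova-manage; the flag names below follow the Essex-era syntax, and the label and address range are made-up examples:

```shell
# Hypothetical example: define one "flat" pool of 256 addresses shared by
# all tenants. Label, range and bridge name are illustrative only.
nova-manage network create --label=private \
    --fixed_range_v4=10.0.0.0/24 \
    --num_networks=1 \
    --network_size=256 \
    --bridge=br100
```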


FlatManager provides the most primitive set of operations. Its role boils down to attaching the instance to the bridge on the compute node. By default, it does no IP configuration of the instance; this task is left to the systems administrator and can be done using an external DHCP server or other means.

FlatManager network topology


FlatDHCPManager plugs a given instance into the bridge and, on top of that, provides a DHCP server from which instances can boot.

On each compute node:

  • the network bridge is given an address from the “flat” IP pool
  • a dnsmasq DHCP server process is spawned and listens on the bridge interface IP
  • the bridge acts as the default gateway for all the instances running on the given compute node


FlatDHCPManager – network topology

As for dnsmasq, FlatDHCPManager creates a static lease file per compute node to guarantee that an instance keeps the same IP address over time. The lease file is constructed from instance data in the Nova database, namely the MAC address, IP address and hostname. The dnsmasq server is supposed to hand out addresses only to instances running locally on the compute node. To achieve this, the instance records to be put into the DHCP lease file are filtered by the ‘host’ field of the ‘instances’ table. Also, the default gateway option in dnsmasq is set to the bridge’s IP address. On the diagram below you can see that an instance will be given a different default gateway depending on which compute node it lands on.

Network gateways for instances running on different compute nodes
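The per-node lease-file filtering described above can be sketched roughly as follows. This is a simplified illustration, not Nova’s actual code: the function name and sample records are made up, and each entry uses the mac,hostname,ip ordering accepted by dnsmasq’s dhcp-host syntax.

```python
# Simplified sketch: build a per-compute-node dnsmasq static lease file from
# instance records (MAC, IP, hostname, host) as stored in the Nova database,
# keeping only instances scheduled on the local compute node.

def build_dhcp_hosts(instances, local_host):
    """Return static lease entries for instances running on local_host."""
    lines = []
    for inst in instances:
        if inst["host"] != local_host:      # filter by the 'host' field
            continue
        # one dnsmasq static host entry: <MAC>,<hostname>,<IP>
        lines.append("%s,%s,%s" % (inst["mac"], inst["hostname"], inst["ip"]))
    return "\n".join(lines)

# Made-up instance records for two compute nodes:
instances = [
    {"mac": "02:16:3e:00:00:01", "ip": "10.0.0.2", "hostname": "vm_1", "host": "compute-1"},
    {"mac": "02:16:3e:00:00:03", "ip": "10.0.0.4", "hostname": "vm_3", "host": "compute-2"},
]

print(build_dhcp_hosts(instances, "compute-1"))
# prints: 02:16:3e:00:00:01,vm_1,10.0.0.2  (only vm_1 runs on compute-1)
```

The effect is that each compute node’s dnsmasq only ever answers for its own instances, even though all nodes share one flat address pool.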


Below are the routing tables from vm_1 and vm_3 – each of them has a different default gateway:

root@vm_1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         …               0.0.0.0         UG    0      0        0 eth0

root@vm_3:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         …               0.0.0.0         UG    0      0        0 eth0


By default, all the VMs in the “flat” network can see one another regardless of which tenant they belong to. One can enforce instance isolation by setting the following flag in nova.conf:


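The flag itself is missing from this copy of the post; judging from the comments below, which name it explicitly, the setting in question is:

```ini
# nova.conf – when set to False, iptables rules drop traffic between
# instances on the shared flat network unless a security group allows it
allow_same_net_traffic=False
```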
This configures iptables policies to prevent any traffic between instances (even within the same tenant), unless it is explicitly unblocked in a security group.

From a practical standpoint, the “flat” managers seem usable for homogeneous, relatively small, internal corporate clouds where there are no tenants at all, or their number is very limited. A typical usage scenario would be a dynamically scaled web server farm or an HPC cluster. For this purpose it is usually sufficient to have a single IP address space, with IP address management offloaded to a central DHCP server or handled in a simple way by OpenStack’s dnsmasq. On the other hand, flat networking can struggle with scalability, as all the instances share the same L2 broadcast domain.

These issues (scalability and multi-tenancy) are in some ways addressed by VlanManager, which will be covered in an upcoming blog post.

31 responses to “OpenStack Networking – FlatManager and FlatDHCPManager”

    1. Hi,
      The relevant nova.conf entries are:

      # network manager to be used

      # bridge to attach vm-s to

      # the physical interface to which the bridge is attached

      # in flat network modes, this setting allows for configuring network inside vm prior to its boot
      # Before boot nova mounts the vm image and “injects” network configuration to /etc/network/interfaces
      # inside the vm

      # This setting is used for iptables rules (NAT + filtering) to be set up

  1. Nice post. Your diagrams helped my understanding, thanks! However, I am having trouble setting up FlatDHCP mode (and Flat too 🙁 ). In FlatDHCP, VMs within a compute node are able to talk to each other, but not outside it. So, if we consider your diagram, I cannot ping/ssh to but can ping/ssh to I’ve added rules for ssh and ping. Do you know what I am doing wrong? My nova.conf also matches what you gave above.

    1. Manish,

      The reasons can be numerous. I would suggest taking the following approach to address the problem:
      1. Check whether you have IP addresses assigned to all the bridges on the compute nodes (the incoming packet somehow needs to find its way to the network – this is done by routing, which in turn relies on br100 having an address from
      2. Check with tcpdump whether your pings arrive at the destination vm’s interface (vnetX) and on br100

      Also – you can copy your kernel routing table on compute nodes and paste it here.

      Please, let me know about the results – will try to help further.

  2. Thanks a lot for your response! 🙂

    1. A 10.0.0.x IP is assigned to the br100 interface on all compute nodes.
    2. After checking with tcpdump: the ping (ICMP packet) does not arrive on the destination compute node’s br100, but an ARP request for the destination is sent and answered by the destination compute node. I didn’t tcpdump inside the vm, though.

    Controller node with compute and network routing:
    default via … dev eth0 metric 100
    … dev br100 proto kernel scope link src …
    … dev eth0 proto kernel scope link src …
    … dev virbr0 proto kernel scope link src …
    … dev br100 proto kernel scope link src …

    Another compute node routing:
    default via … dev eth0 metric 100
    … dev br100 proto kernel scope link src …
    … dev eth0 proto kernel scope link src …
    … dev virbr0 proto kernel scope link src …
    … dev br100 proto kernel scope link src …

    1. Manish,

      Sorry for my late response. I hope you managed to resolve the problem. If you can see ARP broadcast and replies, but no IP traffic, then I would suspect either firewall or routing (but routing tables are correct here). Also – be sure that your switch works as expected. I am afraid for now I cannot tell much more without just logging in and looking at this particular setup.

  3. Manish – were you able to resolve the problem? It looks like the allow_same_net_traffic=False setting is critical in making vm instances hosted across two compute nodes communicate. I am also seeing the same behavior, but did not try changing the value of this setting to true.

  4. @Piotr: Thanks for your response. I checked the firewall. No problems there.

    @Tamale: Thanks for the suggestion. My system already has ip forwarding enabled. cat /proc/sys/net/ipv4/ip_forward gives 1.

    @hjg: Thanks for your suggestion. I had set allow_same_net_traffic=True before trying it out.

    @all: I am trying to set up the controller and compute node in VirtualBox VMs instead of two physical machines. When I manually launched a VM inside a clean Ubuntu 12.04 (that is, a VirtualBox VM) using virt-manager with qemu as the hypervisor in bridged mode, the VM was not able to communicate with any machine other than its host. So, I guess there is some problem with the VM-inside-a-VM environment. Now, I’ll be trying OpenStack on physical machines.

    1. Hello Manish,
      I am having the same issue on physical machines. Did you eventually manage to solve this problem?

  5. I have the same problem
    I use FlatDHCP mode with the multi_host option: I have a controller node (which is also a compute node) and a compute node (running only nova compute-network-api). I cannot ping a VM on the compute node from a VM on the controller: using tcpdump on br100, I can see that ARP requests are sent but no reply arrives.

    1. @Antonio
      I have the same problem too. I noticed that my second compute node is not assigned any IP address on the flat network (i.e. 10.0.0.x) and hence has no routes in the 10.0.0.x subnet. I didn’t have to do anything on my controller and its bridge was assigned I am wondering why the bridge on the second compute node is not assigned a 10.0.0.x IP? Did you resolve your issue?


        1. @Piotr Thanks for your response!
          Yes. I have the same nova.conf file on both the controller and compute node. On compute node I only changed the values for the following:
          I have br100 up both on controller and compute node. Please see the following:

  6. @Piotr
    I am not sure if I should do that! I have not done anything to setup br100 on the controller; it was setup by openstack scripts and works just fine. I believe this is going to be done as part of starting up the openstack scripts with the right conf options. In fact I had this working, before it went south!

  7. Do you need two NICs (one for eth0, one for br100) on both the controller node and the compute node? Or is br100 a virtual bridge used by VMs only?

    1. Hi,
      Depending on the HA mode you use:
      You need the cloud controller to have two NICs if you run with the multi_host=False option. The cloud controller then typically acts as the gateway for all the compute nodes.
      If you run multi_host=True, then each compute node needs to have two network cards, as each compute node is the default gateway for its locally running vm-s. Conversely, there is no need for 2 NICs on the cloud controller then.

      And yes – br100 is the interface to which all the instances connect. It is attached to the interface specified as flat_interface in nova.conf.
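To make the multi-host case concrete, here is a minimal sketch of the relevant nova.conf lines; multi_host and flat_interface=eth1 come from this discussion thread, and the remaining values should be treated as illustrative:

```ini
# nova.conf – multi-host flat DHCP: every compute node runs nova-network
# and acts as the default gateway for its local vm-s (2 NICs per node)
multi_host=True
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
flat_interface=eth1
```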

  8. This doc is very clear and helpful. One question: a flat network is an L2 network connecting all instances (across all nodes). As we know, the gateway of an L2 network should be a router’s interface, not each switch’s IP address (here, br100). So: 1) why not set the instances’ default gateway on every node to a common router interface in the network? 2) Even with no default gateway, instances can still communicate with each other because they are in the same L2 network, right? 3) In FlatManager mode, an external DHCP server will set a single, unique default gateway address for all instances, because it sees every instance as equal, right? Thanks.

  9. Hi,
    I am using OpenStack in only one VM, so the controller and the compute node are the same VM. In addition, I have only one NIC, eth0. I tried using FlatDHCPManager, but since eth0 is bridged with the veth pairs through br100, a DHCP request on br100 goes to the DHCP server from which eth0 gets its IP, not to the local dnsmasq. So it is either a race condition or the IP is overwritten (I am not sure). Is there any configuration which prevents a DHCP request from being answered by an external DHCP server?

  10. Hello,
    I have a 3 nodes setup:
    – controller
    – compute1
    – compute2

    Relevant lines in the nova.conf in compute nodes are:


    eth2 is an interface with no IP address on both compute1 and compute2.
    If I do not specify br100 in /etc/network/interfaces, the br100 interface is configured with
    in both compute1 and compute2
    (differently from what appears in this tutorial).
    I configured br100 in /etc/network/interfaces with different IPs: in compute1, in compute2,
    with bridge-ports eth2 for both.
    When I spawn VMs, br100 again gets the same address on both compute1 and compute2.
    However, ip a shows that br100 keeps the statically configured IP as secondary on both nodes (i.e. in compute1 and in compute2).
    The result of this latter configuration is that spawned VMs can ping the outside world and can ping other VMs hosted on the same compute node.
    However, pings do not work across different compute nodes.
    The problem is somewhat similar to what Manish raised last year.
    Tcpdump shows that ARP packets cross the boundaries but ICMP packets do not.
    I would appreciate any hint to fix this problem.

    1. Let me add that putting eth2 and br100 into promiscuous mode on both compute1 and compute2 did not solve the issue.

      1. I finally managed to solve this problem.
        Actually I omitted a relevant piece of information: the three nodes are VMs running in an ESXi server.
        The problem was fixed by enabling promiscuous mode in the vSwitch connecting the eth2 virtual NICs of the two compute nodes.

        1. Hello!
          Thanks for the tip. I was wondering and investigating what my OpenStack problem was, and had forgotten that I was running the nodes/controller inside Citrix XenCenter without promiscuous mode.
          Solved my problem, thank you!
          It may be useful for someone else:


  12. Hi,

    I have a two node setup:

    compute1 has two NICs and I want to create a flat network for each of the two NICs. The OpenStack documentation provides a neat diagram, but not enough configuration information.
    I could configure one flat network using FlatDHCPManager, but I am facing issues while configuring two flat networks (for the two NICs on compute1).
    Can anyone here help by providing the configuration for this?


  13. Hi
    I have a two-node setup: controller and compute1.

    I had set controller eth1 for, and set compute1 eth0 for. How can I set nova.conf on both servers?
    I saw the install guide said to set flat_interface=eth1,
    but I don’t understand this.

    Anyone can help me ?

  14. Hi,

    Is it possible to have another bridge interface (e.g. br101) and use it for another network?
    For example: br101 maps to and all VMs are under Is it also possible for a VM to be on both networks?
    Configuration examples would be a big help. The idea behind this is that we have a different network for each department (QA, OPS, DEV …); we would like to place a QA VM directly on the QA network, for example.

    QA’s VM1 eth0 br100 Company’s QA network

    DEV’s VM1 eth0 br101 Company’s DEV network

    We will not be using NAT.


Comments are closed.



