Improving DHCP Performance In OpenStack

Alexander Shaposhnikov - July 31, 2014 -

Have you ever seen a problem in OpenStack where a VM loses its IP address? If you have, you know what a problem it can be — especially if you have a large number of nodes and VMs. Your clients get frustrated as they start losing connectivity with their VMs for no obvious reason. Even the cloud support team gets frustrated, as everything appears to be working with no hints in the log files as to what might be wrong.

Sound familiar?

In this blog post, I would like to share my experience with OpenStack networking, and specifically the DHCP subcomponent that is responsible for allocating an IP address to a VM.

Why are we blaming it on the DHCP component? Because this particular issue is commonly caused by this small, seemingly trivial OpenStack component.

DHCP agent and DNSmasq

In OpenStack, neutron-dhcp-agent provides instances with IP addresses. In theory, neutron-dhcp-agent could support different backends, but for now it supports only dnsmasq. When an instance is spawned, allocating and assigning an IP address involves storing the address in the dnsmasq config, then starting or reloading dnsmasq. Usually OpenStack runs only one neutron-dhcp-agent, which spawns one dnsmasq per network, so one big network (including all of its subnets) is served by a single dnsmasq process. Theoretically, and according to practical lab testing, dnsmasq should be able to serve up to 1000 DHCP requests per second, but here are some facts:

  1. The lease time, by default, is 120 seconds. As you probably know, a DHCP client tries to renew its lease halfway through the lease time. That means that each and every VM will renew its IP address once a minute.

  2. Almost four minutes (3 minutes 43 seconds) are required to start one dnsmasq instance with 65,535 static leases. This usually happens when Neutron allocates an IP for a new VM and then forces dnsmasq to reload. During this time, no DHCP service is provided on the corresponding private Neutron network.

  3. If you’re not using the no-ping option in your dnsmasq configuration (the default in OpenStack, for safety reasons), service will be very slow, because dnsmasq spawns a separate pinger process to check that each offered IP address isn’t already in use. With the no-ping option, dnsmasq was able to serve about 160 requests per second for 10 minutes without losing any of them, though this figure is dependent on CPU core speed.

  4. Ubuntu and CentOS limit the kernel neighbour (MAC address) table to 128/512/1024 entries (net.ipv4.neigh.default.gc_thresh1/2/3). Because of this, entries that are not used frequently age out abnormally fast, which hurts networking performance and slows the node hosting the DHCP agent in resolving the correct MAC address for outgoing traffic.

  5. Attempting to work around these performance problems by significantly increasing the IP lease time will cause a huge problem with the release of IP addresses by neutron if your cloud’s load changes dynamically. By default, neutron will allocate an IP address to a VM for 24 hours, independent of the actual lease time, and will not release the address until 24 hours after the instance has been terminated.
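To see how quickly the default lease time becomes a problem, consider a back-of-the-envelope calculation (my own illustration, not part of the measurements above): with renewals arriving at half the lease time, the request rate grows linearly with the number of VMs.

```python
def dhcp_renew_rate(num_vms: int, lease_seconds: int) -> float:
    """Average DHCP renewal requests per second generated by num_vms
    clients, given that each client renews at half the lease time."""
    renew_interval = lease_seconds / 2
    return num_vms / renew_interval

# With the default 120 s lease, 10,000 VMs renew every 60 seconds,
# producing ~167 requests/second -- already above the ~160 req/s
# that dnsmasq sustained in the lab test with no-ping enabled.
print(f"{dhcp_renew_rate(10_000, 120):.0f} requests/second")
```

In other words, at the default lease time a single dnsmasq instance is saturated by renewals alone well before you reach 10,000 VMs on a network, leaving no headroom for new instances requesting addresses.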

Actions you can take

Fortunately, there are some things you can do. If you’re using OpenStack with private networks whose address space is larger than a /24 (more than 256 addresses), you should consider tuning the default parameters for dnsmasq and for the network node itself.

  1. Increase the IP lease time to decrease the number of requests per second coming from VMs renewing their leases. Calculate the new lease time based on common sense, keeping in mind the average VM lifecycle. Note that setting the lease time to a very large value forces OpenStack to keep the IP marked as “used” in the database because of a bug: neutron will not release the IP until its own lease time in the database expires, even if the VM has been deleted.

  2. Increase the size of the MAC address table so it can serve at least 1,000 hosts. To do that, set the sysctl variables (usually in /etc/sysctl.conf) on the host where the DHCP agent runs. Optionally, you can do this on all networking-related nodes. The variables and their settings are:

    net.ipv4.neigh.default.gc_thresh1 = 1024
    net.ipv4.neigh.default.gc_thresh2 = 4096
    net.ipv4.neigh.default.gc_thresh3 = 8192
  3. Add the no-ping option to the default parameters for dnsmasq. This change lets dnsmasq serve more than 10-20 requests per second, because it no longer tries to ping each IP before actually allocating it. Be very careful with this option, though, if you’re using OpenStack as part of your infrastructure: for example, if you’re using provider networks and your VMs share a single L2 domain with other physical servers and equipment, IP conflicts are possible and can wreak havoc.
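Putting the three steps together, the changes amount to a few configuration edits. The fragments below are a sketch for an Icehouse-era deployment; exact file paths, option names, and suitable values may differ in your release and distribution.

```ini
# /etc/neutron/neutron.conf -- lease time (in seconds) that dnsmasq
# hands out; choose a value that fits your average VM lifecycle
# (item 1 above)
[DEFAULT]
dhcp_lease_duration = 86400

# /etc/neutron/dhcp_agent.ini -- point the agent at a custom
# dnsmasq configuration file
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf -- extra dnsmasq options
no-ping
```

After editing, load the sysctl values from item 2 with `sysctl -p` and restart neutron-dhcp-agent so that the per-network dnsmasq instances are respawned with the new options.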

Changes the Neutron community should think about

Unfortunately, there is no way for a user to solve the problem of 24-hour IP allocation in neutron; it must be solved by changes to neutron itself. The simple solution would be a configurable lease-time parameter in neutron or dhcp-agent, used as the allocation period in the neutron database. This looks perfect on the surface, but on closer inspection you realize it would significantly increase the load on neutron-api and the neutron database. So this is not the correct way to solve the problem.

Instead, neutron should simply remove IPs from the database on instance termination. This would solve all the problems with dynamic workloads on a cloud and allow the flawless reuse of IP addresses. [UPDATE: In fact, that is exactly the situation as of the OpenStack Icehouse release, where this problem has now been mitigated somewhat.]


As I promised, I covered only one small subsystem of OpenStack networking: the DHCP service. As you can see, it can cause a lot of pain if it is configured incorrectly, especially if you use the default values for dnsmasq options. The recommendations above should help you understand how to select specific dnsmasq options and how to tune them if necessary.
