

Automate bare metal server provisioning using Ironic (bifrost) and the ansible deploy driver

Guest Post - November 30, 2016
On our team, we mostly conduct research on OpenStack, so we use bare metal machines extensively. To make our lives somewhat easier, we've developed a set of simple scripts that enables us to back up and restore the current state of the file system on a server, and to switch between different backups very easily. The set of scripts is called multi-root (https://github.com/vnogin/multi-root).
Unfortunately, we had a problem: in order to use this tool, we had to have our servers configured in a particular way, and we faced several issues with manual provisioning:
  • It is not possible to set up more than one bare metal server at a time using a Java-based IPMI application
  • The Java-based IPMI application does not properly handle disconnection from the remote host due to connectivity problems (you have to start installation from the very beginning)
  • The bare metal server provisioning procedure was extremely time-consuming
  • For our particular case, in order to use multi-root functionality we needed to create software RAID and make required LVM configurations prior to operating system installation
To solve these problems, we decided to automate bare metal node setup, and since we are part of the OpenStack community, we decided to use bifrost instead of other provisioning tools. Bifrost was a good choice for us as it does not require other OpenStack components.

Lab structure

This is how we manage disk partitions and how we use software RAID on our machines:
[Diagram: bifrost lab structure]
As you can see, this example bare metal server has two physical disks. Those disks are combined into a RAID1 mirror, which is then partitioned by the operating system. The LVM partition is divided further, with each copy of an operating system image assigned to its own logical volume.
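For illustration, a layout like this could be built by hand roughly as follows; the disk names, volume group name, and sizes here are assumptions, and later in this post the deployment role performs the equivalent work for us:
    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # pvcreate /dev/md0
    # vgcreate vg0 /dev/md0
    # lvcreate -L 50G -n root1 vg0
    # lvcreate -L 50G -n root2 vg0
Each rootN logical volume can then hold its own copy of an operating system, which is what multi-root switches between.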
This is our network diagram:

In this case we have a single network to which our bare metal nodes are attached. Also attached to that network is the Ironic (bifrost) server. A DHCP server assigns IP addresses to the various instances as they're provisioned on the bare metal nodes, and also prior to the deployment procedure, so that we can bootstrap the destination server.
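Under the hood, bifrost manages this DHCP service with dnsmasq. With the values we'll configure later in this post, the relevant lines of /etc/dnsmasq.conf come out roughly like this (lab values shown; dhcp-option=3, the default gateway, is the line we add by hand in step 14 below):
    dhcp-range=172.16.166.20,172.16.166.50,12h
    dhcp-option=6,8.8.8.8,8.8.4.4
    dhcp-option=3,172.16.166.1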
Now let's look at how to make this work.

How to set up bifrost with ironic-ansible-driver

So let's get started.
  1. First, add the following line to the /root/.bashrc file:
    export LC_ALL="en_US.UTF-8"
  2. Ensure the operating system is up to date:
    # apt-get -y update && apt-get -y upgrade
  3. To avoid issues related to MySQL, we decided to install it prior to bifrost and set the MySQL password to "secret":
    # apt-get install git python-setuptools mysql-server -y
  4. Next, download bifrost and change into its directory:
    # mkdir -p /opt/stack
    # cd /opt/stack
    # git clone https://git.openstack.org/openstack/bifrost.git
    # cd bifrost
  5. We need to configure a few parameters related to localhost prior to the bifrost installation. Below, you can find an example of an /opt/stack/bifrost/playbooks/inventory/group_vars/localhost file; we write it with a quoted heredoc so the embedded quotes survive intact:
    # cat > /opt/stack/bifrost/playbooks/inventory/group_vars/localhost <<'EOF'
    ---
    ironic_url: "http://localhost:6385/"
    network_interface: "p1p1"
    ironic_db_password: aSecretPassword473z
    mysql_username: root
    mysql_password: secret
    ssh_public_key_path: "/root/.ssh/id_rsa.pub"
    deploy_image_filename: "user_image.qcow2"
    create_image_via_dib: false
    transform_boot_image: false
    create_ipa_image: false
    dnsmasq_dns_servers: 8.8.8.8,8.8.4.4
    dnsmasq_router: 172.16.166.14
    dhcp_pool_start: 172.16.166.20
    dhcp_pool_end: 172.16.166.50
    dhcp_lease_time: 12h
    dhcp_static_mask: 255.255.255.0
    EOF
    As you can see, we're telling Ansible where to find Ironic and how to access it, as well as the authentication information for the database so state information can be retrieved and saved. We're specifying the image to use, and the networking information.
    Notice that there's no default gateway for DHCP in the configuration above; we'll fix that manually after running the install.yaml playbook (see step 14).
  6. Install ansible and all of bifrost's dependencies:
    # bash ./scripts/env-setup.sh
    # source /opt/stack/bifrost/env-vars
    # source /opt/stack/ansible/hacking/env-setup
    # cd playbooks
  7. After that, let's install all packages that we need for bifrost (Ironic, MySQL, rabbitmq, and so on) ...
    # ansible-playbook -v -i inventory/localhost install.yaml
  8. ... and the Ironic staging drivers, which already include the merged patches that enable the Ironic ansible driver functionality:
    # cd /opt/stack/
    # git clone git://git.openstack.org/openstack/ironic-staging-drivers
    # cd ironic-staging-drivers/
  9. Now you're ready to do the actual installation.
    # pip install -e .
    # pip install "ansible>=2.1.0"
    You should see typical "installation" output.
  10. In the /etc/ironic/ironic.conf configuration file, add the "pxe_ipmitool_ansible" value to the list of enabled drivers. In our case, it's the only driver we need, so let's remove the other drivers:
    # sed -i '/enabled_drivers =*/c\enabled_drivers = pxe_ipmitool_ansible' /etc/ironic/ironic.conf 
  11. If you want to enable cleaning and disable disk shredding during the cleaning procedure, add these options to /etc/ironic/ironic.conf:
    automated_clean = true
    erase_devices_priority = 0
  12. Then restart the Ironic conductor service:
    # service ironic-conductor restart
  13. To check that everything was installed properly, execute the following command:
    # ironic driver-list | grep ansible
    | pxe_ipmitool_ansible | test |
    You should see the pxe_ipmitool_ansible driver in the output.
  14. Finally, add the default gateway to /etc/dnsmasq.conf (be sure to use the IP address for your own gateway).
    # sed -i '/dhcp-option=3,*/c\dhcp-option=3,172.16.166.1' /etc/dnsmasq.conf
Now that everything's set up, let's look at actually doing the provisioning.

How to use ironic-ansible-driver to provision bare-metal servers with custom configurations

Normally, we'd use a custom ansible deployment role that satisfies Ansible's requirements regarding idempotency, to prevent issues that can arise if a role is executed more than once, but because this is essentially a spike solution for us to use in the lab, we've relaxed that requirement. (We've also hard-coded a number of values that you certainly wouldn't in production.) Still, by walking through the process you can see how it works.
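To give a flavor of what such a role contains, here is a heavily trimmed, hypothetical sketch of the kind of tasks a deploy_custom.yaml playbook might run on the target node; the device and volume group names are illustrative only:
    - hosts: all
      tasks:
        # Build the software RAID1 mirror described in the lab structure section
        - name: create software RAID1 across both physical disks
          command: mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda /dev/sdb

        # Layer LVM on top of the mirror and carve out a root volume
        - name: set up LVM on top of the mirror
          command: "{{ item }}"
          with_items:
            - pvcreate /dev/md0
            - vgcreate vg0 /dev/md0
            - lvcreate -L 50G -n root1 vg0
Note that command tasks like these are not idempotent: re-running mdadm --create against an existing array fails, which is exactly the requirement we've relaxed here. The real role, downloaded in the first step below, does considerably more, including deploying the user image itself.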
  1. Download the custom ansible deployment role:
    # curl -Lk https://github.com/vnogin/Ansible-role-for-baremetal-node-provision/archive/master.tar.gz | tar xz -C /opt/stack/ironic-staging-drivers/ironic_staging_drivers/ansible/playbooks/ --strip-components 1
  2. Next, create an inventory file for the bare metal server(s) that need to be provisioned:
    # echo "---
      server1:
        ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
        ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
        uuid: 00000000-0000-0000-0000-000000000001
        driver_info:
          power:
            ipmi_username: IPMI_USERNAME
            ipmi_address: IPMI_IP_ADDRESS
            ipmi_password: IPMI_PASSWORD
            ansible_deploy_playbook: deploy_custom.yaml
        nics:
          -
            mac: 00:25:90:a6:13:ea
        driver: pxe_ipmitool_ansible
        ipv4_address: 172.16.166.22
        properties:
          cpu_arch: x86_64
          ram: 16000
          disk_size: 60
          cpus: 8
        name: server1
        instance_info:
          image_source: "http://172.16.166.14:8080/user_image.qcow2"" > /opt/stack/bifrost/playbooks/inventory/baremetal.yml
    
    # export BIFROST_INVENTORY_SOURCE=/opt/stack/bifrost/playbooks/inventory/baremetal.yml
    As you can see, we've added all of the information required to provision a bare metal node over IPMI. If needed, you can describe any number of bare metal servers here, and all of them will be enrolled and deployed later.
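    For example, a hypothetical second node could be added alongside server1 with its own UUID, MAC, IP, and IPMI addresses (placeholder values shown); its properties and instance_info would follow the same structure as server1's:
      server2:
        ipa_kernel_url: "http://172.16.166.14:8080/ansible_ubuntu.vmlinuz"
        ipa_ramdisk_url: "http://172.16.166.14:8080/ansible_ubuntu.initramfs"
        uuid: 00000000-0000-0000-0000-000000000002
        driver_info:
          power:
            ipmi_username: IPMI_USERNAME
            ipmi_address: IPMI_IP_ADDRESS_2
            ipmi_password: IPMI_PASSWORD
            ansible_deploy_playbook: deploy_custom.yaml
        nics:
          - mac: 00:25:90:a6:13:eb
        driver: pxe_ipmitool_ansible
        ipv4_address: 172.16.166.23
        name: server2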
  3. Next, you'll need to build a ramdisk for the Ironic ansible deploy driver and create a deploy image using DIB (disk image builder). Start by creating an RSA key that will be used for connectivity from the Ironic ansible driver to the bare metal host being provisioned:
    # su - ironic
    # ssh-keygen
    # exit
  4. Next set environment variables for DIB:
    # export ELEMENTS_PATH=/opt/stack/ironic-staging-drivers/imagebuild
    # export DIB_DEV_USER_USERNAME=ansible
    # export DIB_DEV_USER_AUTHORIZED_KEYS=/home/ironic/.ssh/id_rsa.pub
    # export DIB_DEV_USER_PASSWORD=secret
    # export DIB_DEV_USER_PWDLESS_SUDO=yes
  5. Install DIB:
    # cd /opt/stack/diskimage-builder/
    # pip install .
  6. Create the bootstrap and deployment images using DIB, and move them to the web folder:
    # disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 ironic-ansible -o ansible_ubuntu
    # mv ansible_ubuntu.vmlinuz ansible_ubuntu.initramfs /httpboot/
    # disk-image-create -a amd64 -t qcow2 ubuntu baremetal grub2 devuser cloud-init-nocloud -o user_image
    # mv user_image.qcow2 /httpboot/
  7. Fix file permissions:
    # cd /httpboot/
    # chown ironic:ironic *
  8. Now we can enroll and deploy our bare metal node using ansible:
    # cd /opt/stack/bifrost/playbooks/
    # ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
    Wait for the provisioning state to read "available"; a bare metal server needs to cycle through a few states, and can be cleaned along the way, if needed. During the enrollment procedure, the node's disks can be wiped by the shred command. This process takes a significant amount of time, so you can disable or fine-tune it in the Ironic configuration (as you saw above, where we enabled it). You can poll the node's state while you wait, as shown below.
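    For example, using the node name from our inventory:
    # ironic node-list
    # watch -n 30 "ironic node-show server1 | grep provision_state"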
  9. Now we can start the actual deployment procedure:
    # ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
    If deployment completes properly, you will see the provisioning state for your server as "active" in the Ironic node-list.
    +--------------------------------------+---------+---------------+-------------+--------------------+-------------+
    | UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
    +--------------------------------------+---------+---------------+-------------+--------------------+-------------+
    | 00000000-0000-0000-0000-000000000001 | server1 | None          | power on    | active             | False       |
    +--------------------------------------+---------+---------------+-------------+--------------------+-------------+
    
Now you can log in to the deployed server via ssh, using the login and password that we defined above during image creation (ansible/secret), and then, since the infrastructure it needs is now in place, clone the multi-root tool from GitHub.
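For example, using the address we assigned in the inventory above:
    # ssh ansible@172.16.166.22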

Conclusion

As you can see, bare metal server provisioning isn't such a complicated procedure. Using the Ironic standalone server (bifrost) with the Ironic ansible deploy driver, you can easily develop a custom ansible role for your specific deployment case and deploy any number of bare metal servers simultaneously, in a fully automated fashion.
I want to say thank you to Pavlo Shchelokovskyy and Ihor Pukha for their help and support throughout the entire process. I am very grateful to you both.
