This post is intended to walk someone through the process of establishing an external testing platform that is linked with the upstream OpenStack continuous integration platform. If you haven’t already, please do read the first article in this series that discusses the upstream OpenStack CI platform in detail. By the end of this article, you should have all the background information on the tools needed to establish your own linked external testing platform.
What Does an External Test Platform Do?
In short, an external testing platform enables third parties to run tests — ostensibly against an OpenStack environment that is configured with that third party’s drivers or hardware — and report the results of those tests on the code review of a proposed patch. It’s easy to see the benefit of this real-time feedback by taking a look at a code review that shows a number of these external platforms providing feedback. In this screenshot, you can see a number of Verified +1 and one Verified -1 labels added by external Neutron vendor test platforms on a proposed patch to Neutron:
Verified +1 and -1 labels added by external testing systems on a Neutron patch
Each of these systems, when adding a Verified label to a review, does so by adding a comment to the review. These comments contain links to artifacts from the external testing system’s test run for this proposed patch, as shown here:
Comments added to a review by the vendor testing platforms
The developer submitting a patch can use those links to investigate why their patch has caused test failures to occur for that external test platform.
Why Set Up an External Test Platform?
The benefits of external testing integration with upstream code review are numerous:
- A tight feedback loop
- The third party gets quick notifications that a proposed patch to the upstream code has caused a failure in their driver or configuration. The tighter the “feedback loop”, the faster fixes can be identified
- Better code coverage
- Drivers and plugins that may not be used in the default configuration for a project can be tested with the same rigor and frequency as drivers that are enabled in the upstream devstack VM gate tests. This prevents bitrot and encourages developers to maintain code that is housed in the main source trees.
- Increased consistency and standards
- Determining a standard set of tests that prove a driver implements the full or partial API of a project means that drivers can be verified to work with a particular release of OpenStack. If you’ve ever had a conversation with a potential deployer of OpenStack who wonders how they know that their choice of storage or networking vendor, or underlying hypervisor, actually works with the version of OpenStack they plan to deploy, then you know why this is a critical thing!
Why might you be thinking about how to set up an external testing platform? Well, a number of OpenStack projects have had discussions already about requirements for vendors to complete integration of their testing platforms with the upstream OpenStack CI platform. The Neutron developer community is ahead of the game, with more than half a dozen vendors already providing linked testing that appears on Neutron code reviews.
The Cinder project also has had discussions around enforcing a policy that any driver in the Cinder source tree must have tests run on each commit to validate that the driver is working properly. Similarly, the Nova community has discussed the same policy for hypervisor drivers in that project’s source tree. So, while this may be old news for some teams, hopefully this post will help vendors that are new to the OpenStack contribution world get integrated quickly and smoothly.
The Tools You Will Need
The components involved in building a simple linked external testing system that can listen to and notify the upstream OpenStack continuous integration platform are as follows:
- Jenkins CI
- The server that is responsible for executing jobs that run tests for a project
- Zuul
- A system that configures and manages event pipelines that launch Jenkins jobs
- Jenkins Job Builder (JJB)
- Makes construction/maintenance of Jenkins job config XML files a breeze
- Devstack-Gate and Nodepool Scripts
- A collection of scripts that constructs an OpenStack environment from source checkouts
I’ll be covering how to install and configure the above components to build your own testing platform using a set of scripts and Puppet modules. Of course, there are a number of ways that you can install and configure any of these components. You can install each component manually by following the install instructions in its documentation. However, I do not recommend that. The problem with manual installation and configuration is two-fold:
- If something goes wrong, you have to re-install everything from scratch. If you haven’t backed up your configuration somewhere, you will have to re-configure everything from memory.
- You cannot launch a new configuration or instance of your testing platform easily, since you have to manually set everything up again.
A better solution is to use a configuration management system, such as Puppet, Chef, Ansible, or SaltStack, to manage the deployment of these components, along with a Git repository to store configuration data. In this article, I will show you how to install an external testing system on multiple hosts or virtual machines using a set of Bash scripts and Puppet modules I have collected into a source repository on GitHub. If you don’t like Puppet or would just prefer to use a different configuration management tool, that’s totally fine. You can look at the Puppet modules in this repo for inspiration (and eventually I will write some Ansible scripts in the OpenStack External Testing project, too).
Before I go into the installation instructions, you will need to take care of a few things. Follow these detailed steps and you should be all good.
Getting an Upstream Service Account
In order for your testing platform to post review comments to Gerrit code reviews on openstack.org, you will need to have a service account registered with the OpenStack Infra team. See this link for instructions on getting this account.
In short, you will need to send an email to the OpenStack Infra mailing list that includes:
- The email address to use for the system account (must be different from any other Gerrit account)
- A short account username that will appear on code reviews
- (optional) A longer account name or description
- (optional but encouraged) Include your contact information (IRC handle, your email address, and maybe an alternate contact’s email address) to assist the upstream infrastructure team
- The public key for an SSH key pair that the service account will use for Gerrit access. Please note that there should be no newlines in the SSH key
Don’t have an SSH key pair for your Gerrit service account? You can create one like so:
```shell
ssh-keygen -t rsa -b 1024 -N '' -f gerrit_key
```
The above will produce a key pair: two files called gerrit_key and gerrit_key.pub. Copy the text of gerrit_key.pub into the email you send to the OpenStack Infra mailing list. Keep both files handy for use in the next step.
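The Infra team asks that the key contain no newlines, so it can help to flatten the public key before pasting it. This little helper is my own addition, not part of the original instructions:

```shell
# Write the public key as a single line, with all newlines removed,
# ready to paste into the registration email.
tr -d '\n' < gerrit_key.pub > gerrit_key.oneline
cat gerrit_key.oneline
```

Once the Infra team has created the account, you can also confirm the key works by running `ssh -i gerrit_key -p 29418 <your-username>@review.openstack.org gerrit version`, substituting the account username you requested.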
Create a Git Repository to Store Configuration Data
When we install our external testing platform, the Puppet modules are fed a set of configuration options and files that are specific to your environment, including the SSH private key for the Gerrit service account. You will need a place to store this private configuration data, and the ideal place is a Git repository, since additions and changes to this data will be tracked just like changes to source code.
I created a source repository on GitHub that you can use as an example. Instead of forking the repository, as you might normally do, I recommend instead just git clone’ing the repository to some local directory and making it your own data repository:
```shell
git clone git@github.com:jaypipes/os-ext-testing-data ~/mydatarepo
cd ~/mydatarepo
rm -rf .git
git init .
git add .
git commit -a -m "My new data repository"
```
Now you’ve got your own data repository to store your private configuration data and you can put it up in some private location somewhere — perhaps in a private organization in GitHub, perhaps on a Git server you have somewhere.
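If you don’t have a private Git server handy yet, you can rehearse the push with a local bare repository standing in for the private remote. The ~/private-remotes path below is just an example of mine, not anything the tooling expects:

```shell
# Create a bare repository to act as the private remote.
git init --bare ~/private-remotes/mydatarepo.git

# Point the data repository at it and push.
cd ~/mydatarepo
git remote add origin ~/private-remotes/mydatarepo.git
git push -u origin master
```

The same `git remote add` and `git push` commands work unchanged once you swap in the URL of a real private remote (a private GitHub repository, an internal Git server, and so on).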
Put Your Gerrit Service Account Private Key Into the Data Repository
The next thing you will want to do is add the SSH key pair for the upstream Gerrit service account you registered above to your data repository.

If you created a new key pair using the ssh-keygen command above, copy the gerrit_key file into your data repository.

If you did not create a new key pair (you used an existing key pair), or you created a key pair that wasn’t called gerrit_key, copy that key pair into the data repository, then open up the file called vars.sh and change the line in it that refers to gerrit_key so that it names your SSH private key file.
Set Your Gerrit Account Username
Next, open up the file vars.sh in your data repository (if you haven’t already), and in it replace jaypipes-testing with your Gerrit service account username.
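Before moving on, a quick sanity check — again my own addition, not part of the upstream scripts — can catch a forgotten placeholder:

```shell
# Fail loudly if vars.sh still contains the example Gerrit username
# from the sample data repository.
if grep -q 'jaypipes-testing' vars.sh; then
  echo 'vars.sh still contains the example Gerrit username' >&2
  exit 1
fi
echo 'vars.sh looks customized'
```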
Set Your Vendor Name in the Test Jenkins Job
Next, open up the file etc/jenkins_jobs/config/sandbox.yaml in your data repository. Find the following line:

```shell
echo "Hello world, this is MyVendor's Testing System"
```

and change MyVendor to your organization’s name.
Save Changes in Your Data Repository
OK, we’re done with the first changes to your data repository and we’re ready to install a Jenkins master node. But first, save your changes and push your commit to wherever you are storing your data repository privately:
```shell
git add .
git commit -a -m "Added Gerrit SSH key and username"
git push
```
Requirements for Nodes
On the nodes (hosts, virtual machines, or LXC containers) that you are going to install Jenkins master and slaves into, you will want to ensure the following:
- These basic packages are installed:
- Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/config file as well.
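A short preflight script — my own sketch, assuming stock tool names like git and wget and an SSH key at ~/.ssh/id_rsa (adjust for your setup) — can verify a node before you run the installer:

```shell
# Warn about anything the bootstrap steps below rely on that is missing.
for cmd in git wget; do
  command -v "$cmd" >/dev/null || echo "missing required command: $cmd" >&2
done

# Cloning your private data repository happens over SSH,
# so an SSH key should be present.
[ -e "$HOME/.ssh/id_rsa" ] || echo "no ~/.ssh/id_rsa found; copy over your SSH keys" >&2
```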
Setting up Your Jenkins Master Node
On the host or virtual machine (or LXC container) you wish to run the Jenkins Master node on, run the following:
```shell
git clone $YOUR_DATA_REPO data
wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh
bash install_master.sh
```
The above should create an SSL self-signed certificate for Apache to run Jenkins UI with, and then install Jenkins, Jenkins Job Builder, Zuul, Nodepool Scripts, and a bunch of support packages.
When Puppet completes, go ahead and open up the Jenkins web UI, which by default will be at http://$HOST_IP:8080. You will need to enable the Gearman workers that Zuul and Jenkins use to interact. To do this:
- Click the `Manage Jenkins` link on the left
- Click the `Configure System` link
- Scroll down until you see `Gearman Plugin Config`. Check the `Enable Gearman` checkbox.
- Click the `Test Connection` button and verify Jenkins connects to Gearman.
- Scroll down to the bottom of the page and click `Save`
Once you are done with that, it’s time to load up your Jenkins jobs and restart Zuul:
```shell
sudo jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
sudo service zuul restart
```
If you refresh the main Jenkins web UI front page, you should now see two jobs show up:
Jenkins Master Web UI Showing Sandbox Jenkins Jobs Created by JJB
Testing Communication Between Upstream and Your Master
Congratulations. You’ve successfully set up your Jenkins master. Let’s now test connectivity between upstream and our external testing platform using the simple sandbox-noop-check-communication job. By default, I set this Jenkins job to execute on the master node for the openstack-dev/sandbox project. Here is the project configuration in the example data repository’s etc/jenkins_jobs/config/sandbox.yaml:

```yaml
- project:
    name: sandbox
    github-org: openstack-dev
    node: master
    jobs:
      - sandbox-noop-check-communication
      - sandbox-dsvm-tempest-full:
          node: devstack_slave
```
Note that the sandbox-noop-check-communication job runs on the master by default. The sandbox-dsvm-tempest-full Jenkins job is configured to run on a node labeled devstack_slave, but we will cover that later when we bring up our Jenkins slave.
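For reference, the Jenkins Job Builder definition behind a no-op job like sandbox-noop-check-communication might look something like the sketch below. This is an illustration built from the echo line we edited in sandbox.yaml earlier, not the exact contents of the example repository:

```yaml
- job:
    name: sandbox-noop-check-communication
    node: master
    builders:
      - shell: |
          echo "Hello world, this is MyVendor's Testing System"
```

JJB expands each entry in a project’s jobs list into a full Jenkins job config XML from job definitions like this one.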
In our Zuul configuration, we have two pipelines: check and gate. There is only a single project listed in the layout.yaml Zuul project configuration file, the openstack-dev/sandbox project:

```yaml
projects:
  - name: openstack-dev/sandbox
    check:
      - sandbox-noop-check-communication
```
By default, the only job that is enabled is the sandbox-noop-check-communication Jenkins job, and it will get run whenever a patchset is created in the upstream openstack-dev/sandbox project, as well as any time someone adds a comment with the words “recheck no bug” or “recheck bug XXXXX”. So, let us create a sample patch to that project and check to see if the sandbox-noop-check-communication job fires properly.
Before we do that, let’s go ahead and tail the Zuul debug log, grepping for the term “sandbox”. This will show messages if communication is working properly.
```shell
sudo tail -f /var/log/zuul/debug.log | grep sandbox
```
OK, now create a simple test patch in sandbox. Do this on your development workstation, not your Jenkins master:
```shell
git clone email@example.com:openstack-dev/sandbox /tmp/sandbox
cd /tmp/sandbox
git checkout -b testing-ext
touch mytest
git add mytest
git commit -a -m "Testing comms"
git review
```
Output should look like so:
```
jaypipes@cranky:~$ git clone firstname.lastname@example.org:openstack-dev/sandbox /tmp/sandbox
Cloning into '/tmp/sandbox'...
remote: Reusing existing pack: 13, done.
remote: Total 13 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (13/13), done.
Resolving deltas: 100% (4/4), done.
Checking connectivity... done
jaypipes@cranky:~$ cd /tmp/sandbox
jaypipes@cranky:/tmp/sandbox$ git checkout -b testing-ext
Switched to a new branch 'testing-ext'
jaypipes@cranky:/tmp/sandbox$ touch mytest
jaypipes@cranky:/tmp/sandbox$ git add mytest
jaypipes@cranky:/tmp/sandbox$ git commit -a -m "Testing comms"
[testing-ext 51f90e3] Testing comms
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 mytest
jaypipes@cranky:/tmp/sandbox$ git review
Creating a git remote called "gerrit" that maps to:
    ssh://email@example.com:29418/openstack-dev/sandbox.git
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
remote: Processing changes: new: 1, done
remote:
remote: New Changes:
remote:   https://review.openstack.org/73631
remote:
To ssh://firstname.lastname@example.org:29418/openstack-dev/sandbox.git
 * [new branch]      HEAD -> refs/publish/master/testing-ext
```
Keep an eye on your tail’d Zuul debug log file. If all is working, you should see something like this:
```
2014-02-14 16:08:51,437 INFO zuul.Gerrit: Updating information for 73631,1
2014-02-14 16:08:51,629 DEBUG zuul.Gerrit: Change status: NEW
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Done adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Run handler awake
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Fetching trigger event
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Processing trigger event
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)
```
If you go to the link to the code review in Gerrit (the link in the output after you ran git review), you will see your Gerrit testing account has added a +1 Verified vote in the code review:
Successful communication between upstream and our external system
Congratulations. You now have an external testing platform that is receiving events from the upstream Gerrit system, triggering Jenkins jobs on your master Jenkins server, and writing reviews back to the upstream Gerrit system. The next article goes over adding a Jenkins slave to your system, which is necessary to run real Jenkins jobs that run devstack-based gate tests. Please do let me know what you think of both this article and the source repository of scripts to set things up. I’m eager for feedback and critique.
(This post was originally published on Join-Fu.)