Understanding the OpenStack CI System

This post describes in detail the upstream OpenStack continuous integration platform. In the process, I’ll be
describing the code flow in the upstream system — from the time the contributor submits a patch to Gerrit, all the
way through the creation of a devstack environment in a virtual machine, the running of the Tempest test suite
against the devstack installation, and finally the reporting of test results and archival of test artifacts.
Hopefully, with a good understanding of how the upstream tooling works, setting up your own linked external testing
platform will be easier.

Some History and Concepts

Over the past four years, there has been a steady evolution in the way that the source code of OpenStack projects is
tested and reviewed. I remember when we used Bazaar for source control and Launchpad merge proposals for code review.
There was no automated or continuous testing to speak of in those early days, which put pressure on core reviewers
to do testing of proposed patches locally. There was also no standardized integration test suite, so often a change
in one project would inadvertently break another project.

Thanks to the work of many contributors, particularly those patient souls in the OpenStack Infrastructure team,
today there is a robust platform supporting continuous integration testing for OpenStack and Stackforge projects. At
the center of this platform are the Jenkins CI servers, the Gerrit git and patch review server, and the Zuul gating
system.

The Code Review System

When a contributor submits a patch to one of the OpenStack projects, they push their code to the git server managed
by Gerrit running on review.openstack.org. Typically, contributors use the git-review Git plugin, which simplifies
submitting to a git server managed by Gerrit. Gerrit controls which users or groups are allowed to propose code,
merge code, and administer code repositories under its management. When a contributor pushes code to
review.openstack.org, Gerrit creates a Changeset representing the proposed code. The original submitter and any
other contributors can push additional amendments to that Changeset, and Gerrit collects all of the changes into the
Changeset record. Here is a shot of a Changeset under review. You can see a number of patches (changes) listed in
the review screen. Each of those patches was an amendment to the original commit.
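
To make that concrete, here is a minimal sketch of the contributor side of this flow using git-review (the branch
name and commit message are made up for illustration):

git clone https://git.openstack.org/openstack/cinder
cd cinder
git checkout -b bug/1234567
# ...hack, hack, hack...
git commit -a -m "Fix volume deletion race"
git review                  # push the change to Gerrit for review
# Address review comments, then amend the same commit and push again;
# Gerrit adds the new patch to the existing Changeset:
git commit -a --amend
git review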


Individual patches amend the changeset

For each patch in Gerrit, there are three sets of “labels” that may be applied to the patch. Anyone can comment on a
Changeset and/or review the code. A review is shown on the patch in the Code-Review column in the patch “label
matrix”:

The “label matrix” on a Gerrit patch

Non-core team members may give the patch a Code-Review label of +1 (Looks good to me), 0 (No strong opinion), or -1
(I would prefer you didn’t merge this). Core team members can give any of those values, plus +2 (Looks good to me,
approved) and -2 (Do not submit).

The other columns in the label matrix are Verified and Approved. Only non-interactive users of Gerrit, such as
Jenkins, are allowed to add a Verified label to a patch. The external testing platform you will set up is one of
these non-interactive users. The value of the Verified label will be +1 (check pipeline tests passed), -1 (check
pipeline tests failed), +2 (gate pipeline tests passed), or -2 (gate pipeline tests failed).
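
Gerrit exposes an SSH command interface for exactly this kind of non-interactive voting. As a sketch (the account
name, log URL, and change/patchset numbers are made up, and the exact flags vary across Gerrit versions and label
configurations):

ssh -p 29418 my-ci-account@review.openstack.org \
    gerrit review --verified +1 \
    --message "'Build succeeded: http://ci.example.com/job/42'" \
    12345,6    # change number 12345, patchset 6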

Only members of the OpenStack project’s core team can add an Approved label to a patch. It is either set to +1
(Approved) or not set at all, appearing as a check mark in the Approved column of the label matrix:


An approved patch

Continuous Integration Testing

Continuous integration (CI) testing is the act of running tests that validate a full application environment on a
continual basis — i.e. when any change is proposed to the application. Typically, when talking about CI, we are
referring to tests that are run against a full, real-world installation of the project. This type of testing, called
integration testing, ensures that proposed changes to one component do not cause failures in other components. This
is especially important for complex multi-project systems like OpenStack, with non-trivial dependencies between
subsystems.

When code is pushed to Gerrit, a series of jobs is triggered that runs tests against the proposed code. Jenkins is
the server that executes and manages these jobs. It is a Java application with an extensible architecture that
supports plugins that add functionality to the base server. We will delve into one such plugin, the Gerrit plugin,
in the second article in this series, on creating an external linked testing platform.

Each job in Jenkins is configured separately. Behind the scenes, Jenkins stores this configuration information in an
XML file in its data directory. You may manually edit a Jenkins job as an administrator in Jenkins. However, in a
testing platform as large as the upstream OpenStack CI system, doing so manually would be virtually impossible and
fraught with errors. Luckily, there is a helper tool called Jenkins Job Builder (JJB) that constructs these XML
configuration files after reading a set of YAML files and job templating rules. We will describe JJB later in the
article.
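
You can also run JJB locally to see exactly what XML it would generate, without touching a live Jenkins; a quick
sketch (the config path and output directory are illustrative):

pip install jenkins-job-builder
jenkins-jobs test path/to/jenkins_job_builder/config -o /tmp/jjb-out
ls /tmp/jjb-out    # one generated job configuration per file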

The “Gate”

When we talk about “the gate”, we are talking about the process by which code is kept out of a set of source code
branches if certain conditions are not met.

OpenStack projects use a method of controlling merges into certain branches of their source trees called the
Non-Human Gatekeeper model [1]. Gerrit (the non-human) is configured to allow merges by users in a group called
“Non-Interactive Users” to the master and stable branches of git repositories under its control. The upstream main
Jenkins CI server, as well as Jenkins CI systems running at third party locations, are the users in this group.

So, how do these non-interactive users actually decide whether to merge a proposed patch into the target branch?
Well, there is a set of tests (different for each project) — unit, functional, integration, upgrade, style/linting —
that is marked as “gating” that particular project’s source trees. For most of the OpenStack projects, there are
unit tests (run in a variety of different supported versions of Python) and style checker tests for HACKING and
PEP8 compliance. These unit and style tests are run in Python virtualenvs managed by the tox testing utility.
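
These are the same commands you can run locally in any project checkout to reproduce the unit and style gates:

pip install tox
tox -epy27    # unit tests under Python 2.7, as gate-<project>-python27 does
tox -epep8    # PEP8/HACKING style checks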

In addition to the Python unit and style tests, there are a number of integration tests that are executed against
full installations of OpenStack. The integration tests are simply subsets of the Tempest integration test suite.
Finally, many projects also include upgrade and schema migration tests in their gate tests.

How Upstream Testing Works

Graphically, the upstream continuous integration gate testing system works like this:


gerrit-zuul-jenkins-flow

We step through this event flow in detail below, referencing the numbered steps in bold.

The Gerrit Event Stream and Zuul

After a contributor has pushed (1a) a new patch to a changeset or a core team member has reviewed the patch and
added an Approved +1 label (1b), Gerrit pushes out a notification event to its event stream (2). This event stream
can have a number of subscribers, including the Gerrit Jenkins plugin and Zuul. Zuul was developed to manage the
many complex graphs of interdependent branch merge proposals in the upstream system. It monitors in-progress jobs
for a set of related patches and will pre-emptively cancel any dependent test jobs that would not succeed due to a
failure in a dependent patch [2].

In addition to this dependency monitoring, Zuul is responsible for constructing the pipelines of jobs that should be
executed on various events. One of these pipelines is called the “gate” pipeline, appropriately named for the set of
jobs that must succeed in order for a proposed patch to be merged into a target branch.

Zuul’s pipelines are configured in a single file called layout.yaml in the OpenStack-Infra config project. Here’s a
snippet from that file that constructs the gate pipeline:

  - name: gate
    description: Changes that have been approved by core developers...
    failure-message: Build failed. For information on how to proceed...
    manager: DependentPipelineManager
    precedence: low
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$
    start:
      gerrit:
        verified: 0
    success:
      gerrit:
        verified: 2
        submit: true
    failure:
      gerrit:
        verified: -2

Zuul listens to the Gerrit event stream (3), and matches the type of event to one or more pipelines (4). The
matching conditions for the gate pipeline are configured in the trigger:gerrit: section of the YAML snippet above:

    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$

The above indicates that Zuul should fire the gate pipeline when it sees reviews with an Approved +1 label, and when
it sees a review comment consisting of “reverify” followed by a bug identifier (per the comment_filter regex above).
Note that there is a similar pipeline that is fired when a new patchset is created or when a review comment is made
with the word “recheck”. This pipeline is called the check pipeline. Look in the layout.yaml file for the
configuration of the check pipeline.
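
These trigger comments are ordinary Gerrit review comments, so you can leave them in the web UI or over the same SSH
interface sketched earlier (the change, patchset, and bug numbers here are hypothetical):

ssh -p 29418 myuser@review.openstack.org \
    gerrit review --message "'recheck bug 1234567'" 12345,6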

Once the appropriate pipeline is matched, Zuul executes (5) that particular pipeline for the project
that had a patch proposed.

“But wait, hold up…”, you may be asking yourself, “how does Zuul know which Jenkins jobs to execute for a particular
project and pipeline?”. Great question! :)

Also in the layout.yaml file, there is a section that configures which Jenkins jobs should be run for each project.
Let’s take a look at the configuration of the gate pipeline for the Cinder project:

  - name: openstack/cinder
    template:
      - name: python-jobs
...snip...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm

Each of the lines in the gate: section indicates a specific Jenkins job that should be run in the gate pipeline for
Cinder. In addition, there is the python-jobs item in the template: section. Project templates are a way that Zuul
consolidates configuration of many similar jobs into a simple template configuration. The project template
definition for python-jobs looks like this (still in layout.yaml):

project-templates:
  - name: python-jobs
...snip...
    gate:
      - 'gate-{name}-docs'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'

So, on determining which Jenkins jobs should be executed for a particular pipeline, Zuul sees the python-jobs
project template in the Cinder configuration and expands that to execute the following Jenkins jobs:

  • gate-cinder-docs
  • gate-cinder-pep8
  • gate-cinder-python26
  • gate-cinder-python27
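
Conceptually, the expansion is plain string substitution of the template variables. Zuul and JJB do this internally
in Python; the shell sketch below just illustrates the idea:

name=cinder
for tmpl in 'gate-{name}-docs' 'gate-{name}-pep8' \
            'gate-{name}-python26' 'gate-{name}-python27'; do
    echo "${tmpl//\{name\}/$name}"    # prints gate-cinder-docs, etc.
done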

Jenkins Job Creation and Configuration

I previously mentioned that the configuration of an individual Jenkins job is stored in a config.xml file in the
Jenkins data directory. Now, at last count, the upstream OpenStack Jenkins CI system has just shy of 2,000 jobs. It
would be virtually impossible to manage the configuration of so many jobs using human-based processes. To solve this
dilemma, the Jenkins Job Builder (JJB) python tool was created. JJB consumes YAML files that describe both
individual Jenkins jobs as well as templates for parameterized Jenkins jobs, and writes the config.xml files for all
Jenkins jobs that are produced from those templates. Important: note that Zuul does not construct Jenkins jobs. JJB
does that. Zuul simply configures which Jenkins jobs should run for a project and a pipeline.

There is a master projects.yaml file in the same directory that lists the “top-level” definitions of jobs for all
projects, and it is in this file that many of the variables that are used in job template instantiation are defined
(including the {name} variable, which corresponds to the name of the project).

When JJB constructs the set of all Jenkins jobs, it reads the projects.yaml file, and for each project, it sees the
“name” attribute of the project, and substitutes that name attribute value wherever it sees {name} in any of the
jobs that are defined for that project. Let’s take a look at the Cinder project’s definition in the projects.yaml
file here:

- project:
    name: cinder
    github-org: openstack
    node: bare-precise
    tarball-site: tarballs.openstack.org
    doc-publisher-site: docs.openstack.org

    jobs:
      - python-jobs
      - python-grizzly-bitrot-jobs
      - python-havana-bitrot-jobs
      - openstack-publish-jobs
      - gate-{name}-pylint
      - translation-jobs

You will note that one of the items in the jobs section is called python-jobs. This is not a single Jenkins job, but
a job group. A job group definition is merely a list of jobs or job templates. Let’s take a look at the definition
of the python-jobs job group:

- job-group:
    name: python-jobs
    jobs:
      - '{name}-coverage'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'
      - 'gate-{name}-python33'
      - 'gate-{name}-pypy'
      - 'gate-{name}-docs'
      - 'gate-{name}-requirements'
      - '{name}-tarball'
      - '{name}-branch-tarball'

Each of the items listed in the jobs section of the python-jobs job group definition above is a job template. Job
templates are expanded in the same way that Zuul project templates and JJB job groups are. Let’s take a look at one
such job template in the list above, called gate-{name}-python27.

(Hint: all Jenkins jobs for any OpenStack or Stackforge project are described in the OpenStack-Infra Config
project’s modules/openstack_project/files/jenkins_job_builder/config/ directory.)

The python-jobs.yaml file in the modules/openstack_project/files/jenkins_job_builder/config directory contains the
definition of common Python project Jenkins job templates. One of those job templates is gate-{name}-python27:

- job-template:
    name: 'gate-{name}-python27'
... snip ...
    builders:
      - gerrit-git-prep
      - python27:
          github-org: '{github-org}'
          project: '{name}'
      - assert-no-extra-files

    publishers:
      - test-results
      - console-log

    node: '{node}'

Looking through the above job template definition, you will see a section called “builders”. The builders section of
a job template lists (in sequential order of expected execution) the executable sections or scripts of the Jenkins
job. The first executable section in the gate-{name}-python27 job template is called “gerrit-git-prep”. This
executable section is defined in macros.yaml, which contains a number of commonly-run scriptlets. Here’s the entire
gerrit-git-prep macro definition:

- builder:
    name: gerrit-git-prep
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/gerrit-git-prep.sh https://review.openstack.org http://zuul.openstack.org git://git.openstack.org"

So, gerrit-git-prep is simply executing a Bash script called “gerrit-git-prep.sh” that is stored in the
/usr/local/jenkins/slave_scripts/ directory. Let’s take a look at that file. You can find it in the
modules/jenkins/files/slave_scripts/ directory [3] in the same OpenStack Infra Config project:

#!/bin/bash -e

# Positional arguments passed in by the builder macro above:
GERRIT_SITE=$1
ZUUL_SITE=$2
GIT_ORIGIN=$3

# ... snip ...

set -x
if [[ ! -e .git ]]
then
    ls -a
    rm -fr .[^.]* *
    if [ -d /opt/git/$ZUUL_PROJECT/.git ]
    then
        git clone file:///opt/git/$ZUUL_PROJECT .
    else
        git clone $GIT_ORIGIN/$ZUUL_PROJECT .
    fi
fi
git remote set-url origin $GIT_ORIGIN/$ZUUL_PROJECT

# attempt to work around bugs 925790 and 1229352
if ! git remote update
then
    echo "The remote update failed, so garbage collecting before trying again."
    git gc
    git remote update
fi

git reset --hard
if ! git clean -x -f -d -q ; then
    sleep 1
    git clean -x -f -d -q
fi

if [ -z "$ZUUL_NEWREV" ]
then
    git fetch $ZUUL_SITE/p/$ZUUL_PROJECT $ZUUL_REF
    git checkout FETCH_HEAD
    git reset --hard FETCH_HEAD
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
else
    git checkout $ZUUL_NEWREV
    git reset --hard $ZUUL_NEWREV
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
fi

if [ -f .gitmodules ]
then
    git submodule init
    git submodule sync
    git submodule update --init
fi

The purpose of the script above is simple: check out the source code of the proposed Gerrit changeset and ensure
that the source tree is clean of any cruft from a previous run of a Jenkins job that may have run in the same
Jenkins workspace. The concept of a workspace is important. When Jenkins runs a job, it must execute that job from
within a workspace. The workspace is really just an isolated shell environment and filesystem directory that has a
set of shell variables export’d inside it that indicate a variety of important identifiers, such as the Jenkins job
ID, the name of the source code project that has triggered a job, the SHA1 git commit ID of the particular proposed
changeset that is being tested, etc [4].
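
For example, a builder scriptlet can inspect a few of the parameters Zuul exports into the job environment (variable
names taken from the Zuul parameters documentation referenced in [4]):

echo "project:         $ZUUL_PROJECT"
echo "target branch:   $ZUUL_BRANCH"
echo "zuul ref:        $ZUUL_REF"
echo "change,patchset: $ZUUL_CHANGE,$ZUUL_PATCHSET"
echo "commit SHA1:     $ZUUL_COMMIT"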

The next builder in the job template is the “python27” builder, which has two variables injected into it:

      - python27:
          github-org: '{github-org}'
          project: '{name}'

The github-org variable is a string of the already existing {github-org} variable value. The project variable is
populated with the value of the {name} variable. Here’s how the python27 builder is defined (in macros.yaml):

- builder:
    name: python27
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/run-unittests.sh 27 {github-org} {project}"

Again, just a wrapper around another Bash script, called run-unittests.sh, in the /usr/local/jenkins/slave_scripts
directory. Here’s what that script looks like:

# Positional arguments: the Python version ("27"), the org, and the project
version=$1
org=$2
project=$3

# ... snip ...

venv=py$version

# ... snip ...

source /usr/local/jenkins/slave_scripts/select-mirror.sh $org $project

tox -e$venv
result=$?

echo "Begin pip freeze output from test virtualenv:"
echo "======================================================================"
.tox/$venv/bin/pip freeze
echo "======================================================================"

if [ -d ".testrepository" ] ; then
# ... snip ...
    .tox/$venv/bin/python /usr/local/jenkins/slave_scripts/subunit2html.py ./subunit_log.txt testr_results.html
    gzip -9 ./subunit_log.txt
    gzip -9 ./testr_results.html
# ... snip ...
fi

# ... snip ...

In short, for the Python 2.7 builder, the above runs the command tox -epy27 and then runs a prettifying
script and gzips up the results of the unit test run. And that’s really the meat of the Jenkins job. We will discuss
the publishing of the job artifacts a little later in this article, but if you’ve gotten this far, you have delved
deep into the mines of the OpenStack CI system. Congratulations!

Devstack-Gate and Running Tempest Against a Real Environment

OK, so unit tests running in a simple Jenkins slave workspace are one thing. But what about Jenkins jobs that run
integration tests against a full set of OpenStack endpoints, interacting with real database and message queue
services? For these types of Jenkins jobs, things are more complicated. Yes, I know. You probably think things have
been complicated up until this point, and you’re right! But the simple unit test jobs above are just the tip of the
proverbial iceberg when it comes to the OpenStack CI platform.

For these complex Jenkins jobs, an additional set of tools is added to the mix:

  • Nodepool — Provides virtual machine instances to Jenkins masters for running complex,
    isolation-sensitive Jenkins jobs
  • Devstack-Gate — Scripts that create an OpenStack environment with Devstack, run tests against
    that environment, and archive logs and results

Assignment of a Node to Run a Job

Different Jenkins jobs require different workspaces, or environments, in which to run. For basic unit or
style-checking test jobs, like the gate-{name}-python27 job template we dug into above, not much more is needed than
a tox-managed virtualenv running in a source checkout of the project with a proposed change. However, for Jenkins
jobs that run a series of integration tests against a full OpenStack installation, a workspace with significantly
more resources and isolation is necessary. For these latter types of jobs, the upstream CI platform uses a pool of
virtual machine instances. This pool of virtual machine instances is managed by a tool called nodepool. The virtual
machines run in both HP Cloud and Rackspace Cloud, who graciously donate these instances for the upstream CI system
to use. You can see the configuration of the Nodepool-managed set of instances here.

Instances that are created by Nodepool run Jenkins slave software, so that they can communicate with the upstream
Jenkins CI master servers. A script called prepare_node.sh runs on each Nodepool instance. This script just git
clones the OpenStack Infra config project to the node, installs Puppet, and runs a Puppet manifest that sets up the
node based on the type of node it is. There are bare nodes, nodes that are meant to run Devstack to install
OpenStack, and nodes specific to the Triple-O project. The node type that we will focus on here is the node that is
meant to run Devstack. The script that runs to prepare one of these nodes is prepare_devstack_node.sh, which in turn
calls prepare_devstack.sh. This script caches all of the repositories needed by Devstack, along with Devstack
itself, in a workspace cache on the node. This workspace cache is used to enable fast reset of the workspace that is
used during the running of a Jenkins job that uses Devstack to construct an OpenStack environment.
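
The cached repositories are what let a job reset its workspace quickly. The sketch below mirrors the trick visible
in gerrit-git-prep.sh earlier: clone from the local cache first, then point origin at the canonical server
(openstack/nova is just an example project):

# (run inside an empty workspace directory)
git clone file:///opt/git/openstack/nova .
git remote set-url origin git://git.openstack.org/openstack/nova
git remote update    # fetch only what the local cache was missing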

Devstack-Gate

The Devstack-Gate project is a set of scripts that are executed by certain Jenkins jobs that need to run integration
or upgrade tests against a realistic OpenStack environment. Going back to the Cinder project configuration in the
Zuul layout.yaml file:

  - name: openstack/cinder
    template:
      - name: python-jobs
... snip ...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm
... snip ...

Note the gate-tempest-dsvm-full line. That Jenkins job is one such job that needs an isolated workspace with a full
OpenStack environment running on it. Note that “dsvm” stands for “Devstack virtual machine”.

Let’s take a look at the JJB configuration of the gate-tempest-dsvm-full job:

- job-template:
    name: '{pipeline}-tempest-dsvm-full{branch-designator}'
    node: '{node}'
... snip ...
    builders:
      - devstack-checkout
      - shell: |
          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh
      - link-logs

    publishers:
      - devstack-logs
      - console-log

The devstack-checkout builder is simply a Bash script macro that looks like this:

- builder:
    name: devstack-checkout
    builders:
      - shell: |
          #!/bin/bash -xe
          if [[ ! -e devstack-gate ]]; then
              git clone git://git.openstack.org/openstack-infra/devstack-gate
          else
              cd devstack-gate
              git remote set-url origin git://git.openstack.org/openstack-infra/devstack-gate
              git remote update
              git reset --hard
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              git checkout master
              git reset --hard remotes/origin/master
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              cd ..
          fi

All the above is doing is git clone’ing the devstack-gate repository into the Jenkins workspace; if the
devstack-gate repository already exists, it checks out the latest from the master branch.

Returning to our gate-tempest-dsvm-full JJB job template, we see the remaining part of the builder is a
Bash scriptlet like so:

          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh

Not all that complicated. It exports some environment variables and copies the devstack-vm-gate-wrap.sh script out
of the devstack-gate repo that was clone’d in the devstack-checkout macro to the work directory, and then runs that
script.

The devstack-vm-gate-wrap.sh script is responsible for setting even more environment variables and then calling the
devstack-vm-gate.sh script, which is where the real magic happens.

Construction of OpenStack Environment with Devstack

The devstack-vm-gate.sh script is responsible for constructing a full OpenStack environment and running integration
tests against that environment. To construct this OpenStack environment, it uses the excellent Devstack project.
Devstack is an elaborate series of Bash scripts and functions that clones each OpenStack project’s source code into
/opt/stack/new/$project [5], runs python setup.py install in each project checkout, and starts each relevant
OpenStack service (e.g. nova-compute, nova-scheduler, etc.) in a separate Linux screen session.

Devstack’s creation script (stack.sh) is called from the script after creating the localrc file that stack.sh uses
when constructing the Devstack environment.
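
If you bring up Devstack yourself, you can poke at those services directly. A small sketch, assuming Devstack’s
default screen session name of “stack”:

screen -ls         # list sessions; look for one named "stack"
screen -x stack    # attach to the shared session, one window per service
# Inside screen: Ctrl-a " lists the windows (n-cpu, n-sch, c-vol, ...);
# Ctrl-a d detaches without stopping any service.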

Execution of Integration Tests Against an OpenStack Environment

Once the OpenStack environment is constructed, the devstack-vm-gate.sh script continues on to run a series of
integration tests:

    cd $BASE/new/tempest
    if [[ "$DEVSTACK_GATE_TEMPEST_ALL" -eq "1" ]]; then
        echo "Running tempest all test suite"
        sudo -H -u tempest tox -eall -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?
    elif [[ "$DEVSTACK_GATE_TEMPEST_FULL" -eq "1" ]]; then
        echo "Running tempest full test suite"
        sudo -H -u tempest tox -efull -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?
    # ... snip ...

You will note that the $DEVSTACK_GATE_TEMPEST_FULL Bash environment variable was set to “1” in the
gate-tempest-dsvm-full Jenkins job builder scriptlet.

sudo -H -u tempest tox -efull triggers the execution of Tempest’s integration test suite. Tempest is the collection
of canonical OpenStack integration tests that are used to validate that OpenStack APIs work according to spec and
that patches to one OpenStack service do not inadvertently cause failures in another service.
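
You can run the same tox environments by hand against a Devstack installation. A sketch, assuming the Tempest
checkout sits where the gate scripts put it ($BASE/new/tempest, with BASE set to /opt/stack):

cd /opt/stack/new/tempest
tox -efull -- --concurrency=2      # the full suite, as the gate runs it
tox -eall -- tempest.api.volume    # or select a subset of tests by regex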

If you are curious what actual commands are run, you can check out the tox.ini file in Tempest:

[testenv:full]
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepostiory bug: https://bugs.launchpad.net/testrepository/+bug/1208610
commands =
  bash tools/pretty_tox.sh '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|thirdparty|cli)) {posargs}'

In short, the above runs the Tempest API, scenario, CLI, and thirdparty tests.

Archival of Test Artifacts

The final piece of the puzzle is archiving all of the artifacts from the Jenkins job execution. These artifacts
include log files from each individual OpenStack service running in Devstack’s screen sessions, the results of the
Tempest test suite runs, as well as echo’d output from the devstack-vm-gate* scripts themselves.

These artifacts are gathered together by the devstack-logs and console-log JJB publisher macros:

- publisher:
    name: console-log
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              copy-console: true
              copy-after-failure: true

- publisher:
    name: devstack-logs
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              source: 'logs/**'
              keep-hierarchy: true
              copy-after-failure: true
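
Zuul exports $LOG_PATH into the job environment, and the scp publishers above mirror the job’s logs into that
hierarchy on static.openstack.org. The sketch below shows the general shape of the resulting archive URL; the exact
layout is an assumption based on how logs.openstack.org URLs looked at the time, and every value is made up:

ZUUL_CHANGE=60323; ZUUL_PATCHSET=4
ZUUL_PIPELINE=gate; JOB_NAME=gate-tempest-dsvm-full; BUILD_ID=a1b2c3d
LOG_PATH="${ZUUL_CHANGE: -2}/${ZUUL_CHANGE}/${ZUUL_PATCHSET}/${ZUUL_PIPELINE}/${JOB_NAME}/${BUILD_ID}"
echo "http://logs.openstack.org/${LOG_PATH}"   # where the artifacts end up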

Conclusion

I hope this article has helped you understand a bit more how the OpenStack continuous integration platform works.
We’ve stepped through the flow through the various components of the platform, including which events trigger what
actions in each component. You should now have a good idea how the various parts of the upstream CI infrastructure
are configured and where to look in the source code for more information.

The next article in this series discusses how to construct your own external testing platform that is linked with
the upstream OpenStack CI platform. Hopefully, this article has provided most of the background information you need
to understand the steps and tools involved in building that external testing platform.


[1] — The link describes and illustrates the non-human gatekeeper model with Bazaar, but the same concept is
applicable to Git. See the OpenStack GitWorkflow pages for an illustration of the OpenStack-specific model.
[2] — Zuul really is a pretty awesome bit of code kit. Jim Blair, the author, does an excellent job of explaining
the merge proposal dependency graph and how Zuul can “trim” dead-end branches of the dependency graph in the Zuul
documentation.
[3] — Looking for where a lot of the “magic” in the upstream gate happens? Take an afternoon to investigate the
scripts in this directory. :)


[4] — The Gerrit Jenkins plugin and Zuul export a variety of workspace environment variables into the Jenkins jobs
that they trigger. If you are curious what these variables are, check out the Zuul documentation on parameters.
[5] — The reason the projects are installed into /opt/stack/new/$project is because the current HEAD of the target
git branch for the project is installed into /opt/stack/old/$project. This is to allow an upgrade test tool called
Grenade to test upgrade paths.

 

Originally published at Join-Fu.