Cloud users today often also need to be cloud innovators: upgrading components, integrating new hardware, scaling up, and making lots of configuration changes. But how can you be sure the changes you make don’t break anything?
Testing using Tempest is (part of) the answer. Tempest, OpenStack’s official test suite, is a powerful tool for running a set of functional tests against an OpenStack cluster. Tempest automatically runs against every patch in every project of OpenStack, which lets us avoid merging changes that break things.
Historically, however, there have been a few roadblocks to using Tempest. Accessing its power reminds me of being back at university, learning assembler by coding inline within a high-level language IDE like Turbo Pascal. It's certainly great to have the IDE there to run the system assembler for you, link the code, and report errors. But, all in all, you'd prefer to be writing Pascal or another high-level language.
Tempest complexity can make it difficult and time-consuming to use. Tempest also lacks certain features useful for benchmarking, like the ability to generate simulated loads and present results in graph form.
The solution for these problems is Rally, an OpenStack project to create a framework for validating, performance testing and benchmarking OpenStack at scale with Tempest. Rally automatically installs and configures Tempest, and automates running Tempest tests. In contrast to Tempest, which is installed and executed individually on each cluster, Rally can verify a huge number of clouds — just add clouds as deployments to Rally, then easily switch between them. Rally’s benchmarking engine can automatically perform tests under simulated user loads. Results of these tests and benchmarks are saved in Rally’s database. You can review them immediately or schedule many tests over time to see how configuration changes are affecting your cloud.
In this article, we’re going to look at how to use Tempest and Rally, but before we get to the fun part, here are the main reasons why we’ve integrated Rally with Tempest:
Most OpenStack projects use CI/CD systems that employ Tempest as a ‘gate’ for automated testing of committed changes. So it makes sense for us to help developers use Tempest more widely and with greater ease.
Benchmarking with Tempest tests is efficient:
Tempest already implements a multiplicity of tests, so there’s no need to recreate the same tests in Rally. Less code duplication means a better world.
Verification should not be limited to testing with a single user. Rally can generate load for Tempest. In other words, Rally Benchmark can launch Tempest tests with a variable number of (simulated) active users.
Debugging Tempest and OpenStack. If there are race conditions in the gates, developers can run the same test (or set of tests) under load in their own environments, which makes it much easier to track down the source of the race condition.
Sounds reasonable. But you’ll never know how good Rally is until you try it. So let’s go hands-on!
First of all, you should download Rally from its GitHub repository and run the install_rally.sh script to install it.
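A sketch of that step (the repository URL reflects the project’s current home and may differ from the one in use when this article was written):

```shell
# Clone Rally and run the bundled installer, which sets up
# Rally and its dependencies
git clone https://github.com/openstack/rally.git
cd rally
./install_rally.sh
```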
Rally lets you add clouds in two ways. You can source the openrc credentials of your cloud and create the rally deployment from your environment, like this:
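For example (the openrc arguments and the deployment name "existing" are placeholders for your own values):

```shell
# Load the cloud's credentials into OS_* environment variables,
# then register the cloud as a Rally deployment from that environment
source openrc admin admin
rally deployment create --fromenv --name=existing
```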
Or you can describe your cloud in JSON (see link for instructions and details) and offer that to `rally deployment create` using the --file argument:
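A minimal sketch of such a file and its use; the endpoint and credentials below are placeholders, and the full schema is described in the Rally documentation:

```shell
# Describe an existing cloud in JSON, then register it with Rally
cat > existing.json <<'EOF'
{
    "type": "ExistingCloud",
    "auth_url": "http://example.net:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "demo"
    }
}
EOF
rally deployment create --file=existing.json --name=existing
```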
After adding your deployment, you can use `rally deployment list` to display a list of all known deployments.
Use the command `rally deployment check` to see if Rally has been deployed correctly, and to determine which Rally services are available:
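Putting these commands together (the subcommand for switching the active deployment varies slightly between Rally versions):

```shell
# Show all registered deployments, switch the active one,
# and confirm Rally can reach the cloud's services
rally deployment list
rally deployment use existing
rally deployment check
```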
As mentioned earlier, Rally can customize Tempest for a huge number of clouds. To decrease the time required for cloning the Tempest repository, Rally clones it once, and then uses a local repository to reproduce it on each cloud.
The command `rally-manage tempest install` takes care of cloning the repository, generates the configuration file for Tempest and installs the virtual environment with all dependencies.
The command `rally verify start` launches auto-verification. This command expects only one argument: the test set name. If this argument is not specified, smoke tests (the default) will be executed. Output of this command is similar to the Tempest output.
Valid test set names include: full, smoke, baremetal, compute, data_processing, identity, image, network, object_storage, orchestration, telemetry, and volume.
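For example (the exact flag spelling for selecting a set may vary between Rally versions):

```shell
# Run the default smoke set, then a specific test set by name
rally verify start
rally verify start --set=compute
```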
Results of verification tests are stored in Rally’s database, so you can view and compare them. Lists of verifications can be displayed with the command `rally verify list`.
Detailed information about a single run can be displayed with two commands: `rally verify show` and `rally verify detailed`. The latter also displays tracebacks for failed tests.
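A typical inspection session looks like this, where `<verification-uuid>` is a placeholder for a UUID taken from the list output:

```shell
# List past verification runs, then drill into one of them
rally verify list
rally verify show --uuid <verification-uuid>
rally verify detailed --uuid <verification-uuid>
```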
In addition to installing, configuring and running Tempest, Rally will also store and display results for multiple tests, under simulated user loads.
Let’s run benchmarking for the test scenario tempest.api.identity.admin.v3.test_domains.DomainsTestXML.test_create_update_delete_domain, with simulated loads ranging from 2 to 10 users. Once the test is finished, we’ll generate an HTML file of the result, including graphs. Information about the JSON/YAML file used to define this task can be found locally in rally/doc/samples/tasks, or on the OpenStack wiki.
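A sketch of such a task file and how to run it, modeled on the TempestScenario samples shipped with Rally; the runner parameters and report flag are illustrative, and the exact schema may differ between Rally versions:

```shell
# Define a benchmark task that runs one Tempest test repeatedly
# under concurrent simulated users, then run it and build an HTML report
cat > tempest_domains.json <<'EOF'
{
    "TempestScenario.single_test": [
        {
            "args": {
                "test_name": "tempest.api.identity.admin.v3.test_domains.DomainsTestXML.test_create_update_delete_domain"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            }
        }
    ]
}
EOF
rally task start tempest_domains.json
rally task report --out=report.html
```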
When you deploy and run a cloud, you must cope with constant change, so how do you verify each change? Tempest is a good solution but is cumbersome to use; Rally makes it simple. Rally provides benchmarking for OpenStack, and now, thanks to Tempest integration, the scenario library is huge. Will your patch make some operations faster? Is your deployment as fast as your competitor’s? Why not find out with Rally?