Building QA test environments with OpenStack

on February 27, 2014
Evgeniya Shumakher is a Business Analyst at Mirantis and the author of two proposals for the OpenStack Summit this Spring in Atlanta, How to Avoid Picking Problems that OpenStack Can’t Solve and Is Your OpenStack Cloud Compliant?

According to last October’s OpenStack user survey, QA test environments are one of the top ten workloads running on OpenStack clouds. In this post, I’ll describe how staging environments are built, and explore ways that OpenStack can make this process easier and more efficient.

The Current State of the Art

Few would dispute that the most important requirement for a test environment is that it be a copy of the relevant production environment (though this is not, in fact, always 100% true – see below). Any other requirements and constraints emerge from your software usage scenarios and testing methodology.

Before going to production, different types of system tests need to be performed, from functional tests to parallel tests. For example, in pre-production testing, you might:

  • Run tests on the prior version of the software on the first staging environment;
  • Run tests on the new version of the software on the second staging environment; and
  • Compare the results.
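
As a minimal illustration of the comparison step (the test names and the pass/fail result format are invented for this sketch), the outcomes of the two staging runs could be diffed to flag regressions:

```python
# Compare pass/fail results from the two staging runs and flag
# regressions: tests that passed on the prior version but fail on
# the new one. The result format (test name -> bool) and the test
# names are simplifying assumptions for illustration.

def find_regressions(prior_results, new_results):
    """Return the names of tests that regressed between the two runs."""
    return sorted(
        name
        for name, passed in prior_results.items()
        if passed and not new_results.get(name, False)
    )

prior = {"login": True, "checkout": True, "search": True}
new = {"login": True, "checkout": False, "search": True}

print(find_regressions(prior, new))  # ['checkout']
```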

If your production version runs on baremetal, each test environment will need a dedicated server cluster. Hardware, network configuration, software settings and data (anonymized of course) should be mirrored from the production environment to both these test environments. Ideally, both test environments should be built on identical hardware and have as similar a networking configuration as possible.

In most cases, IT administrators manage and allocate such testing environments — testers can’t create on-demand staging environments any time they need them.

Given the above, it’s not surprising that everyone is struggling with testing.  Consider these facts:

  • It’s expensive. Building a hardware-based test environment costs a lot, especially if the software under test is mission-critical and/or requires many servers. And these costs are doubled for two environments. What’s more, that doesn’t even consider additional requirements that arise if you’re testing more than one application.
  • It’s difficult to maintain. Both test environments must remain faithful replicas of your production setup, and both must be kept in working order.
  • It’s difficult to scale. What if one more environment is needed, for UAT or usability testing, for example? More resources, more time, more dependencies.
  • There’s no self-service for the test team — they’re heavily dependent on the IT team.

Moving Forward with OpenStack

It would be much better if testing environments were:

  • Low cost,
  • Easier and faster to configure and maintain,
  • Easier and faster to scale, and
  • Available in a self-service framework.

For those who know what OpenStack offers, it sounds like a perfect match.  OpenStack is open source, free, and can be deployed on cheap commodity hardware.  It lets you preconfigure components of a test suite and complete, virtualized test environments, then duplicate these rapidly at any desired scale.  Finally, OpenStack has a self-service portal (Horizon) where users can access on-demand resources for:

  • Provisioning virtual servers (Nova)
  • Creating/attaching storage volumes (Cinder)
  • Configuring networks (Neutron)
  • Assembling cloud application infrastructures (Heat)

Is The Cloud For You?

Before starting to think about cloud design, you need to ask yourself:

  • Can the application be run in a virtual environment?
  • Can the application be tested in a virtual environment?

If the answer to either question is no, then cloud (and OpenStack, of course) may not be for you. Bear in mind, though, that this is a conservative rule, with more than a few edge-case exceptions. For example, it’s possible to build quite useful cloud-based test environments behind a production instance that runs on baremetal. Doing so, however, demands a deep understanding of the application(s) under test, and the technical ability to:

  • Do base-platform comparison testing, and develop protocols that let you map (for example) cloud-based performance to baremetal performance, when these are quite different.
  • Adapt the cloud-based test environment to enable comparable test coverage despite performance and other differences — for example, by tuning test-database sizes, request rates and other variables so that the cloud-based environment provides test coverage similar to the baremetal solution in most areas.
  • Be conservative, and know when your test system is not modeling the production system well. (Often the case with pure load testing.)
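
One hedged sketch of such a mapping protocol (all benchmark figures here are invented for illustration): run the same reference benchmarks on both platforms, derive a scaling factor from the paired timings, and use it to translate results measured in the cloud test environment into baremetal estimates:

```python
# Derive a cloud-to-baremetal scaling factor from paired reference
# benchmark runs, then use it to translate times measured in the
# cloud test environment into baremetal estimates. All figures are
# hypothetical; a real protocol would average many runs per workload
# and validate the factor per workload type.

def scaling_factor(baremetal_times, cloud_times):
    """Average per-benchmark ratio of baremetal to cloud run time."""
    ratios = [b / c for b, c in zip(baremetal_times, cloud_times)]
    return sum(ratios) / len(ratios)

# Reference benchmark run times in seconds (baremetal vs. cloud VM).
baremetal = [10.0, 42.0, 7.5]
cloud = [12.5, 52.5, 9.0]

factor = scaling_factor(baremetal, cloud)

# A request that takes 2.0 s in the cloud test environment maps to an
# estimated baremetal time of roughly factor * 2.0 seconds.
print(round(factor * 2.0, 2))  # 1.62
```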

Also remember: cloud is evolving very fast, and the relationship between VMs and underlying hardware is increasingly configurable, enabling ever-better fine-tuning of VM performance and characteristics. Some careful testing might be in order before you write cloud off as a possible solution for both production and testing.

If you know your application will perform well and test meaningfully in a cloud environment, then cloud (and probably OpenStack) is clearly a good choice.

Design Decisions

Let’s see how a cloud-based staging environment can transform the picture.

What would it take to replicate a stack that your application runs on in the virtual environment?

Let’s assume that you build an OpenStack cloud based on existing hardware.

The first, and most important thing you should think about is the cloud architecture, since many aspects must be taken into account. The more layers you have in a solution stack, the more careful you should be when designing each level. Here are some helpful tips.

First and foremost, if your application is “cloud ready”, you can’t go wrong with HA. Even if your cloud won’t run production applications, it needs to support pre-production testing, and should therefore be highly available. (For more information, you can check out Mirantis OpenStack’s implementation of HA.)

You should also check out a concise rundown of methods for optimizing compute performance in OpenStack-based cloud environments.

Other design decisions, such as:

  • Network configuration
    • How many networks will you need?
    • How will VM traffic be transmitted?
    • Will VMs need Internet access, and how will it be provided?
  • OpenStack components configuration
    • Which components should be installed?
    • Where should OpenStack services be installed (on a Compute node, on a Controller node, or on some sort of dedicated node, e.g., a Storage node)?

… depend on software characteristics, specific data communications and traffic-shaping requirements (e.g., for high bandwidth, low latency, QoS, response time) and testing workflow. Try to find answers to questions such as:

  • How production data is copied to the test environment
  • How often the test environment is created or updated

It will be helpful to assemble a list of requirements and translate them into architectural decisions.

Images

Once you’ve decided on your architecture, it’s time to start preparing VM images. Think about making:

  • A set of generic images with vCPU, vRAM, vHDD, guest OS and some fixed software installed and configured
  • A set of images with specific vCPU, vRAM, vHDD, perhaps exotic OSs and specific software installed and configured

The goal, of course, is to develop a library of images to enable rapid deployment of diverse test environments whose components are themselves configured in a disciplined way, pre-documented, and pre-tested.

Heat it!

How will you provide on-premise testing environments?

Provisioning can be done with Heat, the OpenStack component responsible for orchestrating cloud applications.

Heat offers a template mechanism. A Heat template describes the infrastructure for a cloud application: servers, volumes and their connections, and networking settings, including floating IPs, security groups, authentication settings, and so on. For automatic software configuration, Puppet or Chef can be used.

This means that one general Heat template can be created for different types of test environments, provided these have similar networking settings and virtual cluster configurations. Specific virtual test environment features can then be added either with a customized Heat template or by manual configuration.
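
As a concrete sketch, here is roughly what a minimal template for a one-server-plus-volume test environment might contain. It is expressed as a Python dict and serialized as JSON (a format Heat accepts alongside YAML); the image and flavor names are placeholders, and a real template would declare networks and more parameters:

```python
import json

# Minimal sketch of a Heat (HOT) template for a single-node test
# environment with an attached data volume. Image and flavor names
# are placeholders; a real template would parameterize more.
template = {
    "heat_template_version": "2013-05-23",
    "description": "Single-node test environment",
    "parameters": {
        "image": {"type": "string", "default": "ubuntu-test-base"},
        "flavor": {"type": "string", "default": "m1.medium"},
    },
    "resources": {
        "test_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": {"get_param": "image"},
                "flavor": {"get_param": "flavor"},
            },
        },
        "data_volume": {
            "type": "OS::Cinder::Volume",
            "properties": {"size": 10},  # size in GB
        },
        "volume_attachment": {
            "type": "OS::Cinder::VolumeAttachment",
            "properties": {
                "instance_uuid": {"get_resource": "test_server"},
                "volume_id": {"get_resource": "data_volume"},
            },
        },
    },
}

print(json.dumps(template, indent=2))
```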

With a cloud-based solution, testers can own their test environments: creating, managing and deleting them, without IT intervention.  IT, meanwhile, typically owns the hardware cluster and manages the cloud environment.

Case study

So far I’ve provided a generalized approach to building virtual test environments. Now, let’s look at a specific use-case.

Our subject is a web software company developing a consumer/business application. The app in question has basic availability requirements, and is likely to meet high levels of market demand at introduction. So the company needed to engineer a production platform offering HA reliability just short of mission-critical levels; and this solution needed to scale very rapidly and cleanly with traffic and demand. Scaling, moreover, would need to be clean in both directions – there are cyclic aspects to this client’s business that could make periodic infrastructure scale-backs necessary for cost-efficiency.

The application uses the standard LAMP stack of back-end technologies. At the opening of our study, the intended production environment was running on VMware: not baremetal, but still relatively high-cost due to licensing fees. Their test environment was also running on VMware, and was maintained by IT administrators.

The company had already identified some problems:

  • Their test team could not create on-premise staging environments on demand.
  • Their IT staff required too much time to service requests for new test environments.
  • It was difficult to scale a test environment and/or add a new one.
  • Costs for test environments, support and scaling up were high (partially due to the price of VMware licensing).

The company wanted to change their approach to building test environments. They wanted their new solution to fulfill these requirements:

  • Test engineers should be able to create and maintain a staging environment via a self-service portal.
  • The process of provisioning test environments should be rapid.
  • Test environments should be easy to scale and support.
  • Total cost of the solution should be low.
  • Because replicas of the production environment are useful to more than just the test team, other users, such as product consultants, should be able to create demo environments via the self-service portal.
  • Different test environments should be isolated.
  • The IT team should be able to control resource consumption for each test team.

The solution architect proposed building the test environment on OpenStack. The recommended approach was a 10-20 node OpenStack cluster with HA for the controller nodes. The resulting solution, when complete, delivers the following benefits, among others:

  • To provide isolation, each test environment runs in a separate tenant, or project.  For example, there’s one project for the test team, one for the consultant team, and so on. (See http://docs.openstack.org/trunk/openstack-ops/content/projects_users.html)
  • Heat templates enable rapid creation of new environments. The test team can create new Heat templates on its own.
  • Self-service is provided with OpenStack CLI/Horizon and Heat templates.
  • Total cost of the OpenStack solution is lower, because the company doesn’t have to pay for licenses and can use commodity hardware and open source software.
  • The test team owns their test environments. They can create and delete environments anytime they need to.
  • OpenStack lets IT set up quotas for each project. OpenStack’s Ceilometer component can be used for monitoring resource consumption of cloud instances, as well.
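
To illustrate the quota idea (the resource names mirror typical OpenStack compute quotas, but all figures here are invented), IT's per-project limits can be checked against a requested environment before it is created:

```python
# Check whether a requested test environment fits within a project's
# remaining quota. Resource names mirror typical OpenStack compute
# quotas (instances, cores, RAM in MB); all figures are hypothetical.

def fits_quota(quota, usage, request):
    """Return True if usage + request stays within quota for every resource."""
    return all(usage.get(r, 0) + request.get(r, 0) <= limit
               for r, limit in quota.items())

test_team_quota = {"instances": 20, "cores": 40, "ram": 81920}
current_usage = {"instances": 15, "cores": 30, "ram": 61440}

small_env = {"instances": 4, "cores": 8, "ram": 16384}
large_env = {"instances": 8, "cores": 16, "ram": 32768}

print(fits_quota(test_team_quota, current_usage, small_env))  # True
print(fits_quota(test_team_quota, current_usage, large_env))  # False
```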

Conclusion

Building test environments can be a very complicated process, its specifics dependent on the type of software under test, desired testing methodology, people responsible for maintenance of testing environments, and other factors.

Of course, not every test environment can be built on OpenStack, and not every type of test can be performed in a cloud-based test environment. As a cloud operating system, OpenStack has its own constraints and issues. For example, it has some performance issues, which need to be taken into account in designing and executing performance tests.

Hardware-dependent tests, such as recovery testing, can’t be done in a virtualized environment. Hardware emulators can sometimes be used, of course, but it’s not the same.

The biggest take-away is: plan carefully. With good engineering and process discipline, many QA organizations will be able to specify cloud-based test environment solutions that will be robust, representative of production infrastructure (whether or not this is cloud-based), and that will solve serious workflow, efficiency, quality and cost issues.

 
