Since Rackspace and NASA launched the OpenStack cloud-software initiative in July 2010, there have been two releases per year, beginning with the Austin release in October 2010 and continuing most recently with the Stein release in April 2019. As with any software deliverable in its infancy, OpenStack was difficult to install and administer, lacked usability and functionality in places, and had more than its share of defects.
Nearly nine years (and 19 releases) later, OpenStack has matured; it has improved in all areas, making it one of the leading choices for customers implementing a private cloud.
But OpenStack is still viewed as difficult to install and administer, as well as to use when managing cloud resources. The goal of this blog is to show that “OpenStack ain’t that tough,” especially after you’ve taken a class and been through the hands-on lab exercises.
Brief introduction to OpenStack
OpenStack is not a product. From the openstack.org web site: “The OpenStack project is a global collaboration of developers and cloud computing technologists producing the open standard cloud computing platform for both public and private clouds.” It’s backed by a vibrant community of developers and some of the biggest names in the industry. For example, companies such as Mirantis, Red Hat, SUSE, AT&T, Rackspace, Cisco, NetApp, and many more contribute to its development.
OpenStack is divided into many components, called projects, to provide IaaS (Infrastructure as a Service) cloud services. Each project provides a specialized service, with names such as Keystone (the Identity service), Nova (the Compute service), Glance (the Image service), Neutron (the Networking service), and so on.
OpenStack can be managed and operated from the Linux command line interface (CLI) or a web-based UI. The UI is provided by the Horizon component and is commonly called the Dashboard UI.
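For example, a first look at your cloud from the CLI might be as simple as the following (illustrative) commands, assuming you have sourced your credentials file:

```shell
# List your VM instances, then the networks available to you
openstack server list
openstack network list
```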
OpenStack is in production at many organizations worldwide, such as Walmart, T-Mobile, Target, Progressive Insurance, eBay, Cathay Pacific, Overstock.com, SkyTV, GE Healthcare, DirecTV, American Airlines, Adobe Advertising Cloud, AT&T, Verizon, Banco Santander, Volkswagen AG, Ontario Institute for Cancer Research, PayPal, and many more.
As with many software projects, OpenStack has had a perception of being difficult to install, configure, and use. For example, here are several user quotes from the April 2017 survey:
- “Deployment is still a nightmare of complexity and riddled with failure unless you are covered in scars from previous deployments.”
- Author’s comment: This is, perhaps, my favorite comment! It rings true for anyone who has been around OpenStack as long as I have: the only users who succeeded with an OpenStack deployment were those who had been through it before (several times). BTW, I have the scars from previous deployments. 😉
- “Deploying, maintaining and upgrading a production OpenStack-based cloud is no small feat. Luckily the technology itself has vastly improved over the last few years.”
- OpenStack’s core services “accomplish at least 80% of our actual needs. And once you learn the quirks, it’s rather simple to use and maintain.”
Perceptions are hard to overcome, even as OpenStack has matured. However, looking at the most recent OpenStack survey from April 2018, you can begin to see a change in the areas that users believe need more work or attention:
- Documentation needs better focus/needs to improve.
- When documentation tops the list of concerns, either the documentation is terrible or the code has improved enough that the docs are now the biggest pain point. In my experience, OpenStack’s documentation is no different from that of any other software deliverable. It’s definitely not terrible.
- Upgrades and user experience need work.
Over the last several releases, upgrades have received a lot of attention. For example, the following presentations discuss real-world use cases for upgrading the OpenStack infrastructure:
- OpenStack Summit, Berlin, November 2018: OVH presented their upgrade results from Juno to Newton – OVH Public Cloud: how Open infrastructure powers innovation
- Mirantis Webinar, October 2017: Get Control of Your Cloud Infrastructure, Upgrades, and LCM with MCP DriveTrain
- OpenStack Summit, Boston, May 2017: Mirantis presented a session using DriveTrain from Mirantis Cloud Platform (MCP) for upgrades – Point and Click versus CI/CD: A Real World Look at Better OpenStack Deployment, Sustainability, and Upgrades! This presentation covers a number of improvements. For example, one customer’s upgrade of the control plane for OpenStack from Mitaka to Ocata was completed in under 2 hours.
Code updates focused on improving usability and migration
As you can imagine, many updates go into every OpenStack release. Some are operational, some are additional functions, others are aimed at improving the process to upgrade from release to release. Here are just a few examples over the last several releases focusing on migration and usage:
- Improved user experience when recovering from storage system failures (Cinder).
- Token provider API has been refactored to have cleaner interfaces (Keystone).
- Migrated to fernet tokens – faster, eliminates database bloat and eases maintenance (Keystone).
- Maintenance of policy files is easier (Keystone).
- Support for zero-downtime upgrades (Neutron).
- Multiple bindings for compute-owned ports are supported, for better server live migration support (Neutron).
- Improvements were made to minimize network downtime during live migrations (Nova).
- The libvirt driver is now capable of live migrating between different types of networking backends, for example, Linux Bridge to Open vSwitch (OVS) (Nova).
- Fast Forward Upgrades: Preview release of support for upgrading from Newton to Queens (TripleO).
In addition to the focus on migration and usage, one OpenStack component gaining widespread acceptance is the Orchestration service, known as Heat. Heat provides an automated approach to deploying your cloud resources: VM instances, storage, networking, and more, through the use of text (YAML) files, called Heat templates.
Heat improves the overall operation of your cloud. Automation is always good: manual processes are error-prone, while using Heat to automate them reduces errors and deploys resources consistently every time.
Use case: Deploy an instance with SSH connectivity using Heat
Let’s look at a practical example: using Heat to automate the deployment of a VM instance. First, there are several operational considerations to be aware of. Do you need to connect to the instance, for example, over SSH? If the answer is yes, then:
- You need to allocate and associate a floating IP (FIP) address. A floating IP is a public IP address that, when associated with an instance, is mapped to the instance’s private IP address through NAT.
- You need to allow packets to flow (not be dropped) on port 22 (the default SSH port).
This is a very common use case for cloud users. Setting it up is a manual process with multiple steps … unless you automate with Heat.
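For reference, the manual CLI equivalent looks roughly like the following sketch (the names `testVM` and `AllowPingSSH` and the `<...>` placeholders are illustrative, not part of the original example):

```shell
# 1. Create a security group and open ICMP (ping) and TCP port 22 (SSH)
openstack security group create AllowPingSSH
openstack security group rule create --protocol icmp AllowPingSSH
openstack security group rule create --protocol tcp --dst-port 22 AllowPingSSH

# 2. Launch the instance with that security group
openstack server create --image <image> --flavor <flavor> \
    --network <private-net> --security-group AllowPingSSH testVM

# 3. Allocate a floating IP from the public network and associate it
openstack floating ip create public
openstack server add floating ip testVM <allocated-fip>
```

Six commands, several of which depend on output from the previous ones — exactly the kind of sequence worth automating.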
Time for an example! In the next section, we look at an example Heat template. After that, you will see how to use the Heat template to automate the creation of an instance, allowing SSH connections.
Create OpenStack Heat template (VMwithFIP.yaml)
Let’s look at a subset of an example Heat template. Remember, the Heat template is a text (YAML) file, and for the purpose of this discussion, we created a Heat template and named it VMwithFIP.yaml. The name is used when you deploy the template.
The focus of this discussion is on the resources required to automate the instance deployment with a floating IP and rule for SSH:
- First, define the OS::Nova::Server resource to create the VM instance. Include the security_groups property to reference the security group that is also created in this template, the_sg resource:
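A sketch of what that resource definition might look like (the resource name `my_instance` and the parameter names `image_name`, `flavor_name`, and `private_net` are illustrative; `the_sg` is the security group resource defined next):

```yaml
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: testVM
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      networks:
        - network: { get_param: private_net }
      security_groups:
        - { get_resource: the_sg }
```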
- Next, define the_sg resource to create the AllowPingSSH security group with two rules: one to allow ping (ICMP) requests, the other to allow ingress connections on port 22 (SSH):
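A sketch of the security group resource:

```yaml
  the_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: AllowPingSSH
      rules:
        # Rule 1: allow ping (ICMP) requests
        - protocol: icmp
        # Rule 2: allow ingress connections on port 22 (SSH)
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22
```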
- Next, use the OS::Neutron::FloatingIP resource to allocate a floating IP address from the public network:
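A sketch of the floating IP resource (the resource name `the_fip` is illustrative, and we assume the external network is named `public`):

```yaml
  the_fip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public
```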
- Lastly, use the OS::Neutron::FloatingIPAssociation resource to associate the floating IP address (floatingip_id) with the VM instance (port_id of the VM instance):
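A sketch of the association resource (the resource name `fip_assoc` is illustrative; the exact attribute path to the instance’s port can vary with the release and network name):

```yaml
  fip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: the_fip }
      # Look up the port Nova attached to the instance on the private network
      port_id: { get_attr: [my_instance, addresses, private-net, 0, port] }
```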
Next, let’s use the CLI to deploy the template, creating the VM instance with a floating IP address and a rule for SSH connectivity.
Deploy the Heat template (VMwithFIP.yaml)
From the CLI, issue a stack create command to create a Heat stack called mystack using the VMwithFIP.yaml Heat template:
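The command looks like this (stack and file names from the example above):

```shell
$ openstack stack create -t VMwithFIP.yaml mystack
```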
The instance, floating IP, and rule for SSH connections are being created.
When complete, retrieve the floating IP address. You’ll need to use it for the SSH session:
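One way to do that is to wait for the stack to reach CREATE_COMPLETE and then list the servers; the output columns are abbreviated here for readability:

```shell
$ openstack stack list
| mystack | CREATE_COMPLETE | ...

$ openstack server list
| Name   | Status | Networks                            |
| testVM | ACTIVE | private-net=10.0.0.24, 172.24.4.15  |
```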
The instance name is testVM. It has an IP address of 10.0.0.24, with a floating IP address of 172.24.4.15.
Next, use the floating IP to establish an SSH connection:
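A sketch of the SSH session (the login user depends on the image; `cirros` is assumed here):

```shell
$ ssh cirros@172.24.4.15
$ hostname
testVM
$ ifconfig eth0 | grep 'inet '
          inet addr:10.0.0.24 ...
```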
In this example, we issued the hostname and ifconfig commands to verify the instance name and IP address. We saw those values earlier when displaying the stack output.
Lastly, let’s verify the AllowPingSSH Security Group was created:
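For example (output abbreviated):

```shell
$ openstack security group list
| ID  | Name         |
| ... | AllowPingSSH |

$ openstack security group rule list AllowPingSSH
```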
Yes, it is that easy! In this blog, we showed you how to create a Heat template with several resources (instance, floating IP, and security group rule). Then we showed you how to deploy the Heat template and verify SSH connectivity.
Want to learn more? Take a class!
The Mirantis OpenStack curriculum was recently upgraded to the Rocky release (August 2018). Numerous changes were made to the content, with many new use cases added. All classes run OpenStack Rocky on Ubuntu 18.04 LTS with KVM/QEMU as the hypervisor.
What components are covered?
- Core components: Keystone, Glance, Neutron, Nova, Cinder, Horizon
- Additional components: Octavia (LBaaS), Heat, Telemetry (Ceilometer, Aodh, Gnocchi)
Use cases discussed in our OpenStack training include:
- Creating networks, subnetworks, and routers, including:
- The role of Linux namespaces in OpenStack networks
- Floating IPs, NAT tables, and Security Groups
- Creating/managing Load Balancer as a Service (LBaaS)
- Deploying and managing VM instances
- Deploying from an image (Glance)
- Deploying from a (Cinder) boot volume
- Re-sizing instances
- Understanding the impact of quotas
- Moving an instance between subnets
- Many practical Heat template examples
- Auto-scaling and load balancing of today’s cloud applications
- PLUS, OS250 includes a full day dedicated to installation and configuration of the most commonly used OpenStack components. This is a perfect way to ‘get your scars!’ More importantly, you are exposed to the configuration options and how to debug errors – skills that every cloud admin needs.
Wondering how you can enroll in a class?
OpenStack is definitely easier to operate/administer after you’ve taken a class and have some hands-on experience.
Use one of the following links to get more information, schedules, and enroll in a class:
For the complete Mirantis Training course catalog:
One final point
If you are looking to get an OpenStack certification, Mirantis has been certifying IT professionals and Linux system administrators for years. And, if you enroll in one of our OpenStack Bootcamps, you receive a voucher to take the certification exam at no additional cost!
Use the following link for more information or to enroll in an exam:
About the Author
Paul Quigley is a Technical Curriculum Developer/Trainer on the Mirantis Training team, with a current focus on OpenStack. Paul has been involved in cloud technologies for approximately 10 years and with OpenStack for approximately 7 years, along with experience using VMware and KVM.