OpenStack Icehouse: More Than Just PR

Learn more about What’s New in OpenStack Icehouse during Mirantis’ technical overview webcast this Thursday, April 24th.

Is your head swimming with Icehouse information yet?  By now you have certainly heard that the ninth version of OpenStack, Icehouse, has been released.  The most prominent features, such as rolling compute upgrades, improved object storage replication, tighter integration of networking and compute functionality, and federated identity services, have gotten a lot of airtime, but with more than 350 implemented blueprints, there’s a lot more below the surface of this particular iceberg.  Here at Mirantis, we’ve been doing a lot of work with Icehouse, and we’ll be going into some technical depth on many of its new features in our webinar on April 24.  In the meantime, we wanted to give you a taste of what’s available.

The marketing behind Icehouse stresses that it’s the ninth OpenStack release, likely as a way to counter the notion that OpenStack is somehow “immature” and not ready for enterprise environments.  In fact, the OpenStack Foundation has taken great pains to make sure that everyone knows (rightfully so) that this time around the needs of real-world users, and in particular operators, have been taken into account.  Many improvements, such as upgradability and better testing and consistency, have been made specifically with these users in mind.

Compute

The new “feature” that’s gotten the most attention in the Icehouse release is probably the availability of “rolling upgrades”, or “live upgrades”.  The idea here is that it’s possible to leave your Havana-based compute nodes running while you upgrade the database schema on your controller to Icehouse.  Bringing your compute nodes over is then just a matter of restarting them, rather than taking them down for a full upgrade.  While this doesn’t solve all of your upgrade problems, the ability to migrate without having to take down your nodes is significant.

Another area that’s receiving attention is the Nova scheduler; in addition to a new caching scheduler, designed to improve performance by caching the list of available hosts, Icehouse also provides a way to match image properties with host properties (through the AggregateImagePropertiesIsolation filter).  Most exciting in this area, however, is the notion of server groups, which enable you to easily set either affinity or anti-affinity for a set of resources.  For example, you might want to make sure that your database server and web application server run on different physical hosts in order to prevent performance issues under load.  (On the other hand, Cells, which aggregate multiple hosts under a single Nova API instance, haven’t received much attention since they were introduced in the Grizzly release last year.)
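To make the anti-affinity idea concrete, here is a toy sketch of the check a scheduler filter performs when placing a new member of a server group.  This illustrates the concept only; the function and variable names are hypothetical, not Nova’s actual filter code.

```python
# Toy sketch of an anti-affinity scheduler filter: a candidate host
# passes only if no existing member of the server group runs on it.
# All names here are illustrative, not real Nova internals.

def anti_affinity_hosts(candidate_hosts, group_members, instance_host_map):
    """Return the hosts that hold no instance from the server group."""
    occupied = {instance_host_map[i] for i in group_members if i in instance_host_map}
    return [h for h in candidate_hosts if h not in occupied]

hosts = ["compute1", "compute2", "compute3"]
group = ["db-vm"]                      # existing members of the group
placement = {"db-vm": "compute1"}      # where each instance currently runs

# A new web-server VM in the same anti-affinity group may only land on
# compute2 or compute3, never alongside db-vm on compute1.
print(anti_affinity_hosts(hosts, group, placement))
```

With an affinity policy the filter would simply invert the test, keeping only the hosts the group already occupies.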

Icehouse also saw the introduction of more stringent testing requirements, with hypervisors divided into Group A (full support, all tests in the gate), Group B (unit tests in the gate, functional tests in an external system, with warnings to reviewers), and Group C (minimal testing, with the potential for deprecation).  There had been talk of deprecating both the XenServer and Docker hypervisors, but the community has added enough testing to keep the former in Group B, and the Docker community is working on getting its hypervisor back into Group B, or even into Group A.  The PowerVM hypervisor has been completely removed from the tree.

Icehouse also includes other improvements to the data access layer intended to improve performance.

Storage

In the storage area, as far as Object Storage (Swift) goes, there’s been a lot of talk about SSYNC, or Swift-Sync, which is intended to solve the problems currently seen in RSYNC environments, where replication times increase significantly when directory listings are too large to hold in RAM.  SSYNC is based on hashes of the directories, so that only directories that differ are synced.  Currently a wrapper around RSYNC, it will eventually use Swift API commands exclusively, but as of now it is still considered experimental, and you shouldn’t use it in production.
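The hash-based approach can be sketched in a few lines: compute a digest per directory, then replicate only the directories whose digests disagree between replicas.  This is a minimal illustration of the idea, not SSYNC’s actual implementation.

```python
# Minimal sketch of hash-based replication: hash each directory's
# contents, then sync only the directories whose hashes differ.
# Illustrative only -- the real consistency machinery is far richer.
import hashlib

def dir_hash(entries):
    """Collapse a directory's (name, content) pairs into one digest."""
    h = hashlib.md5()
    for name, content in sorted(entries.items()):
        h.update(name.encode())
        h.update(content.encode())
    return h.hexdigest()

def dirs_to_sync(local, remote):
    """Return directories whose hashes differ between the two replicas."""
    return [d for d in local if dir_hash(local[d]) != dir_hash(remote.get(d, {}))]

local = {"a": {"obj1": "x"}, "b": {"obj2": "y"}}
remote = {"a": {"obj1": "x"}, "b": {"obj2": "stale"}}
print(dirs_to_sync(local, remote))  # only "b" is out of date
```

The payoff is that an in-sync directory costs one hash comparison instead of a full file-by-file walk.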

Icehouse also brings the advent of discoverable capabilities, where you can query a Swift proxy server to find out exactly what functionality is available, as well as account-level Access Control Lists, sync-realms (groups of servers within a cluster that agree to synchronize amongst themselves) and automatic retry on read failure, hiding the problem from the end user.
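Discoverability means a client can adapt to whatever middleware a given cluster actually runs.  The sketch below parses a hypothetical sample of the JSON capabilities document a Swift proxy returns, rather than contacting a live cluster; the specific features and limits shown are made up for illustration.

```python
# Sketch of inspecting a Swift proxy's discoverable capabilities.
# The JSON below is a hypothetical sample response, not taken from
# a real cluster; keys vary with the middleware a deployment enables.
import json

sample = json.loads("""
{
  "swift": {"max_object_name_length": 1024, "max_file_size": 5368709120},
  "slo": {"min_segment_size": 1048576},
  "tempurl": {"methods": ["GET", "HEAD", "PUT"]}
}
""")

def supports(capabilities, feature):
    """True if the proxy advertises the named feature/middleware."""
    return feature in capabilities

print(supports(sample, "slo"))   # static large objects advertised here
print(supports(sample, "bulk"))  # bulk middleware not enabled here
```

A client can branch on these answers instead of hard-coding assumptions about each cluster’s configuration.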

On the block storage side, Cinder introduces the concept of storage tiers by using volume types.  In Havana, you could move volumes from one backend to another — say, from an SSD to a traditional drive when a volume was no longer in heavy use — but in Icehouse, this concept has been expanded into “storage tiers”, where each volume has a volume “type”, and to move it to another tier you simply change its type.  Volumes can be grouped in terms of performance or other factors such as Quality of Service.
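A toy model makes the tiering idea clear: each volume carries a type, the type maps to a backend, and a retype is what moves the volume between tiers.  The tier names and backend mapping below are invented for illustration; in a real deployment the scheduler maps volume types to backends via configuration.

```python
# Toy model of Cinder-style storage tiers. Tier names and the
# type-to-backend mapping are hypothetical, purely for illustration.

TIER_BACKENDS = {"gold-ssd": "ssd-backend", "bronze-hdd": "hdd-backend"}

class Volume:
    def __init__(self, name, volume_type):
        self.name = name
        self.volume_type = volume_type

    @property
    def backend(self):
        # The volume's tier is implied entirely by its type.
        return TIER_BACKENDS[self.volume_type]

    def retype(self, new_type):
        """Changing the type implicitly moves the volume to a new tier."""
        self.volume_type = new_type

vol = Volume("archive-data", "gold-ssd")
vol.retype("bronze-hdd")   # data no longer hot: demote to spinning disk
print(vol.backend)
```

The point is that the user expresses intent (“this volume is now bronze”) and the placement follows, rather than the user micromanaging backends.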

Icehouse also provides improved disaster recovery capabilities, enabling the backing up, import, and export not just of data, but also of service metadata.  This way, you can return your system to precisely its pre-disaster (or more accurately, post backup) state.

Like Nova, Cinder also has more stringent testing requirements, with multiple drivers setting up external CI systems.

Finally, Glance has seen some improvements relating to sizing, quota calculation and NFS support.

Networking

Most sources will tell you that work in OpenStack Neutron in Icehouse was mostly related to stability, but there is one major new feature that’s not necessarily visible to the end user, but is a significant change for those working on the software.

Icehouse sees a switch over to the Modular Layer 2 (ML2) plugin, a framework that provides more flexibility in terms of providing access to various networking technologies.  Rather than providing huge monolithic core plugins, new plugins can be written to the much simpler ML2 framework by providing MechanismDrivers.  In Icehouse, MechanismDrivers exist for Open vSwitch, Linux Bridge, and Hyper-V, as well as for the OpenDaylight SDN controller.  This isn’t to say that other plugins aren’t available, however, and a number of vendor hardware-related plugins have been updated for Icehouse, including Brocade, Big Switch, Mellanox, and IBM.
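The structural shift ML2 represents can be sketched as a core plugin fanning each operation out to registered mechanism drivers.  The classes and method names below are a rough illustration of the pattern, not Neutron’s real driver interface.

```python
# Rough sketch of the ML2 idea: one core plugin dispatches each
# operation to pluggable mechanism drivers, instead of every networking
# technology shipping its own monolithic plugin. Class and method names
# are illustrative, not Neutron's actual interfaces.

class MechanismDriver:
    def create_port(self, port):
        raise NotImplementedError

class OVSDriver(MechanismDriver):
    def create_port(self, port):
        return f"ovs wired {port}"

class LinuxBridgeDriver(MechanismDriver):
    def create_port(self, port):
        return f"linuxbridge wired {port}"

class ML2Core:
    """Fan each call out to every registered mechanism driver."""
    def __init__(self, drivers):
        self.drivers = drivers

    def create_port(self, port):
        return [d.create_port(port) for d in self.drivers]

core = ML2Core([OVSDriver(), LinuxBridgeDriver()])
print(core.create_port("port-1"))
```

Supporting a new technology then means writing one small driver class, not reimplementing the whole plugin surface.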

Icehouse also brings tighter integration with Nova — it provides events so that instances can make certain that networking is available before continuing to boot, thus avoiding errors — and improved scheduling of virtual routers, which previously had difficulties when IP ranges overlapped.

Perhaps the biggest surprise, however, is the un-deprecation of Nova-Network.  While the plan had originally been to deprecate the service and remove it, in the Icehouse cycle it was determined that while Neutron had essentially reached feature parity with Nova-Network (the lack of which was why the older service hadn’t been removed before), there were situations in which Neutron simply wasn’t appropriate, and there was no good migration strategy for existing Nova-Network installs.  For these reasons, it was decided that Nova-Network would stay, and development on it would resume.

Other services

Of course OpenStack is more than Compute, Storage, and Networking, and the advancements in the various other OpenStack programs show that.  The Icehouse development cycle includes improvements such as:

  • Federated identity in Keystone, enabling users to use the same identity for both public and private clouds.  This feature is essential to the development of hybrid clouds.

  • The splitting of authorization and authentication functions in Keystone, enabling you to store identity and role information in different locations and datastore types.

  • An improved user experience in Horizon.

  • Automated scaling of resources using Heat, with better lifecycle management.

  • An Operator API in Heat for performing admin tasks.

  • A new events-based API in Ceilometer.

This is only a very small portion of the list, of course.

Database as a Service, and other new arrivals

Also new in Icehouse is the graduation of OpenStack Trove, Database as a Service.  Trove is interesting in that it’s the first OpenStack project that’s not actually involved in running an OpenStack cluster; instead, it’s a service that’s available to end users, making OpenStack more useful and relevant in a world where most applications involve a database of one kind or another.  Trove, which enables users to provision a database on demand, began with support for MySQL, and in the Icehouse release includes full support for MySQL and Percona, as well as experimental (partial) support for Cassandra, MongoDB, Redis, and Couchbase.

While Trove graduated to “integrated” status in the Icehouse cycle, three projects were accepted into “incubation”.  Marconi intends to provide a web-friendly message queuing system accessible by HTTP request.  Ironic is focused on bare-metal provisioning, enabling OpenStack to work with actual hardware rather than virtual machines.  Sahara (formerly called Savanna) is the Data Processing project, providing a simple way to provision a Hadoop cluster and execute Elastic Data Processing jobs on it.  Sahara is expected to graduate to integrated status in OpenStack’s next release, code-named Juno.

Whoa, that’s a lot. How can I get more information?

Even with all of that, we’ve just scratched the surface.  There are also upgrade considerations, as well as a look at what’s coming in Juno.

We’ll be sitting down with these and other features and showing you more about how they work and what they mean on Thursday, so be sure to register for the What’s New In OpenStack Icehouse webinar.
