Mirantis | The #1 Pure Play OpenStack Company

OpenStack Havana: GlusterFS, and what “improved support” really means

As I write this, OpenStack Havana has just been released, and as I mentioned last week, the number of changes and new features is staggering.  So to avoid overload, I thought I'd take a look at one small but significant piece of the puzzle. This week, as we get ready for our next webinar, "What's New in OpenStack Havana: A Technical Overview" (what? You haven't registered yet?), we're going to look at a term you see all over the place in release notes for a project like this: "improved support".

There’s no “one true meaning,” of course.  Let’s be honest, we developers frequently use “improved support” to mean anything from “we fixed a bunch of bugs” to “I really don’t have time to tell you everything that’s different, I have code to write.”  In this case, we’re going to look at what this phrase really means, using changes in how Gluster works in OpenStack Havana as an example.

OpenStack and GlusterFS

As you may or may not know, GlusterFS is an open source distributed file system that enables you to virtualize storage, enabling multiple physical disks on multiple machines to appear as a single filesystem and namespace.  GlusterFS has the ability to do automatic replication, distribution, and other useful features, but perhaps one of its most useful qualities is that it can be used with a number of plugins, or "translators" in the community's parlance, that enable it to act as a particular type of filesystem.  For example, one of the improvements in OpenStack's support of GlusterFS is a new block storage translator, written by IBM, that enables GlusterFS to provide block storage for QEMU/KVM-based hypervisors, and thus be used as a Cinder volume.
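To make that concrete, here's a sketch of what setting up a replicated GlusterFS volume looks like with the `gluster` CLI. The hostnames, brick paths, and volume name are placeholders, not part of any particular deployment:

```shell
# Create a 2-way replicated volume from one brick (a directory on a
# local disk) on each of two servers; GlusterFS handles the replication
gluster volume create myvolume replica 2 \
    server1:/export/brick1 server2:/export/brick1

# Start the volume so clients can mount it
gluster volume start myvolume

# Inspect the volume's type, bricks, and status
gluster volume info myvolume
```

Once the volume is started, any Gluster client sees the bricks on both machines as a single namespace.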

In OpenStack Grizzly, you had the ability to use a GlusterFS volume, but it was mounted through the FUSE module, as is traditionally the case for GlusterFS.  This setup was fine as long as you were just using it as a typical storage volume, or if you didn’t have many virtual machines running out of it.  However, this setup involved a lot of context switching between user space and kernel space, which led to an unacceptable degree of latency once you had more than 10 or so VMs.
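For reference, the FUSE-based setup described above boils down to an ordinary mount (server and volume names are placeholders). The latency problem comes from the path every I/O operation takes:

```shell
# Mount a GlusterFS volume via the FUSE client; every read and write on
# /mnt/gluster passes from the application into the kernel's FUSE module
# and back out to the user-space glusterfs client process -- a context
# switch per operation, which adds up quickly under many VMs
mount -t glusterfs server1:/myvolume /mnt/gluster
```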

Improved support for GlusterFS in OpenStack Havana

So now that we know where the baseline was, let’s look at this “improved support”.  In this case, it wasn’t a matter of bugs preventing you from using GlusterFS (unless you consider latency a bug).  In this case, it literally was a matter of improving the support that was already there, and here’s how it works.

GlusterFS has a native client library, libgfapi, created by Red Hat, so in order to make these volumes more responsive in OpenStack, IBM contributed a patch to QEMU providing libgfapi integration.  OpenStack Compute (Nova) is integrated with QEMU, which means you can get better performance by using the Nova-QEMU-libgfapi path rather than the old FUSE integration.
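With the libgfapi integration in place, QEMU can address an image on a Gluster volume directly with a gluster:// URL, bypassing the FUSE mount entirely. A sketch (server, volume, and image names are placeholders):

```shell
# Create a qcow2 image directly on a Gluster volume via libgfapi;
# no FUSE mount point is involved
qemu-img create -f qcow2 gluster://server1/myvolume/vm-disk.qcow2 10G

# Boot QEMU against the same image; I/O flows from the QEMU process
# straight to the Gluster servers, entirely in user space
qemu-system-x86_64 -drive file=gluster://server1/myvolume/vm-disk.qcow2,if=virtio
```

The difference from the FUSE path is that the QEMU process itself speaks the Gluster protocol, so the per-operation hop through the kernel's FUSE module disappears.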

In addition, since you can use GlusterFS to create a Cinder volume (thanks to that Block Device translator, also contributed by IBM), you can now point Cinder at GlusterFS and NFS backends.
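In practice, pointing Cinder at a GlusterFS backend comes down to a driver setting and a shares file. A minimal sketch, assuming the conventional file locations (check your distribution's packaging for the exact paths):

```ini
# /etc/cinder/cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
```

The shares file then lists one `host:/volume` entry per line, e.g. `server1:/myvolume`, and Cinder carves its volumes out of those Gluster volumes.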

And that leads to another improvement in how you can use GlusterFS with OpenStack.  The OpenStack Havana release now enables you to boot directly from a volume, rather than copying an image from the OpenStack Image Service (Glance), and because you can create a Cinder volume using GlusterFS, you can now boot from GlusterFS.
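Putting the pieces together, booting an instance from a Gluster-backed volume with the Havana-era clients looks roughly like this (the IDs, flavor, and names are placeholders):

```shell
# Create a bootable Cinder volume from a Glance image; with the
# GlusterFS driver configured, the volume data lives on Gluster
cinder create --image-id <glance-image-id> --display-name boot-vol 10

# Boot an instance directly from that volume as its root disk,
# skipping the usual image copy to the compute node
nova boot --flavor m1.small \
    --block-device-mapping vda=<cinder-volume-id>:::0 my-instance
```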

What “improved support” means to you 

OK, so we’ve discovered that in the case of GlusterFS and OpenStack Havana, “improved support” means new capabilities (use GlusterFS to create a Cinder volume), new features (boot from a volume that is, in the end, managed by GlusterFS) and improved performance (by using Nova-QEMU-libgfapi rather than FUSE).

So now I’m going to ask you: what does “improved support” mean to you?  What would you like it to mean?  Let us know!


8 Responses

  1. Richard

    It would be interesting to see some comparison between GlusterFS and Ceph, especially real life test as a cinder volume for open stack. I’d love to find a good option to have a shared block storage for openstack.

    October 28, 2013 00:21
  2. RWD

    I thought GlusterFS was mostly a dead legacy tech?

    Ceph seems to be superior in almost every way especially in uptake within OpenStack environments.

    October 30, 2013 03:12
  3. openstack-starter

If I want to create a Cinder volume using GlusterFS and boot an instance from it, what do I need to do in the nova.conf file?

    March 11, 2014 16:58

Continuing the Discussion

  1. Glusterfs replicated volume based Havana 2013.2 instances on Server With GlusterFS 3.4.1 Fedora 19 in two node cluster | Xen Virtualization on Linux and Solaris

    [...] view first nice article: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means/ and  [...]

November 2, 2013 06:25
  2. “Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN” on CentOS 6.5 with both Controller and Compute nodes each one having two Ethernet adapters per Andrew Lau | Xen Virtualization on Linux and Solaris

    […] back-ported  what allows native Qemu work directly with glusterfs 3.4.1 volumes.  Details here  http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means   . I am very thankful to Andrew Lau for sample of anwer-file for setups a kind of […]

December 30, 2013 22:53
  3. “Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN” on CentOS 6.5 with both Controller and Compute nodes each one having two Ethernet adapters per Andrew Lau | Xen Virtualization on Linux and Solaris

    […] : 956919 – Develop native qemu-gluster driver for Cinder. General concept may be seen here  http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means . I am very thankful to Andrew Lau for sample of anwer-file for setups a kind of “Controller […]

January 5, 2014 02:34
