OpenStack Cloud Folsom Webcast: Follow-up on your questions

Piotr Siwczak - October 08, 2012

Many thanks to all of you who attended our webcast on 'What's new in OpenStack Folsom' this past Thursday, October 4. There were more questions than we had time for in the allotted hour, so I transcribed them and did my best to address them here. If you had a question that didn't get answered, look for your initials next to the question below. And if you want to download the slides or view the recorded webcast, you can sign up for that here.

  1. IC asked “What can you say about Ceph or Gluster?”

    A: I can't speak with authority on Gluster. I know Ceph can be used as a replacement for Swift, and work is already underway to integrate it fully (Keystone support, etc.). Also, SUSE Cloud uses it as a distributed backend for the nova-volume service.
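
    For reference, Folsom's nova-volume includes an RBD driver that can point at a Ceph pool. A minimal sketch of the relevant nova.conf lines, assuming an existing Ceph cluster; the pool name "volumes" is just an example:

        # nova.conf -- back nova-volume with Ceph RBD (pool name is an example)
        volume_driver=nova.volume.driver.RBDDriver
        rbd_pool=volumes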

  2. MC asked “Not very clear what a host aggregate is. Could you possibly clarify?”

    A: Host aggregates are a way to group OpenStack resources that share a common set of features. For example, you might create a host aggregate out of compute nodes with a hardened configuration, or ones with some other feature in common (e.g., attached to the same storage array, which you need for live migration). The Folsom scheduler provides a filter that places users' instances on a particular aggregate; you instruct the scheduler to do so by passing the proper hints (the --hint option to nova boot), which that filter takes into account. See the sketch below.
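
    To make this concrete, here is a minimal sketch using the Folsom-era nova CLI. The aggregate, host, and flavor names are hypothetical, and instead of boot-time hints it uses the other common route: flavor extra specs matched by the AggregateInstanceExtraSpecsFilter (which must be enabled in scheduler_default_filters):

        # Create an aggregate of hardened compute nodes and tag it with metadata
        nova aggregate-create hardened-hosts nova
        nova aggregate-set-metadata 1 hardened=true    # aggregate id 1 from the create above
        nova aggregate-add-host 1 compute-01

        # Key a flavor to the same metadata; instances booted with this flavor
        # are scheduled onto hosts in the aggregate
        nova flavor-key m1.secure set hardened=true
        nova boot --flavor m1.secure --image <image-id> my-instance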

  3. MC asked “What kind of features do the real network devices (routers/switches) need to implement for working with the three Quantum network models? I mean features like VLAN tagging or something like that....”

    A: If you want VLAN tagging support, you need an 802.1Q-capable switch. For the flat model, I believe there are no special requirements on the devices. The same goes for GRE, since GRE runs on top of layer 3 (though you might run into firewall issues there).
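
    As an illustration, a minimal sketch of the Folsom Open vSwitch plugin configuration (ovs_quantum_plugin.ini) for VLAN mode; the physical network name, bridge, and VLAN range are examples, and the switch ports facing your nodes must trunk that range:

        [OVS]
        tenant_network_type = vlan
        # physnet1 and the 1000:1999 range are examples
        network_vlan_ranges = physnet1:1000:1999
        bridge_mappings = physnet1:br-eth1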

  4. AA asked “Is tunnel mode more powerful (more scalable, flexible, and easy to use) than VLAN mode?”

    A: AFAIK the tunnel_id field, which distinguishes different tenant networks, is 32 bits long (according to the openvswitch man pages), compared to only 12 bits for a VLAN ID; that allows roughly 4 billion networks instead of 4096. Also, to use VLANs you need a switch that supports 802.1Q, which is not the case for GRE links.
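
    For comparison with the VLAN snippet above, a minimal sketch of the same OVS plugin file in GRE mode (the tunnel ID range and local IP are examples):

        [OVS]
        tenant_network_type = gre
        enable_tunneling = True
        # 32-bit tunnel IDs allow far more than the 4096 VLAN segments; this range is an example
        tunnel_id_ranges = 1:1000
        local_ip = 10.0.0.2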

  5. AO asked “When is nova-volume and nova-network going to be totally removed from nova?”

    A: I am not exactly sure when they will be removed. You can check these threads about the proposed scenarios:
    nova-volume: https://lists.launchpad.net/openstack/msg14443.html
    nova-network: https://lists.launchpad.net/openstack/msg16127.html

  6. WC asked “When the L3 router is deployed, it seems all traffic has to go through the network node. Will this be a bottleneck?”

    A: It all depends on which plugin is used to implement it (e.g., I believe Nicira NVP implements its own gateway, which can be deployed in HA mode). AFAIK, the reference implementation with the L3 agent involves only a single network node with network namespaces support (more information here).
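
    For completeness, a minimal sketch of the Folsom l3_agent.ini settings involved (assuming the OVS plugin):

        # l3_agent.ini -- each router runs in its own network namespace on the network node
        use_namespaces = True
        interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver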

  7. WW asked “Is it easier in Folsom to manage compute resources that span multiple racks and multiple L2 domains from one set of controllers? Not multiple connections to the same compute resource, which could result in a major security issue. In my particular case, I mean that each compute resource only has a single connection”

    A: If you have one L2 domain per rack, then I am assuming you use a router to pass traffic between the racks. I believe spanning a single OpenStack installation across these domains is unsupported out-of-the-box right now, as you would need to map a specific fixed IP network to a specific rack. You could probably create two availability zones, one per rack, and then use the availability zone scheduler to confine a given tenant to a single availability zone (which would effectively land all of their instances on one rack). Host aggregates could also be used instead of availability zones to achieve the same effect; see the sketch below.
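
    A minimal sketch of the per-rack availability zone approach on Folsom; the zone names are hypothetical, and it assumes the per-host node_availability_zone flag as the way to place compute nodes into zones:

        # nova.conf on each compute node in rack 1
        node_availability_zone=rack1

        # nova.conf on each compute node in rack 2
        node_availability_zone=rack2

        # Pin an instance to one rack at boot time
        nova boot --flavor m1.small --image <image-id> --availability-zone rack1 my-instance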

  8. CD asked “Can Quantum be installed separately from the rest of the controller stack, say for HA of the networking without necessarily creating HA for the controller?”

    A: I guess that's one of the goals, but right now it doesn't seem mature enough to be deployed separately (this is my personal viewpoint only, and you might disagree).

  9. GC asked “Can you send the link to the Mirantis patch which allows for HA configuration of RabbitMQ?”

    A: Here you go: https://review.openstack.org/#/c/10305/
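
    For context, that patch teaches nova's kombu (RabbitMQ) driver about mirrored queues and multiple brokers; a minimal sketch of the resulting nova.conf settings (host names are examples):

        # nova.conf -- mirrored queues across a two-node RabbitMQ cluster
        rabbit_ha_queues=True
        rabbit_hosts=rabbit-01:5672,rabbit-02:5672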

  10. GC asked “How long until the LBaaS (Load Balancing as a Service) Equilibrium project is ready?”

    A: There will be a preview of the first prototype of Quantum and Equilibrium working together at the upcoming Design Summit.
    The current code is here: https://github.com/Mirantis/openstack-lbaas

  11. GC asked “Is RabbitMQ still a SPOF (single point of failure)? Is ZeroMQ support mature?”

    A: The RabbitMQ SPOF can be eliminated (see the answer to question 9). As for ZeroMQ, the blueprint for Folsom says:
    "The initial cut of the server will be *very* simple and *not* for production." So I believe it's not there yet; see the note below.
