What’s new in OpenStack Grizzly: Webcast update with questions and answers

This past week, we offered a live webcast about the new Grizzly Release of OpenStack. You can access the slides and the video of the webcast, as well as a curated list of resources and references, by opting in here.

As is often the case, there were more questions than we could answer in the time allotted; I’ve listed the expanded Q&A below. (If you’re looking for one you submitted, look for your initials on the question). As always, your comments are welcome below.

Q (CC): Can you talk more about Swift, in particular how the number of replicas can be changed? For example, how does one specify 3 replicas locally and then change that count later?

A: This can be accomplished using the swift-ring-builder set_replicas command. You can find more information regarding this command and other ring builder commands in the Swift documentation at this URL: http://docs.openstack.org/developer/swift/admin_guide.html
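As a sketch of what that looks like in practice (assuming an object ring builder file named object.builder; these commands only make sense against a live Swift deployment):

```shell
# Hypothetical example: raise the object ring's replica count from 3 to 4.
# The builder file name is an assumption; adjust for your deployment.
swift-ring-builder object.builder set_replicas 4
swift-ring-builder object.builder rebalance   # redistribute partitions
swift-ring-builder object.builder             # print the ring summary to verify
```

Note that set_replicas also accepts fractional values, so you can move between replica counts gradually rather than in one step.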

Q (GM): Can you provide some more details about network adapter hot plugging?

A: Grizzly adds network adapter hot-plug functionality, allowing administrators to dynamically add or remove pre-created network ports on an instance while it is running. We have tested this feature with the KVM hypervisor.
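A minimal command sequence, assuming a Quantum network named private-net and an instance named my-instance (both placeholders):

```shell
# Create a port on an existing network, then hot-plug it into a running VM.
quantum port-create private-net               # note the port ID in the output
nova interface-attach --port-id <port-id> my-instance
nova interface-list my-instance               # confirm the new adapter appeared
nova interface-detach my-instance <port-id>   # hot-unplug it again
```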

Q: Can you provide some documentation on how to set up multiple L3 agents in Grizzly and how they work? Is failover from one L3 agent to another stateless? Is connectivity to the VMs through the floating IP affected, and for how long?

A: Quantum now supports multiple L2/L3 agents, but not HA or failover. Each agent provides connectivity to other L2/L3 networks, and you can split network load across multiple Quantum nodes. If you need an HA Quantum setup, use Fuel's Pacemaker active-standby configuration instead. We will be posting more information on how to configure this type of setup in the coming months.

Q (RA): Do Nova security groups support IPv6?

A: Nova security groups support only IPv4 networks. If you need IPv6 then you can use Quantum security groups where you can specify both IPv4 and IPv6 networks.  For more information look here: http://docs.openstack.org/grizzly/openstack-network/admin/content/ch_limitations.html
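For illustration, a hedged sketch of an IPv6 rule (the group name webservers is a placeholder):

```shell
# Allow inbound SSH over IPv6. Nova security groups cannot express this,
# but Quantum security groups can via the --ethertype flag.
quantum security-group-create webservers
quantum security-group-rule-create --direction ingress --ethertype IPv6 \
    --protocol tcp --port-range-min 22 --port-range-max 22 webservers
```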

Q: Are there any third-party billing systems that interface with OpenStack?

A: You can use the open source project Billing Stack. If you already have a billing solution, you can integrate with OpenStack using the Ceilometer APIs.

Q (HB): For Ceilometer, will there need to be an agent in each VM?

A: Currently no. You can write your own plugins to collect performance data. For more information on writing agent plugins, check here: http://docs.openstack.org/developer/ceilometer/contributing/plugins.html

Q: Is it possible to dynamically resize the VM (i.e., cpu, memory, etc.) without having to bring down the VM when using KVM as the underlying hypervisor?

A: No. For now, KVM does not support modifying resources while an instance is running.

Q: Using Nova, is it possible to provision VMs on a specific physical host or a group of physical hosts?

A: Yes, you can use host aggregates. Aggregates differ from zones in that they represent collections of physical resources. You can also use the new Volume Affinity Filter, or write a filter of your own: https://www.mirantis.com/blog/travel-less-save-more-introducing-openstack-volume-affinity-filter/
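As a sketch, pinning a flavor to a group of physical hosts with an aggregate might look like this (names are placeholders, and the AggregateInstanceExtraSpecsFilter must be enabled in the Nova scheduler):

```shell
nova aggregate-create fast-io                     # note the aggregate ID printed
nova aggregate-add-host <aggregate-id> compute-01
nova aggregate-add-host <aggregate-id> compute-02
nova aggregate-set-metadata <aggregate-id> ssd=true
nova flavor-key m1.fast set ssd=true
# Instances booted with the m1.fast flavor now land only on those hosts.
```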

Q (JH): Fuel is now using Cobbler for bare metal deployment. Can Heat be used instead?

A: No. Heat is a service that orchestrates composite cloud applications, not physical infrastructure.

Q (WW): Does Swift have support for Regions?

A: Yes. Regions in Swift allow administrators to group zones that have already been defined. In the future, swift-proxy will be aware of 'remote' regions.
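For context, Grizzly's ring builder lets you place devices in regions with an rNzN- prefix when adding them; a hedged sketch (IPs, ports, and device names are placeholders):

```shell
# Add one device in region 1 and one in region 2, both in zone 1,
# then rebalance so partitions spread across the regions.
swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb 100
swift-ring-builder object.builder add r2z1-10.1.0.1:6000/sdb 100
swift-ring-builder object.builder rebalance
```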

Q (GA): What are the differences between Project Heat and Project Fuel?

A: Heat is a service to orchestrate multiple composite cloud applications whereas Fuel allows building OpenStack infrastructure. Simply put, Heat comes into play after Fuel’s already done its job.

Q (OK): Is Fuel intended only for generating cloud instances or can it be used for application layer implementations?

A: Fuel only supports the implementation of OpenStack deployment configurations.

Q: Can Heat be used for distributed applications deployments, such as onboarding multi-tier applications in several geographical locations?

A: Heat supports availability zones which can be used to distribute applications across multiple hypervisors (for example in different datacenters).

Q (MC): What’s the main difference between nova evacuation and nova live-migration?

A: Nova evacuate can be used only when the source node is unavailable (e.g., due to failure). With live migration, both the source and destination nodes must be online. Read the following for details on performing evacuations: http://docs.openstack.org/trunk/openstack-compute/admin/content/nova_cli_evacuate.html
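The two commands side by side, with placeholder host and instance names:

```shell
# Planned move: both hosts are up and the guest keeps running.
nova live-migration my-instance compute-02

# Recovery: the instance's host has failed; rebuild it on another node.
nova evacuate --on-shared-storage my-instance compute-02
```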

Q (MK): Can you specify a different default availability zone for a tenant or project when creating VMs through the Horizon dashboard?

A: No, the Horizon dashboard doesn’t support availability zones right now. It may, however, appear in the Havana release.

Q (DC): Is Fuel a fork of OpenStack or does it complement the existing Grizzly release?

A: Fuel is not a fork of OpenStack. Fuel is a deployment kit that consists of verified scripts for implementing a variety of OpenStack deployment configurations. Think of it as a set of tools for cloud administrators, or anyone who needs to stand up and maintain an OpenStack cloud.

Q (JW): I read that we can only use multiple L3-agents if one is ‘active’ and the others are ‘passive’. Is this correct or can they all be active and/or load balanced?

A: You can use multiple L3 agents for different quantum routers. With this setup you can also balance network traffic across multiple quantum nodes.

Q (YW): Are multiple L3 agents scheduled by the Quantum scheduler service?

A: Yes, though for now the l3-agent-scheduler and dhcp-agent-scheduler only support allocating an L3 agent to a router at random.

Q (EM): In Grizzly can we use multiple hypervisors in the same zone?

A: Yes, you can use host aggregates. For example, you can have flavor1 on KVM and flavor2 on Xen. You can also use flavor extra_specs or availability zones (with each availability zone on a different hypervisor). More information on flavor keys can be found here: https://wiki.openstack.org/wiki/FlavorKeys
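As an illustration of the availability-zone approach, a hedged sketch (all names are placeholders):

```shell
# Expose a group of KVM hosts as their own availability zone...
nova aggregate-create kvm-hosts kvm-az     # second argument sets the AZ name
nova aggregate-add-host <aggregate-id> kvm-node-01
# ...and target that zone explicitly at boot time.
nova boot --flavor flavor1 --image <image-id> --availability-zone kvm-az vm1
```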

Q (JF): What are the estimated performance improvements using bare-metal for HPCC or Hadoop?

A: Performance depends entirely on your particular use case; on bare metal there is no virtualization overhead.

Q: Is Kerberos authentication supported?

A: Grizzly does not directly support Kerberos in OpenStack Identity (also known as Keystone), but you can use the pam module to authenticate against Kerberos. Additional notes regarding the release changes can be found here: https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly#OpenStack_Identity_.28Keystone.29

Q (SN): Do you have a slide deck or blog post on the Grizzly VM deployment request flow similar to what you had for the Essex release?

A: We do not have this at this time, but will be preparing an update in the future.

Q (SR): What about Ceph? I have heard a lot of great news on the adoption of Ceph by OpenStack. Is there any particular benefit to using Ceph over Swift?

A: Ceph can be used as a backend for Cinder (volumes), but using it in place of Swift could be problematic; for example, it lacks Keystone support.

Q (SS): Is there any project for monitoring the Compute Components, like when a VM is Down or a Hypervisor is Down?

A: Mirantis makes every effort to implement a complete monitoring solution for OpenStack. There are some existing monitoring tools available, but we have not yet tested them.

Q (RO): Any idea when nova-network will be fully deprecated? i.e. no longer in the code base?

A: As far as we know, nova-network will never be removed from the source code. Nova should remain independent of Quantum, as nova-network can be used for testing.
