

Cloud Computing, Open Source, OpenStack: What's Eating the IT World Anyway? -- Your Answers

Nick Chase - March 10, 2014

Last week we conducted a webinar called Cloud Computing, Open Source, OpenStack: What's Eating the IT World Anyway? We got some great questions, but we weren't able to answer all of them live, so we've gathered the remaining ones here for you:

CTM: If I have a network 10.10.10.10/20, could I separate it into 10.10.10.10/21 and the rest, then give the two halves to two projects (tenants)?

Roman Podoliaka: I assume you are talking about externally reachable networks to be used for allocating floating IPs. Yes: just create two external networks via the Networking API with the appropriate CIDR values.

CTM: Does OpenStack support more than one floating IP range? For example, 192.168.1.0/24, 192.168.100.0/24, and 172.168.0.0/16.

Roman Podoliaka: Yes. Just create a separate external network via the Networking API for each CIDR.
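Both of the answers above boil down to the same two Networking API calls: create a network flagged as router:external, then attach a subnet with the desired CIDR. Here's a minimal sketch using python-neutronclient; the credentials, network names, and CIDR values are placeholders of ours, not anything from the webinar:

    # Minimal sketch with python-neutronclient; credentials, names, and
    # CIDRs below are illustrative placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # One external network (router:external=True) per floating IP range.
    for name, cidr in [('ext-net-1', '10.10.0.0/21'),
                       ('ext-net-2', '10.10.8.0/21')]:
        net = neutron.create_network(
            {'network': {'name': name, 'router:external': True}})
        neutron.create_subnet(
            {'subnet': {'network_id': net['network']['id'],
                        'ip_version': 4,
                        'cidr': cidr,
                        'enable_dhcp': False}})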

CTM:  Does OpenStack have thin provisioning features?

Roman Podoliaka: Yes, it does. How you enable it depends on your storage backend and its Cinder driver.
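What that looks like in practice varies by driver. As one hedged example (our illustration, not something covered in the webinar), the reference LVM driver can be told to thin-provision volumes in cinder.conf; the volume group name here is a placeholder:

    # cinder.conf (illustrative snippet)
    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes
    # 'thin' switches the LVM backend from fully allocated to
    # thin-provisioned logical volumes.
    lvm_type = thin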

JS: Can Fuel be used to modify already-deployed environments (configuration, etc.)? Or is it only for initial deployment and adding nodes? Is it possible to modify some options for Nova, for example, and deploy the changes with Fuel?

Vladimir Kozhukalov: Fuel uses Puppet for OpenStack deployment. The current approach allows you to modify a working cluster, but not all of the parameters that were set before the actual deployment can be changed. For example, the disk partitioning scheme cannot be modified, nor can many of the network parameters.

HL: How would you move the nodes from one installation to another? For example, Grizzly to Havana.

Nick Chase: This depends on your situation. The easiest way is to stand up a second cluster, in this case running Havana, and move workloads rather than live nodes. You can do this either by forcing processes to redirect to the new cluster, or by attrition, in which new workloads are started on the new cluster and workloads that complete on the old cluster are restarted on the new one instead. As nodes "empty out" and are no longer in use on the old cluster, you can re-provision them into the new cluster.

All that said, Icehouse will have a better-defined upgrade path than previous releases have had.

EC: How are the storage blocks attached to virtual machines? E.g., via HBA or NFS.

Nick Chase: I know it's a cliche, but the answer here literally is "it depends", because different block storage backend devices work differently. For an iSCSI device, an iSCSI target is created and Nova receives an iSCSI Qualified Name (IQN). Nova can then attach the block device to the compute node via that IQN and pass the raw block device to the guest instance. NFS devices, on the other hand, all seem to be different, and how they behave depends on their drivers. Finally, Ceph RBD and other backends have their own ways of doing things. (Thanks to John Griffith and the folks in #openstack-cinder for the details.)
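Whatever the backend, the user-facing call that triggers all of this plumbing is the same. Here's a minimal sketch with python-novaclient; the credentials and UUIDs are placeholders of ours:

    # Minimal sketch with python-novaclient; credentials and UUIDs are
    # illustrative placeholders. Cinder and the backend driver handle
    # the iSCSI/NFS/RBD specifics behind this one call.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    # Attach an existing Cinder volume to a running instance; the guest
    # sees it as a raw block device (here /dev/vdb).
    nova.volumes.create_server_volume(server_id='INSTANCE_UUID',
                                      volume_id='VOLUME_UUID',
                                      device='/dev/vdb')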

So to everyone who attended, thanks for joining us! If you didn't attend and are wondering what you missed, you can download the recorded webinar.
