Mirantis OpenStack

  • Download

    Mirantis OpenStack is the zero lock-in distro that makes deploying your cloud easier, more flexible, and more reliable.

  • On-Demand

    Mirantis OpenStack Express is on-demand Private-Cloud-as-a-Service. Fire up your own cloud and deploy your workloads immediately.

Solutions Engineering

Service offerings for all phases of the OpenStack lifecycle, from green-field deployment to migration to scale-out optimization, including Migration, Self-service IT as a Service (ITaaS), and CI/CD. Learn More

Deployment and Operations

Our deep bench of OpenStack infrastructure experts has proven experience across scores of deployments and use cases, ensuring you get OpenStack running fast and delivering continuous ROI.

Driver Testing and Certification

Mirantis provides coding, testing and maintenance for OpenStack drivers to help infrastructure companies integrate with OpenStack and deliver innovation to cloud customers and operators. Learn More

Certification Exam

Know OpenStack? Prove it. An IT professional who has earned the Mirantis® Certificate of Expertise in OpenStack has demonstrated the skills, knowledge, and abilities needed to create, configure, and manage OpenStack environments.

OpenStack Bootcamp

New to OpenStack and need the skills to run an OpenStack cluster yourself? Our bestselling 3-day course gives you the hands-on knowledge you need.

OpenStack: Now

Your one-stop source for the latest news and technical updates from across the OpenStack ecosystem and marketplace, with all the information you need to stay on top of the rapid pace of innovation.

Read the Latest

The #1 Pure Play OpenStack Company

Some vendors choose to “improve” OpenStack by salting it with their own exclusive technology. At Mirantis, we’re totally committed to keeping production open source clouds free of proprietary hooks or opaque packaging. When you choose to work with us, you stay in full control of your infrastructure roadmap.

Learn about Our Philosophy

Cloud Computing, Open Source, OpenStack: What’s Eating the IT World Anyway? — Your Answers

on March 10, 2014

Last week we conducted a webinar called Cloud Computing, Open Source, OpenStack: What’s Eating the IT World Anyway? We got some great questions, but we weren’t able to answer all of them during the session, so we’ve gathered them together here for you:

CTM: If I have a network 10.10.10.10/20, could I separate it into 10.10.10.10/21 and the rest, then give the halves to two projects (tenants)?

Roman Podoliaka: I assume you are talking about externally reachable networks to be used for allocating floating IPs. Yes, just create two external networks via the Networking API with the appropriate CIDR values.
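The subnetting behind that answer is easy to sanity-check before touching the Networking API. Here is a minimal sketch using Python’s standard ipaddress module (this is just address arithmetic, not an OpenStack call):

```python
import ipaddress

# 10.10.10.10/20 has host bits set; strict=False normalizes it to the
# containing network, 10.10.0.0/20.
net = ipaddress.ip_network("10.10.10.10/20", strict=False)

# Splitting the /20 once yields the two /21 halves -- one per tenant.
halves = list(net.subnets(prefixlen_diff=1))
for half in halves:
    print(half)  # 10.10.0.0/21, then 10.10.8.0/21
```

Each /21 half could then be handed to Neutron as the CIDR of its own external network.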

CTM: Does OpenStack support more than one floating IP range? Like 192.168.1.0/24 and 192.168.100.0/24 and 172.168.0.0/16, for example.

Roman Podoliaka: Yes. Just create a separate external network via the Networking API for each CIDR.
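Since each range becomes its own external network, the main constraint is that the pools not overlap. A quick sketch, again in plain Python rather than any OpenStack API, checking the three example CIDRs against each other:

```python
import ipaddress
from itertools import combinations

# The three example floating-IP ranges from the question; each one would
# be created as a separate external network in Neutron.
ranges = [
    ipaddress.ip_network(cidr)
    for cidr in ("192.168.1.0/24", "192.168.100.0/24", "172.168.0.0/16")
]

# Verify that no two pools collide before defining the networks.
for a, b in combinations(ranges, 2):
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("all floating IP ranges are disjoint")
```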

CTM:  Does OpenStack have thin provisioning features?

Roman Podoliaka:  Yes, it does.  How to enable it depends on your situation.
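What “enabling it” looks like varies by backend, so treat the following as an illustrative sketch rather than a recipe; the exact option names come from the libvirt compute driver and the LVM Cinder backend and may differ between releases:

```ini
# nova.conf -- qcow2-backed ephemeral disks grow on demand
# (the libvirt driver's default behavior)
[DEFAULT]
use_cow_images = True

# cinder.conf -- LVM backend with thin-provisioned logical volumes
# (the lvm_type option may not be available in older releases)
[DEFAULT]
lvm_type = thin
```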

JS: Can Fuel be used to modify already deployed environments (configuration, etc.)? Or is it only for initial deployment and adding nodes? Is it possible to modify some options for Nova, for example, and deploy the changes with Fuel?

Vladimir Kozhukalov: Fuel uses Puppet for OpenStack deployment. The current approach allows you to modify a working cluster, but not all of the parameters set before the actual deployment can be changed. For example, the disk partitioning scheme cannot be modified, nor can many of the network parameters.

HL: How would you move the nodes from one installation to another? E.g., Grizzly to Havana.

Nick Chase:  This depends on your situation.  The easiest way is to start up a second cluster, in this case using Havana, and move workloads, rather than moving live nodes.  You can either do this by forcing processes to redirect to the new cluster, or by attrition, in which new workloads are started on the new cluster, and as workloads complete on the old cluster, they are not restarted, but are instead moved to the new one.  As nodes “empty out” and are no longer in use on the old cluster, you can re-provision them on the new cluster.

All that said, Icehouse will have a more well-defined upgrade path than previous releases have had.

EC: How are the storage blocks attached to virtual machines? E.g., HBA or NFS.

Nick Chase: I know it’s a cliché, but the answer here literally is “it depends,” because different block storage backend devices work differently. For an iSCSI device, an iSCSI target is created and Nova receives an iSCSI Qualified Name (IQN). Nova can then attach the block device to the compute node via that IQN and pass the raw block device to the guest instance. On the other hand, NFS devices seem to all be different, and how they behave depends on their drivers. Finally, Ceph RBD and other backends have their own ways of doing things. (Thanks to John Griffith and the folks in #openstack-cinder for the details.)

So to everyone who attended, thanks for joining us!  If you didn’t attend and wonder what you missed, you can download the recorded webinar.
