
OpenStack Project Technical Lead Interview Series #9: Devananda van der Veen, OpenStack Ironic Project

Rafael Knuth - October 31, 2013

This post is the ninth in a continuing series of interviews with OpenStack Project Technical Leads on our Mirantis blog. Our goal is to educate the broader tech community and help people understand how they can contribute to and benefit from OpenStack. Naturally, these are the opinions of the interviewee, not of Mirantis.

This time, the interview is with Devananda van der Veen, OpenStack Ironic Project Technical Lead.

Mirantis: Can you please introduce yourself?

Devananda van der Veen: I'm a senior systems engineer at HP Cloud, where I started over a year and a half ago, and I'm currently leading the Ironic project. Before this, I worked as a MySQL database consultant and administrator.

Q: What is your history with OpenStack and why do you engage?

A: I've worked with Monty Taylor for many years, and when he started talking about OpenStack with me, I was like, "Oh, hey, great. That sounds fun." I had already wanted to switch tracks from MySQL to something else, and it was clear to me that OpenStack was the next big wave in tech. MySQL had its own explosion starting around 2005; I shifted my career in that direction back then, and I see the same thing happening with OpenStack now. I believe that all of IT is going to be affected by this process that we're creating here, and I want to be a part of it.

Q: What are your responsibilities as the Project Technical Lead for Ironic?

A: Setting the direction and coordinating the developers, pretty much the same as with other technical projects in OpenStack. Taking care of blueprints, managing code reviews, guiding developers but taking a back seat to actual development. Encouraging the community to grow around Ironic.

Q: What is the most important thing that you need to do in order to keep that project alive?

A: I want to create a project that a lot of different hardware vendors can feel comfortable contributing to, without feeling they're competing for proprietary bits or splitting the project apart. Ironic itself is the hardware provisioning, or bare metal provisioning, service in OpenStack. All of the different hardware vendors out there, such as HP, Dell, and IBM, have their own hardware management implementations, whether it's HP's iLO or Dell's DRAC. They all add on to the standard IPMI specification, and some of them implement it a little differently.

Q: Can you explain what Ironic’s role is within OpenStack? Why does it matter?

A: There are a couple of different ways I could explain that; I'm going to pick one.

Thus far in OpenStack's evolution we've had to use other tools to deploy OpenStack, whether it's just a couple of servers in your closet, a rack, or a whole data center. The tooling had to be something outside of OpenStack.

Ironic’s role is to provide that hardware provisioning layer which was previously missing from OpenStack. Building on that role, one of Ironic’s goals is to enable TripleO. All the tools that you use to deploy a complex application in a cloud can be reused to deploy a cloud -- which is, after all, just another complex application.

Q: What is the distinction between Ironic and TripleO?

A: Ironic is the service that controls the power state of a physical machine and writes an image to it, with other management tasks planned as well. TripleO sits at a higher level of the stack, using many services, not just Ironic, to deploy and manage an OpenStack cloud.

Q: Tell us about the Ironic community. Who is contributing to the project?

A: HP, Red Hat, Mirantis, IBM, and others. From hardware vendors, HP and IBM are both contributing. Most of IBM’s work so far has been in a related project, a pure-python IPMI driver, ported from xCAT and contributed to the OpenStack community. It will be used by Ironic as a more scalable replacement for the ipmitool library inherited from Nova’s Baremetal code.
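
To make the contrast concrete, here is a minimal sketch of what in-process IPMI control looks like, assuming an interface along the lines of pyghmi, the library this xCAT-derived work became; the BMC address and credentials are placeholders.

    from pyghmi.ipmi import command

    # The ipmitool-based driver inherited from Nova Baremetal forks a new
    # process for every operation, roughly equivalent to:
    #   ipmitool -I lanplus -H 10.0.0.1 -U admin -P secret power status
    # A pure-python driver keeps the IPMI session in-process instead.

    # Open an IPMI session to the node's BMC (placeholder credentials).
    ipmicmd = command.Command(bmc='10.0.0.1', userid='admin', password='secret')

    # Query the current power state, e.g. {'powerstate': 'on'}.
    print(ipmicmd.get_power())

    # Power the node off, waiting for the BMC to confirm the transition.
    ipmicmd.set_power('off', wait=True)

Avoiding a subprocess per call is what makes this approach more scalable when a single service has to manage many machines at once.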

Q: Would you like to see more engagement from hardware vendors?

A: Yes, I would like to see more. I'd certainly like to see Dell; they haven't been involved with us yet. It's a very young project, and some of the hardware vendors may not have engaged significantly yet just because the code wasn't ready. I think it's rapidly approaching a code maturity level where folks can jump in and add their own hardware drivers.

Q: What has the Ironic community accomplished so far?

A: Bare metal provisioning inherited a lot of limitations by initially trying to run inside the Nova Compute process. In the last four months or so we have been ripping Ironic out of Nova and turning it into a standalone, top-level service in OpenStack. It has its own API service that you can scale. It has its own message queue and database back-end, and a Conductor service, which is something between Nova Conductor and Nova Compute. More recently, we've added DevStack and diskimage-builder support and a python client, and Tempest tests are in the works. All of this is required for an OpenStack project, but I feel like it's also quite an accomplishment for a small team in just four months!
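
As a rough illustration of that standalone API, here is a minimal sketch of driving Ironic through the python client; the Keystone endpoint, credentials, and node UUID are placeholders, and the exact client interface was still evolving at the time.

    from ironicclient import client

    # Authenticate against Keystone and obtain an Ironic API client
    # (placeholder endpoint and credentials).
    ironic = client.get_client(
        1,  # Ironic API major version
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://keystone.example.com:5000/v2.0',
    )

    # List the bare metal nodes registered with Ironic.
    for node in ironic.node.list():
        print(node.uuid, node.power_state)

    # Request a power-state change; the API service records the request
    # and a Conductor carries out the actual management operation.
    ironic.node.set_power_state('NODE_UUID', 'on')

The split between a thin, scalable API layer and Conductors that talk to the hardware is what lets Ironic run as a top-level service instead of inside Nova Compute.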

Q: What capabilities will Ironic provide in the next OpenStack release?

A: As far as exactly what features it will have in Icehouse, I can’t really say.

Q: If you could wave your magic wand right now and have Ironic be exactly what you want it to be, what would be that vision for Ironic right now?

A: I would want it to be scalable and fault tolerant. My definition of high availability in this context is that it’s resilient to failures, not that it never fails. Hardware failures are inevitable, but Ironic should recover from them.

It would have different drivers contributed by various hardware vendors, and I would want the drivers themselves to be open source, not proprietary. Ironic could control all different kinds of hardware, everything from small ARM devices to "big iron", all the different hardware out there that I might not personally know about. I'd love other people to have contributed drivers to make those things provisionable and manageable.

Q: Are there any misconceptions that people have about Ironic?

A: One of them is the idea that Ironic is a configuration database. It is not, and it is not going to store a stateful history of things. People have asked me, "Will you be tracking fan speed, CPU temperature, and then taking action if something overheats?" And my answer is, "No."

But the biggest misconception right now is that we can use Nova Compute or Ironic to do untrusted, multi-tenant, bare metal clouds. This is a goal, but we are definitely not there today. There are some huge security implications with untrusted tenants running on bare metal. If I were to wave my magic wand again, I would want all of those to be solved. That will require a lot of collaboration between hardware vendors in creating robust hardware- and firmware-level security, not just around network boot and tenant isolation. There's some work being done on this, but I feel we are still far from solving it. In the meantime, I want people to know this is why they shouldn't run untrusted tenants on bare metal today.

Q: Another wish list question. What kind of people do you want to see contributing to Ironic? Who are your ideal contributors right now?

A: Developers who know how to work with an open source community, because that's not always easy to find. Someone who has a good background in systems administration in addition to programming, because most of what we're doing with Ironic is low-level stuff. Yes, it's written in Python, but we're doing a lot of glue with DHCP, IPMI, and PXE processes, and I'm going to need people who understand firmware security, which seems to be a bit of a void in the team right now.

Q: Thank you for your time.

A: Thank you.
