

Infrastructure software is dead. Long live infrastructure software.

Angela McCallister - September 06, 2016
Mirantis Co-Founder and CMO Boris Renski recently stirred discussion with a blog post arguing that infrastructure software is dead. At this year's OpenStack Days Silicon Valley, he sat down with Battery Ventures Technology Fellow Adrian Cockcroft to talk about the changing paradigms in software and in delivery models, and the results were not what you might think.

In general, there are two different methods for deploying software. Traditionally, in the pre-cloud paradigm, software is deployed as a monolithic package. You deploy it, and six months, or twelve months, or seven years later, when a new version comes out, you basically throw it out and start again, hoping your data and processes will still be compatible with the new version.

But those days are over, Boris argued in his blog post.  They simply aren't sustainable. Things move too fast; improvements are available for months or years before you can take advantage of them under this model.  So what do you do instead?

That question was on the mind of most of the audience for Boris and Adrian's discussion.

OpenStack and the old way

In the early days of Mirantis, Boris explained, the company used the pre-cloud paradigm, where the product is packaged as a whole, delivered, and then periodically updated. They quickly learned, as anyone who has attempted to upgrade OpenStack knows, that this isn't feasible for OpenStack, which itself uses the Infrastructure as Code (IaC) model.

What's more, as cloud technology proliferates, the shift in paradigm away from traditional, pre-cloud views has become less about software and more about the delivery model.

So what do you do?

You abstract. Boris illustrated this shift in paradigm with AWS as an example. AWS users aren't given the infrastructure software itself but rather an API to it. That way, AWS can change whatever it needs to in the infrastructure software without disrupting clients and users.
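The idea can be sketched in a few lines of Python: clients code against a stable API, so the provider is free to swap the implementation behind it. All class and method names here are illustrative, not any real cloud SDK.

```python
# Sketch: a stable API in front of replaceable infrastructure.
# Names are hypothetical; this is not a real cloud SDK.

class ComputeAPI:
    """The contract clients depend on -- it stays stable."""
    def launch_instance(self, image: str) -> str:
        raise NotImplementedError

class XenBackend(ComputeAPI):
    def launch_instance(self, image: str) -> str:
        return f"xen-vm running {image}"

class KvmBackend(ComputeAPI):
    def launch_instance(self, image: str) -> str:
        return f"kvm-vm running {image}"

def client_workload(cloud: ComputeAPI) -> str:
    # Client code only ever touches the API, never the backend.
    return cloud.launch_instance("ubuntu-16.04")

# The provider can swap hypervisors; clients are undisturbed.
print(client_workload(XenBackend()))  # xen-vm running ubuntu-16.04
print(client_workload(KvmBackend()))  # kvm-vm running ubuntu-16.04
```

The client function never changes even though the backend does, which is exactly why the provider can iterate on the infrastructure continuously.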

But it's more than that, Adrian explained. People initially want something that works without change, until they need a new feature. That project-based thinking was built on the fact that coding was expensive and slow, which is why bundling a package periodically was the norm. Now, with procuring hardware and downloading software from places like GitHub taking minutes, the purchasing and deployment cycle has collapsed. A deployment can take seconds simply by firing up a Docker container.

Basically, the entire reason for bundling has gone away.

Taking advantage of the new software paradigm

To adapt, the software community has learned to break everything into microservices that can deploy independently, resulting in many small components, each with its own constantly changing versions.

But ... doesn’t that break a lot then? Of course, Boris explained, but because you end up with a series of very small steps, it’s actually easier to detect problems and roll back to the previous version. As programmers will recognize, this is the same process used to debug, one step at a time, and it allows continuous change.
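That small-steps-with-rollback loop can be sketched in a few lines of Python. The `deploy` and `healthy` callables here are hypothetical stand-ins for real deployment tooling and health checks, not any specific product's API.

```python
# Sketch: rolling forward one small step at a time, rolling back
# to the last known-good version when a step breaks.
# deploy() and healthy() are hypothetical stand-ins for real tooling.

def roll_forward(versions, deploy, healthy):
    """Apply versions one at a time. On a bad step, revert to the
    last known-good version and report which version broke."""
    current = None
    for v in versions:
        deploy(v)
        if healthy(v):
            current = v          # this step is now the known-good state
        else:
            if current is not None:
                deploy(current)  # one small step back, not a reinstall
            return current, v    # (rolled back to, offending version)
    return current, None

# Toy run: version 3 is broken, so we end up back on version 2.
deployed = []
good, bad = roll_forward(
    [1, 2, 3],
    deploy=deployed.append,
    healthy=lambda v: v != 3,
)
print(good, bad)   # 2 3
print(deployed)    # [1, 2, 3, 2]
```

Because each step is tiny, the failing change is isolated immediately, which is the debugging-one-step-at-a-time property Boris described.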

This process also simplifies operations when updates need to be made. Previously, you'd have to wonder whether you needed to bring all or part of your system down to make the updates. With containerized OpenStack services, you can upgrade each one independently.
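With each service in its own container, an upgrade becomes a per-service version bump. A hypothetical Compose-style snippet illustrates the idea (image names and tags are invented for illustration, not official Mirantis or OpenStack images):

```yaml
# Hypothetical: each OpenStack service pinned to its own image tag.
# Bumping one tag and recreating that one container upgrades that
# service alone; the others keep running untouched.
services:
  keystone:
    image: example/openstack-keystone:9.0.1
  glance:
    image: example/openstack-glance:12.0.0
  nova-api:
    image: example/openstack-nova-api:13.1.0   # bump only this tag
```

Recreating just the one changed service (for example, `docker-compose up -d nova-api`) rolls the new version in without restarting anything else.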

And don’t forget the security benefits of updating in place.

Exploits of exposed software are proliferating, and as Adrian noted, people are still downloading the same old vulnerable applications. He advised building around trusted source components that you can verify with services such as JFrog Xray, and using security scanners (Docker offers one) to check your products.

Looking at the future

There are still a lot of issues that need solutions, of course.

Adrian pointed out that managing a multi-vendor dependency tree is a complex problem with no good fix. “You have to figure out how to keep everything going while trying to change everything,” he explained.

The goal is to keep the "northbound" components, that is, the APIs and other interfaces developers want to use, evolving, while remembering that the "southbound," hardware-facing components act as constraints. Solving this requires collaboration and partnerships to support these devices and to work out ways to get all the versions of hardware and software working together.

Missed this year's OpenStack Days Silicon Valley? You can see the whole panel. Just head on over to the OpenStack Days Silicon Valley 2016 videos page and scroll down to "Infrastructure Software Is Dead…Or Is It?"
