
Toward a pure devops world: why infrastructure must be treated as software

Nick Chase - June 28, 2019

The thing about technology is that even before one wave completely finishes, the next is already upon us. Virtual machines were still gaining ground when containers appeared. Containers still aren't completely in place, and serverless technologies are already growing. To say that tech is perpetually in flux is to understate things. Even when it comes to devops.
Or maybe especially when it comes to devops.
Because the goal of devops is to automate everything that can be automated, and the scope of "everything that can be automated" changes all the time. We started with applications, and while the spread to infrastructure was inevitable, the move to cloud native architecture makes this automation essential. In a way, the infrastructure is very much a part of the application.
We're already seeing that with CI/CD-based infrastructure deployment and AIOps, but it's more than that. If technology is going to make it to the next plateau, infrastructure must be treated with the same level of care and automation as software deployment. And that means that it must adapt to the same pressures and trends.

CI/CD deployment

These days, no sane developer would produce code for a large-scale production environment without some sort of CI/CD process, even if it's just version control. But infrastructure management is handled in an ad-hoc way all the time. Granted, this is less common in the world of virtual machines -- and even less common in truly cloud native environments -- but it still happens, and even bare metal deployments can be scripted and incorporated into the normal operations workflow.
Reaching this level of automation is getting easier with the advent of tools such as Airship.
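To make the idea concrete, here's a minimal sketch of what a CI/CD step for infrastructure might look like: the manifests live in version control, and the pipeline validates and applies them, rather than anyone running commands by hand. It assumes a Kubernetes target and the kubectl CLI; the manifests/ path is hypothetical.

```python
# Minimal sketch of a CI/CD step that treats infrastructure as code:
# every change to the version-controlled manifests/ directory is validated
# and then applied, instead of being run by hand against a server somewhere.
# Assumes the kubectl CLI is installed and already pointed at the target cluster.
import subprocess
import sys

MANIFEST_DIR = "manifests/"  # hypothetical path in the repository

def run(cmd):
    """Run a command and fail the pipeline if it fails."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)

if __name__ == "__main__":
    # Dry run first, so a bad manifest never reaches the cluster.
    run(["kubectl", "apply", "--dry-run=client", "-f", MANIFEST_DIR])
    # Apply for real only after validation succeeds.
    run(["kubectl", "apply", "-f", MANIFEST_DIR])
```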

Building blocks

Once the general processes are in place and resources can be spun up automatically, it becomes crucial to know what those resources are and how they're used. One way to do that is to create building blocks that can be used and combined by users and developers.
Creating these building blocks, such as standard images, makes it possible to standardize deployments and know that you aren't creating security problems -- or worse. Projects such as Harbor make it possible to store these images in a private registry to simplify usage, but the idea of "building blocks" can also extend up the stack to the pipelines that put them together.
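As a rough illustration, a building block can be as simple as a golden base image built from a reviewed Dockerfile and published to a private registry such as a Harbor instance, so teams reuse one vetted image instead of assembling their own. The registry host, project, and version below are hypothetical.

```python
# Sketch: build a standard "golden" base image and publish it to a private
# registry (for example, a Harbor instance). Host, project, and tag are
# hypothetical placeholders.
import subprocess

REGISTRY = "harbor.example.com"      # hypothetical private registry
PROJECT = "platform/base-image"      # hypothetical project/repository
VERSION = "1.0.0"

def sh(cmd):
    subprocess.run(cmd, check=True)

image = f"{REGISTRY}/{PROJECT}:{VERSION}"

# Build from a version-controlled Dockerfile so the image itself is reviewable.
sh(["docker", "build", "-t", image, "."])
# Push to the private registry; downstream pipelines pull this exact tag.
sh(["docker", "push", image])
```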

Training wheels

The pipelines that create production systems in an enterprise environment are complex, and there's no way to expect that everyone will be up to speed on how to use them.
To that end, it's important to provide templated pipelines to get users started, but it's also important to have a process in place that ensures they're used appropriately, and that a well-meaning engineer doesn't take down your entire production infrastructure.
An even better solution is to provide a way in which users can perform their duties without having to worry about the pipelines at all.
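Here's a deliberately simplified sketch of what a templated pipeline with guardrails might look like: users supply only a couple of parameters, and the template refuses values that could endanger shared environments. The parameter names and limits are purely illustrative.

```python
# Sketch: a templated pipeline definition with guardrails. Users supply only
# a few parameters; the template rejects anything that could endanger shared
# environments. Parameter names and limits are illustrative only.
from string import Template

PIPELINE_TEMPLATE = Template("""\
stages:
  - deploy:
      environment: $environment
      replicas: $replicas
""")

ALLOWED_ENVIRONMENTS = {"dev", "staging"}   # production requires a separate, reviewed path
MAX_REPLICAS = 10                           # keep a typo from consuming the whole cluster

def render_pipeline(environment: str, replicas: int) -> str:
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"environment {environment!r} is not self-service")
    if not 1 <= replicas <= MAX_REPLICAS:
        raise ValueError(f"replicas must be between 1 and {MAX_REPLICAS}")
    return PIPELINE_TEMPLATE.substitute(environment=environment, replicas=replicas)

print(render_pipeline("dev", 3))
```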

Low-code/No-code

Considering that devops arose as a way to streamline the relationship between developers and IT operations and lighten the load on operations, it's perhaps no surprise that low-code/no-code environments, which enable end users to create the functionality they need without involving developers, are on the rise.
But what about infrastructure? Where does this trend come in?
Well, the truth is, we've been on this road for infrastructure since users started creating VMs using Amazon Web Services' web UI, and if we're ever going to stem the tide of shadow IT, those same capabilities are going to have to be available on-prem.
OpenStack is one example of an attempt to make this possible for end users -- in this case, developers who need resources to do their job -- but you can also see it in other "as-a-Service" projects. Mirantis recently announced its Model Designer, which provides a way for users to simply enter their requirements and get back a pre-configured pipeline to deploy an entire environment. Mirantis also just announced a beta Kubernetes-as-a-Service application targeted at developers, with both an API and a user interface to provide multiple ways for end users to get the resources they need -- automatically or on demand.
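Whatever the product, the underlying pattern is the same: the user (or their tooling) asks an API for an environment instead of filing a ticket. The sketch below shows that general shape; the endpoint, payload, and token are hypothetical and don't correspond to any particular product's API.

```python
# Sketch of the general "as-a-Service" pattern: request an environment from an
# API rather than waiting on a ticket. Endpoint, payload, and token below are
# hypothetical placeholders, not a real product's API.
import json
import urllib.request

API_URL = "https://platform.example.com/api/v1/clusters"   # hypothetical endpoint
TOKEN = "user-api-token"                                    # hypothetical credential

request_body = json.dumps({
    "name": "dev-team-sandbox",
    "nodes": 3,
    "kubernetes_version": "1.14",
}).encode()

req = urllib.request.Request(
    API_URL,
    data=request_body,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    method="POST",
)

# The response would typically include an ID to poll while the cluster is built.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```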

AIOps-ready

While the promise of AIOps is that AI-enabled routines can track your application workloads and infrastructure, detecting and resolving problems before they happen, the reality is that most companies are still coming to grips with the "ops" part, and simply aren't ready for AIOps yet.
That doesn't mean you ignore it, of course; by getting your infrastructure firmly in hand -- and into your operations pipelines -- you're putting yourself into a much better position for when you ARE ready to tackle a more self-healing environment.
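A useful first step, long before any AI is involved, is plain rule-based automation in your pipelines -- for example, a health check that triggers a known remediation. The sketch below is deliberately simple; the health endpoint and restart target are hypothetical.

```python
# Sketch: a simple, rule-based health check -- the kind of "ops" automation
# worth having in place long before any AI-driven remediation. The URL and
# restart target are hypothetical placeholders.
import subprocess
import urllib.request

HEALTH_URL = "http://app.example.com/healthz"                       # hypothetical endpoint
RESTART_CMD = ["kubectl", "rollout", "restart", "deployment/app"]   # hypothetical target

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if not healthy():
    # A self-healing system would make this decision itself; today, a pipeline
    # or cron job running this script is a reasonable first step.
    subprocess.run(RESTART_CMD, check=True)
```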

The overall vision

Nobody thinks that we're going to eliminate all of IT operations; even when we do get to the point where our infrastructures are more "self-healing," we will still need people to train and groom the routines that keep them that way. But in the meantime, it's our job to create environments in which all of the resources -- including the infrastructure -- are managed as code, in a way that not only takes advantage of today's capabilities but also leaves us ready for tomorrow.
