Toward a pure devops world: why infrastructure must be treated as software

Nick Chase - June 28, 2019

The thing about technology is that even before one wave completely finishes, the next is already on us. Virtual machines were still growing when containers appeared. Containers still aren’t completely in place, and serverless technologies are growing. To say that tech is perpetually in flux is to understate things. Even when it comes to devops.

Or maybe especially when it comes to devops.

Because the goal of devops is to automate everything that can be automated, and that definition changes all the time. We started with applications, and while the bleed to infrastructure was inevitable, the move to cloud native architecture makes this automation essential. In a way, the infrastructure is very much a part of the application. 

We’re already seeing that with CI/CD-based infrastructure deployment and AIOps, but it’s more than that. If technology is going to make it to the next plateau, infrastructure must be treated with the same level of care and automation as software deployment. And that means that it must adapt to the same pressures and trends.

CI/CD deployment

These days, no sane developer would produce code for a large-scale production environment without some sort of CI/CD process, even if it's just version control. But infrastructure management is handled in that ad-hoc way all the time. Granted, this is less common in the world of virtual machines — and even less common in truly cloud native environments — but it still happens, and even bare metal deployments can be scripted and incorporated into the normal operations workflow.

Getting to this level of automation is getting easier with the advent of tools such as Airship.
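To make the idea concrete, here is a minimal sketch of what "infrastructure in a CI/CD pipeline" means in practice: the desired state lives in version control as a declarative definition, and a pipeline step diffs it against what's actually running before applying changes. The resource names and the `plan()` logic are hypothetical illustrations, not any particular tool's API.

```python
# Desired infrastructure, kept in version control like any other code.
DESIRED_STATE = {
    "web": {"image": "nginx:1.25", "replicas": 3},
    "db":  {"image": "postgres:15", "replicas": 1},
}

def plan(current: dict, desired: dict) -> list[str]:
    """Diff the running infrastructure against the desired state,
    the way a CI/CD pipeline step would before applying changes."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name}")
        elif current[name] != spec:
            actions.append(f"update {name}")
    for name in current:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

current = {"web": {"image": "nginx:1.24", "replicas": 3}}
print(plan(current, DESIRED_STATE))  # ['update web', 'create db']
```

Because the plan is computed from a diff, re-running the pipeline against an already-correct environment produces no actions — the same idempotence that makes application deployments repeatable.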

Building blocks

Once the general processes are in place, and it becomes possible to use automated processes to spin up resources, it becomes crucial to know what those resources are and how they’re used. One way to do that is to create building blocks that can be used and combined by users and developers.

Creating these building blocks, such as standard images, makes it possible to standardize deployments and know that you aren’t creating security problems — or worse.  Projects such as Harbor make it possible to store these images in a private repository to simplify usage, but the idea of “building blocks” can also extend up the stack to the pipelines that put them together.
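One simple way to enforce "building blocks" is a policy check in the pipeline that rejects any deployment whose images don't come from the approved private registry. The registry hostname below is a hypothetical Harbor instance, used purely for illustration.

```python
# Only images from the approved private registry (a hypothetical
# Harbor instance) may be used as building blocks.
APPROVED_REGISTRY = "harbor.example.com/approved"

def is_approved(image: str) -> bool:
    """Return True only for images pulled from the approved registry."""
    return image.startswith(APPROVED_REGISTRY + "/")

def check_manifest(images: list[str]) -> list[str]:
    """Return the images that violate the building-block policy."""
    return [img for img in images if not is_approved(img)]

violations = check_manifest([
    "harbor.example.com/approved/python:3.12-slim",
    "docker.io/library/python:3.12",
])
print(violations)  # ['docker.io/library/python:3.12']
```

A check like this fails the pipeline before an unvetted image reaches production, which is exactly the kind of guarantee standardized building blocks are meant to provide.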

Training wheels

The pipelines that create production systems in an enterprise environment are complex, and there’s no way to expect that everyone will be up to speed on how to use them. 

To that end, it’s important to provide templated pipelines to get users started, but it’s also important to have a process in place that ensures they’re used appropriately, and that a well-meaning engineer doesn’t take down your entire production infrastructure.
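A guardrail can be as simple as a template that refuses to generate a production deploy stage without an explicit approval. The stage names and the approval flag below are illustrative only, a sketch of the pattern rather than any specific tool.

```python
def build_pipeline(app: str, target: str, approved: bool = False) -> list[str]:
    """Generate pipeline stages from a template, gating production
    deploys behind an explicit approval."""
    stages = [f"lint {app}", f"test {app}", f"deploy {app} to staging"]
    if target == "production":
        if not approved:
            # The guardrail: no approval, no production stage.
            raise PermissionError("production deploys require approval")
        stages.append(f"deploy {app} to production")
    return stages

print(build_pipeline("billing", "staging"))
# ['lint billing', 'test billing', 'deploy billing to staging']
```

The template gets engineers productive immediately, while the gate ensures nobody pushes straight to production by accident.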

An even better solution is to provide a way in which users can perform their duties without having to worry about it at all.

Low-code/no-code
Considering that devops arose as a way to streamline the relationship between developers and IT operations and lighten the load on operations, it’s perhaps no surprise that low-code/no-code environments, which enable end users to create the functionality they need without involving developers, are on the rise.

But what about infrastructure?  Where does this trend come in?

Well, the truth is, we’ve been on this road for infrastructure since users started creating VMs using Amazon Web Services’ web UI, and if we’re ever going to stem the tide of shadow IT, those same capabilities will have to be available on-prem.

OpenStack is one example of an attempt to make this possible for end users — in this case, developers who need resources to do their job — but you can also see it in other “as-a-Service” projects. Mirantis recently announced its Model Designer, which provides a way for users to simply enter their requirements and get back a pre-configured pipeline to deploy an entire environment. Mirantis also just announced a beta Kubernetes-as-a-Service application targeted at developers, with both an API and a user interface providing multiple ways for end users to get the resources they need — automatically or on demand.
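The core of the "enter requirements, get back an environment" idea can be sketched in a few lines: map a handful of simple, user-facing choices onto a fully specified configuration. This is a hypothetical illustration of the pattern, not Model Designer's actual API; the sizes, fields, and defaults are invented for the example.

```python
def generate_environment(requirements: dict) -> dict:
    """Map simple user requirements onto a pre-configured
    environment spec, filling in sensible defaults."""
    sizes = {"small": 1, "medium": 3, "large": 5}
    size = requirements.get("size", "small")
    return {
        "kubernetes_nodes": sizes[size],
        "monitoring": requirements.get("monitoring", True),
        "registry": "private" if requirements.get("secure", False) else "shared",
    }

print(generate_environment({"size": "medium", "secure": True}))
# {'kubernetes_nodes': 3, 'monitoring': True, 'registry': 'private'}
```

The user answers three questions; the generator handles everything else. That asymmetry — tiny input, complete output — is what makes the low-code approach practical for infrastructure.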

AIOps
While the idea is that AI-enabled routines can track your application workloads and infrastructure, detecting and resolving problems before they happen, the reality is that most companies are still coming to grips with the “ops” part, and simply aren’t ready for AIOps yet.

That doesn’t mean you ignore it, of course; by getting your infrastructure firmly in hand — and into your operations pipelines — you’re putting yourself into a much better position when you ARE ready to tackle a more self-healing environment.
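Even a toy version of the idea shows what "detect problems before they happen" means: watch a metric stream and flag readings that deviate sharply from the recent baseline. Real AIOps tooling is far more sophisticated than this sketch, and the window size and threshold factor below are arbitrary.

```python
from statistics import mean

def detect_anomalies(readings: list[float], window: int = 5,
                     factor: float = 2.0) -> list[int]:
    """Flag the index of any reading that exceeds the rolling
    average of the previous `window` readings by `factor`."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = mean(readings[i - window:i])
        if readings[i] > baseline * factor:
            anomalies.append(i)
    return anomalies

cpu = [20, 22, 21, 23, 20, 21, 95, 22]
print(detect_anomalies(cpu))  # [6] — the 95% spike is flagged
```

The point isn't the algorithm; it's that anomaly detection only works once the underlying metrics pipeline exists — which is exactly why getting the "ops" part in order comes first.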

The overall vision

Nobody thinks that we’re going to eliminate all of IT operations; even when we do get to the point where our infrastructures are more “self-healing,” we will still need people to train and groom the routines that keep them that way. But in the meantime, it’s our job to create environments in which all of the resources — including the infrastructure — are managed as code, and in a way that not only takes advantage of today’s capabilities but also leaves us ready for tomorrow.
