
for Modern Apps

Deliver great software fast — get guaranteed results — with frameworks, workflows, automation, and proactive support


3. Getting to “just push your code”

Accelerating modern software development and reducing operations overheads requires coordinating a lot of powerful moving parts

Got Kubernetes? Need to develop modern software for it? Here are some of the subsystems, tools, and services required to reach the point where you can “just push your code.”



Pushing code to a shared version control repository (for example, on GitHub, or locally maintained) has become the definitive first step for developer and operational automation in almost all environments – particularly so in modern container-oriented ones. The push can trigger actions causing the new code to be read by an automation pipeline that may:

Scan the source itself for quality and security issues, along with associated manifests for issues with correlated components, base container requests, and more.

Build and compile the code as needed.

Package code into containers.

Pre-test containers to prevent basic errors from moving further down the pipeline.

Integrate containers with one another, and with the services required to run the application, in a test environment as much like the ultimate production environment as possible.

As a final step, the application can be deployed to the test environment and operated, in the process running further automated tests built into the code itself, as well as external tests applied by the test bed system—think fuzz tests, automated UI tests, and so on. If the code passes all tests, it can be promoted (a process usually approved by humans) – either to a next level of candidacy for human QA, or in some cases, to production.
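The sequence above can be sketched as a fail-fast pipeline: each stage runs in order, and a failure at any stage stops promotion before later stages execute. The stage names, the toy checks, and the image tag below are illustrative assumptions, not any particular CI system’s API:

```python
# Illustrative fail-fast CI pipeline: stages run in order; the first
# failure stops everything downstream. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Artifact:
    """State carried between pipeline stages."""
    source: str
    images: List[str] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

def scan(a: Artifact) -> bool:
    a.log.append("scan")             # source + manifest scanning
    return "eval(" not in a.source   # toy stand-in for a security check

def build(a: Artifact) -> bool:
    a.log.append("build")            # compile / build as needed
    return True

def package(a: Artifact) -> bool:
    a.log.append("package")          # package code into a container
    a.images.append("registry.example.com/app:candidate")  # hypothetical tag
    return True

def pretest(a: Artifact) -> bool:
    a.log.append("pretest")          # catch basic errors early
    return bool(a.images)

def run_pipeline(a: Artifact, stages: List[Callable[[Artifact], bool]]) -> bool:
    for stage in stages:
        if not stage(a):
            return False             # fail fast: later stages never run
    return True

artifact = Artifact(source="print('hello')")
ok = run_pipeline(artifact, [scan, build, package, pretest])
```

Real pipelines differ in their stages and tooling, but the fail-fast shape is the same: a problem caught at scan time never consumes build, packaging, or test resources.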

This kind of system can enable you to deliver changes to production very rapidly. Some organizations maintain pipelines that deliver changes to production every time a change is made. Others may release hourly, daily, or weekly, incorporating more changes in a release (which can add risk). Either way, organizations that do continuous integration and delivery are typically backstopped by layers of additional automation that use a range of techniques to minimize the impact of possible issues on users. An example is “canary releases,” which expose only a fraction of trusted, current users to a new release, and roll back if issues are encountered.
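The routing half of a canary release can be sketched in a few lines: hash each user to a stable bucket so the same user always sees the same version, and expose only a small fraction of buckets to the new release. The 5% fraction and the user IDs below are illustrative assumptions:

```python
# Sketch of deterministic canary routing. Hashing gives each user a
# stable bucket, so a user's experience doesn't flip between versions
# on every request. The fraction and IDs are made up for illustration.

import hashlib

def canary_bucket(user_id: str, buckets: int = 100) -> int:
    """Map a user to a stable bucket in [0, buckets)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % buckets

def routes_to_canary(user_id: str, percent: int = 5) -> bool:
    """True if this user falls inside the canary fraction."""
    return canary_bucket(user_id) < percent

# Widen the rollout by raising `percent`; set it to 0 to pull every
# user back to the stable release instantly (the "roll back" path).
exposed = sum(routes_to_canary(f"user-{i}") for i in range(1000))
```

A production canary controller adds the other half: watching error rates and latency for the exposed fraction, and dropping the percentage to zero automatically when they degrade.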

Fully-realized Continuous Integration and Delivery (CI/CD) systems are themselves complicated integrations of complex services and applications. Such systems are absolutely critical path – among other things, they:

Marshal important components of your security infrastructure, such as scanning, container encryption, and signing

Enforce critical operational and compliance policies, and enable auditability and software bill-of-materials generation required for compliance – among many other no-compromise tasks

VIDEO: How to reduce your risk and liability with Kubernetes

They also, in principle, can form the “point of the spear” for analytics and cost-optimization around cluster utilization, which relates to application architecture and configuration.

So CI/CD pipelines are hard to build, with lots of moving parts. And they’re hard to maintain, because of frequent changes in how they’re used, frequent updates to their own dependencies (all those moving parts need to run somewhere, on something), and frequent updates to the dependencies they invoke in order to build your code.


Few developers have the skills to build CI/CD pipelines. Maintaining them can be a more-than-full-time job, even for a single application. You not only need to keep them functioning correctly, but also adapt them to match changing requirements and leverage new technologies. And many organizations need to maintain differentiated pipelines for dozens or hundreds of applications.

What’s more, even quite-sophisticated front-end CI/CD doesn’t fully enable “just push your code” simplicity. To get there, you need to automate all the way through to cluster operations. Rapid releases via CI/CD require fine-tuned automation that leverages sophisticated native Kubernetes cluster functionality (such as rollbacks) and underlying infrastructure automation (such as on-demand cluster scaling) – operations that are time-consuming and error-prone to attempt manually. In effect, you can’t think about using these techniques unless you fully automate them.
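As a concrete example of the cluster functionality involved, here is a sketch of the scaling rule Kubernetes’ Horizontal Pod Autoscaler documents: desired replicas = ceil(current replicas × current metric ÷ target metric). The pod counts and CPU figures below are made up for illustration:

```python
# Sketch of the Horizontal Pod Autoscaler's documented scaling rule.
# The controller recomputes this continuously; the numbers are examples.

import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Replica count needed to bring the average metric back to target."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target: scale out to 6 pods.
scale_out = desired_replicas(4, 90.0, 60.0)   # → 6
# 6 pods averaging 30% CPU against a 60% target: scale in to 3 pods.
scale_in = desired_replicas(6, 30.0, 60.0)    # → 3
```

The arithmetic is trivial; what makes it safe at production speed is the surrounding automation – metric collection, stabilization windows, and rollback paths – that no one should be driving by hand.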

Mirantis can help

Mirantis is highly skilled in creating and maintaining CI/CD pipelines and DevOps automation for Kubernetes – and doing so at scale. Mirantis Professional Services can design, build, and manage custom CI/CD workflows, tuned to your requirements, and using your preferred components. Gain the expertise of a global bench of CI/CD and software development automation specialists for a predictable price.

Application Frameworks

Kubernetes reaches its full potential as an automation framework for application operations only when applications are designed in particular, Kubernetes-friendly ways, and configured to leverage Kubernetes’ built-in functionalities for restarting failed workloads, distributing components to achieve resilience through redundancy, and scaling components out efficiently in response to varying traffic demands. Apps running on Kubernetes – particularly in these Kubernetes-friendly distributed architectures – can break in complex ways, making observing and fixing them a challenge.

These days, however, depending on the kind of apps you’re building, developers may have choices beyond mounting the learning curve and designing and implementing their own Kubernetes application architectures.

Open source frameworks like Lagoon replace heavier and more restrictive solutions like Platform-as-a-Service (PaaS) and operationalize certain kinds of common application architectures. For example, web apps are traditionally built on a server stack with a Content Management System and a backing database. Lagoon fully operationalizes this application footprint, providing resilience, autoscaling, and many other features, and even letting developers with little familiarity with Kubernetes “just push their code.”

VIDEO: Automating App Delivery with amazee.io’s Lagoon. Watch the modern apps playlist

Mirantis can help

The trouble with such frameworks, however, is that they also require installation, maintenance, and other operations. Lagoon, for example, includes a core component – effectively a pipeline – that takes your code and builds it; then hands it off to a cluster-resident component that operationalizes the built configuration, making everything work automatically. It’s amazing. But it’s also a lot of moving parts.

Happily, developers have choices here, as well. For example, amazee.io – a Mirantis subsidiary – founded the Lagoon project, and offers two services that make it possible to obtain Lagoon’s benefits without needing to install, configure, and maintain Lagoon itself. One, called Lagoon-as-a-Service, hosts the Lagoon Core component for you (minimal runtime components live on your cluster). amazee.io Cloud goes one step further: it maintains the entire Lagoon Core and runtime system, and hosts your application in the cloud. You don’t even need your own Kubernetes.


Visualizing Kubernetes While Reducing Manual Steps

To make any of the above solutions work – and much more besides – developers and operators need tools for connecting with one or many Kubernetes clusters, quickly visualizing what’s going on inside them, and iterating over tasks such as changes to configuration files. Classically, devs and ops folks did this via terminals, using Kubernetes’ ‘kubectl’ client: a powerful, scriptable tool, but one with an obscure and complicated syntax.
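To see why, here is a small hand-rolled approximation of one common task (spotting unhealthy pods) using the JSON that ‘kubectl get pods -o json’ emits. The sample document below is an abbreviated, hypothetical stand-in for real kubectl output:

```python
# What the kubectl-plus-JSON-parsing workflow looks like by hand:
# find pods whose phase is not Running. The sample document is a
# hand-written, abbreviated stand-in for real `kubectl` output.

import json

sample = """
{
  "items": [
    {"metadata": {"name": "web-1"},    "status": {"phase": "Running"}},
    {"metadata": {"name": "web-2"},    "status": {"phase": "Failed"}},
    {"metadata": {"name": "worker-1"}, "status": {"phase": "Pending"}}
  ]
}
"""

pods = json.loads(sample)["items"]
unhealthy = [p["metadata"]["name"] for p in pods
             if p["status"]["phase"] != "Running"]
# unhealthy == ["web-2", "worker-1"]
```

Multiply this by dozens of resource types and clusters, and the appeal of a tool that does the querying, parsing, and drill-down for you becomes obvious.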

More recently, “Kubernetes dashboards” have emerged to make basic-to-advanced operations much simpler, and to replace the command line for many common tasks.

Mirantis can help

Lens, sponsored by Mirantis, is the world’s most popular open source Kubernetes IDE. Lens runs on any desktop, any OS, permitting developers and operators to access and manage multiple clusters, deploy from permissioned Helm repositories, and drill down into and iterate over abstractions, components, and containers, live. All this while increasing speed and eliminating the need to memorize and apply complex kubectl commands to extract information, or to wrangle JSON-parsing tools to manage their output. Developers can save up to 20% of productive time iterating changes with Lens versus conventional methods: one workday per week.

Lens Pro, a companion service, enables rapid, secure access by teams to Kubernetes environments, and provides numerous quality-of-life features, including quickstart local dev clusters on machines such as developer laptops. Developers can provision local Kubernetes clusters (with different profiles) for learning and local development. It also enables container image scanning and CVE reporting. Developers can even start, stop, and configure clusters at the click of a button.