An App Modernization Diary
Migrating and modernizing applications is the most direct route to realizing payback from any cloud migration. Here’s how it really works.
"Nobody wants to modernize applications," says Anoop Kumar, VP of Worldwide Solution Engineering and Pre-Sales at Mirantis. "Developers don’t want to do it. It’s boring. It’s grueling. The people who wrote these apps have long since gone away, in many cases. There may be no docs. There may be only partial source code. Or none at all."
"IT and software leaders don’t want to do it, either, really," he continues. "They know it should be done. They know maintaining these pet apps is hard and extending them is basically impossible. They know they’re locked into using old operating system kernels and components because of dependencies. They know the pet apps need constant care and feeding. And they know this is all very risky: the apps are fragile, and their attack surfaces are growing."
"But they also know their organizations are looking at them for innovation and progress: new applications," Kumar continues. "They don’t want to pull teams off building new code and throw them at modernizing old apps for months or years on end."
"So it’s easy to get caught," says Kumar. "You have a vision. You have a mandate from leadership to innovate. And it comes to a head when you need to do something big, like migrate from VMware to some other cloud, or get off the public cloud and build a new private cloud, or move workloads onto Kubernetes."
"At that point," Kumar continues, "the pressures mount up. You’re building a new thing. It’s taking a lot of attention and funds. You’ve sold the organization on the new thing’s ability to increase speed and reduce TCO. So it would be nice to show ROI pretty quickly. And you figure the best way to do this is to build new apps on the new thing – makes sense. But meanwhile, the legacy stuff is anchoring you to old infrastructure, processes, risks, and costs. And now all of this is very visible."
Kumar leads Mirantis Professional Services, whose Application Migration and Modernization platform provides a way out of the woods for organizations that find themselves in some variation of this scenario: lots of legacy or conventional applications, mandate to move forward onto cloud and cloud-native platforms, no resources or plan for bringing existing apps along.
Conversations and Plans
"Our customers are very smart," Kumar says. "They understand tech. They know all about their costs and risks. So in our first calls or meetings, we work to communicate several important things:"
"First," he says, "we demonstrate that we have a platform and a process and expertise and experience that work together to map out every app they care about – even the undocumented legacy stuff. And then we run what we call X-Ray tools on each app, to identify all its local and remote dependencies and connections. This is important, because our discovery tools break the paralysis around not knowing exactly how things work."
"Second," he continues, "we show that our tools rank these apps by how difficult they’ll be to modernize, and we’ll show how we produce a simple plan for grouping them and handling them efficiently. We don’t normally recommend modernizing everything. This is important too: customers want to reduce risks, costs, operating overheads. And they want not to do anything without a demonstrated benefit that justifies the cost and time of obtaining it. So what our tools do is rank each app by the so-called Twelve Factors list of criteria for building good cloud-native applications. The more factors an application already satisfies, the easier it is to move and/or modernize. Given this information, customers can make choices that make sense for them. Most often, customers choose to prioritize what will deliver the most impact in the shortest timeframe. For example, one recent customer asked us to start by moving 700 or so web apps to Kubernetes – something we managed in just under a year, with the first apps completed in under two months."
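The ranking idea above can be sketched in a few lines. This is a hypothetical illustration, not AMMP's actual scoring logic: each app is checked against the twelve-factor criteria it already satisfies, and apps satisfying more factors are ranked as easier to move.

```python
# Minimal sketch of twelve-factor ranking (hypothetical; the real tooling's
# heuristics are more sophisticated). More factors satisfied = less work.
TWELVE_FACTORS = [
    "codebase", "dependencies", "config", "backing_services",
    "build_release_run", "processes", "port_binding", "concurrency",
    "disposability", "dev_prod_parity", "logs", "admin_processes",
]

def score(app: dict) -> int:
    """Count how many of the twelve factors an app already satisfies."""
    return sum(1 for f in TWELVE_FACTORS if app.get(f, False))

def rank_for_migration(apps: dict) -> list:
    """Order apps easiest-first: a higher score means less modernization work."""
    return sorted(apps, key=lambda name: score(apps[name]), reverse=True)

# Illustrative inventory (names and attributes are invented)
apps = {
    "billing":  {"codebase": True, "logs": True},        # partial fit
    "webstore": {f: True for f in TWELVE_FACTORS},       # already cloud-native
    "ledger":   {},                                      # undocumented legacy app
}
print(rank_for_migration(apps))  # easiest candidates come first
```

In practice the per-factor checks would come from automated discovery rather than hand-filled booleans, but the ordering principle is the same.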
Many Paths to Goal
Kumar goes on: "Depending on the application – and of course, on the organization’s technical and business constraints – there are potentially several paths we can follow. We talk about the ‘Five Rs’: We can rehost some applications – just move them to new infrastructure. We can replatform others: replacing a host operating system, or moving a web application into containers without changing much about how it works. Still others we refactor: removing technical debt and improving nonfunctional aspects of the app, like how you scale and update it. And of course, there may be applications it makes no sense to work on, or that we want to postpone working on for business or technical reasons. These we choose either to retain or to retire. We can also rewrite legacy applications from scratch for cloud native platforms," he adds, "though this is obviously a longer process, with reverse-engineering, requirements validation, and other steps."
"The third thing we demonstrate is that our platform and process automate how we execute the migration and modernization plan at scale – letting us move fast, with high confidence, and deliver exceptional results. We show them how we auto-generate Dockerfiles as part of containerization workflows. We show them how we curate base container images for them, so we can guarantee zero CVEs (read: known security vulnerabilities) in those images. In fact, we do guarantee this: the only CVEs in production applications that we handle are the ones in customer code that our customers have decided to let stand – usually because our treatment has externally mitigated the issue."
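To make the Dockerfile auto-generation step concrete, here is a toy sketch of the idea: a generator that emits a Dockerfile from metadata a discovery pass might produce. The registry names, curated-image tags, and metadata fields are all invented for illustration; they are not Mirantis's actual output.

```python
# Toy sketch of Dockerfile auto-generation (illustrative only; base-image
# names and detection fields are hypothetical).
CURATED_BASES = {  # curated, CVE-scanned base images (placeholder tags)
    "python": "registry.example.com/hardened/python:3.12",
    "java":   "registry.example.com/hardened/jre:21",
}

def generate_dockerfile(app: dict) -> str:
    """Emit a Dockerfile from metadata a discovery pass might produce."""
    base = CURATED_BASES[app["runtime"]]
    lines = [
        f"FROM {base}",
        "WORKDIR /app",
        f"COPY {app['source_dir']} /app",
    ]
    if app.get("install_cmd"):
        lines.append(f"RUN {app['install_cmd']}")
    lines.append(f"EXPOSE {app['port']}")
    lines.append(f'CMD ["{app["entrypoint"]}"]')  # exec-form CMD
    return "\n".join(lines) + "\n"

print(generate_dockerfile({
    "runtime": "python", "source_dir": "src/", "port": 8080,
    "install_cmd": "pip install -r requirements.txt",
    "entrypoint": "gunicorn",
}))
```

The point is that the curated base image is injected automatically, so every generated container starts from a known-clean foundation.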
"And of course," he continues, "we show them how we’ll automate workflows for building, containerizing, integrating, testing and validating their apps. And this is valuable for two reasons at least: first, it shows we know how to move fast and avoid manual labor wherever possible. Everyone in tech now understands why what Google calls ‘toil’ is bad: it’s slow, it’s boring, it’s not repeatable or scalable, and it creates errors. Second, it shows them actual, mature software engineering workflows targeting their new cloud – a Kubernetes cluster, for example."
"Sometimes, our customers are already clued into all this. And of course, we’re happy to work within their existing framework when that happens. But often – and you have to remember the timing here – the customer may be just now moving onto a new cloud, or Kubernetes. Or in some cases, they’ve spent some time building a Kubernetes cluster out, or experimenting with Kubernetes on a public cloud, but haven’t yet had time to fully mature their own internal dev and ops processes. These may be relatively new things for them. And so seeing how we do it – a process and tooling we’ve matured and tweaked over many years – can be very valuable in helping them get tooled-up to build new stuff. Their developers and PMs will also work closely with us through the whole engagement, so a lot of our knowledge gets transferred."
Discovery and Refinements
"When we demonstrate our AI-based discovery software," Kumar says, "people are fascinated by it. But then they start thinking ‘whoa – do I really want an outside service organization running that kind of software inside my organization?’ It’s important to know that we don’t actually do that. Our process normally takes place entirely inside a development environment, which we assume the customer is maintaining in a state similar to production, but isolated completely from it. In the vast majority of cases, dev environments contain only sanitized test data, and are isolated by security mechanisms to prevent access to anything on the production side. Then the workflows and other tooling we create are also confined to dev – the customer gets full access to validate everything we do and all the revised applications and components we deliver. They have complete control over what they decide to promote to production."
He continues: "The AMMP discovery platform is small and light enough to run on a laptop, though what we typically do is visit the customer’s premises and install it as a Docker-based virtual appliance. This works even in an air-gapped environment. We can also install it at a larger scale on a compliant Kubernetes platform, which the customer would need to provide. And then we run it – actually, we run it many times in different ways, because initially we need to see what kind of information it brings back, and where that can be enriched if we provide AMMP with more information and perhaps expand access permissions in carefully controlled ways. For example, the platform can tell that an application is talking to an IP address, and it can tell to some extent what the software is trying to do, but we’d prefer to know without any ambiguity that a time server is sitting on that IP, and so that the code is retrieving the current time."
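The enrichment step described above can be sketched as follows. This is a hypothetical data model, not AMMP internals: raw discovery yields (IP, port) edges, well-known ports suggest a guess, and a customer-supplied inventory resolves the ambiguity authoritatively – for instance, confirming that traffic to port 123 really is a time server.

```python
# Sketch of dependency-label enrichment (hypothetical structures).
# Well-known ports give a best guess; the customer's inventory overrides it.
WELL_KNOWN_PORTS = {123: "ntp", 53: "dns", 5432: "postgresql", 3306: "mysql"}

def enrich(connections, inventory):
    """Label each observed (ip, port) edge with the best available identity."""
    enriched = []
    for ip, port in connections:
        label = inventory.get(ip) or WELL_KNOWN_PORTS.get(port, "unknown")
        enriched.append({"ip": ip, "port": port, "service": label})
    return enriched

observed = [("10.0.0.7", 123), ("10.0.0.9", 9999)]          # from discovery
inventory = {"10.0.0.9": "legacy-license-server"}           # from the customer
for edge in enrich(observed, inventory):
    print(edge)
```

Each run with richer inputs shrinks the set of "unknown" edges, which is why the discovery passes are repeated rather than one-shot.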
"As we perform initial discovery steps," Kumar goes on, "we’re also talking with the customer and finding out all we can about the design of their target platform. And this lets us inform AMMP so that its recommendations, for example, take into account that certain classes of application will want to live behind the new service mesh. The result of this is really reliable recommendations that take many details into account, and we’re finding that they’re very accurate in proposing appropriate scopes and phases, and estimating time requirements, headcounts, and ultimately, costs."
Execution, Delivery, Validation and Future-Proofing
"Then the nitty-gritty stuff starts happening," Kumar says. "We start executing on the agreed-upon plan. Normally, this starts happening pretty quickly. It’s not unusual, for example, for us to begin transforming one category of applications while our engineers are still doing detailed discovery around more complex apps and systems. In most cases, we work remotely with access to the dev environment, which enables speed while keeping us in our lane."
"As always, our plans recommend strategies we think will deliver maximum practical impact in the shortest possible time. And our tooling helps with this. Think about two legacy applications, say … each with three pretty monolithic processes and a database. And each application stands alone, though the code shows they’re sharing two out of three processes between them. What AMMP would typically recommend, here, is a multi-step transformation process. The first thing we’d do is convert each process into a container and refactor them all to share a single, resilient database instance. This is all automated – our platform manages the conversion. We end up with a simpler application architecture, in containers, that’s functionally equivalent to the legacy applications and easier to maintain."
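The consolidation step in the example above can be sketched as a simple plan computation. The structures and app names here are invented: each legacy app lists its processes, shared processes collapse into a single container image, and both apps are then pointed at one resilient database instance.

```python
# Sketch of the consolidation plan (hypothetical; the real platform automates
# the conversion itself, not just the planning).
def plan_containers(apps: dict) -> dict:
    """Map each unique process to one container; dedupe processes shared by apps."""
    containers = {}
    for app, procs in apps.items():
        for proc in procs:
            containers.setdefault(proc, []).append(app)
    return containers

# Two legacy apps, three monolithic processes each, two processes shared
legacy = {
    "orders":  ["auth", "catalog", "checkout"],
    "returns": ["auth", "catalog", "refunds"],
}
plan = plan_containers(legacy)
shared = [p for p, users in plan.items() if len(users) > 1]
print(f"{len(plan)} containers instead of 6 processes; shared: {shared}")
# Both apps then use a single resilient database instance instead of two.
```

The result mirrors the scenario in the text: six legacy processes become four containers, functionally equivalent but simpler to maintain.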
"Next," Kumar goes on, "we’d start factoring out functionality into new microservices, reducing complexity, increasing reusability, and enabling that functionality to scale. And we could continue decomposing the application until it’s all microservices. But there are also places along this iterative path – we actually call it ‘iterative transformation’, and we can pursue it continuously – where we can stop: where things are good enough, performant and resilient enough. At each step, the customer steps in to validate and judge the work, and typically, our outputs travel rapidly into production, getting real-world use that we can apply as transformation continues."
"Some situations – and plans – are fairly simple. Others become complicated," Kumar explains. "As part of what we do, we often deal with multiple business units, each of which owns certain applications. In such situations, we often plan transformation in successive, parallel waves, based on technical relations among applications or customer priorities. We usually deliver transformed applications into a target data center environment where resources are segregated and dedicated to each group. Every group gets appropriate attention, they all see results, and we generate reports at the conclusion of each wave step."
"As we modernize applications, of course, we’re also standardizing," says Kumar. "The goal here is to leave the customer with the most-standardized, controllable, operationally-efficient software base and environment; and also make it very easy for customers to use the apps we modernize in multiple or future environments. A lot of our customers are committed to a hybrid cloud vision – so we build everything to run on any CNCF-certified Kubernetes, paying strict attention to avoiding inconsistent features between distributions, and staying away from features we know the Kubernetes developers may deprecate. We create blueprints for apps as we go – each comprising documentation, Helm charts and other automation – that let the customer deploy these apps on any compliant cluster. And in every case, we provide the customer with verified base images, well-structured code repositories, and tooling that lets them take over the transformed applications, maintain, and improve them."
"And then we stay available to our customers over the long haul," Kumar concludes. "By the time a long-term Application Migration/Modernization project is concluded, we’ve learned a lot – our customers have learned a lot – and we’ve built a lot of value, together. So we can, in partnership with customers, help them address new cloud challenges as mandates emerge to do so. It’s almost never a ‘one-and-done’ proposition."