

What to think about when you think about microservices

Erick Gregory - May 17, 2023

The Amazon Prime Video engineering blog caused a stir recently when it published a post on reducing costs while scaling up a monitoring service by rearchitecting it from a microservices architecture to a monolith. 

In a time when many folks are talking about cloud repatriation or feeling exhausted with learning new cloud technologies or paradigms, it’s no surprise that this story struck a chord. It’s a fascinating, nuanced technical study—and a quick reading provides plenty of fodder for those frustrated with the cloud status quo. Amazon themselves are abandoning serverless for a good old-fashioned monolith!

Drilling down into this story can tell us a lot about the use cases for microservices. Unpacking the reaction to this story reveals the challenges many are facing in today’s clouds. And all of this can help us better understand what we need to consider when architecting—and rearchitecting—microservices.

Of monoliths and microservices

The Prime Video service in question monitors content streams for defects like audio/video mis-synchronization. That means monitoring many, many concurrent streams. The first iteration of the service was highly distributed, consisting of a number of different components implemented as orchestrated serverless functions. But the large number of data transactions involved led the team straight into high costs and AWS account limits (which apply to Amazon's internal customers no less than to you or me!).

As the Prime Video post by Marcin Kolny puts it:

“We realized that a distributed approach wasn’t bringing a lot of benefits in our specific use case, so we packed all of the components into a single process.”

In this use case, rearchitecting into a single process meant consolidating media converter, defect detector, and orchestration components into a monolith that runs on EC2 and ECS—while cutting S3 storage out of the equation entirely since components can share video frames in-memory.
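The payoff of that consolidation is easy to see in miniature. The sketch below is illustrative only, assuming nothing about the real Prime Video code: once a converter and a detector live in the same process, frames can be handed off through memory rather than round-tripping through object storage. Component names and the "frame" format here are hypothetical.

```python
# Illustrative sketch of in-process data sharing between two components
# that previously would have exchanged data via external storage.
from queue import Queue

frames: Queue = Queue()  # shared in-memory channel, replacing the S3 hop

def media_converter(raw_segments):
    """Decode segments into frames and hand them off in memory."""
    for segment in raw_segments:
        frames.put(f"frame:{segment}")

def defect_detector():
    """Consume frames from memory; no external storage round trip."""
    results = []
    while not frames.empty():
        frame = frames.get()
        results.append((frame, "ok"))  # stand-in for real defect analysis
    return results

media_converter(["seg1", "seg2", "seg3"])
report = defect_detector()
print(len(report))
```

The point isn't the code itself, but what disappears from it: no upload, no download, no per-object storage cost between the two steps.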

The post concludes by noting that the monitoring service is now 90% cheaper. Kolny adds that the decision "whether to use [microservices and serverless components] over monolith has to be made on a case-by-case basis."

Adrian Cockcroft, formerly of Netflix and AWS, argues that this is more a matter of refactoring a microservice than abandoning distributed architectures for a Great Monolith to Rule Them All—after all, this is a single monitoring service now being delivered via containers, not the Prime Video application as a whole. He adds that as far back as 2019, then speaking as an AWS VP of Cloud Architecture Strategy, he advised optimizing serverless applications…

"…also building services using containers to solve for lower startup latency, long running compute jobs, and predictable high traffic."

Sam Newman, author of Building Microservices, noted on Twitter:

“this article is really speaking more about pricing models of functions vs long-running VMs than anything. Still a totally logical architectural driver, but the learnings from this case study likely have a more narrow range of applicability as a result.”

But that more narrow range of applicability hasn’t stopped folks from drawing some pretty broad conclusions.

Unpacking the backlash

37signals CTO David Heinemeier Hansson has garnered a good deal of attention in the last year by questioning many truisms and much conventional wisdom in the world of cloud—detailing, for example, his company's anticipated savings as a result of repatriating from public cloud.

In this vein, he leveled an iconoclastic broadside at microservices on the back of the Prime Video story, referring to the architectural pattern as a “strain of intellectual contagion that just refuses to die” and “madness in almost all cases.”

But that broadside is…well, pretty broad, extrapolating a totalizing conclusion from a case study that has, as Newman points out, a very specific context. Cockcroft observes:

“…there seems to be a popular trigger meme nowadays about microservices being over-sold, and a return to monoliths. There is some truth to that, as I do think microservices were over sold as the answer to everything, and I think this may have arisen from vendors who wanted to sell Kubernetes with a simple marketing message that enterprises needed to modernize by using Kubernetes to do cloud native microservices for everything. What we are seeing is a backlash to that messaging, and a realization that the complexity of Kubernetes has a cost.”

The backlash makes sense. Many teams are burnt out and don’t want to learn new technologies or development patterns—especially if they’re being mandated as part of a general migration, rather than applied thoughtfully on a case-by-case basis. 

But frustration with today's cloud status quo is deeper and wider than the learning curve of Kubernetes. According to Insight Partners, small teams and individual developers are increasingly turning to smaller cloud providers like Vercel rather than "megaclouds." Why? They write that "AWS, GCP, and Azure are starting to look like Costco with too many aisles and value packs," and that developers want easier options that "don't require a doctorate in cloud architecture."

These developers are chafing at something very similar to enterprise teams struggling to build or migrate to Kubernetes-based microservices. It’s a similar problem, in turn, to the one suffered by the Prime Video team in their monitoring service’s first iteration: work on the application was overdetermined by the platform.

Avoiding constraints

The Prime Video team chose a serverless paradigm in order to build their service quickly, but this meant an awkward workaround via S3 for transferring video frames at scale—and dramatically expanded costs. The AWS serverless platform overdetermined the application. When the team reassessed the use case, they came to a different conclusion.

Detractors are using this case study to cudgel the entire cloud native paradigm, but it illustrates exactly why technologies like Kubernetes can be useful. Kubernetes provides a standardized layer of abstraction separating the cloud provider (or other underlying infrastructure) from application workloads.

This is a big part of why Kubernetes was developed: to keep teams and their applications from being constrained by a particular cloud provider. An application tailor-made for an AWS environment, for example, can be difficult to migrate elsewhere.

But there are two important baseline realities that teams should consider when thinking about Kubernetes and building cloud native applications:

Kubernetes alone is not a developer platform.

Upskilling to use Kubernetes is a non-trivial lift—and even once you get there, you've learned to use a system that abstracts infrastructure, not a just-push-your-code developer platform. If you treat vanilla Kubernetes like a developer platform and expect it to act like one, you're going to feel some major friction.

Friction, in turn, can lead to this not-a-platform overdetermining your work on applications—slowing down progress, constraining developers to Kubernetes patterns or functionalities that they understand, and more. Kubernetes can make a fantastic foundation for a developer platform, but a foundation isn’t a house, and understanding this will help you define your expectations—and requirements—accordingly.

A microservices architecture is a specific tool suited for specific problems.

Kubernetes enables microservices and makes a very natural home for, say, stateless apps communicating via RESTful API. But just because you have the hammer of Kubernetes doesn’t mean that every problem is a microservice-shaped nail. 

These conversations get muddied by the often vague and inconsistent ways in which we talk about microservices. As Cockcroft noted, the Prime Video team arguably refactored a microservice. At what level of granular component distribution do we all agree that we’re looking at a microservice? The haziness here can get in the way of clear thinking and discussion.

For the purposes of this discussion, a microservice is an application component (or “service”) that communicates with other components through simple protocols like HTTP, typically governing a self-contained piece of functionality and managed by a single, relatively small team. Microservice architecture is an iteration of services-oriented architecture, applied at a “micro” scale rather than the macro scale of the whole enterprise, emerging from a more modern industry context that is both “cloudier” and populated by more and larger apps maintained by more teams.
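To make that definition concrete, here is a minimal sketch of a microservice in the sense used above: one self-contained piece of functionality, exposed over plain HTTP, small enough for a single team to own. The service name, route, and payload are hypothetical, invented purely for illustration, and Python's standard library stands in for whatever framework a real team would use.

```python
# Minimal sketch of a single-purpose service speaking plain HTTP.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class DefectCheckHandler(BaseHTTPRequestHandler):
    """One self-contained responsibility: answer a health/status query."""
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve() -> HTTPServer:
    # Port 0 asks the OS for any free port.
    server = HTTPServer(("127.0.0.1", 0), DefectCheckHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = serve()
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
    payload = json.load(resp)
server.shutdown()
print(payload["status"])
```

Everything another component needs to know about this service is its URL and its JSON contract, which is exactly the loose coupling the definition describes.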

On the technical side, a microservice architecture can enable components with asymmetrical demand to scale independently of one another. And if components are isolated from one another, a failure in one is less likely to take the others down with it.

On the organizational side, microservices mean that separate teams can work independently on their loosely coupled components, worrying much less about coordination or about stepping on one another's feet with conflicting dependencies or toolkits. In theory, each team can use the languages, frameworks, and tooling best suited to its particular tasks.

These are real benefits—with the right planning, and in the right use case, a microservices approach can really accelerate work. But those benefits don’t come without an important cost in complexity, and microservices shouldn’t be regarded as a default or a panacea. There are other approaches to cloud native applications, including relatively monolithic applications deployed via container or VM. 

If you’re building or rearchitecting a service and mulling your approach, here are some fundamental questions to consider:

1. What are the facts on the ground for this service?

Is demand on your various components likely to be meaningfully bursty or asymmetrical? Or might it be pretty steady and predictable? Do you have data that can speak to this?

How large/coherent is the team managing this service? Would distributing components make it more manageable and separate concerns? 

What are this service’s external dependencies? And what depends on this service? “Loosely coupled” isn’t the same as “uncoupled.” How would distributing components for this service affect its interactions with other services?
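The demand question above can be made concrete with a quick back-of-the-envelope check. This sketch uses made-up per-minute request counts (all names and numbers are illustrative, not real data): a peak-to-mean ratio near 1 suggests steady, predictable load, while a much higher ratio hints that letting components scale independently could pay off.

```python
# Rough burstiness check on per-minute request counts for two
# hypothetical components.
from statistics import mean

def peak_to_mean(counts):
    """Peak-to-mean ratio: ~1.0 means steady load, >>1.0 means bursty."""
    return max(counts) / mean(counts)

steady_component = [100, 110, 95, 105, 100, 98]  # roughly flat traffic
bursty_component = [10, 12, 500, 9, 11, 480]     # occasional huge spikes

print(round(peak_to_mean(steady_component), 2))
print(round(peak_to_mean(bursty_component), 2))
```

A crude metric like this is no substitute for real capacity planning, but it forces the conversation onto data rather than intuition.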

2. What are the costs and benefits of distributing the components in question?

It's important to emphasize here that a microservices architecture can get very complicated very quickly, with countless network calls flying around between countless endpoints for countless replicas. Observability, monitoring, and alerting become extremely important, and investing in that observability becomes part of the cost of realizing microservice benefits, not to mention the introduction of network latency as a critical variable in your application calculus.
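To give a feel for what that observability investment looks like at the smallest possible scale, here is a sketch of timing every inter-service call. The endpoint name is hypothetical, and `call_service` is a stub standing in for a real HTTP/RPC client; in production this role is typically played by a tracing library rather than hand-rolled code.

```python
# Minimal sketch of per-call instrumentation: every network hop
# gets timed and recorded for later monitoring/alerting.
import time
from contextlib import contextmanager

latencies: dict = {}  # endpoint name -> list of observed latencies (seconds)

@contextmanager
def traced(endpoint: str):
    """Record wall-clock latency for one call to `endpoint`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        latencies.setdefault(endpoint, []).append(time.perf_counter() - start)

def call_service(endpoint: str) -> str:
    time.sleep(0.01)  # stand-in for a network round trip
    return "ok"

for _ in range(3):
    with traced("defect-detector"):
        call_service("defect-detector")

print(len(latencies["defect-detector"]))
```

Multiply this by every call path between every pair of services and replicas, and the scale of the observability investment becomes clear.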

For the Prime Video team, the initial benefit of using serverless functions was development velocity. That velocity came at the cost of AWS serverless limitations—and eventually they decided the costs weren’t worth it. 

Even leaving aside the serverless element, a microservice model can accelerate development as well, while providing the potential organizational and efficiency benefits we’ve already discussed. Based on your assessment of the facts on the ground for your service, those benefits might dramatically outweigh costs like increased complexity—or they might not be worth it. 

3. What does your team’s expertise support?

What is your team's level of experience and expertise with microservices architectures? How experienced are they with Kubernetes and containers? Building or rearchitecting microservices-based applications for Kubernetes opens up plenty of pitfalls that can require costly backtracking or refactoring down the road. Fortunately, there are ways to leverage the flexibility of Kubernetes without falling into those traps.

If your team isn’t versed in Kubernetes and doesn’t need or want to be, an application delivery platform like the open source Lagoon might be the right fit for them—it’s an especially good choice for making Kubernetes disappear completely when building web applications. 

If you have existing applications that need to migrate to Kubernetes, you may also want to consider bringing in outside expertise to simplify the problem, so your team can focus on innovation. Mirantis’ Application Modernization services can help you migrate to Kubernetes quickly, thoughtfully, and at scale—download the datasheet to learn more.
