I’m guessing that whenever your manager approaches you and says “We have a problem,” you sort of know that it really means “I have a problem for you to solve.” Such is often the case with our customers, who are frequently attempting to move from a cascading (waterfall) style of delivering application services on bare metal to a more modern way of approaching continuous delivery geared toward cloud native applications.
The most pressing problem many of them (and perhaps you as well) face in this regard is a lack of mature tools and services for combining all of the software delivery elements in play into a robust set of pipelines that represent every aspect of delivery, from baking, to verification, to release, with the ability to repeat the process as often as needed. You also want to be able to incorporate slight variations based on the circumstances of the change being introduced in any given cycle, and you want to do it all without a disruption in service.
In my role at Mirantis, I make it a point to see things from the customer's side, so I decided early on to document my journey to this land of milk and honey, in the hope that my travels will help others who may be faced with the same "problem" to solve for their company.
Chapter 1: The Birth of a Shmesh
I was at my home office, as usual, working on creating an application services environment that would provide everything one would need to produce a cloud native application that can be continuously delivered.
For most people, the starting point is to define a logical path from treating the infrastructure as “pets” to one that treats it as “cattle.” What I was looking for, I knew, had to provide a portable and immutable infrastructure pattern on which to host the applications.
This led me to Kubernetes as a foundation, and once I had gone “all in” on that concept, I started looking at sets of tools that would sit on top of my portable and immutable infrastructure and fit the needs of my application services environment without placing too much of a burden on the application to provide recoverability and resiliency. I also wanted to fill as many of the ease-of-development criteria as possible.
The next element to focus on was a workflow engine that would enable me to piece together the complicated steps required for baking, testing, and releasing the application services in a continuous flow. That’s where the Spinnaker workflow engine comes in. (Or “Shpinnaker”, as I found myself calling it after the 137th viewing of “Shrek” with my grandkids.)
Shpinnaker (I mean Spinnaker) was originally developed by Netflix, but has since been picked up by some heavyweight development teams at Google, Capital One, and Mirantis to capture and maintain the steps in the release process over time for continuous delivery of cloud native applications. (More to come on this in later chapters.)
In working through the processes required by the various Development and Operations teams I work with, I have discovered that although the Kubernetes framework addresses needs such as "self-healing", "auto-scaling", and "contraction" pretty well, some development features, specifically the internal integration points used by application developers, had to be recreated repeatedly in separate Pods and Deployments. And of course, since there are many different ways to skin the proverbial cat, each instance of firewalling, Domain Name Service, DHCP, and even load balancing tended to be handled slightly differently, which made continuous management and delivery more difficult and complex.
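To make that "self-healing" point concrete, here is a minimal sketch of a Kubernetes Deployment with a liveness probe; Kubernetes replaces Pods that die or fail their health checks to maintain the declared replica count. The names, image, and port below are illustrative, not from any real project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service            # hypothetical service name
spec:
  replicas: 3                   # Kubernetes recreates Pods to keep this count ("self-healing")
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
      - name: app
        image: example.com/demo-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:          # failed checks trigger a container restart
          httpGet:
            path: /healthz      # assumes the app exposes a health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Auto-scaling works the same declarative way: a HorizontalPodAutoscaler adjusts `replicas` up and down based on observed load.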
And that brings us to Istio (or "Shmistio"; we were now up to 153 "Shrek" viewings). Istio provides a pluggable service mesh that integrates with the Kubernetes framework, using Envoy as the proxy service between the control and data planes. (For a primer on what a service mesh is, read our guide to Istio. Istio, too, will be covered in later chapters.)
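As a taste of how unobtrusive that integration is, here is a sketch of Istio's standard convention for enabling automatic Envoy sidecar injection: label a namespace, and Istio's admission webhook adds the proxy container to every Pod created there. The namespace name is made up for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shmesh-demo             # hypothetical namespace
  labels:
    istio-injection: enabled    # Istio injects the Envoy sidecar into new Pods here
```

The application's own Deployments don't change at all, which is exactly why the mesh keeps those integration points (routing, load balancing, security) out of the application code.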
When I deployed my first version of Spinnaker with Istio, I was so pleased with the result that I called to my wife and said, "Look, Bunny, I made a Shmesh!"
She brought a broom and dustpan, looked over the living room, which her (admittedly profitable) eBay business has turned into something out of a Salvation Army donation center, and said, "Where is it?"
"No…" I said. "I used Spinnaker with Istio to make an application service mesh… a Shmesh!"
She just shook her head and walked out of the room. (We’ve been married for a long time.)
Bunny’s underwhelmed response aside, there are a few things I want to share about my “Shmesh” to set the stage for the rest of this story. Specifically, you should be familiar with:
Kubernetes: If you're reading this, you're probably already familiar with Kubernetes, the container orchestration platform I will use as the foundation for our project, but if not, please check out this introduction to Kubernetes for a basic idea so you can understand the concepts.
Istio: Don't worry if you're not yet familiar with Istio; we'll be talking about what you need to know as we go along. I will present some of the features and capabilities of Istio as a service mesh in the context of how it is applied to the K8s framework to facilitate and accelerate the development and continuous delivery of application services and microservices.
Spinnaker: I will also share with you the ways in which a workflow engine such as Spinnaker can be implemented to “bring everything together” in an automated and repeatable way.
Ultimately, I will provide an actual use case in which I put all of these tools to work to form a Continuous Delivery pipeline for one of our applications. The application targets deployment on Google Kubernetes Engine (GKE), where it gets "injected" with Istio components to support a variation on load balancing.
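To give a feel for where we're headed, here is a heavily simplified sketch of what a Spinnaker pipeline definition can look like as JSON, chaining bake, deploy, and verification stages. The application name and stage names are hypothetical, and a real pipeline export carries many more fields:

```json
{
  "application": "demo-service",
  "name": "bake-verify-release",
  "stages": [
    {
      "type": "bake",
      "name": "Bake image",
      "refId": "1"
    },
    {
      "type": "deployManifest",
      "name": "Deploy to GKE",
      "refId": "2",
      "requisiteStageRefIds": ["1"]
    },
    {
      "type": "manualJudgment",
      "name": "Verify before release",
      "refId": "3",
      "requisiteStageRefIds": ["2"]
    }
  ]
}
```

The `requisiteStageRefIds` entries are what wire the stages into a repeatable flow: each stage waits on the ones it names, which is how the "bake, verify, release" sequence gets captured once and replayed on every cycle.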
In part 2, we’ll talk about microservices and get you set up with a working install of Istio.