What is Kubernetes Orchestration?
Like a symphony conductor, Kubernetes provides a host of dynamic services for running, connecting, scaling, and managing complex, multi-container workloads. That's orchestration.
Containers provide a standard way of packaging applications with their dependencies into small, highly portable units. Increasingly, those units aren’t monolithic. Modern applications are composed of smaller parts, often called “microservices,” that each perform a single task, invoked on demand by one another or driven by higher-order services that implement business logic.
Starting such an application and keeping it running can be complicated to do manually. You have to configure containers so they know which ports to listen on. You have to launch individual containers, finding space on a container runtime for them and managing any persistent storage requirements. You have to make sure all your containers find each other and begin working together in concert. You have to handle faults (e.g., a container crashes) and, as required, scaling requests. When new versions of your containers are built, you have to swap them in for the old ones while keeping the application healthy. And you need to respond to infrastructure requirements: e.g., stopping your whole application temporarily when it’s time to apply updates to the underlying node, then starting it again.
Now imagine doing this for dozens of nodes, hundreds of apps, and perhaps thousands of different container images. Clearly, this isn’t scalable without writing lots of scripts and automation, integrating monitoring, and taking on substantial additional work and risk.
Orchestration provides an alternative: instead of building custom scripts, monitoring, and other facilities to manage each application component (and all of them together, and many applications sharing the same server or set of servers)…
Create a set of standard behaviors and services that work for many kinds of applications
Define standard ways of requesting how those behaviors and services should be applied in particular cases: a configuration
Make these configurations hierarchical, so you can easily specify how each part and layer of your application works, from bottom to top
Then also create higher-order services that continually evaluate configurations, monitor the state of all running application components and underlying infrastructure, and actively “converge” the state of components to match configured requirements, based on current conditions. If a container crashes, restart it and hook it back up with peers. If a node fails, restart the containers it was hosting on another node with appropriate resources and relink all connections so the application keeps working.
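In Kubernetes, this convergence loop is driven by declarative configuration. A minimal Deployment manifest illustrates the idea (the names and image below are placeholders chosen for illustration):

```yaml
# deployment.yaml -- a minimal, illustrative example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state: three copies of this container
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          ports:
            - containerPort: 80
```

The manifest states only the desired end state. Kubernetes continually compares it with reality: if a container crashes or its node fails, the control plane starts a replacement elsewhere so three replicas keep running.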
That’s orchestration: standardized, generalized, abstracted, configurable automation for complex, dynamic applications.
What are orchestration tools?
Many different container orchestration systems and platforms exist. All do some of the same things. The simplest kind of orchestration is built into individual container runtimes like Docker Engine or Mirantis Container Runtime (formerly Docker Engine – Enterprise), and is configured using tools and standard configuration file templates like Docker Compose. Single-engine orchestration is extended to clusters of container runtimes with Swarm orchestration — also configurable with Docker Compose. Features of this kind of orchestration include the ability to define, start, stop, and manage multi-container apps as units in multiple, isolated application environments; to preserve volume data across successive restarts (or replacements) of a container (persistent storage); and to replace only changed containers in a running system, making operations very fast.
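A short Compose file shows what this kind of configuration looks like in practice (the service names, images, and password below are illustrative placeholders):

```yaml
# docker-compose.yml -- illustrative sketch of a two-service app with persistent storage
version: "3.8"
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example        # placeholder; use a secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data  # volume survives container restarts/replacements
volumes:
  db-data:
```

With `docker compose up -d`, the whole app starts as a unit on a single engine; the same file format can target a Swarm cluster with `docker stack deploy`.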
Mesos is another orchestrator, created and maintained by an Apache open source project that predates Kubernetes. Mesos turns a collection of container hosts into a cluster, abstracts away physical resources, and provides APIs that applications can consume to manage their own resources, scheduling, and elastic scaling. At this level, Mesos is best suited for hosting applications that are prepared to take responsibility for their own orchestration: cluster-oriented apps like Hadoop are a good example. Open source projects like Marathon provide another layer of orchestration convenience on top of Mesos, adding further abstractions and delivering an easier-to-use environment, much as Kubernetes does.
Today’s most-popular container orchestration environment is Kubernetes, which provides a full suite of general-purpose orchestration methods, services, and agents, and relatively simple standards for configuring them. Kubernetes also lets you define custom orchestration agents, called operators, and custom configurations for them that build on existing functionality.
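Operators extend the same converge-to-desired-state model to custom object types: an operator watches a custom resource and reconciles it just as built-in controllers reconcile Deployments. A hypothetical custom resource might look like this (the `example.com/v1` group, `Database` kind, and all fields are invented for illustration):

```yaml
# A hypothetical custom resource that an operator could reconcile.
# The example.com/v1 API group and Database kind are invented for illustration.
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres
  version: "16"
  replicas: 2
  storage: 10Gi
```

The operator encapsulating this logic would create and manage the underlying Pods, volumes, and Services needed to keep a two-replica database running as specified.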
Developers usually find Kubernetes orchestration a little daunting at first. But most quickly discover that skills developed learning Docker, Docker Compose, and Swarm are readily applicable in Kubernetes environments. In fact, Mirantis provides an open source extension to the Docker CLI that lets Docker Compose configurations run directly on Kubernetes, in addition to working on Docker Engines and Swarm.
Learn more about enterprise Kubernetes.
Learn more about production Kubernetes.
Learn more about the role of secure container runtime.
Learn more about the importance of a secure registry.