Cloud Native 5 Minutes at a Time: What is Kubernetes?

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.

Now that we have a grasp of container fundamentals, it’s time to explore how we can deploy containerized applications using Kubernetes. If you’re hopping on board with this lesson and feel comfortable with the basics of using containers, this is a great place to start. If you need a primer or refresher on containers, you may wish to start with our first unit.

Table of Contents

  1. What is Kubernetes? <- You are here

  2. Setting Up a Kubernetes Learning Environment

  3. The Architecture of a Kubernetes Cluster

  4. Introducing Kubernetes Pods 

  5. What are Kubernetes Deployments?

  6. Using Kubernetes Services, Part 1

  7. Using Kubernetes Services, Part 2

  8. Persistent Data and Storage with Kubernetes

  9. How to Use Kubernetes Secrets with Environment Variables and Volume Mounts

  10. How to Use StatefulSets and Create a Scalable MySQL Server on Kubernetes

  11. Running a Stateful Web App at Scale on Kubernetes

  12. Taking your next steps with Kubernetes

Why Deploy Applications via Containers?

Before the advent of containerization, organizations deployed applications on physical or virtual servers, typically located in the application owner’s data center or at a hosting provider’s facility.

Traditional deployment on physical servers was often wasteful and slow to provision. If an application needed to scale up to handle increased traffic, new machines and load-balancing mechanisms had to be set up manually, and the added capacity was often used inefficiently, leaving processing power and storage paid for but idle.

Virtualization goes some way toward solving these efficiency and provisioning problems by simulating the full stack of application dependencies, from software libraries all the way down to the operating system kernel. Virtual machines (VMs) can be quickly replicated like any other piece of software, and multiple VMs can run side by side on the same physical computer.

For some deployments, virtualization is an excellent solution. But for applications operating at scale, VMs can be sub-optimal in terms of both resource utilization and scalability. The problem lies in the anatomy of a virtual machine: every single instance of the VM is simulating the entire software stack, including the operating system. As you add more and more instances, that turns into a lot of duplicated effort—and spinning up an entire operating system every time you provision a new instance inevitably slows down the process.

Because containers package only the unique dependencies of a given application and share the host’s operating system kernel, they provide a more svelte solution. Deploying containerized apps at scale can provide significant benefits:

  • Resource efficiency: We can fit and run more containerized applications than comparable VMs on a given physical machine, making them more cost-effective.

  • Scalability: Because containers are comparatively lightweight, they’re faster to provision.

  • Resiliency: If an instance crashes, the speed and efficiency of containers make it easier to immediately spin up a new instance.

Good news all around! But deploying complicated, large-scale containerized apps to production across many hosts poses serious challenges. How do we coordinate tasks across all of those containers? How can containers across many different hosts communicate with one another? These challenges have given rise to a new class of software systems: container orchestrators.
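
To make the challenge concrete, here’s a rough sketch of what “manual orchestration” looks like with plain Docker commands. The nginx image and the port numbers are placeholders, and in a real deployment these containers might be spread across many different hosts:

    # Start three instances of a web server by hand
    docker run -d --name web1 -p 8081:80 nginx
    docker run -d --name web2 -p 8082:80 nginx
    docker run -d --name web3 -p 8083:80 nginx

    # If web2 crashes, nothing restarts it until a human notices
    docker ps --filter name=web2

    # Scaling down is just as manual
    docker stop web3 && docker rm web3

Multiply this by hundreds of containers across dozens of hosts, add load balancing, health checks, and cross-host networking, and the case for automation makes itself.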

What is Kubernetes?

Container orchestrators provide a unified, automated solution for networking, scaling, provisioning, maintenance, and many other tasks involved in deployment of containers.

There are several notable container orchestrators with varying use cases:

  • Swarm: An orchestrator directly integrated with the Docker Engine and well-suited to smaller clusters where security is a top priority.

  • Apache Mesos: An older orchestrator of both containerized and non-containerized workloads, originally developed at UC Berkeley and sometimes used for workloads centered around big data.

  • Kubernetes: As of this writing, the industry’s most widely used container orchestrator is unquestionably the open source Kubernetes project that emerged from Google in 2014. Inspired by Google’s internal Borg cluster manager, Kubernetes has been maintained under the auspices of the Cloud Native Computing Foundation (CNCF) since 2015.

Pronounced (roughly) “koo-brr-net-ees” and sometimes shortened to k8s (“kay-eights”, or “kates”), the project’s name derives from the Ancient Greek κυβερνήτης, meaning “helmsman” or “pilot.” If we understand software containers through the metaphor of a shipping container, then we can think of Kubernetes as the helmsman that steers those containers where they need to go.

That metaphor is illustrative, but incomplete. Kubernetes is more than a top-down giver of orders; it’s also the substrate (or “underlying layer”) that containerized applications run on. In this sense, Kubernetes isn’t merely the ship’s helmsman but the ship itself: a self-contained and potentially massive vessel like an aircraft carrier or the starship Enterprise, a floating world that might host all kinds of activity.

An organization might run dozens or hundreds of applications or services on a Kubernetes cluster—meaning a set of one or more physical or virtual machines gathered into an organized software substrate by Kubernetes. In this sense, Kubernetes is sometimes understood as a sort of operating system for the cloud.
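
As a preview of how that works in practice, here’s a minimal, illustrative sketch of the declarative style Kubernetes uses. The names and the nginx image are placeholders, and we’ll cover Deployments properly later in this series:

    # Declare the desired state: three replicas of a web server.
    # Kubernetes then works continuously to make reality match.
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx
            ports:
            - containerPort: 80
    EOF

If a container crashes or a machine goes offline, Kubernetes notices the gap between the declared and actual state and starts a replacement automatically; scaling up is a one-line change to replicas rather than a manual provisioning exercise.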

Who runs Kubernetes?

Kubernetes is designed to run enterprise applications at scale, and it has achieved widespread adoption for this purpose. The Cloud Native Computing Foundation’s 2021 Cloud Native Survey found that 96% of responding organizations were either using or evaluating Kubernetes.

Within those organizations, there are generally two personas who interact with Kubernetes: developers and operators. These two roles have different priorities, needs, and usage patterns when it comes to Kubernetes:

  • Developers need to be able to access resources on a Kubernetes cluster, build applications for a Kubernetes environment, and run those applications on the cluster

  • Operators need to be able to manage and monitor the cluster and all of its attendant infrastructure, including storage and networking considerations

There is some overlap between the skills and knowledge that each of these personas will require (and indeed, under some circumstances the same person might fulfill both roles)—developers and operators alike need a basic understanding of the anatomy of a Kubernetes cluster, for example.
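
For a rough sense of how those day-to-day concerns differ, here are a few standard kubectl commands each persona might reach for. The file, pod, and node names are hypothetical placeholders:

    # A developer's loop centers on their own workloads
    kubectl apply -f my-app.yaml      # deploy or update an application
    kubectl get pods                  # check that its pods are running
    kubectl logs my-app-pod-abc123    # read one pod's application logs

    # An operator's loop centers on the cluster itself
    kubectl get nodes                 # check the health of the cluster's machines
    kubectl top nodes                 # inspect resource usage (requires metrics-server)
    kubectl describe node worker-1    # dig into a single node's state

Don’t worry about memorizing these now; we’ll meet kubectl properly once we have a cluster of our own.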

Broadly speaking, however, this series will be focused on what developers should know to use Kubernetes successfully. In the lessons that follow, we will learn how Kubernetes works, how to build apps for deployment on Kubernetes, and how to streamline the developer experience. By the end of the unit, we will have deployed multiple complex apps to our own cluster.

Our five minutes are up for today—next time, we’ll get a Kubernetes cluster up and running.
