What is Kubernetes management?

An effective Kubernetes environment must include the ability to create, scale, update, and observe the clusters that run containers.

Why use Kubernetes?

For as long as there have been computers, there have been difficulties getting applications to run the same way in multiple locations. Developers even have a saying about it: “Well, it works on my machine.”

In the last few years, however, portability and repeatability have been less of a problem due to the use of containers, which effectively encapsulate everything an application needs to run and provide a relatively isolated environment in which that can happen.

Of course, containers bring their own difficulties: now that you’ve got your application running in all of these little boxes, how do you manage the little boxes? That’s where Kubernetes comes in.

How does Kubernetes work?

Kubernetes schedules pods onto a set of worker nodes. Each pod contains one or more containers, and pods find and talk to one another through services.

Workloads are added to Kubernetes via YAML files, such as:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88

When you add a workload to Kubernetes, the control plane schedules it on a node and starts the pod. If you’ve requested multiple replicas, it creates multiple instances of that workload, giving each a unique name and potentially placing them on different nodes.

Should something go wrong with one of those pods, Kubernetes automatically starts another instance to replace it.
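In practice, you rarely create bare pods directly; instead, a Deployment declares the desired replica count and Kubernetes maintains it, replacing failed pods automatically. A minimal sketch, reusing the labels from the pod example above (the name and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 3            # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
```

If one of the three pods dies, the Deployment’s controller notices the mismatch between desired and actual state and starts a replacement.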

Proper Kubernetes management ensures that these resources are available and provides information so you can detect problems before they cause downtime for your clusters.

How do you manage Kubernetes objects and components?

Kubernetes objects and components are managed in much the same way as Kubernetes-based applications: through YAML definition files. For example, to create a new service, we might define it as:

apiVersion: v1
kind: Service
metadata:
  name: rss-service
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

We can then add it using kubectl:

kubectl create -f service.yaml

Kubernetes even enables you to create your own CustomResourceDefinitions, such as:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct

These are also created and managed just like other Kubernetes objects, as in:

apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image

The important thing to remember is that any definition you create should be stored in a version control system such as Git for repeatability.

How do you manage Kubernetes clusters?

Managing Kubernetes clusters can be similar to managing Kubernetes objects, but it probably shouldn’t be. In other words, just because you CAN create a Kubernetes cluster with a YAML file doesn’t mean that you should.

Instead, you should use one of the many tools that exist for creating and managing Kubernetes clusters. What you use is going to depend on what you’re trying to achieve. Tools for managing Kubernetes include:

  • Local development tools such as Docker Desktop or kubeadm provide a relatively easy way to create a small Kubernetes cluster on your own machine, but aren’t suitable for a production application.
  • Managed public cloud Kubernetes options such as Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE) are (relatively) easy to set up and suitable for production use, but can ultimately lock you into their platform, as the actual management of your clusters is performed using proprietary APIs.
  • Enterprise Kubernetes management tools such as Mirantis Kubernetes Engine (formerly Docker Enterprise) enable you to run production-level Kubernetes clusters on existing infrastructure, such as bare metal, VMware, or OpenStack clusters.

Production Kubernetes management

Although it can be straightforward to deploy a development Kubernetes cluster, a true enterprise-grade Kubernetes architecture requires a much greater degree of management.

Specifically, your Kubernetes management tool must be able to manage multiple Kubernetes clusters. In fact, an enterprise-grade Kubernetes management tool will enable you to create, scale, update, and observe clusters, potentially across multiple infrastructures, such as on-prem and public cloud.

An enterprise-grade Kubernetes management tool enables you to create a cluster by defining its parameters, such as the type and number of servers to act as nodes. For example, Mirantis Container Cloud enables you to define a cluster, choosing whether it should run on bare metal, AWS, OpenStack, VMware, and so on. Once you’ve done that, you can add machines to the cluster, and Container Cloud automatically provisions them from that infrastructure and deploys the appropriate software to those nodes.

To scale the cluster, you simply specify additional nodes, and the management tool adds them to the cluster.
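Many such tools build on the upstream Cluster API project, in which a pool of worker nodes is declared as a MachineDeployment and scaled by changing a replica count. A hedged sketch of what that declaration looks like (the cluster and pool names are illustrative, and the provider-specific bootstrap and infrastructure template references are elided):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: worker-pool-1
spec:
  clusterName: my-cluster
  replicas: 5            # raise this number to add worker nodes
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    spec:
      clusterName: my-cluster
      # bootstrap and infrastructureRef fields (provider-specific) go here
```

Scaling the cluster then works the same way as scaling a workload: change the declared replica count, and the controller reconciles the actual node count to match.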

In a true enterprise-grade Kubernetes management system, upgrading should be just as straightforward: you specify the version of Kubernetes a cluster should be running, and the management system performs the upgrade.

A Kubernetes management system should also provide Kubernetes visibility and observability — preferably with standard tools. The system should provide insights into aspects of cluster usage such as CPU load, available storage, and network load. In most cases, these insights will come from a Prometheus-based Kubernetes monitoring tool.
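With Prometheus scraping the usual sources, queries like the following surface the load figures mentioned above. These are sketches that assume the standard cAdvisor and node-exporter metrics are available; exact metric names and labels vary with your setup:

```promql
# CPU usage per node (in cores), from cAdvisor metrics
sum(rate(container_cpu_usage_seconds_total[5m])) by (node)

# Available filesystem space on the root mount, from node-exporter
node_filesystem_avail_bytes{mountpoint="/"}

# Network receive throughput per node
sum(rate(node_network_receive_bytes_total[5m])) by (instance)
```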

To find out the best Kubernetes management tool for you, please contact us and our experts will be happy to help.