Scaling with Kubernetes DaemonSets

Nick Chase - March 26, 2017

We’re used to thinking about scaling from the point of view of a deployment: we want it to scale up under different conditions, so it looks for appropriate nodes and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify. They are useful for running node-level background services, such as system resource monitoring or log collection. DaemonSets are also crucial if you want a specific service running before any other pods start.
For example, you might create a DaemonSet that asks Kubernetes to run Nginx any time you create a node with the label app=frontend-node. Let’s take a look at how that works.

Creating a Kubernetes DaemonSet

Let’s start by looking at a sample YAML file to define a Kubernetes DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
        - name: webserver
          image: nginx
          ports:
          - containerPort: 80

Here we’re creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property — in this case, app=frontend-webserver.

The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.

The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers.  In other words, these pods are deployed automatically; there’s no input at all from the scheduler, so the schedulability of a node isn’t taken into account.  On the other hand, DaemonSets are a great way to deploy a collection of pods that need to be running before other objects.
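Before creating the DaemonSet, you can check which nodes would currently match this selector. A quick way to do that (using the label key and value from the sample manifest above) is to filter nodes by label:

```shell
# List only the nodes carrying the label our DaemonSet selects on
kubectl get nodes -l app=frontend-node

# Or show all nodes with their labels to see which ones would match
kubectl get nodes --show-labels
```

If the first command returns nothing, the DaemonSet will create no pods until a matching node appears.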

Let’s go ahead and create the Kubernetes DaemonSet.  Create a configuration file called ds.yaml with the definition in it and run the command:

$ kubectl create -f ds.yaml
daemonset "frontend" created
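Even before any pods exist, you can confirm the DaemonSet object itself was created; its desired and current pod counts will stay at zero until a node matches the selector:

```shell
# Inspect the DaemonSet; DESIRED/CURRENT remain 0 until a node matches the nodeSelector
kubectl get ds frontend

# For more detail, including events and the pod template:
kubectl describe ds frontend
```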

Now let’s see how we can instruct a Kubernetes DaemonSet to start or remove pods automatically.

Scale Up or Scale Down a DaemonSet

If we check to see if the pods have been deployed, we’ll see that they haven’t:

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE

That’s because we don’t yet have any nodes that are part of our DaemonSet.  If we look at the nodes we do have …

$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d

We can go ahead and add at least one of them by adding the app=frontend-node label:

$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled

Now if we monitor the list of pods again…

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s

We can see that the pod was started without us taking any additional action.  

Now we have a single web server running. Using kubectl, you can scale the DaemonSet up simply by labeling another node, as in this example:

$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled

If we check the list of pods again, we can see that a new one was automatically started:

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s

If we remove a node from the DaemonSet, any related pods are automatically terminated:

$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m

Updating DaemonSets, and improvements in Kubernetes 1.6

OK, so how do we update a running DaemonSet?  Well, as of Kubernetes version 1.5, the answer is “you don’t.” Currently, it’s possible to change the template of a DaemonSet, but it won’t affect the pods that are already running.  
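The usual workaround in those earlier versions is to edit the DaemonSet template and then delete the existing pods yourself; the DaemonSet controller recreates them from the updated template. Using the pod label from the sample manifest above, that looks like:

```shell
# After changing the DaemonSet's template, delete its pods;
# the controller immediately recreates them from the new template.
kubectl delete pods -l app=frontend-webserver
```

Note that this briefly takes the service down on each node, which is exactly the problem rolling updates solve.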

Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You’ll have to set the updateStrategy, as in:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
        - name: webserver
          image: nginx
          ports:
          - containerPort: 80

Once you’ve done that, you can make changes and they’ll propagate to the running pods. For example, you can change the image on which the containers are based:

$ kubectl set image ds/frontend webserver=httpd
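If your kubectl version supports it (rollout commands gained DaemonSet support around the 1.6/1.7 timeframe), you can then watch the update progress node by node:

```shell
# Watch the rolling update of the DaemonSet until it completes
kubectl rollout status ds/frontend
```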

If you want to make more substantive changes, you can edit or patch the DaemonSet:

kubectl edit ds/frontend

or

kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"

(Obviously you would use your own DaemonSet names and files!)
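As a concrete illustration, a patch can also be supplied inline as a strategic merge patch; the field path here matches the sample manifest above, and the httpd image swap is just an example:

```shell
# Patch the DaemonSet's pod template to switch the webserver container to httpd
kubectl patch ds/frontend -p '{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"httpd"}]}}}}'
```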

So that’s the basics of working with DaemonSets.  What else would you like to learn about them? Let us know in the comments below.
