Learning Kubernetes - Episode 13 - Introduction and Explanation of DaemonSet

In this episode, we'll discuss an important Kubernetes controller called DaemonSet. We'll learn how DaemonSet ensures that a copy of a Pod runs on all (or some) nodes in the cluster.

Arman Dwi Pangestu
March 16, 2026
8 min read

Introduction

Note

If you want to read the previous episode, you can click the Episode 12 thumbnail below

Episode 12

In the previous episode, we learned about ReplicaSet, the modern replacement for ReplicationController with enhanced selector capabilities. In episode 13, we'll discuss a different type of controller: DaemonSet.

Note: Here I'll be using a Kubernetes Cluster installed through K3s.

Unlike a ReplicaSet, which maintains a specific number of Pod replicas, a DaemonSet ensures that a copy of a Pod runs on all (or some) nodes in your cluster. This is particularly useful for running cluster-wide services such as log collectors, monitoring agents, or network plugins.

What Is DaemonSet?

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are automatically added to them. As nodes are removed from the cluster, those Pods are garbage collected.

Think of a DaemonSet as the Kubernetes counterpart of a system daemon in Linux: just as a daemon runs on every machine to provide system-level services, a DaemonSet runs a Pod on every node to provide node-level services.

Key characteristics of DaemonSet:

  • One Pod per node - Runs exactly one Pod on each node (by default)
  • Automatic scheduling - New nodes automatically get the DaemonSet Pod
  • Automatic cleanup - Pods are removed when nodes are deleted
  • Node selection - Can target specific nodes using node selectors or affinity
  • System-level services - Perfect for cluster-wide infrastructure components

Why Do We Need DaemonSet?

DaemonSet is designed for workloads that need to run on every node in the cluster:

  • Log collection - Running log collectors like Fluentd or Filebeat on every node
  • Monitoring - Running monitoring agents like Prometheus Node Exporter on every node
  • Network plugins - Running CNI plugins or network proxies on every node
  • Storage daemons - Running storage plugins like Ceph or GlusterFS on every node
  • Security agents - Running security scanning or compliance tools on every node
  • Node maintenance - Running cleanup or maintenance tasks on every node

Without DaemonSet, you would need to:

  • Manually create Pods on each node
  • Track which nodes have the Pod
  • Manually add Pods when new nodes join
  • Manually remove Pods when nodes leave
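The manual alternative can be sketched as a loop that pins one Pod to each node with `spec.nodeName`, which bypasses the scheduler entirely. Note that the node names and the `generate_pod_manifest` helper below are hypothetical, for illustration only:

```shell
# Hypothetical helper: emit a Pod manifest pinned to a single node via
# spec.nodeName (this bypasses the Kubernetes scheduler).
generate_pod_manifest() {
    local node="$1"
    cat <<EOF
apiVersion: v1
kind: Pod
metadata:
    name: log-agent-${node}
spec:
    nodeName: ${node}
    containers:
        - name: agent
          image: nginx:1.25
EOF
}

# Without a DaemonSet, you would run this loop yourself -- and rerun it every
# time a node joins, and clean up manually every time a node leaves:
for node in node1 node2 node3; do
    generate_pod_manifest "$node"    # | sudo kubectl apply -f -
done
```

A DaemonSet replaces all of this bookkeeping with a single manifest.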

DaemonSet vs ReplicaSet

Let's understand the key differences:

| Aspect        | DaemonSet           | ReplicaSet                  |
|---------------|---------------------|-----------------------------|
| Pod count     | One per node        | Fixed number across cluster |
| Scheduling    | Automatic per node  | Scheduler decides placement |
| Use case      | Node-level services | Application replicas        |
| Scaling       | Scales with nodes   | Manual scaling              |
| Node addition | Auto-creates Pod    | No automatic action         |
| Node removal  | Auto-removes Pod    | No automatic action         |

Example scenario:

  • DaemonSet: You want to run a log collector on every node to collect logs from all containers
  • ReplicaSet: You want to run 3 replicas of your web application, distributed across available nodes

Creating a DaemonSet

Let's create a basic DaemonSet:

Example 1: Basic DaemonSet

Create a file named daemonset-basic.yml:

daemonset-basic.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: nginx-daemonset
    labels:
        app: nginx-daemon
spec:
    selector:
        matchLabels:
            app: nginx-daemon
    template:
        metadata:
            labels:
                app: nginx-daemon
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
                  ports:
                      - containerPort: 80

Apply the configuration:

bash
sudo kubectl apply -f daemonset-basic.yml

Verify the DaemonSet is created:

bash
sudo kubectl get daemonset

Or use the shorthand:

bash
sudo kubectl get ds

Output:

bash
NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-daemonset    3         3         3       3            3           <none>          30s

The DESIRED count matches the number of nodes in your cluster.

Check the Pods:

bash
sudo kubectl get pods -o wide

Output:

bash
NAME                    READY   STATUS    RESTARTS   AGE   NODE
nginx-daemonset-abc12   1/1     Running   0          30s   node1
nginx-daemonset-def34   1/1     Running   0          30s   node2
nginx-daemonset-ghi56   1/1     Running   0          30s   node3

Notice that there's exactly one Pod per node.

Viewing DaemonSet Details

To see detailed information about a DaemonSet:

bash
sudo kubectl describe ds nginx-daemonset

Output:

bash
Name:           nginx-daemonset
Selector:       app=nginx-daemon
Node-Selector:  <none>
Labels:         app=nginx-daemon
Annotations:    <none>
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx-daemon
  Containers:
   nginx:
    Image:        nginx:1.25
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-abc12
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-def34
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-ghi56

Node Selection with DaemonSet

By default, a DaemonSet runs on all nodes. You can control which nodes run the DaemonSet Pods using one of the following methods:

Method 1: Node Selector

Run DaemonSet only on nodes with specific labels:

daemonset-node-selector.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: ssd-monitor
spec:
    selector:
        matchLabels:
            app: ssd-monitor
    template:
        metadata:
            labels:
                app: ssd-monitor
        spec:
            nodeSelector:
                disktype: ssd
            containers:
                - name: monitor
                  image: nginx:1.25

This DaemonSet only runs on nodes labeled with disktype: ssd.

First, label a node:

bash
sudo kubectl label nodes node1 disktype=ssd

Apply the DaemonSet:

bash
sudo kubectl apply -f daemonset-node-selector.yml

Check Pods:

bash
sudo kubectl get pods -o wide

You'll see Pods only on nodes with the disktype: ssd label.

Method 2: Node Affinity

More flexible node selection using affinity rules:

daemonset-affinity.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: production-monitor
spec:
    selector:
        matchLabels:
            app: prod-monitor
    template:
        metadata:
            labels:
                app: prod-monitor
        spec:
            affinity:
                nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                        nodeSelectorTerms:
                            - matchExpressions:
                                  - key: environment
                                    operator: In
                                    values:
                                        - production
                                        - staging
            containers:
                - name: monitor
                  image: nginx:1.25

This DaemonSet runs on nodes where environment is either production or staging.

Method 3: Taints and Tolerations

Run DaemonSet on nodes with specific taints:

daemonset-tolerations.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: special-daemon
spec:
    selector:
        matchLabels:
            app: special-daemon
    template:
        metadata:
            labels:
                app: special-daemon
        spec:
            tolerations:
                - key: node-role.kubernetes.io/control-plane
                  operator: Exists
                  effect: NoSchedule
            containers:
                - name: daemon
                  image: nginx:1.25

This DaemonSet can run on control plane nodes (which normally have taints preventing regular Pods).
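Note that clusters upgraded from older Kubernetes versions (pre-1.25 kubeadm) may still carry the legacy `node-role.kubernetes.io/master` taint instead of, or alongside, `control-plane`. Manifests that need to cover both commonly list both tolerations:

```yml
tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master # legacy taint on older clusters
      operator: Exists
      effect: NoSchedule
```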

Practical Examples

Example 1: Log Collector DaemonSet

A realistic example of running Fluentd log collector on every node:

fluentd-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: fluentd
    namespace: kube-system
    labels:
        app: fluentd
spec:
    selector:
        matchLabels:
            app: fluentd
    template:
        metadata:
            labels:
                app: fluentd
        spec:
            tolerations:
                - key: node-role.kubernetes.io/control-plane
                  effect: NoSchedule
            containers:
                - name: fluentd
                  image: fluent/fluentd:v1.16
                  resources:
                      limits:
                          memory: 200Mi
                      requests:
                          cpu: 100m
                          memory: 200Mi
                  volumeMounts:
                      - name: varlog
                        mountPath: /var/log
                      - name: varlibdockercontainers
                        mountPath: /var/lib/docker/containers
                        readOnly: true
            volumes:
                - name: varlog
                  hostPath:
                      path: /var/log
                - name: varlibdockercontainers
                  hostPath:
                      path: /var/lib/docker/containers

This DaemonSet:

  • Runs in kube-system namespace
  • Tolerates control plane taints
  • Mounts host directories to access logs
  • Sets resource limits

Example 2: Monitoring Agent DaemonSet

Running Prometheus Node Exporter on every node:

node-exporter-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: node-exporter
    namespace: monitoring
    labels:
        app: node-exporter
spec:
    selector:
        matchLabels:
            app: node-exporter
    template:
        metadata:
            labels:
                app: node-exporter
        spec:
            hostNetwork: true
            hostPID: true
            containers:
                - name: node-exporter
                  image: prom/node-exporter:latest
                  args:
                      - --path.procfs=/host/proc
                      - --path.sysfs=/host/sys
                      - --collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)
                  ports:
                      - containerPort: 9100
                        hostPort: 9100
                        name: metrics
                  resources:
                      limits:
                          memory: 180Mi
                      requests:
                          cpu: 100m
                          memory: 180Mi
                  volumeMounts:
                      - name: proc
                        mountPath: /host/proc
                        readOnly: true
                      - name: sys
                        mountPath: /host/sys
                        readOnly: true
            volumes:
                - name: proc
                  hostPath:
                      path: /proc
                - name: sys
                  hostPath:
                      path: /sys

This DaemonSet:

  • Uses host network and PID namespace
  • Exposes metrics on port 9100
  • Mounts host /proc and /sys for system metrics

Example 3: Network Plugin DaemonSet

Running a network plugin on every node:

network-plugin-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: kube-proxy
    namespace: kube-system
    labels:
        k8s-app: kube-proxy
spec:
    selector:
        matchLabels:
            k8s-app: kube-proxy
    updateStrategy:
        type: RollingUpdate
        rollingUpdate:
            maxUnavailable: 1
    template:
        metadata:
            labels:
                k8s-app: kube-proxy
        spec:
            priorityClassName: system-node-critical
            hostNetwork: true
            tolerations:
                - operator: Exists
                  effect: NoSchedule
            containers:
                - name: kube-proxy
                  image: registry.k8s.io/kube-proxy:v1.28.0
                  command:
                      - /usr/local/bin/kube-proxy
                      - --config=/var/lib/kube-proxy/config.conf
                  securityContext:
                      privileged: true
                  volumeMounts:
                      - name: kube-proxy
                        mountPath: /var/lib/kube-proxy
            volumes:
                - name: kube-proxy
                  configMap:
                      name: kube-proxy

This DaemonSet:

  • Runs with system-node-critical priority
  • Uses host network
  • Tolerates all taints
  • Runs in privileged mode

Updating DaemonSet

DaemonSet supports two update strategies:

OnDelete Strategy

Pods are only updated when manually deleted:

yml
spec:
    updateStrategy:
        type: OnDelete

With this strategy:

  1. Update the DaemonSet
  2. Manually delete Pods
  3. New Pods are created with updated template
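With OnDelete you control the rollout order yourself, which is handy for canarying the new template on one node before touching the rest. A dry-run sketch of a node-by-node replacement loop (node names are illustrative; the `echo` keeps this from executing, remove it to actually delete):

```shell
# Dry run: print the per-node deletion commands. Each deleted Pod would be
# recreated by the DaemonSet controller from the updated template.
dry_run_rollout() {
    for node in "$@"; do
        echo sudo kubectl delete pod -l app=nginx-daemon \
            --field-selector "spec.nodeName=${node}"
    done
}

dry_run_rollout node1 node2 node3
```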

RollingUpdate Strategy (Default)

Pods are automatically updated in a rolling fashion:

yml
spec:
    updateStrategy:
        type: RollingUpdate
        rollingUpdate:
            maxUnavailable: 1

With this strategy:

  1. Update the DaemonSet
  2. Kubernetes automatically updates Pods one by one
  3. Respects maxUnavailable setting
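As a rough mental model, `maxUnavailable` bounds how many node Pods can be down at once, so a full update takes about `ceil(nodes / maxUnavailable)` sequential waves. A quick sketch of that arithmetic (the helper name is our own):

```shell
# update_waves: worst-case number of sequential replacement rounds for a
# DaemonSet rolling update = ceil(nodes / maxUnavailable).
update_waves() {
    local nodes="$1" max_unavailable="$2"
    echo $(( (nodes + max_unavailable - 1) / max_unavailable ))
}

update_waves 3 1    # 3-node cluster, one Pod down at a time -> 3 waves
update_waves 10 3   # -> 4 waves
```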

Example of updating a DaemonSet:

bash
# Edit the DaemonSet
sudo kubectl edit ds nginx-daemonset
 
# Or update the YAML file and apply
sudo kubectl apply -f daemonset-basic.yml

Watch the rolling update:

bash
sudo kubectl rollout status ds nginx-daemonset

Deleting DaemonSet

Method 1: Delete DaemonSet and Pods

bash
sudo kubectl delete ds nginx-daemonset

This deletes the DaemonSet and all its Pods.

Method 2: Delete DaemonSet but Keep Pods

bash
sudo kubectl delete ds nginx-daemonset --cascade=orphan

This deletes only the DaemonSet, leaving Pods running as orphans.

Common Use Cases

Use Case 1: Cluster Monitoring

Deploy monitoring agents to collect metrics from every node:

monitoring-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: monitoring-agent
spec:
    selector:
        matchLabels:
            app: monitoring
    template:
        metadata:
            labels:
                app: monitoring
        spec:
            containers:
                - name: agent
                  image: monitoring-agent:latest
                  resources:
                      limits:
                          memory: 200Mi
                      requests:
                          cpu: 100m
                          memory: 100Mi

Use Case 2: Log Aggregation

Collect logs from all nodes:

log-collector-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: log-collector
spec:
    selector:
        matchLabels:
            app: logs
    template:
        metadata:
            labels:
                app: logs
        spec:
            containers:
                - name: collector
                  image: log-collector:latest
                  volumeMounts:
                      - name: logs
                        mountPath: /var/log
            volumes:
                - name: logs
                  hostPath:
                      path: /var/log

Use Case 3: Security Scanning

Run security agents on every node:

security-scanner-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: security-scanner
spec:
    selector:
        matchLabels:
            app: security
    template:
        metadata:
            labels:
                app: security
        spec:
            containers:
                - name: scanner
                  image: security-scanner:latest
                  securityContext:
                      privileged: true

Common Mistakes and Pitfalls

Mistake 1: Not Setting Resource Limits

DaemonSet Pods run on every node, so resource usage multiplies:

Problem: Without limits, DaemonSet Pods can consume all node resources.

Solution: Always set resource requests and limits:

yml
resources:
    requests:
        cpu: 100m
        memory: 128Mi
    limits:
        cpu: 200m
        memory: 256Mi
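Because one copy runs on every node, the requests you set are reserved once per node. A quick back-of-envelope check (the helper name and numbers are illustrative):

```shell
# cluster_footprint: total cluster-wide reservation of a DaemonSet
# = per-Pod request multiplied by node count.
cluster_footprint() {
    local nodes="$1" cpu_millicores="$2" mem_mi="$3"
    echo "cpu: $(( nodes * cpu_millicores ))m, memory: $(( nodes * mem_mi ))Mi"
}

cluster_footprint 50 100 128    # 50 nodes -> cpu: 5000m, memory: 6400Mi
```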

Mistake 2: Using DaemonSet for Application Workloads

Problem: Using DaemonSet for regular applications that don't need to run on every node.

Solution: Use Deployment or ReplicaSet for application workloads. Use DaemonSet only for node-level services.

Mistake 3: Not Considering Node Taints

Problem: DaemonSet Pods don't schedule on tainted nodes.

Solution: Add appropriate tolerations:

yml
tolerations:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule

Mistake 4: Forgetting Host Path Permissions

Problem: DaemonSet can't access host directories due to permissions.

Solution: Use appropriate security context and volume mounts:

yml
securityContext:
    privileged: true
volumeMounts:
    - name: host-path
      mountPath: /host
      readOnly: true

Mistake 5: Not Using Update Strategy

Problem: Manual Pod deletion required for updates.

Solution: Use RollingUpdate strategy:

yml
updateStrategy:
    type: RollingUpdate
    rollingUpdate:
        maxUnavailable: 1

Best Practices

Set Appropriate Resource Limits

Since DaemonSet runs on every node, resource usage is multiplied:

yml
resources:
    requests:
        cpu: 100m
        memory: 128Mi
    limits:
        cpu: 200m
        memory: 256Mi

Use RollingUpdate Strategy

Enable automatic updates:

yml
updateStrategy:
    type: RollingUpdate
    rollingUpdate:
        maxUnavailable: 1

Add Tolerations for System Nodes

Allow DaemonSet to run on all nodes including control plane:

yml
tolerations:
    - operator: Exists
      effect: NoSchedule

Use Priority Classes

Set appropriate priority for system DaemonSets:

yml
priorityClassName: system-node-critical

Implement Health Checks

Add liveness and readiness probes to ensure DaemonSet Pods stay healthy:

yml
livenessProbe:
    httpGet:
        path: /health
        port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
readinessProbe:
    httpGet:
        path: /ready
        port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5

Use Namespaces Appropriately

System DaemonSets should run in system namespaces:

yml
metadata:
    namespace: kube-system

Monitoring DaemonSet

Check DaemonSet Status

bash
sudo kubectl get ds

View DaemonSet Details

bash
sudo kubectl describe ds nginx-daemonset

Check DaemonSet Pods

bash
sudo kubectl get pods -l app=nginx-daemon -o wide

Watch DaemonSet Rollout

bash
sudo kubectl rollout status ds nginx-daemonset

Check DaemonSet Events

bash
sudo kubectl get events --sort-by='.lastTimestamp' | grep DaemonSet

Conclusion

In episode 13, we've explored DaemonSet in Kubernetes in depth. We've learned what DaemonSet is, how it differs from ReplicaSet, and when to use it for node-level services.

Key takeaways:

  • DaemonSet ensures one Pod runs on each node
  • Automatically schedules Pods on new nodes
  • Automatically removes Pods from deleted nodes
  • Perfect for cluster-wide services (logging, monitoring, networking)
  • Supports node selection via selectors, affinity, and tolerations
  • Two update strategies: OnDelete and RollingUpdate
  • Different from ReplicaSet which maintains fixed replica count
  • Always set resource limits to prevent resource exhaustion
  • Use tolerations to run on tainted nodes

DaemonSet is essential for running infrastructure components that need to be present on every node in your cluster. By understanding DaemonSet, you can effectively deploy and manage cluster-wide services like log collectors, monitoring agents, and network plugins.

Are you getting a clearer understanding of DaemonSet in Kubernetes? In episode 14, we'll discuss Job, a controller designed for running tasks to completion rather than keeping Pods running continuously. Keep your learning momentum going and look forward to the next episode!

Note

If you want to continue to the next episode, you can click the Episode 14 thumbnail below

Episode 14
