In this episode, we'll discuss an important Kubernetes controller called DaemonSet. We'll learn how DaemonSet ensures that a copy of a Pod runs on all (or some) nodes in the cluster.

Note
If you want to read the previous episode, you can click the Episode 12 thumbnail below
In the previous episode, we learned about ReplicaSet, the modern replacement for ReplicationController with enhanced selector capabilities. In episode 13, we'll discuss a different type of controller: DaemonSet.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
Unlike ReplicaSet which maintains a specific number of Pod replicas, DaemonSet ensures that a copy of a Pod runs on all (or some) nodes in your cluster. This is particularly useful for running cluster-wide services like log collectors, monitoring agents, or network plugins.
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are automatically added to them. As nodes are removed from the cluster, those Pods are garbage collected.
Think of DaemonSet like a system daemon in Linux - it runs on every machine to provide system-level services. In Kubernetes, DaemonSet runs a Pod on every node to provide node-level services.
Key characteristics of DaemonSet:

- Runs exactly one Pod per eligible node
- Creates a Pod automatically when a node joins the cluster
- Garbage-collects the Pod when a node is removed
- Has no replicas field; the number of nodes determines the number of Pods

DaemonSet is designed for workloads that need to run on every node in the cluster:

- Log collectors (Fluentd, Filebeat)
- Monitoring agents (Prometheus Node Exporter)
- Network plugins (kube-proxy, Calico, Flannel)
- Node-level storage daemons

Without DaemonSet, you would need to:

- Create a Pod on each node manually
- Watch for new nodes and schedule Pods onto them yourself
- Clean up Pods whenever a node is removed
Let's understand the key differences:
| Aspect | DaemonSet | ReplicaSet |
|---|---|---|
| Pod count | One per node | Fixed number across cluster |
| Scheduling | Automatic per node | Scheduler decides placement |
| Use case | Node-level services | Application replicas |
| Scaling | Scales with nodes | Manual scaling |
| Node addition | Auto-creates Pod | No automatic action |
| Node removal | Auto-removes Pod | No automatic action |
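The table above is visible in the manifests themselves. Here is an abbreviated sketch (just the distinguishing fields, not complete manifests you could apply): a ReplicaSet declares a replica count, while a DaemonSet has no replicas field at all.

```yaml
# ReplicaSet: you choose how many Pods run (abbreviated)
apiVersion: apps/v1
kind: ReplicaSet
spec:
  replicas: 3   # fixed count, placed wherever the scheduler decides
---
# DaemonSet: no replicas field (abbreviated)
apiVersion: apps/v1
kind: DaemonSet
spec: {}        # Pod count always equals the number of eligible nodes
```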
Example scenario: in a three-node cluster, a log collector must run on every node. A ReplicaSet with replicas: 3 might place two Pods on one node and none on another; a DaemonSet guarantees exactly one collector per node.
Let's create a basic DaemonSet:
Create a file named daemonset-basic.yml:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  labels:
    app: nginx-daemon
spec:
  selector:
    matchLabels:
      app: nginx-daemon
  template:
    metadata:
      labels:
        app: nginx-daemon
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Apply the configuration:
```bash
sudo kubectl apply -f daemonset-basic.yml
```

Verify the DaemonSet is created:
```bash
sudo kubectl get daemonset
```

Or use the shorthand:
```bash
sudo kubectl get ds
```

Output:
```
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-daemonset   3         3         3       3            3           <none>          30s
```

The DESIRED count matches the number of nodes in your cluster.
Check the Pods:
```bash
sudo kubectl get pods -o wide
```

Output:
```
NAME                    READY   STATUS    RESTARTS   AGE   NODE
nginx-daemonset-abc12   1/1     Running   0          30s   node1
nginx-daemonset-def34   1/1     Running   0          30s   node2
nginx-daemonset-ghi56   1/1     Running   0          30s   node3
```

Notice that there's exactly one Pod per node.
To see detailed information about a DaemonSet:
```bash
sudo kubectl describe ds nginx-daemonset
```

Output:
```
Name:           nginx-daemonset
Selector:       app=nginx-daemon
Node-Selector:  <none>
Labels:         app=nginx-daemon
Annotations:    <none>
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx-daemon
  Containers:
   nginx:
    Image:        nginx:1.25
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-abc12
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-def34
  Normal  SuccessfulCreate  2m    daemonset-controller  Created pod: nginx-daemonset-ghi56
```

By default, DaemonSet runs on all nodes. You can control which nodes run the DaemonSet Pods using:

- nodeSelector: simple label matching
- Node affinity: more expressive matching rules
- Taints and tolerations: opting in to nodes that repel regular Pods
Run DaemonSet only on nodes with specific labels:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disktype: ssd
      containers:
      - name: monitor
        image: nginx:1.25
```

This DaemonSet only runs on nodes labeled with disktype: ssd.
First, label a node:
```bash
sudo kubectl label nodes node1 disktype=ssd
```

Apply the DaemonSet:
```bash
sudo kubectl apply -f daemonset-node-selector.yml
```

Check Pods:
```bash
sudo kubectl get pods -o wide
```

You'll see Pods only on nodes with the disktype: ssd label.
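If you list several labels under nodeSelector, they are ANDed: a node must carry all of them to receive a Pod. A short sketch (the zone label is a hypothetical second label, not from the example above):

```yaml
nodeSelector:
  disktype: ssd
  zone: us-east-1a   # hypothetical label; the node must have BOTH labels
```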
More flexible node selection using affinity rules:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: production-monitor
spec:
  selector:
    matchLabels:
      app: prod-monitor
  template:
    metadata:
      labels:
        app: prod-monitor
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: environment
                operator: In
                values:
                - production
                - staging
      containers:
      - name: monitor
        image: nginx:1.25
```

This DaemonSet runs on nodes where environment is either production or staging.
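matchExpressions supports more operators than In; for example, NotIn lets a DaemonSet run everywhere except certain nodes. A hedged sketch reusing the environment label from the example above (the development value is illustrative):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: environment
          operator: NotIn   # run on every node EXCEPT those matching
          values:
          - development
```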
Run DaemonSet on nodes with specific taints:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: special-daemon
spec:
  selector:
    matchLabels:
      app: special-daemon
  template:
    metadata:
      labels:
        app: special-daemon
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: daemon
        image: nginx:1.25
```

This DaemonSet can run on control plane nodes (which normally have taints preventing regular Pods).
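A broader variant (a sketch, not part of the manifest above): a toleration with operator: Exists and no key tolerates every taint, which is how critical system daemons make sure they run on every node no matter how it is tainted.

```yaml
tolerations:
- operator: Exists   # no key given: tolerates every taint, any key, any effect
```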
A realistic example of running Fluentd log collector on every node:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```

This DaemonSet:

- Runs in the kube-system namespace alongside other infrastructure components
- Tolerates the control-plane taint, so it also collects logs from control plane nodes
- Mounts /var/log and /var/lib/docker/containers from the host to read container logs
- Bounds its footprint with memory limits and CPU/memory requests

Running Prometheus Node Exporter on every node:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($|/)
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
        resources:
          limits:
            memory: 180Mi
          requests:
            cpu: 100m
            memory: 180Mi
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
```

This DaemonSet:

- Uses hostNetwork and hostPID so the exporter can observe the node directly
- Mounts the host's /proc and /sys for system metrics
- Exposes metrics on host port 9100 of every node

Running a network plugin on every node:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-proxy
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      priorityClassName: system-node-critical
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      containers:
      - name: kube-proxy
        image: registry.k8s.io/kube-proxy:v1.28.0
        command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-proxy
          mountPath: /var/lib/kube-proxy
      volumes:
      - name: kube-proxy
        configMap:
          name: kube-proxy
```

This DaemonSet:

- Runs in the kube-system namespace with system-node-critical priority
- Uses hostNetwork and a privileged container to program node networking rules
- Tolerates every NoSchedule taint, so it runs on all nodes including the control plane
- Updates one node at a time via the RollingUpdate strategy
DaemonSet supports two update strategies:

- OnDelete: Pods are replaced only when you delete them manually
- RollingUpdate (the default): Pods are replaced automatically when the template changes
Pods are only updated when manually deleted:
```yaml
spec:
  updateStrategy:
    type: OnDelete
```

With this strategy:

- The controller does not replace running Pods when the template changes
- New Pods use the updated template only after you delete the old ones yourself
- You control exactly when each node picks up the change
Pods are automatically updated in a rolling fashion:
```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```

With this strategy:

- Pods are replaced automatically whenever the Pod template changes
- Old Pods are terminated and new ones created node by node
- No more Pods are unavailable at once than the maxUnavailable setting
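Recent Kubernetes releases also support maxSurge for DaemonSets, which starts the replacement Pod on a node before the old one is stopped. A hedged sketch (check that your cluster version supports this field; when maxSurge is set, maxUnavailable must be 0):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start the new Pod before stopping the old one
      maxUnavailable: 0  # must be 0 when maxSurge is used
```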
Example of updating a DaemonSet:
```bash
# Edit the DaemonSet
sudo kubectl edit ds nginx-daemonset

# Or update the YAML file and apply
sudo kubectl apply -f daemonset-basic.yml
```

Watch the rolling update:
```bash
sudo kubectl rollout status ds nginx-daemonset
```

To delete a DaemonSet together with its Pods:

```bash
sudo kubectl delete ds nginx-daemonset
```

This deletes the DaemonSet and all its Pods.
To delete only the DaemonSet object and keep the Pods running:

```bash
sudo kubectl delete ds nginx-daemonset --cascade=orphan
```

This deletes only the DaemonSet, leaving Pods running as orphans.
Deploy monitoring agents to collect metrics from every node:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      containers:
      - name: agent
        image: monitoring-agent:latest
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
```

Collect logs from all nodes:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: logs
  template:
    metadata:
      labels:
        app: logs
    spec:
      containers:
      - name: collector
        image: log-collector:latest
        volumeMounts:
        - name: logs
          mountPath: /var/log
      volumes:
      - name: logs
        hostPath:
          path: /var/log
```

Run security agents on every node:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: security-scanner
spec:
  selector:
    matchLabels:
      app: security
  template:
    metadata:
      labels:
        app: security
    spec:
      containers:
      - name: scanner
        image: security-scanner:latest
        securityContext:
          privileged: true
```

DaemonSet Pods run on every node, so resource usage multiplies: a Pod requesting 100m CPU and 128Mi of memory costs 5 CPU cores and 6.25Gi of memory cluster-wide on a 50-node cluster.
Problem: Without limits, DaemonSet Pods can consume all node resources.
Solution: Always set resource requests and limits:
```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```

Problem: Using DaemonSet for regular applications that don't need to run on every node.
Solution: Use Deployment or ReplicaSet for application workloads. Use DaemonSet only for node-level services.
Problem: DaemonSet Pods don't schedule on tainted nodes.
Solution: Add appropriate tolerations:
```yaml
tolerations:
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
```

Problem: DaemonSet can't access host directories due to permissions.
Solution: Use appropriate security context and volume mounts:
```yaml
securityContext:
  privileged: true
volumeMounts:
- name: host-path
  mountPath: /host
  readOnly: true
```

Problem: Manual Pod deletion required for updates.
Solution: Use RollingUpdate strategy:
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
```

Since DaemonSet runs on every node, resource usage is multiplied:
```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```

Enable automatic updates:
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
```

Allow DaemonSet to run on all nodes including control plane:
```yaml
tolerations:
- operator: Exists
  effect: NoSchedule
```

Set appropriate priority for system DaemonSets:
```yaml
priorityClassName: system-node-critical
```

Add probes to ensure DaemonSet Pods are healthy:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```

System DaemonSets should run in system namespaces:
```yaml
metadata:
  namespace: kube-system
```

Useful commands for inspecting and troubleshooting DaemonSets:

```bash
sudo kubectl get ds
sudo kubectl describe ds nginx-daemonset
sudo kubectl get pods -l app=nginx-daemon -o wide
sudo kubectl rollout status ds nginx-daemonset
sudo kubectl get events --sort-by='.lastTimestamp' | grep DaemonSet
```

In episode 13, we've explored DaemonSet in Kubernetes in depth. We've learned what DaemonSet is, how it differs from ReplicaSet, and when to use it for node-level services.
Key takeaways:

- DaemonSet runs exactly one Pod per eligible node and scales automatically as nodes join or leave
- Target specific nodes with nodeSelector, node affinity, or tolerations
- Prefer the RollingUpdate strategy, and always set resource requests and limits
- Use DaemonSet for node-level services; use Deployment or ReplicaSet for application workloads
DaemonSet is essential for running infrastructure components that need to be present on every node in your cluster. By understanding DaemonSet, you can effectively deploy and manage cluster-wide services like log collectors, monitoring agents, and network plugins.
Are you getting a clearer understanding of DaemonSet in Kubernetes? In episode 14, we'll discuss Job, a controller designed for running tasks to completion rather than keeping Pods running continuously. Keep your learning momentum going, and see you in the next episode!
Note
If you want to continue to the next episode, you can click the Episode 14 thumbnail below