Learning Kubernetes - Episode 34 - Introduction and Explanation of Taints and Tolerations

In this episode, we'll discuss Kubernetes Taints and Tolerations, which control which Pods may be scheduled on which nodes. We'll learn how taints repel Pods, how tolerations allow Pods to be scheduled on tainted nodes, and best practices for workload placement.

Arman Dwi Pangestu
April 9, 2026
7 min read

Introduction

Note

If you want to read the previous episode, you can click the Episode 33 thumbnail below

Episode 33

In the previous episode, we learned about RBAC and RoleBinding for authorization. In episode 34, we'll discuss Taints and Tolerations, which control which Pods can be scheduled on which nodes.

Note: Here I'll be using a Kubernetes Cluster installed through K3s.

While node affinity attracts Pods to nodes, taints and tolerations work the opposite way - taints repel Pods from nodes unless they have matching tolerations. This enables powerful workload placement strategies like dedicated nodes, GPU nodes, or nodes with special hardware.

What Are Taints and Tolerations?

Taints are properties applied to nodes that repel Pods unless they have matching tolerations.

Tolerations are properties applied to Pods that allow them to be scheduled on nodes with matching taints.

Think of taints like "no entry" signs on nodes - by default, Pods cannot enter. Tolerations are like special passes that allow specific Pods to enter despite the "no entry" sign.

Key characteristics:

  • Taints - Applied to nodes, repel Pods
  • Tolerations - Applied to Pods, allow scheduling on tainted nodes
  • Key-value pairs - Taints use key=value:effect format; tolerations match on key, value, and effect
  • Effects - NoSchedule, PreferNoSchedule, NoExecute
  • Workload placement - Control which Pods run on which nodes
  • Dedicated nodes - Reserve nodes for specific workloads
  • Hardware affinity - Place Pods on nodes with specific hardware
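
The matching rules above can be sketched in a few lines of Python. This is an illustrative model of the semantics, not the scheduler's actual code:

```python
# Simplified model of taint/toleration matching: the operator decides
# how key and value are compared, and an empty effect on the toleration
# matches any taint effect.

def tolerates(toleration, taint):
    # A toleration with operator Exists and an empty key matches every taint.
    if toleration.get("operator") == "Exists" and not toleration.get("key"):
        return True
    if toleration.get("key") != taint["key"]:
        return False
    # An empty effect on the toleration matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True
    return toleration.get("value") == taint["value"]

taint = {"key": "gpu", "value": "true", "effect": "NoSchedule"}

print(tolerates({"key": "gpu", "operator": "Equal", "value": "true",
                 "effect": "NoSchedule"}, taint))  # True
print(tolerates({"key": "gpu", "operator": "Exists",
                 "effect": "NoSchedule"}, taint))  # True
print(tolerates({"key": "storage", "operator": "Equal", "value": "ssd",
                 "effect": "NoSchedule"}, taint))  # False
```

A Pod is repelled from a node only by taints it does not tolerate; it needs one matching toleration per taint on the node.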

Taint Effects

NoSchedule

Pods without matching toleration cannot be scheduled on the node.

bash
kubectl taint nodes node-1 gpu=true:NoSchedule

Behavior:

  • New Pods without toleration: not scheduled
  • Existing Pods: continue running
  • Strict enforcement

PreferNoSchedule

Kubernetes prefers not to schedule Pods without matching toleration, but will if necessary.

bash
kubectl taint nodes node-1 gpu=true:PreferNoSchedule

Behavior:

  • New Pods without toleration: scheduled only if no other suitable node is available
  • Existing Pods: continue running
  • Soft enforcement

NoExecute

Pods without matching toleration are evicted from the node.

bash
kubectl taint nodes node-1 gpu=true:NoExecute

Behavior:

  • New Pods without toleration: not scheduled
  • Existing Pods without toleration: evicted
  • Strictest enforcement

Adding Taints to Nodes

Add Single Taint

bash
kubectl taint nodes node-1 gpu=true:NoSchedule

Add Multiple Taints

bash
kubectl taint nodes node-1 gpu=true:NoSchedule
kubectl taint nodes node-1 storage=ssd:NoSchedule

Or in one command:

bash
kubectl taint nodes node-1 gpu=true:NoSchedule storage=ssd:NoSchedule

View Taints

bash
kubectl describe node node-1 | grep Taints

Output:

bash
Taints:             gpu=true:NoSchedule,storage=ssd:NoSchedule

Remove Taint

bash
# Remove specific taint (key, value, and effect must match)
kubectl taint nodes node-1 gpu=true:NoSchedule-
 
# Remove taints by key (removes every effect for that key)
kubectl taint nodes node-1 gpu- storage-

Adding Tolerations to Pods

Basic Toleration

pod-with-toleration.yml
apiVersion: v1
kind: Pod
metadata:
    name: gpu-pod
spec:
    tolerations:
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
    containers:
        - name: app
          image: nvidia/cuda:11.0

Toleration Operators

Equal - Value must match exactly:

yml
tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule

Exists - Key must exist, value ignored:

yml
tolerations:
    - key: gpu
      operator: Exists
      effect: NoSchedule

Multiple Tolerations

multi-toleration.yml
apiVersion: v1
kind: Pod
metadata:
    name: special-pod
spec:
    tolerations:
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
        - key: storage
          operator: Equal
          value: ssd
          effect: NoSchedule
    containers:
        - name: app
          image: myapp:latest

Toleration with Timeout

For the NoExecute effect, tolerationSeconds specifies how long the Pod may remain on the node after the taint is added:

toleration-timeout.yml
apiVersion: v1
kind: Pod
metadata:
    name: temporary-pod
spec:
    tolerations:
        - key: maintenance
          operator: Equal
          value: "true"
          effect: NoExecute
          tolerationSeconds: 3600  # 1 hour
    containers:
        - name: app
          image: myapp:latest
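
The NoExecute semantics can be modeled roughly as follows (an illustrative sketch; aggregation across multiple matching tolerations is simplified to "first match wins"): no matching toleration means immediate eviction, a match without tolerationSeconds means the Pod stays indefinitely, and tolerationSeconds bounds how long it stays.

```python
def matches(tol, taint):
    # Simplified toleration/taint match on key, operator, value, effect.
    if tol.get("operator") == "Exists" and not tol.get("key"):
        return True
    if tol.get("key") != taint["key"]:
        return False
    if tol.get("effect") and tol["effect"] != taint["effect"]:
        return False
    if tol.get("operator", "Equal") == "Exists":
        return True
    return tol.get("value") == taint["value"]

def no_execute_action(tolerations, taint):
    # What happens to a running Pod when a NoExecute taint is added.
    for tol in tolerations:
        if matches(tol, taint):
            secs = tol.get("tolerationSeconds")
            if secs is None:
                return "stays indefinitely"
            return f"evicted after {secs}s"
    return "evicted immediately"

taint = {"key": "maintenance", "value": "true", "effect": "NoExecute"}
tol = {"key": "maintenance", "operator": "Equal", "value": "true",
       "effect": "NoExecute", "tolerationSeconds": 3600}

print(no_execute_action([], taint))     # evicted immediately
print(no_execute_action([tol], taint))  # evicted after 3600s
```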

Practical Examples

Example 1: GPU Node

Dedicate node for GPU workloads:

bash
# Taint GPU node
kubectl taint nodes gpu-node gpu=true:NoSchedule

Pod requesting GPU:

gpu-pod.yml
apiVersion: v1
kind: Pod
metadata:
    name: gpu-workload
spec:
    tolerations:
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
    containers:
        - name: gpu-app
          image: nvidia/cuda:11.0
          resources:
              limits:
                  nvidia.com/gpu: 1

Example 2: SSD Storage Node

Reserve node with fast storage:

bash
# Taint SSD node
kubectl taint nodes ssd-node storage=ssd:NoSchedule

Pod requiring SSD:

ssd-pod.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: database
spec:
    replicas: 1
    selector:
        matchLabels:
            app: database
    template:
        metadata:
            labels:
                app: database
        spec:
            tolerations:
                - key: storage
                  operator: Equal
                  value: ssd
                  effect: NoSchedule
            containers:
                - name: postgres
                  image: postgres:15

Example 3: Maintenance Window

Temporarily evict Pods for maintenance:

bash
# Taint node for maintenance
kubectl taint nodes node-1 maintenance=true:NoExecute

Pod tolerating maintenance:

maintenance-tolerant.yml
apiVersion: v1
kind: Pod
metadata:
    name: maintenance-pod
spec:
    tolerations:
        - key: maintenance
          operator: Equal
          value: "true"
          effect: NoExecute
          tolerationSeconds: 300  # 5 minutes
    containers:
        - name: app
          image: myapp:latest

Example 4: Dedicated Nodes

Reserve nodes for specific team:

bash
# Taint nodes for team-a
kubectl taint nodes node-1 team=a:NoSchedule
kubectl taint nodes node-2 team=a:NoSchedule

Team A workload:

team-a-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: team-a-app
spec:
    replicas: 3
    selector:
        matchLabels:
            app: team-a-app
    template:
        metadata:
            labels:
                app: team-a-app
        spec:
            tolerations:
                - key: team
                  operator: Equal
                  value: a
                  effect: NoSchedule
            containers:
                - name: app
                  image: team-a-app:latest

Example 5: Wildcard Toleration

Tolerate a specific taint key regardless of its value:

wildcard-toleration.yml
apiVersion: v1
kind: Pod
metadata:
    name: flexible-pod
spec:
    tolerations:
        - key: workload-type
          operator: Exists  # Accept any value
          effect: NoSchedule
    containers:
        - name: app
          image: myapp:latest
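
Going a step further: per the Kubernetes documentation, a toleration with an empty key and operator: Exists matches every taint, which is how some system DaemonSets manage to run on all nodes, tainted or not. A minimal sketch:

```yaml
tolerations:
    - operator: Exists  # no key and no effect: tolerates every taint
```

Use this sparingly, since it defeats the purpose of tainting nodes in the first place.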

Taints and Tolerations with Deployments

Deployment with Toleration

deployment-toleration.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: special-workload
spec:
    replicas: 3
    selector:
        matchLabels:
            app: special
    template:
        metadata:
            labels:
                app: special
        spec:
            tolerations:
                - key: workload-type
                  operator: Equal
                  value: special
                  effect: NoSchedule
            containers:
                - name: app
                  image: special-app:latest
                  resources:
                      requests:
                          memory: "256Mi"
                          cpu: "250m"
                      limits:
                          memory: "512Mi"
                          cpu: "500m"

Combining with Node Affinity

Use taints/tolerations with node affinity for powerful placement:

affinity-and-toleration.yml
apiVersion: v1
kind: Pod
metadata:
    name: placed-pod
spec:
    # Tolerate taint
    tolerations:
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
    # Prefer GPU nodes
    affinity:
        nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  preference:
                      matchExpressions:
                          - key: gpu
                            operator: In
                            values:
                                - "true"
    containers:
        - name: app
          image: gpu-app:latest

System Taints

Kubernetes automatically taints nodes under certain conditions:

node.kubernetes.io/not-ready

Node is not ready:

bash
Taints: node.kubernetes.io/not-ready:NoExecute

node.kubernetes.io/unreachable

Node is unreachable:

bash
Taints: node.kubernetes.io/unreachable:NoExecute

node.kubernetes.io/memory-pressure

Node has memory pressure:

bash
Taints: node.kubernetes.io/memory-pressure:NoSchedule

node.kubernetes.io/disk-pressure

Node has disk pressure:

bash
Taints: node.kubernetes.io/disk-pressure:NoSchedule

node.kubernetes.io/pid-pressure

Node has PID pressure:

bash
Taints: node.kubernetes.io/pid-pressure:NoSchedule

node.kubernetes.io/network-unavailable

Node network unavailable:

bash
Taints: node.kubernetes.io/network-unavailable:NoSchedule
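
Worth knowing: the DefaultTolerationSeconds admission controller normally adds tolerations for the not-ready and unreachable taints to every Pod with tolerationSeconds: 300, so Pods on a failed node are typically evicted after about five minutes unless you set your own values. The auto-added tolerations look roughly like this:

```yaml
tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
```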

Viewing Taints

Check Node Taints

bash
kubectl describe node node-1 | grep Taints

Get All Nodes with Taints

bash
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

Check Pod Tolerations

bash
kubectl get pod gpu-pod -o yaml | grep -A 10 tolerations

Common Mistakes and Pitfalls

Mistake 1: Forgetting Toleration

Problem: Pod cannot be scheduled on tainted node.

bash
# Node tainted
kubectl taint nodes node-1 gpu=true:NoSchedule
 
# Pod without toleration - not scheduled
kubectl run gpu-pod --image=nvidia/cuda:11.0

Solution: Add toleration to Pod:

yml
tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule

Mistake 2: Wrong Operator

Problem: Toleration doesn't match taint.

yml
# Bad: invalid operator (In belongs to node affinity, not tolerations)
tolerations:
    - key: gpu
      operator: In  # Wrong! Tolerations only support Equal and Exists
      values: ["true"]
      effect: NoSchedule

Solution: Use correct operator:

yml
# Good: Correct operator
tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule

Mistake 3: Mismatched Effect

Problem: Toleration effect doesn't match taint effect.

bash
# Taint with NoExecute
kubectl taint nodes node-1 gpu=true:NoExecute

yml
# Bad: Wrong effect
tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule  # Wrong! Should be NoExecute

Solution: Match effect:

yml
# Good: Matching effect
tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoExecute

Mistake 4: Tainting All Nodes

Problem: Tainting all nodes without tolerations.

bash
# Bad: Taints all nodes
for node in $(kubectl get nodes -o name); do
    kubectl taint $node special=true:NoSchedule
done

Solution: Taint only specific nodes:

bash
# Good: Taint only GPU nodes
kubectl taint nodes gpu-node gpu=true:NoSchedule

Mistake 5: Not Removing Taints

Problem: Temporary taints left on nodes.

Solution: Remove taints when done:

bash
kubectl taint nodes node-1 gpu=true:NoSchedule-

Best Practices

Use Descriptive Taint Keys

bash
# Good: Clear purpose
kubectl taint nodes gpu-node gpu=true:NoSchedule
kubectl taint nodes ssd-node storage=ssd:NoSchedule
 
# Avoid: Vague names
kubectl taint nodes node-1 special=true:NoSchedule

Document Taint Purpose

bash
# Add labels to document
kubectl label nodes gpu-node node-type=gpu
kubectl label nodes ssd-node node-type=ssd

Use PreferNoSchedule for Soft Constraints

For non-critical workloads:

bash
kubectl taint nodes node-1 workload=batch:PreferNoSchedule

Combine with Node Affinity

For precise placement:

yml
spec:
    tolerations:
        - key: gpu
          operator: Equal
          value: "true"
          effect: NoSchedule
    affinity:
        nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                    - matchExpressions:
                          - key: gpu
                            operator: In
                            values:
                                - "true"

Set Toleration Timeout for NoExecute

Prevent indefinite Pod eviction:

yml
tolerations:
    - key: maintenance
      operator: Equal
      value: "true"
      effect: NoExecute
      tolerationSeconds: 3600  # 1 hour

Regular Taint Audits

Review taints regularly:

bash
# List all taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
 
# List nodes that still carry taints (candidates for cleanup)
kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints) | .metadata.name'

Troubleshooting

Pod Not Scheduling

bash
kubectl describe pod gpu-pod
# Events show: node(s) had taints that the pod didn't tolerate

Solution: Add matching toleration:

bash
# Check node taints
kubectl describe node node-1 | grep Taints
 
# Add toleration to Pod

Pod Evicted from Node

bash
kubectl describe pod pod-name
# Events show the Pod was evicted because it did not tolerate
# the node's NoExecute taint

Solution: Add NoExecute toleration with timeout:

yml
tolerations:
    - key: maintenance
      operator: Equal
      value: "true"
      effect: NoExecute
      tolerationSeconds: 3600

Taint Not Taking Effect

bash
# Verify taint applied
kubectl describe node node-1 | grep Taints
 
# Check if Pods have toleration
kubectl get pod -o yaml | grep -A 5 tolerations

Viewing Taint and Toleration Details

Get Node Taints

bash
kubectl get nodes -o json | jq '.items[].spec.taints'

Get Pod Tolerations

bash
kubectl get pods -o json | jq '.items[].spec.tolerations'

Describe Node

bash
kubectl describe node node-1
# Shows Taints section

Removing Taints

Remove Specific Taint

bash
kubectl taint nodes node-1 gpu=true:NoSchedule-

Remove All Taints

bash
kubectl taint nodes node-1 gpu- storage- workload-

Conclusion

In episode 34, we've explored Taints and Tolerations in Kubernetes in depth. We've learned how to use taints to repel Pods from nodes and tolerations to allow specific Pods on tainted nodes.

Key takeaways:

  • Taints repel Pods from nodes
  • Tolerations allow Pods on tainted nodes
  • Three effects: NoSchedule, PreferNoSchedule, NoExecute
  • NoSchedule - Prevent scheduling
  • PreferNoSchedule - Soft constraint
  • NoExecute - Evict existing Pods
  • Operators: Equal (exact match), Exists (key only)
  • Use cases: GPU nodes, SSD nodes, dedicated nodes, maintenance
  • Combine with node affinity for precise placement
  • System taints for node conditions
  • Toleration timeout for NoExecute effect
  • Document taint purposes
  • Regular audits of taints
  • Remove taints when no longer needed

Taints and tolerations are powerful tools for workload placement in Kubernetes. By understanding how to use them effectively, you can optimize resource utilization, dedicate nodes for specific workloads, and manage maintenance windows gracefully.

Note

If you want to continue to the next episode, you can click the Episode 35 thumbnail below

Episode 35
