In this episode, we'll discuss Kubernetes Deployment for managing application rollouts. We'll learn about rolling updates, rollbacks, scaling strategies, and best practices for production deployments.

In the previous episode, we learned about managing Kubernetes objects using imperative and declarative approaches. In episode 26, we'll discuss Deployment, one of the most important Kubernetes resources for managing application rollouts and updates.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
Deployment provides declarative updates for Pods and ReplicaSets. It manages the entire lifecycle of your application, from initial deployment to updates and rollbacks, ensuring zero-downtime deployments.
A Deployment is a Kubernetes resource that manages a set of identical Pods, ensuring the desired number of Pods are running and handling updates gracefully.
Think of Deployment like a production manager - it ensures the right number of workers (Pods) are always available, replaces workers when they fail, and coordinates smooth transitions when you need to update your workforce.
Key characteristics of Deployment:
- Declarative: you describe the desired state, and the Deployment controller works to match it
- Self-healing: failed or deleted Pods are replaced automatically through the underlying ReplicaSet
- Versioned: each change to the Pod template creates a new revision, enabling rollbacks
- Scalable: the replica count can be changed at any time, manually or automatically

Deployment solves several critical challenges:
- Updating applications without downtime via rolling updates
- Rolling back quickly when a release goes wrong
- Keeping the desired number of replicas running at all times
Without Deployment, you would manually manage Pods, handle updates carefully to avoid downtime, and implement your own rollback mechanisms.
Let's create a basic Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

Apply the Deployment:

sudo kubectl apply -f nginx-deployment.yml

Check Deployment status:

sudo kubectl get deployments

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           30s

Check Pods created by Deployment:
sudo kubectl get pods -l app=nginx

A more complete Deployment with resource limits and health probes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1.0
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

Adjust the number of replicas.
Using kubectl scale:

sudo kubectl scale deployment nginx-deployment --replicas=5

Using kubectl apply:

spec:
  replicas: 5  # Changed from 3 to 5

sudo kubectl apply -f nginx-deployment.yml

Use HorizontalPodAutoscaler for automatic scaling:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Apply HPA:

sudo kubectl apply -f hpa.yml

Check HPA status:

sudo kubectl get hpa

Update Deployments without downtime.
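As a quick aside on the HPA above: the autoscaling/v2 controller picks a replica count from the ratio of observed to target utilization, roughly desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped between minReplicas and maxReplicas. Here's a simplified sketch of that rule (the `desired_replicas` helper is illustrative, and the real controller also applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current_replicas: int, current_cpu: float,
                     target_cpu: float, min_r: int = 3, max_r: int = 10) -> int:
    """Simplified HPA rule: scale by the ratio of observed to target CPU
    utilization, then clamp to the HPA's minReplicas/maxReplicas bounds."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_r, min(max_r, desired))

# 3 replicas averaging 90% CPU against a 70% target -> scale up to 4
print(desired_replicas(3, 90, 70))
```

With a low load (say 35% average CPU), the raw calculation would suggest 2 replicas, but the minReplicas bound keeps the Deployment at 3.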
Deployment supports two update strategies:

RollingUpdate (default): gradually replaces old Pods with new ones, keeping the application available throughout the update.

Recreate: terminates all existing Pods before creating new ones, which causes brief downtime but guarantees only one version runs at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # Max 2 extra Pods during update
      maxUnavailable: 1  # Max 1 Pod unavailable during update
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: myapp:1.0
        ports:
        - containerPort: 8080

maxSurge: Maximum number of Pods that can be created above the desired replica count.
maxUnavailable: Maximum number of Pods that can be unavailable during the update.
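To make the arithmetic concrete, here is a small sketch of the Pod-count bounds these two fields impose during a rolling update (the `rollout_bounds` helper is illustrative, not part of kubectl):

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (min_available, max_total) Pod counts during a rolling update.
    maxSurge allows extra Pods above the desired count while new Pods start;
    maxUnavailable allows some desired Pods to be down during replacement."""
    return replicas - max_unavailable, replicas + max_surge

# With replicas=10, maxSurge=2, maxUnavailable=1 (as in the spec above):
# at least 9 Pods stay available and at most 12 Pods exist at any moment.
low, high = rollout_bounds(10, 2, 1)
print(low, high)  # 9 12
```

Tightening these values slows the rollout but reduces risk; loosening them speeds it up at the cost of more simultaneous churn.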
Update image using kubectl set:

sudo kubectl set image deployment/nginx-deployment nginx=nginx:1.26

Update using kubectl apply:

# Change image version
containers:
- name: nginx
  image: nginx:1.26  # Updated from 1.25

sudo kubectl apply -f nginx-deployment.yml

Watch rollout progress:

sudo kubectl rollout status deployment/nginx-deployment

Output:

Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

Revert to a previous version.
sudo kubectl rollout history deployment/nginx-deployment

Output:

deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl set image deployment/nginx-deployment nginx=nginx:1.26

View details of a specific revision:

sudo kubectl rollout history deployment/nginx-deployment --revision=2

Roll back to the previous revision:

sudo kubectl rollout undo deployment/nginx-deployment

Roll back to a specific revision:

sudo kubectl rollout undo deployment/nginx-deployment --to-revision=2

Pause rollout:
sudo kubectl rollout pause deployment/nginx-deployment

Make multiple changes:

sudo kubectl set image deployment/nginx-deployment nginx=nginx:1.26
sudo kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=500m,memory=512Mi

Resume rollout:

sudo kubectl rollout resume deployment/nginx-deployment

Different approaches for updating applications.
Gradually replace old Pods with new ones:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%

Advantages:
- No downtime during updates
- Gradual rollout limits the blast radius of a bad release
- Easy to pause, resume, or roll back

Disadvantages:
- Old and new versions run simultaneously during the update
- The rollout takes longer than replacing everything at once
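When maxSurge and maxUnavailable are given as percentages, Kubernetes resolves them against the replica count, rounding maxSurge up and maxUnavailable down. A quick sketch of that resolution (the `resolve_percent` helper is hypothetical, for illustration only):

```python
import math

def resolve_percent(replicas: int, percent: int):
    """Resolve a percentage maxSurge/maxUnavailable to absolute Pod counts.
    Kubernetes rounds maxSurge up and maxUnavailable down."""
    surge = math.ceil(replicas * percent / 100)
    unavailable = math.floor(replicas * percent / 100)
    return surge, unavailable

# 25% of 10 replicas: maxSurge resolves to 3 Pods (rounded up),
# maxUnavailable resolves to 2 Pods (rounded down).
print(resolve_percent(10, 25))  # (3, 2)
```

The asymmetric rounding means a percentage setting never resolves to "zero surge and zero unavailable", which would deadlock the rollout.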
Terminate all Pods before creating new ones:
spec:
  strategy:
    type: Recreate

Advantages:
- Old and new versions never run at the same time
- Simple and predictable behavior

Disadvantages:
- Downtime between terminating old Pods and starting new ones
- Not suitable for user-facing production services
Run two identical environments, switch traffic:
# Blue deployment (current)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: app
        image: myapp:1.0
---
# Green deployment (new)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: app
        image: myapp:2.0
---
# Service (switch between blue and green)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
    version: blue  # Change to 'green' to switch
  ports:
  - port: 80
    targetPort: 8080

Gradually shift traffic to new version:
# Stable deployment (90% traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: app
        image: myapp:1.0
---
# Canary deployment (10% traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: app
        image: myapp:2.0
---
# Service (routes to both)
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp  # Matches both stable and canary
  ports:
  - port: 80
    targetPort: 8080

A production-ready web Deployment example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
    tier: frontend
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
        version: v1.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
          name: http
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web
              topologyKey: kubernetes.io/hostname

A Deployment configured through a ConfigMap and a Secret:

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  LOG_LEVEL: "info"
  API_TIMEOUT: "30"
---
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
type: Opaque
stringData:
  DATABASE_PASSWORD: "secretpassword"
  API_KEY: "abc123xyz789"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: myapi:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: api-config
        - secretRef:
            name: api-secrets
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

A Deployment with a log-shipping sidecar container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-sidecar
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      # Main application
      - name: app
        image: myapp:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      # Log shipper sidecar
      - name: log-shipper
        image: fluent/fluentd:v1.16
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
      volumes:
      - name: logs
        emptyDir: {}

Useful commands for inspecting Deployments:

sudo kubectl get deployments
sudo kubectl get deployments -o wide
sudo kubectl describe deployment nginx-deployment
sudo kubectl get deployment nginx-deployment -o yaml
sudo kubectl get deployments --watch
sudo kubectl get replicasets
sudo kubectl get pods -l app=nginx

Problem: Pods can consume unlimited resources.
Solution: Always set resource limits:

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

Problem: Kubernetes routes traffic to unhealthy Pods.
Solution: Add liveness and readiness probes:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
readinessProbe:
  httpGet:
    path: /ready
    port: 8080

Problem: Update happens too fast, causing issues.
Solution: Configure appropriate maxSurge and maxUnavailable:

rollingUpdate:
  maxSurge: 1
  maxUnavailable: 0  # Ensures no downtime

Problem: Deploying untested changes to production.
Solution: Test in staging first:

# Test in staging
sudo kubectl apply -f deployment.yml --namespace=staging

# After validation, deploy to production
sudo kubectl apply -f deployment.yml --namespace=production

Problem: Not monitoring deployment progress.
Solution: Always check rollout status:

sudo kubectl rollout status deployment/nginx-deployment

Always use YAML files:

# Good: Declarative
sudo kubectl apply -f deployment.yml

# Avoid: Imperative
sudo kubectl create deployment nginx --image=nginx:1.25

Choose based on load and availability requirements:
# Development: 1-2 replicas
replicas: 1

# Production: 3+ replicas for high availability
replicas: 5

Organize and select resources:
metadata:
  labels:
    app: myapp
    version: v1.0
    environment: production
    tier: backend

Control rollout behavior:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%

Document deployment details:
metadata:
  annotations:
    kubernetes.io/change-cause: "Update to version 1.26"
    description: "Production web server"
    owner: "platform-team"

Isolate environments:
metadata:
  name: app-deployment
  namespace: production

Set up monitoring and alerts:

sudo kubectl get deployments --watch
sudo kubectl rollout status deployment/app

When troubleshooting, work down from the Deployment to its ReplicaSets and Pods:

sudo kubectl get deployment nginx-deployment
sudo kubectl describe deployment nginx-deployment

sudo kubectl get replicasets
sudo kubectl describe replicaset <replicaset-name>

sudo kubectl get pods -l app=nginx
sudo kubectl describe pod <pod-name>
sudo kubectl logs <pod-name>

Check recent cluster events:

sudo kubectl get events --sort-by='.lastTimestamp'

If the rollout is broken, roll back:

sudo kubectl rollout undo deployment/nginx-deployment

In episode 26, we've explored Deployment in Kubernetes in depth. We've learned how to create Deployments, perform rolling updates, roll back changes, and implement different deployment strategies.
Key takeaways:
- Deployment manages Pods declaratively through ReplicaSets
- Rolling updates replace Pods gradually, and maxSurge/maxUnavailable control the pace
- kubectl rollout undo reverts a Deployment to a previous revision
- Resource limits, health probes, and rollout monitoring are essential in production
Deployment is the cornerstone of application management in Kubernetes. By understanding Deployments, you can confidently deploy, update, and manage applications in production with zero downtime and easy rollback capabilities.
Are you getting a clearer understanding of Deployment in Kubernetes? Keep your learning momentum going and look forward to the next episode!