In this episode, we'll discuss managing Kubernetes objects using imperative and declarative approaches. We'll learn the differences, when to use each method, and best practices for object management.

Note
If you want to read the previous episode, you can click the Episode 24 thumbnail below
In the previous episode, we learned about Downward API and how to expose Pod metadata to applications. In episode 25, we'll discuss Managing Kubernetes Objects, exploring both imperative and declarative approaches to creating and managing resources.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
Understanding how to manage Kubernetes objects is fundamental to working effectively with Kubernetes. There are two main approaches: imperative (telling Kubernetes what to do) and declarative (telling Kubernetes what you want).
Object Management refers to how you create, update, and delete Kubernetes resources. The approach you choose affects maintainability, reproducibility, and collaboration.
Think of object management like cooking - imperative is like following step-by-step instructions ("chop onions, heat oil, add onions"), while declarative is like describing the desired outcome ("I want onion soup"). Both get you to the same place, but the approach differs.
Key characteristics of Object Management:
- Every object is created, updated, and deleted through the Kubernetes API.
- Imperative management issues commands that perform actions directly.
- Declarative management describes desired state in files and lets Kubernetes reconcile the cluster toward it.
Imperative management means telling Kubernetes exactly what actions to perform using commands.
Create objects directly with kubectl commands.
Creating a Deployment:
sudo kubectl create deployment nginx --image=nginx:1.25
Creating a Service:
sudo kubectl expose deployment nginx --port=80 --type=NodePort
Scaling a Deployment:
sudo kubectl scale deployment nginx --replicas=3
Setting Image:
sudo kubectl set image deployment/nginx nginx=nginx:1.26
Creating a ConfigMap:
sudo kubectl create configmap app-config --from-literal=key1=value1 --from-literal=key2=value2
Creating a Secret:
sudo kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secret123
Use configuration files with imperative commands.
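A handy bridge between the two styles: an imperative command with a client-side dry run can generate the configuration file for you. Running `sudo kubectl create deployment nginx --image=nginx:1.25 --dry-run=client -o yaml > deployment.yml` produces roughly the following manifest (trimmed of the empty placeholder fields kubectl also emits):

```yaml
# Sketch of the manifest generated by the dry-run command above,
# ready to be edited and managed declaratively from here on.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.25
        name: nginx
```

This is a common way to bootstrap declarative manifests without writing YAML from scratch.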
Create from file:
sudo kubectl create -f deployment.yml
Replace object:
sudo kubectl replace -f deployment.yml
Delete object:
sudo kubectl delete -f deployment.yml
Declarative management means describing the desired state in configuration files and letting Kubernetes figure out how to achieve it.
Use kubectl apply with configuration files.
Example: Deployment Configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
Apply configuration:
sudo kubectl apply -f nginx-deployment.yml
Update configuration (edit file and reapply):
# Edit nginx-deployment.yml (change replicas to 5)
sudo kubectl apply -f nginx-deployment.yml
Apply directory:
sudo kubectl apply -f ./manifests/
Apply recursively:
sudo kubectl apply -f ./manifests/ --recursive
kubectl apply performs a three-way merge between:
- The last applied configuration (stored in the kubectl.kubernetes.io/last-applied-configuration annotation)
- The live object state in the cluster
- The new configuration file you are applying
This enables intelligent updates that preserve manual changes when possible.
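To make the stored input concrete: after an apply, the live object carries the previously applied manifest in an annotation. A sketch of the relevant fragment (other fields omitted; the JSON matches whatever you last applied):

```yaml
# Fragment of a live Deployment after `kubectl apply`.
# kubectl compares this stored config, the live state, and your new file.
metadata:
  name: nginx
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.25","name":"nginx","ports":[{"containerPort":80}]}]}}}}
```

This is why a field you remove from your file can actually be deleted from the live object: kubectl sees it was present in the last applied configuration.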
Let's compare imperative and declarative management for common tasks.
Imperative:
sudo kubectl create deployment web --image=nginx:1.25 --replicas=3
sudo kubectl expose deployment web --port=80 --type=LoadBalancer
Declarative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
sudo kubectl apply -f web-deployment.yml
Imperative:
sudo kubectl scale deployment web --replicas=5
Declarative:
# Edit file: change replicas from 3 to 5
spec:
  replicas: 5
sudo kubectl apply -f web-deployment.yml
Imperative:
sudo kubectl set image deployment/web nginx=nginx:1.26
Declarative:
# Edit file: change image version
containers:
- name: nginx
  image: nginx:1.26
sudo kubectl apply -f web-deployment.yml
Always use declarative management for production environments:
# Good: Declarative
sudo kubectl apply -f production/
# Avoid: Imperative in production
sudo kubectl create deployment ...
Imperative is fine for development and testing:
# Quick test
sudo kubectl run test-pod --image=nginx:1.25 --rm -it -- /bin/bash
Structure your manifests logically:
manifests/
├── base/
│   ├── deployment.yml
│   ├── service.yml
│   └── configmap.yml
├── dev/
│   └── kustomization.yml
├── staging/
│   └── kustomization.yml
└── production/
    └── kustomization.yml
Always commit configuration files to Git:
git add manifests/
git commit -m "Add nginx deployment"
git push
Include metadata for organization:
metadata:
  name: nginx
  labels:
    app: nginx
    version: v1.0
    environment: production
  annotations:
    description: "Production web server"
    owner: "platform-team"
Organize resources by namespace:
metadata:
  name: nginx
  namespace: production
Add comments to explain complex configurations:
# Nginx deployment for production web traffic
# Configured with 3 replicas for high availability
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
Declarative approach:
# ConfigMap for application configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres"
  DATABASE_PORT: "5432"
  LOG_LEVEL: "info"
---
# Secret for sensitive data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_PASSWORD: "secretpassword"
  API_KEY: "abc123xyz789"
---
# Deployment for application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: myapp:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
---
# Service to expose application
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
Apply everything:
sudo kubectl apply -f app-stack.yml
Base configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: myapp:latest
        ports:
        - containerPort: 8080
Development overlay:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: app
        env:
        - name: ENVIRONMENT
          value: "development"
Production overlay:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  template:
    spec:
      containers:
      - name: app
        env:
        - name: ENVIRONMENT
          value: "production"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
Directory structure:
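As an aside, the base and overlays above are typically wired together with Kustomize, which kubectl supports natively via `kubectl apply -k`. A minimal sketch, assuming the `manifests/base` and `manifests/production` layout shown earlier, with the production overlay saved as the patch file:

```yaml
# manifests/base/kustomization.yml — lists the shared resources
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yml
- service.yml
---
# manifests/production/kustomization.yml — reuses the base and applies
# the production overlay as a strategic-merge patch on the Deployment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- path: deployment.yml
```

Then `sudo kubectl apply -k manifests/production/` renders and applies the patched result.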
kubernetes/
├── apps/
│   ├── frontend/
│   │   ├── deployment.yml
│   │   ├── service.yml
│   │   └── ingress.yml
│   └── backend/
│       ├── deployment.yml
│       ├── service.yml
│       └── configmap.yml
└── infrastructure/
    ├── namespaces.yml
    └── rbac.yml
Apply with Git workflow:
# Clone repository
git clone https://github.com/company/kubernetes-manifests.git
cd kubernetes-manifests
# Apply infrastructure
sudo kubectl apply -f infrastructure/
# Apply applications
sudo kubectl apply -f apps/
# Make changes
vim apps/frontend/deployment.yml
# Commit and push
git add apps/frontend/deployment.yml
git commit -m "Update frontend to v2.0"
git push
# Apply changes
sudo kubectl apply -f apps/frontend/
Understanding the differences:
# Creates new deployment
sudo kubectl create -f deployment.yml
# Fails if deployment exists
# Error: deployments.apps "nginx" already exists

# Creates deployment if it does not exist
sudo kubectl apply -f deployment.yml
# Updates deployment if it exists
sudo kubectl apply -f deployment.yml

# Replaces the entire object; fails if it does not exist
sudo kubectl replace -f deployment.yml
# Performs a three-way merge instead of a full replacement
sudo kubectl apply -f deployment.yml

Test changes before applying.
Preview what would happen:
Tip
A client-side dry run only parses and validates the YAML locally; it never sends the request to the API server. A server-side dry run runs the request through the full API server lifecycle, including admission controllers, without persisting anything to etcd. Here are the differences:
| Feature | Client | Server |
|---|---|---|
| Schema validation | local | server |
| Admission controllers | ❌ | ✅ |
| Server-side defaulting | ❌ | ✅ |
| Real behavior simulation | ❌ | ✅ |
# Client-side dry run
sudo kubectl apply -f deployment.yml --dry-run=client
# Server-side dry run
sudo kubectl apply -f deployment.yml --dry-run=server
Example output of client-side and server-side dry runs
See differences before applying:
sudo kubectl diff -f deployment.yml
Output shows what would change:
--- /tmp/LIVE-2656508399/apps.v1.Deployment.default.web
+++ /tmp/MERGED-975783886/apps.v1.Deployment.default.web
@@ -6,14 +6,14 @@
     kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"web"}},"template":{"metadata":{"labels":{"app":"web"}},"spec":{"containers":[{"image":"nginx:1.25","name":"nginx","ports":[{"containerPort":80}]}]}}}}
   creationTimestamp: "2026-03-26T16:55:41Z"
-  generation: 1
+  generation: 2
   name: web
   namespace: default
   resourceVersion: "480086"
   uid: 69e521b0-5556-4354-a788-db0463e1a935
 spec:
   progressDeadlineSeconds: 600
-  replicas: 3
+  replicas: 5
   revisionHistoryLimit: 10
   selector:
     matchLabels:
Problem: Using both approaches causes confusion.
Solution: Choose one approach per environment:
# Bad: Mixing approaches
sudo kubectl create deployment nginx --image=nginx:1.25
sudo kubectl apply -f service.yml
# Good: Consistent approach
sudo kubectl apply -f deployment.yml
sudo kubectl apply -f service.yml
Problem: No history of changes.
Solution: Always commit configurations:
git add manifests/
git commit -m "Update deployment"
Problem: Manual changes not tracked.
Solution: Always update files and apply:
# Bad: Direct change
sudo kubectl scale deployment nginx --replicas=5
# Good: Update file and apply
# Edit deployment.yml
sudo kubectl apply -f deployment.yml
Problem: Applying untested configurations.
Solution: Use dry-run and diff:
sudo kubectl diff -f deployment.yml
sudo kubectl apply -f deployment.yml --dry-run=server
Problem: Missing required fields.
Solution: Use complete, valid YAML:
# Bad: Incomplete
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx

# Good: Complete
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

# Quick test
sudo kubectl run test --image=nginx:1.25 --rm -it -- /bin/bash

# Production deployment
sudo kubectl apply -f production/

# List resources
sudo kubectl get deployments
sudo kubectl get pods
sudo kubectl get services

# Inspect a deployment
sudo kubectl describe deployment nginx
sudo kubectl get deployment nginx -o yaml

# Opens editor (not recommended for production)
sudo kubectl edit deployment nginx

# Imperative
sudo kubectl delete deployment nginx
# Declarative
sudo kubectl delete -f deployment.yml

In episode 25, we've explored Managing Kubernetes Objects using both imperative and declarative approaches. We've learned the differences, advantages, and when to use each method.
Key takeaways:
- Imperative commands (create, scale, set image) are fast and well suited to development and experimentation.
- Declarative configuration with kubectl apply is the standard for production: reproducible, reviewable, and versionable.
- Keep all manifests in Git so every change has history and can be rolled back.
- Preview changes with kubectl diff and --dry-run before applying them.
Understanding object management is fundamental to working effectively with Kubernetes. By choosing the right approach for your use case, you can build maintainable, reproducible, and collaborative infrastructure.
Are you getting a clearer understanding of Managing Kubernetes Objects? Keep your learning momentum going and look forward to the next episode!
Note
If you want to continue to the next episode, you can click the Episode 26 thumbnail below