In this episode we will cover Kubernetes StatefulSets for managing stateful applications. We will learn about stable network identity, persistent storage, ordered deployment, and best practices for databases and other stateful workloads.

Note

For those of you who want to read the previous episode, you can click the episode 26 thumbnail below.

In the previous episode we learned about Deployments for managing stateless applications with rolling updates and easy scaling. Now, in episode 27, we will cover StatefulSets, which are designed specifically for stateful applications that require stable network identity and persistent storage.

Note: here I will be using a Kubernetes cluster installed via K3s.
While Deployments work great for stateless applications, stateful applications such as databases, message queues, and distributed systems need guarantees about Pod identity, ordering, and storage persistence. StatefulSets provide these guarantees.

A StatefulSet is a Kubernetes workload resource that manages stateful applications, providing stable network identity, persistent storage, and ordered deployment and scaling.

Think of a StatefulSet as a numbered team where every member has a specific role and identity: member-0 is always the leader, member-1 is always the backup, and so on. Unlike a Deployment, where all Pods are interchangeable, StatefulSet Pods have a unique, persistent identity.
Key characteristics of a StatefulSet:

- Stable, unique network identifiers for each Pod
- Stable, persistent storage per Pod
- Ordered, graceful deployment and scaling
- Ordered, automated rolling updates

To understand the key differences:
| Aspect | StatefulSet | Deployment |
|---|---|---|
| Pod Identity | Stable, unique (web-0, web-1) | Random (web-abc123) |
| Network Identity | Stable hostname | Random hostname |
| Storage | Individual PVC per Pod | Shared or no storage |
| Deployment Order | Sequential (0→1→2) | Parallel |
| Scaling Order | Sequential | Parallel |
| Use Case | Databases, stateful apps | Web servers, APIs |
| Pod Replacement | Same identity preserved | New random identity |
StatefulSets solve the critical challenges of stateful applications: stable identity, per-Pod storage, and ordered operations. Without a StatefulSet, managing a stateful application would require complex custom logic for identity management, storage allocation, and ordered operations.

Let's create a basic StatefulSet.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
          name: web
```

Apply the StatefulSet:

```bash
sudo kubectl apply -f web-statefulset.yml
```

Watch the Pods being created:

```bash
sudo kubectl get pods -w -l app=nginx
```

The output shows sequential creation:
```
NAME    READY   STATUS              RESTARTS   AGE
web-0   0/1     Pending             0          0s
web-0   0/1     ContainerCreating   0          0s
web-0   1/1     Running             0          10s
web-1   0/1     Pending             0          0s
web-1   0/1     ContainerCreating   0          0s
web-1   1/1     Running             0          10s
web-2   0/1     Pending             0          0s
web-2   0/1     ContainerCreating   0          0s
web-2   1/1     Running             0          10s
```

Notice:

- Pods are created strictly in order: web-0, then web-1, then web-2
- Each Pod gets a predictable name: `<statefulset-name>-<ordinal>`
- The next Pod is only created after the previous one is Running and Ready
Now let's add persistent storage with volumeClaimTemplates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

This creates:

- 3 Pods: web-0, web-1, web-2
- 3 PVCs: www-web-0, www-web-1, www-web-2 (the template name combined with the Pod name)
- A stable pairing: each Pod always remounts its own PVC, even after being rescheduled
A StatefulSet requires a Headless Service for network identity.

A Headless Service (clusterIP: None) does not load balance. Instead, it returns the IP addresses of the individual Pods, enabling direct Pod-to-Pod communication.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None  # Make it headless
  selector:
    app: nginx
```

Each Pod gets a predictable DNS name:

```
<pod-name>.<service-name>.<namespace>.svc.cluster.local
```

For example:

- web-0.nginx.default.svc.cluster.local
- web-1.nginx.default.svc.cluster.local
- web-2.nginx.default.svc.cluster.local

Test DNS resolution:

```bash
sudo kubectl run -it --rm debug --image=busybox:1.36 --restart=Never -- nslookup web-0.nginx
```

Scaling up and down happens in order.
```bash
sudo kubectl scale statefulset web --replicas=5
```

Pods are created sequentially: web-3 first, then web-4.

```bash
sudo kubectl scale statefulset web --replicas=2
```

Pods are deleted in reverse ordinal order: web-4 first, then web-3, then web-2.
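Because ordinals are assigned deterministically, a script running inside a StatefulSet Pod can derive its own ordinal and stable DNS name from the hostname, a pattern many stateful images use at startup. A minimal sketch of the pattern, using a hard-coded example hostname in place of the real `$(hostname)` inside a Pod:

```shell
#!/bin/sh
# Inside a StatefulSet Pod the hostname is "<statefulset-name>-<ordinal>",
# e.g. "web-2". Strip everything up to the last "-" to get the ordinal.
HOSTNAME_EXAMPLE="web-2"            # stand-in for $(hostname) inside the Pod
ORDINAL=${HOSTNAME_EXAMPLE##*-}
echo "ordinal: $ORDINAL"            # -> ordinal: 2

# The stable DNS name then follows the headless-Service pattern:
echo "dns: ${HOSTNAME_EXAMPLE}.nginx.default.svc.cluster.local"
```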
Important

Scaling down does not delete the PersistentVolumeClaims. They remain for data safety and can be reused if you scale back up.
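If you do want Kubernetes to clean up those PVCs automatically, newer Kubernetes versions support a retention policy on the StatefulSet (the `StatefulSetAutoDeletePVC` feature; check that your cluster version has it enabled). A sketch of the relevant spec fragment:

```yaml
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # keep PVCs if the StatefulSet itself is deleted
    whenScaled: Delete    # delete the PVC of a Pod that is scaled away
```

The default for both fields is Retain, which matches the behavior described above.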
StatefulSets support two update strategies.

RollingUpdate (the default) updates Pods one at a time, in reverse ordinal order:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
```

Partition: only Pods with an ordinal >= partition are updated.

For example, with partition=2, only web-2 and higher are updated, while web-0 and web-1 keep running the old version. This is useful for canary-style rollouts.
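A sketch of what a partition-based canary looks like in the manifest, using the `web` StatefulSet from earlier:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only Pods with ordinal >= 2 pick up the new template
```

After verifying the updated Pod, lower the partition back to 0 to roll the change out to the remaining Pods.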
With the OnDelete strategy, Pods are only updated when they are manually deleted:

```yaml
spec:
  updateStrategy:
    type: OnDelete
```

This is useful when you want manual control over updates.
Example StatefulSet for MySQL:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
          name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret  # this Secret must exist before the Pods start
              key: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Example for PostgreSQL, with a ConfigMap, a Secret, and health probes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_DB: "myapp"
  POSTGRES_USER: "appuser"
  # Initialize the database in a subdirectory of the mount, so the
  # lost+found directory on a freshly formatted volume does not break initdb
  PGDATA: "/var/lib/postgresql/data/pgdata"
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_PASSWORD: "secretpassword"
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - port: 5432
    name: postgres
  clusterIP: None
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
          name: postgres
        envFrom:
        - configMapRef:
            name: postgres-config
        - secretRef:
            name: postgres-secret
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - appuser
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - appuser
          initialDelaySeconds: 5
          periodSeconds: 5
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi
```

Example for Redis with a custom configuration file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.conf: |
    appendonly yes
    appendfilename "appendonly.aof"
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    name: redis
  clusterIP: None
  selector:
    app: redis
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7.2
        ports:
        - containerPort: 6379
          name: redis
        command:
        - redis-server
        - /etc/redis/redis.conf
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /etc/redis
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: config
        configMap:
          name: redis-config
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
```

Finally, an example for Kafka (this assumes a separate zookeeper Service already exists):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  ports:
  - port: 9092
    name: kafka
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka:latest
        ports:
        - containerPort: 9092
          name: kafka
        env:
        # POD_NAME must be defined before the variables that reference it
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper:2181"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://$(POD_NAME).kafka:9092"
        # KAFKA_BROKER_ID must be an integer, so derive it from the ordinal at
        # the end of the Pod name (kafka-0 -> 0) before starting the broker
        command:
        - sh
        - -c
        - export KAFKA_BROKER_ID=${POD_NAME##*-} && exec /etc/confluent/docker/run
        volumeMounts:
        - name: data
          mountPath: /var/lib/kafka/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

You can control how Pods are managed during these operations with the podManagementPolicy field.
OrderedReady (the default): Pods are created and deleted sequentially, waiting for each one to become Ready:

```yaml
spec:
  podManagementPolicy: OrderedReady
```

Parallel: Pods are created and deleted in parallel (like a Deployment):

```yaml
spec:
  podManagementPolicy: Parallel
```

This is useful when Pod ordering does not matter but you still need stable identity.
To delete a StatefulSet without deleting its Pods:

```bash
sudo kubectl delete statefulset web --cascade=orphan
```

The Pods keep running but are no longer managed.

To delete the StatefulSet together with its Pods:

```bash
sudo kubectl delete statefulset web
```

The Pods are deleted in reverse ordinal order.

To delete the PVCs as well:

```bash
sudo kubectl delete pvc -l app=nginx
```

Warning

Deleting a PVC permanently deletes its data. Always back up before deleting.
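As a sketch of automating such backups, a CronJob can run mysqldump against a specific replica on a schedule. The names, image, and credentials below are assumptions based on the MySQL example in this article, and shipping the dump somewhere durable is left out:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: mysql:8.0     # reuse the client tools from the MySQL image
            env:
            - name: MYSQL_PWD    # read the root password from the Secret
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
            command:
            - sh
            - -c
            # mysql-0.mysql is the stable DNS name of the first replica.
            # In practice, write the dump to a mounted volume or push it
            # to object storage instead of the Pod's local /tmp.
            - mysqldump -h mysql-0.mysql -u root --all-databases > /tmp/backup.sql
```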
Problem: the StatefulSet requires a headless Service.

Solution: always create the headless Service first:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None  # Required for StatefulSet
  selector:
    app: nginx
```

Problem: serviceName does not match the Service's metadata.name.

Solution: make sure the names match:

```yaml
# Service
metadata:
  name: nginx

# StatefulSet
spec:
  serviceName: "nginx"  # Must match
```

Problem: PVCs cannot be provisioned.

Solution: make sure a StorageClass exists, or specify one:

```yaml
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    storageClassName: fast-ssd
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi
```

Problem: worrying about data loss when scaling down.

Solution: PVCs are preserved by design. Delete them manually only when you are certain:

```bash
# Scaling down does not delete the PVCs
sudo kubectl scale statefulset web --replicas=1

# The PVCs for web-1 and web-2 remain
sudo kubectl get pvc
```

Problem: Pods can consume unlimited resources.
Solution: always set limits for stateful workloads:

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1000m"
```

Choose a storage class based on your requirements:
```yaml
# Fast SSD for databases
storageClassName: fast-ssd

# Standard HDD for logs
storageClassName: standard
```

On K3s, the built-in default StorageClass is local-path, which provisions node-local storage.

Protect against voluntary disruptions with a PodDisruptionBudget:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
```

Prepare the environment before the main container starts with init containers:
```yaml
initContainers:
- name: init-config
  image: busybox:1.36
  command:
  - sh
  - -c
  - |
    echo "Initializing..."
    # Setup configuration
```

Monitor Pod health with probes:
```yaml
livenessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command:
    - mysqladmin
    - ping
  initialDelaySeconds: 5
  periodSeconds: 5
```

Spread Pods across nodes with anti-affinity:
```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - mysql
      topologyKey: kubernetes.io/hostname
```

Implement a backup strategy:
```bash
# Example: back up MySQL
sudo kubectl exec mysql-0 -- mysqldump -u root -p$PASSWORD --all-databases > backup.sql
```

Some useful commands for inspecting StatefulSets:

```bash
# List StatefulSets
sudo kubectl get statefulsets
sudo kubectl get statefulsets -o wide
sudo kubectl describe statefulset web

# Inspect the Pods and their storage
sudo kubectl get pods -l app=nginx
sudo kubectl get pvc

# Test DNS resolution
sudo kubectl run -it --rm debug --image=busybox:1.36 --restart=Never -- nslookup web-0.nginx
```

For troubleshooting:

```bash
sudo kubectl get statefulset web
sudo kubectl describe statefulset web
sudo kubectl get pods -l app=nginx
sudo kubectl describe pod web-0
sudo kubectl logs web-0
sudo kubectl get pvc
sudo kubectl describe pvc www-web-0
sudo kubectl get service nginx
sudo kubectl describe service nginx
sudo kubectl get events --sort-by='.lastTimestamp'
```

In this episode 27 we have covered StatefulSets in Kubernetes in depth. We learned how to manage stateful applications with stable identity, persistent storage, and ordered operations.
Key takeaways:

- StatefulSets provide stable Pod identity, per-Pod persistent storage, and ordered deployment, scaling, and updates
- A headless Service (clusterIP: None) gives every Pod a predictable DNS name
- volumeClaimTemplates create one PVC per Pod, and PVCs survive scale-downs by design
- Use Deployments for stateless workloads and StatefulSets for databases and other stateful workloads

StatefulSets are essential for running stateful applications in Kubernetes. By understanding them, you can confidently deploy and manage databases, distributed systems, and other stateful workloads with guaranteed identity and storage persistence.

So, is StatefulSet in Kubernetes getting clearer? Keep up the learning spirit and stay tuned for the next episode!

Note

For those of you who want to continue to the next episode, you can click the episode 28 thumbnail below.