In this episode we'll look at how to share volumes between Pods in Kubernetes. We'll cover PersistentVolume, PersistentVolumeClaim, StorageClass, and strategies for sharing data across Pods.

In the previous episode we learned about Secrets. In this episode 22 we'll cover Sharing Volumes Between Pods, exploring how to share data across multiple Pods using PersistentVolume and PersistentVolumeClaim.

Note: here I'll be using a Kubernetes cluster installed via K3s.
While the volumes in the previous episode were Pod-scoped, sharing data between Pods requires a different approach. A PersistentVolume provides a cluster-level storage resource that can be claimed by multiple Pods, enabling data sharing and persistence beyond the Pod lifecycle.

Volume sharing is the ability for multiple Pods to access the same storage resource. It enables scenarios such as shared file storage, collaborative workloads, and data exchange between applications.

Think of a shared volume as a network drive at the office: multiple employees (the Pods) can access the same files, collaborate on documents, and share data without duplicating storage.
Shared volumes solve several important use cases: serving identical content from multiple replicas, handing files from one workload to another, and distributing shared data to many Pods at once. Without shared volumes, every Pod needs its own storage, which makes data sharing complex and inefficient.
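For contrast: containers inside a single Pod can already share data through an emptyDir volume; a PersistentVolume is only needed once you share across Pods. A minimal sketch (the Pod and container names here are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: same-pod-sharing        # hypothetical example name
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /cache/msg && sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 2 && cat /cache/msg && sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}              # Pod-scoped: deleted together with the Pod
```

Both containers see /cache, but the data dies with the Pod, which is exactly why longer-lived, cross-Pod sharing needs a PersistentVolume.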
A PersistentVolume (PV) is a cluster-level storage resource, provisioned by an administrator or created dynamically through a StorageClass.
Example: NFS PersistentVolume

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.com
    path: /shared/data
```

Example: hostPath PersistentVolume (for testing)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /mnt/data
    type: DirectoryOrCreate
```

Warning: hostPath PersistentVolumes are only suitable for single-node testing. Use network storage for production.
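Since this series runs on K3s (see the note above), it's worth knowing that K3s ships with the local-path provisioner out of the box, exposed as a StorageClass named local-path. A sketch of a claim against it (the claim name is made up); note that local-path only supports ReadWriteOnce, so it won't help with cross-node sharing:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc          # hypothetical example name
spec:
  storageClassName: local-path  # K3s' bundled dynamic provisioner
  accessModes:
    - ReadWriteOnce             # local-path does not support ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```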
Access modes determine how a volume can be mounted:

ReadWriteOnce (RWO): mounted read-write by a single node.

```yaml
accessModes:
  - ReadWriteOnce
```

ReadOnlyMany (ROX): mounted read-only by many nodes.

```yaml
accessModes:
  - ReadOnlyMany
```

ReadWriteMany (RWX): mounted read-write by many nodes.

```yaml
accessModes:
  - ReadWriteMany
```

ReadWriteOncePod (RWOP): mounted read-write by a single Pod.

```yaml
accessModes:
  - ReadWriteOncePod
```

The reclaim policy defines what happens to a volume when its claim is deleted:

Retain: the volume and its data are kept; an administrator must clean it up manually.

```yaml
persistentVolumeReclaimPolicy: Retain
```

Delete: the volume and its underlying storage are removed automatically.

```yaml
persistentVolumeReclaimPolicy: Delete
```

Recycle: deprecated; use dynamic provisioning instead.
A PersistentVolumeClaim (PVC) is a request for storage by a user, much like a Pod requesting CPU and memory, but for storage.
Example: Basic PVC

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Example: PVC with a StorageClass
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
```

Example: Shared storage PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```

When you create a PVC, Kubernetes finds a matching PV based on capacity, access modes, and StorageClass. Check the PVC status:

```shell
sudo kubectl get pvc
```

Output:

```
NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pvc     Bound    pv-001   5Gi        RWO            standard       2m
shared-pvc   Bound    nfs-pv   20Gi       RWX            nfs            1m
```
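If you want a claim to bind to one specific PV rather than letting the matcher pick, you can set spec.volumeName. A sketch (the claim name is made up) reusing the nfs-pv defined earlier:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pinned-pvc              # hypothetical example name
spec:
  volumeName: nfs-pv            # bind directly to this PV
  storageClassName: ""          # empty string disables dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```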
A Pod references a PVC to mount persistent storage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```

Example: a writer Pod and a reader Pod sharing one PVC.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: writer-pod
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command:
        - sh
        - -c
        - while true; do date >> /data/log.txt; sleep 5; done
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: reader-pod
spec:
  containers:
    - name: reader
      image: busybox:1.36
      command:
        - sh
        - -c
        - tail -f /data/log.txt
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-storage
```

Both Pods can access the same data simultaneously.
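Conceptually the writer and reader Pods are just two processes appending to and reading from the same filesystem. You can simulate the idea locally without a cluster (the temp directory here is a throwaway stand-in, nothing Kubernetes-specific):

```shell
#!/bin/sh
# Stand-in for the shared PVC: one directory used by both "Pods".
DIR=$(mktemp -d)

# "writer-pod": append three timestamps, like the while-loop above
for i in 1 2 3; do
  date >> "$DIR/log.txt"
done

# "reader-pod": read the very same file
wc -l < "$DIR/log.txt"    # prints 3
```

The shared PVC plays exactly this role, except the "directory" is network storage visible from every node.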
A StorageClass describes different types of storage and enables dynamic provisioning.
Example: Local storage

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Example: NFS storage
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /shared
reclaimPolicy: Retain
volumeBindingMode: Immediate
```

Example: AWS EBS (for AWS clusters)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Set a default StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Check the default StorageClass:

```shell
sudo kubectl get storageclass
```

Output:

```
NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
standard (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   5d
fast-ssd             ebs.csi.aws.com                Delete          WaitForFirstConsumer   2d
```
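With a default class marked like this, any PVC that omits storageClassName is served by it automatically. A sketch (the claim name is made up):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-class-pvc       # hypothetical example name
spec:
  # no storageClassName: the default class ("standard" above) is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```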
Example: multiple nginx replicas sharing the same web content.

```yaml
# PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-content-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.com
    path: /web/content
---
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-content-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# Deployment with multiple replicas sharing the volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: web-content-pvc
```

All three replicas serve the same web content.
Example: an upload pipeline, with API servers writing files and workers processing them.

```yaml
# PVC for shared uploads
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
---
# API server that receives uploads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myapi:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: uploads
              mountPath: /var/uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: uploads-pvc
---
# Workers that process uploads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-processor
spec:
  replicas: 3
  selector:
    matchLabels:
      app: processor
  template:
    metadata:
      labels:
        app: processor
    spec:
      containers:
        - name: processor
          image: processor:latest
          volumeMounts:
            - name: uploads
              mountPath: /var/uploads
              readOnly: true
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: uploads-pvc
```

The API servers write the uploads; the processors read them.
Example: shared configuration, written by a manager Pod and read by application replicas. (The PVC needs a writable access mode so the manager can update it; the application replicas mount it read-only.)

```yaml
# PVC for shared configuration
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-pvc
spec:
  accessModes:
    - ReadWriteMany   # the manager writes; apps mount it read-only
  resources:
    requests:
      storage: 1Gi
---
# Config manager (writes config)
apiVersion: v1
kind: Pod
metadata:
  name: config-manager
spec:
  containers:
    - name: manager
      image: config-manager:latest
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: config-pvc
---
# Application (reads config)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest
          volumeMounts:
            - name: config
              mountPath: /etc/app/config
              readOnly: true
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: config-pvc
```

Example: a StatefulSet combining shared and per-Pod storage.

```yaml
# Shared PVC for common data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
# StatefulSet with both shared and individual storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            # Individual storage per Pod
            - name: data
              mountPath: /var/lib/postgresql/data
            # Shared storage across all Pods
            - name: shared
              mountPath: /shared
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data-pvc
  # Individual PVC per Pod
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```

You can expand an existing PVC's size, if the StorageClass supports it.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
```

Edit the PVC to increase its size:

```shell
sudo kubectl edit pvc data-pvc
```

Change the storage request:

```yaml
spec:
  resources:
    requests:
      storage: 20Gi  # increased from 10Gi
```

Or patch it directly:

```shell
sudo kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```

Important: volume expansion can only increase the size, never decrease it. Some storage types require a Pod restart to recognize the new size.
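While a filesystem resize is still pending, the PVC's status keeps reporting the old capacity along with a condition. Roughly what kubectl get pvc -o yaml would show (illustrative, trimmed):

```yaml
status:
  capacity:
    storage: 10Gi               # old size until the resize completes
  conditions:
    - type: FileSystemResizePending
      status: "True"
```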
Problem: using ReadWriteOnce for multi-Pod sharing.
Solution: use ReadWriteMany for true sharing:

```yaml
# Bad: cannot be shared across nodes
accessModes:
  - ReadWriteOnce

# Good: can be shared across nodes
accessModes:
  - ReadWriteMany
```

Problem: the PVC stays in Pending state.
Solution: create a matching PV or use dynamic provisioning:

```shell
# Check PVC status
sudo kubectl describe pvc <pvc-name>
# Look for events explaining why binding failed
```

Problem: the PVC requests more than any PV provides.
Solution: match the PVC request to the available PV capacity:

```yaml
# PV capacity
capacity:
  storage: 10Gi

# PVC request (must be <= PV capacity)
resources:
  requests:
    storage: 10Gi
```

Problem: hostPath only works on a single node.
Solution: use network storage for multi-node clusters:

```yaml
# Bad: hostPath in production
hostPath:
  path: /data

# Good: NFS in production
nfs:
  server: nfs-server.example.com
  path: /shared/data
```

Problem: data is deleted when the PVC is removed.
Solution: use Retain for important data:

```yaml
persistentVolumeReclaimPolicy: Retain
```

Define storage tiers for different needs:
```yaml
# Fast SSD for databases
storageClassName: fast-ssd

# Standard HDD for logs
storageClassName: standard

# Shared NFS for uploads
storageClassName: nfs-storage
```

Choose access modes based on sharing requirements:

```yaml
# Single Pod, single node
accessModes:
  - ReadWriteOnce

# Multiple Pods, multiple nodes
accessModes:
  - ReadWriteMany

# Read-only sharing
accessModes:
  - ReadOnlyMany
```

Protect important data:

```yaml
persistentVolumeReclaimPolicy: Retain
```

Organize storage resources with labels:

```yaml
metadata:
  name: database-pv
  labels:
    type: database
    environment: production
    tier: fast
```

Check PVC usage regularly:

```shell
sudo kubectl get pvc
sudo kubectl describe pvc <pvc-name>
```

Enable volume expansion:

```yaml
allowVolumeExpansion: true
```

Don't share PVCs unnecessarily:

```yaml
# Good: separate PVCs for different purposes
- name: database-pvc
- name: logs-pvc
- name: uploads-pvc
```

Useful commands for inspecting storage:

```shell
sudo kubectl get pv
sudo kubectl get pvc
sudo kubectl get pvc -A            # all namespaces
sudo kubectl describe pv <pv-name>
sudo kubectl describe pvc <pvc-name>
sudo kubectl get storageclass
sudo kubectl exec <pod-name> -- df -h
```

Check why a PVC is not binding:

```shell
sudo kubectl describe pvc <pvc-name>
```

Common causes: no PV with matching capacity, access modes, or StorageClass, or the StorageClass's provisioner is unavailable.
Set a proper security context:

```yaml
securityContext:
  fsGroup: 2000
  runAsUser: 1000
```
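In context, fsGroup belongs at the Pod level, so files on the mounted volume become group-owned by that GID and are writable by the non-root user. A sketch (the Pod name is made up) reusing data-pvc from earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app              # hypothetical example name
spec:
  securityContext:
    fsGroup: 2000               # volume files get group ownership 2000
    runAsUser: 1000             # container processes run as UID 1000
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```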
Verify the PVC is bound and mounted:

```shell
sudo kubectl get pvc
sudo kubectl describe pod <pod-name>
```

Check disk usage and expand if needed:

```shell
sudo kubectl exec <pod-name> -- df -h
sudo kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```

In this episode 22 we covered Sharing Volumes Between Pods in Kubernetes: PersistentVolume, PersistentVolumeClaim, StorageClass, and strategies for sharing data across Pods.
Key takeaway: sharing volumes between Pods is essential for collaborative workloads and data exchange in Kubernetes. By understanding PersistentVolume and PersistentVolumeClaim, you can design a robust storage architecture for your applications.

So, is volume sharing in Kubernetes getting clearer? Keep up the learning spirit, and see you in the next episode!