In this episode, we'll discuss how to share volumes between Pods in Kubernetes. We'll learn about PersistentVolumes, PersistentVolumeClaims, StorageClasses, and strategies for sharing data across Pods.

In the previous episode, we learned about Secrets. In episode 22, we'll discuss Sharing Volumes Between Pods, exploring how to share data across multiple Pods using PersistentVolumes and PersistentVolumeClaims.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
While the volumes in the previous episode were Pod-scoped, sharing data between Pods requires a different approach. PersistentVolumes provide cluster-level storage resources that Pods claim through PersistentVolumeClaims, enabling data sharing and persistence beyond the Pod lifecycle.
Volume Sharing is the ability for multiple Pods to access the same storage resource. This enables scenarios like shared file storage, collaborative workloads, and data exchange between applications.
Think of shared volumes like a network drive in an office - multiple employees (Pods) can access the same files, collaborate on documents, and share data without duplicating storage.
Key characteristics of Shared Volumes:
- Multiple Pods can mount the same underlying storage at the same time
- Data persists independently of any single Pod's lifecycle
- Access modes control whether sharing is read-write or read-only
- The storage backend (NFS, cloud disk, local path) determines which access modes are possible

Sharing volumes solves several important use cases:
- Serving the same web content from multiple replicas
- Producer/consumer pipelines where one workload writes files and another processes them
- Distributing shared configuration to many application Pods

Without shared volumes, each Pod would need its own storage, making data sharing complex and inefficient.
A PersistentVolume is a cluster-level storage resource provisioned by an administrator or dynamically created by StorageClasses.
Example: NFS PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.com
    path: /shared/data

Example: HostPath PersistentVolume (for testing)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /mnt/data
    type: DirectoryOrCreate

Warning: hostPath PersistentVolumes are only suitable for single-node testing. Use network storage for production.
Access modes determine how the volume can be mounted:

ReadWriteOnce (RWO): mounted read-write by a single node. Pods can share it only while they run on that node.

accessModes:
- ReadWriteOnce

ReadOnlyMany (ROX): mounted read-only by many nodes at once.

accessModes:
- ReadOnlyMany

ReadWriteMany (RWX): mounted read-write by many nodes at once - the mode for true multi-Pod, multi-node sharing.

accessModes:
- ReadWriteMany

ReadWriteOncePod (RWOP): mounted read-write by a single Pod across the whole cluster.

accessModes:
- ReadWriteOncePod

The reclaim policy defines what happens to the volume when the claim is deleted:

Retain: the volume and its data are kept; an administrator must clean it up and make it available again manually.

persistentVolumeReclaimPolicy: Retain

Delete: the volume and its underlying storage are removed automatically.

persistentVolumeReclaimPolicy: Delete

Recycle (Deprecated): performed a basic scrub before reuse; superseded by dynamic provisioning.
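Putting the two together: a PV may advertise several access modes (a mount uses one of them at a time) alongside its reclaim policy. A minimal sketch; the name and NFS path are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: multi-mode-pv
spec:
  capacity:
    storage: 5Gi
  accessModes: # advertised capabilities; each mount uses one at a time
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain # keep data after the claim goes away
  nfs:
    server: nfs-server.example.com
    path: /multi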
A PersistentVolumeClaim is a request for storage by a user. It's like a Pod requesting CPU and memory, but for storage.
Example: Basic PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Example: PVC with StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-storage-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi

Example: Shared Storage PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

When you create a PVC, Kubernetes finds a matching PV:
- The PV's capacity must be at least the requested size
- The PV must offer the requested access mode
- The storageClassName on both must match (or be unset on both)
- An optional label selector on the claim can narrow the candidates, as shown below
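If a claim should bind to one specific PV rather than any PV that happens to match, you can label the PV and select on it. A minimal sketch, assuming an NFS export; the gold-pv name and tier: gold label are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gold-pv
  labels:
    tier: gold # label the PV so a claim can target it explicitly
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.com
    path: /gold
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-pvc
spec:
  accessModes:
  - ReadWriteMany
  selector:
    matchLabels:
      tier: gold # bind only to PVs carrying this label
  resources:
    requests:
      storage: 20Gi

Note that a selector only filters pre-provisioned PVs; it isn't used for dynamically provisioned volumes.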
Check PVC status:

sudo kubectl get pvc

Output:

NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pvc     Bound    pv-001   5Gi        RWO            standard       2m
shared-pvc   Bound    nfs-pv   20Gi       RWX            nfs            1m

Pods reference PVCs to mount persistent storage.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc

To share data, several Pods simply reference the same claim. Here a writer Pod and a reader Pod mount one ReadWriteMany PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: writer-pod
spec:
  containers:
  - name: writer
    image: busybox:1.36
    command:
    - sh
    - -c
    - while true; do date >> /data/log.txt; sleep 5; done
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: reader-pod
spec:
  containers:
  - name: reader
    image: busybox:1.36
    command:
    - sh
    - -c
    # touch first so tail doesn't fail if the writer hasn't created the file yet
    - touch /data/log.txt; tail -f /data/log.txt
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-storage

Both Pods can access the same data simultaneously (provided the bound PV actually supports ReadWriteMany).
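To see it working, apply the manifests and follow the reader's output. A quick check, assuming the three manifests above are saved as shared-pods.yaml; note that the default K3s local-path provisioner only supports ReadWriteOnce, so this RWX example needs a backend such as NFS:

sudo kubectl apply -f shared-pods.yaml
sudo kubectl logs -f reader-pod
# A new timestamp line should appear roughly every 5 seconds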
A StorageClass provides a way to describe different storage types and enables dynamic provisioning.
Example: Local Storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Example: NFS Storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /shared
reclaimPolicy: Retain
volumeBindingMode: Immediate

Example: AWS EBS (for AWS)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Set a default StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Check the default StorageClass:

sudo kubectl get storageclass

Output:

NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
standard (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   5d
fast-ssd             ebs.csi.aws.com                Delete          WaitForFirstConsumer   2d
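On the K3s cluster used in this series, a default class already exists: K3s ships with the local-path provisioner as its default StorageClass (local-path). A claim that omits storageClassName is therefore provisioned dynamically; a minimal sketch (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k3s-dynamic-pvc
spec:
  # No storageClassName: falls back to the default class
  # (local-path on K3s), which provisions a node-local volume
  accessModes:
  - ReadWriteOnce # local-path volumes are node-local, so RWO only
  resources:
    requests:
      storage: 1Gi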
Example: Multiple web server replicas serving the same content.

# PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-content-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.example.com
    path: /web/content
---
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-content-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# Deployment with multiple replicas sharing volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
      volumes:
      - name: content
        persistentVolumeClaim:
          claimName: web-content-pvc

All three replicas share the same web content.
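A quick way to confirm the replicas really see the same files (the second exec needs a concrete Pod name from the list; <another-web-pod> is a placeholder):

# Write through one replica (kubectl picks a Pod from the Deployment)
sudo kubectl exec deploy/web-server -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
# List the replicas, then read the file back through a different Pod
sudo kubectl get pods -l app=web
sudo kubectl exec <another-web-pod> -- cat /usr/share/nginx/html/index.html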
Example: An upload-processing pipeline - API servers write files, workers consume them.

# PVC for shared uploads
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
---
# API server that receives uploads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: myapi:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: uploads
          mountPath: /var/uploads
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: uploads-pvc
---
# Worker that processes uploads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-processor
spec:
  replicas: 3
  selector:
    matchLabels:
      app: processor
  template:
    metadata:
      labels:
        app: processor
    spec:
      containers:
      - name: processor
        image: processor:latest
        volumeMounts:
        - name: uploads
          mountPath: /var/uploads
          readOnly: true
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: uploads-pvc

API servers write uploads, processors read them.
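Assuming the (placeholder) myapi and processor images ship a shell, you can trace a file through the pipeline:

# Drop a test file through an API Pod
sudo kubectl exec deploy/api-server -- sh -c 'date > /var/uploads/test.txt'
# Confirm a processor Pod sees the same file (read-only)
sudo kubectl exec deploy/upload-processor -- cat /var/uploads/test.txt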
Example: Distributing shared configuration to many application Pods.

# PVC for shared configuration
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-pvc
spec:
  accessModes:
  # ReadWriteMany so the config manager can write while the apps mount
  # it read-only (a claim with only ReadOnlyMany couldn't be mounted
  # read-write by the manager)
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# Config manager (writes config)
apiVersion: v1
kind: Pod
metadata:
  name: config-manager
spec:
  containers:
  - name: manager
    image: config-manager:latest
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    persistentVolumeClaim:
      claimName: config-pvc
---
# Applications (read config)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: myapp:latest
        volumeMounts:
        - name: config
          mountPath: /etc/app/config
          readOnly: true
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: config-pvc

All five replicas read the same configuration written by the config manager.
Example: A StatefulSet that combines per-Pod storage with a shared volume.

# Shared PVC for common data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
# StatefulSet with both shared and individual storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        volumeMounts:
        # Individual storage per Pod
        - name: data
          mountPath: /var/lib/postgresql/data
        # Shared storage across all Pods
        - name: shared
          mountPath: /shared
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data-pvc
  # Individual PVC per Pod
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
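volumeClaimTemplates stamps out one claim per replica, named <template-name>-<statefulset-name>-<ordinal>:

sudo kubectl get pvc
# Expect data-database-0, data-database-1 and data-database-2 from the
# template, plus the single shared-data-pvc mounted by every replica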
Expand an existing PVC's size (if supported by the StorageClass):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Edit the PVC to increase the size:

sudo kubectl edit pvc data-pvc

Change the storage request:

spec:
  resources:
    requests:
      storage: 20Gi # Increased from 10Gi

Or patch directly:

sudo kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Important: Volume expansion only increases size, never decreases. Some storage types require a Pod restart to recognize the new size.
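You can follow the resize in the claim's conditions; for storage types that resize the filesystem on the next mount, a pending state shows up there:

sudo kubectl describe pvc data-pvc
# A condition such as FileSystemResizePending means the resize
# completes once the Pod using the claim is restarted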
Problem: Using ReadWriteOnce for multi-Pod sharing.
Solution: Use ReadWriteMany for true sharing:
# Bad: Can't share across nodes
accessModes:
- ReadWriteOnce

# Good: Can share across nodes
accessModes:
- ReadWriteMany

Problem: PVC stays in Pending state.
Solution: Create a matching PV or use dynamic provisioning:

# Check PVC status
sudo kubectl describe pvc <pvc-name>
# Look for events explaining why binding failed

Problem: PVC requests more than the PV provides.
Solution: Match PVC request to available PV capacity:
# PV capacity
capacity:
  storage: 10Gi

# PVC request (must be <= PV capacity)
resources:
  requests:
    storage: 10Gi

Problem: hostPath only works on a single node.
Solution: Use network storage for multi-node clusters:
# Bad: hostPath for production
hostPath:
  path: /data

# Good: NFS for production
nfs:
  server: nfs-server.example.com
  path: /shared/data

Problem: Data deleted when the PVC is removed.
Solution: Use Retain for important data:
persistentVolumeReclaimPolicy: Retain
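With Retain, the PV moves to Released after its claim is deleted and won't rebind on its own. After verifying (or cleaning) the data, one way to make it Available again is to clear the stale claimRef:

sudo kubectl patch pv <pv-name> --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'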
Define storage tiers for different needs:

# Fast SSD for databases
storageClassName: fast-ssd

# Standard HDD for logs
storageClassName: standard

# Shared NFS for uploads
storageClassName: nfs-storage

Choose access modes based on sharing requirements:
# Single Pod, single node
accessModes:
- ReadWriteOnce

# Multiple Pods, multiple nodes
accessModes:
- ReadWriteMany

# Read-only sharing
accessModes:
- ReadOnlyMany

Protect important data:
persistentVolumeReclaimPolicy: Retain

Organize storage resources with labels:
metadata:
  name: database-pv
  labels:
    type: database
    environment: production
    tier: fast

Check PVC usage regularly:
sudo kubectl get pvc
sudo kubectl describe pvc <pvc-name>

Enable volume expansion:

allowVolumeExpansion: true

Don't share PVCs unnecessarily:

# Good: Separate PVCs for different purposes
- name: database-pvc
- name: logs-pvc
- name: uploads-pvc

Useful commands for working with storage:

sudo kubectl get pv
sudo kubectl get pvc
sudo kubectl get pvc -A # All namespaces
sudo kubectl describe pv <pv-name>
sudo kubectl describe pvc <pvc-name>
sudo kubectl get storageclass
sudo kubectl exec <pod-name> -- df -h

Check why a PVC isn't binding:
sudo kubectl describe pvc <pvc-name>

Common causes:
- No PV matches the requested capacity or access modes
- The requested storageClassName doesn't exist
- The StorageClass uses WaitForFirstConsumer and no Pod has referenced the claim yet

If Pods get permission errors writing to the volume, set a proper security context:
securityContext:
  fsGroup: 2000
  runAsUser: 1000
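For context, a minimal sketch of where that block sits in a Pod spec (reusing the data-pvc claim from earlier; the Pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: secured-app
spec:
  securityContext:
    fsGroup: 2000 # volume files become group-owned by GID 2000
    runAsUser: 1000 # container processes run as UID 1000
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "id && touch /data/ok && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc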
Verify the PVC is bound and mounted:

sudo kubectl get pvc
sudo kubectl describe pod <pod-name>

Check disk usage and expand if needed:
sudo kubectl exec <pod-name> -- df -h
sudo kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

In episode 22, we've explored Sharing Volumes Between Pods in Kubernetes. We've learned about PersistentVolumes, PersistentVolumeClaims, StorageClasses, and strategies for sharing data across Pods.
Key takeaways:
- PersistentVolumes are cluster-level storage; PersistentVolumeClaims are the requests that bind to them
- Access modes (RWO, ROX, RWX, RWOP) determine how and where a volume can be shared
- True multi-node sharing needs ReadWriteMany; hostPath is for single-node testing only
- Reclaim policies decide what happens to data when a claim is deleted; use Retain for anything important
- StorageClasses enable dynamic provisioning and let you define storage tiers
Sharing volumes between Pods is essential for collaborative workloads and data exchange in Kubernetes. By understanding PersistentVolumes and PersistentVolumeClaims, you can design robust storage architectures for your applications.
Are you getting a clearer understanding of Sharing Volumes in Kubernetes? Keep your learning momentum going and look forward to the next episode!