In this episode, we'll deep-dive into PersistentVolume (PV) and PersistentVolumeClaim (PVC) in Kubernetes. You'll learn how Kubernetes abstracts storage, how PVs are provisioned, and how applications claim storage through PVCs.

Note
If you want to read the previous episode, you can click the Episode 21 thumbnail below
In the previous episode, we explored Volumes in Kubernetes and touched briefly on persistentVolumeClaim as a volume type. In this episode (21.1), we'll go deeper and focus entirely on PersistentVolume (PV) and PersistentVolumeClaim (PVC) — the two core objects that Kubernetes uses to decouple storage provisioning from application consumption.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
Without PVs and PVCs, every team that deploys a stateful application has to manually manage storage backends. With them, the storage lifecycle can be managed independently from the application lifecycle — making it possible to run databases, message queues, and file storage reliably in Kubernetes.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using a StorageClass. It is a cluster-level resource — meaning it exists independently of any Pod or Namespace.
Think of a PV like a physical disk that IT has formatted and made available. It's there waiting to be used. The specifics of where that disk lives (NFS server, cloud block storage, local disk) are abstracted behind the PV API object.
Key characteristics of PersistentVolume:

- Cluster-scoped: exists independently of any Pod or Namespace
- Has its own lifecycle, separate from the Pods that consume it
- Provisioned statically by an administrator or dynamically via a StorageClass
- Declares capacity, access modes, and a reclaim policy
A PersistentVolumeClaim (PVC) is a request for storage by a user or application. It is Namespace-scoped and specifies requirements such as storage size and access mode. Kubernetes finds a suitable PV that satisfies the claim and binds them together.
Think of PVC like a purchase order. The developer says: "I need 10Gi of read-write storage." Kubernetes then finds a PV that matches and binds the two together.
Key characteristics of PersistentVolumeClaim:

- Namespace-scoped: lives in the same Namespace as the workload that uses it
- Declares what the application needs (size, access modes) without naming a specific backend
- Binds to exactly one PV at a time
Understanding the full lifecycle prevents data loss and misconfiguration.
PVs can be provisioned in two ways:

- Static provisioning: an administrator pre-creates PVs by hand
- Dynamic provisioning: a StorageClass automatically creates a PV when a matching PVC is submitted

The Kubernetes control plane watches for unbound PVCs and matches them to available PVs based on:

- Requested storage capacity
- Access modes
- storageClassName
- Label selectors (if specified)
Once a match is found, both the PV and PVC move to Bound state. A PV can only be bound to one PVC at a time.
A Pod references the PVC by name. Kubernetes mounts the underlying storage into the container at the specified path.
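As a quick sketch (the Pod name, image, and mount path here are illustrative), a Pod mounts a claim such as app-data-pvc through a persistentVolumeClaim volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod          # illustrative name
spec:
  containers:
    - name: app
      image: nginx       # illustrative image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # storage appears here in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data-pvc   # references the PVC by name, in the same Namespace
```

The Pod never references the PV directly; the PVC is the indirection layer that keeps the workload portable across storage backends.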
When a PVC is deleted, the PV enters the Released state. What happens next depends on the Reclaim Policy:

- Retain: keep the PV and its data; manual cleanup is required before reuse
- Delete: delete the PV and the underlying storage
- Recycle (deprecated): scrub the volume (rm -rf) and make it available again

After Released, a PV with the Retain policy holds its data and cannot be bound to a new PVC without manual intervention. An admin must delete and recreate the PV (or clean the underlying storage) to reuse it.
A PV manifest defines the storage backend and its characteristics.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-data
  labels:
    type: local
    env: production
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/k8s/pv-local-data
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  storageClassName: nfs
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100
    path: /exports/k8s-data
```

| Field | Description |
|---|---|
| `capacity.storage` | The storage size this PV offers |
| `accessModes` | How the volume can be mounted |
| `storageClassName` | Links the PV to a StorageClass |
| `persistentVolumeReclaimPolicy` | What happens when the PVC is deleted |
| `volumeMode` | Filesystem (default) or Block |
| `nodeAffinity` | Constrain the PV to specific nodes (for local volumes) |
A PVC is simpler — it describes what the application needs without specifying how or where.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
  namespace: production
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: local
      env: production
```

Tip
Use label selectors when you want to bind a PVC to a specific PV, not just any PV that satisfies the size and access mode requirements. This is useful for ensuring a particular application always gets the same storage.
Access modes define how a volume can be mounted across nodes. Not all storage backends support all modes.
| Mode | Short | Description |
|---|---|---|
| ReadWriteOnce | RWO | Read-write by a single node |
| ReadOnlyMany | ROX | Read-only by many nodes |
| ReadWriteMany | RWX | Read-write by many nodes |
| ReadWriteOncePod | RWOP | Read-write by a single Pod (K8s 1.22+) |
```yaml
# Single-node databases (MySQL, PostgreSQL)
accessModes:
  - ReadWriteOnce

# Shared read-only config or static assets
accessModes:
  - ReadOnlyMany

# Shared writable storage (NFS, CephFS)
accessModes:
  - ReadWriteMany

# Strict single-Pod guarantee
accessModes:
  - ReadWriteOncePod
```

Warning
The access mode is a capability declaration, not an enforcement mechanism at the node level. ReadWriteOnce means the volume can only be mounted as read-write on one node at a time — but multiple Pods on the same node can use it simultaneously.
Reclaim policy controls what happens to the underlying storage when a PVC is deleted.
Retain: when the PVC is deleted, the PV and its data are kept, and the PV moves to the Released state until an admin intervenes.

```yaml
persistentVolumeReclaimPolicy: Retain
```

Delete: when the PVC is deleted, the PV and the underlying storage asset are deleted automatically.

```yaml
persistentVolumeReclaimPolicy: Delete
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv-production
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-ssd
  # ... backend spec
```

With static provisioning, an admin must pre-create PVs. With dynamic provisioning, a StorageClass automatically creates a PV when a matching PVC is submitted — no admin intervention needed.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
  namespace: production
spec:
  storageClassName: aws-ebs-gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

When this PVC is created, the aws-ebs-gp3 StorageClass triggers automatic PV creation and binding through its CSI provisioner. No manual PV creation needed. (The fast-ssd class above uses kubernetes.io/no-provisioner, so it cannot provision dynamically; it is meant for statically pre-created local PVs.)
Tip
Set a default StorageClass in your cluster so that PVCs that don't specify a storageClassName are automatically handled. In K3s, local-path is the default StorageClass.
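To mark a StorageClass as the cluster default, set the well-known annotation on it. A sketch, assuming the Rancher local-path provisioner that K3s ships with:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # PVCs with no storageClassName will use this class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

Only one StorageClass should carry this annotation with the value "true"; with multiple defaults, PVC admission behavior becomes ambiguous.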
This is the most common pattern for self-managed databases: an admin pre-creates a PV on a fast local disk, and the database workload claims it.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/k8s/postgres
    type: DirectoryOrCreate
```

When multiple Pods need to read and write the same shared storage (e.g., a legacy file upload service with multiple replicas), use ReadWriteMany with an NFS-backed PV.
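The shared-storage pattern can be sketched as follows; the claim name, Deployment name, image, and mount path are illustrative, and the `nfs` StorageClass is assumed to match an NFS-backed PV like the one shown earlier:

```yaml
# Hypothetical RWX claim against an NFS-backed PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-uploads-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
---
# All three replicas mount the same volume read-write
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: upload-service
  template:
    metadata:
      labels:
        app: upload-service
    spec:
      containers:
        - name: app
          image: nginx   # placeholder image
          volumeMounts:
            - name: uploads
              mountPath: /data/uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: shared-uploads-pvc
```

Because the access mode is ReadWriteMany, replicas scheduled on different nodes can all write to the same backing store — something ReadWriteOnce cannot provide.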
After applying manifests, verify binding:
```shell
# List all PersistentVolumes
sudo kubectl get pv

# List all PersistentVolumeClaims
sudo kubectl get pvc

# List PVCs in a specific namespace
sudo kubectl get pvc -n production

# Describe a specific PV for detailed info
sudo kubectl describe pv postgres-pv

# Describe a specific PVC
sudo kubectl describe pvc postgres-pvc
```

Expected output for a healthy bound PV:
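The `kubectl get pv` output for a healthy bound PV looks roughly like this (names and values illustrative):

```
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   AGE
postgres-pv   20Gi       RWO            Retain           Bound    production/postgres-pvc   manual         2m
```

The STATUS column should read Bound, and CLAIM should point at the expected namespace/PVC pair.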
Expected output for a healthy bound PVC:
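The `kubectl get pvc` output for a healthy bound PVC looks roughly like this (names and values illustrative):

```
NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Bound    postgres-pv   20Gi       RWO            manual         2m
```

Here the VOLUME column confirms which PV the claim was bound to.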
If your application outgrows the original PVC size, you can expand it (if the StorageClass has allowVolumeExpansion: true).
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi   # increased from the original 20Gi request
```

Apply the change and Kubernetes will trigger a resize operation on the underlying storage.
Caution
Volume shrinking is not supported. You can only expand a PVC, never reduce it, so size the initial request with some headroom: you can always grow later, but you can never shrink.
Symptom: PVC stays Pending indefinitely.
Causes and fixes:
| Cause | Fix |
|---|---|
| No PV with matching access mode | Create a PV with correct access mode |
| PV capacity is smaller than PVC request | Increase PV capacity or reduce PVC request |
| StorageClass mismatch | Ensure storageClassName matches between PV and PVC |
| No default StorageClass | Add a default StorageClass or specify one explicitly |
```shell
sudo kubectl describe pvc <pvc-name>
# Look at the "Events" section for the root cause
```

Problem: Data disappears after Pod restarts.
Root cause: Using emptyDir or hostPath instead of a PVC.
```yaml
# Bad: data is lost when Pod is deleted or rescheduled
volumes:
  - name: data
    emptyDir: {}

# Good: data persists across Pod restarts and rescheduling
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data-pvc
```

Problem: Accidentally deleting a PVC deletes the application's data.
Prevention: Use Retain reclaim policy and protect PVCs with finalizers or RBAC policies that prevent unintended deletion.
Warning
A PVC deletion with Delete reclaim policy will permanently destroy the underlying storage and all its data. There is no undo. Always use Retain for production stateful workloads.
Problem: Pod cannot find the PVC.
Root cause: PVCs are Namespace-scoped. A Pod in namespace: production cannot reference a PVC in namespace: default.
```shell
sudo kubectl get pvc -n production
sudo kubectl get pod -n production
```

Problem: Pod is rescheduled to a different node and storage data is no longer available.
Root cause: hostPath PVs are tied to a specific node. When a Pod moves to another node, it doesn't find the data.
Fix: Use nodeAffinity to pin the Pod to the node with the hostPath data, or better — use proper networked storage (NFS, CSI driver, cloud volumes).
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/k8s
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-01
```

Don't rely on defaults. Always set persistentVolumeReclaimPolicy explicitly:
```yaml
# For production databases
persistentVolumeReclaimPolicy: Retain

# For dev/test ephemeral storage
persistentVolumeReclaimPolicy: Delete
```

Avoid static PV management at scale. Define clear StorageClasses for different tiers:
```yaml
# High-performance SSD for databases
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
allowVolumeExpansion: true
---
# Standard HDD for backups/logs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-hdd
provisioner: ebs.csi.aws.com
parameters:
  type: st1
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Label PVs for easy filtering and selector-based binding:
```yaml
metadata:
  name: my-pv
  labels:
    env: production
    app: postgres
    tier: database
```

Always enable allowVolumeExpansion so you can grow PVCs without downtime:

```yaml
allowVolumeExpansion: true
```

Use the WaitForFirstConsumer Volume Binding Mode

This delays PV binding until a Pod is scheduled, ensuring the PV is created in the same availability zone as the Pod:

```yaml
volumeBindingMode: WaitForFirstConsumer
```

Important
Without WaitForFirstConsumer, a PV can be provisioned in a different availability zone than where the Pod lands, causing a scheduling failure. This is especially important in multi-zone cloud environments.
Integrate PV/PVC status into your observability stack:

```shell
# Watch status continuously
sudo kubectl get pv,pvc --all-namespaces -w

# Check for Released or Failed PVs (potential orphans)
sudo kubectl get pv | grep -v Bound
```

PVs and PVCs add complexity. Sometimes simpler solutions are right:

- Stateless applications: use emptyDir or no volume at all.
- Configuration data: use configMap and secret volumes.
- Cache or scratch space: emptyDir is fine.

Note
PVs and PVCs shine when you need to persist application state inside the cluster. For everything else, lean on managed services or simpler volume types.
In episode 21.1, we've covered PersistentVolume (PV) and PersistentVolumeClaim (PVC) in depth. These two objects are the backbone of stateful application storage in Kubernetes.
Key takeaways:
- PVs are cluster-scoped storage resources; PVCs are Namespace-scoped claims that bind to them
- The reclaim policy (Retain vs Delete) controls what happens to data when a PVC is deleted
- Use Retain for production databases to prevent accidental data loss
- Use WaitForFirstConsumer in multi-zone clusters to avoid zone-mismatch failures
- Enable allowVolumeExpansion on StorageClasses for online volume growth

Understanding PV and PVC puts you in control of storage management in Kubernetes — a critical skill for running stateful applications reliably in production.
Are you getting a clearer picture of how Kubernetes manages persistent storage? Keep the momentum going and look forward to the next episode!
Note
If you want to continue to the next episode, you can click the Episode 22 thumbnail below