In this episode, we'll discuss an important concept in Kubernetes called ReplicationController. We'll learn how ReplicationController ensures a specified number of Pod replicas are running at any given time.

Note
If you want to read the previous episode, you can click the Episode 10 thumbnail below
In the previous episode, we learned about Probes in Kubernetes for ensuring application health and availability. In episode 11, we'll discuss an important concept for managing Pod replicas: ReplicationController.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
ReplicationController is one of the fundamental controllers in Kubernetes that ensures a specified number of Pod replicas are running at any given time. While ReplicationController has been largely superseded by ReplicaSet and Deployment in modern Kubernetes, understanding it is important because it introduces core concepts that are used throughout Kubernetes.
Important
Important Note: ReplicationController is considered legacy. In modern Kubernetes, you should use ReplicaSet (which we'll cover in the next episode) or Deployment instead. However, understanding ReplicationController helps you grasp the fundamental concepts of replica management in Kubernetes.
A ReplicationController ensures that a specified number of Pod replicas are running at any given time. If there are too many Pods, it will kill some. If there are too few, it will start more. Think of it as a supervisor that constantly monitors your Pods and maintains the desired state.
Key responsibilities of ReplicationController:

- Ensuring the specified number of Pod replicas are always running
- Replacing Pods that fail, are deleted, or are terminated
- Creating new Pods from the template when more replicas are needed

ReplicationController continuously monitors the cluster and compares the actual state with the desired state: if the number of running Pods differs from the desired replica count, it creates or deletes Pods until the two match.

Without ReplicationController, if you manually create Pods, a Pod that crashes or is deleted stays gone until you recreate it yourself.

With ReplicationController, failed or deleted Pods are replaced automatically, so the desired number of replicas is always maintained.
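This control loop can be sketched in a few lines of Python. This is an illustrative sketch only (names like `reconcile` are made up, not Kubernetes source code), but it captures the desired-vs-actual comparison described above:

```python
# Hypothetical sketch of the reconciliation loop a ReplicationController runs.
# Illustrative names only - this is not actual Kubernetes source code.

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to converge actual state to desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few Pods: start more
        return {"create": diff, "delete": []}
    if diff < 0:
        # Too many Pods: kill the surplus
        return {"create": 0, "delete": running_pods[diff:]}
    # Actual state already matches desired state
    return {"create": 0, "delete": []}

# A deleted Pod leaves 2 of 3 replicas running, so one must be created:
actions = reconcile(3, ["nginx-rc-def34", "nginx-rc-ghi56"])
print(actions)  # {'create': 1, 'delete': []}
```

The real controller runs this comparison continuously, which is why a deleted Pod is replaced within seconds.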
A ReplicationController consists of three main components: the replica count, the label selector, and the Pod template.

The number of Pod replicas you want to run:

```yaml
spec:
  replicas: 3  # Run 3 Pods
```

Labels used to identify which Pods the ReplicationController manages:
```yaml
spec:
  selector:
    app: nginx
    environment: production
```

The template used to create new Pods:
```yaml
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Let's create a basic ReplicationController.
Create a file named replication-controller.yml:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

This ReplicationController:

- Runs 3 replicas of the nginx Pod
- Manages Pods that carry the label `app: nginx`
- Creates new Pods from the template, running the `nginx:1.25` image on port 80

Apply the configuration:
```bash
sudo kubectl apply -f replication-controller.yml
```

Verify the ReplicationController is created:

```bash
sudo kubectl get replicationcontroller
```

Or use the shorthand:

```bash
sudo kubectl get rc
```

Output:

```
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   3         3         3      30s
```

Check the Pods created by the ReplicationController:

```bash
sudo kubectl get pods
```

Output:

```
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-abc12   1/1     Running   0          30s
nginx-rc-def34   1/1     Running   0          30s
nginx-rc-ghi56   1/1     Running   0          30s
```

Notice that Pod names are automatically generated with the ReplicationController name as a prefix.
To see detailed information about a ReplicationController:

```bash
sudo kubectl describe rc nginx-rc
```

Output:

```
Name:         nginx-rc
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.25
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-abc12
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-def34
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-ghi56
```

Let's demonstrate how ReplicationController automatically replaces failed Pods:
First, list the current Pods:

```bash
sudo kubectl get pods
```

Output:

```
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-abc12   1/1     Running   0          5m
nginx-rc-def34   1/1     Running   0          5m
nginx-rc-ghi56   1/1     Running   0          5m
```

Now delete one Pod and watch what happens:

```bash
sudo kubectl delete pod nginx-rc-abc12
sudo kubectl get pods -w
```

You'll see:

```
NAME             READY   STATUS              RESTARTS   AGE
nginx-rc-abc12   1/1     Terminating         0          5m
nginx-rc-def34   1/1     Running             0          5m
nginx-rc-ghi56   1/1     Running             0          5m
nginx-rc-xyz78   0/1     ContainerCreating   0          1s
nginx-rc-xyz78   1/1     Running             0          3s
```

The ReplicationController immediately creates a new Pod (nginx-rc-xyz78) to maintain the desired count of 3 replicas.
Check the ReplicationController status again:

```bash
sudo kubectl get rc nginx-rc
```

Output:

```
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   3         3         3      6m
```

The replica count remains at 3, as expected.
You can scale ReplicationController up or down in several ways:
Scale up to 5 replicas:

```bash
sudo kubectl scale rc nginx-rc --replicas=5
```

Verify:

```bash
sudo kubectl get pods
```

Output:

```
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-def34   1/1     Running   0          10m
nginx-rc-ghi56   1/1     Running   0          10m
nginx-rc-xyz78   1/1     Running   0          5m
nginx-rc-jkl90   1/1     Running   0          10s
nginx-rc-mno12   1/1     Running   0          10s
```

Scale down to 2 replicas:

```bash
sudo kubectl scale rc nginx-rc --replicas=2
```

ReplicationController will terminate 3 Pods to maintain 2 replicas.
Edit the ReplicationController directly:

```bash
sudo kubectl edit rc nginx-rc
```

Change the replicas field:

```yaml
spec:
  replicas: 4  # Change from 3 to 4
```

Save and exit. Kubernetes will automatically create one more Pod.

Update your replication-controller.yml file:

```yaml
spec:
  replicas: 6  # Changed from 3 to 6
```

Apply the changes:
```bash
sudo kubectl apply -f replication-controller.yml
```

ReplicationController uses label selectors to identify which Pods it manages. Let's explore this:
Create a Pod manually with the same labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: manual-nginx
  labels:
    app: nginx  # Same label as ReplicationController selector
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```

Apply:

```bash
sudo kubectl apply -f manual-pod.yml
```

Check Pods:

```bash
sudo kubectl get pods
```

You'll notice that ReplicationController will delete one of the matching Pods because the total count (including your manual Pod) exceeds the desired replica count.
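This behavior can be sketched simply: the controller counts every Pod matching its selector, whether it created them or not, and trims the surplus. The code below is an illustrative model (the `surplus` function is invented for this sketch, not a real API):

```python
# Sketch: ReplicationController counts ALL Pods matching its selector,
# including manually created ones, then trims the surplus. Illustrative only.

def surplus(desired, matching_pods):
    """Return the Pods to delete so that only `desired` matching Pods remain."""
    extra = len(matching_pods) - desired
    return matching_pods[:extra] if extra > 0 else []

# 3 controller-created Pods plus 1 manual Pod carrying the same `app: nginx` label:
pods = ["nginx-rc-def34", "nginx-rc-ghi56", "nginx-rc-xyz78", "manual-nginx"]
print(surplus(3, pods))  # one Pod is deleted to get back to 3 replicas
```

Which Pod actually gets deleted is up to the controller; the point is that the manual Pod is counted against the desired total.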
Get the Pods managed by ReplicationController:

```bash
sudo kubectl get pods -l app=nginx
```

Remove the label from one Pod:

```bash
sudo kubectl label pod nginx-rc-def34 app-
```

The Pod is no longer managed by ReplicationController, which will create a new Pod to maintain the desired count.

Check Pods:

```bash
sudo kubectl get pods
```

You'll see that the relabeled Pod keeps running unmanaged, and a new Pod has been created to replace it.
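The reason removing the label orphans the Pod is that equality-based selection is just a subset check: a Pod is managed only while every selector key/value pair appears in its labels. A minimal sketch (the `matches` function is an illustrative name, not a Kubernetes API):

```python
# Sketch of equality-based label selection, as used by ReplicationController.
# Illustrative only - not the actual Kubernetes implementation.

def matches(selector, pod_labels):
    """A Pod matches when every selector key/value pair is present in its labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

selector = {"app": "nginx"}

print(matches(selector, {"app": "nginx", "env": "prod"}))  # True: extra labels are fine
print(matches(selector, {"env": "prod"}))                  # False: `app` label removed
```

As soon as `matches` returns False for a Pod, the controller stops counting it toward the desired replica total.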
When you update the Pod template in a ReplicationController, it only affects new Pods. Existing Pods are not updated.
Edit the ReplicationController:

```bash
sudo kubectl edit rc nginx-rc
```

Change the image version:

```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.26  # Changed from 1.25 to 1.26
```

Save and exit.

Check existing Pods:

```bash
sudo kubectl get pods -o wide
```

Existing Pods still use the old image (nginx:1.25).
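The underlying reason: the template is stamped onto a Pod only at creation time and never revisited. A small illustrative model of that behavior (invented names, not Kubernetes code):

```python
# Sketch of why a template change only affects new Pods: the controller
# copies the template at Pod-creation time and never updates existing Pods.
# Illustrative only.

def scale_up(existing_pods, template_image, count):
    """Create `count` new Pods from the current template; existing Pods are untouched."""
    new_pods = [{"name": f"pod-new{i}", "image": template_image} for i in range(count)]
    return existing_pods + new_pods

pods = [{"name": "nginx-rc-def34", "image": "nginx:1.25"}]
pods = scale_up(pods, "nginx:1.26", 1)  # template was edited to 1.26

print([p["image"] for p in pods])  # ['nginx:1.25', 'nginx:1.26'] - old Pod keeps old image
```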
To apply the new template, you need to delete existing Pods:

```bash
# Delete all Pods (ReplicationController will recreate them with the new template)
sudo kubectl delete pods -l app=nginx
```

Or scale down to 0 and back up:

```bash
sudo kubectl scale rc nginx-rc --replicas=0
sudo kubectl scale rc nginx-rc --replicas=3
```

Tip
This limitation is one reason why Deployment is preferred over ReplicationController. Deployments handle rolling updates automatically.
There are two ways to delete a ReplicationController.

A cascading delete (the default) removes both the ReplicationController and all its Pods:

```bash
sudo kubectl delete rc nginx-rc
```

All Pods managed by the ReplicationController will be terminated.

An orphan delete removes only the ReplicationController, leaving the Pods running:

```bash
sudo kubectl delete rc nginx-rc --cascade=orphan
```

The Pods will continue running but are no longer managed by any controller.

Verify:

```bash
sudo kubectl get rc
```

No ReplicationController is listed.

```bash
sudo kubectl get pods
```

The Pods are still running.
Warning
Orphaned Pods won't be automatically replaced if they fail. You'll need to manage them manually or create a new ReplicationController to adopt them.
Create a ReplicationController for a web application:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-app-rc
  labels:
    app: web-app
    tier: frontend
spec:
  replicas: 5
  selector:
    app: web-app
    tier: frontend
  template:
    metadata:
      labels:
        app: web-app
        tier: frontend
    spec:
      containers:
      - name: web-app
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
```

A ReplicationController that passes environment variables to its containers:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-with-env-rc
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx:1.25
        env:
        - name: ENVIRONMENT
          value: "production"
        - name: LOG_LEVEL
          value: "info"
        ports:
        - containerPort: 80
```

A ReplicationController with Liveness and Readiness Probes:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-with-probes-rc
spec:
  replicas: 3
  selector:
    app: nginx-probes
  template:
    metadata:
      labels:
        app: nginx-probes
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
```

While ReplicationController is still supported, ReplicaSet is the modern replacement. Here are the key differences:
| Feature | ReplicationController | ReplicaSet |
|---|---|---|
| Selector | Equality-based only | Set-based selectors |
| API Version | v1 | apps/v1 |
| Status | Legacy | Current standard |
| Used by | Standalone | Used by Deployments |
| Selector flexibility | Limited | More flexible |
Equality-based selector (ReplicationController):

```yaml
selector:
  app: nginx
  tier: frontend
```

Set-based selector (ReplicaSet):

```yaml
selector:
  matchLabels:
    app: nginx
  matchExpressions:
  - key: tier
    operator: In
    values:
    - frontend
    - backend
```

Important

Recommendation: Use ReplicaSet or Deployment instead of ReplicationController for new applications. ReplicationController is maintained for backward compatibility but lacks modern features.
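To make the selector difference concrete, here is a simplified sketch of how a set-based selector with an `In` expression could be evaluated. This is illustrative code only (the real evaluation lives inside Kubernetes itself), covering just the `In` operator from the example above:

```python
# Simplified sketch of set-based selector evaluation (In operator only).
# Illustrative code, not the actual Kubernetes implementation.

def matches_set_selector(match_labels, match_expressions, pod_labels):
    # matchLabels behaves exactly like an equality-based selector
    if any(pod_labels.get(k) != v for k, v in match_labels.items()):
        return False
    # matchExpressions adds set operations; here we only handle `In`
    for expr in match_expressions:
        if expr["operator"] == "In":
            if pod_labels.get(expr["key"]) not in expr["values"]:
                return False
    return True

exprs = [{"key": "tier", "operator": "In", "values": ["frontend", "backend"]}]

print(matches_set_selector({"app": "nginx"}, exprs, {"app": "nginx", "tier": "frontend"}))  # True
print(matches_set_selector({"app": "nginx"}, exprs, {"app": "nginx", "tier": "cache"}))     # False
```

An equality-based selector can only say "tier equals frontend"; the set-based form can say "tier is frontend or backend" in one selector, which is what makes ReplicaSet selectors more flexible.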
The Pod template labels must match the selector.

Wrong:

```yaml
spec:
  selector:
    app: nginx  # Selector
  template:
    metadata:
      labels:
        app: web  # Doesn't match!
```

Correct:

```yaml
spec:
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx  # Matches selector
```

Updating the Pod template doesn't update existing Pods automatically.
Solution: Use Deployment for rolling updates, or manually delete Pods to force recreation.
Without resource limits, Pods can consume all node resources.
Solution: Always set resource requests and limits:

```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
```

ReplicationController is designed for stateless applications.
Solution: Use StatefulSet for stateful applications (databases, etc.).
Without Probes, ReplicationController can't detect unhealthy Pods.
Solution: Always add Liveness and Readiness Probes.
Use descriptive names for ReplicationControllers:

```yaml
metadata:
  name: web-app-frontend-rc  # Clear and descriptive
```

Add labels for better organization:

```yaml
metadata:
  labels:
    app: web-app
    tier: frontend
    environment: production
    version: v1.0
```

Choose a replica count that fits your application's availability and load requirements.
Always set resource requests and limits:

```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
```

Always add Probes:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
```

For new applications, use ReplicaSet or Deployment:
```yaml
# Modern approach - use Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Some useful commands for monitoring ReplicationControllers:

```bash
# List ReplicationControllers
sudo kubectl get rc

# View recent ReplicationController events
sudo kubectl get events --sort-by='.lastTimestamp' | grep ReplicationController

# Watch Pods managed by the ReplicationController
sudo kubectl get pods -l app=nginx -w

# Check resource usage of the managed Pods
sudo kubectl top pods -l app=nginx
```

In episode 11, we've explored ReplicationController in Kubernetes in depth. We've learned what ReplicationController is, how it works, and how to use it to manage Pod replicas.
Key takeaways:

- ReplicationController ensures a specified number of Pod replicas are running at all times
- Failed or deleted Pods are replaced automatically (self-healing)
- Pods are matched through equality-based label selectors
- Template changes only affect newly created Pods, not existing ones
While ReplicationController is still supported, it's considered legacy. Modern Kubernetes applications should use ReplicaSet (which we'll cover in the next episode) or Deployment for better features like rolling updates, rollback capabilities, and more flexible selectors.
Are you getting a clearer understanding of ReplicationController in Kubernetes? In the next episode 12, we'll discuss ReplicaSet, the modern replacement for ReplicationController with enhanced features and better selector support. Keep your learning momentum going and look forward to the next episode!
Note
If you want to continue reading, you can click the Episode 12 thumbnail below