Learning Kubernetes - Episode 11 - Introduction and Explanation of ReplicationController

In this episode, we'll discuss an important concept in Kubernetes called ReplicationController. We'll learn how ReplicationController ensures a specified number of Pod replicas are running at any given time.

Arman Dwi Pangestu
March 14, 2026
8 min read

Introduction

Note

If you want to read the previous episode, you can click the Episode 10 thumbnail below

Episode 10

In the previous episode, we learned about Probes in Kubernetes for ensuring application health and availability. In episode 11, we'll discuss an important concept for managing Pod replicas: ReplicationController.

Note: Here I'll be using a Kubernetes Cluster installed through K3s.

ReplicationController is one of the fundamental controllers in Kubernetes that ensures a specified number of Pod replicas are running at any given time. While ReplicationController has been largely superseded by ReplicaSet and Deployment in modern Kubernetes, understanding it is important because it introduces core concepts that are used throughout Kubernetes.

Important

Important Note: ReplicationController is considered legacy. In modern Kubernetes, you should use ReplicaSet (which we'll cover in the next episode) or Deployment instead. However, understanding ReplicationController helps you grasp the fundamental concepts of replica management in Kubernetes.

What Is ReplicationController?

A ReplicationController ensures that a specified number of Pod replicas are running at any given time. If there are too many Pods, it will kill some. If there are too few, it will start more. Think of it as a supervisor that constantly monitors your Pods and maintains the desired state.

Key responsibilities of ReplicationController:

  • Maintain desired replica count - Ensures the specified number of Pods are always running
  • Self-healing - Automatically replaces Pods that fail or are deleted
  • Scaling - Allows you to scale Pods up or down
  • Load distribution - Replicas are spread across nodes by the Kubernetes scheduler, improving resilience

How ReplicationController Works

ReplicationController continuously monitors the cluster and compares the actual state with the desired state:

  1. Desired State: You specify you want 3 replicas
  2. Actual State: ReplicationController counts running Pods
  3. Reconciliation: If actual ≠ desired, ReplicationController takes action
    • Too few Pods → Create new Pods
    • Too many Pods → Delete excess Pods
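The comparison in this loop can be sketched in a few lines of shell. This is purely illustrative; the real controller runs inside kube-controller-manager and watches the API server, and the `desired` and `current` values here are made up:

```shell
# Illustrative sketch of the reconciliation logic -- not how the real
# controller is implemented, just the comparison it performs.
desired=3
current=2   # pretend we counted 2 running Pods matching the selector

if [ "$current" -lt "$desired" ]; then
    echo "create $((desired - current)) Pod(s)"
elif [ "$current" -gt "$desired" ]; then
    echo "delete $((current - desired)) Pod(s)"
else
    echo "in sync"
fi
```

With these values it reports that one Pod must be created; the controller repeats this check every time the set of matching Pods changes.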

Why Do We Need ReplicationController?

Without ReplicationController, if you manually create Pods:

  • No automatic recovery - If a Pod is deleted or its node fails, nothing recreates it
  • No scaling - You have to manually create/delete Pods
  • No load distribution - You have to manually distribute Pods across nodes
  • Manual management - You have to track and manage each Pod individually

With ReplicationController:

  • Automatic recovery - Failed Pods are automatically replaced
  • Easy scaling - Change replica count and Kubernetes handles the rest
  • High availability - Multiple replicas ensure service continuity
  • Simplified management - Manage multiple Pods as a single unit

ReplicationController Components

A ReplicationController consists of three main components:

1. Replica Count

The number of Pod replicas you want to run:

Kubernetesyml
spec:
    replicas: 3  # Run 3 Pods

2. Pod Selector

Labels used to identify which Pods the ReplicationController manages:

Kubernetesyml
spec:
    selector:
        app: nginx
        environment: production

3. Pod Template

The template used to create new Pods:

Kubernetesyml
spec:
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25

Creating a ReplicationController

Let's create a basic ReplicationController:

Example 1: Basic ReplicationController

Create a file named replication-controller.yml:

Kubernetesreplication-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
    name: nginx-rc
    labels:
        app: nginx
spec:
    replicas: 3
    selector:
        app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
                  ports:
                      - containerPort: 80

This ReplicationController:

  • Creates 3 replicas of nginx Pods
  • Selects Pods with label app: nginx
  • Uses nginx:1.25 image

Apply the configuration:

Kubernetesbash
sudo kubectl apply -f replication-controller.yml

Verify the ReplicationController is created:

Kubernetesbash
sudo kubectl get replicationcontroller

Or use the shorthand:

Kubernetesbash
sudo kubectl get rc

Output:

Kubernetesbash
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   3         3         3       30s

Check the Pods created by the ReplicationController:

Kubernetesbash
sudo kubectl get pods

Output:

Kubernetesbash
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-abc12   1/1     Running   0          30s
nginx-rc-def34   1/1     Running   0          30s
nginx-rc-ghi56   1/1     Running   0          30s

Notice that Pod names are automatically generated with the ReplicationController name as prefix.

Viewing ReplicationController Details

To see detailed information about a ReplicationController:

Kubernetesbash
sudo kubectl describe rc nginx-rc

Output:

Kubernetesbash
Name:         nginx-rc
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.25
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-abc12
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-def34
  Normal  SuccessfulCreate  2m    replication-controller  Created pod: nginx-rc-ghi56

Self-Healing Demonstration

Let's demonstrate how ReplicationController automatically replaces failed Pods:

Step 1: Check Current Pods

Kubernetesbash
sudo kubectl get pods

Output:

Kubernetesbash
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-abc12   1/1     Running   0          5m
nginx-rc-def34   1/1     Running   0          5m
nginx-rc-ghi56   1/1     Running   0          5m

Step 2: Delete a Pod

Kubernetesbash
sudo kubectl delete pod nginx-rc-abc12

Step 3: Watch ReplicationController Create a New Pod

Kubernetesbash
sudo kubectl get pods -w

You'll see:

Kubernetesbash
NAME             READY   STATUS              RESTARTS   AGE
nginx-rc-abc12   1/1     Terminating         0          5m
nginx-rc-def34   1/1     Running             0          5m
nginx-rc-ghi56   1/1     Running             0          5m
nginx-rc-xyz78   0/1     ContainerCreating   0          1s
nginx-rc-xyz78   1/1     Running             0          3s

The ReplicationController immediately creates a new Pod (nginx-rc-xyz78) to maintain the desired count of 3 replicas.

Step 4: Verify Replica Count

Kubernetesbash
sudo kubectl get rc nginx-rc

Output:

Kubernetesbash
NAME       DESIRED   CURRENT   READY   AGE
nginx-rc   3         3         3       6m

The replica count remains at 3, as expected.

Scaling ReplicationController

You can scale ReplicationController up or down in several ways:

Method 1: Using kubectl scale Command

Scale up to 5 replicas:

Kubernetesbash
sudo kubectl scale rc nginx-rc --replicas=5

Verify:

Kubernetesbash
sudo kubectl get pods

Output:

Kubernetesbash
NAME             READY   STATUS    RESTARTS   AGE
nginx-rc-def34   1/1     Running   0          10m
nginx-rc-ghi56   1/1     Running   0          10m
nginx-rc-xyz78   1/1     Running   0          5m
nginx-rc-jkl90   1/1     Running   0          10s
nginx-rc-mno12   1/1     Running   0          10s

Scale down to 2 replicas:

Kubernetesbash
sudo kubectl scale rc nginx-rc --replicas=2

ReplicationController will terminate 3 Pods to maintain 2 replicas.

Method 2: Editing the ReplicationController

Edit the ReplicationController directly:

Kubernetesbash
sudo kubectl edit rc nginx-rc

Change the replicas field:

Kubernetesyml
spec:
    replicas: 4  # Change from 3 to 4

Save and exit. Kubernetes will automatically create one more Pod.

Method 3: Updating the YAML File

Update your replication-controller.yml file:

Kubernetesreplication-controller.yml
spec:
    replicas: 6  # Changed from 3 to 6

Apply the changes:

Kubernetesbash
sudo kubectl apply -f replication-controller.yml

Label Selector Behavior

ReplicationController uses label selectors to identify which Pods it manages. Let's explore this:

Example: Creating a Pod with Matching Labels

Create a Pod manually with the same labels:

Kubernetesmanual-pod.yml
apiVersion: v1
kind: Pod
metadata:
    name: manual-nginx
    labels:
        app: nginx  # Same label as ReplicationController selector
spec:
    containers:
        - name: nginx
          image: nginx:1.25

Apply:

Kubernetesbash
sudo kubectl apply -f manual-pod.yml

Check Pods:

Kubernetesbash
sudo kubectl get pods

You'll notice that the ReplicationController deletes one Pod, because the total number of Pods matching the selector (including your manual Pod) now exceeds the desired replica count. Which Pod is removed is up to the controller; the newest one, which may well be your manual Pod, is a common victim.

Example: Removing Labels from a Pod

Get a Pod managed by ReplicationController:

Kubernetesbash
sudo kubectl get pods -l app=nginx

Remove the label from one Pod:

Kubernetesbash
sudo kubectl label pod nginx-rc-def34 app-

The Pod is no longer managed by ReplicationController. ReplicationController will create a new Pod to maintain the desired count.

Check Pods:

Kubernetesbash
sudo kubectl get pods

You'll see:

  • The Pod whose label was removed is still running (but no longer managed)
  • A new Pod created by the ReplicationController to restore the desired count

Updating Pod Template

When you update the Pod template in a ReplicationController, it only affects new Pods. Existing Pods are not updated.

Example: Updating Image Version

Edit the ReplicationController:

Kubernetesbash
sudo kubectl edit rc nginx-rc

Change the image version:

Kubernetesyml
spec:
    template:
        spec:
            containers:
                - name: nginx
                  image: nginx:1.26  # Changed from 1.25 to 1.26

Save and exit.

Check existing Pods:

Kubernetesbash
sudo kubectl get pods -o wide

Existing Pods still use the old image (nginx:1.25).

To apply the new template, you need to delete existing Pods:

Kubernetesbash
# Delete all Pods (ReplicationController will recreate them with new template)
sudo kubectl delete pods -l app=nginx

Or scale down to 0 and back up (note that this briefly takes all replicas offline):

Kubernetesbash
sudo kubectl scale rc nginx-rc --replicas=0
sudo kubectl scale rc nginx-rc --replicas=3

Tip

This limitation is one reason why Deployment is preferred over ReplicationController. Deployments handle rolling updates automatically.

Deleting ReplicationController

There are two ways to delete a ReplicationController:

Method 1: Delete ReplicationController and Pods

This deletes both the ReplicationController and all its Pods:

Kubernetesbash
sudo kubectl delete rc nginx-rc

All Pods managed by the ReplicationController will be terminated.

Method 2: Delete ReplicationController but Keep Pods

This deletes only the ReplicationController, leaving Pods running:

Kubernetesbash
sudo kubectl delete rc nginx-rc --cascade=orphan

The Pods will continue running but are no longer managed by any controller.

Verify:

Kubernetesbash
sudo kubectl get rc

No ReplicationController.

Kubernetesbash
sudo kubectl get pods

Pods are still running.

Warning

Orphaned Pods won't be automatically replaced if they fail. You'll need to manage them manually or create a new ReplicationController to adopt them.
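If you later want the orphaned Pods managed again, recreating a ReplicationController with the same selector adopts them: the controller counts existing Pods matching `app: nginx` toward its replica total instead of creating duplicates. A sketch, reusing the earlier manifest:

```yml
apiVersion: v1
kind: ReplicationController
metadata:
    name: nginx-rc
spec:
    replicas: 3
    selector:
        app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
```

After applying this, `kubectl get rc` should show the orphaned Pods counted as CURRENT without any new Pods being created (assuming three of them are still running).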

Practical Examples

Example 1: Web Application with Multiple Replicas

Create a ReplicationController for a web application:

Kubernetesweb-app-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
    name: web-app-rc
    labels:
        app: web-app
        tier: frontend
spec:
    replicas: 5
    selector:
        app: web-app
        tier: frontend
    template:
        metadata:
            labels:
                app: web-app
                tier: frontend
        spec:
            containers:
                - name: web-app
                  image: nginx:1.25
                  ports:
                      - containerPort: 80
                  resources:
                      requests:
                          memory: "128Mi"
                          cpu: "100m"
                      limits:
                          memory: "256Mi"
                          cpu: "200m"

Example 2: ReplicationController with Environment Variables

Kubernetesapp-with-env-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
    name: app-with-env-rc
spec:
    replicas: 3
    selector:
        app: myapp
    template:
        metadata:
            labels:
                app: myapp
        spec:
            containers:
                - name: app
                  image: nginx:1.25
                  env:
                      - name: ENVIRONMENT
                        value: "production"
                      - name: LOG_LEVEL
                        value: "info"
                  ports:
                      - containerPort: 80

Example 3: ReplicationController with Probes

Kubernetesrc-with-probes.yml
apiVersion: v1
kind: ReplicationController
metadata:
    name: nginx-with-probes-rc
spec:
    replicas: 3
    selector:
        app: nginx-probes
    template:
        metadata:
            labels:
                app: nginx-probes
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
                  ports:
                      - containerPort: 80
                  livenessProbe:
                      httpGet:
                          path: /
                          port: 80
                      initialDelaySeconds: 3
                      periodSeconds: 10
                  readinessProbe:
                      httpGet:
                          path: /
                          port: 80
                      initialDelaySeconds: 5
                      periodSeconds: 5

ReplicationController vs ReplicaSet

While ReplicationController is still supported, ReplicaSet is the modern replacement. Here are the key differences:

Feature              | ReplicationController | ReplicaSet
---------------------|-----------------------|---------------------
Selector             | Equality-based only   | Set-based selectors
API Version          | v1                    | apps/v1
Status               | Legacy                | Current standard
Used by              | Standalone            | Used by Deployments
Selector flexibility | Limited               | More flexible

Equality-based selector (ReplicationController):

Kubernetesyml
selector:
    app: nginx
    tier: frontend

Set-based selector (ReplicaSet):

Kubernetesyml
selector:
    matchLabels:
        app: nginx
    matchExpressions:
        - key: tier
          operator: In
          values:
              - frontend
              - backend
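For context, here is that set-based selector inside a complete, minimal ReplicaSet manifest. Treat this as a preview only, since ReplicaSet is covered properly in the next episode; note that the template labels must satisfy the selector:

```yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: nginx-rs
spec:
    replicas: 3
    selector:
        matchLabels:
            app: nginx
        matchExpressions:
            - key: tier
              operator: In
              values:
                  - frontend
                  - backend
    template:
        metadata:
            labels:
                app: nginx
                tier: frontend  # satisfies both matchLabels and matchExpressions
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
```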

Important

Recommendation: Use ReplicaSet or Deployment instead of ReplicationController for new applications. ReplicationController is maintained for backward compatibility but lacks modern features.

Common Mistakes and Pitfalls

Mistake 1: Mismatched Labels

The Pod template labels must match the selector:

Wrong:

Kubernetesyml
spec:
    selector:
        app: nginx  # Selector
    template:
        metadata:
            labels:
                app: web  # Doesn't match!

Correct:

Kubernetesyml
spec:
    selector:
        app: nginx
    template:
        metadata:
            labels:
                app: nginx  # Matches selector

Mistake 2: Expecting Automatic Updates

Updating the Pod template doesn't update existing Pods automatically.

Solution: Use Deployment for rolling updates, or manually delete Pods to force recreation.

Mistake 3: Not Setting Resource Limits

Without resource limits, Pods can consume all node resources.

Solution: Always set resource requests and limits:

Kubernetesyml
resources:
    requests:
        memory: "128Mi"
        cpu: "100m"
    limits:
        memory: "256Mi"
        cpu: "200m"

Mistake 4: Using ReplicationController for Stateful Applications

ReplicationController is designed for stateless applications.

Solution: Use StatefulSet for stateful applications (databases, etc.).

Mistake 5: Not Using Probes

Without Probes, Kubernetes can't tell that a container is unhealthy, so a broken Pod can keep counting as Running.

Solution: Always add Liveness and Readiness Probes.

Best Practices

Use Meaningful Names

Use descriptive names for ReplicationControllers:

Kubernetesyml
metadata:
    name: web-app-frontend-rc  # Clear and descriptive

Add Labels for Organization

Add labels for better organization:

Kubernetesyml
metadata:
    labels:
        app: web-app
        tier: frontend
        environment: production
        version: v1.0

Set Appropriate Replica Counts

Consider your application's needs:

  • High availability: At least 3 replicas
  • Development: 1-2 replicas
  • Production: 3+ replicas across multiple nodes

Use Resource Limits

Always set resource requests and limits:

Kubernetesyml
resources:
    requests:
        memory: "128Mi"
        cpu: "100m"
    limits:
        memory: "256Mi"
        cpu: "200m"

Implement Health Checks

Always add Probes:

Kubernetesyml
livenessProbe:
    httpGet:
        path: /healthz
        port: 8080
readinessProbe:
    httpGet:
        path: /ready
        port: 8080

Consider Migration to ReplicaSet/Deployment

For new applications, use ReplicaSet or Deployment:

Kubernetesyml
# Modern approach - use Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 3
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25

Monitoring ReplicationController

Check ReplicationController Status

Kubernetesbash
sudo kubectl get rc

Watch ReplicationController Events

Kubernetesbash
sudo kubectl get events --sort-by='.lastTimestamp' | grep -i replicationcontroller

Monitor Pod Status

Kubernetesbash
sudo kubectl get pods -l app=nginx -w

Check Resource Usage

Kubernetesbash
sudo kubectl top pods -l app=nginx

Conclusion

In episode 11, we've explored ReplicationController in Kubernetes in depth. We've learned what ReplicationController is, how it works, and how to use it to manage Pod replicas.

Key takeaways:

  • ReplicationController ensures a specified number of Pod replicas are running
  • Provides self-healing by automatically replacing failed Pods
  • Supports scaling up and down
  • Uses label selectors to identify managed Pods
  • Legacy technology - use ReplicaSet or Deployment for new applications
  • Pod template updates don't affect existing Pods
  • Can delete ReplicationController while keeping Pods (orphan mode)

While ReplicationController is still supported, it's considered legacy. Modern Kubernetes applications should use ReplicaSet (which we'll cover in the next episode) or Deployment for better features like rolling updates, rollback capabilities, and more flexible selectors.

Are you getting a clearer understanding of ReplicationController in Kubernetes? In the next episode, episode 12, we'll discuss ReplicaSet, the modern replacement for ReplicationController with enhanced features and better selector support. Keep your learning momentum going and look forward to the next episode!

Note

If you want to continue reading, you can click the Episode 12 thumbnail below

Episode 12
