Learning Kubernetes - Episode 3 - Installing Kubernetes Cluster (Master & Worker Nodes)

In this episode, we'll start hands-on practice by installing a Kubernetes Cluster using various methods: Minikube, Kind, K3s, and manual K8s installation using kubeadm.

Arman Dwi Pangestu · May 11, 2025

Introduction

After discussing Kubernetes concepts and architecture in the previous episode, in episode 3 we'll start hands-on practice by installing a Kubernetes Cluster for both Master Node and Worker Node. To install a Kubernetes Cluster for development, testing, or learning purposes, there are several methods using the following tools:

  1. Minikube
  2. Kind
  3. K3s
  4. K8s

To help you choose which tool to use, here's an explanation of each tool so you can select the best one for your needs.

Overview of Minikube, Kind, K3s, and K8s

As container orchestration continues to evolve, developers have many tools available for developing Kubernetes locally. Among these tools, Minikube, Kind, K3s, and K8s stand out as popular choices for developers who want to test, develop, and run Kubernetes applications locally.

Minikube

Minikube is a widely adopted tool designed to run a Kubernetes Cluster on various operating systems, including macOS, Linux, and Windows. Minikube provides a simple way for developers to run Kubernetes locally and is ideal for testing applications in a controlled environment. Minikube supports several hypervisors like VirtualBox, VMware, and HyperKit, making it flexible for various infrastructures. Additionally, Minikube offers features like the ability to enable or disable certain Kubernetes components, allowing developers to customize their environment to match production settings. This flexibility is crucial for debugging and ensuring applications behave as expected before deployment.

Kind

Kind, short for Kubernetes in Docker, is another option that allows users to create a Kubernetes Cluster using Docker containers as nodes. This approach follows containerized principles, enabling quick cluster setup and teardown. Kind is very useful for testing Kubernetes itself and is typically used by developers in CI/CD pipelines. Its ability to run clusters in Docker means developers can easily replicate their production environment in a lightweight way, making it an excellent choice for continuous integration workflows. Additionally, Kind supports multi-node clusters, which can be beneficial for simulating more complex scenarios that developers might encounter in real-world applications.

K3s

K3s, on the other hand, is a lightweight Kubernetes distribution developed by Rancher Labs. This distribution aims to provide a simplified version of Kubernetes, making it suitable for environments with limited resources. K3s is very useful for edge computing, IoT applications, and scenarios where a full Kubernetes installation cannot be deployed due to hardware limitations. With a binary size under 100 MB, K3s is designed to run on low-power devices like Raspberry Pi and can be deployed quickly and easily. Additionally, K3s comes with built-in Helm support, making it easy to manage applications and services within the cluster, and automatically handles common tasks like certificate and network management, which can significantly reduce operational costs for users.

K8s (Kubernetes)

Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform used for automating deployment, scaling, and management of containerized applications. Unlike Minikube, Kind, and K3s, Kubernetes is more commonly used in production environments, though it can also be used for development and testing.

Comparison of Main Features: Minikube, Kind, K3s, and K8s

When comparing the main features of Minikube, Kind, K3s, and K8s, it's important to consider several factors that determine their usefulness and performance:

Resource Requirements

Minikube generally requires more resources because it runs a full Kubernetes Cluster in a Virtual Machine (depending on the operating system; if Minikube runs on Linux, it can run directly in a container). Kind, while lighter than Minikube, still requires Docker resources. K3s is optimized for minimal resource consumption, while K8s itself is intended for production environments and will consume significantly more resources compared to other tools.

Installation Complexity

Minikube offers a very easy installation process, although setting up the required Hypervisor can be time-consuming. Kind has simpler setup that only requires Docker. K3s can often be installed with just a few commands, while K8s requires fairly complex installation because each component must be installed manually, such as kubectl, kubeadm, CRI, CNI, and so on.

Networking and Storage

Minikube provides a full-featured networking stack, including LoadBalancer support. Kind's networking depends on Docker's networking capabilities. K3s includes built-in options for lightweight networking and storage management, while K8s supports all features but still requires more complex setup.

Extensibility

Minikube supports add-ons that can easily enhance functionality. Kind allows users to customize clusters through configuration files like YAML files, and K3s is compatible with Kubernetes, allowing the use of existing Kubernetes extensions and APIs.

Another important aspect to consider is the use case scenario for each tool. Minikube is very useful for developers who want to test applications in an environment very similar to a production Kubernetes Cluster. This makes it ideal for those who need to validate their applications against the full Kubernetes API. On the other hand, Kind shines in CI/CD environments where fast cluster spin-up and tear-down are essential for automated testing. Its ability to create clusters in Docker containers makes it a favorite among developers who want to integrate Kubernetes testing into their existing workflows.

Additionally, community support and documentation around these tools can significantly influence their adoption. Minikube has a strong community and extensive documentation, making it easy for newcomers to find resources and solve problems. Kind, while still relatively new, has benefited from Kubernetes community support, ensuring its documentation is continuously updated. K3s, developed by Rancher Labs, also has strong community engagement and offers comprehensive resources, especially for those interested in deploying lightweight Kubernetes Clusters in edge computing or IoT device scenarios.

Performance Metrics: Which Tool Performs Best?

Evaluating performance across Minikube, Kind, K3s, and K8s requires examining various metrics, such as startup time, resource utilization, and operational stability.

Startup Time

Kind is often the fastest to start because it directly uses Docker containers. Minikube can take longer to bootstrap due to the overhead of starting a virtual machine, while K3s offers fast deployment with minimal configuration.

Resource Utilization

K3s excels in this category because it's designed to run in resource-limited settings. Minikube tends to consume more RAM and CPU, while Kind's Docker container-based approach can be more efficient than traditional virtual machine approaches.

Operational Stability

All four have proven stable in various environments. However, K3s includes a lightweight built-in etcd alternative that can improve reliability and performance.

Use Cases

When should you choose Minikube, Kind, K3s, or K8s? Understanding scenarios where each tool excels can significantly influence your decision in selecting the right tool for local Kubernetes development.

Minikube Usage

Best suited for developers seeking an out-of-the-box Kubernetes experience with a complete feature set. Ideal for exploring Kubernetes capabilities, testing full-featured applications, or working with various add-ons.

Kind Usage

Excellent for continuous integration environments that prioritize speed and efficiency. Kind is favored by developers who need to quickly spin up clusters for testing purposes.

K3s Usage

The right solution for developers targeting edge computing, IoT devices, or resource-constrained applications. Its lightweight nature makes it the preferred choice when Kubernetes must run smoothly on less powerful hardware.

K8s (Kubernetes) Usage

Highly suitable for production environments, though it can also be used for development or testing. It supports the full Kubernetes feature set, but requires more complex installation and configuration and consumes more resources than the other tools.

Installation and Setup

Once you've chosen the tool that best fits your needs, the next step is installation and setup for each Kubernetes Cluster tool.

Prerequisites

Before starting installation and setup, we need a few things in place:

  1. Virtual Machine / OS
  2. Docker / Containerd (container runtime)
  3. Local domain pointing (optional)

Note: Here I've prepared a Virtual Machine on Proxmox with Ubuntu Server 24.04 LTS, installed Docker / Containerd, and set up local domain pointing on the local Name Server.

Note: For those using the same operating system, Ubuntu Server 24.04 LTS, and want to install Docker, you can use the following commands:

  1. Add GPG Key
bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  2. Add Docker Repository
bash
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  3. Update Repository & Install Docker Community Edition
bash
sudo apt update && apt-cache policy docker-ce && sudo apt install docker-ce
  4. Add current user to docker group so you don't need to use sudo
bash
sudo usermod -aG docker ${USER} && su - ${USER}
  5. Install Docker utilities
bash
sudo apt update && sudo apt install docker-ce-cli containerd.io docker-compose-plugin docker-compose

After all the prerequisites above are met, we can proceed to install the tools discussed earlier to create a Kubernetes Cluster.

Installing Minikube

To install a Kubernetes Cluster using Minikube, it's quite straightforward. Just run the following commands:

Install Minikube Binary

The first step to install Minikube is to download and install the binary file. To do this, run the following command:

bash
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

Running Minikube

After the binary is successfully installed, the next step is to run Minikube. To run it, you can use the following command:

Note: The following command will create a Kubernetes Cluster, and Minikube will download required dependencies like Kubernetes, CNI (Container Networking Interface), and so on. The default Kubernetes Cluster created by Minikube is 1 node, where the Control Plane (Master Node) and Data Plane (Worker Node) will be one component on the same node.

bash
minikube start

To create a Kubernetes Cluster with more than 1 node in Minikube, you can run the following command:

bash
minikube start --nodes [total_node] -p [cluster_name]

However, running a Kubernetes Cluster with more than 1 node will certainly consume more resources compared to just 1 node.

If you want to run Minikube with its network directly connected to the Host, you can run Minikube using the none driver:

bash
minikube start --driver=none

However, the above command requires additional manual setup for dependencies like CRI and CNI. For more information, you can read this issue: #33

Note: Here, since I've allocated considerable hardware resources to the Virtual Machine, I'll run the Minikube Kubernetes Cluster with a total of 3 nodes: 1 Control Plane (Master Node) and 2 Data Plane (Worker Nodes). I'm running it using the following command:

bash
minikube start --nodes 3 -p minikube

If the above command runs successfully, the output will look like this:

bash
😄  minikube v1.35.0 on Ubuntu 24.04 (kvm/amd64)
  Automatically selected the docker driver. Other choices: ssh, none
📌  Using Docker driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.46 ...
💾  Downloading Kubernetes v1.32.0 preload ...
    > preloaded-images-k8s-v18-v1...:  333.57 MiB / 333.57 MiB  100.00% 1.72 Mi
    > gcr.io/k8s-minikube/kicbase...:  500.31 MiB / 500.31 MiB  100.00% 1.67 Mi
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.32.0 on Docker 27.4.1 ...
 Generating certificates and keys ...
 Booting up control plane ...
 Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
 Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
 
👍  Starting "minikube-m02" worker node in "minikube" cluster
🚜  Pulling base image v0.0.46 ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
 NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.32.0 on Docker 27.4.1 ...
 env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...
 
👍  Starting "minikube-m03" worker node in "minikube" cluster
🚜  Pulling base image v0.0.46 ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
 NO_PROXY=192.168.49.2,192.168.49.3
🐳  Preparing Kubernetes v1.32.0 on Docker 27.4.1 ...
 env NO_PROXY=192.168.49.2
 env NO_PROXY=192.168.49.2,192.168.49.3
🔎  Verifying Kubernetes components...
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

After the Kubernetes Cluster is successfully created, you can add the following alias to your ~/.bashrc or ~/.zshrc file to make kubectl commands easier:

bash
alias kubectl="minikube kubectl --"

To verify that the Kubernetes Cluster was successfully created, you can run the following command to check which nodes are registered in the cluster:

bash
kubectl get nodes -o wide

If the above command runs successfully, you'll see how many nodes are in the cluster and other information like STATUS, ROLE, VERSION, and so on:

bash
    > kubectl.sha256:  64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    > kubectl:  54.67 MiB / 54.67 MiB [--------------] 100.00% 2.70 MiB p/s 20s
NAME           STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube       Ready    control-plane   8m8s    v1.32.0   192.168.49.2   <none>        Ubuntu 22.04.5 LTS   6.8.0-51-generic   docker://27.4.1
minikube-m02   Ready    <none>          7m42s   v1.32.0   192.168.49.3   <none>        Ubuntu 22.04.5 LTS   6.8.0-51-generic   docker://27.4.1
minikube-m03   Ready    <none>          7m29s   v1.32.0   192.168.49.4   <none>        Ubuntu 22.04.5 LTS   6.8.0-51-generic   docker://27.4.1

Testing Nginx Deployment on Minikube

To further verify that the Kubernetes Cluster installation was successful, we can try deploying a default Nginx application.

Note: If you don't want to write the following YAML configuration manually, you can apply it directly from the GitHub repository I created:

bash
kubectl apply -f https://raw.githubusercontent.com/armandwipangestu/belajar-k8s/refs/heads/main/episode-3/example/nginx-deployment.yml

Otherwise, create the YAML file using the following command:

bash
nvim nginx-deployment.yml

Then fill in the configuration like this:

nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:latest
                  ports:
                      - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
    name: nginx-service
spec:
    type: NodePort
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080

After that, deploy the YAML file configuration to the Kubernetes Cluster using the following command:

bash
kubectl apply -f nginx-deployment.yml

Next, check if the pod and service are running successfully using the following commands:

bash
kubectl get pods
kubectl get svc

If the pod and service are running successfully, the output will look like this:

bash
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-96b9d695-546qw   1/1     Running   0          101s
 
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        20m
nginx-service   NodePort    10.106.176.104   <none>        80:30080/TCP   99s

To further verify that Nginx is running and accessible, you can check using the following command:

Note: Adjust the IP address of the node and the port of the service being used

bash
curl http://192.168.49.2:30080

If the above command runs successfully, the result will look like this:

html
<html>
    <head>
        <title>Welcome to nginx!</title>
        <style>
            html {
                color-scheme: light dark;
            }
            body {
                width: 35em;
                margin: 0 auto;
                font-family: Tahoma, Verdana, Arial, sans-serif;
            }
        </style>
    </head>
    <body>
        <h1>Welcome to nginx!</h1>
        <p>
            If you see this page, the nginx web server is successfully installed
            and working. Further configuration is required.
        </p>
 
        <p>
            For online documentation and support please refer to
            <a href="http://nginx.org/">nginx.org</a>. Commercial support is
            available at <a href="http://nginx.com/">nginx.com</a>.
        </p>
 
        <p><em>Thank you for using nginx.</em></p>
    </body>
</html>

Installing Kind

To install a Kubernetes Cluster using Kind, it's similar to Minikube and quite straightforward. Just run the following commands:

Install Kind Binary

The first step to install Kind is to download and install the binary file. To do this, run the following command:

bash
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Running Kind

After the binary is successfully installed, the next step is to run Kind. To run it, you can use the following command:

Note: The following command will create a Kubernetes Cluster, and Kind will download required dependencies like Kubernetes images, CNI (Container Networking Interface), and so on. The default Kubernetes Cluster created by Kind is 1 node, where the Control Plane (Master Node) and Data Plane (Worker Node) will be one component on the same node.

bash
kind create cluster

To create a Kubernetes Cluster with more than 1 node in Kind, you can define a YAML file first like this:

yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
    - role: control-plane
    - role: worker
    - role: worker

Then run the following command:

bash
kind create cluster --name [cluster_name] --config [config_file.yml]

However, running a Kubernetes Cluster with more than 1 node will certainly consume more resources compared to just 1 node.
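One Kind-specific detail worth knowing: NodePort services inside a Kind cluster are only reachable through the node containers' Docker IPs. If you want to reach a NodePort from the host as localhost, Kind supports extraPortMappings in the cluster config, which must be declared at cluster creation time. A sketch (the 30080 value matches the Nginx Service used later in this episode):

```yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
    - role: control-plane
      # Forward host port 30080 to the same port on this node container,
      # so `curl http://localhost:30080` reaches the NodePort service.
      extraPortMappings:
          - containerPort: 30080
            hostPort: 30080
            protocol: TCP
    - role: worker
    - role: worker
```

Port mappings cannot be added to a running cluster; if you need one later, recreate the cluster with this config.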

Note: Here, since I've allocated considerable hardware resources to the Virtual Machine, I'll run the Kind Kubernetes Cluster with a total of 3 nodes: 1 Control Plane (Master Node) and 2 Data Plane (Worker Nodes). I'm running it using the following command:

bash
kind create cluster --name dev-cluster --config kind-config-cluster.yml

If the above command runs successfully, the output will look like this:

bash
Creating cluster "dev-cluster" ...
 Ensuring node image (kindest/node:v1.32.2) 🖼
 Preparing nodes 📦 📦 📦
 Writing configuration 📜
 Starting control-plane 🕹️
 Installing CNI 🔌
 Installing StorageClass 💾
 Joining worker nodes 🚜
Set kubectl context to "kind-dev-cluster"
You can now use your cluster with:
 
kubectl cluster-info --context kind-dev-cluster
 
Have a nice day! 👋

Note: Since interacting with a Kubernetes Cluster requires the kubectl binary, we need to manually install it.

  1. Download kubectl binary
bash
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Change execute mode and move to binary PATH
bash
chmod +x kubectl && sudo mv kubectl /usr/local/bin

To verify that the Kubernetes Cluster was successfully created, you can run the following command to check which nodes are registered in the cluster:

bash
kubectl get nodes -o wide

If the above command runs successfully, you'll see how many nodes are in the cluster and other information like STATUS, ROLE, VERSION, and so on:

bash
NAME                        STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
dev-cluster-control-plane   Ready    control-plane   49s   v1.32.2   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-51-generic   containerd://2.0.2
dev-cluster-worker          Ready    <none>          41s   v1.32.2   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-51-generic   containerd://2.0.2
dev-cluster-worker2         Ready    <none>          40s   v1.32.2   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-51-generic   containerd://2.0.2

Testing Nginx Deployment on Kind

To further verify that the Kubernetes Cluster installation was successful, we can try deploying a default Nginx application.

Note: If you don't want to write the following YAML configuration manually, you can apply it directly from the GitHub repository I created:

bash
kubectl apply -f https://raw.githubusercontent.com/armandwipangestu/belajar-k8s/refs/heads/main/episode-3/example/nginx-deployment.yml

Otherwise, create the YAML file using the following command:

bash
nvim nginx-deployment.yml

Then fill in the configuration like this:

nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:latest
                  ports:
                      - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
    name: nginx-service
spec:
    type: NodePort
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080

After that, deploy the YAML file configuration to the Kubernetes Cluster using the following command:

bash
kubectl apply -f nginx-deployment.yml

Next, check if the pod and service are running successfully using the following commands:

bash
kubectl get pods
kubectl get svc

If the pod and service are running successfully, the output will look like this:

bash
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-96b9d695-dp2hw   1/1     Running   0          47s
 
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        2m45s
nginx-service   NodePort    10.96.169.192   <none>        80:30080/TCP   50s

To further verify that Nginx is running and accessible, you can check using the following command:

Note: Adjust the IP address of the node and the port of the service being used

bash
curl http://172.18.0.4:30080

If the above command runs successfully, the result will look like this:

html
<html>
    <head>
        <title>Welcome to nginx!</title>
        <style>
            html {
                color-scheme: light dark;
            }
            body {
                width: 35em;
                margin: 0 auto;
                font-family: Tahoma, Verdana, Arial, sans-serif;
            }
        </style>
    </head>
    <body>
        <h1>Welcome to nginx!</h1>
        <p>
            If you see this page, the nginx web server is successfully installed
            and working. Further configuration is required.
        </p>
 
        <p>
            For online documentation and support please refer to
            <a href="http://nginx.org/">nginx.org</a>. Commercial support is
            available at <a href="http://nginx.com/">nginx.com</a>.
        </p>
 
        <p><em>Thank you for using nginx.</em></p>
    </body>
</html>

Installing K3s

To install a Kubernetes Cluster using K3s, just run the following commands:

Fetch and Execute K3s Installer Script

To install K3s, we can directly fetch the installer script and execute it using the following command:

bash
curl -sfL https://get.k3s.io | sh -

The above command fetches and runs the K3s installer script, which downloads the k3s binary and sets up symlinks for kubectl, crictl, and so on. If the command runs successfully, the output will look like this:

bash
[INFO]  Finding release for channel stable
[INFO]  Using v1.32.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.32.4+k3s1/sha256sum-amd64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service
[INFO]  No change detected so skipping service start

To verify that K3s was successfully installed, we can run the following command:

bash
sudo kubectl get nodes -o wide

The result will look like this:

bash
NAME    STATUS   ROLES                  AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k3s-1   Ready    control-plane,master   103s   v1.32.4+k3s1   20.20.20.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://2.0.4-k3s2

The default Kubernetes Cluster created by K3s is 1 node, where the Control Plane (Master Node) and Data Plane (Worker Node) will be one component on the same node.

To create a Kubernetes Cluster with more than 1 node in K3s, we can join agent nodes (or worker nodes) to the server node (master node). To join them, run the following commands:

  1. Get the token from the server node

Note: Run the following command on the server node (master node)

bash
sudo cat /var/lib/rancher/k3s/server/node-token
  2. Register or join the agent node to the cluster using the following command:

Note: Run the following command on the agent node (worker node) that wants to join the cluster. Replace [server_node] and [server_token] accordingly.

bash
curl -sfL https://get.k3s.io | K3S_URL=https://[server_node]:6443 K3S_TOKEN=[server_token] sh -

For example:

bash
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-1.home.internal:6443 K3S_TOKEN=K10b38a2664587403a2a91c5e62db5e8bd446be0676d83d41faa1625dfb8f4ffd98::server:be12c62352f3e34c487ce809072b87a6 sh -
  3. Verify that the agent node has successfully joined the cluster

Note: Run the following kubectl command on the server node (master node).

bash
sudo kubectl get nodes -o wide

If the agent node successfully joins, the output will look like this:

bash
NAME    STATUS   ROLES                  AGE     VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k3s-1   Ready    control-plane,master   8m17s   v1.32.4+k3s1   20.20.20.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://2.0.4-k3s2
k3s-4   Ready    <none>                 9s      v1.32.4+k3s1   20.20.20.14   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://2.0.4-k3s2
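The join step is simply the installer script re-run with two environment variables: K3S_URL pointing at the server's API port (6443) and K3S_TOKEN taken from the server's node-token file. A small sketch that assembles the command from placeholders (the server address and token below are hypothetical example values; substitute your own):

```shell
# Hypothetical placeholder values - substitute your own
K3S_SERVER="k3s-1.home.internal"   # server (master) node address
K3S_TOKEN="K10aaaa::server:bbbb"   # contents of /var/lib/rancher/k3s/server/node-token
# Assemble the command that each agent (worker) node must run
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${K3S_SERVER}:6443 K3S_TOKEN=${K3S_TOKEN} sh -"
echo "${JOIN_CMD}"
```

Port 6443 is the Kubernetes API server port that K3s exposes on the server node, so the agent must be able to reach it over the network.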

Testing Nginx Deployment on K3s

To further verify that the Kubernetes Cluster installation was successful, we can try deploying a default Nginx application.

Note: If you don't want to write the following YAML configuration manually, you can apply it directly from the GitHub repository I created:

bash
sudo kubectl apply -f https://raw.githubusercontent.com/armandwipangestu/belajar-k8s/refs/heads/main/episode-3/example/nginx-deployment.yml

Otherwise, create the YAML file using the following command:

bash
nvim nginx-deployment.yml

Then fill in the configuration like this:

nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:latest
                  ports:
                      - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
    name: nginx-service
spec:
    type: NodePort
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080

After that, deploy the YAML file configuration to the Kubernetes Cluster using the following command:

bash
sudo kubectl apply -f nginx-deployment.yml

Next, check if the pod and service are running successfully using the following commands:

bash
sudo kubectl get pods
sudo kubectl get svc

If the pod and service are running successfully, the output will look like this:

bash
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-96b9d695-9wzxn   1/1     Running   0          116s
 
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.43.0.1      <none>        443/TCP        12m
nginx-service   NodePort    10.43.159.48   <none>        80:30080/TCP   2m6s

To further verify that Nginx is running and accessible, you can check using the following command:

Note: Adjust the IP address of the node and the port of the service being used

bash
curl http://20.20.20.11:30080

If the above command runs successfully, the result will look like this:

Note: Since K3s runs on the host network, we can directly access the IP 20.20.20.11 from any computer on the same network, like my laptop below.

html
<html>
    <head>
        <title>Welcome to nginx!</title>
        <style>
            html {
                color-scheme: light dark;
            }
            body {
                width: 35em;
                margin: 0 auto;
                font-family: Tahoma, Verdana, Arial, sans-serif;
            }
        </style>
    </head>
    <body>
        <h1>Welcome to nginx!</h1>
        <p>
            If you see this page, the nginx web server is successfully installed
            and working. Further configuration is required.
        </p>
 
        <p>
            For online documentation and support please refer to
            <a href="http://nginx.org/">nginx.org</a>. Commercial support is
            available at <a href="http://nginx.com/">nginx.com</a>.
        </p>
 
        <p><em>Thank you for using nginx.</em></p>
    </body>
</html>

Installing K8s

Installing a Kubernetes Cluster with K8s is a bit different from the previous tools: everything is done manually, from adding the package repository to installing the dependencies (kubeadm, kubelet, kubectl), a container runtime (CRI), and a network plugin (CNI).

Adding K8s Repository

The first step in installing a Kubernetes Cluster with K8s is to add the official Kubernetes package repository. To add it, run the following commands:

bash
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update

Install kubeadm, kubelet, kubectl, and cni

After the Kubernetes repository is successfully added, the next step is to install the required components using the following command:

bash
sudo apt install kubeadm kubelet kubectl kubernetes-cni -y
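
The official kubeadm install guide also recommends holding these packages, so an unattended `apt upgrade` can't move the cluster to an unexpected version:

```shell
# Pin kubeadm, kubelet, and kubectl at their installed versions
sudo apt-mark hold kubeadm kubelet kubectl
```

You can release the hold later with `sudo apt-mark unhold kubeadm kubelet kubectl` when you intentionally upgrade the cluster.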

Network Configuration

Next, configure the network settings on the host OS so the Kubernetes Cluster can run normally. Two kernel settings are required:

  1. Allow iptables to filter bridged traffic (used for NetworkPolicy and inter-Pod communication).
  2. Enable IP packet forwarding between interfaces (routing from Pods to the internet and Pod to Pod).

To enable them, run the following commands:

bash
# Check the current state (module loaded? sysctl values?)
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward

# Load the required kernel modules now
sudo modprobe overlay
sudo modprobe br_netfilter

# Load them automatically on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Persist the sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the settings without rebooting, then verify
sudo sysctl --system
sysctl net.bridge.bridge-nf-call-iptables
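
One more host prerequisite worth handling here: kubeadm's preflight checks expect swap to be disabled (the kubelet does not run with swap on by default). A common sketch, per the kubeadm installation docs (adjust the fstab edit to your system):

```shell
# Turn swap off for the current boot
sudo swapoff -a

# Keep it off across reboots by commenting out swap entries in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```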

Generate Default Containerd Config

After configuring the network, generate the default configuration for Containerd. To do this, run the following command:

bash
# backup configuration
sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.bak
 
# generate default configuration
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

Next, enable the SystemdCgroup feature from Containerd using the following command:

bash
cat /etc/containerd/config.toml | grep SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

After that, restart the Containerd service using the following command:

bash
sudo systemctl restart containerd
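
To confirm containerd came back up with the new configuration, a quick sanity check (`ctr` ships alongside containerd):

```shell
# Should print "active" if the service restarted cleanly
systemctl is-active containerd

# Client and server versions should both be reported
sudo ctr version
```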

Initialize Control Plane

After all the above processes are completed, we can create the Kubernetes Cluster by initializing the Master Node or Control Plane using the kubeadm command like this:

Note: Replace [ip_master_node] with the IP address of your Master Node:

bash
sudo kubeadm init --control-plane-endpoint "[ip_master_node]:6443" --upload-certs --pod-network-cidr=10.244.0.0/16

For example, in my environment:

bash
sudo kubeadm init --control-plane-endpoint "20.20.20.11:6443" --upload-certs --pod-network-cidr=10.244.0.0/16

If the above command runs successfully, you should see output like this:

bash
I0512 13:14:15.500217    3037 version.go:261] remote version is much newer: v1.33.0; falling back to: stable-1.32
W0512 13:14:15.780527    3037 version.go:109] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.32.txt": Get "https://cdn.dl.k8s.io/release/stable-1.32.txt": dial tcp 146.75.45.55:443: connect: no route to host
W0512 13:14:15.780549    3037 version.go:110] falling back to the local client version: v1.32.4
[init] Using Kubernetes version: v1.32.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0512 13:14:15.841813    3037 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.20.20.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-1 localhost] and IPs [20.20.20.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-1 localhost] and IPs [20.20.20.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.293317ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.000748929s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
43d861fb4825a3ba2477f045569bf8f8f80c41c66c9fee6e245e55b67e29c1cc
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ajrbck.gadjppj7nq122dde
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of control-plane nodes running the following command on each as root:
 
  kubeadm join 20.20.20.11:6443 --token ajrbck.gadjppj7nq122dde \
        --discovery-token-ca-cert-hash sha256:a2cfd158e6346f9ca75589ad98e0fcc76d89f03e89b2b5f84e7fe87a4328fdc9 \
        --control-plane --certificate-key 43d861fb4825a3ba2477f045569bf8f8f80c41c66c9fee6e245e55b67e29c1cc
 
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 20.20.20.11:6443 --token ajrbck.gadjppj7nq122dde \
        --discovery-token-ca-cert-hash sha256:a2cfd158e6346f9ca75589ad98e0fcc76d89f03e89b2b5f84e7fe87a4328fdc9

Note: By default, the cluster created by K8s is a single node, where the Control Plane (Master Node) and Data Plane (Worker Node) run together on the same machine. Keep in mind that kubeadm taints the control-plane node with NoSchedule (visible in the init output above), so regular workloads won't be scheduled on it until worker nodes join.
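
If you do want to run a true single-node cluster and schedule regular Pods on the control-plane node, the kubeadm documentation describes removing the control-plane taint:

```shell
# Remove the NoSchedule taint from all nodes (single-node setups only)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```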

To create a Kubernetes Cluster with more than 1 node in K8s, we can join worker nodes (data plane) to the master node (control plane). To join them, run the following command:

bash
sudo kubeadm join [ip_master_node]:6443 --token [token_master_node] --discovery-token-ca-cert-hash sha256:[hash_number]
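
Note: The bootstrap token printed by kubeadm init expires after 24 hours. If you join a node later, you can generate a fresh join command (token and hash included) on the Master Node, per the kubeadm docs:

```shell
# Prints a ready-to-run "kubeadm join ..." command with a new token
sudo kubeadm token create --print-join-command
```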

Since I prepared 2 VMs for the Kubernetes Cluster installed using K8s, I can join the worker node using the following command:

bash
sudo kubeadm join 20.20.20.11:6443 --token ajrbck.gadjppj7nq122dde --discovery-token-ca-cert-hash sha256:a2cfd158e6346f9ca75589ad98e0fcc76d89f03e89b2b5f84e7fe87a4328fdc9

If the above command runs successfully, the output will look like this:

bash
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.725345ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Make sure the preparation steps (adding the repository, installing the dependency packages, configuring the network, and so on) have also been completed on every node that will join the cluster.

Configure kubectl

After the Kubernetes Cluster is successfully created, configure kubectl so we can interact with the cluster. To do this, run the following command on the Master Node (Control Plane):

bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
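
To make sure kubectl can actually reach the cluster with this kubeconfig, a quick check:

```shell
# Should print the control plane and CoreDNS endpoints
kubectl cluster-info
```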

To verify that the Kubernetes Cluster was successfully created, you can check the nodes registered in the cluster using the following command:

bash
kubectl get nodes -o wide

If the above command runs successfully, the output will look like this:

Note: You can see the STATUS of the nodes in this cluster is NotReady because we haven't set up the CNI (Container Networking Interface) yet.

bash
NAME    STATUS     ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-1   NotReady   control-plane   7m57s   v1.32.4   20.20.20.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://1.7.25
k8s-4   NotReady   <none>          3m18s   v1.32.4   20.20.20.14   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://1.7.25

Install CNI Calico

After successfully installing and setting up the Kubernetes Cluster using K8s, we need to install a CNI (Container Network Interface) plugin so Pods on different nodes can communicate. Here I'll use Calico as the CNI. To install it, run the following commands (Calico's docs treat the operator-based install and the plain calico.yaml manifest as alternative methods; here both are applied, which matches the output below):

bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/calico.yaml
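
Calico's node agent rolls out as a DaemonSet, so instead of polling manually you can wait for it (a sketch; the resource names match the manifests applied above):

```shell
# Block until every calico-node Pod is ready (or the timeout expires)
kubectl -n kube-system rollout status daemonset/calico-node --timeout=300s

# Then wait for all nodes to report Ready
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```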

Next, check if the pods are running successfully using the following command:

bash
kubectl get pods -n tigera-operator
kubectl get pods -n kube-system

If the above command runs successfully, the output will look like this:

Note: Wait a few minutes for all pods to be truly running.

bash
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-789496d6f5-qh7nq   1/1     Running   0          65s
 
NAME                                      READY   STATUS    RESTARTS      AGE
calico-kube-controllers-79949b87d-9jlfh   1/1     Running   0             8m20s
calico-node-48xls                         1/1     Running   0             8m20s
calico-node-r9pl8                         1/1     Running   0             8m20s
coredns-668d6bf9bc-pz2dm                  1/1     Running   0             33m
coredns-668d6bf9bc-vw2dh                  1/1     Running   0             33m
etcd-k8s-1                                1/1     Running   1 (13m ago)   33m
kube-apiserver-k8s-1                      1/1     Running   1 (13m ago)   33m
kube-controller-manager-k8s-1             1/1     Running   1 (13m ago)   33m
kube-proxy-ff8cd                          1/1     Running   1 (13m ago)   33m
kube-proxy-j25xc                          1/1     Running   1 (13m ago)   28m
kube-scheduler-k8s-1                      1/1     Running   1 (13m ago)   33m

If the CNI is successfully installed, now when we check the node STATUS in the cluster again, it should be Ready like this:

bash
kubectl get nodes -o wide
bash
NAME    STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-1   Ready    control-plane   31m   v1.32.4   20.20.20.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-59-generic   containerd://1.7.25
k8s-4   Ready    <none>          27m   v1.32.4   20.20.20.14   <none>        Ubuntu 24.04.1 LTS   6.8.0-51-generic   containerd://1.7.25

Testing Nginx Deployment on K8s

To further verify that the Kubernetes Cluster installation was successful, we can try deploying a default Nginx application.

Note: If you don't want to write the following YAML configuration manually, you can apply it directly from the GitHub repository I created:

bash
kubectl apply -f https://raw.githubusercontent.com/armandwipangestu/belajar-k8s/refs/heads/main/episode-3/example/nginx-deployment.yml

Otherwise, create the YAML file using the following command:

bash
nvim nginx-deployment.yml

Then fill in the configuration like this:

nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:latest
                  ports:
                      - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
    name: nginx-service
spec:
    type: NodePort
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080

After that, deploy the YAML file configuration to the Kubernetes Cluster using the following command:

bash
kubectl apply -f nginx-deployment.yml

Next, check if the pod and service are running successfully using the following commands:

bash
kubectl get pods
kubectl get svc

If the pod and service are running successfully, the output will look like this:

bash
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-96b9d695-bqcpz   1/1     Running   0          21m
 
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        34m
nginx-service   NodePort    10.107.141.163   <none>        80:30080/TCP   21m
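
As an optional experiment, you can scale the Deployment and watch the Pods spread across the nodes (standard kubectl commands; the replica count of 3 is arbitrary):

```shell
# Increase the Deployment to 3 replicas
kubectl scale deployment nginx-deployment --replicas=3

# The NODE column shows which node each Pod landed on
kubectl get pods -o wide
```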

To further verify that Nginx is running and accessible, you can check using the following command:

Note: Adjust the IP address of the node and the port of the service being used

bash
curl http://20.20.20.11:30080

If the above command runs successfully, the result will look like this:

Note: Because the Service type is NodePort, the port is exposed on every node's IP, so 20.20.20.11 can be reached directly from any computer on the same network, like my laptop below.

html
<html>
    <head>
        <title>Welcome to nginx!</title>
        <style>
            html {
                color-scheme: light dark;
            }
            body {
                width: 35em;
                margin: 0 auto;
                font-family: Tahoma, Verdana, Arial, sans-serif;
            }
        </style>
    </head>
    <body>
        <h1>Welcome to nginx!</h1>
        <p>
            If you see this page, the nginx web server is successfully installed
            and working. Further configuration is required.
        </p>
 
        <p>
            For online documentation and support please refer to
            <a href="http://nginx.org/">nginx.org</a>. Commercial support is
            available at <a href="http://nginx.com/">nginx.com</a>.
        </p>
 
        <p><em>Thank you for using nginx.</em></p>
    </body>
</html>

Conclusion

After learning about various tools like Minikube, Kind, K3s, and K8s, we now know that each tool has its own advantages and use cases, whether for local development, CI/CD, or lightweight production environments.

By understanding these tools, we can make wiser choices about which one suits our needs and available resources.

Was episode 3 interesting? We've practiced setting up a Kubernetes Cluster, learned about the various tools we can use, and even deployed a default Nginx application that we could access. In episode 4, we'll explore one of the Kubernetes Objects: the Node. So keep your learning spirit alive.

