Learning Kubernetes - Episode 18 - Introduction and Explanation of Service

In this episode, we'll discuss Kubernetes Service, the fundamental networking abstraction for exposing applications. We'll learn about Service types, how they enable Pod communication, and best practices for service discovery.

Arman Dwi Pangestu
March 21, 2026

Introduction

Note

If you want to read the previous episode, you can click the Episode 17 thumbnail below

Episode 17

In the previous episode, we learned about working with multiple resources using the all keyword. In episode 18, we'll discuss Service, one of the most fundamental concepts in Kubernetes networking.

Note: Here I'll be using a Kubernetes Cluster installed through K3s.

Pods in Kubernetes are ephemeral - they can be created, destroyed, and recreated with different IP addresses. Service provides a stable endpoint for accessing Pods, abstracting away the dynamic nature of Pod IPs and enabling reliable communication between application components.

What Is Service?

A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy for accessing them. Services enable network access to a set of Pods, providing a stable IP address and DNS name even as Pods are created and destroyed.

Think of Service like a load balancer with service discovery - it maintains a stable endpoint while automatically routing traffic to healthy Pods that match its selector. When Pods come and go, the Service automatically updates its list of endpoints.

Key characteristics of Service:

  • Stable endpoint - Provides consistent IP and DNS name
  • Load balancing - Distributes traffic across multiple Pods
  • Service discovery - Enables Pods to find each other via DNS
  • Label selector - Automatically discovers Pods with matching labels
  • Multiple types - ClusterIP, NodePort, LoadBalancer, ExternalName
  • Port mapping - Maps service ports to Pod ports
  • Session affinity - Optional sticky sessions
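
To make the "stable endpoint over changing Pods" idea concrete, here is a toy shell sketch (not Kubernetes code; the Pod IPs are invented) of a Service spreading requests round-robin across its current endpoint list:

```shell
#!/bin/sh
# Toy illustration only: a Service keeps a list of endpoint IPs
# (hypothetical values) and spreads requests across them round-robin.
endpoints="10.42.0.10 10.42.0.11 10.42.0.12"
i=0
for request in 1 2 3 4 5 6; do
  n=$(( (i % 3) + 1 ))                            # cycle through fields 1..3
  target=$(echo "$endpoints" | cut -d' ' -f"$n")
  echo "request $request -> $target"
  i=$((i + 1))
done
```

Swap an IP in the list and the loop keeps working, which is exactly the property a Service gives you: callers never track individual Pod IPs.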

Why Do We Need Service?

Service solves several critical networking challenges:

  • Dynamic Pod IPs - Pods get new IPs when recreated; Service provides stable endpoint
  • Load balancing - Distributes traffic across multiple Pod replicas
  • Service discovery - Applications can find each other using DNS names
  • Decoupling - Frontend doesn't need to know backend Pod IPs
  • External access - Exposes applications outside the cluster
  • Health checking - Only routes to healthy Pods
  • Port abstraction - Service port can differ from Pod port

Without Service, you would need to:

  • Track Pod IPs manually
  • Implement your own load balancing
  • Update configurations when Pods change
  • Handle Pod failures manually

Service Types

Kubernetes provides four Service types:

ClusterIP (Default)

Exposes Service on a cluster-internal IP. Service is only accessible within the cluster.

Use case: Internal communication between microservices

NodePort

Exposes Service on each Node's IP at a static port. Makes Service accessible from outside the cluster.

Use case: Development, testing, or when LoadBalancer is unavailable

LoadBalancer

Exposes Service externally using a cloud provider's load balancer.

Use case: Production external access in cloud environments

ExternalName

Maps Service to an external DNS name.

Use case: Accessing external services with Kubernetes DNS

Creating a ClusterIP Service

ClusterIP is the default Service type for internal cluster communication.

Example 1: Basic ClusterIP Service

First, create a Deployment:

nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nginx-deployment
spec:
    replicas: 3
    selector:
        matchLabels:
            app: nginx
    template:
        metadata:
            labels:
                app: nginx
        spec:
            containers:
                - name: nginx
                  image: nginx:1.25
                  ports:
                      - containerPort: 80

Apply the Deployment:

bash
sudo kubectl apply -f nginx-deployment.yml

Create a ClusterIP Service:

nginx-service-clusterip.yml
apiVersion: v1
kind: Service
metadata:
    name: nginx-service
spec:
    type: ClusterIP
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80

Apply the Service:

bash
sudo kubectl apply -f nginx-service-clusterip.yml

Verify the Service:

bash
sudo kubectl get service nginx-service

Output:

bash
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-service   ClusterIP   10.43.100.50    <none>        80/TCP    30s

The Service gets a stable ClusterIP (10.43.100.50) that won't change.

Testing ClusterIP Service

Test the Service from within the cluster:

bash
# Create a test Pod
sudo kubectl run test-pod --image=curlimages/curl:latest --rm -it -- sh
 
# Inside the Pod, test the Service
curl http://nginx-service
curl http://nginx-service.default.svc.cluster.local

The Service load balances requests across all three nginx Pods.

Service Discovery with DNS

Kubernetes automatically creates DNS records for Services.

DNS Format

Services can be accessed using these DNS names:

Within same namespace:

plaintext
<service-name>

From different namespace:

plaintext
<service-name>.<namespace>

Fully qualified domain name (FQDN):

plaintext
<service-name>.<namespace>.svc.cluster.local

Example: DNS Service Discovery

backend-service.yml
apiVersion: v1
kind: Service
metadata:
    name: backend
    namespace: production
spec:
    selector:
        app: backend
    ports:
        - port: 8080
          targetPort: 8080

Frontend Pods can access this Service using:

  • backend (if in same namespace)
  • backend.production (from different namespace)
  • backend.production.svc.cluster.local (FQDN)
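
The three forms are purely mechanical to build; a small shell sketch using the backend/production names from the example (the default cluster domain cluster.local is assumed):

```shell
#!/bin/sh
# Build the three DNS names clients can use for the example Service.
svc=backend
ns=production
short="$svc"                           # same namespace
scoped="$svc.$ns"                      # from another namespace
fqdn="$svc.$ns.svc.cluster.local"      # fully qualified
printf '%s\n%s\n%s\n' "$short" "$scoped" "$fqdn"
```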

Creating a NodePort Service

NodePort exposes Service on each Node's IP at a static port (30000-32767).
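
The nodePort value you pick must fall inside the API server's NodePort range, which defaults to 30000-32767. A toy shell check of that rule (this is an illustration, not a kubectl feature):

```shell
#!/bin/sh
# Validate a port number against the default NodePort range.
check_nodeport() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "$1: ok"
  else
    echo "$1: outside default range"
  fi
}
check_nodeport 30080
check_nodeport 8080
```

If you omit nodePort entirely, Kubernetes picks a free port from this range for you.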

Example: NodePort Service

nginx-service-nodeport.yml
apiVersion: v1
kind: Service
metadata:
    name: nginx-nodeport
spec:
    type: NodePort
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30080

Apply the Service:

bash
sudo kubectl apply -f nginx-service-nodeport.yml

Verify:

bash
sudo kubectl get service nginx-nodeport

Output:

bash
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-nodeport   NodePort   10.43.100.51    <none>        80:30080/TCP   30s

Access the Service from outside the cluster:

bash
curl http://<node-ip>:30080

Important

NodePort Services are accessible on ALL nodes in the cluster, even on nodes where no matching Pod is running; kube-proxy forwards the traffic to a node that has one.

Creating a LoadBalancer Service

LoadBalancer creates an external load balancer (in supported cloud environments).

Example: LoadBalancer Service

nginx-service-loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
    name: nginx-loadbalancer
spec:
    type: LoadBalancer
    selector:
        app: nginx
    ports:
        - protocol: TCP
          port: 80
          targetPort: 80

Apply the Service:

bash
sudo kubectl apply -f nginx-service-loadbalancer.yml

Verify:

bash
sudo kubectl get service nginx-loadbalancer

Output (in cloud environment):

bash
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
nginx-loadbalancer   LoadBalancer   10.43.100.52    203.0.113.10     80:31234/TCP   2m

The EXTERNAL-IP is the public IP provided by the cloud load balancer.

Note

The LoadBalancer type requires a controller to provision the external address. Cloud providers (AWS, GCP, Azure) supply one; in local clusters like Minikube you may need MetalLB or a similar solution.


If you are using K3s, a Service of type LoadBalancer is handled automatically by the built-in Klipper load balancer (ServiceLB), which creates a svclb Pod for the Service:

K3s LoadBalancer Klipper
Name:                 svclb-nodejs-loadbalancer-ecbf2424-hns6d
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      svclb
Node:                 devnull/10.10.10.4
Start Time:           Wed, 18 Mar 2026 22:58:59 +0700
Labels:               app=svclb-nodejs-loadbalancer-ecbf2424
                      controller-revision-hash=6cbb5b89c6
                      pod-template-generation=1
                      svccontroller.k3s.cattle.io/svcname=nodejs-loadbalancer
                      svccontroller.k3s.cattle.io/svcnamespace=default
Annotations:          <none>
Status:               Running
IP:                   10.42.0.164
IPs:
  IP:           10.42.0.164
Controlled By:  DaemonSet/svclb-nodejs-loadbalancer-ecbf2424
Containers:
  lb-tcp-30031:
    Container ID:   containerd://58b6ef773ed3645804c1798224f165882988021e6dc77d36518a9f3e4425d108
    Image:          rancher/klipper-lb:v0.4.13
    Image ID:       docker.io/rancher/klipper-lb@sha256:7eb86d5b908ec6ddd9796253d8cc2f43df99420fc8b8a18452a94dc56f86aca0
    Port:           30031/TCP
    Host Port:      30031/TCP
    State:          Running
      Started:      Wed, 18 Mar 2026 22:58:59 +0700
    Ready:          True
    Restart Count:  0
    Environment:
      SRC_PORT:    30031
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  TCP
      DEST_PORT:   30031
      DEST_IPS:    10.43.186.199
    Mounts:        <none>
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:                      <none>
QoS Class:                    BestEffort
Node-Selectors:               <none>
Tolerations:                  CriticalAddonsOnly op=Exists
                              node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                              node-role.kubernetes.io/master:NoSchedule op=Exists
                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists
                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                              node.kubernetes.io/unreachable:NoExecute op=Exists
                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  45s   default-scheduler  Successfully assigned kube-system/svclb-nodejs-loadbalancer-ecbf2424-hns6d to devnull
  Normal  Pulled     46s   kubelet            Container image "rancher/klipper-lb:v0.4.13" already present on machine
  Normal  Created    46s   kubelet            Created container: lb-tcp-30031
  Normal  Started    46s   kubelet            Started container lb-tcp-30031

Port Configuration

Port Fields

yml
ports:
    - protocol: TCP
      port: 80          # Service port (what clients connect to)
      targetPort: 8080  # Pod port (where container listens)
      nodePort: 30080   # Node port (for NodePort/LoadBalancer)

  • port: The port the Service listens on
  • targetPort: The port on the Pod (can be port number or name)
  • nodePort: The port on each Node (NodePort/LoadBalancer only)

Example: Different Port Mapping

api-service.yml
apiVersion: v1
kind: Service
metadata:
    name: api-service
spec:
    selector:
        app: api
    ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 8080
        - name: https
          protocol: TCP
          port: 443
          targetPort: 8443

This Service:

  • Listens on port 80, forwards to Pod port 8080
  • Listens on port 443, forwards to Pod port 8443

Using Named Ports

Define named ports in Pods:

pod-named-ports.yml
apiVersion: v1
kind: Pod
metadata:
    name: web-pod
    labels:
        app: web
spec:
    containers:
        - name: web
          image: nginx:1.25
          ports:
              - name: http
                containerPort: 80
              - name: metrics
                containerPort: 9090

Reference named ports in Service:

service-named-ports.yml
apiVersion: v1
kind: Service
metadata:
    name: web-service
spec:
    selector:
        app: web
    ports:
        - name: http
          port: 80
          targetPort: http
        - name: metrics
          port: 9090
          targetPort: metrics

Session Affinity

Control whether requests from the same client go to the same Pod.

ClientIP Session Affinity

service-session-affinity.yml
apiVersion: v1
kind: Service
metadata:
    name: sticky-service
spec:
    selector:
        app: web
    sessionAffinity: ClientIP
    sessionAffinityConfig:
        clientIP:
            timeoutSeconds: 10800
    ports:
        - port: 80
          targetPort: 80

With sessionAffinity: ClientIP, requests from the same client IP go to the same Pod for the specified timeout (default 10800 seconds = 3 hours).
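
The sticky behavior can be sketched with a toy hash; this is not how kube-proxy actually implements affinity (it tracks real client sources with the configured timeout), but it shows why one client IP keeps landing on the same Pod:

```shell
#!/bin/sh
# Toy sketch: map a client IP deterministically to one of 3 Pods, so
# repeated requests from the same IP always pick the same Pod.
pick_pod() {
  sum=0
  for octet in $(echo "$1" | tr '.' ' '); do
    sum=$((sum + octet))                # crude hash: sum of the octets
  done
  echo "pod-$((sum % 3))"
}
pick_pod 192.168.1.10
pick_pod 192.168.1.10    # same client, same Pod
pick_pod 192.168.1.11    # a different client may land elsewhere
```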

Headless Service

A headless Service is a Service without a ClusterIP, used when clients need to reach individual Pods directly.

Example: Headless Service

headless-service.yml
apiVersion: v1
kind: Service
metadata:
    name: database
spec:
    clusterIP: None
    selector:
        app: database
    ports:
        - port: 5432
          targetPort: 5432

With clusterIP: None, DNS returns Pod IPs directly instead of a Service IP.

Use case: StatefulSets where each Pod needs a stable identity.

Service Without Selector

Create a Service that doesn't automatically select Pods.

Example: Manual Endpoints

service-no-selector.yml
apiVersion: v1
kind: Service
metadata:
    name: external-database
spec:
    ports:
        - port: 5432
          targetPort: 5432

Manually create an Endpoints object with the same name as the Service:

endpoints.yml
apiVersion: v1
kind: Endpoints
metadata:
    name: external-database
subsets:
    - addresses:
          - ip: 192.168.1.100
      ports:
          - port: 5432

Use case: Accessing external services or databases outside Kubernetes.

ExternalName Service

Map a Service to an external DNS name.

Example: ExternalName Service

external-service.yml
apiVersion: v1
kind: Service
metadata:
    name: external-api
spec:
    type: ExternalName
    externalName: api.example.com

Pods can access external-api, which DNS resolves (via a CNAME record) to api.example.com.

Use case: Abstracting external service URLs, making it easy to change them later.

Practical Examples

Example 1: Microservices Architecture

Frontend, backend, and database services:

microservices-services.yml
# Frontend Service (LoadBalancer for external access)
apiVersion: v1
kind: Service
metadata:
    name: frontend
spec:
    type: LoadBalancer
    selector:
        app: frontend
    ports:
        - port: 80
          targetPort: 3000
---
# Backend Service (ClusterIP for internal access)
apiVersion: v1
kind: Service
metadata:
    name: backend
spec:
    type: ClusterIP
    selector:
        app: backend
    ports:
        - port: 8080
          targetPort: 8080
---
# Database Service (Headless for StatefulSet)
apiVersion: v1
kind: Service
metadata:
    name: database
spec:
    clusterIP: None
    selector:
        app: database
    ports:
        - port: 5432
          targetPort: 5432

Example 2: Multi-Port Service

Application with HTTP and metrics endpoints:

multi-port-service.yml
apiVersion: v1
kind: Service
metadata:
    name: app-service
spec:
    selector:
        app: myapp
    ports:
        - name: http
          port: 80
          targetPort: 8080
        - name: metrics
          port: 9090
          targetPort: 9090
        - name: health
          port: 8081
          targetPort: 8081

Example 3: Environment-Specific Services

Different services for different environments:

env-services.yml
# Production Service
apiVersion: v1
kind: Service
metadata:
    name: api
    namespace: production
spec:
    selector:
        app: api
        environment: production
    ports:
        - port: 80
          targetPort: 8080
---
# Staging Service
apiVersion: v1
kind: Service
metadata:
    name: api
    namespace: staging
spec:
    selector:
        app: api
        environment: staging
    ports:
        - port: 80
          targetPort: 8080

Viewing Service Details

Get Services

bash
sudo kubectl get services

Or shorthand:

bash
sudo kubectl get svc

Describe Service

bash
sudo kubectl describe service nginx-service

Output shows:

bash
Name:              nginx-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.100.50
IPs:               10.43.100.50
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.42.0.10:80,10.42.0.11:80,10.42.0.12:80
Session Affinity:  None
Events:            <none>

View Endpoints

bash
sudo kubectl get endpoints nginx-service

Output:

bash
NAME            ENDPOINTS                                   AGE
nginx-service   10.42.0.10:80,10.42.0.11:80,10.42.0.12:80   5m

Shows the actual Pod IPs the Service routes to.
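
The ENDPOINTS column is just a comma-separated list, which makes it easy to script checks against; for example:

```shell
#!/bin/sh
# Split an ENDPOINTS value (as printed by `kubectl get endpoints`)
# into one Pod IP:port pair per line, then count the pairs.
endpoints="10.42.0.10:80,10.42.0.11:80,10.42.0.12:80"
echo "$endpoints" | tr ',' '\n'
count=$(echo "$endpoints" | tr ',' '\n' | wc -l)
echo "pods behind the service: $count"
```

An empty list is the classic symptom of a selector mismatch, covered in Common Mistakes below.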

Common Mistakes and Pitfalls

Mistake 1: Selector Mismatch

Problem: Service selector doesn't match Pod labels.

Solution: Ensure labels match exactly:

yml
# Pod labels
labels:
    app: nginx
    version: v1
 
# Service selector must match
selector:
    app: nginx
    version: v1
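
The rule being illustrated is that every key/value pair in the Service selector must appear among the Pod's labels (extra Pod labels are fine). A toy shell version of that subset check:

```shell
#!/bin/sh
# Toy subset check: does a Pod's label set satisfy a Service selector?
# Labels and selectors are passed as space-separated key=value pairs.
matches() {
  pod_labels=" $1 "
  for pair in $2; do
    case "$pod_labels" in
      *" $pair "*) ;;                  # this selector entry is satisfied
      *) echo "no match"; return ;;    # a selector entry is missing
    esac
  done
  echo "match"
}
matches "app=nginx version=v1 tier=web" "app=nginx version=v1"
matches "app=nginx" "app=nginx version=v1"
```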

Mistake 2: Wrong Target Port

Problem: targetPort doesn't match container port.

Solution: Verify container port:

bash
sudo kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].ports[*].containerPort}'

Mistake 3: Using LoadBalancer Locally

Problem: LoadBalancer pending in local clusters.

Solution: Use NodePort for local development or install MetalLB.

Mistake 4: Not Checking Endpoints

Problem: Service has no endpoints.

Solution: Check if Pods are running and labels match:

bash
sudo kubectl get endpoints <service-name>
sudo kubectl get pods -l app=<label>

Mistake 5: Forgetting DNS Suffix

Problem: Can't access Service from different namespace.

Solution: Use full DNS name:

plaintext
<service-name>.<namespace>.svc.cluster.local

Best Practices

Use Meaningful Service Names

Choose clear, descriptive names:

yml
# Good
name: user-api
name: payment-service
name: database-primary
 
# Avoid
name: svc1
name: service
name: app

Always Set Resource Limits on Pods

Services route to Pods, so ensure Pods have resource limits:

yml
resources:
    requests:
        memory: "256Mi"
        cpu: "250m"
    limits:
        memory: "512Mi"
        cpu: "500m"

Use Named Ports

Makes configuration clearer:

yml
ports:
    - name: http
      port: 80
      targetPort: http
    - name: metrics
      port: 9090
      targetPort: metrics

Implement Health Checks

Configure probes so the Service only routes to ready Pods; the readiness probe is what adds a Pod to (or removes it from) the Service's endpoints:

yml
livenessProbe:
    httpGet:
        path: /health
        port: 8080
readinessProbe:
    httpGet:
        path: /ready
        port: 8080

Use ClusterIP for Internal Services

Don't expose internal services unnecessarily:

yml
# Internal microservice
type: ClusterIP
 
# Only expose what needs external access
type: LoadBalancer

Document Service Dependencies

Add annotations documenting dependencies:

yml
metadata:
    annotations:
        description: "User API service"
        depends-on: "database, cache"
        owner: "backend-team"

Conclusion

In episode 18, we've explored Service in Kubernetes in depth. We've learned what Services are, the different Service types, and how to use them for reliable application networking.

Key takeaways:

  • Service provides stable endpoint for accessing Pods
  • Four types: ClusterIP (internal), NodePort (node access), LoadBalancer (external), ExternalName (DNS mapping)
  • Uses label selectors to automatically discover Pods
  • Provides load balancing across multiple Pod replicas
  • Enables service discovery via DNS
  • ClusterIP is default and most common for internal communication
  • Port mapping allows Service port to differ from Pod port
  • Session affinity enables sticky sessions
  • Headless Services for direct Pod access
  • Always verify selector matches Pod labels
  • Check endpoints to ensure Service finds Pods

Service is fundamental to Kubernetes networking, enabling reliable communication between application components. By understanding Services, you can build robust, scalable microservices architectures with proper service discovery and load balancing.

Are you getting a clearer understanding of Service in Kubernetes? In episode 19, we'll discuss Ingress, which provides sophisticated HTTP/HTTPS routing to Services, with features like host-based routing, path-based routing, and TLS termination. Keep your learning momentum going and look forward to the next episode!

Note

If you want to continue to the next episode, you can click the Episode 19 thumbnail below

Episode 19
