In this episode, we'll discuss the Kubernetes ServiceAccount for Pod identity and authentication. We'll learn how ServiceAccounts work, how to create and use them, how tokens are managed, and best practices for secure Pod authentication.

Note
If you want to read the previous episode, you can click the Episode 31 thumbnail below
In the previous episode, we learned about Vertical Pod Autoscaler (VPA) for automatic resource sizing. In episode 32, we'll discuss ServiceAccount, which provides identity for Pods and enables them to authenticate with the Kubernetes API server.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
Just like users need accounts to access systems, Pods need ServiceAccounts to interact with the Kubernetes API. ServiceAccounts enable secure, controlled access to cluster resources, allowing applications to query cluster state, create resources, or perform operations based on assigned permissions.
ServiceAccount is a Kubernetes resource that provides an identity for processes running in Pods. It enables Pods to authenticate with the Kubernetes API server and perform authorized operations.
Think of ServiceAccount like an employee badge - it identifies who you are (authentication) and determines what doors you can open (authorization via RBAC). Each Pod gets a badge (ServiceAccount) that grants specific access levels.
Key characteristics of ServiceAccount:

- Namespace-scoped: each ServiceAccount belongs to a single namespace
- Every namespace automatically gets a `default` ServiceAccount
- Pods use a ServiceAccount to authenticate with the API server
- Permissions are granted separately through RBAC

Understanding the key differences:
| Aspect | ServiceAccount | User Account |
|---|---|---|
| Purpose | For Pods/applications | For humans |
| Scope | Namespace-scoped | Cluster-wide |
| Management | Managed by Kubernetes | External (LDAP, OIDC, etc.) |
| Token | Issued by Kubernetes (Secret-stored pre-1.24, short-lived in 1.24+) | External auth system |
| Creation | kubectl create | External identity provider |
| Use Case | Application access | Human access |
| Lifecycle | Tied to namespace | Independent |
ServiceAccount solves critical authentication and authorization challenges: it gives each workload its own identity, lets you scope permissions per application, and makes API activity auditable.

Without ServiceAccounts, all Pods would share the same identity, making it impossible to implement proper access control or audit who did what.
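That identity shows up in the API server (and in audit logs) as a username of the form `system:serviceaccount:<namespace>:<name>` — the same string used with `kubectl auth can-i --as=...`. A small Python sketch of building and parsing that string (the helper names are my own, for illustration):

```python
# Build and parse the username Kubernetes uses for a ServiceAccount identity.
# Helper names are hypothetical; the "system:serviceaccount:..." format is real.

def sa_username(namespace: str, name: str) -> str:
    """Build the username the API server sees for a ServiceAccount."""
    return f"system:serviceaccount:{namespace}:{name}"

def parse_sa_username(username: str) -> tuple[str, str]:
    """Split a ServiceAccount username back into (namespace, name)."""
    prefix, namespace, name = username.rsplit(":", 2)
    if prefix != "system:serviceaccount":
        raise ValueError(f"not a ServiceAccount username: {username}")
    return namespace, name

print(sa_username("default", "my-app-sa"))
# system:serviceaccount:default:my-app-sa
```

This is the username you impersonate when testing a ServiceAccount's permissions from the command line.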
Every namespace automatically gets a default ServiceAccount:
```shell
kubectl get serviceaccount
```

Output:

```
NAME      SECRETS   AGE
default   0         10d
```

Every Pod automatically uses the `default` ServiceAccount unless specified otherwise.
ServiceAccount tokens are JWTs (JSON Web Tokens) that authenticate Pods:

Kubernetes 1.24+:

- Tokens are short-lived, audience-bound, and automatically rotated
- They are mounted into Pods through a projected volume
- No Secret-based token is created automatically

Before Kubernetes 1.24:

- A long-lived token was automatically created for each ServiceAccount and stored in a Secret
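Since these tokens are JWTs, you can inspect their claims by base64-decoding the payload segment. A minimal sketch (the sample token below is fabricated; real verification requires checking the signature against the cluster's keys):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.
    This only inspects claims -- it does NOT verify the signature."""
    payload_b64 = token.split(".")[1]
    # JWT uses URL-safe base64 without padding; restore padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A fabricated sample token (header.payload.signature) with SA-like claims
sample_payload = {"sub": "system:serviceaccount:default:my-app-sa", "exp": 1700003600}
sample_token = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(sample_payload).encode()).decode().rstrip("="),
    "signature",
])
print(decode_jwt_payload(sample_token)["sub"])
# system:serviceaccount:default:my-app-sa
```

The `sub` claim carries the ServiceAccount identity; bound tokens also include an expiry (`exp`) and audience claims.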
Tokens are automatically mounted in Pods at:
`/var/run/secrets/kubernetes.io/serviceaccount/`

Contains:

- `token` - JWT authentication token
- `ca.crt` - CA certificate for the API server
- `namespace` - the Pod's current namespace

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: default
```

Create:

```shell
kubectl apply -f my-serviceaccount.yml
```

Verify:

```shell
kubectl get serviceaccount my-app-sa
```

You can also add annotations to document a ServiceAccount:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: default
  annotations:
    description: "ServiceAccount for my application"
    owner: "platform-team"
```

For pulling images from private registries:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
imagePullSecrets:
  - name: docker-registry-secret
```

Assign the ServiceAccount to a Pod with `spec.serviceAccountName`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app-sa
  containers:
    - name: app
      image: nginx:1.25
```

The same field works in a Deployment's Pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa
      containers:
        - name: app
          image: nginx:1.25
```

Prevent token from being mounted:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app-sa
  automountServiceAccountToken: false
  containers:
    - name: app
      image: nginx:1.25
```

ServiceAccounts work with RBAC to control permissions.
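Conceptually, the API server authorizes a request from a ServiceAccount by checking whether any RBAC rule bound to it matches the request's API group, resource, and verb. A simplified sketch of that matching logic (illustration only, not the API server's actual implementation):

```python
# Simplified model of RBAC rule matching: a request is allowed if ANY bound rule
# matches its apiGroup, resource, and verb ("*" acts as a wildcard).
# Illustration only -- the real evaluation happens inside the API server.

def allowed(rules: list[dict], api_group: str, resource: str, verb: str) -> bool:
    def matches(values: list[str], wanted: str) -> bool:
        return "*" in values or wanted in values
    return any(
        matches(rule["apiGroups"], api_group)
        and matches(rule["resources"], resource)
        and matches(rule["verbs"], verb)
        for rule in rules
    )

# Rules equivalent to a read-only "pod-reader" style Role
pod_reader = [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}]
print(allowed(pod_reader, "", "pods", "list"))    # True
print(allowed(pod_reader, "", "pods", "delete"))  # False
```

The empty string `""` denotes the core API group (Pods, Services, ConfigMaps), which is why the YAML examples below use `apiGroups: [""]`.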
First, create the ServiceAccount:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: default
```

Then define a Role with the needed permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```

Finally, bind the Role to the ServiceAccount:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Apply all:
```shell
kubectl apply -f app-serviceaccount.yml
kubectl apply -f pod-reader-role.yml
kubectl apply -f pod-reader-binding.yml
```

Alternatively, combine everything in a single manifest:
```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: default
---
# Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# Pod using ServiceAccount
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app-reader
  containers:
    - name: app
      image: nginx:1.25
```

Inside a Pod, access the API using the mounted token:
```shell
# Get token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Get namespace
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# Get CA certificate
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Call API
curl --cacert $CACERT \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods
```

You can also run `kubectl` itself inside a Pod that uses the ServiceAccount:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-pod
spec:
  serviceAccountName: app-reader
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest
      command: ["sleep", "3600"]
```

Inside the Pod:
```shell
kubectl exec -it kubectl-pod -- /bin/bash

# Inside Pod
kubectl get pods
kubectl get services
```

Python example:
```python
from kubernetes import client, config

# Load in-cluster config (uses ServiceAccount)
config.load_incluster_config()

# Create API client
v1 = client.CoreV1Api()

# List pods
pods = v1.list_namespaced_pod(namespace="default")
for pod in pods.items:
    print(f"Pod: {pod.metadata.name}")
```

To create a long-lived token (the pre-1.24 style), create a Secret of type `kubernetes.io/service-account-token` bound to the ServiceAccount:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-sa-token
  annotations:
    kubernetes.io/service-account.name: my-app-sa
type: kubernetes.io/service-account-token
```

Get token:
```shell
kubectl get secret my-app-sa-token -o jsonpath="{.data.token}" | base64 --decode
```

Create short-lived token:

```shell
kubectl create token my-app-sa
```

Create token with custom duration:

```shell
kubectl create token my-app-sa --duration=24h
```

Check token expiration:

```shell
kubectl create token my-app-sa | jwt decode -
```
Example: a read-only application:

```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: readonly-app
  namespace: production
---
# Role - Read pods and services
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: readonly-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: readonly-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: readonly-app
    namespace: production
roleRef:
  kind: Role
  name: readonly-role
  apiGroup: rbac.authorization.k8s.io
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readonly-app
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: readonly-app
  template:
    metadata:
      labels:
        app: readonly-app
    spec:
      serviceAccountName: readonly-app
      containers:
        - name: app
          image: myapp:latest
```
Example: a CI/CD deployer with cluster-wide deploy permissions:

```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-deployer
  namespace: default
---
# ClusterRole - Deploy applications
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services", "configmaps", "secrets"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
---
# ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cicd-deployer-binding
subjects:
  - kind: ServiceAccount
    name: cicd-deployer
    namespace: default
roleRef:
  kind: ClusterRole
  name: deployer-role
  apiGroup: rbac.authorization.k8s.io
```
Example: a monitoring stack that reads metrics cluster-wide:

```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-app
  namespace: monitoring
---
# ClusterRole - Read metrics
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list"]
---
# ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-binding
subjects:
  - kind: ServiceAccount
    name: monitoring-app
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: metrics-reader
  apiGroup: rbac.authorization.k8s.io
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-app
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring
  template:
    metadata:
      labels:
        app: monitoring
    spec:
      serviceAccountName: monitoring-app
      containers:
        - name: prometheus
          image: prom/prometheus:latest
```
Example: a backup Job with access to PersistentVolumeClaims and Pods:

```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-job-sa
  namespace: default
---
# Role - Access to backup resources
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create"]
---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backup-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: backup-job-sa
    namespace: default
roleRef:
  kind: Role
  name: backup-role
  apiGroup: rbac.authorization.k8s.io
---
# Job
apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job
spec:
  template:
    spec:
      serviceAccountName: backup-job-sa
      containers:
        - name: backup
          image: backup-tool:latest
          command: ["./backup.sh"]
      restartPolicy: OnFailure
```

Problem: Relying on the default ServiceAccount gives every Pod the same identity and no tailored permissions.
```yaml
# Bad: Using default ServiceAccount
spec:
  # No serviceAccountName specified
  containers:
    - name: app
      image: myapp:latest
```

Solution: Create a dedicated ServiceAccount:
```yaml
# Good: Dedicated ServiceAccount
spec:
  serviceAccountName: my-app-sa
  containers:
    - name: app
      image: myapp:latest
```

Problem: Granting cluster-admin to a ServiceAccount.
```yaml
# Bad: Too many permissions
roleRef:
  kind: ClusterRole
  name: cluster-admin
```

Solution: Grant minimum necessary permissions:
```yaml
# Good: Specific permissions
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
```

Problem: Multiple applications using the same ServiceAccount.
Solution: Create a separate ServiceAccount per application:
```yaml
# App 1
serviceAccountName: app1-sa

# App 2
serviceAccountName: app2-sa
```

Problem: Mounting tokens in Pods that don't need API access.
Solution: Disable when not needed:
```yaml
spec:
  automountServiceAccountToken: false
```

Problem: Tokens that never expire are security risks.
Solution: Use short-lived tokens (1.24+):
```shell
kubectl create token my-app-sa --duration=1h
```

Grant only necessary permissions:
```yaml
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]  # Only what's needed
```

Separate identity for each app:
```yaml
# Frontend
serviceAccountName: frontend-sa

# Backend
serviceAccountName: backend-sa

# Database
serviceAccountName: database-sa
```

Prefer Role over ClusterRole when possible:
```yaml
# Good: Namespace-scoped
kind: Role
metadata:
  namespace: production

# Avoid unless necessary
kind: ClusterRole
```

Disable token automounting for Pods that don't need API access:

```yaml
spec:
  automountServiceAccountToken: false
  containers:
    - name: app
      image: nginx:1.25  # Doesn't need API access
```

Document ServiceAccounts with annotations:

```yaml
metadata:
  name: my-app-sa
  annotations:
    description: "ServiceAccount for my-app with read-only access to pods"
    owner: "platform-team"
    permissions: "pods:get,list,watch"
```

Review ServiceAccount permissions regularly:
```shell
# List ServiceAccounts
kubectl get serviceaccounts --all-namespaces

# Check permissions
kubectl auth can-i --list --as=system:serviceaccount:default:my-app-sa
```

For external access, use time-bound tokens:
```shell
kubectl create token my-app-sa --duration=8h
```

Problem: the Pod references a ServiceAccount that doesn't exist:

```shell
kubectl get pod my-pod
# Error: serviceaccount "my-app-sa" not found
```

Solution: Create the ServiceAccount first:

```shell
kubectl create serviceaccount my-app-sa
```

Problem: API requests from the Pod are denied:

```shell
# Inside Pod
kubectl get pods
# Error: pods is forbidden
```

Solution: Check and fix RBAC:

```shell
# Check permissions
kubectl auth can-i get pods --as=system:serviceaccount:default:my-app-sa

# Create Role and RoleBinding
kubectl create role pod-reader --verb=get,list --resource=pods
kubectl create rolebinding read-pods --role=pod-reader --serviceaccount=default:my-app-sa
```

Problem: the token is not mounted in the Pod:

```shell
# Inside Pod
ls /var/run/secrets/kubernetes.io/serviceaccount/
# Directory not found
```

Solution: Enable token mounting:

```yaml
spec:
  automountServiceAccountToken: true  # Ensure this is true
```

Problem: the ServiceAccount and Pod live in different namespaces:

```shell
# ServiceAccount in namespace A, Pod in namespace B
# Error: serviceaccount not found
```

Solution: Ensure both are in the same namespace:

```yaml
# ServiceAccount
metadata:
  namespace: production
---
# Pod
metadata:
  namespace: production
spec:
  serviceAccountName: my-app-sa
```

List ServiceAccounts:

```shell
kubectl get serviceaccounts
kubectl get sa  # Short form
kubectl get sa --all-namespaces
```

Describe a ServiceAccount:

```shell
kubectl describe serviceaccount my-app-sa
```

Get the full YAML:

```shell
kubectl get serviceaccount my-app-sa -o yaml
```

Check its permissions:

```shell
kubectl auth can-i --list --as=system:serviceaccount:default:my-app-sa
```

Get a token:

```shell
# Kubernetes 1.24+
kubectl create token my-app-sa

# Pre-1.24
kubectl get secret <sa-token-secret> -o jsonpath="{.data.token}" | base64 --decode
```

Delete a ServiceAccount:

```shell
kubectl delete serviceaccount my-app-sa
```

Warning
Deleting a ServiceAccount will cause Pods using it to lose API access. Ensure no Pods are using it before deletion.
In episode 32, we've explored ServiceAccount in Kubernetes in depth. We've learned how ServiceAccounts provide identity for Pods, enable API authentication, and work with RBAC for fine-grained access control.
Key takeaways:

- ServiceAccount provides identity for Pods; every namespace gets a `default` one automatically
- Tokens are mounted at `/var/run/secrets/kubernetes.io/serviceaccount/`
- Combine ServiceAccounts with RBAC Roles and RoleBindings for fine-grained access control
- Follow least privilege, use one ServiceAccount per application, and disable token automounting when it's not needed

ServiceAccount is fundamental to Kubernetes security and access control. By understanding ServiceAccounts and RBAC, you can implement proper authentication and authorization for your applications, ensuring secure, controlled access to cluster resources.
Are you getting a clearer understanding of ServiceAccount in Kubernetes? Keep your learning momentum going and look forward to the next episode!
Note
If you want to continue to the next episode, you can click the Episode 33 thumbnail below