In this episode, we'll discuss Kubernetes NetworkPolicy for fine-grained network traffic control. We'll learn how to implement network segmentation, zero-trust networking, and best practices for securing Pod communication.

In the previous episode, we explored Affinity and Anti-Affinity, which control Pod placement across nodes. Now we'll dive into NetworkPolicy, which controls network traffic between Pods and external networks.
Note: Here I'll be using a Kubernetes cluster installed through K3s.
By default, Kubernetes allows all traffic between Pods (flat network model). NetworkPolicy lets you implement network segmentation and zero-trust networking. Think of NetworkPolicy like firewall rules for your cluster - without it, all Pods can talk to each other. With it, you can restrict communication to only what's necessary.
NetworkPolicy is a Kubernetes resource that defines how Pods communicate with each other and with external networks. It operates at Layer 3 (IP) and Layer 4 (TCP/UDP) of the OSI model. It doesn't inspect application-level protocols or content.
By default, Kubernetes has no network restrictions: any Pod can communicate with any other Pod, across all namespaces. This is convenient for development but dangerous for production. NetworkPolicy changes this behavior.
A NetworkPolicy consists of a podSelector (which Pods the policy applies to), policyTypes (Ingress, Egress, or both), and lists of ingress/egress rules:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client
      ports:
        - protocol: TCP
          port: 80
```

Ingress policies control incoming traffic to Pods.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - {}
```

An empty ingress rule allows all traffic.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

This allows traffic to app=backend Pods only from app=frontend Pods on port 8080.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090
```

This allows traffic from any Pod in the monitoring namespace.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-external
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24
            except:
              - 203.0.113.5/32
      ports:
        - protocol: TCP
          port: 443
```

This allows traffic from the CIDR block 203.0.113.0/24 except 203.0.113.5.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-ingress
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
    - from:
        - podSelector:
            matchLabels:
              app: migration
      ports:
        - protocol: TCP
          port: 5432
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090
```

Multiple rules are combined with OR logic: traffic matching any rule is allowed.
Egress policies control outgoing traffic from Pods.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - {}
```

An empty egress rule allows all outgoing traffic.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-database
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```

This allows app=backend Pods to send traffic only to app=database Pods on port 5432.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
```

This allows DNS queries to the kube-system namespace.
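DNS can also fall back to TCP (for large responses, for example), so many teams allow port 53 on both protocols. A sketch of that variant:

```yaml
# Sketch: allow DNS over both UDP and TCP to kube-system.
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            name: kube-system
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```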
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32 # Block AWS metadata service
      ports:
        - protocol: TCP
          port: 443
```

This allows external HTTPS traffic but blocks the AWS metadata service.
Ingress and egress rules can be combined in a single policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
```

A common pattern is to deny all traffic by default, then allow specific traffic.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

With Ingress listed in policyTypes and no ingress rules defined, all incoming traffic is denied.
The same works for egress:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
```

Or for both directions at once:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

Here's a complete example for a three-tier application:
```yaml
# Frontend can receive traffic from external
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-ingress
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
---
# Frontend can only talk to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 8080
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
---
# Backend can receive from frontend and talk to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
---
# Database can only receive from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
```

To allow a monitoring system to reach every Pod:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090
```

This allows Prometheus in the monitoring namespace to scrape metrics from all Pods.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              name: production
```

This allows traffic only within the production namespace.
podSelector selects Pods by label:

```yaml
podSelector:
  matchLabels:
    app: web
```

namespaceSelector selects Pods in specific namespaces:

```yaml
namespaceSelector:
  matchLabels:
    name: production
```

ipBlock selects by IP CIDR:

```yaml
ipBlock:
  cidr: 10.0.0.0/8
  except:
    - 10.1.0.0/16
```

Selectors can also be combined within a single from item:

```yaml
from:
  - podSelector:
      matchLabels:
        app: web
    namespaceSelector:
      matchLabels:
        name: production
```

This means: Pods with the app=web label in the production namespace (both conditions must match).
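Be careful with the list structure here: writing the two selectors as separate from items changes the meaning from AND to OR. A contrasting sketch:

```yaml
# Two separate list items: traffic is allowed from Pods labeled
# app=web in the policy's own namespace, OR from any Pod in the
# production namespace (OR logic, two independent peers).
from:
  - podSelector:
      matchLabels:
        app: web
  - namespaceSelector:
      matchLabels:
        name: production
```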
1. No Layer 7 (Application) Filtering
NetworkPolicy works at Layer 3/4. It can't filter based on HTTP paths or methods.
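If your cluster runs Cilium, its own CRD layers L7 HTTP filtering on top of standard NetworkPolicy. A sketch, assuming Cilium is installed (the labels and path below are illustrative):

```yaml
# Sketch: requires the Cilium CNI; this is a CiliumNetworkPolicy,
# not a standard NetworkPolicy. Allows only GET requests to /api/*
# from app=client Pods.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-only
spec:
  endpointSelector:
    matchLabels:
      app: web
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: client
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/.*"
```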
2. No Egress to Pods in Other Clusters
NetworkPolicy only works within a cluster.
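A common workaround is to treat the other cluster as an external network and allow egress to its address range via ipBlock. A sketch, where the CIDR is a placeholder for your peer cluster's actual Pod or node range:

```yaml
# Sketch: allow egress toward another cluster's network.
# 192.168.100.0/24 is an assumption; substitute the real CIDR.
egress:
  - to:
      - ipBlock:
          cidr: 192.168.100.0/24
    ports:
      - protocol: TCP
        port: 443
```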
3. Requires Network Plugin Support
Not all network plugins support NetworkPolicy. Flannel doesn't, but Calico, Weave, and Cilium do.
4. No Logging Built-in
NetworkPolicy doesn't log denied traffic by default.
5. Performance Overhead
Complex policies can impact network performance.
Problem: Pods can't resolve DNS.
```yaml
# DON'T DO THIS - Pods can't resolve DNS
egress:
  - to:
      - podSelector:
          matchLabels:
            app: database
    ports:
      - protocol: TCP
        port: 5432
```

Solution: Always allow DNS:
```yaml
egress:
  - to:
      - podSelector:
          matchLabels:
            app: database
    ports:
      - protocol: TCP
        port: 5432
  - to:
      - namespaceSelector:
          matchLabels:
            name: kube-system
    ports:
      - protocol: UDP
        port: 53
```

Problem: Pods can't access the Kubernetes API.
```yaml
# DON'T DO THIS - Pods can't access Kubernetes API
egress:
  - to:
      - podSelector:
          matchLabels:
            app: database
```

Solution: Allow API server access.
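A sketch of one way to do this, assuming the API server is reachable at 10.96.0.1 on port 443 (these values vary per cluster; check with kubectl get endpoints kubernetes):

```yaml
# Sketch: allow egress to the Kubernetes API server.
# The IP and port are assumptions for illustration; depending on
# your CNI, the Service ClusterIP may be rewritten to the node
# address, so you may need the endpoint IP and port instead.
egress:
  - to:
      - ipBlock:
          cidr: 10.96.0.1/32
    ports:
      - protocol: TCP
        port: 443
```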
Problem: Breaks legitimate traffic.
```yaml
# DON'T DO THIS - Breaks legitimate traffic
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: specific-app
```

Solution: Test policies before deploying to production.
Problem: DNS won't work.
```yaml
# DON'T DO THIS - DNS won't work
egress:
  - to:
      - ipBlock:
          cidr: 10.0.0.0/8
```

Solution: Explicitly allow kube-dns:
```yaml
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            name: kube-system
    ports:
      - protocol: UDP
        port: 53
```

Problem: Namespace selectors won't work.
```yaml
# This won't work if the namespace isn't labeled
namespaceSelector:
  matchLabels:
    name: production
```

Solution: Label namespaces:
```shell
kubectl label namespace production name=production
```

Start with a deny-all policy, then allow specific traffic:

```yaml
# First, deny all
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
# Then, allow specific traffic
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 80
```

To isolate a namespace so only its own Pods can talk to each other:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
          namespaceSelector:
            matchLabels:
              name: production
```

Label Pods, namespaces, and nodes consistently:
```shell
kubectl label pod web-1 app=web tier=frontend
kubectl label namespace production name=production
```

Add comments explaining policy intent:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  annotations:
    description: "Allow frontend to backend communication on port 8080"
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Test policies in staging before production:
```shell
# Apply policy
kubectl apply -f network-policy.yaml

# Test connectivity
kubectl exec -it pod-1 -- curl http://pod-2:8080

# Check policy
kubectl get networkpolicy
kubectl describe networkpolicy policy-name
```

Tools like Cilium and Calico provide UIs for policy management.
Use network monitoring tools to track denied traffic:

```shell
# Calico provides policy logs
kubectl logs -n calico-system -l k8s-app=calico-node
```

To inspect policies:

```shell
kubectl get networkpolicy
kubectl get networkpolicy -n production
kubectl describe networkpolicy allow-web
kubectl get networkpolicy allow-web -o yaml
```

To verify connectivity:

```shell
# Test if Pod can reach another Pod
kubectl exec -it pod-1 -- curl http://pod-2:8080

# Test DNS
kubectl exec -it pod-1 -- nslookup kubernetes.default
```

Not all network plugins support NetworkPolicy:
| Plugin | NetworkPolicy Support |
|---|---|
| Flannel | No |
| Calico | Yes |
| Weave | Yes |
| Cilium | Yes |
| AWS VPC CNI | Limited |
| Azure CNI | Yes |
Check your network plugin documentation for support.
In episode 36, we've explored NetworkPolicy in Kubernetes in depth. We've learned how to implement network segmentation and zero-trust networking by controlling traffic between Pods and external networks.
Key takeaways:

- By default, all Pods can communicate with each other; NetworkPolicy restricts this.
- Start with deny-all policies, then explicitly allow only necessary traffic.
- Always allow DNS (UDP port 53 to kube-system) in egress policies.
- Label namespaces so namespaceSelector rules work.
- NetworkPolicy requires a supporting network plugin (Calico, Weave, or Cilium; not Flannel).

NetworkPolicy is essential for securing Kubernetes clusters. By starting with deny-all policies and explicitly allowing necessary traffic, you can build secure, resilient clusters.