Master Helm Charts from origins to production deployment. Learn why they exist, their evolution, core concepts, and implement a real-world microservices platform.

Helm Charts exist because Kubernetes manifests are verbose, repetitive, and hard to manage at scale. When you deploy applications to Kubernetes, you write YAML files defining pods, services, deployments, and more. But what happens when you need to deploy the same application across multiple environments? Or when you need to share applications with others? Or when you need to manage dozens of interdependent services?
Helm solves these problems by treating Kubernetes applications as packages. It's a package manager for Kubernetes, similar to npm for Node.js or pip for Python. Helm Charts are reusable, templated Kubernetes manifests that can be versioned, shared, and deployed consistently.
In this post, we'll explore why Helm exists, its history, the problems it solves, and how to use it effectively in real-world scenarios.
Before Helm, managing Kubernetes applications meant dealing with several hard problems: duplicated YAML across environments, no record of what version was deployed where, no dependency management, and no standard way to share applications.
Teams either wrote custom scripts, used kustomize, or manually managed hundreds of YAML files. This was error-prone and didn't scale.
Helm provides templating to eliminate YAML duplication, packaging and versioning of applications, one-command install/upgrade/rollback, and dependency management between charts.
The philosophy: Kubernetes applications should be as easy to deploy as installing software on your laptop.
Helm started as a side project at Deis (later acquired by Microsoft) in 2015. The initial goal was simple: create a package manager for Kubernetes. Early Helm was experimental and had limitations.
It was a client-side-only tool with a simple chart format, and the ecosystem around it was still small.
Helm v2 became the standard. It introduced Tiller, an in-cluster server component that managed releases, along with chart repositories, release upgrades, and rollbacks.
Helm v2 was powerful but had issues. Tiller required cluster-wide permissions, creating security concerns. The architecture was complex.
Helm v3 was a major redesign addressing v2's problems: Tiller was removed entirely (Helm now talks directly to the Kubernetes API using the operator's own credentials), release metadata moved into Secrets in the release's namespace, and upgrades use a three-way strategic merge patch.
Helm v3 is the current standard and what we'll focus on.
Despite Kubernetes's complexity, Helm remains essential: it standardizes how applications are packaged, shared, and upgraded, and a large ecosystem of ready-made charts already exists.
A Chart is a package of Kubernetes manifests. It's a directory with a specific structure containing a Chart.yaml file with metadata, a values.yaml file with default configuration, a templates/ directory of manifest templates, and an optional charts/ directory of dependencies.
Charts are versioned and can be stored in repositories.
A Release is a running instance of a Chart. When you install a Chart, Helm creates a Release. You can have multiple Releases of the same Chart with different configurations.
Example: You might have three Releases of the nginx Chart - one for dev, one for staging, one for production.
Values are configuration parameters that customize a Chart. They're defined in values.yaml and can be overridden at install/upgrade time.
Values use a dot notation for nested access: image.repository, image.tag, replicaCount.
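For example, those dot paths correspond to this nesting in values.yaml (the concrete values shown are illustrative):

```yaml
# values.yaml -- the nesting the dot paths refer to
replicaCount: 2        # .Values.replicaCount
image:
  repository: nginx    # .Values.image.repository
  tag: "1.25"          # .Values.image.tag
```

At install time, `--set image.tag=1.26` overrides the nested tag field without touching the file.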
Templates are Go template files that generate Kubernetes manifests. They use variables, conditionals, loops, and functions to create dynamic YAML.
Example template:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-deployment
spec:
replicas: {{ .Values.replicaCount }}
template:
spec:
containers:
- name: app
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}Repositories are collections of Charts. They're similar to npm registries or Docker registries. You can add public repositories or host private ones.
Common public repositories include the Bitnami chart repository, plus the thousands of community charts indexed on Artifact Hub.
Hooks are actions that run at specific points in the release lifecycle: pre-install, post-install, pre-upgrade, post-upgrade, pre-delete, post-delete, pre-rollback, post-rollback, and test.
Hooks are useful for database migrations, cleanup, or validation.
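A hook is an ordinary manifest carrying a helm.sh/hook annotation; a minimal sketch (the weight and delete policy are optional):

```yaml
metadata:
  annotations:
    "helm.sh/hook": pre-install                   # run before resources are installed
    "helm.sh/hook-weight": "0"                    # lower weights run first
    "helm.sh/hook-delete-policy": hook-succeeded  # delete the hook resource on success
```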
Subcharts are Charts that depend on other Charts. They're declared in Chart.yaml and stored in the charts/ directory.
This enables reuse of common components, composition of large applications from smaller ones, and independent versioning of each piece.
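Declaring a dependency looks like this in Chart.yaml (the chart name and version here are illustrative):

```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: "12.5.8"                              # illustrative version constraint
    repository: https://charts.bitnami.com/bitnami
```

Running `helm dependency update` then downloads the referenced chart archive into the charts/ directory.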
When you install a Chart, Helm merges values.yaml with any command-line overrides, renders the templates into plain Kubernetes manifests, submits them to the Kubernetes API, and records the result as a new Release revision.
Helm tracks Releases using Kubernetes secrets. Each Release has a name, a revision number, and a stored copy of its rendered manifests and values. This enables upgrades and rollbacks.
Helm resolves dependencies by reading the dependencies list in Chart.yaml, downloading each referenced chart into the charts/ directory (helm dependency update), and rendering parent chart and subcharts together.

On Linux/macOS:
```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

Verify installation:

```bash
helm version
```

Add the Bitnami repository:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```

Update repositories:

```bash
helm repo update
```

List repositories:

```bash
helm repo list
```

Search for available charts:

```bash
helm search repo nginx
```

Get chart information:

```bash
helm show chart bitnami/nginx
```

View default values:

```bash
helm show values bitnami/nginx
```

Install with default values:

```bash
helm install my-nginx bitnami/nginx
```

Install with custom values:

```bash
helm install my-nginx bitnami/nginx \
  --set replicaCount=3 \
  --set image.tag=1.25
```

Install from a values file:

```bash
helm install my-nginx bitnami/nginx -f values.yaml
```

List releases:

```bash
helm list
```

Get release status:

```bash
helm status my-nginx
```

View release values:

```bash
helm get values my-nginx
```

Upgrade to a new version:

```bash
helm upgrade my-nginx bitnami/nginx \
  --set replicaCount=5
```

Rollback to previous release:

```bash
helm rollback my-nginx 1
```

Remove a release:

```bash
helm uninstall my-nginx
```

Create a new chart:

```bash
helm create my-app
```

This generates:
```
my-app/
├── Chart.yaml
├── values.yaml
├── charts/
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── _helpers.tpl
│   └── NOTES.txt
└── README.md
```

Define chart metadata:
```yaml
apiVersion: v2
name: my-app
description: A Helm chart for my application
type: application
version: 1.0.0
appVersion: "1.0"
maintainers:
  - name: Your Name
    email: your@email.com
```

Define default configuration:
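A minimal values.yaml consistent with the service template below, following helm create conventions (the concrete values are illustrative):

```yaml
replicaCount: 1

image:
  repository: nginx        # illustrative image
  tag: "1.25"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP          # read by .Values.service.type
  port: 80                 # read by .Values.service.port
```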
Create templates/deployment.yaml:
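A trimmed sketch of the deployment template, assuming the my-app.fullname, my-app.labels, and my-app.selectorLabels helpers that helm create generates in _helpers.tpl:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - name: http            # the service below targets this named port
              containerPort: 80
              protocol: TCP
```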
Create templates/service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "my-app.selectorLabels" . | nindent 4 }}
```

Lint the chart:

```bash
helm lint my-app
```

Dry-run to see generated manifests:

```bash
helm install my-app ./my-app --dry-run --debug
```

Problem: Values are hardcoded in templates, making charts inflexible.
Why it happens: Developers forget to parameterize configuration.
Solution: Move all configurable values to values.yaml and reference them from templates:

```yaml
replicas: {{ .Values.replicaCount }}
```

Not:

```yaml
replicas: 3
```

Problem: Charts don't follow Helm conventions, confusing users.
Why it happens: Developers create charts without studying examples.
Solution: Follow Helm best practices: use the standard directory layout, keep shared template snippets in _helpers.tpl, and document every configurable value.

Problem: Pods consume unlimited resources, causing cluster instability.
Why it happens: Resource limits are forgotten during development.
Solution: Always define resource requests and limits:
```yaml
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi
```

Problem: Chart changes aren't tracked, making rollbacks difficult.
Why it happens: Developers update charts without incrementing versions.
Solution: Follow semantic versioning:
```yaml
version: 1.2.3
appVersion: "2.0"
```

Increment version for every release.
Problem: Too many subcharts create maintenance burden.
Why it happens: Developers try to make one chart do everything.
Solution: Keep charts focused and simple. Use subcharts for truly reusable components.
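When a subchart is only sometimes needed, gate it with the dependency condition field instead of forking the chart; a sketch (name and version are illustrative):

```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: "12.5.8"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled   # skipped when values set postgresql.enabled=false
```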
Follow semver for chart versions:
```yaml
version: 1.2.3
appVersion: "2.0.1"
```

Include comprehensive documentation:
```yaml
apiVersion: v2
name: my-app
description: A production-ready application
home: https://github.com/myorg/my-app
sources:
  - https://github.com/myorg/my-app
maintainers:
  - name: Your Name
    email: your@email.com
keywords:
  - app
  - production
```

Run database migrations before deployment:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "my-app.fullname" . }}-migrate
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["./migrate.sh"]
      restartPolicy: Never
```

Define liveness and readiness probes:
```yaml
containers:
  - name: app
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    livenessProbe:
      httpGet:
        path: /health
        port: http
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: http
      initialDelaySeconds: 5
      periodSeconds: 5
```

Separate configuration from code:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "my-app.fullname" . }}-config
data:
  app.conf: |-
    {{- .Values.appConfig | nindent 4 }}
```

Use helm test for validation:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "my-app.fullname" . }}-test"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: test
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      command: ["./test.sh"]
  restartPolicy: Never
```

Let's build a practical example - a microservices platform with API gateway, user service, product service, and PostgreSQL database.
```
┌─────────────────────────────────────────┐
│           Kubernetes Cluster            │
├─────────────────────────────────────────┤
│             Ingress (nginx)             │
│                    ↓                    │
│            API Gateway (Kong)           │
│                    ↓                    │
│    ┌──────────────┬──────────────┐      │
│    │ User Service │ Product Svc  │      │
│    └──────────────┴──────────────┘      │
│                    ↓                    │
│          PostgreSQL Database            │
└─────────────────────────────────────────┘
```

```
microservices-platform/
├── Chart.yaml
├── values.yaml
├── charts/
│   ├── api-gateway/
│   ├── user-service/
│   ├── product-service/
│   └── postgresql/
├── templates/
│   ├── namespace.yaml
│   ├── configmap.yaml
│   └── secrets.yaml
└── README.md
```

templates/namespace.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
```

templates/configmap.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-config
  namespace: {{ .Values.namespace }}
data:
  api-gateway-url: "http://api-gateway:80"
  user-service-url: "http://user-service:8080"
  product-service-url: "http://product-service:8080"
  database-host: "{{ .Values.postgresql.primary.service.name }}"
  database-port: "5432"
```

1. Create chart structure:
```bash
helm create microservices-platform
```

2. Create subchart for API Gateway:
```bash
cd microservices-platform/charts
helm create api-gateway
```

Repeat for user-service and product-service.
3. Update Chart.yaml with dependencies:
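The dependency block might look like this, assuming the service subcharts live in the local charts/ directory and PostgreSQL comes from Bitnami (all versions are illustrative):

```yaml
# microservices-platform/Chart.yaml
apiVersion: v2
name: microservices-platform
version: 0.1.0
dependencies:
  - name: api-gateway
    version: 0.1.0
    repository: "file://charts/api-gateway"      # local subchart
  - name: user-service
    version: 0.1.0
    repository: "file://charts/user-service"
  - name: product-service
    version: 0.1.0
    repository: "file://charts/product-service"
  - name: postgresql
    version: "12.5.8"                            # illustrative version
    repository: "https://charts.bitnami.com/bitnami"
```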
```bash
cd ..
helm dependency update
```

4. Validate chart:
```bash
helm lint microservices-platform
```

5. Dry-run installation:
```bash
helm install platform ./microservices-platform --dry-run --debug
```

6. Install the chart:
```bash
helm install platform ./microservices-platform \
  --namespace microservices \
  --create-namespace
```

7. Verify installation:
```bash
helm status platform -n microservices
```

8. View deployed resources:
```bash
kubectl get all -n microservices
```

Scale user service to 5 replicas:
```bash
helm upgrade platform ./microservices-platform \
  --set user-service.replicaCount=5 \
  -n microservices
```

Deploy new version of product service:
```bash
helm upgrade platform ./microservices-platform \
  --set product-service.image.tag=1.1 \
  -n microservices
```

Rollback to previous release:
```bash
helm rollback platform 1 -n microservices
```

Check release history:
```bash
helm history platform -n microservices
```

View current values:
```bash
helm get values platform -n microservices
```

View generated manifests:
```bash
helm get manifest platform -n microservices
```

Helm Charts exist because Kubernetes applications need packaging, versioning, and lifecycle management. They transform Kubernetes from a low-level orchestration platform into a package management system.
While Kubernetes is powerful, Helm makes it accessible. It reduces boilerplate, enables code reuse, and provides a standard way to deploy applications.
The key takeaways: treat charts as versioned packages, keep configuration in values.yaml rather than hardcoded in templates, always set resource limits, bump the chart version on every change, and use hooks and tests for lifecycle tasks.
Start with existing charts from Bitnami or other repositories. Learn the structure by examining real-world examples. Then create your own charts for your applications.
For the microservices platform example, you now have a production-ready template. Adapt it to your specific requirements, add monitoring and logging, implement proper secrets management, and you're ready to deploy.
Helm is not just a tool - it's a philosophy: applications should be as easy to deploy as installing software on your laptop.