In this episode, we'll discuss Kubernetes CronJob, a controller for running Jobs on a time-based schedule. We'll learn how CronJob enables scheduled tasks, recurring operations, and automated maintenance in Kubernetes.

In the previous episode, we learned about Job, which runs tasks to completion. In episode 15, we'll discuss CronJob, which builds on Job to provide scheduled, recurring task execution.
Note: Here I'll be using a Kubernetes Cluster installed through K3s.
If you're familiar with Unix/Linux cron, CronJob in Kubernetes works similarly - it creates Jobs on a repeating schedule. Think of it as your cluster's task scheduler for automated backups, reports, cleanups, and any recurring operations.
A CronJob creates Jobs on a time-based schedule. It uses the same cron format as Unix/Linux systems to define when Jobs should run. CronJob manages the lifecycle of Jobs, creating them at scheduled times and cleaning up old Jobs.
Think of CronJob like a cron daemon in Linux - it runs tasks at specified times automatically. In Kubernetes, CronJob creates Job objects on schedule, which then create Pods to execute the actual work.
Key characteristics of CronJob:

- Creates Jobs automatically on a cron schedule
- Uses standard Unix/Linux cron syntax
- Manages the lifecycle of the Jobs it creates, including cleanup of old ones
- Supports concurrency control, history limits, suspension, and time zones

CronJob is designed for recurring tasks that need to run on a schedule: backups, report generation, log cleanup, cache warming, and similar maintenance work.

Without CronJob, you would need to:

- Create Jobs manually every time a task should run
- Run an external cron daemon that calls kubectl on a schedule
- Build and maintain your own scheduling logic

Let's understand the key differences:
| Aspect | CronJob | Job |
|---|---|---|
| Execution | Scheduled, recurring | One-time |
| Schedule | Cron syntax | Immediate |
| Job creation | Automatic on schedule | Manual |
| Use case | Recurring tasks | One-time tasks |
| Management | Creates and manages Jobs | Runs Pods directly |
| Cleanup | Automatic with history limits | Manual or TTL |
Example scenario: you need a database backup every night at 2:00 AM. With a CronJob scheduled as "0 2 * * *", Kubernetes creates a backup Job automatically each night; without it, someone (or something) has to create that Job manually every day.
CronJob uses standard cron format with 5 fields:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
* * * * *
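The five fields above can be sanity-checked programmatically. Below is a minimal Python sketch of a 5-field parser — illustrative only (it skips extensions such as MON/JAN names and 7-for-Sunday; the real parsing is done by the CronJob controller):

```python
# Expand each cron field into the set of integer values it matches.
# Simplified: supports "*", single values, ranges (a-b), steps (*/n, a-b/n),
# and comma-separated lists -- but not names like MON or JAN.

FIELD_RANGES = [  # (low, high) for minute, hour, day of month, month, day of week
    (0, 59), (0, 23), (1, 31), (1, 12), (0, 6),
]

def expand_field(field, lo, hi):
    """Turn one field ('*/15', '6,18', '1-5', '30', '*') into a set of ints."""
    values = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_text = part.split("/")
            step = int(step_text)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start_text, end_text = part.split("-")
            start, end = int(start_text), int(end_text)
        else:
            start = end = int(part)
        if not (lo <= start <= end <= hi):
            raise ValueError(f"value out of range: {part!r}")
        values.update(range(start, end + 1, step))
    return values

def parse_schedule(schedule):
    """Parse a 5-field cron schedule into five sets of matching values."""
    fields = schedule.split()
    if len(fields) != 5:
        raise ValueError("a cron schedule needs exactly 5 fields")
    return [expand_field(f, lo, hi) for f, (lo, hi) in zip(fields, FIELD_RANGES)]

# "0 6,18 * * *" (twice a day) fires at minute 0 of hours 6 and 18:
minutes, hours, *_rest = parse_schedule("0 6,18 * * *")
print(sorted(minutes), sorted(hours))  # [0] [6, 18]
```

With that in mind, the common schedules below are easy to read off field by field.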
Every minute:
* * * * *

Every hour at minute 0:
0 * * * *

Every day at 2:30 AM:
30 2 * * *

Every Monday at 9:00 AM:
0 9 * * 1

Every 15 minutes:
*/15 * * * *

Every 6 hours:
0 */6 * * *

First day of every month at midnight:
0 0 1 * *

Every weekday at 6:00 PM:
0 18 * * 1-5

Every Sunday at 3:00 AM:
0 3 * * 0

Twice a day (6 AM and 6 PM):
0 6,18 * * *

Let's create a basic CronJob:
Create a file named cronjob-basic.yml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - date; echo "Hello from CronJob!"
          restartPolicy: OnFailure

This CronJob runs every minute.
Apply the configuration:
sudo kubectl apply -f cronjob-basic.yml

Verify the CronJob is created:

sudo kubectl get cronjobs

Or use the shorthand:

sudo kubectl get cj

Output:
NAME            SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello-cronjob   */1 * * * *   False     0        <none>          10s

Wait a minute and check the Jobs created:
sudo kubectl get jobs

Output:
NAME                     COMPLETIONS   DURATION   AGE
hello-cronjob-28458640   1/1           5s         45s
hello-cronjob-28458641   1/1           4s         5s

Check the Pods:
sudo kubectl get pods

View logs from a completed Pod:
sudo kubectl logs hello-cronjob-28458640-abc12

Output:
Sun Mar 1 10:00:00 UTC 2026
Hello from CronJob!

Control how CronJob handles overlapping executions:
Allows concurrent Jobs to run:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-cronjob
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - echo "Starting"; sleep 120; echo "Done"
          restartPolicy: OnFailure

Behavior: each task sleeps for 120 seconds but the schedule fires every 60 seconds, so a new Job starts while the previous one is still running and multiple Jobs run concurrently. Allow is the default policy.
Prevents concurrent Jobs:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sequential-cronjob
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - echo "Starting"; sleep 120; echo "Done"
          restartPolicy: OnFailure

Behavior: because each task runs for 120 seconds, any run scheduled while the previous Job is still active is skipped entirely; a new Job starts only at a scheduled time when nothing is running.
Replaces currently running Job with new one:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: replace-cronjob
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - echo "Starting"; sleep 120; echo "Done"
          restartPolicy: OnFailure

Behavior: when the next scheduled time arrives while the previous Job is still running, that Job is terminated and replaced by a new one, so at most one Job is active at a time.
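To make the three policies concrete, here is a toy Python simulation (not Kubernetes code) of the examples above: a schedule that fires every 60 seconds running a task that takes 120 seconds, under each concurrencyPolicy:

```python
def simulate(policy, schedule_s=60, job_s=120, horizon_s=300):
    """Toy model of concurrencyPolicy: returns (tick, event) pairs."""
    events = []
    running = []  # end times of Jobs still executing
    for now in range(0, horizon_s, schedule_s):
        running = [end for end in running if end > now]  # finished Jobs drop out
        if policy == "Allow":
            running.append(now + job_s)          # always start; overlaps accumulate
            events.append((now, "started"))
        elif policy == "Forbid":
            if running:                          # previous Job still active
                events.append((now, "skipped"))
            else:
                running.append(now + job_s)
                events.append((now, "started"))
        elif policy == "Replace":
            events.append((now, "replaced" if running else "started"))
            running = [now + job_s]              # kill predecessor, start fresh
    return events

for policy in ("Allow", "Forbid", "Replace"):
    print(policy, [event for _, event in simulate(policy)])
```

Allow starts a Job at every tick even though the previous one is still running, Forbid skips every other tick while a task is active, and Replace starts one Job per tick but kills its predecessor first.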
Control how many completed and failed Jobs to keep:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: history-cronjob
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command: ["echo", "Task completed"]
          restartPolicy: OnFailure

This CronJob:

- Runs every 5 minutes
- Keeps the 3 most recent successful Jobs
- Keeps only the most recent failed Job
Default values:

successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1

Set a deadline for starting Jobs:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: deadline-cronjob
spec:
  schedule: "0 2 * * *"
  startingDeadlineSeconds: 3600
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command: ["echo", "Task completed"]
          restartPolicy: OnFailure

This CronJob:

- Runs daily at 2:00 AM
- Skips any run that cannot start within 3600 seconds (1 hour) of its scheduled time
Use case: If cluster was down during scheduled time, Job can still start when cluster comes back up (within deadline).
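The decision the controller makes for a missed run can be sketched in a few lines of Python (illustrative only, not the actual controller logic):

```python
from datetime import datetime, timedelta

def may_still_start(scheduled, now, starting_deadline_seconds):
    """A missed run may still be started if 'now' is within the deadline
    of its originally scheduled time; otherwise it is skipped."""
    return now - scheduled <= timedelta(seconds=starting_deadline_seconds)

scheduled = datetime(2026, 3, 1, 2, 0)  # nightly run scheduled for 02:00

# Cluster back 40 minutes late: still inside the 1-hour deadline
print(may_still_start(scheduled, datetime(2026, 3, 1, 2, 40), 3600))  # True

# Cluster back 90 minutes late: past the deadline, the run is skipped
print(may_still_start(scheduled, datetime(2026, 3, 1, 3, 30), 3600))  # False
```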
Temporarily pause CronJob execution:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: suspended-cronjob
spec:
  schedule: "*/1 * * * *"
  suspend: true
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command: ["echo", "Task completed"]
          restartPolicy: OnFailure

With suspend: true, CronJob won't create new Jobs.
Suspend an existing CronJob:
sudo kubectl patch cronjob hello-cronjob -p '{"spec":{"suspend":true}}'

Resume a suspended CronJob:

sudo kubectl patch cronjob hello-cronjob -p '{"spec":{"suspend":false}}'

Specify a timezone for the schedule (Kubernetes 1.25+):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: timezone-cronjob
spec:
  schedule: "0 9 * * *"
  timeZone: "America/New_York"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: busybox:1.36
            command: ["echo", "Good morning!"]
          restartPolicy: OnFailure

This runs at 9:00 AM Eastern Time.
Important: Timezone support requires Kubernetes 1.25 or later. Without the timeZone field, the schedule uses the controller manager's timezone (usually UTC).
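A quick standard-library sketch shows why the explicit field matters — "0 9 * * *" in America/New_York is not 09:00 UTC (Python 3.9+ with zoneinfo; illustrative only, and it assumes timezone data is installed on the host):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+ standard library

# 9:00 AM in New York on 1 March 2026 (EST, UTC-5 -- DST has not started yet)
ny_run = datetime(2026, 3, 1, 9, 0, tzinfo=ZoneInfo("America/New_York"))
utc_run = ny_run.astimezone(ZoneInfo("UTC"))

print(utc_run.strftime("%H:%M"))  # 14:00 -- five hours later in UTC
```

Note that the UTC offset changes when daylight saving time starts, which is exactly the drift the timeZone field protects you from.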
To see detailed information about a CronJob:
sudo kubectl describe cronjob hello-cronjob

Output:
Name: hello-cronjob
Namespace: default
Labels: <none>
Annotations: <none>
Schedule: */1 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 3
Failed Job History Limit: 1
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Pod Template:
Labels: <none>
Containers:
hello:
Image: busybox:1.36
Command:
/bin/sh
-c
date; echo "Hello from CronJob!"
Environment: <none>
Mounts: <none>
Volumes: <none>
Last Schedule Time: Sun, 01 Mar 2026 10:05:00 +0000
Active Jobs: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 5m cronjob-controller Created job hello-cronjob-28458640
Normal SuccessfulCreate 4m cronjob-controller Created job hello-cronjob-28458641
Normal SuccessfulCreate 3m cronjob-controller Created job hello-cronjob-28458642

Now let's look at some real-world examples. First, a daily database backup:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: database-backup
  labels:
    app: backup
    type: database
spec:
  schedule: "0 2 * * *"
  timeZone: "UTC"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 7
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 3600
      template:
        metadata:
          labels:
            app: backup
            type: database
        spec:
          containers:
          - name: backup
            image: postgres:15-alpine
            command:
            - /bin/sh
            - -c
            - |
              BACKUP_FILE="/backup/db-backup-$(date +%Y%m%d-%H%M%S).sql"
              pg_dump -h $DB_HOST -U $DB_USER -d $DB_NAME > $BACKUP_FILE
              gzip $BACKUP_FILE
              echo "Backup completed: ${BACKUP_FILE}.gz"
            env:
            - name: DB_HOST
              value: "postgres-service"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_NAME
              value: "production"
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: backup-pvc
          restartPolicy: OnFailure

This CronJob:

- Runs every day at 2:00 AM UTC
- Never overlaps (concurrencyPolicy: Forbid) and gives up after 2 retries or 1 hour (activeDeadlineSeconds)
- Dumps the database with pg_dump, compresses it, and stores it on a PersistentVolumeClaim
- Keeps 7 successful and 3 failed Jobs of history
- Reads credentials from a Secret instead of hardcoding them
Next, an hourly log cleanup:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-cleanup
spec:
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: busybox:1.36
            command:
            - /bin/sh
            - -c
            - |
              echo "Starting log cleanup..."
              find /logs -name "*.log" -mtime +7 -delete
              echo "Cleanup completed"
            volumeMounts:
            - name: logs
              mountPath: /logs
          volumes:
          - name: logs
            hostPath:
              path: /var/log/app
          restartPolicy: OnFailure

This CronJob:

- Runs at the top of every hour
- Deletes *.log files older than 7 days from /var/log/app on the node (mounted via hostPath)
- Keeps a small Job history (3 successful, 1 failed)
A weekly report generator:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-report
spec:
  schedule: "0 8 * * 1"
  timeZone: "America/New_York"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 4
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          containers:
          - name: report-generator
            image: python:3.11-slim
            command:
            - python
            - -c
            - |
              import datetime
              print("Generating weekly report...")
              print(f"Report date: {datetime.datetime.now()}")
              # Generate report logic here
              print("Report generated successfully")
            env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
            - name: SMTP_HOST
              value: "smtp.example.com"
            resources:
              requests:
                memory: "512Mi"
                cpu: "500m"
              limits:
                memory: "1Gi"
                cpu: "1000m"
          restartPolicy: OnFailure

This CronJob:

- Runs every Monday at 8:00 AM Eastern Time
- Retries a failed run at most once (backoffLimit: 1)
- Sets resource requests and limits for the report container
- Reads the database URL from a Secret
A daily cache warmer:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cache-warmer
spec:
  schedule: "0 6 * * *"
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      activeDeadlineSeconds: 1800
      template:
        spec:
          containers:
          - name: warmer
            image: curlimages/curl:latest
            command:
            - /bin/sh
            - -c
            - |
              echo "Warming cache..."
              curl -s http://api-service/api/warm-cache
              echo "Cache warmed successfully"
          restartPolicy: OnFailure

This CronJob:

- Runs daily at 6:00 AM
- Replaces a still-running warm-up rather than stacking a second one (concurrencyPolicy: Replace)
- Aborts any run that exceeds 30 minutes (activeDeadlineSeconds: 1800)
- Calls the api-service warm-cache endpoint with curl
A daily certificate expiry check:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: certificate-checker
spec:
  schedule: "0 0 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 7
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: checker
            image: alpine:latest
            command:
            - /bin/sh
            - -c
            - |
              apk add --no-cache openssl
              echo "Checking certificates..."
              echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -dates
              echo "Certificate check completed"
          restartPolicy: OnFailure

This CronJob:

- Runs daily at midnight
- Installs openssl and prints the validity dates of example.com's certificate
- Keeps a week of successful runs (and 3 failed ones) for auditing
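If you would rather compute days-to-expiry than eyeball dates, the notAfter line from the openssl output above can be parsed. A small Python sketch (assumes openssl's default date format, e.g. "notAfter=Mar  1 12:00:00 2027 GMT"):

```python
from datetime import datetime, timezone

def days_until_expiry(not_after_line, now):
    """Parse an openssl 'notAfter=...' line and return whole days remaining."""
    stamp = not_after_line.split("=", 1)[1]
    expires = datetime.strptime(stamp, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)  # openssl prints GMT
    return (expires - now).days

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(days_until_expiry("notAfter=Mar  1 12:00:00 2027 GMT", now))  # 365
```

You could exit non-zero when the remaining days fall below a threshold, so a failed Job (and your failed-Job alerting) doubles as the expiry warning.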
To monitor CronJobs, these commands are useful:

sudo kubectl get cronjobs

sudo kubectl describe cronjob hello-cronjob

sudo kubectl get jobs --selector=cronjob=hello-cronjob   # works if your jobTemplate sets a cronjob label

sudo kubectl get cronjobs -w

sudo kubectl get cronjob hello-cronjob -o jsonpath='{.status.lastScheduleTime}'

sudo kubectl get events --sort-by='.lastTimestamp' | grep CronJob

Problem: Using wrong cron format or invalid values.
Solution: Validate cron syntax before applying:
# Use online cron validators or test locally
# Correct: */5 * * * * (every 5 minutes)
# Common mistake: 5 * * * * runs at minute 5 of every hour, not every 5 minutes

Problem: Multiple Jobs running concurrently when they shouldn't.
Solution: Set appropriate concurrencyPolicy:
spec:
  concurrencyPolicy: Forbid  # For tasks that shouldn't overlap

Problem: Accumulation of old Jobs consuming resources.
Solution: Set history limits:
spec:
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1

Problem: Jobs run at wrong times due to timezone confusion.
Solution: Explicitly set timezone (Kubernetes 1.25+):
spec:
  timeZone: "America/New_York"

Problem: Missed schedules pile up when the cluster recovers.
Solution: Set startingDeadlineSeconds:
spec:
  startingDeadlineSeconds: 3600

Problem: CronJob Pods consume excessive resources.
Solution: Always set resource limits in Job template:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

Choose a concurrency policy based on task requirements:
spec:
  concurrencyPolicy: Forbid     # For sequential tasks
  # concurrencyPolicy: Allow    # For independent tasks
  # concurrencyPolicy: Replace  # For latest-only tasks

Prevent Job accumulation:
spec:
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1

Handle missed schedules gracefully:
spec:
  startingDeadlineSeconds: 3600

Make schedules explicit (Kubernetes 1.25+):
spec:
  timeZone: "UTC"

Prevent Jobs from running too long:
jobTemplate:
  spec:
    activeDeadlineSeconds: 3600

Organize and filter CronJobs:
metadata:
  labels:
    app: backup
    type: database
    schedule: daily

Never hardcode sensitive data:
env:
- name: PASSWORD
  valueFrom:
    secretKeyRef:
      name: credentials
      key: password

Prevent resource exhaustion:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

Verify cron syntax and timing:
# Test with a frequent schedule first
schedule: "*/1 * * * *"  # Every minute for testing

# Then change to the production schedule
schedule: "0 2 * * *"    # Daily at 2 AM

Set up alerts for failed Jobs:
# Check for failed Jobs
sudo kubectl get jobs --field-selector status.successful=0

To delete a CronJob:

sudo kubectl delete cronjob hello-cronjob

This deletes the CronJob and all Jobs it created.
To keep the Jobs:

sudo kubectl delete cronjob hello-cronjob --cascade=orphan

This deletes only the CronJob, leaving its Jobs (and their Pods) in place.
Check if CronJob is suspended:
sudo kubectl get cronjob hello-cronjob -o jsonpath='{.spec.suspend}'

Check schedule syntax:
sudo kubectl describe cronjob hello-cronjob

Check Job logs:
# Get the latest Job
JOB=$(sudo kubectl get jobs --sort-by=.metadata.creationTimestamp | tail -1 | awk '{print $1}')

# Get a Pod from that Job
POD=$(sudo kubectl get pods --selector=job-name=$JOB -o jsonpath='{.items[0].metadata.name}')

# View its logs
sudo kubectl logs $POD

Adjust history limits:
sudo kubectl patch cronjob hello-cronjob -p '{"spec":{"successfulJobsHistoryLimit":3}}'

In episode 15, we've explored CronJob in Kubernetes in depth. We've learned what CronJob is, how it builds on Job to provide scheduled execution, and how to use it for recurring tasks.
Key takeaways:

- CronJob creates Jobs automatically using standard 5-field cron syntax
- concurrencyPolicy (Allow, Forbid, Replace) controls what happens when runs overlap
- successfulJobsHistoryLimit and failedJobsHistoryLimit keep old Jobs from piling up
- startingDeadlineSeconds bounds how late a missed run may still start
- suspend: true pauses scheduling without deleting the CronJob
- timeZone (Kubernetes 1.25+) makes schedules explicit instead of relying on the controller's timezone
CronJob is essential for automating recurring tasks in Kubernetes. By understanding CronJob, you can effectively schedule backups, generate reports, clean up resources, and automate maintenance tasks without external schedulers.
Are you getting a clearer understanding of CronJob in Kubernetes? In the next episode 16, we'll discuss Node Selector, a simple mechanism for controlling Pod placement on specific nodes based on labels. Keep your learning momentum going and look forward to the next episode!