I've written about Helm before and I still use it — for third-party software I'm installing into a cluster, it's genuinely good. But for managing my own application's Kubernetes config across environments, Helm is more machinery than I want. Enter Kustomize, which takes a different approach: instead of templating YAML, it patches it.
The problem Kustomize solves
Staging should run 2 replicas. Production should run 10. Dev uses latest, production pins to a specific tag. Your staging ingress points at staging.example.com, production points at example.com.
The naive approach is three copies of your manifests — k8s/dev/, k8s/staging/, k8s/prod/ — and you keep them in sync manually. Every label change, every new environment variable, every sidecar you add gets applied three times. You will eventually miss one, and the environments will silently drift.
Helm solves this with templating: {{ .Values.replicaCount }} in your deployment, override the value per environment. It works, but now you have a templating language inside your YAML. You can't just kubectl apply a raw Helm chart; it has to be rendered first. The templates get verbose. Junior team members get confused by {{- if .Values.ingress.enabled }} blocks. It's a real framework.
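For contrast, here's roughly what the replica knob looks like on the Helm side — an illustrative fragment, not from any particular chart (the values file name is an assumption):

```yaml
# templates/deployment.yaml (Helm) -- illustrative fragment only
spec:
  replicas: {{ .Values.replicaCount }}
```

```yaml
# values-production.yaml -- hypothetical per-environment values file
replicaCount: 10
```

The deployment manifest is no longer valid YAML on its own; it only means something after `helm template` or `helm install` renders it.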
Kustomize's answer: keep your base config as real, valid, unmodified YAML and express environment differences as patches that get merged on top. No new syntax. No rendering step that produces something unrecognizable. Everything is just YAML.
How the overlay model works
You have a base/ directory with your canonical manifests, and overlays/<env>/ directories that declare what changes for each environment. Kustomize merges the overlay on top of the base at apply time. The base files stay unchanged; overlays only contain what's different.
Note: as of June 2018, Kustomize is a standalone binary — it's not built into kubectl yet. That comes in kubectl 1.14, which isn't out yet. Install the binary:
$ curl -s "https://api.github.com/repos/kubernetes-sigs/kustomize/releases/latest" \
    | grep browser_download_url \
    | grep darwin_amd64 \
    | cut -d '"' -f 4 \
    | xargs curl -O -L
$ mv kustomize_*_darwin_amd64 /usr/local/bin/kustomize
$ chmod +x /usr/local/bin/kustomize
Directory layout
k8s/
  base/
    kustomization.yaml
    deployment.yaml
    service.yaml
    configmap.yaml
  overlays/
    staging/
      kustomization.yaml
      replica-patch.yaml
    production/
      kustomization.yaml
      replica-patch.yaml
      image-patch.yaml
The base
k8s/base/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myorg/web:latest
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-config
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "250m"
k8s/base/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  LOG_LEVEL: "info"
  APP_ENV: "base"
k8s/base/kustomization.yaml:
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
The base kustomization.yaml just declares what files make up the base. That's it.
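The service.yaml isn't shown above; a minimal sketch, assuming it fronts the containerPort: 8080 declared in the deployment, would look like:

```yaml
# k8s/base/service.yaml -- a minimal sketch; the port numbers are an
# assumption based on the containerPort in the base deployment
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```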
Staging overlay
k8s/overlays/staging/kustomization.yaml:
bases:
  - ../../base

namePrefix: staging-

commonLabels:
  environment: staging

patchesStrategicMerge:
  - replica-patch.yaml

configMapGenerator:
  - name: web-config
    behavior: merge
    literals:
      - APP_ENV=staging
      - LOG_LEVEL=debug
k8s/overlays/staging/replica-patch.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
The patch file only needs to contain the keys you want to change. Kustomize does a strategic merge — it understands Kubernetes resource structure and merges intelligently rather than just overwriting. The rest of the deployment (resource limits, container config, labels) stays exactly as defined in the base.
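Strategic merge matters most for lists. A hypothetical patch that adds an environment variable, for example, matches the container by its name key and merges into it instead of replacing the whole containers array:

```yaml
# hypothetical env-patch.yaml -- not part of the setup above, just an
# illustration of strategic merge semantics. The container is matched
# by "name: web", so the image, ports, and resources from the base survive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web           # merge key: matched against the base container
          env:
            - name: FEATURE_FLAG
              value: "on"
```

A plain JSON merge would have clobbered the entire containers list; strategic merge knows that containers merge by name.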
Production overlay
k8s/overlays/production/kustomization.yaml:
bases:
  - ../../base

namePrefix: prod-

commonLabels:
  environment: production

patchesStrategicMerge:
  - replica-patch.yaml
  - image-patch.yaml

configMapGenerator:
  - name: web-config
    behavior: merge
    literals:
      - APP_ENV=production
      - LOG_LEVEL=warn
k8s/overlays/production/replica-patch.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
k8s/overlays/production/image-patch.yaml — pin to a real tag in production instead of latest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
        - name: web
          image: myorg/web:3.2.1
Applying it
$ kustomize build k8s/overlays/staging | kubectl apply -f -
configmap/staging-web-config created
service/staging-web created
deployment.apps/staging-web created
$ kustomize build k8s/overlays/production | kubectl apply -f -
configmap/prod-web-config created
service/prod-web created
deployment.apps/prod-web created
The namePrefix directive prepends the environment name to every resource, so staging and production can coexist in the same cluster without colliding. The commonLabels block adds the environment label to every resource and every pod selector — useful for filtering with kubectl get pods -l environment=staging.
Before applying, you can preview exactly what Kustomize will generate:
$ kustomize build k8s/overlays/production
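The output is plain rendered YAML. The production Deployment comes out roughly like this (trimmed for brevity):

```yaml
# excerpt of `kustomize build k8s/overlays/production` -- approximate,
# trimmed; run the command yourself for the exact output
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-web                  # namePrefix applied
  labels:
    environment: production       # commonLabels applied
spec:
  selector:
    matchLabels:
      app: web
      environment: production     # commonLabels also land on selectors
  template:
    metadata:
      labels:
        app: web
        environment: production
    spec:
      containers:
        - name: web
          image: myorg/web:3.2.1  # image-patch.yaml applied
          # ...
```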
Or diff against what's currently deployed:
$ kustomize build k8s/overlays/production | kubectl diff -f -
That last command is genuinely useful. Before any production deploy I run it and verify I'm applying what I think I'm applying.
Image tag updates
A common CI pattern: your pipeline builds a new image, pushes it with the commit SHA as the tag, and needs to update the deployment. Kustomize has a built-in image transformer for this:
# in your overlay kustomization.yaml
images:
  - name: myorg/web
    newTag: "abc1234"
Or via CLI:
$ kustomize edit set image myorg/web:abc1234
This is cleaner than maintaining a separate image patch file that you rewrite on every build.
Namespace transformation
If you want each environment in its own namespace, add this to the overlay kustomization.yaml:
namespace: staging
Kustomize will set the namespace on every resource in the build. Combined with namePrefix, it's easy to keep environments cleanly separated.
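One caveat: this only sets metadata.namespace on the rendered resources; the Namespace object itself still has to exist. You can create it once with kubectl create namespace staging, or keep a manifest for it outside the overlay — outside, because namePrefix would otherwise rewrite its name too:

```yaml
# namespace.yaml -- kept outside the overlay so namePrefix doesn't
# rename it; apply once with `kubectl apply -f namespace.yaml`
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```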
Kustomize vs. Helm, honestly
Helm wins when:
- You're distributing software as a reusable package (like a Prometheus stack, an ingress controller, a Kafka cluster)
- You need helm rollback and chart versioning
- You're consuming third-party charts from the chart repository ecosystem
Kustomize wins when:
- You're managing your own app's config across environments
- You want to stay in plain YAML without a templating engine
- You want kubectl diff to show you exactly what changes before you apply
- You don't want to explain {{ toYaml .Values.resources | indent 10 }} to someone at code review
For my own services, I use Kustomize. For everything else I install into the cluster — cert-manager, the metrics server, Prometheus, ingress-nginx — I use Helm. They're not competitors; they solve different problems.
The thing I appreciate most about Kustomize is that the base files are valid, unmodified Kubernetes YAML. You can kubectl apply -f k8s/base/ directly in a pinch. With Helm, the templates are only useful after rendering. That simplicity has real value when you're debugging something at 2am and want as little indirection as possible.
