Deploying to Kubernetes Before Helm Existed
I've been running Kubernetes in production at DoubleHorn since late 2015. We're a multi-cloud startup — AWS, GCP, and some on-prem — and Kubernetes looked like the right abstraction layer to tie all of it together. The problem was that in 2015 the ecosystem around Kubernetes barely existed. No Helm. No Tiller. No package manager. You had kubectl, YAML manifests, and your own ingenuity.
What that meant operationally was: you applied manifests with kubectl apply -f, and when you needed to update a running deployment to a new image version, you ran kubectl set image. That was the workflow. Everything else you had to build yourself.
Here's what I built.
The core problem
Our deployments each had a deployment.yaml with a hardcoded image tag:
containers:
- name: myservice
image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myservice:1.3
Every time you built a new image, you had two options: edit the YAML file and re-apply it, or use kubectl set image to update the running deployment in place without touching the manifest. I went with set image for day-to-day deploys because editing a hardcoded tag in a YAML file and committing it every single time felt worse than a surgical command. In retrospect, the right answer was templating — but that's what Helm eventually gave us.
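For comparison, the edit-the-YAML option can be sketched as a small helper. This is a hypothetical script, not something from our repo: it just swaps the hardcoded tag in a manifest with a regex, after which you'd re-apply the file.

```python
#!/usr/bin/env python3
# bump_manifest_tag: hypothetical sketch of the "edit the YAML" option.
# Rewrites the hardcoded image tag on any "image:" line; you would then
# kubectl apply the result. Assumes tags look like 1.3 (major.minor).
import re

def bump_manifest_tag(manifest_text, new_tag):
    # Replace the trailing :<tag> on lines of the form "image: <repo>:<tag>"
    return re.sub(
        r'^(\s*image:\s*\S+):\d+\.\d+\s*$',
        r'\1:' + new_tag,
        manifest_text,
        flags=re.MULTILINE,
    )

manifest = """containers:
- name: myservice
  image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myservice:1.3
"""
print(bump_manifest_tag(manifest, "1.4"))
```

In practice this still left you committing a one-character diff (or carrying a dirty working tree) on every single deploy, which is exactly what pushed me toward set image.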
The deploy script
I wrote deploykube to wrap the full deploy cycle: build the Docker image, figure out the latest ECR tag, increment it, push the new image, and update the Kubernetes deployment. It depended on two small Python helpers and the AWS CLI.
#!/bin/bash
# deploykube: Build a Docker image, push to ECR, and deploy to Kubernetes
# Usage: deploykube <image-name> <deployment-name>
#
# Dependencies: aws CLI, docker, kubectl, get_current_tag_from_json, increment_tag_version
# The ECR registry base URL — update this for your account
ECR_BASE="<your-account>.dkr.ecr.us-east-1.amazonaws.com"
log_action() {
# Write a timestamped entry to the deploy log
echo "$(date) - $1" >> /tmp/deploy.log
}
get_current_tag() {
# Query ECR for the highest semver tag on the given image
local docker_image="$1"
local images
images=$(aws ecr list-images --repository-name "$docker_image")
local tag
tag=$(echo "$images" | get_current_tag_from_json)
if [ -z "$tag" ]; then
log_action "Unable to find latest tag for $docker_image"
log_action "Deploy service: FAILED"
exit 1
fi
log_action "Found latest tag $tag for $docker_image"
echo "$tag"
}
increment_tag() {
# Increment the minor version of a semver tag (e.g. 1.3 -> 1.4)
local tag="$1"
local new_tag
new_tag=$(echo "$tag" | increment_tag_version)
if [ -z "$new_tag" ]; then
log_action "Unable to increment tag version"
log_action "Deploy service: FAILED"
exit 1
fi
log_action "Incremented tag to $new_tag"
echo "$new_tag"
}
tag_image() {
# Tag the locally-built image with the ECR path and new version
local docker_image="$1"
local tag="$2"
local full_tag="$ECR_BASE/$docker_image:$tag"
log_action "Tagging $docker_image:latest as $full_tag"
if docker tag "$docker_image:latest" "$full_tag"; then
log_action "Tagged successfully"
echo "$full_tag"
else
log_action "Tagging failed"
exit 1
fi
}
push_image() {
# Authenticate with ECR and push the tagged image
local docker_image="$1"
# aws ecr get-login produces a docker login command — pipe to sh to execute it
aws ecr get-login | sh
if docker push "$docker_image"; then
log_action "Pushed $docker_image to ECR"
else
log_action "Push failed for $docker_image"
exit 1
fi
}
deploy_k8s() {
# Update the running Kubernetes deployment to the new image
local deployment="$1"
local image="$2"
# kubectl set image updates the container image in-place, triggering a rolling update
if kubectl set image "deployment/$deployment" "$deployment=$image"; then
log_action "Deployment $deployment updated to $image"
else
log_action "kubectl set image failed for $deployment"
exit 1
fi
}
# --- Main ---
docker_image="$1"
deployment="$2"
if [ -z "$docker_image" ] || [ -z "$deployment" ]; then
echo "Usage: deploykube <image-name> <deployment-name>"
exit 1
fi
log_action "Starting build for $docker_image"
if docker build -t "$docker_image" .; then
log_action "Docker build: SUCCESS"
# The helpers run inside $( ) subshells, so their exit 1 only exits the
# subshell — propagate failures explicitly or the script carries on with
# empty variables
tag=$(get_current_tag "$docker_image") || exit 1
new_tag=$(increment_tag "$tag") || exit 1
image=$(tag_image "$docker_image" "$new_tag") || exit 1
push_image "$image"
deploy_k8s "$deployment" "$image"
log_action "Deploy complete: $deployment at $new_tag"
else
log_action "Docker build: FAILED"
exit 1
fi
The aws ecr get-login | sh line deserves a note: that command is now deprecated. The modern equivalent is:
aws ecr get-login-password | docker login --username AWS --password-stdin <your-account>.dkr.ecr.us-east-1.amazonaws.com
The old form returned a full docker login command as a string and you piped it to sh to execute it. It worked, but passing credentials through a shell string is not ideal. The new form pipes just the password token directly.
The helper scripts
get_current_tag_from_json is a Python script that reads the JSON output of aws ecr list-images from stdin, finds all tags matching the pattern \d+\.\d (our versioning scheme was 1.0, 1.1, etc.), and returns the max by treating the tags as floats. It's short enough to inline here:
#!/usr/bin/env python
# get_current_tag_from_json: read ECR list-images JSON from stdin,
# return the highest tag matching \d+\.\d (treated as a float)
import sys
import json
import re
data = json.load(sys.stdin)
tags = [
t['imageTag']
for t in data.get('imageIds', [])
if 'imageTag' in t and re.match(r'^\d+\.\d$', t['imageTag'])
]
if tags:
print(max(tags, key=float))
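To make the selection behavior concrete, here's the same filter-and-max logic exercised against a fabricated list-images payload (the digests and tags below are made up for illustration):

```python
# Sketch: the tag-selection logic from get_current_tag_from_json,
# run against a fabricated aws ecr list-images response.
import json
import re

sample = json.loads("""{
  "imageIds": [
    {"imageDigest": "sha256:aaa", "imageTag": "1.3"},
    {"imageDigest": "sha256:bbb", "imageTag": "1.4"},
    {"imageDigest": "sha256:ccc", "imageTag": "latest"},
    {"imageDigest": "sha256:ddd"}
  ]
}""")

tags = [
    t['imageTag']
    for t in sample.get('imageIds', [])
    # skip untagged images and non-semver tags like "latest"
    if 'imageTag' in t and re.match(r'^\d+\.\d$', t['imageTag'])
]
print(max(tags, key=float))  # -> 1.4
```

Untagged images and the floating "latest" tag are filtered out by the regex; only the numeric tags compete for the max.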
increment_tag_version reads a version like 1.3 from stdin and returns 1.4. When the minor version hits 9, it bumps the major:
#!/usr/bin/env python3
# increment_tag_version: read a version like "1.3" from stdin, write "1.4"
import sys
tag = sys.stdin.read().strip()
major, minor = tag.split('.')
minor = int(minor)
major = int(major)
if minor >= 9:
major += 1
minor = 0
else:
minor += 1
print(f"{major}.{minor}")
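A quick check of the rollover behavior, with the same logic lifted into a function (the function name is mine, for illustration):

```python
# Sketch: increment_tag_version's logic as a function, showing the
# 1.9 -> 2.0 rollover alongside the normal 1.3 -> 1.4 case.
def increment(tag):
    major, minor = (int(p) for p in tag.split('.'))
    if minor >= 9:
        # minor is capped at 9, so roll over to the next major
        return f"{major + 1}.0"
    return f"{major}.{minor + 1}"

print(increment("1.3"))  # -> 1.4
print(increment("1.9"))  # -> 2.0
```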
And ksetimage is a one-liner I kept around for manual use when I didn't want the full build cycle — just force a specific image onto a deployment:
#!/bin/bash
# ksetimage: manually set a deployment's image to a specific ECR tag
# Usage: ksetimage <deployment-name> <image:tag>
kubectl set image "deployment/$1" "$1=<your-account>.dkr.ecr.us-east-1.amazonaws.com/$2"
Why kubectl set image instead of re-applying the manifest
The short answer: the manifest had the old tag hardcoded. If I ran kubectl apply -f deployment.yaml after a build, nothing would change — Kubernetes would see the same spec it already had. I'd have to edit the file first, which meant either a commit or a dirty working tree.
kubectl set image deployment/myservice myservice=<ecr-url>/myservice:1.4 is a surgical in-place update. Kubernetes receives a patch to just the image field, triggers a rolling update, and the manifest on disk stays out of sync — but you were going to deal with that in your next config pass anyway. Not great, but it worked for a small team moving fast.
The problems with this approach
The fragility is obvious in hindsight:
- No rollback management. There's no release history. If version 1.4 was bad, you ran kubectl set image again with 1.3. If you'd forgotten what version was running before, you checked the deploy log in /tmp/deploy.log. Yes, /tmp.
- No templating. Staging and production had separate deployment YAMLs with the image tag hardcoded differently in each. Keeping them in sync was manual.
- Tag-as-float is fragile. The scheme of 1.0, 1.1, ..., 1.9, 2.0 works until a tag like 1.10 exists: as a float, 1.10 sorts below 1.9, so the "latest tag" logic breaks. I kept the minor version capped at 9 to avoid this, which is its own kind of absurdity.
- ECR auth on every push. The get-login call is a round trip to AWS on every single deploy. Not a problem at our scale, but it added latency and a failure mode.
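The float-sorting failure is easy to demonstrate, along with the standard fix of comparing versions as tuples of integers:

```python
# Sketch: why max-by-float breaks once a 1.10 tag exists, and the
# usual fix -- compare versions as integer tuples instead of floats.
tags = ["1.9", "1.10", "1.2"]

# float("1.10") == 1.1, so "1.9" incorrectly wins
by_float = max(tags, key=float)

# (1, 10) > (1, 9), so "1.10" correctly wins
by_tuple = max(tags, key=lambda t: tuple(int(p) for p in t.split('.')))

print(by_float)  # -> 1.9
print(by_tuple)  # -> 1.10
```

Tuple comparison is essentially what proper semver libraries do; capping the minor at 9 was a way of never having to find that out the hard way.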
When Helm arrived
Helm showed up in late 2016 / early 2017 and solved all of this properly. Templates with {{ .Values.image.tag }} replaced hardcoded tags. helm upgrade --install gave you atomic releases. helm rollback gave you actual history. Tiller (pre-Helm 3) had its own problems — a cluster-wide daemon with too much privilege — but the core value proposition was immediately obvious.
For roughly 18 months of early Kubernetes, though, shell scripts were the package manager. If you were running k8s in production before Helm stabilized, you wrote something like this. It got the job done.