December 1, 2016 Marie H.

Docker cron jobs with Supervisord


Yeah, this is the post you just spent hours looking for. I know this because I just did the same thing: I was running production Kubernetes clusters, had a use case for a cron job, and could not get it to work at all — this was before Kubernetes had native CronJob support.

Anyway, here's how to run cron inside a Docker container using Supervisord to manage both your app process and cron as sibling processes.

The problem

Docker containers are designed to run a single foreground process, but here we need two long-running ones: the app server and the cron daemon. A common workaround is Supervisord, a process manager that runs as PID 1 and keeps every other process alive under it.

Dockerfile

FROM tiangolo/uwsgi-nginx:python3.11

WORKDIR /app

# Install app dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app

# Install cron
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

# Create log file for cron output
RUN touch /var/log/cron.log

# Write the crontab
# The 'env - `cat /docker-env`' trick injects container env vars into the cron shell,
# since cron runs with a stripped environment and won't see your Docker ENV values.
RUN echo '* * * * * root env - `cat /docker-env` python /app/some_script.py >> /var/log/cron.log 2>&1' \
    > /etc/cron.d/my-cron
# Files in /etc/cron.d must end with a trailing newline or cron silently ignores the entry
RUN echo '' >> /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron

# Copy Supervisord config
COPY configs/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# On container start: dump the current environment to /docker-env,
# then hand off to supervisord
CMD env > /docker-env && /usr/bin/supervisord
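With the Dockerfile above, a build-and-run sketch might look like this (the `my-app` tag and the `DATABASE_URL` value are placeholders for illustration, not from the original setup):

```shell
# Build the image, then run it with the env vars your cron script needs.
# The CMD snapshots these into /docker-env at container startup.
docker build -t my-app .
docker run -d -p 80:80 \
  -e DATABASE_URL=postgres://db.example/app \
  my-app
```

Anything passed with `-e` here ends up in /docker-env, and is therefore visible to the cron job.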

supervisord.conf

[supervisord]
nodaemon=true

[program:uwsgi]
command=/usr/local/bin/uwsgi --ini /app/uwsgi.ini
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
autorestart=true

[program:nginx]
command=/usr/sbin/nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
autorestart=true

[program:cron]
; -f keeps cron in the foreground so Supervisord can supervise it;
; -L 15 is Debian cron's most verbose log level (bitmask 1+2+4+8)
command=/usr/sbin/cron -f -L 15
user=root
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

The env var trick explained

Cron runs with a minimal environment — it does not inherit the ENV variables you set in your Dockerfile or pass with `docker run -e`. The `env > /docker-env` in the CMD captures the full container environment at startup, and the `` env - `cat /docker-env` `` prefix in the crontab re-injects those variables into each cron execution. Without this your script will fail silently because `DATABASE_URL`, `AWS_ACCESS_KEY_ID`, or whatever else it needs simply isn't there.
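You can watch the trick work in isolation. This is a minimal sketch: /tmp/docker-env stands in for /docker-env, and DATABASE_URL is a made-up example variable.

```shell
# Stand-in for the CMD step: capture a variable into a file
echo 'DATABASE_URL=postgres://db.example/app' > /tmp/docker-env

# `env -` starts from an empty environment, much like cron's stripped
# shell; the backtick expansion re-injects every VAR=value line:
env - `cat /tmp/docker-env` sh -c 'echo "got: $DATABASE_URL"'
# prints: got: postgres://db.example/app
```

One caveat: the backtick expansion word-splits on whitespace, so any variable whose value contains spaces or newlines will break the crontab line.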

Modern alternative: Kubernetes CronJob

If you're running this on Kubernetes today, skip all of this and use a native CronJob resource instead:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: my-app:latest
            command: ["python", "/app/some_script.py"]
            envFrom:
            - secretRef:
                name: my-app-secrets
          restartPolicy: OnFailure

Kubernetes handles retries, history, and env vars natively. The Supervisord approach is still useful if you're stuck on plain Docker or need both a long-running app and a cron job in the same container.
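Assuming the manifest above is saved as cronjob.yaml, the day-to-day workflow is roughly this (each tick spawns a Job with a generated name, shown here as a placeholder):

```shell
kubectl apply -f cronjob.yaml
kubectl get cronjob my-cron      # schedule, last run, suspend status
kubectl get jobs                 # one Job object per scheduled tick
kubectl logs job/<job-name>      # output of a single run
```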
