October 15, 2017 Marie H.

Migrating from Docker Swarm to Kubernetes



I liked Docker Swarm. I want to be upfront about that. The docker stack deploy workflow is genuinely pleasant — if you already know Docker Compose, you're basically already there. For small, single-team clusters it worked fine. But we made the switch to Kubernetes this year and I'm not going back. Here's the honest story.

Why We Switched

Not because Kubernetes is objectively better at everything. It isn't. It's more complex, the learning curve is steep, and Swarm was working for us. The actual reasons:

  1. Ecosystem momentum. Tooling is converging on Kubernetes. Helm, Istio, the hosted options (GKE is production-grade; EKS is coming) — all Kubernetes-first. Swarm tooling was stagnating.
  2. RBAC. We needed real role-based access control for a multi-team cluster. Swarm's access model is essentially all-or-nothing.
  3. Declarative API. Everything in Kubernetes is a resource you declare and the control plane reconciles toward. This makes GitOps workflows natural in a way Swarm just doesn't support.
  4. Stateful workloads. StatefulSets and PersistentVolumeClaims are more mature than Swarm's volume story. We have a few databases running in-cluster.

The learning curve was painful. Give yourself a month before you feel productive.

Concept Mapping

Swarm            | Kubernetes                   | Notes
-----------------|------------------------------|----------------------------------------------------------
Service          | Deployment + Service         | Swarm combines scheduling and networking; k8s splits them
Stack            | Namespace                    | Rough equivalent: a logical grouping of resources
Secret           | Secret                       | Values in k8s manifests are base64-encoded
Config           | ConfigMap                    | Direct equivalent
Overlay network  | CNI plugin + Network Policy  | More flexible, and more complex
Node constraint  | nodeSelector / nodeAffinity  | More expressive in k8s

The Deployment/Service split is the biggest mental adjustment. In Swarm, one docker service create gives you running containers and a DNS-addressable endpoint. In Kubernetes, a Deployment manages your pods and a separate Service object gives them a stable network address. In practice you almost always deploy both together.
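The two objects aren't linked by name at all: the Service selects pods whose labels match. Stripped to the essential linkage (everything else omitted from this sketch):

```yaml
# The Service finds the Deployment's pods purely by label match.
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: myapp      # every pod the Deployment creates gets this label
---
kind: Service
spec:
  selector:
    app: myapp          # the Service routes to any pod carrying this label
```

If the labels drift out of sync, the Deployment runs happily and the Service has zero endpoints — a classic first-week debugging session.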

A Real Migration: docker-compose.yml → Kubernetes Manifests

Here's a Swarm stack file we were running:

# docker-compose.yml (Swarm stack)
version: '3.3'

services:
  web:
    image: myapp:1.4.2
    ports:
      - "80:5000"
    environment:
      - FLASK_ENV=production
      - DB_HOST=db.internal
    secrets:
      - db_password
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

secrets:
  db_password:
    external: true

And here's the equivalent Kubernetes Deployment and Service:

# deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: web
          image: myapp:1.4.2
          ports:
            - containerPort: 5000
          env:
            - name: FLASK_ENV
              value: "production"
            - name: DB_HOST
              value: "db.internal"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: db_password
          livenessProbe:
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 15
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: production
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 5000
  type: LoadBalancer

Create the secret separately:

$ kubectl create secret generic myapp-secrets \
    --from-literal=db_password=supersecretpassword \
    --namespace production
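If you'd rather keep everything declarative, the same secret can be written as a manifest. Note that the value under data is just the base64 of the literal (which the kubectl create command above computes for you):

```yaml
# secret.yaml -- equivalent to the kubectl create secret command
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: production
type: Opaque
data:
  db_password: c3VwZXJzZWNyZXRwYXNzd29yZA==   # base64("supersecretpassword")
```

Just don't commit that file to git, for the reasons covered under Gotchas.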

What About Kompose?

Kompose is a tool that converts Compose files to Kubernetes manifests automatically. It works, mostly, but I'd use it as a starting point rather than a final output. What it gets wrong:

  • It doesn't split probes into livenessProbe and readinessProbe properly.
  • The generated YAML uses older API versions you'll want to update.
  • It can't translate Swarm-specific deploy keys (update policy, placement constraints) into the right Kubernetes constructs.

Run Kompose, review everything it generates, and treat it like a first draft.

$ kompose convert -f docker-compose.yml
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "web-service.yaml" created

Networking: Overlay vs CNI

Swarm uses an overlay network. You create a named overlay, attach services to it, and they can talk to each other by service name. Simple.

Kubernetes uses a CNI plugin (Flannel, Calico, Weave, etc.) and has a more nuanced model. Within a namespace, services resolve by name. Cross-namespace, you qualify the name with the namespace (service.namespace), or spell out the full FQDN: service.namespace.svc.cluster.local. You also get Network Policies, which let you restrict which pods can talk to which. Swarm has no equivalent.
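As a sketch of what Network Policies buy you, here's one that only lets the web pods reach an in-cluster database. The labels and port are illustrative (assuming a Postgres pod labeled app: db, which isn't in the stack file above):

```yaml
# networkpolicy.yaml -- only myapp pods may connect to the db pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db            # the pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: myapp # only traffic from the web pods is allowed
      ports:
        - protocol: TCP
          port: 5432
```

One caveat: policies are only enforced if your CNI plugin supports them (Calico does; plain Flannel does not).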

For storage, Swarm volumes are mostly host-mounted or NFS. Kubernetes PersistentVolumeClaims abstract away the backing storage (EBS, GCE PD, NFS, Ceph) and let pods request storage by size and access mode. StatefulSets + PVCs is genuinely the right model for databases in-cluster.
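A claim is only a few lines; the name and size here are illustrative:

```yaml
# pvc.yaml -- request 20Gi of storage without naming the backend
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce     # mountable read-write by a single node
  resources:
    requests:
      storage: 20Gi
```

The cluster decides whether that becomes an EBS volume, a GCE PD, or an NFS export; the pod spec just mounts the claim by name.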

Gotchas

Environment variables. Swarm uses KEY=value strings; Kubernetes expects a list of name/value maps. Don't copy-paste a Compose environment block into a pod spec: at best the API server rejects the manifest, at worst a half-translated entry leaves the variable silently unset and you'll spend time debugging.
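Side by side, the Compose form and the pod-spec form it has to become:

```yaml
# Compose/Swarm form -- invalid inside a Kubernetes pod spec:
#   environment:
#     - FLASK_ENV=production
#
# Kubernetes form:
env:
  - name: FLASK_ENV
    value: "production"
```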

Readiness probes. Swarm healthcheck maps to livenessProbe, but you also want readinessProbe. That's the signal to the Service that a pod is ready to receive traffic. Without it, pods get traffic the moment the container starts, before your app has finished initializing.

Secrets are not encrypted at rest by default. They're base64-encoded in etcd. Enable etcd encryption at rest or use an external provider (Vault, SSM). This caught people on my team off guard.
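To see for yourself that this is encoding rather than encryption (using the placeholder password from earlier):

```shell
# base64 is reversible encoding, not encryption
echo -n "supersecretpassword" | base64
# c3VwZXJzZWNyZXRwYXNzd29yZA==

echo "c3VwZXJzZWNyZXRwYXNzd29yZA==" | base64 -d
# supersecretpassword
```

Anyone with read access to your Secret objects (or to etcd itself) has the plaintext.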

Rolling updates. maxUnavailable: 0 and maxSurge: 1 gives you start-first updates: bring up a new pod before taking one down. The Kubernetes defaults (25% for both) allow running pods to be terminated before their replacements are ready, so set these explicitly if you care about zero downtime.

Wrapping Up

The migration is more work than the concept mapping suggests because you're also absorbing kubectl, RBAC, Ingress controllers, PersistentVolumes, and Helm all at once. Budget time for it. The payoff is a platform with active development, first-class hosted options, and an ecosystem that's all pulling in the same direction. Swarm worked fine; Kubernetes scales better as your requirements grow.