October 24, 2022 Marie H.

Migrating CI/CD from Jenkins to GitHub Actions

[Updated March 2026: GitHub Actions is now the de facto standard for new projects. Jenkins is still dominant in enterprises with existing investment, but if you're starting fresh or have the runway to migrate, the operational delta is significant. The patterns below hold up.]

I've led CI/CD migrations from Jenkins to GitHub Actions for multiple teams at Innova Solutions clients. The migration is usually the right call for teams that don't have dedicated infrastructure engineers to maintain Jenkins. Here's how to think through it and do it cleanly.

Why Migrate

Jenkins is infrastructure. Someone has to maintain the Jenkins controller, manage plugin updates, handle disks filling up on agents, and debug Groovy syntax errors in Jenkinsfiles. In teams of 5-10 engineers, that someone is usually whoever drew the short straw, doing it on top of their actual job. Plugin conflicts are real and painful. The upgrade cycle for Jenkins core plus plugins is its own project.

GitHub Actions is serverless from your perspective. GitHub runs the infrastructure. Your CI config is YAML in the repo. Native GitHub integration — pull request status checks, secret management, environment deployments — is built in rather than bolted on with plugins.

The downside: you're in GitHub's execution model. Long-running builds, complex pipeline logic, and anything requiring persistent state across steps is harder. I'll come back to this.

Mapping the Concepts

| Jenkins | GitHub Actions |
| --- | --- |
| Jenkinsfile stage | job or step |
| Parallel stages | jobs with strategy.matrix or independent jobs |
| Shared library | reusable workflow or composite action |
| Credentials | GitHub Secrets |
| Agent with label | runner with label |
| when directive | if condition on job/step |
| post block | always-run steps, if: always() |
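One note on the parallel-stages row: Jenkins parallel stages that only vary a parameter map cleanly onto strategy.matrix. A sketch for a Go project (the version numbers are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        go-version: ['1.18', '1.19']  # illustrative versions
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.go-version }}
      - run: go test ./...
```

Truly independent stages (test vs lint) are better expressed as separate jobs, as in the migration example below.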

A Real Migration

Here's a Jenkins pipeline with parallel test stages, Docker build, and ECR push:

// Jenkinsfile (before)
pipeline {
    agent { label 'docker' }
    stages {
        stage('Test') {
            parallel {
                stage('Unit') {
                    steps { sh 'go test ./...' }
                }
                stage('Lint') {
                    steps { sh 'golangci-lint run' }
                }
            }
        }
        stage('Build') {
            steps {
                sh 'docker build -t $ECR_REPO:$BUILD_NUMBER .'
                sh 'docker push $ECR_REPO:$BUILD_NUMBER'
            }
        }
    }
    post {
        always { junit 'test-results/**/*.xml' }
    }
}

The equivalent GitHub Actions workflow:

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  unit-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.19'
      - uses: actions/cache@v3
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
      - run: go test ./... -v 2>&1 | tee test-results.txt
      - uses: actions/upload-artifact@v3
        if: always()
        with:
          name: test-results
          path: test-results.txt

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: golangci/golangci-lint-action@v3

  build-push:
    needs: [unit-test, lint]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      id-token: write  # for OIDC auth to AWS
      contents: read
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::123456789:role/github-actions-ecr
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v1
      - name: Build and push
        run: |
          IMAGE=${{ secrets.ECR_REPO }}:${{ github.sha }}
          docker build -t $IMAGE .
          docker push $IMAGE

The parallel test stages become parallel jobs. The needs: key in build-push expresses the dependency on both tests passing. actions/cache replaces Jenkins workspace caching for the Go module cache.

Note the OIDC authentication to AWS — this is better than storing static AWS credentials as secrets. GitHub generates a short-lived token for the run, AWS validates it. No secret rotation, no credential exposure.
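For OIDC to work, the AWS side needs a role whose trust policy accepts GitHub's identity provider and pins the repository. A sketch, assuming the OIDC provider is already created in the account; the org and repo names are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:ref:refs/heads/main"
      }
    }
  }]
}
```

The sub condition is what scopes the role: here only main-branch runs of one repository can assume it.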

Reusable Workflows vs Composite Actions

Two mechanisms for reuse:

Reusable workflows are full workflow files that other repositories call:

# In repo A, .github/workflows/build-go.yml
on:
  workflow_call:
    inputs:
      go-version:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ inputs.go-version }}
      - run: go build ./...
# In repo B
jobs:
  build:
    uses: myorg/shared-workflows/.github/workflows/build-go.yml@main
    with:
      go-version: '1.19'

Composite actions bundle steps into a single action, reusable within or across repositories:

# .github/actions/setup-go-build/action.yml
name: Setup Go Build Environment
description: Install Go and restore the module cache
inputs:
  go-version:
    required: true
runs:
  using: composite
  steps:
    - uses: actions/setup-go@v3
      with:
        go-version: ${{ inputs.go-version }}
    - uses: actions/cache@v3
      with:
        path: ~/go/pkg/mod
        key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
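Calling a composite action from a workflow in the same repository is a path reference. This assumes the directory layout above, and it needs a checkout first, since the action's files live in the repo:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/actions/setup-go-build
        with:
          go-version: '1.19'
      - run: go build ./...
```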

Reusable workflows replace Jenkins shared libraries at the pipeline level. Composite actions replace Jenkins shared library steps. Use reusable workflows for standardizing the shape of your CI pipelines across teams; use composite actions for bundling common setup steps.

Self-Hosted Runners

The equivalent of a Jenkins agent with VPC access. When your CI needs to reach internal systems (private artifact registry, internal APIs, on-prem resources):

jobs:
  deploy:
    runs-on: [self-hosted, linux, internal-network]

The runner is a process you run on a machine in your network that polls GitHub for jobs. It registers with your repository or organization. It's significantly simpler to operate than a full Jenkins agent — no Remoting JAR, no Java version management, just a runner binary and the runs-on label.

I typically run these as Kubernetes jobs in the target cluster or as long-running deployments using the actions-runner-controller Helm chart.
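With actions-runner-controller, the runner pool is itself a Kubernetes custom resource. A minimal sketch using the original summerwind CRDs (the org name and label are placeholders; newer ARC releases use a different runner scale set API):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: internal-runners
spec:
  replicas: 2
  template:
    spec:
      organization: myorg    # placeholder org
      labels:
        - internal-network   # matches the runs-on label above
```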

Environments and Deployment Gates

GitHub Environments give you approval gates without plugins:

jobs:
  deploy-production:
    environment: production  # requires reviewer approval in GitHub UI
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh production

Configure the production environment in your repo settings to require specific reviewers before jobs targeting it can run. This replaces Jenkins' input step / manual promotion stage, and it's integrated with GitHub's UI so the approver gets a notification and can approve from GitHub directly.

What Jenkins Does Better

Long-running builds with checkpointing. Jenkins can pause a pipeline, wait for input, resume days later. GHA jobs on GitHub-hosted runners time out after 6 hours (72 hours with a workaround) and don't persist state across runs elegantly.

Complex conditional pipeline logic. Jenkinsfiles are Groovy — you can write arbitrary logic. GHA YAML conditions (if:) are limited. For complex multi-branch, multi-condition pipelines, Jenkinsfiles can be cleaner.
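Simple conditions are fine in expressions, e.g. if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/'). When the shape of the pipeline itself must be computed, the usual GHA workaround is a job that emits a JSON matrix. A sketch with hypothetical deploy targets:

```yaml
jobs:
  plan:
    runs-on: ubuntu-latest
    outputs:
      targets: ${{ steps.set.outputs.targets }}
    steps:
      - id: set
        # In a Jenkinsfile this would be ordinary Groovy; here it's a shell step
        run: echo 'targets=["staging","prod"]' >> "$GITHUB_OUTPUT"

  deploy:
    needs: plan
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: ${{ fromJSON(needs.plan.outputs.targets) }}
    steps:
      - run: echo "deploying to ${{ matrix.target }}"
```

It works, but it's clearly a workaround compared to writing the same logic inline in Groovy.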

Fine-grained plugin ecosystem. Jenkins has plugins for everything. GHA Marketplace has grown enormously but some enterprise integrations still have better Jenkins support.

For teams on GitHub with mostly standard build/test/deploy pipelines, GitHub Actions wins clearly. For teams with complex, long-running, stateful pipelines and dedicated Jenkins expertise, the migration may not pay for itself. The key question: are you spending engineering time maintaining Jenkins infrastructure? If yes, migrate.