July 11, 2022 Marie H.

Jenkins Multi-Branch Pipelines: A Practical Setup

Photo by Vito Goričan on Pexels

Before Multi-Branch Pipelines, the Jenkins setups I inherited usually had one of two patterns: a single pipeline job per service that always built from main, or a sprawl of manually created jobs — one per branch — that teams forgot to delete and that cluttered the Jenkins UI with years of stale artifacts. Neither is sustainable past a handful of teams. Multi-Branch Pipelines fixed both problems at once when we standardized on them across our Jenkins instance.

What Multi-Branch Pipeline Actually Does

The core mechanic is simple: Jenkins scans a repository, discovers all branches and pull requests, and automatically creates a pipeline job for each one. Each job runs the Jenkinsfile from its own branch. When a branch is deleted, Jenkins eventually cleans up the corresponding job. When a new branch is pushed, Jenkins discovers it on the next scan (which can be triggered by a webhook) and creates a job automatically.

This means the lifecycle of pipeline jobs mirrors the lifecycle of branches. New feature branch? Pipeline job appears. PR merged and branch deleted? Pipeline job eventually disappears. No manual job creation, no stale jobs accumulating.

Configuration

When you create a Multi-Branch Pipeline job in Jenkins, the key settings are:

Branch Sources. This is where you configure the Git host — GitHub, Bitbucket, GitLab, or plain Git. For GitHub you'll need a credential configured with a GitHub token that has repo scope (for private repos) or public repo read access. This credential is used for both code checkout and for posting build status back to the PR.

Branch Discovery. You can discover all branches, only branches with PRs, or only named branches matching a filter. For most teams I discover all branches, so any branch gets a pipeline, and gate the expensive stages like deploy to specific branch patterns with the when directive in the Jenkinsfile rather than with a discovery filter.

Build Strategies. Two settings matter here: trigger builds on push (via webhook) and periodic scanning as a fallback for missed webhooks. I set the scan interval to 15 minutes. If your webhook is reliable this rarely fires, but it catches the cases where a push happened during a Jenkins restart or a network blip.

Orphaned Item Strategy. How long to keep jobs for branches that no longer exist. We kept 7 days. Long enough for post-merge debugging, short enough that deleted feature branches don't accumulate indefinitely.
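
If you manage Jenkins jobs as code, all four of these settings can be captured with the Job DSL plugin instead of clicking through the UI. A sketch, assuming a hypothetical my-org/payment-service repository and a credential ID of github-token:

```groovy
multibranchPipelineJob('payment-service') {
    branchSources {
        github {
            id('payment-service-src')       // stable ID for this branch source
            repoOwner('my-org')             // placeholder organization
            repository('payment-service')   // placeholder repository
            scanCredentialsId('github-token')
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            daysToKeep(7)  // keep jobs for deleted branches for 7 days
        }
    }
    triggers {
        periodicFolderTrigger {
            interval('15m')  // fallback scan for missed webhooks
        }
    }
}
```

The repository names and credential ID are assumptions for illustration; the structure mirrors the UI settings described above.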

The when Directive for Branch-Gated Stages

Not every stage should run on every branch. Testing should run everywhere. Deployment to staging should run only on main. Deployment to production needs both a green build and a human approval. The when directive handles all of this:

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }

        stage('Test') {
            steps {
                sh 'make test'
            }
            post {
                always {
                    junit 'test-results/**/*.xml'
                }
            }
        }

        stage('Deploy to Staging') {
            when { branch 'main' }
            steps {
                sh 'make deploy-staging'
            }
        }

        stage('Deploy to Production') {
            when {
                // Without beforeInput, declarative pipeline requests input
                // before evaluating when, prompting on every branch.
                beforeInput true
                branch 'main'
                expression { currentBuild.resultIsBetterOrEqualTo('SUCCESS') }
            }
            input {
                message "Deploy to production?"
                ok "Yes"
                submitter "platform-team,release-managers"
            }
            steps {
                sh 'make deploy-production'
            }
        }
    }
}

On a feature branch, only Build and Test run. On main, all four stages are eligible. The production deploy has an additional expression check — I only want the input prompt to appear if the build actually succeeded, not just when the branch matches. If the test stage failed but somehow didn't mark the build as failed, the expression check catches it.

The input block with submitter restricts who can approve the deployment. Only members of the platform-team or release-managers groups in Jenkins can click the approval button. Without this restriction, any developer with Jenkins access could approve a production deployment.

PR Builds and Status Checks

Multi-Branch Pipeline automatically creates jobs for PRs, named something like PR-47. Two environment variables tell you you're in a PR context:

  • CHANGE_ID: the PR number (e.g., 47)
  • CHANGE_TARGET: the target branch the PR is merging into (e.g., main)

These let you gate behavior. If you have integration tests that need to run against the target branch's database schema, you can use CHANGE_TARGET to determine which schema version to use. You can also use them to label artifacts — tagging a Docker image with pr-47 instead of a build number, making it easy to find the image for a specific PR during review.
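
As a sketch of the PR-image tagging idea (the registry and image names here are placeholders, not from the original setup), the built-in changeRequest() condition is true exactly when CHANGE_ID is set:

```groovy
stage('Tag PR Image') {
    // changeRequest() matches only PR jobs, i.e. when env.CHANGE_ID is set
    when { changeRequest() }
    steps {
        // Retag the already-built image with the PR number so reviewers
        // can pull pr-47 instead of hunting for a build number.
        sh "docker tag my-registry/app:${env.GIT_COMMIT} my-registry/app:pr-${env.CHANGE_ID}"
        sh "docker push my-registry/app:pr-${env.CHANGE_ID}"
    }
}
```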

For the status check integration: in your GitHub repository settings, add a required status check matching the context Jenkins posts — with the GitHub Branch Source plugin this is typically continuous-integration/jenkins/pr-merge for PR builds. PRs can't be merged until Jenkins posts a green status. This is the mechanism that makes the CI gate real rather than advisory.

stash/unstash for Inter-Stage Artifacts

When pipeline stages run on different agents — and in a properly scaled Jenkins setup, they often do — files written in one stage aren't automatically available in the next. stash and unstash handle this:

stage('Build') {
    agent { label 'build-agent' }
    steps {
        sh 'make build'
        stash name: 'app-binary', includes: 'dist/**'
    }
}

stage('Test') {
    agent { label 'test-agent' }
    steps {
        unstash 'app-binary'
        sh 'make test'
        stash name: 'test-results', includes: 'test-results/**'
    }
}

stage('Deploy to Staging') {
    when { branch 'main' }
    agent { label 'deploy-agent' }
    steps {
        unstash 'app-binary'
        sh 'make deploy-staging'
    }
}

Stashes are stored on the Jenkins controller and cleaned up at the end of the build. They're not meant for large artifacts — a built binary is fine, a 2GB Docker image tar is not. For large artifacts, use S3 or an artifact repository like Nexus and pass the artifact URL between stages.

If you're building Docker images, push the image to a registry in the build stage and reference the image tag in subsequent stages, rather than stashing the tar. The registry is the artifact store for container images.
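
A sketch of that pattern, with placeholder registry, agent, and deployment names: push once in the build stage, then carry only the tag (an environment variable set with script persists for the rest of the build) into later stages.

```groovy
stage('Build Image') {
    agent { label 'build-agent' }
    steps {
        script {
            // env vars assigned here remain visible in later stages
            env.IMAGE = "my-registry/app:${env.BRANCH_NAME}-${env.BUILD_NUMBER}"
        }
        sh "docker build -t ${env.IMAGE} ."
        sh "docker push ${env.IMAGE}"
    }
}

stage('Deploy to Staging') {
    when { branch 'main' }
    agent { label 'deploy-agent' }
    steps {
        // No unstash: the registry holds the artifact, the env var holds the reference
        sh "kubectl set image deployment/app app=${env.IMAGE}"
    }
}
```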

The Thin Jenkinsfile Pattern

When you combine Multi-Branch Pipelines with a shared library, you can reduce every service's Jenkinsfile to almost nothing:

@Library('platform@v2.1.0') _

runServicePipeline(
    service: 'payment-service',
    dockerImage: 'my-registry/payment-service',
    slackChannel: '#payments-alerts'
)

That's a complete, functional Jenkinsfile. The runServicePipeline function in the shared library handles build, test, Docker push, staging deploy, and production deploy with approval — all the standard stages. The service, dockerImage, and slackChannel parameters tell it what to call things and where to send notifications.

A new team gets a working pipeline with a CI gate, staging deploy on merge to main, and manual approval for production by copying three lines into a Jenkinsfile and filling in their service name. We onboarded seven services in a single afternoon using this pattern.

The shared library owns the pipeline structure. Teams own their build and test commands via their Makefile or equivalent — make build, make test, make deploy-staging. The convention means the library doesn't need to know anything about how a service builds itself.
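
The post doesn't show the library's internals, but declarative pipelines can be defined inside a shared-library step, so a stripped-down runServicePipeline might look roughly like this (the stage list and Slack notification are assumptions based on what's described above):

```groovy
// vars/runServicePipeline.groovy -- a hypothetical sketch, not the actual library
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'make build' }
            }
            stage('Test') {
                steps { sh 'make test' }
                post { always { junit 'test-results/**/*.xml' } }
            }
            stage('Push Image') {
                steps {
                    sh "docker build -t ${config.dockerImage}:${env.BUILD_NUMBER} ."
                    sh "docker push ${config.dockerImage}:${env.BUILD_NUMBER}"
                }
            }
            stage('Deploy to Staging') {
                when { branch 'main' }
                steps { sh 'make deploy-staging' }
            }
        }
        post {
            failure {
                // slackSend is the Slack plugin step
                slackSend channel: config.slackChannel,
                          message: "${config.service} build failed: ${env.BUILD_URL}"
            }
        }
    }
}
```

The key point is the division of labor: the library owns the stage structure, while each stage shells out to the service's own make targets.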

Maintaining It

A few operational habits that have saved me time:

Keep the Jenkins controller's scan logs for Multi-Branch jobs visible. When a new branch isn't getting picked up, the scan log usually tells you why — credentials expired, webhook misconfigured, or the repository hit a rate limit on the GitHub API.

Watch the orphaned item cleanup. Seven days is right for most teams, but for compliance-sensitive projects you may need a longer retention to satisfy audit requirements around what ran when.

Test your webhook configuration after any Jenkins URL change, Jenkins restart, or GitHub organization move. Webhooks are the silent dependency that nobody checks until a PR sits green for an hour with no Jenkins status.

Version-pin your shared library in every Jenkinsfile. @Library('platform@v2.1.0') not @Library('platform@main'). Main moves. A breaking change that merged last night shouldn't cause your 2 AM on-call incident's hotfix pipeline to fail in a new and interesting way.