January 14, 2019 Marie H.

Jenkins Declarative Pipelines: A Practical Guide

When I joined a project at IBM, I inherited a pile of Jenkinsfiles written in the older scripted syntax — raw Groovy, node { } blocks, inconsistent stage naming, and error handling scattered across the file or missing entirely. Refactoring them into declarative format was one of the first things I did, and it made a real difference in readability and maintainability.

Here's what I learned.

Scripted vs. Declarative

The scripted pipeline format is essentially Groovy code with Jenkins DSL calls mixed in. It's flexible but has no enforced structure. The declarative format, introduced in Jenkins 2.x, wraps everything in a pipeline { } block with a defined schema. The engine validates the structure before running anything, which catches typos and misconfigurations early.

Scripted:

node {
  stage('Build') {
    sh 'mvn clean package'
  }
}

Declarative:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'mvn clean package'
      }
    }
  }
}

The declarative version looks more verbose for a trivial example, but the structure pays off as pipelines grow.

Basic Structure

The main top-level sections are agent, environment, options, parameters, triggers, tools, stages, and post. Every declarative pipeline needs at least agent and stages.

pipeline {
  agent any

  options {
    timeout(time: 30, unit: 'MINUTES')
    disableConcurrentBuilds()
    buildDiscarder(logRotator(numToKeepStr: '10'))
  }

  environment {
    APP_NAME = 'my-service'
    DEPLOY_ENV = 'staging'
  }

  stages {
    stage('Build') {
      steps {
        sh 'make build'
      }
    }
    stage('Test') {
      steps {
        sh 'make test'
      }
    }
  }

  post {
    always {
      junit 'test-results/**/*.xml'
    }
    failure {
      slackSend channel: '#builds', message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
    }
    success {
      echo 'Build passed.'
    }
  }
}

The options { timeout(...) } block saved us from hung builds that would block the executor queue indefinitely. Set it and forget it.

The post Block

post runs after all stages complete. It supports always, success, failure, unstable, aborted, changed, fixed, and cleanup conditions. I used it for:

  • Archiving artifacts (archiveArtifacts)
  • Publishing test results (junit, publishHTML)
  • Sending Slack notifications on failure
  • Cleaning up Docker images

The changed condition is useful for "back to green" notifications — it only fires when the build status changed from the previous run, so you're not spammed on every passing build.
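
A minimal sketch of that pattern, with the channel name and message text as placeholders:

```groovy
post {
  changed {
    script {
      // changed fires only when the result differs from the previous
      // build, so a recovery after a failure produces exactly one message.
      if (currentBuild.currentResult == 'SUCCESS') {
        slackSend channel: '#builds',
                  message: "Back to green: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
      }
    }
  }
}
```

If you only care about the failure-to-success transition, the fixed condition expresses it directly without the script block.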

Parallel Stages

Running tests in parallel was one of the bigger wins. The syntax is clean:

stage('Test') {
  parallel {
    stage('Unit Tests') {
      steps {
        sh 'make test-unit'
      }
    }
    stage('Integration Tests') {
      steps {
        sh 'make test-integration'
      }
    }
    stage('Lint') {
      steps {
        sh 'make lint'
      }
    }
  }
}

Each parallel branch runs on whatever agent is available. If you need them on specific agents, each inner stage can declare its own agent block. Be careful about shared workspace state if you're running parallel stages on the same node — they share the workspace directory.
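
A sketch of the per-stage agent variant; the agent labels here are hypothetical:

```groovy
stage('Test') {
  parallel {
    stage('Unit Tests') {
      agent { label 'linux' }    // hypothetical agent label
      steps {
        sh 'make test-unit'
      }
    }
    stage('Integration Tests') {
      agent { label 'docker' }   // hypothetical agent label
      steps {
        sh 'make test-integration'
      }
    }
  }
}
```

Each inner agent gets its own workspace (with an automatic checkout by default), which also sidesteps the shared-workspace problem.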

Deployment Gates with when

The when directive lets you conditionally run a stage without wrapping everything in a Groovy if block:

stage('Deploy to Production') {
  when {
    branch 'main'
    not { changeRequest() }
  }
  steps {
    sh './deploy.sh production'
  }
}

You can compose conditions with allOf, anyOf, and not. changeRequest() returns true when the build is a pull request — useful for blocking production deploys from PR pipelines.
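
For example, a staging gate that accepts either of two branch patterns while still excluding PR builds (the stage name and deploy script are illustrative). Conditions listed side by side inside when are implicitly allOf:

```groovy
stage('Deploy to Staging') {
  when {
    // Either branch pattern qualifies...
    anyOf {
      branch 'main'
      branch 'release/*'
    }
    // ...but never from a pull request build.
    not { changeRequest() }
  }
  steps {
    sh './deploy.sh staging'
  }
}
```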

Parameterized Builds

pipeline {
  agent any

  parameters {
    string(name: 'VERSION', defaultValue: 'latest', description: 'Docker image tag to deploy')
    booleanParam(name: 'SKIP_TESTS', defaultValue: false, description: 'Skip test stages')
    choice(name: 'ENVIRONMENT', choices: ['staging', 'production'], description: 'Target environment')
  }

  stages {
    stage('Test') {
      when {
        expression { !params.SKIP_TESTS }
      }
      steps {
        sh 'make test'
      }
    }
  }
}

Parameters are accessible via params.PARAM_NAME. They show up in the Jenkins UI as form fields when you trigger a manual build.
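
A deploy stage wired to the parameters above might look like this (the deploy script is illustrative; note that booleanParam yields a real Boolean while string and choice yield strings):

```groovy
stage('Deploy') {
  steps {
    // Groovy interpolation is fine here because params are not secrets.
    sh "./deploy.sh ${params.ENVIRONMENT} ${params.VERSION}"
  }
}
```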

Secrets with withCredentials

Never put secrets in environment { } blocks in plaintext. Use the Credentials Binding plugin instead:

stage('Deploy') {
  steps {
    withCredentials([
      usernamePassword(
        credentialsId: 'dockerhub-creds',
        usernameVariable: 'DOCKER_USER',
        passwordVariable: 'DOCKER_PASS'
      ),
      string(credentialsId: 'slack-token', variable: 'SLACK_TOKEN')
    ]) {
      sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
      sh './deploy.sh'
    }
  }
}

Jenkins masks the secret values in the build log, and the variables are scoped to the withCredentials block. Keep the sh strings single-quoted so the shell expands the variables at runtime: interpolating a secret with Groovy double quotes bakes the value into the command line, and Jenkins will warn about insecure interpolation.
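
Declarative pipelines also offer the credentials() helper for environment { }, which binds a credential without plaintext; for a username/password credential it automatically exposes variables with _USR and _PSW suffixes. The credential ID here matches the earlier example:

```groovy
pipeline {
  agent any
  environment {
    // Resolved at runtime from the Jenkins credential store; masked in logs.
    DOCKERHUB = credentials('dockerhub-creds')
  }
  stages {
    stage('Login') {
      steps {
        // DOCKERHUB_USR and DOCKERHUB_PSW are derived automatically.
        sh 'echo "$DOCKERHUB_PSW" | docker login -u "$DOCKERHUB_USR" --password-stdin'
      }
    }
  }
}
```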

Shared Libraries

The real payoff at IBM came from extracting common logic into a shared library. Once you have a vars/ directory in a shared library repo, you can call those vars from any Jenkinsfile:

@Library('my-shared-lib@main') _

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        myBuildStep(image: 'node:18', command: 'npm run build')
      }
    }
  }
}

This cut our Jenkinsfiles from 200+ lines down to 30-40 lines for most services, and bug fixes to the shared library propagate across all pipelines automatically.
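
The library side of a call like myBuildStep is a global variable file. A hypothetical vars/myBuildStep.groovy, assuming the Docker Pipeline plugin is installed, might look like:

```groovy
// vars/myBuildStep.groovy in the shared library repo.
// Invoked from a Jenkinsfile as:
//   myBuildStep(image: 'node:18', command: 'npm run build')
def call(Map config = [:]) {
  def image = config.image ?: 'node:18'        // defaults are illustrative
  def command = config.command ?: 'make build'
  // Run the command inside a container on the current agent.
  docker.image(image).inside {
    sh command
  }
}
```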

The script { } Trap

The one thing that tripped up my teammates: declarative pipelines don't let you use arbitrary Groovy inside steps. If you need a loop or conditional logic that the declarative syntax can't express, wrap it in a script { } block:

steps {
  script {
    def services = ['api', 'worker', 'scheduler']
    services.each { svc ->
      sh "docker build -t myapp/${svc}:${env.BUILD_NUMBER} ./${svc}"
    }
  }
}

Use script { } sparingly. If you find yourself reaching for it often, that logic probably belongs in a shared library function or a shell script, not in the Jenkinsfile.


Updated March 2026: Most greenfield projects I see now use GitHub Actions or Tekton. Both are worth learning. But Jenkins declarative pipelines are still the standard in a large number of enterprise environments, and the concepts — stage ordering, parallel execution, deployment gates, secret binding — translate directly. If you're working in GitHub Actions, on.push.branches, jobs.<job>.if, and jobs.<job>.strategy.matrix map pretty cleanly to what's covered here.