Structuring Jenkins Shared Libraries for Enterprise CI/CD
Early in my time at IBM I inherited a Jenkins setup where 20 teams each had their own Jenkinsfile. Some were 300 lines. Some were 40. They all did roughly the same thing — build a Docker image, push to ECR, update a Kubernetes deployment — but each team had reinvented the wheel slightly differently. When we found a security issue in the Docker build step, I had to open 20 pull requests. When we wanted to standardize Slack notifications, 20 pull requests. When we updated the ECR push to use OIDC instead of long-lived credentials, you guessed it: 20 pull requests.
The fix was Jenkins Shared Libraries, and it's one of the best investments we made in that platform.
The Structure
A shared library is a Git repository with a specific directory layout:
my-shared-lib/
├── vars/
│   ├── buildAndPush.groovy
│   ├── runTests.groovy
│   └── notifySlack.groovy
├── src/
│   └── com/
│       └── myorg/
│           ├── SlackNotifier.groovy
│           └── DeploymentConfig.groovy
└── resources/
    ├── com/myorg/deploy-template.yaml
    └── com/myorg/default-config.json
The three directories serve different purposes. vars/ contains global variables — every .groovy file in vars/ becomes a function you can call by filename from any pipeline that loads the library. src/ contains proper Groovy classes with package declarations, useful for more complex logic you want to organize as objects. resources/ holds static files you can load with libraryResource().
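As a sketch of how resources/ gets used, here is a hypothetical vars/ step (the step name and the placeholder-token convention are illustrative, not from our actual library) that loads the deploy template and writes it into the workspace:

```groovy
// vars/renderDeployTemplate.groovy -- hypothetical step name
def call(Map config = [:]) {
    // libraryResource reads a file out of the library's resources/ directory
    def template = libraryResource 'com/myorg/deploy-template.yaml'
    // IMAGE_PLACEHOLDER is an assumed token in the template, shown for illustration
    writeFile file: 'deploy.yaml', text: template.replace('IMAGE_PLACEHOLDER', config.image)
}
```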
The vars/ Pattern
This is where you put the things teams will actually call. A file vars/buildAndPush.groovy defines a call() method, which is what gets invoked when a Jenkinsfile calls buildAndPush(...).
Here's a realistic one we used for Docker builds and ECR pushes:
def call(Map config = [:]) {
    def image = config.image ?: error("image required")
    def registry = config.registry ?: env.ECR_REGISTRY
    sh "docker build -t ${image} ."
    docker.withRegistry("https://${registry}", 'ecr:us-east-1:aws-credentials') {
        docker.image("${image}").push(env.BUILD_NUMBER)
        docker.image("${image}").push('latest')
    }
}
Teams call this as buildAndPush(image: 'my-service', registry: '123456789.dkr.ecr.us-east-1.amazonaws.com'). If they don't pass a registry, it falls back to a global environment variable we set at the Jenkins controller level. The error() call causes the pipeline to fail with a clear message rather than silently using a null value.
There are two ways to load the library in a Jenkinsfile. At the top of the file with an annotation:
@Library('my-shared-lib@main') _
Or inline, which lets you compute the version dynamically:
library 'my-shared-lib@main'
The underscore after the annotation is required — a Groovy annotation must attach to something, and the underscore is a throwaway placeholder that gives @Library a target while doing nothing itself.
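Putting the pieces together, a consuming Jenkinsfile can stay very small. This is a sketch, not one of our actual pipelines — the stage layout and service name are illustrative:

```groovy
// Jenkinsfile in a consumer repo -- illustrative
@Library('my-shared-lib@v1.2.0') _

pipeline {
    agent any
    stages {
        stage('Build & Push') {
            steps {
                // registry falls back to env.ECR_REGISTRY set on the controller
                buildAndPush(image: 'my-service')
            }
        }
    }
}
```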
Classes in src/
For more complex logic, vars/ functions get unwieldy. This is where src/ Groovy classes come in.
Our SlackNotifier class encapsulates all the notification logic — different message formats for success, failure, and unstable builds, channel routing based on the team name, and a fallback to a central #ci-alerts channel if the team-specific channel doesn't exist:
package com.myorg

class SlackNotifier implements Serializable {
    def script
    String channel

    SlackNotifier(script, String channel) {
        this.script = script
        this.channel = channel
    }

    def notifySuccess(String buildUrl) {
        script.slackSend(
            channel: channel,
            color: 'good',
            message: "Build succeeded: ${buildUrl}"
        )
    }

    def notifyFailure(String buildUrl, String stageName) {
        script.slackSend(
            channel: channel,
            color: 'danger',
            message: "Build failed at ${stageName}: ${buildUrl}"
        )
    }
}
The implements Serializable is not optional — Jenkins serializes pipeline state to disk between steps, and any object that isn't serializable will cause cryptic errors when the pipeline resumes after an agent reconnection.
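Classes in src/ can't call pipeline steps directly, which is why the constructor takes the script object. A thin vars/ wrapper keeps that plumbing out of Jenkinsfiles — the file name matches notifySlack.groovy from the layout above, but the exact signature here is an assumption of this sketch:

```groovy
// vars/notifySlack.groovy -- sketch of a wrapper around the src/ class
import com.myorg.SlackNotifier

def call(String channel, String status) {
    // 'this' is the pipeline script, giving the class access to steps like slackSend
    def notifier = new SlackNotifier(this, channel)
    if (status == 'success') {
        notifier.notifySuccess(env.BUILD_URL)
    } else {
        notifier.notifyFailure(env.BUILD_URL, env.STAGE_NAME)
    }
}
```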
We also had a DeploymentConfig class that read from a deploy.yaml file in the repo root, parsing environment-specific settings rather than requiring teams to pass them all as pipeline parameters. Fewer parameters, less room for mistakes.
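A minimal sketch of that pattern, assuming the readYaml step from the Pipeline Utility Steps plugin and an illustrative deploy.yaml shape keyed by environment name:

```groovy
// src/com/myorg/DeploymentConfig.groovy -- sketch; the deploy.yaml schema is illustrative
package com.myorg

class DeploymentConfig implements Serializable {
    Map settings

    DeploymentConfig(script, String environment) {
        // readYaml is provided by the Pipeline Utility Steps plugin
        def all = script.readYaml(file: 'deploy.yaml')
        this.settings = all[environment] ?: [:]
    }

    String get(String key, String fallback = null) {
        settings[key] ?: fallback
    }
}
```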
Version Pinning
This is the part that matters most in regulated environments — financial services, healthcare, anywhere you need to audit what ran when. There, pinning the library version is non-negotiable.
@Library('my-shared-lib@v1.2.0') _
With this, the pipeline behavior is locked to the tested version of the library. It won't silently change because someone merged a bug to main. The @main form is convenient for development but I'd never use it in production pipelines. Tags are immutable (if you don't force-push, which you shouldn't); branch tips are not.
The flip side is that teams need to update their pinned version to pick up improvements. We handled this by adding a Renovate config to the shared library repo that would open automated PRs in consumer repos when a new library version was published. Teams that wanted automatic updates opted in; teams that needed strict audit trails pinned manually.
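Renovate can track @Library pins in consumer repos with a custom regex manager. This is a sketch of what that config might look like — the manager field names and datasource should be verified against your Renovate version, and the repo name is illustrative:

```json
{
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["^Jenkinsfile$"],
      "matchStrings": ["@Library\\('my-shared-lib@(?<currentValue>v[\\d.]+)'\\)"],
      "depNameTemplate": "myorg/my-shared-lib",
      "datasourceTemplate": "github-tags"
    }
  ]
}
```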
Testing
This is where a lot of shared library projects fall down. JenkinsPipelineUnit lets you unit test Groovy pipeline code without a running Jenkins instance.
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test
import static org.junit.Assert.assertTrue

class BuildAndPushTest extends BasePipelineTest {

    @Override
    @Before
    void setUp() throws Exception {
        super.setUp()
        // Mock the docker global object
        helper.registerAllowedMethod('docker', [], { ... })
    }

    @Test
    void testBuildAndPushCallsDockerBuild() {
        def script = loadScript('vars/buildAndPush.groovy')
        script.call(image: 'my-app', registry: 'my-registry.example.com')
        assertJobStatusSuccess()
        // Verify that at least one sh step ran a docker build
        assertTrue(helper.callStack.findAll { call ->
            call.methodName == 'sh' && call.args[0].contains('docker build')
        }.size() > 0)
    }
}
It's not perfect — some Jenkins-specific APIs are hard to mock faithfully — but it catches the obvious mistakes like null dereferences and incorrect conditionals before you discover them in a running pipeline. We ran these tests on every PR to the shared library repo itself.
Governance
The technical structure is the easy part. The harder question is: who owns the shared library, and how do you manage change?
Our answer: a platform engineering team owned the library. Any developer could open a PR, but merges required review from a platform team member. We used semantic versioning — patch for bug fixes, minor for new features, major for breaking changes. We maintained a CHANGELOG.md and enforced a two-week deprecation window for anything that would break existing callers. Breaking changes went out with a deprecation warning in the old behavior first, then removal in the next major version.
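A deprecation warning in a vars/ step can be as simple as delegating to the replacement while logging loudly — the step names here are hypothetical, not from our actual library:

```groovy
// vars/oldBuildAndPush.groovy -- hypothetical deprecated step
def call(Map config = [:]) {
    // Keep working until the next major version, but make the warning hard to miss
    echo "WARNING: oldBuildAndPush is deprecated and will be removed in v2.0.0; use buildAndPush instead"
    buildAndPush(config)
}
```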
Communication mattered more than I expected. Slack announcements, a monthly "platform update" in the engineering all-hands, and comments in the PRs that updated pinned versions explaining what changed and why. Teams were more willing to update promptly when they understood what they were getting.
The payoff was immediate and real. When we updated the ECR authentication from long-lived IAM user credentials to OIDC-based role assumption, it was one PR to the shared library. Every pipeline got the fix when they updated their pinned version. That's the only justification you need for the initial investment in building this correctly.
