Jenkins World 2017
Thursday, August 31 • 11:45am - 12:30pm
Codifying the Build and Release Process with a Jenkins Pipeline Shared Library


Have you ever been asked to migrate old legacy applications/servers/code to a new datacenter? Have you ever had issues figuring out how to replicate the old environment in the new one?

This war story is about our migration from a Jenkins 1.x instance in an old legacy environment that was being decommissioned, into a brand new Jenkins 2.x instance in a new datacenter. We were given a 60-day window to complete the migration of all 150+ Freestyle jobs from the old Jenkins implementation to the new one.

While copying the XML job configuration files from the old server to the new one would have been easy, replicating the hand-built, monolithic build servers was difficult. Building and reusing template jobs was a second option, but one that would lead to technical debt down the road. Finally, developers were not enthused about constantly replicating basic global configuration changes, such as email addresses, across jobs.

The DevOps team quickly identified similarities across all of the jobs and sought to DRY up the code by building a shared library that abstracts the Jenkins Pipeline configuration and covers the vast majority of jobs. This made global changes much easier, and we were able to pare Jenkinsfiles down to fewer than 10 lines of configuration parameters using wrappers, saving developers hours of learning Pipeline syntax and Groovy.

To address the monolithic build servers, we created a Dockerfile in each app repository with the exact dependencies needed to build that particular app. Now each repository has a Jenkinsfile that describes how to build, test and release the app, plus a Dockerfile and a Docker Compose file that build the environment in which the pipeline runs. This gives us confidence that future Jenkins migrations will be less cumbersome: as long as there is access to GitHub, agents where Docker containers can run, and network connectivity to Artifactory (where we store RPMs), all of the jobs can be picked up and moved over with minimal effort.
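The wrapper approach described above might look roughly like the following sketch. The library name (`build-lib`), the `standardBuild` step, and its parameters are hypothetical illustrations of the pattern, not the actual FireEye code:

```groovy
// vars/standardBuild.groovy -- a global step exposed by a Pipeline shared
// library (hypothetical names; a sketch of the wrapper pattern, not the
// actual FireEye implementation)
def call(Map config = [:]) {
    node(config.label ?: 'docker') {
        stage('Checkout') {
            checkout scm
        }
        stage('Build & Test') {
            // Build the app's own image from the Dockerfile in its repo,
            // then run the build inside that container so dependencies
            // live with the app, not on a hand-built build server.
            def image = docker.build("${config.appName}-build:${env.BUILD_NUMBER}")
            image.inside {
                sh config.buildCommand ?: 'make build test'
            }
        }
        stage('Release') {
            // Publish RPMs to Artifactory, send notifications, etc.
            // (details elided in this sketch)
        }
    }
}

// ---- Jenkinsfile in an application repository ----
// The per-repo file shrinks to a handful of configuration parameters.
@Library('build-lib') _
standardBuild(
    appName: 'my-app',
    buildCommand: 'make rpm',
)
```

With this shape, a global change (say, a new notification address or a different Artifactory endpoint) lands once in the shared library rather than in 150+ job configurations.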

There were already many resources in the Jenkins community about these efforts, such as:
- https://www.cloudbees.com/blog/need-jenkins-pipeline
...but there was no case study covering the successes and failures, or the tips and tricks, of implementing this. We may not have done it the right way or the most efficient way, but by sharing my story at FireEye, I hope to give others a valuable starting point they can cherry-pick ideas from.

This talk concludes with lessons learned about why we were successful, opinions on other tools we tried, such as Jenkins Job Builder, raw Scripted Pipeline and deployments with Fabric, and our plans to move to Declarative Pipeline in the future.


Alvin Huang

DevOps Engineer, FireEye
Alvin Huang is a DevOps Engineer on FireEye's FaaS team. Currently, he and his team work to streamline the build and release process for various FaaS applications with open source Jenkins, GitHub and Docker. He is passionate about building CI/CD pipelines using open source software...

Thursday August 31, 2017 11:45am - 12:30pm PDT
Golden Gate A