Onsite training at Jenkins World provides an opportunity to gain hands-on, in-depth skills, paired with practical strategies for accelerating application delivery with Jenkins. Training will be delivered by Training Partner sponsors: Infostretch, OSA, and ECS Digital. The course runs for two full days (August 28-29) before the main conference sessions begin, allowing you to maximize your learning opportunities at Jenkins World.
If that isn’t enough to persuade you, here are a few more reasons to sign up:
Learn from Jenkins experts: Training is delivered by experts from organizations certified to deliver Jenkins training.
Get ready for certification: Brush up on your Jenkins expertise to help prepare for the Certified Jenkins Engineer (CJE) or Certified CloudBees Jenkins Engineer (CCJE) exam. You will get a voucher to take either exam for FREE with your Jenkins World registration.
Take the training - then stick around for the sessions: Jenkins World will encompass 55+ sessions this year across a variety of topics, geared toward Jenkins users of all skill levels, from beginner to advanced.
The specifics:
Course name: Jenkins Certification Training
Topics: Training addresses all the topics covered by the Certified Jenkins Engineer (CJE) and Certified CloudBees Jenkins Engineer (CCJE) certification exams, as outlined in the CJE and CCJE study guides.
Labs: The course makes use of hands-on labs, where appropriate.
Cost: The Jenkins World on-site training add-on is $899 USD with the purchase of a Full Conference Sessions pass.
Capacity: Limited to 134 attendees, filled on a first-come, first-served basis.
NOTE: The CJE and CCJE certifications are professional certifications. Depending on your current level of knowledge and practical experience with Jenkins, additional study and practice may be required to pass either certification exam.
CloudBees Jenkins Team makes it easy to create and maintain a stable continuous delivery environment. This workshop will teach attendees what the CloudBees Assurance Program is and the value it provides to CloudBees customers. We will also give attendees a hands-on tutorial of the Beekeeper Upgrade Assistant. Beekeeper is a handy tool that helps CloudBees customers better manage their Jenkins instance.
This training is an introduction to Jenkins and its environment. Learn how to easily create pipelines while putting in place best practices for continuous integration and continuous delivery. Learn how to get help from and contribute back to the Jenkins community.
Following an overview of the Jenkins ecosystem and community, we will cover key concepts and tools related to continuous integration and continuous delivery, such as: source code management, build, test and deployment. The core part of the training will be based on practical examples of pipeline creation, using Jenkins’ Blue Ocean Visual Pipeline Editor. We will also leverage other key components of Blue Ocean such as pipeline visualization, troubleshooting and personalization. We will conclude this training with a more advanced section, introducing the powerful concept of Declarative Pipeline syntax and how it enables everyone in DevOps, regardless of expertise, to participate in the continuous delivery process.
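To give a flavor of what the Declarative Pipeline section covers, here is a minimal sketch of a declarative Jenkinsfile; the stage names, Maven command and deployment script are illustrative assumptions rather than course material:

```groovy
// A minimal Declarative Pipeline sketch (illustrative only)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and run unit tests; substitute your own build tool
                sh 'mvn -B clean verify'
            }
        }
        stage('Deploy') {
            when {
                branch 'master'   // only deploy builds of the main branch
            }
            steps {
                sh './deploy.sh staging'   // hypothetical deployment script
            }
        }
    }
}
```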
This workshop is based on the material published in The DevOps 2.1 Toolkit: Building, testing, deploying and monitoring services inside Docker Swarm clusters.
The workshop will go through the whole microservices development lifecycle. We’ll start from the very beginning with architecture definition and design. From there, we’ll move on to requirements, technological choices and development environment setup, through coding and testing and then all the way to the final production deployment. We won’t stop there. Once our new services are up and running, we’ll see how to maintain them, scale them based on resource utilization and response time, and recover from failures. We’ll try to balance the need for creative manual work and the need to automate as much of the process as possible.
The goal will be to design a fully automated continuous deployment pipeline. We’ll see how microservices fit into a continuous deployment pipeline, explore immutable containers concepts and, finally, understand why the best results are obtained when those three are combined into one unique framework.
During the workshop we’ll explore tools like Docker, Docker Swarm, Docker Compose, Jenkins, HAProxy and a few others. We'll explore the practices and tools required to run a Swarm cluster. We'll go beyond a simple deployment. We'll explore how to create a continuous deployment process. We'll set up multiple clusters: one dedicated to testing and the other to production. We'll see how to accomplish zero-downtime deployments, what to do in case of a failover, how to run services at scale, how to monitor the systems and how to make the system heal itself. We'll explore the processes that will allow us to run the clusters on a laptop, as well as on different cloud providers.
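To give a hedged preview of the style of automation the workshop builds toward, the sketch below shows a scripted Pipeline pushing a freshly built image and performing a rolling update of a Swarm service; the registry, image and service names are hypothetical:

```groovy
// Scripted Pipeline sketch: rolling update of a Docker Swarm service (names are hypothetical)
node {
    stage('Build and push image') {
        sh "docker build -t registry.example.com/go-demo:${env.BUILD_NUMBER} ."
        sh "docker push registry.example.com/go-demo:${env.BUILD_NUMBER}"
    }
    stage('Rolling update') {
        // 'docker service update' replaces replicas one at a time,
        // which is what gives us zero-downtime deployments on the cluster
        sh "docker service update --image registry.example.com/go-demo:${env.BUILD_NUMBER} go-demo"
    }
}
```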
This workshop will introduce developers to the Jenkins plugin ecosystem and terminology. The goal is to provide a cursory overview of the resources available to Jenkins plugin developers. Armed with this information, Jenkins developers can learn how to navigate the project and codebase to find answers to their questions.
This workshop is not going to teach you how to write Java.
Pre-requisites:
Knowledge of Java: needn’t be advanced, but an understanding of Java code concepts and structure is very helpful
Download and install Apache Maven:
From the command line, execute mvn hpi:run or mvn -U org.jenkins-ci.tools:maven-hpi-plugin:create
JDK 7 or later
Have Git installed
Understanding of Jenkins concepts (builds, jobs, configuration, nodes, etc.)
You may also find it useful to read through the plugin tutorial ahead of time. It covers similar concepts.
Materials:
Clone the Git repo from github.com/christ66/JUC-demo
(git clone https://github.com/christ66/JUC-Demo.git)
Do your deployments need to be approved by a manager? Do you need to prove separation of duties? How can you ensure infrastructure is available when teams need it? Should you use Jenkins, or an application release automation solution for deployment? Does automation of continuous delivery processes remove the need for operations?
In this workshop, we will review patterns of common problems and solutions specific to operations teams in a continuous delivery environment. We will then walk through how to address those patterns with Jenkins and Pipeline as Code, including hands-on exercises.
In this session you will learn:
- Cultural and organizational considerations for operations in modern development
- How operations can benefit from the power of Jenkins and Pipeline as Code
- How to create pipelines with Jenkins that address the needs of operations
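To make the approval pattern concrete, here is a minimal hedged sketch of a Pipeline stage that pauses for a manager's sign-off before deploying; the group name and deployment script are assumptions:

```groovy
// Sketch: a production deployment gated by an approval step
stage('Deploy to production') {
    steps {
        // The build pauses here until an authorized person approves.
        // 'submitter' restricts who may approve, supporting separation of duties.
        input message: 'Deploy this build to production?', submitter: 'release-managers'
        sh './deploy.sh production'   // hypothetical deployment script
    }
}
```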
RSVP Here
Let's talk about the future of Jenkins and how you can help shape it! The contributor summit is the place where the current and future contributors of the Jenkins project get together. This year, the theme is “working together”. Traditionally, most contributors are plugin maintainers focusing on their own plugins, but it’s important to look at Jenkins as a whole, because that’s how users experience it. There is more to the project beyond writing plugins, and even for plugin developers, there is an increasing number of common libraries and modules that plugins should rely on. In short, we should work better together.
A few contributors will prepare some presentations to help clarify what that means, and what all of us can do. And in the afternoon, there will be "unconference" sessions to brainstorm and discuss what we can do and how.
Whether you are already a contributor or just thinking about becoming one, please join us for this full-day event. Attendance is free and no conference ticket is required.
Agenda/Schedule
9:00 – 9:15: Welcome
9:15 – 9:45: Introduction to participating in the Jenkins project
9:45 – 10:45: The last year in review
10:45 – 11:00: Break
11:00 – 12:00: A vision for Jenkins
12:00 – 13:00: Lunch
13:00 - 13:30: Discuss unconference topics
13:30 - 16:30: Unconference tracks/discussions
16:30 - 17:00: Recap and wrap-up
19:30 - 20:30: Please join us for dinner and drinks at the Autodesk Gallery. RSVP HERE
This training is an introduction to CloudBees Jenkins Enterprise. Following an architectural overview, students will go through a series of presentations and demonstrations, allowing them to discover the product's main features. Learn how CloudBees Jenkins Enterprise lets enterprises leverage Jenkins in ways that were not possible before. Learn how to unleash its power for your organization.
The training starts with a section on CloudBees Jenkins Enterprise general concepts and is followed by two main sections:
CloudBees Jenkins Enterprise Administration Fundamentals covers topics such as CloudBees Jenkins Enterprise components, server roles, CloudBees Jenkins Operations Center, backup and worker recovery.
CloudBees Jenkins Enterprise Common Operations covers topics such as managing masters, the CloudBees Assurance Program, adding a Palace agent template, adding external agents, security, scaling, migrating an existing master to your CloudBees Jenkins Enterprise cluster, using the Docker Registry and destroying the cluster.
We will conclude with a section highlighting some specific aspects of CloudBees Jenkins Enterprise for each of its potential deployment platforms, such as AWS, OpenStack and VMware.
This training will be delivered for one full day (August 29) before the conference sessions begin, allowing you to maximize your learning opportunities at Jenkins World. It is an introduction to Jenkins Pipeline deployed with Docker. Following an overview of Pipeline and Docker, students will go through a real-life Pipeline/Docker-based project, allowing them to discover its main features in the context of a hands-on lab.
Learn about the need for Jenkins Pipeline. Learn about key pipeline use cases, structure and syntax, and execution control.
Get an understanding of how to prepare Docker environments. Learn about key Docker use cases, Docker tools and how to create a deployment lifecycle with Jenkins Pipeline and Docker.
There are two projects that are part of this course:
Project 1 teaches users how to: combine CloudBees pipeline functionality with Docker, implement commonly used steps required for continuous integration/continuous delivery flow and create a deployment lifecycle with Jenkins Pipeline and Docker.
Project 2 teaches users more advanced features of CloudBees pipeline functionality with Docker, and how to create an advanced deployment lifecycle using both.
Note: this is NOT a single-day session. The course begins on Monday, August 28 at 9am and continues Tuesday, August 29.
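For orientation, the sketch below shows the kind of Pipeline/Docker combination the projects are built around: the build runs inside a throwaway Docker container used as the agent. The image and commands are illustrative assumptions, not course material:

```groovy
// Jenkinsfile sketch: run the build inside a Docker container agent
pipeline {
    agent {
        docker { image 'maven:3.5-jdk-8' }   // any build image could be substituted
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Archive') {
            steps {
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}
```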
Onsite training at Jenkins World provides an opportunity to gain hands-on, in-depth skills, paired with practical strategies for accelerating application delivery with Jenkins. Training will be delivered by Training Partner sponsors: Infostretch, OSA, and ECS Digital. The course runs for two full days (August 28-29) before the main conference sessions begin, allowing you to maximize your learning opportunities at Jenkins World.
If that isn’t enough to persuade you, here are a few more reasons to sign up:
Learn from Jenkins experts: Training is delivered by experts from organizations certified to deliver Jenkins training.
Get ready for certification: Brush up on your Jenkins expertise to help prepare for the Certified Jenkins Engineer (CJE) or Certified CloudBees Jenkins Engineer (CCJE) exam. You will get a voucher to take either exam for FREE with your Jenkins World registration.
Take the training - then stick around for the sessions: Jenkins World will encompass 55+ sessions this year across a variety of topics, geared toward Jenkins users of all skill levels, from beginner to advanced.
The specifics:
Course name: Jenkins Certification Training
Topics: Training addresses all the topics covered by the Certified Jenkins Engineer (CJE) and Certified CloudBees Jenkins Engineer (CCJE) certification exams, as outlined in the CJE and CCJE study guides.
Labs: The course makes use of hands-on labs, where appropriate.
Cost: The Jenkins World on-site training add-on is $899 USD with the purchase of a Full Conference Sessions pass.
Capacity: Limited to 84 attendees, filled on a first-come, first-served basis.
NOTE: The CJE and CCJE certifications are professional certifications. Depending on your current level of knowledge and practical experience with Jenkins, additional study and practice may be required to pass either certification exam.
This workshop will provide a hands-on experience with a turnkey implementation of a scalable Jenkins as a Service solution, based on CloudBees Jenkins Enterprise. The workshop will utilize the same microservice example used in the DevOps 2.1 Toolkit Workshop, walking you through the software development lifecycle using the tools and features provided within CloudBees Jenkins Enterprise. The audience will perform exercises which illustrate the distributed and scalable architecture provided by the CloudBees Jenkins Enterprise cluster.
From quickly provisioning your very own CloudBees Jenkins Enterprise master, to setting up a custom template for built-in, ephemeral and elastic Docker-based Jenkins agents, to dynamically creating Jenkins Pipeline jobs, you will have a true hands-on experience with the features that CloudBees Jenkins Enterprise provides. All of this functionality is derived from the highly-scalable Distributed Pipeline Architecture that serves as the foundation of CloudBees Jenkins Enterprise.
This training is an introduction to Jenkins and its environment. Learn how to easily create pipelines while putting in place best practices for continuous integration and continuous delivery. Learn how to get help from and contribute back to the Jenkins community.
Following an overview of the Jenkins ecosystem and community, we will cover key concepts and tools related to continuous integration and continuous delivery, such as: source code management, build, test and deployment. The core part of the training will be based on practical examples of pipeline creation, using Jenkins’ Blue Ocean Visual Pipeline Editor. We will also leverage other key components of Blue Ocean such as pipeline visualization, troubleshooting and personalization. We will conclude this training with a more advanced section, introducing the powerful concept of Declarative Pipeline syntax and how it enables everyone in DevOps, regardless of expertise, to participate in the continuous delivery process.
The goal of Continuous Testing is to find defects earlier and release software faster. This can be achieved by integrating open-source functional and performance testing tools in the early stages of your software delivery lifecycle.
Join us in this hands-on workshop, where we will teach you to implement Continuous Testing by using multiple tools.
1. Introduction to CI/CD Testing
You will learn the basics of continuous integration and continuous testing.
2. Running a JMeter Performance Test from within a Jenkins Project
You will learn how to use JMeter, the most popular open-source load testing tool, and how to run a JMeter test from a Jenkins project (a short Pipeline sketch follows this list).
3. BlazeMeter Plugin for Jenkins
You will learn how to use the BlazeMeter SaaS platform and how to run a BlazeMeter test from a Jenkins project.
4. Introduction to Taurus
You will learn how to use Taurus, an open-source test automation framework, and hear about its advantages.
5. Advanced Taurus and Cloud Scaling
You will learn about advanced Taurus modules and how to leverage Taurus to run large-scale performance tests in the cloud.
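As a hedged illustration of items 2 and 4 above, a Pipeline stage running a Taurus (bzt) test, which can wrap an existing JMeter plan, might look like the sketch below; file and directory names are assumptions, not workshop material:

```groovy
// Sketch: run a Taurus/JMeter load test from a Declarative Pipeline stage
stage('Performance test') {
    steps {
        // 'bzt' executes the scenario described in the YAML config and can
        // fail the build when pass/fail criteria defined there are not met
        sh 'bzt load-test.yml'
    }
    post {
        always {
            // Keep raw results so trends can be reviewed later
            archiveArtifacts artifacts: 'results/**', allowEmptyArchive: true
        }
    }
}
```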
Come join us for a fun and useful workshop and release faster.
In this workshop, we will review the definition of DevOps and its relationship to continuous delivery (CD). Then we will explore a simple, lightweight model for measuring organizational DevOps maturity. Finally, we will conduct a lab in which we will apply the model to the attendees’ organizations, enabling them to measure their own DevOps maturity and identify practical steps to facilitate a transformation to enterprise CD and DevOps. The session will be highly interactive, including games and group discussion to facilitate shared learning. An Internet-connected laptop is highly recommended.
The workshop is designed to prepare Executives to lead the transformation of their software development processes and provide them with a common understanding of DevOps for the Enterprise. It will start by reviewing the improvements that are possible and highlighting the challenges of typical Enterprise transformations. It will then work through the steps of a successful transformation.
The workshop will focus on the changes that Executives are uniquely positioned to address and that will have the biggest impact on the business. Instead of the typical transformation that focuses on improving the effectiveness of individual teams, this approach focuses Executives on engaging their management teams to improve coordination across teams with DevOps approaches. Furthermore, the approach will focus on the important cultural changes that require Executive leadership.
The workshop will provide an in-depth review of DevOps, the different practices, and the different inefficiencies they were designed to address. It will highlight how and why DevOps principles can and should be different for coordinating the work across large and small teams. The goal is to ensure everyone has a common understanding of the principles, practices, and a framework for discussing their improvements.
At the end of the workshop Executives should have a much better understanding of their role in leading a successful transformation and the steps involved. They will also understand the importance of getting the organization engaged and supporting the changes.
All attendees will receive a free copy of Gary's Book "Starting & Scaling DevOps in the Enterprise" and will have the chance to get it signed by Gary from 5-6 p.m. on Wednesday during the evening event in the expo at the CloudBees booth.
Logs are an invaluable tool when faced with anything from a minor performance hiccup to finding the cause of a major outage and getting your systems back online. Like black box data from an airplane, the data in your logs should be able to tell you when something isn't working properly and should help you figure out where the issues are. In this session, you’ll hear practical advice on some of the quick things you can do to get more value from your log data, including concrete steps you can take before something goes wrong to ensure you’re best set up to respond to an issue or incident quickly.
Find out how you can use Docker to improve the security of your Software Development Lifecycle and deployments. Elevate the trust you have in your build components and artifacts using Docker Content Trust, use secrets to inject sensitive data, and a security scanner such as Docker Security Scanning to ensure you're not deploying known vulnerabilities.
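As a rough sketch of how little is needed to get started, the Pipeline below enables Docker Content Trust for pushes and creates a Swarm secret for sensitive data; the registry, image and secret names are hypothetical:

```groovy
pipeline {
    agent any
    environment {
        // With content trust enabled, docker pull/push only accept signed images
        DOCKER_CONTENT_TRUST = '1'
    }
    stages {
        stage('Build and push signed image') {
            steps {
                sh 'docker build -t registry.example.com/myapp:1.0 .'
                sh 'docker push registry.example.com/myapp:1.0'
            }
        }
        stage('Inject secret') {
            steps {
                // Store the API key as a Swarm secret instead of baking it into the image
                sh 'docker secret create myapp_api_key ./api-key.txt'
            }
        }
    }
}
```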
You will be able to relate to the DevOps journey at Rosetta Stone. We’ve struggled through organizational change and overcome some of the challenges. We’ve found that some of the hardest organizational changes—like continuous delivery and codified test data—also have the greatest impact. I’ll talk about some of the technical, structural, and people-related areas that have helped us to succeed. We love improving the details of how we work, too, with Jenkins hacks for vim, the command line, and our chat client. I’ll talk about how we do builds and deployments (Docker! Declarative pipelines! A tiny in-house tool!) and how automated software maintenance makes some maintenance almost fun.
In the long run, running an Open Source project is quite similar to creating a startup business, where you keep investing heavy efforts without a guarantee that you will end up with a large and successful exit. Many open source projects are never meant for a large audience or user group, but rather are intended only for oneself or a few others. Yet some open source projects, by intention, do end up with wide adoption and an active community around them. What makes them succeed? Conversely, what causes other open source projects to fail and just increase entropy?
With 20+ million open source repositories on GitHub today, it is likely that the majority of those projects will end up as “abandonware”, i.e. a huge amount of time and resources invested in something that never got traction.
I’ve been consistently contributing to open source projects for 8 years, starting them from scratch or joining existing communities. I have found some patterns that help you treat your project as a conscious attempt to make something big enough to live its own life. The real world is full of opportunities and threats, and being aware of them will help make your project intentionally successful.
Learn how Sonatype's Nexus platform integrates with Jenkins to help DevOps teams boost application quality by as much as 60%. Nexus empowers teams to choose and manage the highest quality open source components and containers early, everywhere and at scale. Come learn how organizations like Intuit, EDF Group, and Blackboard are benefiting from their Nexus + Jenkins integrations.
71+% of surveyed organizations are currently on the path to switch to DevOps or have already implemented the process, and the benefits of a properly implemented DevOps program are clear – quicker time to customer value, better alignment between businesses and customers, and a better ability to respond to customer input. However, when it comes to DevOps adoption, many teams rush to focus on one specific issue within one area when they would actually benefit more from aligning business, development, testing, and operations up front. Bobby and Kevin have helped hundreds of customers in their DevOps journeys. Join their session to learn about the major problems they encountered, and how BDD can be used to solve those problems.
In this session, Tom will show you how to get insights for your Jenkins master and agent infrastructure, job and build details, including an incredibly efficient way to analyze test results! See the health of your agents to discover hard-to-find issues and fix them before they become a bottleneck for development teams. Examine the audit trail of user logins and job control, including configuration changes — useful for security and compliance.
How do you manage your software as it is progressing from development to production? Do you version and monitor your builds with enough metadata to know what's going on? Can you manage any software language or package type through a single point of universal control? Are you always sure your dependencies are secure before deployment? If you want to trust your software in production, join me and learn how to implement an end-to-end Jenkins pipeline for your software delivery.
If you are using Jenkins to orchestrate application releases, you likely have a series of freestyle jobs chained together based on specific criteria to perform builds. To convert these freestyle jobs to Pipeline, DevOps engineers have to manually drill down into each of the hundreds or thousands of jobs, understand the tools, configurations, URLs, parameters, etc., and rewrite them in Pipeline syntax. This manual effort involves not only converting individual job logic, but also ensuring that each chain is converted to a single pipeline, keeping to a base rule of 1 chain = 1 pipeline. As the number of jobs increases, it becomes extremely difficult to convert the freestyle jobs to coded pipelines.
Infostretch has created a plugin that drastically reduces the effort behind this manual process. Now, DevOps engineers can auto-generate the script for this conversion. The plugin ensures adherence to coding standards, separates complex business logic and standard declaration from execution flow declaration and accelerates the transition process of any new set of applications being on-boarded to pipeline.
Join our 15-minute presentation and demo by Sanil Pillai, Director of Infostretch Labs to:
Learn how the plugin can be used to enforce coding standards in your applications
Understand how auto-generated scripts can be pre-configured to use custom libraries
View a live demo of the plugin auto-converting freestyle Jenkins jobs to coded pipeline
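For context, the target of such a conversion is an ordinary Jenkinsfile. A hand-written equivalent of a simple build-test-deploy freestyle chain might look like the sketch below; this illustrates the destination format only and is not the plugin's generated output:

```groovy
// What a converted three-job freestyle chain typically becomes
pipeline {
    agent any
    stages {
        stage('Build')  { steps { sh 'mvn -B clean package' } }
        stage('Test')   { steps { sh 'mvn -B verify' } }
        stage('Deploy') { steps { sh './deploy.sh staging' } }   // hypothetical script
    }
}
```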
Logs are an invaluable tool when faced with anything from a minor performance hiccup to finding the cause of a major outage and getting your systems back online. Like black box data from an airplane, the data in your logs should be able to tell you when something isn't working properly and should help you figure out where the issues are. In this session, you’ll hear practical advice on how best to get exactly that value from your log data, including concrete steps you can take before something goes wrong to ensure you’re best set up to respond to an issue or incident quickly. What you log is critical, and we will help provide a better framework for thinking about what your applications should log.
Companies often have engineers dedicated to managing their Jenkins build environments, yet we frequently see developers spending time maintaining the underlying infrastructure instead of writing code. There are a number of plugins in the Jenkins ecosystem that can assist with reducing the overhead of your build farms. Unfortunately, these options often introduce additional complexity because they rely on other computing orchestration platforms such as Swarm, Mesos, or even AWS’ ECS container platform. There has to be a simpler way.
This session will show you how to integrate AWS CodeBuild with Jenkins to remove the overhead in creating, managing, and scaling your build farms.
In this session, you will learn:
* How AWS CodeBuild simplifies the management of your Jenkins Build Farms
* How to create a Jenkins pipeline with AWS CodeCommit and AWS CodeBuild
* Best practices for securing your Jenkins pipeline with AWS IAM
* How other AWS customers have integrated CodeBuild and Jenkins into their pipelines
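If you want a head start, the AWS CodeBuild plugin for Jenkins exposes a Pipeline step; the sketch below is a hedged example, and the project name, region and credential settings are assumptions, so check the plugin documentation for the exact parameters:

```groovy
pipeline {
    agent any
    stages {
        stage('Build in CodeBuild') {
            steps {
                // Delegate the heavy lifting to a CodeBuild project so Jenkins
                // does not need to maintain its own fleet of build agents
                awsCodeBuild projectName: 'my-codebuild-project',
                             region: 'us-east-1',
                             credentialsType: 'keys',
                             sourceControlType: 'jenkins'
            }
        }
    }
}
```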
Gone are the days of “stacking and racking" servers and networking gear. Infrastructure is software now and needs to be managed the same way we manage feature and testing code. But how do we take a test-driven approach to cloud resources? What constitutes success or failure, and how do we automate these infrastructure assertions like we do other kinds of tests?
Fugue is a cloud infrastructure management system designed to provision infrastructure and ensure it conforms to business rules, corporate IT policies, and known-good practices. Using a straightforward declarative syntax, teams can treat infrastructure code the same way they manage feature code and tests, and break the build if the infrastructure is wrong.
In this session, we will demonstrate:
Declarative code for provisioning a complete application environment in AWS
How to define a set of success conditions, including region-awareness, resource cost constraints, security group ingress rules, and other criteria
Using a git flow model and Jenkins Pipeline to test different rules per branch
Bringing it all together for on-demand policy-compliant production environment clones for every stage of delivery
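Independent of Fugue's own syntax, the per-branch rules idea from the list above can be sketched in a Declarative Pipeline as follows; the policy paths and validation command are placeholders:

```groovy
// Sketch: choose an infrastructure rule set per branch in a multibranch Pipeline
stage('Validate infrastructure policy') {
    steps {
        script {
            // Strict rules for master, relaxed rules for feature branches
            def rules = (env.BRANCH_NAME == 'master') ? 'policies/production' : 'policies/dev'
            sh "./check-policy --rules ${rules}"   // hypothetical CLI; fails the build on violations
        }
    }
}
```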
How do you use Splunk to measure development productivity and solve for speed and velocity in developer and quality teams? Of course by collecting event data related to productivity! Hear how Intuit and Splunk are using apps like the Jenkins app from SplunkBase to identify areas of improvement around build, test and release of software. See how Intuit improved build times from hours to minutes and how Splunk “builds Splunk with Splunk”, plus how Splunk helps developers collaborate through tools including Git, JIRA, Jenkins, and Slack.
When you handle trillions of dollars in transactions, people take notice. Supporting the non-stop nature of the global economy through these transactions is a critical role of the Federal Reserve Bank of New York. Join this presentation to learn how the Federal Reserve Bank is evolving its legacy infrastructure and processes towards a high-velocity DevOps environment. Colin will share five key insights from his organization's journey of transforming both small and ultra-mission critical systems that are composed of integrated custom and COTS applications. Attendees will learn how Colin and his organization led their planning, budgeting and mindset transformations for one of the world's largest financial institutions. The discussion will detail their DevOps transformation, including how people, processes and technology were affected by this change. Colin will also discuss how different applications have different CI/CD scenarios that impact the Federal Reserve Bank's approach to DevOps.
Have you written a Jenkins plugin, or helped to maintain one, or are planning to write one? If so, you need to understand where the Pipeline feature might fit into your plugin’s design.
The bare minimum of being “Pipeline-compatible” is that the plugin’s features can be used in a way analogous to their use in traditional Jenkins projects. Learn about the critical APIs that make this possible, and the accompanying restrictions needed due to both the “durability” and greater flexibility of Pipeline builds.
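As a point of reference, the user-facing result of that compatibility is that a plugin's build step becomes callable from a Jenkinsfile, either through the generic step meta-step or through a dedicated symbol; the sketch below shows both forms for the JUnit result archiver:

```groovy
node {
    // Generic meta-step form: works for any Pipeline-compatible build step
    step([$class: 'JUnitResultArchiver', testResults: 'target/surefire-reports/*.xml'])

    // Equivalent dedicated symbol provided by the JUnit plugin
    junit 'target/surefire-reports/*.xml'
}
```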
More sophisticated plugins can use Pipeline-specific APIs, mainly to define new “steps”. See the options available and the reasons why you would—or would not—need to add this dependency. Learn the advantages and disadvantages of special DSL additions and libraries.
You will also get an overview of plugin features which do not need to be ported to Pipeline because there is already a way to accomplish the goal without them. This can help you judge whether a new development effort will pay off or whether the time would be better spent documenting a different usage mode.
Whatever implementation choice you make, see how the Jenkins test harnesses can be used to prove smooth operation of the result.
Cambia Health Solutions is a family of companies centered around incubating new technology in the healthcare space. With a shift to the cloud, our software needed to be bullet-proof against security defects in a fast-paced DevOps model. The problem: how to build in security along the entire pipeline, keep developers focused on writing great code, and do it all with speed and at scale. We created a complete program, sane security policies and integrated technology solutions that allowed Cambia to keep moving fast, while injecting security at every stage of the SDLC. And… our developers LOVE IT!
If you answer no to any of these questions you need to attend this presentation:
Is your current security process transparent to developers?
Do you know exactly what open source goes into your code, the associated licenses, versions and potential security issues?
Do development teams test early and often in the SDLC instead of waiting until just before production to ask for a scan of their application?
Do you think testing your software for security defects one or two times a year is enough, when changes to your software happen perhaps as often as several times a day?
Jeremy will share a complete picture of secure coding, testing and controls, addressing issues such as:
How to transform from a culture of “test it the week before production” to “test at every stage of the SDLC”
How to automate - without slowing down builds.
How to scale the program in a short time without adding staff.
How to give developers automated and actionable information early and often in the SDLC to enable good coding from design to deploy.
In the age of technology-empowered customers, the timing and quality of your applications can make or break you. This session will demonstrate how ALM Octane can help you manage product and team backlogs, test the deliverables and track their quality as they pass through the CI pipeline in Jenkins, and provide end-to-end traceability of your user stories and tasks. With the ALM Octane-Jenkins-HCM (formerly Codar) integration, learn how your Jenkins instance can hand over the continuous delivery process to the HPE Hybrid Cloud Management Suite. Watch a short demo of how HCM enables your dev and ops engineers to collaboratively design cloud-agnostic deployments and release your application to public, private or hybrid cloud via customizable lifecycle stages.
Every person who wrote a Jenkins plugin, or a feature included in a plugin or Jenkins core, was trying to let users solve some problem more easily. The trouble is that some of these plugins are old and not so well maintained. Even when they are, every feature your CI/CD system relies on has a not-so-hidden cost: unknown or unaddressed bugs; complex UI with sparse documentation; lack of support for very new or exotic tools or protocols; small user community with little collective knowledge; risk of behavioral changes or regressions during upgrades. Therefore a smart administrator needs to understand the difference not only between critical and nice-to-have features, but also which behaviors must be implemented in Jenkins as opposed to those which are mere conveniences that could be handled differently. “Building vs. buying” should not be a reflexive choice.
In this talk we will study some Jenkins capabilities from the perspective of whether and how you could get the same effect without them; in other words, the unique value of Jenkins feature code. Can you send mail/chat notifications without a special plugin for that system—and pick the right recipient(s) and message? Can you “check out” an SCM repository into a workspace as part of a Jenkins-neutral build script—or will you be stuck getting tooling, credentials, and changelogs right? When can and should you write Pipeline libraries instead of using a plugin? What is the difference between build “wrappers”, custom “project types”, and custom build “steps” when it comes to using complex tools? Do you need a report plugin tailored to your reporting tool or will a more generic integration serve as well?
Expect to see demonstrations of different approaches to the same problem which illustrate the trade-offs between tighter integration with the Jenkins UI and other features; vs. transparency, flexibility, and self-reliance.
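In that spirit, here is one such trade-off made concrete: failure notifications sent from a post block with the built-in mail step and a plain webhook call, rather than a chat-specific plugin. The addresses and webhook URL are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh 'make' } }   // placeholder build
    }
    post {
        failure {
            // Built-in mail step: no provider-specific notification plugin required
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
            // Or hit a chat webhook directly from the shell (placeholder URL)
            sh 'curl -X POST -H "Content-Type: application/json" --data \'{"text":"Build failed"}\' https://chat.example.com/hooks/placeholder'
        }
    }
}
```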
Working in an organization of nearly 400,000 people, not to mention living in a world packed with open source, the potential gain that comes from sharing code and automation practices is something I consider to be both amazing and generally unrealized. I lead shared services at Accenture for North America, and globally we have hundreds of Jenkins pipelines deploying to thousands of environments, with possibly tens of thousands of Jenkins build, test and deployment automation jobs. Sharing and re-use of code, testing, processes and techniques have always been a key theme, and Jenkins has been central to our approach. The DevOps Platform (https://accenture.github.io/adop-docker-compose/) has been our way to package this up. We open sourced it a year ago to extend the sharing of it, globally. It enables us to package up all of our Jenkins jobs and automation and open them up for mass sharing and collaboration (spanning open source, CloudBees and Jenkins). The result is that we start to innovate and learn even faster, collectively, achieving great things at scale.
DevOps is the new way to form a corporate culture around software delivery, supporting business transformation and success. However, after years of mastering waterfall and other legacy processes, the transition to DevOps requires the reinvention of culture, process and tools, or does it? In this panel session, experts from Accenture, Autodesk, Salesforce, SAP and WatchGuard Technologies will talk about their journey to DevOps, roadblocks encountered along the way, the value of technology and automation, and ultimately the business value their organizations have realized.
Discussion topics include:
Panelists:
Jayne Groll, CEO at DevOps Institute
Betsy Hearnsberger, Senior Engineering Manager at Adobe
Isabel Vilacids, Quality Engineering Manager at CloudBees
Leslie Carr, Infrastructure Engineering Manager at Clover Health
Moderator: Katherine Lam, Snr Product Marketing Manager at CloudBees
Everyone wants to implement continuous delivery. After all, the benefits are too big to be ignored. You can:
Increase the speed of delivery
Increase software quality
Decrease costs
Free people to dedicate time to activities that return value
Those improvements are like music to any decision maker’s ear. Especially if that person has a business background. If a tech geek can articulate the benefits continuous delivery brings to the table, when he asks a business representative for a budget, the response is almost always “Yes! Do it.”
Then a continuous delivery project will start. Tests will be written. Builds will be scripted. Deployments will be automated. Everything will be tied into an automated pipeline and triggered on every commit. Everyone will enter a state of nirvana as soon as all that is done. There will be a huge inauguration party with a vice president having the honor to be the first one to press the button that will deploy the first release to production. Isn’t that a glorious plan everyone should be proud of?
The project starts and, shortly afterward, you hit the first obstacle. But, since you are brave and do not give up that easily, you pass it. Then, not long afterward, another obstacle comes along. And another one after that. And on and on it goes. Half a year later you feel that you are not getting far. You spent your budget. You need to show results even though you cannot see the light at the end of the tunnel. The CTO demands results. The business wants value for the investment. You decide to do the only sensible thing and declare that the project is finished. You are continuous delivery certified even though there is nothing continuous nor are you delivering. Continuous delivery joins other failed projects that are declared a big success. Not only that, you are doing agile but you also practice CD. Veni, vidi, vici. You joined the club of glorified failures. Well done!
Why did your attempt to implement continuous delivery fail? There cannot be one answer that fits all scenarios. However, some prerequisites and steps should be taken in almost all cases. The chances are that you missed some, if not all, of the commandments that we’ll explore in this session.
Unfortunately, with rapid growth comes the pain of integration and adoption of legacy technologies. Under new leadership, ESI has committed to a technology transformation, starting with a move from waterfall to Agile. In order to achieve this transformation, we have had to revamp our development, operations and release processes to make them more efficient, reliable and scalable.
At last count, ESI was using 37 different programming languages spread across mainframe, many flavors of Linux, Windows and cloud hosts. We had to find a way to take inventory and apply controls, while making the process faster and more efficient to support our Agile transformation. To do this, we used Jenkins Pipeline (specifically, global libraries) to centralize our implementation, provide the hooks into our other supporting applications and provide a fast and reliable path to production.
Some of the integrations we’ve hooked into our pipeline library and flow include:
GitHub (for multi-branch pipeline jobs and knowing what we’re releasing)
Jira / HipChat / Confluence (for collaboration)
XL Release (to eliminate SharePoint and spreadsheets for release management)
Artifactory (not only as a binary repo, but to proxy external resources previously unavailable to our teams)
Ansible (to automate deployments)
Sauce Labs (for test automation)
Graphite (you can’t get better if you don’t measure yourself)
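For readers new to global libraries, the pattern looks roughly like this: a step is defined once in the library's vars/ directory, then reused from every application's Jenkinsfile. The names below are illustrative, not ESI's actual code:

```groovy
// vars/deployApp.groovy in the shared library (illustrative)
def call(String environment) {
    // Centralized deployment logic that every team reuses
    sh "ansible-playbook -i inventories/${environment} deploy.yml"
}
```

and the consuming Jenkinsfile:

```groovy
@Library('esi-pipeline-library') _   // hypothetical library name
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                deployApp 'staging'   // resolves to vars/deployApp.groovy above
            }
        }
    }
}
```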
Ever find yourself needing data pipelines to feed a hungry data-driven culture, but not sure where to start, or what features are essential? In this talk, I will demonstrate a baseline data pipeline infrastructure built with Jenkins and Docker EE that checks all the boxes. Data pipelines often exist as that mysterious plumbing buried underground: occasionally inspected, but largely prone to silent failures and the ensuing hot fixes. Join the quest to daylight the infrastructure and benefit!
Change initiatives are challenging in any established company, but they are even more challenging to start in highly regulated industries. There are constant concerns around compliance and security. It’s very important to alleviate these concerns early and often. This was one component of my strategy.
It’s also important to build shared experiences with the leadership of your company, but how do you do this as a low-level DevOps engineer? It takes time and a lot of effort. It also takes a manager who sees the bigger picture. I’ll describe my journey in organizing the company around my vision, from the hatching of the vision with a core team to expanding and solidifying that vision with executive sponsorship.
I’ve learned a lot from this process, and want to help others with my learnings. I’ve pulled out some valuable concepts and principles to focus on, so the talk will be somewhat opinionated, but still encourage ideation in the audience. The audience will walk away with ideas to start their own change initiatives, no matter how big or small.
Learn about the work Microsoft is doing with the Jenkins, Docker, HashiCorp, Chef, Kubernetes and Netflix open source communities to enable DevOps in Azure, using the tools and engines you know and love.
One of the most basic goals that DevOps and Agile try to achieve is better communication. How do we build tools and processes where everybody is kept in the loop?
At GitHub, our hardest working “employee” is Hubot, the open source chat bot. Hubot sits in your company's chat client and drives everything: graphing, monitoring, provisioning, tweeting and even telling some jokes. All of this has enhanced our culture of remote work. Internally we have integrated Hubot with Jenkins in order to drive our CI/CD pipeline.
In this session you will learn:
* How GitHubbers use Hubot to drive culture in a remote-first workforce
* How we use chatops to deploy and interact with GitHub via Hubot
* How we integrated Hubot with Jenkins for our CI/CD needs.
Over the last few years we have seen the rise of a powerful new community rooted in two areas with historically different backgrounds. DevSecOps has taken hold in the cybersecurity community, as well as in the world of Dev and Ops. But what is really driving the growth here? While tools are important, as in most things DevOps, it comes down to culture. Instilling a culture of “security is everyone’s responsibility” is the key to a successful DevSecOps program. While this may sound simple, changing cultures is never as easy as it sounds. In DevSecOps, we are dealing with long-standing customs and ingrained behaviors that sometimes work against our success.
How do we convince the security team that they are not the only ones who care about security? That giving up a little control in order to automate more and go faster will allow us to be more secure? How do we show the developers, the DevOps team, QA and others that security is not just the job of the security team, but their job and responsibility too? Continuous delivery must also include continuous security.
Case studies show us that if we can have everyone adopt this mindset, we can do more, faster and more securely! Join us in the panel discussion where we will talk about how to instill this culture in your own organization. Tips that will allow you to be successful with DevSecOps, as well as some discussion of tools that can help. If developing secure code and maintaining secure apps is important to you, don’t miss this session!
In larger organizations, DevOps isn’t just a team activity. With multiple DevOps teams executing on strategic initiatives, enterprise executives must have insight into the status and quality of their portfolios in order to make strategic decisions and course corrections in real time. This session will explore the challenges that enterprise executives face as their organization adopts DevOps practices across the business, and how to achieve the enterprise-wide visibility and collaboration that they need.
Lots of cool things are developing with IoT best practices, but the fact that so many embedded devices require native code binaries complicates many things that are taken for granted in modern DevOps. While, yes, Jenkins has been doing CI for C/C++ for years, the C/C++ tooling doesn’t have the flexibility and power that may be available in other languages.
However, modern tooling is emerging for C/C++, such as proper dependency managers like the FOSS Conan project for native code (primarily C/C++) binaries. With this development and Jenkins pipelines we can reimagine DevOps in an IoT/Native world.
In this talk we will demonstrate breaking up a C++ application into several binary dependencies, as is best practice in other modern languages, drawing on both open source and internal components, along with a set of Jenkins Pipelines to maintain them and build the application from individual, independent binaries into a final application ready to be distributed to consumer endpoints, delivered as Docker containers or natively.
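As a rough sketch of what that looks like in practice, each binary dependency can get its own small pipeline that builds and publishes a Conan package for the application pipeline to consume; the package name, user/channel and remote are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Build Conan package') {
            steps {
                // Build the library and create its binary package locally
                sh 'conan create . myorg/stable'
            }
        }
        stage('Publish') {
            steps {
                // Upload the recipe and binaries to the team's Conan remote
                sh 'conan upload "mylib/1.0@myorg/stable" -r my-remote --all'
            }
        }
    }
}
```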
You’re agile, but what about the rest of the organization? What’s your mean time to change? Let’s discuss how to eliminate silos of automation and really achieve end-to-end, zero touch, continuous deployments and testing with absolute visibility to the build, requirements, and test results as we go. From check in to production and performance – zero-touch is achievable with your tools of choice. We’ll show you how.
Not all cloud app development platforms are the same. Despite the availability of technology choices, many cloud development platforms constrain developers, forcing them to sacrifice some aspects of their cloud computing requirements. In this short talk, we will discuss key evaluation criteria to consider for making the right decision.
Perforce’s new solution Helix4Git, powered by the Perforce Helix Versioning Engine, can store multiple Git repositories and assets in a single mono repo. The hybrid approach enables Git scaling for enterprises, versioning and configuration management of multiple Git repositories, and a single source of truth for source code and assets. In this developer tech talk we will explore Helix4Git and how to use it with the Jenkins 'p4-plugin' for continuous integration efficiency.
To continue meeting business demands in today’s highly competitive markets, we need to be able to create production-like environments at will, for any delivery stage, and with the latest infrastructure and service profiles required for our code, including validating the configuration against business rules and best practices.
In this talk, we’ll discuss the key elements for provisioning policy-compliant development and staging environments as part of Jenkins Pipelines, while eliminating the need to manually provision resources or wait for environments to be created by others. Go fast and get it right in the cloud!
Prezi was stuck with an outdated and unstable legacy CI system that hindered growth and burdened us with an increasing amount of operational work. This is the story of how we found and implemented a new CI system using CloudBees CJE that both accommodates our very diverse needs and pushes us in a direction that improves our engineering culture.
Key topics/items
Enables cultural change
Reproducible, team-owned and stable CI (ties in well with the cultural mindset of Prezi)
Let's not build a standard solution; let's build on top.
Extending beyond CJE
Infrastructure tooling and new processes.
In this session we’ll interactively set up a holistic continuous delivery pipeline with the use of DevOps tools, based on cloud, microservices and container technologies. We’ll code and package a web application. Its development versions will be inspected with SonarQube, shipped with Docker, and deployed to Artifactory. Jenkins 2 with its native delivery pipeline features serves as the backbone of our delivery ecosystem. Final releases are published to Bintray, the distribution management platform. Operations (e.g. uploads) on the final releases are gathered by the JFrog Firehose Events API. Monitoring with the ELK stack polishes the overall DevOps toolchain. After the interactive session, you will know the nuts and bolts of setting up a scalable, high-end delivery pipeline using cloud, microservices and container technologies.
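To preview the shape of the pipeline we will assemble, here is a heavily condensed, hedged sketch; the SonarQube server name, registry host and image tag are assumptions for illustration:

```groovy
pipeline {
    agent any
    stages {
        stage('Build and unit test') {
            steps { sh 'mvn -B clean verify' }
        }
        stage('Code inspection') {
            steps {
                // Requires the SonarQube Scanner plugin and a server configured as 'sonarqube'
                withSonarQubeEnv('sonarqube') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Ship with Docker') {
            steps {
                // Push the development image to a Docker registry hosted in Artifactory (hypothetical host)
                sh "docker build -t artifactory.example.com/docker-dev/webapp:${env.BUILD_NUMBER} ."
                sh "docker push artifactory.example.com/docker-dev/webapp:${env.BUILD_NUMBER}"
            }
        }
    }
}
```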
As development paradigms shift to meet the demand of the consumer, so must testing practices. But how can one keep up with these changes and see into the future of test automation? Join Titus Fortner, solutions architect at Sauce Labs, as he shares how priorities and technologies change as testing evolves, and organizations mature and adopt CD (and other modern paradigms/principles). Drawing from his own experiences, he will start from a strategy perspective and demonstrate how to automate manual processes and bottlenecks. That will be followed by a demonstration of how to isolate problems early by shifting your testing left, once the proper automated testing framework and coverage is in place. Titus will conclude the presentation with a discussion around remediation as the next wave of automated testing. Implementing this as the next step after isolation can drastically reduce time spent on debugging, thus increasing developer efficiency. This session will leave you with a better understanding of current best practices in automated testing, as well as a view into the future of new technologies to help teams deliver better software, faster.
There are no short cuts. Gaining, cultivating and keeping internal support for your DevOps transformation can feel like herding cats – and it often is. At T-Mobile, we learned some valuable lessons that ultimately enabled our team to better support our diverse internal customers and deliver value to the business with higher velocity, security and quality.
At this session, you’ll learn how to:
When building Docker images we often use multiple build steps and Dockerfiles to keep the image size down. Using multi-stage Docker builds we can eliminate this complexity, bringing all of the instructions back into a single Dockerfile while still keeping those images nice and small.
One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. It was actually very common to have multiple Jenkins pipeline steps and/or projects with unique Dockerfiles for different elements of the final build. Maintaining multiple sets of instructions to build your image is complicated and hard to maintain.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image and simplifying both the Dockerfile and the Jenkins configuration needed to produce your images.
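To make this concrete, here is a hedged sketch of a Pipeline stage that writes a two-stage Dockerfile and builds it in a single step, so the compiler toolchain never reaches the final image; base images, paths and tags are illustrative:

```groovy
// Sketch: one Jenkins step, one Dockerfile, two build stages
stage('Build image (multi-stage)') {
    steps {
        // Stage one compiles with the full Go toolchain; stage two copies only
        // the resulting binary into a slim runtime image.
        writeFile file: 'Dockerfile', text: '''FROM golang:1.8 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o app .

FROM alpine:3.6
COPY --from=builder /src/app /usr/local/bin/app
ENTRYPOINT ["app"]
'''
        sh "docker build -t myorg/app:${env.BUILD_NUMBER} ."
    }
}
```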
A strong indicator of the quality of an app is its ability to run consistently and predictably on multiple devices of different makes, models and manufacturers. Incremental development models are at the core of every mobile app development team. Often a seemingly benign feature can be disruptive to the overall user experience and functioning of the app if not thoroughly tested. How do you ensure that every code check-in triggers tests for your mobile app against numerous devices and provides detailed analysis of the results?
AWS Device Farm is an app testing service. You can use it to test and interact with your Android, iOS, and web apps on physical phones and tablets that are hosted by AWS. You can use Device Farm for automated testing with a variety of available testing frameworks. You can also use it for manual testing through a remote session.
In this talk, we will learn how easy and powerful it is to integrate AWS Device Farm into your automated build pipeline on Jenkins, enabling you to test your mobile applications on more devices and produce a better-quality mobile app.
If you've ever developed for the Amazon Alexa platform, you probably have experienced that updating a skill requires a great deal of manual steps. To improve skill development, we've built and open-sourced the Alexia library for Node.js (https://github.com/Accenture/alexia). This allows us to develop skills that can be managed by an SDLC pipeline, improving the consistency and quality of updates to skills. We will demonstrate how simple it can be to update a live skill using the Alexia library.
How does CloudBees deliver software continuously, with high quality? This is the story of how we changed from thinking of quality in the last phases of the SDLC, to having quality baked into our products from the design phase. It's also about our journey from releasing twice a year to releasing on a monthly basis through the power of CloudBees Jenkins Enterprise. Finally, how through all of this we have improved the quality of our products. Prevention vs detection is the key to our success.
We will touch on some CloudBees best practices such as:
Segregation/separation of duties is a controversial topic in organizations that have dwelled on a legacy hierarchy long enough to confuse separation of duties with separation of departments. These organizations experience severe delays in implementations of Continuous Delivery Pipelines since Pipelines tend to relentlessly gnaw through departments and silos, and rightfully so, and can be seen as posing risk to the business and our customers.
This talk explains what Segregation of Duties entails and why it is critical for governance. Continuous Delivery Pipelines improve the speed, quality, and predictability of our product releases and are critical for sustainability. So, trading one for the other is not an option. This session dives into the design aspects of how to model Pipelines to honor Segregation of Duties, while still releasing quality products frequently and predictably to Production.