In the long run, running an open source project is much like building a startup: you keep investing heavy effort with no guarantee of a large and successful exit. Many open source projects are never meant for a large audience or user group, but are intended only for oneself or a few others. Yet some open source projects, by intention, do end up with wide adoption and an active community around them. What makes them succeed? Conversely, what causes other open source projects to fail and merely add entropy?
With 20+ million open source repositories on GitHub today, it is likely that the majority of those projects will end up as “abandonware”, i.e. a huge amount of time and resources invested that never gained traction.
I’ve been consistently contributing to open source projects for eight years, starting them from scratch or joining existing communities. I have found some patterns that help in treating your project as a conscious attempt to make something big enough to live its own life. The real world is full of opportunities and threats, and being aware of them will help to make your project intentionally successful.
When you handle trillions of dollars in transactions, people take notice. Supporting the non-stop nature of the global economy through these transactions is a critical role of the Federal Reserve Bank of New York. Join this presentation to learn how the Federal Reserve Bank is evolving its legacy infrastructure and processes towards a high-velocity DevOps environment. Colin will share five key insights from his organization's journey of transforming both small and ultra-mission critical systems that are composed of integrated custom and COTS applications. Attendees will learn how Colin and his organization led their planning, budgeting and mindset transformations for one of the world's largest financial institutions. The discussion will detail their DevOps transformation, including how people, processes and technology were affected by this change. Colin will also discuss how different applications have different CI/CD scenarios that impact the Federal Reserve Bank's approach to DevOps.
Cambia Health Solutions is a family of companies centered around incubating new technology in the healthcare space. With a shift to the cloud, our software needed to be bullet-proof against security defects in a fast-paced DevOps model. The problem: how to build in security along the entire pipeline, keep developers focused on writing great code, and do it all with speed and at scale. We created a complete program, sane security policies and integrated technology solutions that allowed Cambia to keep moving fast, while injecting security at every stage of the SDLC. And… our developers LOVE IT!
If you answer no to any of these questions, you need to attend this presentation:
Is your current security process transparent to developers?
Do you know exactly what open source goes into your code, the associated licenses, versions and potential security issues?
Do development teams test early and often in the SDLC instead of waiting until just before production to ask for a scan of their application?
Do you test your software for security defects more than once or twice a year, when changes to your software may happen as often as several times a day?
Jeremy will share a complete picture of secure coding, testing and controls, addressing issues such as:
How to transform from a culture of “test it the week before production” to “test at every stage of the SDLC”
How to automate, without slowing down builds.
How to scale the program in a short time without adding staff.
How to give developers automated and actionable information early and often in the SDLC to enable good coding from design to deploy.
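A "test at every stage" pipeline along the lines Jeremy describes might be sketched as the following minimal declarative Jenkinsfile. The stage layout, scanner CLIs and addresses here are hypothetical placeholders for illustration, not Cambia's actual tooling:

```groovy
// Hypothetical sketch: security checks run as first-class pipeline stages,
// not as a one-off scan the week before production.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'
            }
        }
        stage('Dependency Check') {
            steps {
                // scan open source dependencies for licenses and known CVEs
                sh 'dependency-scanner --fail-on high'   // hypothetical CLI
            }
        }
        stage('Static Analysis') {
            steps {
                sh 'static-analyzer --report sast.json'  // hypothetical CLI
            }
        }
        stage('Deploy to Test') {
            steps {
                sh './deploy.sh test'
            }
        }
    }
    post {
        failure {
            // feed actionable results back to developers early and often
            mail to: 'dev-team@example.com',
                 subject: "Security gate failed: ${env.JOB_NAME}"
        }
    }
}
```

Running the dependency and static-analysis stages on every commit is what keeps the feedback loop short enough that developers can fix issues while the code is still fresh.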
Working in an organization of nearly 400,000 people, not to mention living in a world packed with open source, the potential gain that comes from sharing code and automation practices is something I consider both amazing and largely unrealized. I lead a shared services group at Accenture for North America, and globally we have hundreds of Jenkins pipelines deploying to thousands of environments, with possibly tens of thousands of Jenkins build, test and deployment automation jobs. Sharing and re-use of code, testing, processes and techniques has always been a key theme, and Jenkins has been central to our approach. The DevOps Platform (https://accenture.github.io/adop-docker-compose/) has been our way to package this up. We open sourced it a year ago to extend that sharing globally. It enables us to package up all of our Jenkins jobs and automation and open them up for mass sharing and collaboration (spanning open source, CloudBees and Jenkins). The result is that we innovate and learn even faster, collectively, achieving great things at scale.
Unfortunately, with rapid growth comes the pain of integration and adoption of legacy technologies. Under new leadership, ESI has committed to a technology transformation, starting with a move from waterfall to Agile. In order to achieve this transformation, we have had to revamp our development, operations and release processes to make them more efficient, reliable and scalable.
At last count, ESI was using 37 different programming languages spread across mainframe, many flavors of Linux, Windows and cloud hosts. We had to find a way to take inventory and apply controls, while making the process faster and more efficient to support our Agile transformation. To do this, we used Jenkins Pipeline (specifically, global libraries) to centralize our implementation, provide the hooks into our other supporting applications and offer a fast and reliable path to production.
Some of the integrations we’ve hooked into our pipeline library and flow include:
GitHub (for multi-branch pipeline jobs and knowing what we’re releasing)
Jira / HipChat / Confluence (for collaboration)
XL Release (to eliminate Sharepoint and spreadsheets for release management)
Artifactory (not only as a binary repo, but to proxy external resources previously unavailable to our teams)
Ansible (to automate deployments)
Sauce Labs (for test automation)
Graphite (you can’t get better if you don’t measure yourself)
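The global-library approach described above can be sketched roughly as follows. This is a hypothetical `vars/standardPipeline.groovy` in a Jenkins shared (global) library; the step, tool and repository names are illustrative, not ESI's actual implementation:

```groovy
// Hypothetical vars/standardPipeline.groovy in a Jenkins shared library.
// The library centralizes the build/publish/deploy flow, so each of the
// 37-language teams' Jenkinsfiles reduces to a one-line call.
def call(Map config) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh config.buildCommand ?: 'make' }
            }
            stage('Publish') {
                steps {
                    // push artifacts to Artifactory (repo path is illustrative)
                    sh "jfrog rt upload build/ libs-release-local/${config.app}/"
                }
            }
            stage('Deploy') {
                steps {
                    // Ansible handles the actual deployment
                    sh "ansible-playbook deploy.yml -e app=${config.app}"
                }
            }
        }
    }
}
```

A team's Jenkinsfile would then be as small as `standardPipeline(app: 'billing', buildCommand: 'mvn -B package')`, which is what makes the centralized implementation both enforceable and easy to adopt.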
One of the most basic goals that DevOps and Agile try to achieve is better communication. How do we build tools and processes where everybody is kept in the loop?
At GitHub, our hardest working “employee” is Hubot, the open source chat bot. Hubot sits in your company's chat client and drives everything: graphing, monitoring, provisioning, tweeting and even telling some jokes. All of this has enhanced our culture of remote work. Internally we have integrated Hubot with Jenkins in order to drive our CI/CD pipeline.
In this session you will learn:
* How GitHubbers use Hubot to drive culture in a remote-first workforce
* How we use chatops to deploy and interact with GitHub via Hubot
* How we integrated Hubot with Jenkins for our CI/CD needs
Lots of cool things are developing in IoT best practices, but the fact that so many embedded devices require native code binaries complicates much that is taken for granted in modern DevOps. Yes, Jenkins has been doing CI for C/C++ for years, but the C/C++ tooling lacks the flexibility and power available in other language ecosystems.
However, modern tooling is emerging for C/C++, such as proper dependency managers like the FOSS Conan project for native code (primarily C/C++) binaries. With this development and Jenkins pipelines we can reimagine DevOps in an IoT/Native world.
In this talk we will demonstrate breaking up a C++ application into several binary dependencies, as is best practice in other modern languages, drawing on both open source and internal components. We will also show a set of Jenkins pipelines that maintain those dependencies and assemble the application from individual, independent binaries into a final product ready to be distributed to consumer endpoints, delivered as Docker containers or natively.
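A per-library pipeline in this style might look like the following sketch, where each C++ component is built, tested and published as a Conan package so downstream builds consume prebuilt binaries instead of rebuilding the world. The package, user/channel and remote names are illustrative assumptions:

```groovy
// Hypothetical Jenkinsfile for one C++ component managed with Conan.
pipeline {
    agent any
    stages {
        stage('Build package') {
            steps {
                // conan create builds the library and packages its binaries
                sh 'conan create . mycompany/stable'
            }
        }
        stage('Test package') {
            steps {
                // consume the freshly built package from a test project
                sh 'conan test test_package mathutils/1.2.0@mycompany/stable'
            }
        }
        stage('Upload') {
            steps {
                // publish binaries to an internal Conan remote (e.g. Artifactory)
                sh 'conan upload "mathutils/1.2.0@mycompany/stable" -r internal --all'
            }
        }
    }
}
```

The application's own pipeline then only resolves and links these published binaries, which is what makes independent versioning and reuse of native components practical.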
As development paradigms shift to meet the demand of the consumer, so must testing practices. But how can one keep up with these changes and see into the future of test automation? Join Titus Fortner, solutions architect at Sauce Labs, as he shares how priorities and technologies change as testing evolves, and organizations mature and adopt CD (and other modern paradigms/principles). Drawing from his own experiences, he will start from a strategy perspective and demonstrate how to automate manual processes and bottlenecks. That will be followed by a demonstration of how to isolate problems early by shifting your testing left, once the proper automated testing framework and coverage is in place. Titus will conclude the presentation with a discussion around remediation as the next wave of automated testing. Implementing this as the next step after isolation can drastically reduce time spent on debugging, thus increasing developer efficiency. This session will leave you with a better understanding of current best practices in automated testing, as well as a view into the future of new technologies to help teams deliver better software, faster.
When building Docker images we often use multiple build steps and Dockerfiles to keep the image size down. Using multi-stage Docker builds we can eliminate this complexity, bringing all of the instructions back into a single Dockerfile while still keeping those images nice and small.
One of the most challenging things about building images is keeping the image size down. Each instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else. It was very common to have multiple Jenkins pipeline steps and/or projects with unique Dockerfiles for different elements of the final build. Maintaining multiple sets of build instructions like this is complicated and error-prone.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base image, and each begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image and simplifying both the Dockerfile and the Jenkins configuration needed to produce your images.
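A minimal multi-stage Dockerfile looks like this (Go and the specific base images are illustrative choices, not a requirement of the technique):

```dockerfile
# Stage 1: use the full toolchain image to compile the binary.
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: start from a tiny base and copy only the compiled artifact,
# leaving the compiler, sources and build caches behind in stage 1.
FROM alpine:3.18
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["app"]
```

Only the layers of the final stage end up in the published image, so the result can be a few megabytes even though the build stage pulled in a full compiler toolchain.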
Segregation (or separation) of duties is a controversial topic in organizations that have lived with a legacy hierarchy long enough to confuse separation of duties with separation of departments. These organizations experience severe delays in implementing Continuous Delivery Pipelines, since Pipelines relentlessly gnaw through departments and silos (and rightfully so), and can therefore be seen as posing a risk to the business and its customers.
This talk explains what Segregation of Duties entails and why it is critical for governance. Continuous Delivery Pipelines improve the speed, quality and predictability of our product releases and are critical for sustainability, so trading one for the other is not an option. This session dives into how to design Pipelines that honor Segregation of Duties while still releasing quality products frequently and predictably to Production.
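One common way to honor segregation of duties inside a single pipeline is an explicit approval gate restricted to a separate authorized group, so the people who built an artifact cannot also release it. The following sketch uses Jenkins' `input` step; the group and environment names are illustrative:

```groovy
// Hypothetical sketch: promotion to production requires sign-off from a
// group distinct from the developers who produced the build.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps { sh 'make test package' }
        }
        stage('Approve release') {
            steps {
                // only members of the release-managers group may proceed
                input message: 'Promote this build to production?',
                      submitter: 'release-managers'
            }
        }
        stage('Deploy to production') {
            steps { sh './deploy.sh production' }
        }
    }
}
```

Because the approval is recorded by the pipeline itself, the audit trail that governance requires comes for free, without reintroducing a departmental hand-off.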