Anders Wallgren

Chief Technology Officer of Electric Cloud



Starting and Scaling DevOps in the Enterprise: Get the Map to Guide You on Your Path


The original post can be found on the Electric Cloud blog.


This is a guest blog post by Gary Gruver, one of Electric Cloud's strategic advisors. Gary is the co-author of Leading the Transformation; A Practical Approach to Large-Scale Agile Development; and Starting and Scaling DevOps in the Enterprise.

Recently, I had the pleasure of participating in Electric Cloud’s Continuous Discussions (#c9d9) community podcast – discussing scaling Agile and DevOps in the enterprise. The topic is near and dear to my heart, as I’ve spent my entire career – at HP LaserJet, Macy’s.com and consulting with Fortune 500 companies – helping large enterprises take their software delivery to the next level.

I’ve written several books on the subject, the most recent one – Starting and Scaling DevOps in the Enterprise – is an accumulation of everything I have learned from working with large organizations on a constant journey to improve their software development processes.

DevOps is a fundamental shift in how leading-edge companies are starting to manage their software and IT work. Today’s technology leaders realize that they need to embrace DevOps to streamline and scale their software releases, or be left behind. However, implementing DevOps in large enterprises – with silo-based thinking, tightly coupled architectures and cultural inertia – comes with a unique set of challenges.

My latest book provides a concise framework for analyzing your delivery processes and optimizing them by implementing the DevOps practices that will have the greatest immediate impact on the productivity of your organization. It covers the engineering, architectural, and leadership practices that are critical to achieving DevOps success.

Since a one-hour podcast is clearly not enough time to share all I've learned on this topic, I thought it fitting to offer you my latest book for free. Below you'll find the first chapter of the book, and a link for downloading the full copy.

I hope this will be a helpful resource for you on your DevOps path!

Chapter 1: DevOps and the Deployment Pipeline

Software is starting to play a much larger role in how companies compete across a broad range of industries. As the basis of competition shifts to software, large traditional organizations are finding that their current approaches to managing software are limiting their ability to respond as quickly as the business requires. DevOps is a fundamental shift in how leading edge companies are starting to manage their software and IT work. It is driven by the need for businesses to move more quickly and the realization that large software organizations are applying these DevOps principles to develop new software faster than anyone ever thought possible. Everyone is talking about DevOps.

In my role, I get to meet lots of different companies, and I realized quickly that DevOps means different things to different people. They all want to do “DevOps” because of all the benefits they are hearing about, but they are not sure exactly what DevOps is, where to start, or how to drive improvements over time. They are hearing a lot of different great ideas about DevOps, but they struggle to get everyone to agree on a common definition and what changes they should make. It is like five blind men describing an elephant. In large organizations, this lack of alignment on DevOps improvements impedes progress and leads to a lack of focus. This book is intended to help structure and align those improvements by providing a framework that large organizations and their executives can use to understand the DevOps principles in the context of their current development processes and to gain alignment across the organization for successful implementations.

Part of the issue with implementing DevOps principles and practices is that there are so many ideas out there about what DevOps is, and so many different ways to define it. The most consistent and comprehensive definition I have heard lately is from Gene Kim, a co-author of The Phoenix Project and The DevOps Handbook. He is a great thought leader and evangelist for the DevOps movement. In order to get us all on the same page for our work here, we will use his definition of DevOps:

DevOps should be defined by the outcomes. It is those sets of cultural norms and technology practices that enable the fast flow of planned work from, among other things, development through tests into operations, while preserving world class reliability, operation, and security. DevOps is not about what you do, but what your outcomes are. So many things that we associate with DevOps, such as communication and culture, fit underneath this very broad umbrella of beliefs and practices.

People have such different views of DevOps because what it takes to improve quality and flow at every step, from a business idea all the way out to working code in the customer's hands, differs from organization to organization. The DevOps principles designed to improve this process are largely about implementing changes that help coordinate the work across teams. The movement started with leading-edge, fairly small companies that were delivering code more frequently than anyone thought possible. DevOps was also very successful in large organizations like Amazon, where they re-architected their monolithic system to enable small teams to work independently. More recently, DevOps has started being leveraged in large organizations with tightly coupled architectures that require coordinating the work of hundreds of people. As it scaled into these larger, more complex organizations, people started assuming that the approaches for successfully coordinating the work across small teams would work just as well for coordinating the work across large organizations. The reality is that while the principles are the same for small teams and large, complex organizations, the implementations can and should be different.

Most large organizations don’t have that context as they start their DevOps journey. They have different people in different roles who have gone to different conferences to learn about DevOps from presentations by companies with different levels of complexity and different problems and have come back with different views of what DevOps means for them, like when the five blind men describe the elephant. Each stakeholder gives a very accurate description of their section of the DevOps elephant, but the listener never gets a very good macro view of DevOps. So, when they go to create their own elephant, nobody can agree on where to start, and they frequently want to implement ideas that worked well for small teams, but are not designed for complex organizations that require coordinating the work of hundreds of people. The intent of this book is to provide the overall view of the elephant to help large organizations gain a common understanding of the concepts and provide a framework they can use to align the organization on where to start and how to improve their software development processes over time. This is important because if you can’t get people in a large organization aligned on both what they are going to build and what approach they are going to use for prioritizing improvement, they are not very likely to deliver a DevOps implementation that will deliver the expected results. It will potentially have pieces of the different things that the organization has heard about DevOps, but it won’t really help the organization deliver code on a more frequent basis while improving or maintaining all aspects of quality. It is like having the five blind men build an elephant based on their understanding of the animal. 
It may have all the right parts, but it doesn't really look like or work like an elephant because they don't have a good macro view of the animal.

To clarify the macro view of DevOps, we will look at how a business idea moves to development, where a developer writes code, through the creation of the environment, to how code gets deployed, tested, and passed into production, where it is monitored. The process of moving from a business idea all the way out to the customer using a deployment pipeline (DP) was originally documented by Jez Humble and David Farley in their book Continuous Delivery. This book will leverage that framework extensively because I believe it represents the basic construct of DevOps. It captures the flow of business ideas to the customer and the quality gates that are required to maintain or improve quality.

It is my personal experience that creating, documenting, automating, and optimizing DPs in large software/IT organizations is key to improving their efficiency and effectiveness. You already have something in place that you use to get code from idea to production: that is your DP. Documenting it so everyone has a common view, and then optimizing it with value stream mapping, is a key step that helps align the organization. The DP defines and documents the flow of code through the system, and value stream mapping the DP helps to identify bottlenecks, waste, and other inefficiencies that can be addressed using DevOps techniques. Improving it will require a lot of organizational change management, but the DP will help everyone understand which processes are being changed at any one time and how they should start working differently.
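To make value stream mapping a DP concrete, here is a minimal sketch in Python. The stage names, process times, and wait times are entirely hypothetical, invented for this illustration; the point is only that separating value-adding time from waiting time exposes where the pipeline loses the most flow.

```python
# Illustrative value stream map of a deployment pipeline.
# All stage names and hours are hypothetical, made up for this sketch.
stages = [
    {"name": "code review",        "process_hrs": 2, "wait_hrs": 24},
    {"name": "build",              "process_hrs": 1, "wait_hrs": 2},
    {"name": "deploy to test env", "process_hrs": 3, "wait_hrs": 48},
    {"name": "regression testing", "process_hrs": 8, "wait_hrs": 72},
    {"name": "release approval",   "process_hrs": 1, "wait_hrs": 96},
]

process_time = sum(s["process_hrs"] for s in stages)  # value-adding work
wait_time = sum(s["wait_hrs"] for s in stages)        # queues and handoffs
lead_time = process_time + wait_time                  # idea to production

# Flow efficiency: fraction of elapsed time spent on value-adding work.
flow_efficiency = process_time / lead_time

# The stage with the longest wait is the first bottleneck to attack.
bottleneck = max(stages, key=lambda s: s["wait_hrs"])

print(f"Lead time: {lead_time} hrs")
print(f"Flow efficiency: {flow_efficiency:.1%}")
print(f"Biggest bottleneck: {bottleneck['name']} ({bottleneck['wait_hrs']} hrs waiting)")
```

Even with made-up numbers, the pattern is typical: the waiting between stages dwarfs the work itself, which is why optimizing the DP rather than individual stages pays off.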

The DP for a large organization with a tightly coupled architecture is a fairly complex concept to grasp. Therefore, in Chapter 2, we will start with the simplest example of a DP with one developer and will show the inefficiencies that can occur with one developer. Then, in Chapter 3, we will highlight the DevOps approaches that were designed to address those issues. We will also show the metrics you can start collecting to help you understand the magnitude of your inefficiencies so you can align your organization on fixing the issues that will provide the biggest benefit.

Once the basic construct of the DP is well understood, in Chapter 4 we will show how the complexity changes as you start scaling the DP from one developer to a team of developers. Having a team of developers working together on an application while keeping it close to release quality is a fundamental shift for most traditional organizations. It requires some different technical approaches by the developers, but it also requires a cultural shift that prioritizes keeping the code base stable over creating new features. This will be a big shift for most organizations, but it is very important because if you can’t get the developers to respond to the feedback from the DP, then creating it will be of limited value.

The next big challenge for large organizations, after they have had some success at the team level, is how to scale DevOps across the broader organization. They typically approach it by trying to get the rest of the organization to do what they did, because of the benefits it provided. This overlooks the fact that the biggest barriers to adoption are not technical, but instead involve organizational change management and getting people to work differently. The key to this adoption is helping the broader organization understand the principles, while providing as much flexibility as possible to allow teams to develop and take ownership of their own plans. In order to make this adoption of principles as flexible as possible, in Chapter 5 we will cover how to segment the work in large organizations into the smallest pieces possible to enable local control and ownership. For some organizations with loosely coupled architectures, this will result in a lot of small, independent teams where you only have to coordinate the work across tens of people. For other organizations with tightly coupled architectures that require large applications to be developed,
qualified, and released together, this will require coordinating the work across hundreds of people. It is important to start by grouping applications into these types because the things you do to coordinate the work across tens of people will be different than the types of things you do to coordinate the work across hundreds of people. While small teams will always be more efficient and deploy more frequently, the process of documenting, automating, and continually improving DPs is much more important for coordinating work across hundreds of people because the inefficiencies across large organizations are much more pronounced.

In Chapter 6, we will provide a quick overview of the approaches that work well for large organizations with small teams that can work independently. This topic will not be covered in a lot of detail because most available DevOps material already covers it very well. In Chapter 7, we will start addressing the complexities of designing a DP for large, tightly coupled systems. We will show how to break the problem into smaller, more manageable pieces and then build those up into more complex releasable systems. In Chapter 8, we will cover how to start optimizing these complex DPs, including the metrics that help focus changes in the areas where they will do the most to improve flow through the system. In Chapter 9, we will review and highlight the differences between implementing improvements for small independent teams and for large, complex systems.

Changing how a large organization works is going to take a while, and it is going to require changing how everyone both thinks about and does their actual work. A couple of things are important to consider when contemplating this type of organizational change: first, start where it provides the most benefit so you can build positive momentum, and second, find executives who are willing to lead the change and prioritize improvements that optimize the whole DP, instead of letting teams sub-optimize their own segment of it.

Once the DP is in place, it provides a very good approach for transforming how you manage large and complex software projects. Instead of creating lots of management processes to track progress and align different teams, you use working code as the forcing function that aligns the organization. Requiring all the different Development teams to integrate their code on a regular basis and ensure it is working with automated testing forces them to align their software designs without a lot of management overhead.

The move to infrastructure as code, championed by Jez Humble and David Farley, involves treating all aspects of the software development process with the same rigor as application code, and it has provided some major breakthroughs. It requires that the processes for creating environments, deploying code, and managing databases be automated with code that is documented and tracked in a source code management (SCM) tool, just like the application code. This forces a common definition of environments and deployment processes across Development, QA, and Operations teams and ensures consistency on the path to production. Here again, it is working code that aligns these different groups.

Moving to infrastructure as code increases direct communication between Development and Operations, which is key to the success of all the cultural and structural shifts DevOps requires. People no longer log on to computers and make changes that can't be tracked. Instead, they work together on common scripts for making infrastructure changes that are tracked in the SCM tool. At a minimum, this requires them to document any changes they are making so everyone can see what they are doing; ideally, it forces them to communicate directly about those changes so they can ensure the changes will work in every stage of the DP, all the way out to production. Having to use common code and common tools forces the collaboration.

The effect this collaboration has on efficiency cannot be overstated. Because the teams are aligned by having to ensure their code works together on a daily basis, management processes do not need to be put in place to address those issues. Software is notoriously hard to track well with management processes: collecting status updates everywhere doesn't work that well and takes a lot of overhead. It is more efficient when the teams resolve issues in real time. Additionally, it is much easier to track progress using the DP because, instead of creating lots of different managerial updates, everyone can track the progress of working code as it moves down the pipeline.
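The core of the infrastructure-as-code idea can be sketched in a few lines of Python. This is not any particular IaC tool, and every setting and version number here is hypothetical; the point is only that the desired environment is described as data kept under revision control, and a reconciler computes what has to change, rather than anyone logging in and editing machines by hand.

```python
# Minimal sketch of infrastructure as code: desired state lives in the SCM,
# and a reconciler diffs it against the running environment.
# All keys and values below are hypothetical examples.

desired = {  # checked into the SCM alongside the application code
    "app_version": "2.4.1",
    "jvm_heap_mb": 2048,
    "db_schema": "v37",
}

actual = {  # what the running environment currently reports
    "app_version": "2.4.0",
    "jvm_heap_mb": 2048,
    "db_schema": "v36",
}

def plan_changes(desired, actual):
    """Return {setting: (old, new)} for every setting that must change."""
    return {
        key: (actual.get(key), value)
        for key, value in desired.items()
        if actual.get(key) != value
    }

changes = plan_changes(desired, actual)
for key, (old, new) in changes.items():
    print(f"{key}: {old} -> {new}")
```

Because the desired state is an ordinary file under revision control, every change to it is reviewable and auditable, which is exactly the property that replaces untracked manual logins.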

This approach of a rigorous DP, with infrastructure as code and automated testing gating code progression, is significantly different from the approach ITIL uses for configuration management. Where the ITIL processes were designed to ensure predictability and stability, the DevOps changes have been driven by the need to improve speed while maintaining stability. The biggest changes are around configuration management and approval processes. The ITIL approach has very strict manual processes for any change to the configuration of production. These changes are typically documented and approved manually, with tickets in a change management tool, and the approved changes are then manually implemented in production. This approach helped improve stability and consistency, but it slowed down flow by requiring lots of handoffs and manual processes. The DevOps approach of infrastructure as code, with automated tests as gates in the DP, enables better control of configuration and more rigor in the approval process, while also dramatically improving speed. It does this by automating the process with code and keeping everything in the SCM tool. The proposed change is documented by the script change in the SCM. The approval criteria for accepting the change are documented by automated tests that are also in the SCM. Additionally, you know exactly what change was implemented, because it was done with automation code under revision control. The whole approach puts everything required for change management in one tool, with automation that is much easier and quicker to track. It also improves the rigor of the approval process by requiring the people who traditionally approve changes to document their criteria as automated tests, instead of making an arbitrary management decision for each change.
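To illustrate the criteria-as-tests idea, here is a small Python sketch of an approval gate. The three checks are stand-ins, not a real test suite, and the change record's fields are invented for this example; what matters is that the approval criteria are executable code kept in the SCM, and that the gate records exactly which criteria failed for the audit trail.

```python
# Sketch of automated tests as the approval gate in a DP: a change is
# promoted only if every versioned check passes. Checks and the change
# record's fields are hypothetical stand-ins for real pipeline results.

def unit_tests_pass(change):
    return change["unit_failures"] == 0

def deploy_script_verified(change):
    return change["deploy_exit_code"] == 0

def security_scan_clean(change):
    return change["critical_vulns"] == 0

# The gate list itself lives in the SCM, so changing the approval
# criteria is a reviewable code change, not an ad-hoc decision.
APPROVAL_GATES = [unit_tests_pass, deploy_script_verified, security_scan_clean]

def approve(change):
    """Return (approved, failed_gate_names); the names form an audit trail."""
    failed = [gate.__name__ for gate in APPROVAL_GATES if not gate(change)]
    return (not failed, failed)

change = {"unit_failures": 0, "deploy_exit_code": 0, "critical_vulns": 2}
approved, failed = approve(change)
print("approved" if approved else f"rejected by: {', '.join(failed)}")
```

Compare this with a ticket-based approval: here the criteria, the verdict, and the reason for rejection are all produced and recorded automatically, in one place.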

This approach provides some huge benefits for auditing and regulatory compliance. Where before the audit team would have to track the manual code changes, approval processes, and implementations in different tools, it is now all automated and easily tracked in one place. It dramatically improves compliance because computers are much better than humans at ensuring the process is followed every time. It is also easier for the auditing team because all the changes are documented in an SCM tool that is designed to automatically track and document changes.

These changes are dramatically improving the effectiveness of large organizations because they improve the flow of value while maintaining stability. Most importantly, setting up and optimizing a DP requires removing waste and inefficiencies that have existed in your organization for years. In order to improve flow, you will end up addressing lots of the inefficiencies that occur in coordinating work across people. The productivity of individuals will be improved by better quality and faster feedback while they are writing code, but the biggest benefits will come from addressing the issues of coordinating work within teams, across teams, and across organizations. This will require technical implementations and improvements, but by far the biggest challenge is getting people to embrace the approaches and change how they work on a day-to-day basis. The changes will be significant, but the benefits will be dramatic.


As software becomes the basis of competition, how we currently manage software limits the kinds of quick responses that businesses require. This is where DevOps steps in. It is all about improving speed while maintaining all aspects of quality. As businesses embark on DevOps journeys, though, they are finding that there are myriad ideas out there about what DevOps is and how it is defined. As this book will address, most large organizations don’t have a good framework for putting all these different ideas into context as they start their DevOps journey. This makes it difficult to get everyone working together on changes that will improve the end-to-end system. People working in a large organization need to be aligned on what they are going to build and need to find ways to prioritize improvement or else they won’t implement DevOps in ways that will deliver the expected results. As this book will show, documenting, automating, and optimizing DPs in large software/IT organizations improves efficiency and effectiveness and offers a very good approach for transforming how you manage large and complex software projects.

In the coming weeks, I will be sharing additional chapters from the book.
Can’t wait? You can download your free copy now.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software, and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products.