Anders Wallgren
Chief Technology Officer of Electric Cloud



Test-Driven Development By @ElectricCloud | @DevOpsSummit #DevOps


Measure Twice - Cut Once: The Benefits of Test-Driven Development

Automated software testing continues to play a vital role in enterprise software delivery and the speed with which software-driven organizations can begin to produce value. You want to ensure quality software, alongside fast time to market. How do you balance the need for speed with the need to test everything to deliver high-quality software to the end user?

At last year's DevOps Enterprise Summit, which Electric Cloud hosts in partnership with The Phoenix Project author Gene Kim, Kim asked every speaker to end their presentation with a slide covering one of two topics: "Here's what I don't know how to do" or "Here's what I'm looking for help with." This gave all attendees at the summit insight into the top enterprise IT problems facing the DevOps community at large. It came as no surprise that several of the top challenges revolved around testing, and more specifically, around better strategies and tactics for creating automated tests for legacy applications.

DevOps and test-driven development
I'm fascinated by the concept of testing in the software development cycle because this is where a lot of the value lies. What good is a killer feature if it doesn't work, or a new release if it gets postponed for months while you try to fix various bugs? Streamlining and accelerating testing not only speeds up your feedback loops (and your entire delivery pipeline) but also serves as the most immediate gratification and validation point in your process: you've developed a new feature, and you've proven that your code works!

Talking with customers, it becomes obvious that there's no "one size fits all" in DevOps in general and in software testing in particular. Organizations need to develop the patterns and processes that make the most sense for their teams and their business. For some, test-driven development (TDD) may prove an effective practice to promote both product quality and faster delivery time.

In many developers' minds, TDD might seem like a completely backwards process: the idea that a test dictates how and what you develop can feel topsy-turvy. But in reality, TDD is an incredibly effective development method that can lead to robust, stable, high-quality releases. I want to share some of the issues I've faced in my own work with TDD, along with ways to overcome them as you adopt TDD as an effective tool in your development arsenal.

When following a TDD approach, the developer first writes an automated test case for a new piece of functionality or a revision to the code. This initial test case will fail at first. The developer then writes the minimum amount of code needed to pass the test. Once the test passes, the developer refactors the code to acceptable standards, eliminating redundancy. This process is repeated over and over as the code evolves and more functionality is added. TDD is therefore an iterative, incremental way to add value to your code while gaining confidence, with each new iteration, that the code is viable and working all the way into production.
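The cycle above can be sketched in a few lines of Python. The `slugify` function and its tests here are purely illustrative, not from the article:

```python
# Step 1 (red): write the test first. Running test_slugify() at this
# point would raise a NameError, because slugify doesn't exist yet --
# that's the expected initial failure in TDD.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2 (green): write the minimum code that makes the test pass.
def slugify(title):
    return "-".join(title.split()).lower()

# Step 3 (refactor): with the test guarding behavior, the code can be
# restructured or extended, and the cycle repeats for each new feature.
test_slugify()
print("all tests pass")
```

Each pass through red/green/refactor adds one small, verified increment of behavior, which is where the "confidence with each iteration" comes from.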

Pros and cons
Before we dive any deeper, let's talk about the good news and bad news.

Starting with the bad news: approaching software development with a TDD mindset can be intimidating. When you sit down to start developing a new feature, you first need to think through everything you'll need to build a test for, which requires some planning. For less experienced developers, this can be overwhelming; they might feel it's better to dive into the code before designing a suitable test. It takes practice and experience to determine the minimum viable test for each limited piece of functionality being developed, and to expand on that initial test case as your code evolves. It's a matter of finding the right balance.

To validate that your code does what it's supposed to do, you have to make sure you address as many plausible issues as possible as you flesh out the functionality. Then again, you also need to be mindful not to go down a rabbit hole of designing a test for each rare corner case that might never occur. Finally, you don't want to be thrown off track by designing test cases that are so complex that it takes too long to complete the corresponding codebase to test against.

One straightforward way to address these two issues is a technique called "hazard analysis": identifying probable issues, ordering them from most to least likely, and beginning the mitigation effort from there. By building a list that lets you decide what's most worthy of your time, you won't be chasing one-in-a-billion bugs that are next to undetectable. A hazard analysis keeps you focused, with your eyes on the prize. Catching bugs early, as part of your development process (rather than waiting for QA, or worse yet, discovering them in production), is the most cost-effective way to ensure that quality code makes it to end users faster.
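As a minimal sketch of that prioritization, a hazard list can be ranked by expected cost before any tests are written. The hazards and scores below are illustrative assumptions, not from the article:

```python
# Each entry: (description, likelihood, impact on a 1-5 scale).
# Values are made up for illustration.
hazards = [
    ("malformed user input", 0.60, 3),
    ("network timeout",      0.25, 4),
    ("disk full mid-write",  0.05, 5),
    ("cosmic-ray bit flip",  1e-9, 5),  # the one-in-a-billion bug: skip it
]

# Rank by expected cost (likelihood * impact), highest first, and
# spend test-writing effort from the top of the list down.
ranked = sorted(hazards, key=lambda h: h[1] * h[2], reverse=True)

for desc, likelihood, impact in ranked:
    print(f"{desc}: expected cost {likelihood * impact:.2g}")
```

The point of the exercise is the cutoff: anything near the bottom of the list isn't worth a dedicated test case yet.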

A mindset shift
While TDD requires developers to invest time and effort in honing their testing skills, the organization benefits from better-designed procedures for testing code efficiently. Still, beyond the technical hurdles (assuming we're all accomplished engineers), another common obstacle to TDD is psychological: the feeling that you're not actually getting anything done. Traditional development is like building a house: you spend a whole day working, you've laid some foundation, and you can clearly measure your progress. With TDD, you spend the first day (or week, or month) just figuring out how your foundation might crack and where it might need extra support, and it can feel like you have nothing to show for it. That can be a tough feeling to overcome.

Approaching this issue with the right frame of mind can help you get past it. TDD follows the age-old carpenter's saying, "measure twice, cut once." By spending your time up front building tests and looking for potential issues (measuring), by the time you actually get to developing your release (cutting), you can feel confident that your code will work the way it should. And if it doesn't, the development team will have a quick feedback loop through which to discover that. So while developers who start coding a new feature on day one might feel like they've reached the finish line faster, any bugs discovered downstream will take much longer to fix once you're trying to stabilize your code for release.

Along those same lines, don't think you won't feel any sense of accomplishment with TDD. Once you have your tests in place and you're working single-mindedly on passing them, you get the satisfaction of knowing your code passes the test and that you're the new best friend of all your QA colleagues who won't need to spend as much time on your code when it reaches the testing stage. For me, watching the light turn green on one of my tests is as much a feeling of accomplishment as launching an entire release.

Another key adjustment is that TDD is slow, or at least it feels slow. When you're spending so much time building tests and thinking of possible pitfalls, your actual development process can feel like it has slowed down, especially if you're used to throwing your code over the wall to QA and letting them find the bugs for you. I often warn people who are just starting with TDD to take whatever development time they're used to and then double it. While that may sound like a big investment, you have to think about what you're getting for the additional effort.

The payoff
When you start your development process by testing first, you won't have to run nearly as many tests at the end of your development cycle, and once you do deploy, things are much less likely to go south. It's much easier to fix something that's broken before it's released than after it's in the hands of the user. Imagine if I told you that never again will one of your releases break once it's out in the wild. Doesn't that sound nice? While TDD can't promise that, it significantly decreases the likelihood of your code breaking after you've released it. Hitting the "go" button knowing you put real time and effort into preventing potential snags feels much better than hoping and praying that nothing goes wrong.

TDD isn't for everyone, and for some people the shift in mindset can be challenging. But if you can really focus on the "measure twice, cut once" mentality of TDD, you can see some amazing results. I hope you give it a try and see for yourself how TDD can fit into your next development cycle.

To see this article in its original publication visit TechBeacon.com (http://techbeacon.com/measure-twice%E2%80%94cut-once-benefits-test-drive...)

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. He brings over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse, and management positions at Macromedia (MACR), Common Ground Software, and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia's Director 7 and various Shockwave products.