Stop Hoping for Quality and Just Test It!


As I continue to apply more engineering rigor to the release process in my team, I hear statements referring to engineers being hopeful and hoping things will go well.  Hoping is not the correct way to ship software.  I also hear a lot of statements like “we are confident this will work”.  Confidence, although great to have as an engineer, cannot be the sole indicator that software is ready to release.  What you need to ship high-quality software is testing.  It's having the data that shows you ran appropriate tests and validated not only that your software works when it should, but that it behaves correctly when it shouldn't, when you take an erroneous path through it, and that it fails gracefully if necessary.  My team has incorporated an extra check to make sure we truly are ready to ship our software when we think we are.  We call these extra checkpoints Release Review meetings or Go/No-Go meetings.  Think twice before saying you are confident, because it may come across like you are trying to sell the fact that the software is ready to release.  This is not the place for a sales pitch.  The people who need to give the positive votes in a Release Review meeting don't just need a statement of confidence.  Along with it, they need to see the data that backs it up, that proves all the correct items were tested, and that proves the software works as expected.  I see many confident and hopeful software engineers working late nights and weekends because their confidence and hope were short-lived and misplaced.  Please don't be one of them.

When we were all learning how to program and how computers work, one of the first things we learned is that the computer, and specifically the software, only does what you tell it to do.  If you incorrectly tell it to do something, it will.  Software can't figure out your intentions.  It doesn't have a mind of its own, and it doesn't think “hey, I'm betting my programmer really wanted to do this and not that”.  (Although some day it would be great if it could!)  Hoping your software does something is a programmer's way of assuming the software understands his or her intentions.  The only way to know if your software is doing what you want it to do is to test it.  You can't hope, and you can't rely on your confidence as a programmer.  You can only test it.  Walk through the customer scenarios.  Walk through the places where things could fail and make sure it fails gracefully or recovers without failing.  Figure out what you are missing, where your program can fail that you haven't considered, and then test that and see what actually happens.
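
To make that concrete, here is a minimal sketch of what testing both paths can look like.  The parse_discount_code function and its rules are entirely hypothetical; the point is that there is one test for the path where the software should work and one for the erroneous path where it should fail gracefully:

```python
import unittest


def parse_discount_code(code: str) -> int:
    """Return the discount percentage for a code like 'SAVE15'.

    Hypothetical example function: it raises ValueError on bad input
    instead of crashing or returning garbage, so callers can fail
    gracefully.
    """
    if not code or not code.startswith("SAVE"):
        raise ValueError(f"unrecognized discount code: {code!r}")
    try:
        percent = int(code[len("SAVE"):])
    except ValueError:
        raise ValueError(f"discount code has no numeric part: {code!r}")
    if not 0 < percent <= 50:
        raise ValueError(f"discount out of allowed range: {percent}")
    return percent


class TestDiscountCode(unittest.TestCase):
    def test_valid_code(self):
        # The path where the software works when it should.
        self.assertEqual(parse_discount_code("SAVE15"), 15)

    def test_erroneous_path_fails_gracefully(self):
        # The path where it shouldn't work: we expect a clear error,
        # not a crash or a silently wrong answer.
        with self.assertRaises(ValueError):
            parse_discount_code("SAVE999")
        with self.assertRaises(ValueError):
            parse_discount_code("")


if __name__ == "__main__":
    unittest.main()
```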

Where is your program going to fail?  You should always ask that question.  If your software is large and complex, that can be a hard question to answer, so consider asking yourself: where are you taking risks?  What I mean by this is, where are the areas in your software:

  • That have dependencies on code outside what your team owns
  • That have some code that is unstable and is known to produce a lot of defects
  • That have some code that is a bit unknown due to it being legacy software, written by people who no longer work on the team and didn’t comment it well
  • That have some code that is written in a complex way that makes it difficult to understand
  • That don't have enough test coverage
  • That, when released, have no way to roll back or fix forward your changes if problems occur

Communicating risks is hugely important in understanding the state of your software.  Understanding the risks early leads to people taking action to mitigate them and that leads to better software overall.  Communicating risks within your code is not a sign of weakness.  It's a sign that you understand all aspects of your very complex software system and you have the confidence as an engineer to state where the gaps are.  Good software engineers know how to test their code for quality and how to communicate the risks and gaps in their software ecosystem.

If you have read this far, I'm going to assume this topic interests you so let me ask you to do a little assignment.  When you are at work, look at your feature, user story, or overall ecosystem and come up with 3 risky areas where your software may fail.  Rate the areas as high, medium, or low.  And then determine the best way to mitigate those risks.  Do you need to refactor your code, remove your unnecessary dependencies, or do more testing?  Now go talk to a coworker about the risky areas you uncovered and post your thoughts in the comment section of this blog.  Is it difficult to find 3?  Is it difficult to determine what a risky area in your software is?  Did you find this exercise easy?  I'd love to hear from you.  Please add a comment letting me know if this assignment was helpful.

Comments (5)

  1. EG says:

    Writing software is easy.  One of the easiest things in the world, because like you said – fundamentally, it's just following a list of commands.  Ensuring your software works as you expect is also easy, because an Engineering team has several places to ensure this:

    1.  Develop your new code (whether feature or bug fix) in a branch.  

    2.  Every developer should write unit tests for the code they write.  If they don't think they need a unit test, then replace them with a developer who will.  Keep this simple, and use mock objects to reduce complexity (see the sketch after this list).

    3.  Write functional tests and automate them.

    4.  Do not let the developer's branch merge into your mainline until 2 & 3 pass consistently.

    5.  Once in your mainline, continue to run the tests in 2 & 3 for every new check-in to your mainline.

    6.  Always have a plan to revert a change if you run into an issue missed by 2 & 3.
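
    To make point 2 concrete, here is a minimal sketch of a unit test that uses a mock object in place of an outside dependency.  The checkout function and the payment-gateway interface are made up purely for illustration:

    ```python
    from unittest import TestCase, main
    from unittest.mock import Mock


    def checkout(cart_total: float, gateway) -> str:
        """Hypothetical function under test: charges the customer through
        an external payment gateway and returns an order status."""
        if cart_total <= 0:
            return "rejected"
        result = gateway.charge(cart_total)
        return "confirmed" if result.succeeded else "failed"


    class TestCheckout(TestCase):
        def test_successful_charge(self):
            # Mock the external gateway so the test has no network dependency.
            gateway = Mock()
            gateway.charge.return_value = Mock(succeeded=True)
            self.assertEqual(checkout(25.00, gateway), "confirmed")
            gateway.charge.assert_called_once_with(25.00)

        def test_declined_charge_fails_gracefully(self):
            # The unhappy path: a declined charge should produce a clear
            # status, not an unhandled error.
            gateway = Mock()
            gateway.charge.return_value = Mock(succeeded=False)
            self.assertEqual(checkout(25.00, gateway), "failed")


    if __name__ == "__main__":
        main()
    ```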

    For my exercise, the biggest areas of risk are:

    1.  User story:  If requirements are not clear and complete before the story gets to the developer, the developer will still finish the story.  Many times that story does not end the way the Product Owner would have ended it.

    2.  Code reviews:  Any process has the potential to become automatic or thoughtless, and code reviews are no exception.  Many times the reviewer is just 'checking the box' to get to the next stage (and being hopeful, as you mentioned), and that is not helpful at all.

    3.  Unrealistic dates:  Many times a date is determined by means other than how long it takes to write quality code.  Maybe it's a push from management because you're already behind.  Or maybe engineering made a poor estimate and doesn't want to admit it.  Whatever the reason, forcing something to go out before it's done is one of the riskiest things you can do, and it almost always results in a problem more costly than just finishing it right the first time.

  2. KS says:

    Great post! I often find myself telling my teams, "Hope is not a strategy". However, reducing surface areas IS a strategy. More often than not, there are multiple factors that directly affect the quality of a software project:

    * Date-Driven Development

    * Incomplete/incorrect requirements

    * Focus on speed over correctness

    * Lack of acceptance criteria

    I could go on, but the Seahawks game is about to come on. 🙂

    My strategy for reducing surface areas generally involves tightening the feedback loops. Ironically, getting working functionality in place and in front of stakeholders sooner rather than later is the key tool even though that may seem to fall into the camp of "focus on speed over correctness". Like in judo, I try to use speed to highlight incorrectness and then address the problems by slowing down.

    All of this comes down to a key agile tenet that has been lost on all the zealots out there. We want to increase visibility to all: the team, stakeholders, and customers. With that visibility, we hope to shine a light on the issues in our processes and then take concrete actions to fix them. Often, one of the first and most important actions is increasing and improving the testing effort.

    "Do you need to refactor your code, remove your unnecessary dependencies, or do more testing?"

    Yes.

  3. Kevin Goldsmith says:

    How about also designing your software to be fault-tolerant, so that you anticipate things not working and handle them without user impact as best you can? With connected software (like almost everything these days), that should also be part of the pattern around quality…
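
    For example, purely as a sketch (the recommendation service and cache objects here are made up), the idea is to fall back to a last-known-good or default answer when a dependent service fails, so the user never sees the outage:

    ```python
    import logging

    logger = logging.getLogger(__name__)

    # Hypothetical fallback shown when the recommendation service is
    # unavailable, so the user still gets a working page.
    DEFAULT_RECOMMENDATIONS = ["bestsellers", "new-arrivals"]


    def get_recommendations(user_id, service, cache):
        """Return recommendations while tolerating failure of the remote service.

        `service` and `cache` are stand-ins for whatever clients the real
        system uses; the point is the fallback chain, not any specific API.
        """
        try:
            recs = service.fetch(user_id)   # remote call that may time out or fail
            cache.store(user_id, recs)      # remember the last good answer
            return recs
        except Exception as exc:            # network error, timeout, bad response...
            logger.warning("recommendation service failed: %s", exc)
            cached = cache.get(user_id)
            return cached if cached else DEFAULT_RECOMMENDATIONS
    ```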

  4. Ryan says:

    Anita, how do you run go/no-go meetings when projects ship very rapidly, weekly or even daily?

  5. anita george says:

    Thank you everyone for your comments.  Ryan – I have go/no-go meetings set up every Monday, Wednesday, and Friday in the afternoon.  If something is ready to go to production, we review it in one of these meetings.  Some releases don't get reviewed, like small data script changes or bug fixes.  If we are lighting up a new feature or changing how the user interacts with our system, then it goes through a review.  By knowing when these meetings are, teams plan accordingly.  Sometimes, if it's not timed properly (let's say a team plans to release on Thursday), we'll review it on Wednesday with a conditional go if everything looks good after the final testing is complete.  I hope that helps.