One of my favorite test quotes comes from a former peer of mine: "development is finite, but testing is infinite." He used that statement to reinforce the challenge of the software test function and the need to constantly assess the ROI (return on investment) of testing activities. There's always something more to test.
Short of hiring Buzz Lightyear ("to infinity… and beyond!") as a software tester, we need to figure out how to prioritize our testing activities. As noted above, ROI should guide the way, but it isn't an easy thing to measure.
For me, the first step in test prioritization is assessing product risk. The definition of risk that I use is the following (for a detailed example, see https://technet.microsoft.com/en-us/library/cc535373.aspx):
- Risk exposure = (probability of the risk event happening) X (impact of the risk event)
So let's think about this in the context of a prime test activity: creating an automated regression test (see Automated Regression Test Qualities for more information). Here's how I think about prioritizing regression tests for a particular software artifact based on risk:
- Probability = the likelihood of a regression happening in this artifact. Some key inputs to assessing probability include churn, complexity, and existing regression coverage of the artifact.
- Impact = the estimated cost of the regression should it happen. Some key inputs to assessing impact include the severity of a bug resulting from the regression (security, data corruption, etc.), the relative use of the artifact, and customer perception. For example, an issue on the welcome screen of the product may not be severe, but the impact may still be high if the screen comes up often and the issue creates a perception of low quality in the customer's mind.
So Regression Exposure = (likelihood of regression) X (impact of regression). While measuring these inputs in absolute terms is difficult, a relative comparison across artifacts is something you can work through.
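To make the relative comparison concrete, here is a minimal sketch of scoring regression exposure. It assumes probability and impact are estimated on a simple 1–5 ordinal scale; the artifact names and scores are illustrative, not taken from any real product.

```python
def regression_exposure(probability, impact):
    """Regression exposure = (likelihood of regression) x (impact of regression)."""
    return probability * impact

# Hypothetical artifacts scored relative to each other (1 = low, 5 = high).
artifacts = {
    "payment module": {"probability": 3, "impact": 5},  # high-severity bugs possible
    "welcome screen": {"probability": 2, "impact": 4},  # seen often; quality perception
    "admin settings": {"probability": 3, "impact": 2},  # some churn, but rarely used
}

# Rank artifacts from highest to lowest exposure.
ranked = sorted(
    artifacts.items(),
    key=lambda item: regression_exposure(item[1]["probability"], item[1]["impact"]),
    reverse=True,
)
```

The exact numbers matter less than the ordering they produce: the ranking tells you where regression coverage buys down the most risk first.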
Note that this provides the return aspect of ROI. Since some tests are easier to create and maintain than others (see Test Automation Pyramid), the effort to create the regression test must also be considered for the investment portion of the ROI assessment.
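Folding effort into the picture could be sketched as follows. This assumes effort is estimated in relative points (with UI-level tests costing more than unit tests, per the test-automation-pyramid idea); the candidate tests and numbers are hypothetical.

```python
def roi_score(exposure, effort):
    """Return per unit of investment: exposure mitigated / effort to automate."""
    return exposure / effort

candidates = [
    # (test name, regression exposure score, relative effort to automate)
    ("unit test: pricing rules", 12, 1),
    ("API test: order flow", 15, 3),
    ("UI test: welcome screen", 8, 5),
]

# Rank candidate tests by ROI, highest first.
by_roi = sorted(candidates, key=lambda c: roi_score(c[1], c[2]), reverse=True)
```

Note how the ordering shifts once effort is included: a cheap unit test covering moderate exposure can outrank a costly UI test covering slightly higher exposure.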
This approach can be applied to other testing activities as well. Great testers are constantly thinking about the risk they are mitigating and the ROI of their activities. In my next post, I will walk through how we applied this regression-risk approach to prioritize form-based tests during AX'7' development.