Another test pattern template


I’ve been toying with the idea of a simpler pattern for test case design.


My goal in this exercise is to find a way for good testers to formalize the techniques they use as test patterns, in a manner that lets them document a pattern quickly while keeping some formality in the definition.


My initial draft of such a template is below:




  • Name



    • A memorable name – something you can refer to in conversation


  • Problem



    • One-sentence description of the problem that this pattern solves.


  • Analysis



    • A one-paragraph description of the problem area. This should answer the question of how this technique is better than simply poking around.


  • Design



    • Answers the question of how this pattern is executed (how the pattern turns from design into test cases).


  • Oracle



    • What are the expected results? (May be included in Design.)


  • Examples



    • List examples of how this pattern finds bugs


  • Related patterns



    • List any related patterns (if known)

Let’s take something simple that most testers know about and apply it to this template.




  • Name



    • Boundary Value Analysis (BVA)


  • Problem



    • Many errors in software occur near the edges of physical and data boundaries.


  • Analysis



    • Choose test cases on or near the boundaries of the input domain of variables, with the rationale that many defects tend to concentrate near the extreme values of inputs. A classic example of boundary-value analysis in security testing is to create long input strings in order to probe potential buffer overflows. More generally, insecure behavior in boundary cases is often unforeseen by developers, who tend to focus on nominal situations instead.


  • Design



    • For each input value, determine the minimum and maximum allowed values (min and max). Design a set of test cases that tests min, max, min - 1, and max + 1 (note that BVA is sometimes defined to include min + 1 and max - 1). Test cases should comprise the following:



      • Input(s) to the component;


      • Partition boundaries exercised;


      • Expected outcome of the test case.


  • Examples



    • For an input field that expects a number between 1 and 10, test 0, 1, 10, and 11.


    • <others deleted>


  • Related patterns



    • Equivalence class partitioning (ECP)
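
The BVA design steps above can be sketched in code. This is a minimal illustration, not part of the original post: `bva_values`, `in_range`, and the 1-to-10 field are all hypothetical names invented here to show how the pattern turns a min/max range into concrete test cases with inputs, boundaries exercised, and expected outcomes.

```python
def bva_values(minimum, maximum, include_inner=False):
    """Return boundary test values for an input range [minimum, maximum].

    By default this is the two-sided form described above:
    min - 1, min, max, and max + 1. Passing include_inner=True adds
    min + 1 and max - 1, the alternative definition the post mentions.
    """
    values = [minimum - 1, minimum, maximum, maximum + 1]
    if include_inner:
        values += [minimum + 1, maximum - 1]
    return sorted(values)


def in_range(value, minimum, maximum):
    """A stand-in for the component under test: accepts values in range."""
    return minimum <= value <= maximum


# Each test case records the three elements listed under Design:
# the input, the boundary exercised, and the expected outcome.
if __name__ == "__main__":
    for v in bva_values(1, 10):          # -> 0, 1, 10, 11
        expected = 1 <= v <= 10          # the oracle: in-range inputs pass
        actual = in_range(v, 1, 10)
        assert actual == expected, f"input {v}: got {actual}"
```

Running this against the example input field (a number between 1 and 10) exercises exactly the four cases listed above.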

Sometimes, I feel like this list is too long, and moments later find it too short. I am sure I will play with it over time. Binder concluded that every test design pattern has the following unique and essential elements:




  • Fault model: why this approach is better than just poking around.


  • Test model: what facets of the implementation under test should be considered and how they should be abstracted. The test model is closely related to the fault model.


  • Test procedure: How to transform an application model into test cases.


  • Oracle: How to evaluate actual results.


  • Entry criteria: Meeting an entry criterion solves two common problems: the false confidence that can result from skipping component tests and passing system-scope tests, and the time wasted when components aren't stable enough to test at system scope. Trivially obvious perhaps, but frequently ignored.


  • Exit criteria: An answer to the question "how much testing is enough?"

I can’t say exactly why I didn’t just pick these six and be done with it. In my mind, the design element contains the entry and exit criteria, the problem analysis contains the fault and test models, and this arrangement allows those elements to be expressed more easily. I do, however, understand that I need to put more thought (and more thinkers) into this problem and see how it grows.


I plan to take a sampling of senior testers (I have them in the classroom, I may as well take advantage of them) and have them generate some patterns based on the discussion above. Once I finish that study, I’ll update the template and share anything interesting that I discover.
