So What Should A Test Case Look Like?


What if you had a test case that looked like this? (Assuming a shape-drawing application such as Microsoft Visio…)

Logical.Projects.CreateNewProject();

Point rectangleStart = DataManager.ScenePointProvider.GetNextValue();
Point rectangleEnd = DataManager.ScenePointProvider.GetNextValue();
Logical.SceneElements.CreateRectangle(rectangleStart, rectangleEnd);

Point circleCenter = DataManager.ScenePointProvider.GetNextValue();
double circleRadius = DataManager.DoubleProvider.GetNextValue();
Logical.SceneElements.CreateCircle(circleCenter, circleRadius);
Logical.SceneElements.SelectAll();

Logical.Fill.Color = DataManager.FullSpectrumColorProvider.GetNextValue();
Logical.Stroke.Color = DataManager.FullSpectrumColorProvider.GetNextValue();

Logical.SceneElements.Copy();
Logical.SceneElements.Paste();
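
The Logical.* objects and the DataManager providers are part of the logical functional model the test talks to, not any real Visio API. As a rough idea of what one of those data providers might look like, here is a minimal sketch; the Point struct, the pool of values, and the round-robin behavior are illustrative assumptions, not our actual implementation:

public struct Point
{
    public double X;
    public double Y;
    public Point(double x, double y) { X = x; Y = y; }
}

public class ScenePointProvider
{
    // A pool of "interesting" points; repeated runs walk through it.
    private readonly Point[] pool =
    {
        new Point(0, 0),          // origin / canvas boundary
        new Point(100, 50),       // ordinary interior point
        new Point(-10, -10),      // off the canvas entirely
        new Point(10000, 10000),  // absurdly far outside the scene
    };
    private int next;

    // Each call returns the next point in the pool, wrapping around when
    // the pool is exhausted, so consecutive calls get different data.
    public Point GetNextValue()
    {
        Point value = pool[next];
        next = (next + 1) % pool.Length;
        return value;
    }
}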

Note that this test case:

  • Says nothing about how its actions are carried out. Simply running this test case a number of times could cause the full set of possibilities to be executed. Further, if this test case is run alongside many other tests that also say nothing about how their actions are carried out, each action would be executed enough times for each different execution method to be used multiple times. (A rough sketch of how this might work follows this list.)
  • Contains no verification. It will not need to change when the verification becomes more complete or when the expected results of any of its actions change.
  • Contains no references to any UI. It will not need to change regardless of how drastically the UI changes.
  • Will only need to change if the functionality it is testing changes.
  • Is very simple.
  • Is focused on actions a user might take – it looks quite similar to the steps in a help topic, in fact.
  • Writing it tests the spec.
  • Could be written before the code it exercises exists.
  • Could just “light up” once the code it exercises does exist.
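
To make that first bullet concrete, here is a loose sketch of how a single logical method might hide several execution methods behind it. The behavior classes and the round-robin selection are illustrative assumptions, not our actual Execution Behavior Manager; Point is the simple (x, y) struct sketched earlier.

public interface IExecutionBehavior
{
    void CreateRectangle(Point start, Point end);
}

public class CreateRectangleViaMenu : IExecutionBehavior
{
    public void CreateRectangle(Point start, Point end)
    {
        // Drive the menus: Insert > Shape > Rectangle, then type the coordinates.
    }
}

public class CreateRectangleViaMouseDrag : IExecutionBehavior
{
    public void CreateRectangle(Point start, Point end)
    {
        // Select the rectangle tool, press at start, drag to end, release.
    }
}

public static class SceneElements
{
    private static readonly IExecutionBehavior[] behaviors =
    {
        new CreateRectangleViaMenu(),
        new CreateRectangleViaMouseDrag(),
    };
    private static int next;

    // The test case calls this logical method; which UI path fulfills it
    // is chosen here (round-robin in this sketch), not in the test case.
    public static void CreateRectangle(Point start, Point end)
    {
        IExecutionBehavior behavior = behaviors[next];
        next = (next + 1) % behaviors.Length;
        behavior.CreateRectangle(start, end);
    }
}

Running the test case above a handful of times would then exercise the menu path, the mouse-drag path, and whatever other behaviors are registered, even though the test itself never mentions any of them.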

Writing one test that stands in for several similar tests, writing that test early in the feature development process rather than late, and modifying that test only when the feature undergoes semantic changes: this is our vision for changing the way we test. Now let me explain the technology we are building to help us reach that vision.

Comments (8)

  1. AndrewSeven says:

    "Now let me explain the technology we are building to help us reach that vision."

    The Friday cliff hanger! 😀

  2. Tester says:

    How does this test confirm expected results? What if the rectangle isn’t drawn?

    What if the app blows up? Where do you test for unexpected failures?

  3. The attributes that you mention are worthy, but shouldn’t the list contain something about the risk that you’re trying to test for, or the theory of error that might be at work?

    I mean: this test could identify a memory leak (especially if run a sufficient number of times); it could run through big sets of values; it could trigger a crash. But suppose that the user that you’re simulating tried to create a rectangle, and then changed her mind and deleted it. Could this test make sure that the SceneElements object can handle the deletion properly, free the memory correctly, display the data in an appropriate way on display resolutions of various sizes? Maybe not–and that would be okay. I see to some degree what the test is supposed to DO. I find it harder to see what the test is supposed to TEST.

    Cheers,

    —Michael B.

  4. Tester: We’ve decoupled (well, mostly anyway) verification from execution, so the test case doesn’t have to worry about verification at all. I’ll post more on that soon!

  5. Michael: It sounds like you are asking "What are you verifying?" One of my next posts will talk about how we’ve (mostly) decoupled verification from execution, allowing the test case to be completely ignorant about what is being verified and what that verification involves. This lets us reuse a single test in multiple contexts: it can be a Build Verification Test by turning off much of the verification, or it can be a comprehensive test by running it multiple times and having Execution Behavior Manager (http://blogs.msdn.com/micahel/archive/2005/05/25/OneMethodToRuleThemAll.aspx) run through the various implementations of each method with all verification enabled. Memory leaks can be caught or not by switching that part of the verification model on or off. Any number of microbehaviors (e.g., drawing a rectangle, then deleting it, then drawing a new one, then moving it to the proper location) could be injected along the way to fulfilling the semantics of an LFM method. What a test case is testing depends largely on its context.

    Many other aspects I think are simply setup details – which is to say context in a different context. <g/> Running the test case at multiple resolutions could be as easy as looping through a series of resolutions, setting the screen to each resolution and then executing this test.

    Separating the semantics of a test (what it does) from its implementation (how it does it) from its verification (what it is checking) can complicate matters when you want to know all of this at once. Maintenance of each piece, however, is much simpler. Also, we have found that once you get your head wrapped around dealing with each of these separately (which takes some doing, for sure!) it is pretty rare to actually need to smush everything back together again.

  6. Visual Studio Team System

    Yesterday marked the one-year anniversary of the public announcement of…

  7. In many of my posts I have alluded to the automation stack my team is building, but I have not provided…

  8. In many of my posts I have alluded to the automation stack my team is building, but I have not provided…
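
Reply 5 above describes decoupling verification from execution. As a very rough sketch of what that separation might look like in code (the verifier classes and the single on/off switch here are illustrative assumptions, not our actual verification model):

public interface IVerifier
{
    // Called after each logical action; inspects application state and
    // reports any mismatch to the test framework.
    void AfterAction(string logicalAction);
}

public class ShapeCountVerifier : IVerifier
{
    public void AfterAction(string logicalAction)
    {
        // e.g., confirm the scene contains the number of shapes the
        // expected-state model says it should.
    }
}

public class MemoryUsageVerifier : IVerifier
{
    public void AfterAction(string logicalAction)
    {
        // e.g., compare the process working set against a baseline to
        // catch leaks when the test is run many times.
    }
}

public static class VerificationManager
{
    private static readonly System.Collections.Generic.List<IVerifier> verifiers =
        new System.Collections.Generic.List<IVerifier>();

    // Off for a fast Build Verification Test pass, on for a comprehensive
    // run; the test case itself never changes either way.
    public static bool Enabled = true;

    public static void Register(IVerifier verifier)
    {
        verifiers.Add(verifier);
    }

    // The execution layer calls this after fulfilling each logical action;
    // the test case knows nothing about it.
    public static void AfterAction(string logicalAction)
    {
        if (!Enabled)
        {
            return;
        }
        foreach (IVerifier verifier in verifiers)
        {
            verifier.AfterAction(logicalAction);
        }
    }
}

With Enabled set to false, the test case from the post serves as a quick smoke test; with every verifier registered and enabled, the same unchanged test becomes a thorough functional run.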