Did You? Did You Really? Loosely Coupled Comprehensive Verification


Verifying that a test case’s actions had the expected result is perhaps the most important part of testing. Every test case does something at least a little differently from every other test case, so the expected results are often a little different. These minute differences make it difficult to factor verification out into shared code, and so verification code tends to be embedded in and duplicated across each test case.

Intermixing test case execution code with test case verification code further complicates matters. Initial state data must be gathered before individual operations are executed. Expected state can be calculated at any point between when initial state is recorded and just before actual state is verified. Verification that actual state matches expected state must of course be done sometime after each operation is executed; often immediately after, if subsequent steps in the test case will destroy the current actual state. All of this makes it difficult to differentiate between execution code and verification code.

Separately, the set of properties that is typically verified is nowhere near the complete set that would be necessary for truly comprehensive verification (that is, verifying every property after every operation). The copious amount of work required to do so is generally deemed not worth the trouble, especially since for any particular operation most properties will be unchanged. Experienced testers, though, will recognize that this is exactly how the most insidious bugs manifest themselves: as changes in something that should be completely unaffected by the operation.

We have bypassed these problems by decoupling verification from execution. Loosely Coupled Comprehensive Verification is easy to explain and almost as easy to implement. Just before a test case or LFM method executes an operation, it notifies the Verification Manager that it is about to do so and also provides any relevant details. The test case or LFM method next executes the operation, and then finally it notifies the Verification Manager that it has completed the operation. That’s it as far as the test case or LFM method is concerned!
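
To make this concrete, here is a minimal sketch (in Python) of the test-case side of that protocol. The verifier object, its about_to_execute/operation_completed methods, and the drawing operation are illustrative assumptions, not the actual framework’s API:

    def test_fill_rectangle(app, verifier):
        # Notify the Verification Manager of the impending operation,
        # passing along any details it needs to calculate expected state.
        verifier.about_to_execute("SetFill", {"shape": "Rectangle1", "color": "Red"})

        # Execute the operation itself (directly or through an LFM method).
        app.set_fill("Rectangle1", "Red")

        # Notify the Verification Manager that the operation has completed;
        # everything else is its problem, not the test case's.
        verifier.operation_completed("SetFill")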

When the Verification Manager is notified that something is about to happen, it baselines current state and then works with a set of Expected State Generators to determine the expected state. Upon notification that the operation has completed, the Verification Manager compares actual state against expected state and logs any differences as failures.
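
One way a Verification Manager along these lines might be structured is sketched below; the class shape, the capture_state call, and the logger are assumptions made for illustration rather than a description of the real implementation:

    class VerificationManager:
        def __init__(self, app, generators, logger):
            self.app = app
            self.generators = generators   # maps operation name -> Expected State Generator
            self.logger = logger
            self.expected = None

        def about_to_execute(self, operation, details):
            # Baseline current state, then ask the appropriate Expected State
            # Generator what the state should look like after the operation.
            baseline = self.app.capture_state()
            self.expected = self.generators[operation].expected_state(baseline, details)

        def operation_completed(self, operation):
            # Compare actual state against expected state; log differences as failures.
            actual = self.app.capture_state()
            for prop, expected_value in self.expected.items():
                actual_value = actual.get(prop)
                if actual_value != expected_value:
                    self.logger.fail("%s: %s was %r, expected %r"
                                     % (operation, prop, actual_value, expected_value))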

This very loose coupling between verification and the rest of the system makes the verification subsystem very flexible. If the details regarding how a particular expected state is calculated change, the corresponding Expected State Generator is the only entity that has to change. Similarly, if the set of properties being verified changes, nothing outside the verification subsystem needs to be modified.
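
For example, an Expected State Generator for a hypothetical SetFill operation might look like the sketch below. If the rule for calculating the expected fill ever changes, this class is the only thing that has to change:

    class SetFillExpectedStateGenerator:
        def expected_state(self, baseline, details):
            # Most properties are expected to be unchanged by this operation.
            expected = dict(baseline)
            # Only the target shape's fill should differ.
            expected[details["shape"] + ".Fill"] = details["color"]
            return expected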

Another benefit we get from this scheme is a dramatic reduction in follow-on failures: failures that occur solely because some previous action failed. Because we baseline expected state before every action, it is always relative to the current state of the application, so a previous failure that has no effect on an action won’t fail that action just because the verification code expected that previous action to succeed. This eliminates “noise” failures and allows us to concentrate on the real problem.

Because verification details are decoupled from execution, the set of properties being verified can start small and expand over time. Helping this to happen is the ability to say “I don’t care” what happens to a particular property as a result of a particular operation. Any property with such a value is ignored when actual state is compared to expected state after the operation has completed. Once the expected result is known, the tester simply updates the Expected State Generator appropriately and suddenly every test case automatically expects the new behavior.
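
One possible way to express “I don’t care” is a sentinel value that the comparison step skips, as in this sketch (the DONT_CARE marker and the MoveShape generator are, again, purely illustrative):

    DONT_CARE = object()

    class MoveShapeExpectedStateGenerator:
        def expected_state(self, baseline, details):
            expected = dict(baseline)
            expected[details["shape"] + ".Position"] = details["new_position"]
            # We haven't yet decided what should happen to z-order here,
            # so ignore it for now.
            expected[details["shape"] + ".ZOrder"] = DONT_CARE
            return expected

The Verification Manager’s comparison loop then simply skips any property whose expected value is DONT_CARE.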

Comments (11)

  1. Adam says:

    Have you developed a framework and possibly an example of this?

    Thanks,

    adam at dymitruk dot com

  2. Manas Singh says:

    This technique of decoupling verification from execution makes the Verification Manager responsible for verifying the changes and identifying potential problems. This is cleaner than intermixing testing and verification logic. It also compares all the properties for “truly comprehensive verification”, which will help in catching those “insidious bugs”.

    But it appears to have a very high runtime overhead, as the verification manager has to baseline current state before executing each and every test case and compare the entire state after executing the test case.

  3. Drew says:

    This is all a little vague. I’m just a simple tester, so I don’t deal well with that. Could you explain why this reduces follow-on failures? I would expect some kinds of test failures to result in polluting the machine, resulting in unreliable results in further actions. Or are you saying that you have a priori knowledge of which kinds of failures might be blocking failures for later test cases? Or maybe you’re saying that the test cases actually exercise different paths in the dev code than you may have intended, but (even though you’re not testing what you had meant to) you can tell that you got the correct result given the initial environment? If it’s the latter it seems to me that you could report only one bug found by the test and inadvertently hide other bugs until that initial blocking bug is fixed. Seems awfully dangerous. Especially near a big milestone (beta, RC, RTM, even an IDX or an RI). How do you deal with that problem?

    Re-reading what I just wrote, I hope I didn’t come off as too confrontational. Not trying to attack you. Honestly just curious.

  4. micahel says:

    Adam: We are developing a framework for this, but nothing I can share just yet I’m afraid. I’ll elaborate with some examples after I complete this series.

  5. micahel says:

    Manas: Baselining state could indeed have a high runtime cost. It all depends on how expensive gathering all that data is. One of the grand things about it, though, is that you can scale the data you gather to balance that cost with the gain you receive from verifying the data.

    Baselining starting state and gathering current state must happen at runtime, but calculating expected state and comparing the expected and current states could be postponed until after the test case has finished executing. This wouldn’t reduce the total time required, but it does reduce the time the test takes to actually execute.
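
    A rough sketch of that deferral, reusing the illustrative names from the post: capture raw snapshots at runtime, then calculate expected state and compare once the test case is done:

        class DeferredVerificationManager:
            def __init__(self, app, generators, logger):
                self.app, self.generators, self.logger = app, generators, logger
                self.records = []

            def about_to_execute(self, operation, details):
                # The state snapshot has to be gathered at runtime...
                self.records.append([operation, details, self.app.capture_state(), None])

            def operation_completed(self, operation):
                self.records[-1][3] = self.app.capture_state()

            def verify_all(self):
                # ...but the expected-state calculation and the comparison
                # can wait until the test case has finished executing.
                for operation, details, before, after in self.records:
                    expected = self.generators[operation].expected_state(before, details)
                    for prop, value in expected.items():
                        if after.get(prop) != value:
                            self.logger.fail("%s: %s was %r, expected %r"
                                             % (operation, prop, after.get(prop), value))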

  6. micahel says:

    Drew: A typical scripted test case (for a drawing program like Microsoft Visio, say) goes something like this:

    1) Draw a rectangle. Verify the rectangle appears in the expected location.

    2) Set the rectangle’s fill to be red. Verify its fill turns red.

    3) Move the rectangle. Verify its new location is correct. Verify it is still red.

    If in Step 2 the rectangle actually turns green, Step 2’s verification will fail, but Step 3’s will too even though the step itself succeeded. This is a follow-on failure. You have to take the time to look at this failure and determine that the step didn’t really fail.

    With Loosely Coupled Comprehensive Verification, Step 2’s verification will still fail. Because expected state is re-baselined before every step, however, Step 3 now (automatically, mind you) becomes "Move the rectangle. Verify its new location is correct. Verify it is still *green*." It passes – no follow-on failure this time!
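
    In sketch form, with illustrative values: the baseline captured just before Step 3 already contains the (incorrect) green fill, so the expected state carries it forward and only the position changes:

        baseline_before_step3 = {"Rectangle1.Fill": "Green",        # the bug from Step 2
                                 "Rectangle1.Position": (1, 1)}
        expected_after_step3 = dict(baseline_before_step3)
        expected_after_step3["Rectangle1.Position"] = (3, 2)         # only the move changes
        # Fill is expected to remain Green, so Step 3's comparison passes.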

    So there’s no knowledge of what failures are blocking test cases, and no danger of missing bugs (none introduced by the use of this technique, anyway). Just elimination of one problem keeping us from doing the best testing we can do!

  7. Your Logical Functional Model lets you write test cases from your user’s point of view, test cases that…

  8. In many of my posts I have alluded to the automation stack my team is building, but I have not provided…

  9. I think my team – much of Microsoft, in fact – is going about testing all wrong.

    My team has a mandate…

  10. In many of my posts I have alluded to the automation stack my team is building, but I have not provided…

  11. I. M. Testy says:

    Michael Hunter is a well-known tester both inside and outside of Microsoft. Michael writes a testing column