An individual operation can usually be executed via several different user actions. For example, creating a new document can typically be done by one of the following user actions:
- Clicking the File menu, clicking the New submenu, then clicking the New Document menu item.
- Typing Alt+F to invoke the File menu, typing N to invoke the New submenu, then typing N to invoke the New Document menu item.
- Typing Alt to invoke the main menu, repeatedly pressing the left arrow key until the File menu is selected, repeatedly pressing the down arrow key until the New submenu item is selected, pressing the right arrow key once to expand the New submenu, repeatedly pressing the down arrow key until the New Document menu item is selected, then pressing Enter to invoke it.
- Invoking the New Document menu item via accessibility APIs.
- Clicking the New Document toolbar button.
- Invoking the New Document toolbar button via accessibility APIs.
- Typing Ctrl+N.
- Executing the scripting object model method that creates a new document.
While these may seem merely to be different avenues to a single operation, in some programs one or more of them will invoke a different code path. Even if every form of execution does resolve to a single application operation, each execution path must still be tested to ensure that it is connected to the correct operation and that the operation is correctly enabled or disabled. (Imagine the havoc that would ensue if the Save command were accidentally hooked up to the Revert All Changes command, so that attempting to save changes threw away everything the user had done!)
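The situation above can be sketched in code. This is a minimal illustration, assuming a hypothetical `Editor` with a single `new_document` operation; each function stands in for one of the user actions listed earlier (a real test would drive the actual UI or accessibility API), and the point is that the verification is identical no matter which path executes.

```python
# Hypothetical stand-in for the application under test.
class Editor:
    def __init__(self):
        self.documents = []

    def new_document(self):
        # The single underlying operation every path should reach.
        self.documents.append("untitled")


# Each function simulates one of the user actions from the list above.
def via_menu_click(editor):        # File > New > New Document
    editor.new_document()

def via_keyboard_shortcut(editor): # Ctrl+N
    editor.new_document()

def via_toolbar_button(editor):    # New Document toolbar button
    editor.new_document()


EXECUTION_PATHS = [via_menu_click, via_keyboard_shortcut, via_toolbar_button]

def test_new_document_all_paths():
    for path in EXECUTION_PATHS:
        editor = Editor()
        path(editor)
        # Identical verification for every execution path: exactly one
        # new, untitled document exists.
        assert editor.documents == ["untitled"], path.__name__

test_new_document_all_paths()
```

Note that the verification (the `assert`) is written once here only because the paths share one loop; as the test cases grow apart, that sharing is exactly what becomes hard to maintain.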
Each execution path requires its own test case, but the verification in each test case is identical. Often, the verification code is simply copied and pasted into each test case. At best, this is a maintenance nightmare that significantly decreases confidence in the overall test suite. (How do I find every test case I need to update? How do I know whether this test case is doing something slightly different, so that I need to tweak the update accordingly? How do I find the time to do this for the five hundred affected tests?)
A typical solution to this problem is to factor the duplicated code out into a helper method. However, while factoring out duplication is easy, doing so correctly is much harder. The verification these tests need can be implemented in a shared method, but many other test cases will likely require some portion of that verification, or a slight variation on it. An attempt may be made to parameterize the shared method so that those other test cases can use it as well, but this quickly turns the shared method into a mess of spaghetti code.
Splitting the shared verification out into individual methods that can be called, or not, as the situation demands can leave each test case calling many different verification methods, eliminating much of the savings gained by sharing the verification between test cases (and certainly losing the simplification it provided). So test cases go back to duplicating the verification, which brings us right back to our original problem.
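The decay of an over-parameterized helper can be sketched as follows. This is illustrative only, with hypothetical names; the shape of the problem, not any real API, is the point.

```python
# Hypothetical stand-in for the application under test.
class Editor:
    def __init__(self, documents=None):
        self.documents = documents or []


def verify_new_document(editor, expected_count=1, check_title=True,
                        allow_existing=False):
    """A shared verification helper after several callers have each
    added the flag they needed. Every new parameter adds a branch,
    and the helper drifts toward spaghetti code."""
    if not allow_existing:
        assert len(editor.documents) == expected_count
    if check_title:
        assert editor.documents[-1].startswith("untitled")
    # ...each additional caller tends to add another flag and branch here.


# Two callers with slightly different needs already require different flags.
editor = Editor(["untitled 1"])
verify_new_document(editor)                     # the original caller
verify_new_document(editor, check_title=False)  # a caller that skips titles
```

Each flag weakens the guarantee the helper once made: a caller that disables enough checks is no longer verifying much of anything, while still appearing to call "the" verification.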
We have decoupled verification from execution in such a way as to enforce correct factoring of verification code.
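One way this decoupling can be realized is sketched below, under assumed names (`Editor`, `NewDocumentVerifier`, `run_operation`) that are illustrative rather than taken from any real framework. The verification is written once and attached to the operation rather than to any individual test case, so every execution path reuses it automatically.

```python
# Hypothetical stand-in for the application under test.
class Editor:
    def __init__(self):
        self.documents = []

    def new_document(self):
        self.documents.append("untitled")


class NewDocumentVerifier:
    """Owns the verification for the New Document operation, regardless
    of which user action triggered it."""
    def __init__(self, editor):
        self.editor = editor

    def before(self):
        # Capture the state the verification will compare against.
        self.count = len(self.editor.documents)

    def after(self):
        # Factored exactly once: one new document was created.
        assert len(self.editor.documents) == self.count + 1


def run_operation(verifier, execution_path):
    """Execute via any path; verification is supplied by the operation."""
    verifier.before()
    execution_path()   # any of the user actions from the list above
    verifier.after()   # shared verification, never copy-and-pasted


# Each test case supplies only its execution path.
editor = Editor()
run_operation(NewDocumentVerifier(editor), editor.new_document)
```

Because each test case supplies only the execution path, adding a ninth way to create a document means writing one new path function, not a ninth copy of the verification.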