Test Cases Have Intimate Knowledge Of The User Interface

Test cases rarely distinguish between the user actions they are testing and the steps they take to invoke those actions. Indeed, most test cases explicitly tie these details together! Because much emphasis is placed on testing every possible execution path, an explicit test is written for each method of execution. Each test knows exactly the series of mouse moves, button clicks, and keystrokes it must replay in order to invoke the operations it is testing.

In the worst case, the test must keep track of what type of widget each UI component is, how to identify the widget, and where in the UI hierarchy the widget lives. Tools such as UI Automation (the vastly-useful-to-testers accessibility API set in Longhorn) hide much of this detail, but the test still must know (for example) to click the File menu, then the New menu, and then the New Document menu item. The test must also know how to handle every failure possible along the way (e.g., what if the menu does not open when it is clicked; what if the menu does not exist).
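To make the problem concrete, here is a minimal sketch (in Python, with an invented stub in place of a real automation library such as UI Automation) of a test that encodes the full File > New > New Document path, plus the failure handling it must carry at every step. All names here are illustrative assumptions, not a real API.

```python
class MenuNotFoundError(Exception):
    """Raised when a menu fails to open or does not exist."""
    pass

# Stub "application UI": a nested dict standing in for the real menu
# hierarchy that an automation library would walk.
MENU_TREE = {"File": {"New": {"New Document": "new_document_command"}}}

def click_menu_path(*path):
    """Walk the menu hierarchy one click at a time, failing loudly at
    each step -- the test must anticipate every possible failure."""
    node = MENU_TREE
    for name in path:
        if not isinstance(node, dict) or name not in node:
            raise MenuNotFoundError(name)   # menu missing or did not open
        node = node[name]
    return node                             # the command finally invoked

def test_new_document_via_mouse():
    # The test itself spells out every step of the UI sequence.
    assert click_menu_path("File", "New", "New Document") == "new_document_command"
```

Note that the test is unusable against any UI whose menu layout differs even slightly from what it hard-codes.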

Additionally, many of these UI interaction sequences are used by multiple different test cases. Most test cases, for example, need to either create a new document or open an existing one. Embedding detailed UI interaction knowledge into each test case causes immense amounts of test case churn when the smallest part of this sequence changes.

A logical solution to this duplication is to factor these common UI sequences out to helper methods. It is often hard to predict whether a particular set of UI interactions will be used frequently enough to deserve a helper method, so testers must either hold off on writing a helper until multiple test cases actually need that functionality, or invest time up front building infrastructure that turns out to be needless if no other test case ever requires it.
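A factored-out helper might look like the following sketch (again using an invented stub menu tree; the names are hypothetical). Each test calls the one helper instead of re-encoding the click sequence:

```python
# Stub "application UI" shared by the helper and its tests.
MENU_TREE = {"File": {"New": {"New Document": "new_document_command"}}}

def invoke_file_new_new_document():
    """Shared helper: the one place that knows the click sequence for
    creating a new document."""
    node = MENU_TREE
    for name in ("File", "New", "New Document"):
        node = node[name]
    return node

def test_typing_in_new_document():
    # Reuses the helper rather than duplicating the menu walk.
    assert invoke_file_new_new_document() == "new_document_command"

def test_saving_new_document():
    # A second test reuses the same helper; only the helper changes
    # if the menu layout changes.
    assert invoke_file_new_new_document() == "new_document_command"
```

The duplication is gone, but the helper itself is still welded to one specific menu layout, which leads to the naming and organization problems discussed next.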

Organizing this shared functionality in a logical fashion that is robust to changes in the application’s user interface is problematic as well. Method names should explain what the method does, so the helpers get names like “InvokeFileNewNewDocumentUsingTheMouse” (a name that must change whenever the File menu, New menu, or New Document menu item is renamed, or when any of them is moved to a different location) or “InvokeNewDocumentViaTheMenusUsingTheMouse”. Menus and toolbars can often be invoked via a single helper method (e.g., InvokeMenuItem) into which an identifier for the item to be invoked is passed. This, however, often pushes UI information back into the test case (so it can specify, for example, the specific set of menus that must be opened before a specific menu item can be invoked), which is exactly the problem the helper was intended to solve.
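A sketch of the generic-helper approach shows the leak directly. The helper below (an invented illustration in the spirit of InvokeMenuItem, not a real API) handles the mechanics of walking the menus, but the test must still supply the full menu path, so UI knowledge ends up right back in the test case:

```python
# Stub "application UI" the generic helper operates against.
MENU_TREE = {"File": {"New": {"New Document": "new_document_command"}}}

def invoke_menu_item(path):
    """Generic helper: opens each menu named in `path` in turn."""
    node = MENU_TREE
    for name in path:
        node = node[name]   # KeyError here models "menu did not open"
    return node

def test_new_document():
    # The test still hard-codes File -> New -> New Document; renaming
    # or moving any menu breaks every test that passes this path.
    assert invoke_menu_item(["File", "New", "New Document"]) == "new_document_command"
```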

Regardless of how the shared functionality is (or isn’t) organized, a large dependency on the tools used to interact with the user interface is embedded in the test cases and their supporting infrastructure. This makes switching to a different UI automation technique very difficult, so it is rarely attempted.

We have developed a technique for partitioning user interface details from user action details, and for writing test cases in terms of user actions.
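One minimal sketch of such a partition (an invented illustration of the idea, not the actual implementation) puts all UI knowledge in one swappable layer and lets test cases speak only in terms of user actions:

```python
class MenuUiMap:
    """UI-knowledge layer: knows *where* each action lives in the UI.
    Only this class changes when menus are renamed or rearranged, or
    when a different automation technique is adopted."""

    PATHS = {"new_document": ["File", "New", "New Document"]}

    def __init__(self, menu_tree):
        self.menu_tree = menu_tree   # stub standing in for the real UI

    def invoke(self, action):
        node = self.menu_tree
        for name in self.PATHS[action]:
            node = node[name]
        return node

class UserActions:
    """User-action layer: expresses operations as the user thinks of
    them, with no knowledge of menus, widgets, or automation tools."""

    def __init__(self, ui):
        self.ui = ui

    def create_new_document(self):
        return self.ui.invoke("new_document")

def test_new_document():
    # The test case knows only the user action, not the UI details.
    ui = MenuUiMap({"File": {"New": {"New Document": "new_document_command"}}})
    assert UserActions(ui).create_new_document() == "new_document_command"
```

Swapping mouse-driven menus for keyboard shortcuts, or one automation library for another, would then mean replacing MenuUiMap while every test case stays untouched.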
