Still cleaning up our automation. The next task I am working on needs a little explanation about how we designed our automation system.
We have test scripts that are designed to simulate user actions to complete a test. When most folks think of test automation, this is the piece they typically imagine. A napkin math test script, for example, will enter a number of equations on a page, invoke the napkin math logic to compute the results and then verify the output is correct.
The algorithm for such a test would be:
- Put focus on a page (wherever I want)
- Type 8+3= on a page
- Hit enter
- Verify 11 is computed: 8+3=11
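The steps above can be sketched in code. This is just an illustration in Python, not our real harness: the `FakePage` class and its methods are hypothetical stand-ins for the actual OneNote automation surface, and `eval` stands in for OneNote's napkin math engine.

```python
# Hypothetical sketch of the napkin math test algorithm.
# FakePage is a stand-in for the real OneNote page automation object.

class FakePage:
    """Minimal stand-in for a OneNote page that supports napkin math."""
    def __init__(self):
        self.text = ""

    def set_focus(self):
        # In real automation this would move keyboard focus to the page body.
        pass

    def type_text(self, keys):
        self.text += keys

    def press_enter(self):
        # Simulate napkin math: when the text ends with '=', compute
        # the expression and append the result, as OneNote would.
        if self.text.endswith("="):
            expr = self.text[:-1]
            self.text += str(eval(expr))

def run_napkin_math_test():
    page = FakePage()
    page.set_focus()             # 1. put focus on the page
    page.type_text("8+3=")       # 2. type 8+3= on the page
    page.press_enter()           # 3. hit enter
    return page.text == "8+3=11" # 4. verify 11 is computed

print(run_napkin_math_test())  # -> True
```

Note that every step before `run_napkin_math_test` is assumed to have already happened; the next paragraphs cover that setup.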
But there is a lot of other test code that needs to be invoked before step 1. OneNote has to be started (actually, the client machine needs to install Office first, but that is a whole other topic), logging has to be started, a notebook opened and a section created before a new page can be created.
The component we have that does this work is our "task library." It is a set of methods that complete the tasks they are asked to do - they do NOT perform any testing per se. Here I am defining "testing" as "computing a positive (passing) or negative (failing) result from a test." For this case, the task library starts OneNote, creates the notebook and section, starts logging and returns control to the script.
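Here is a rough sketch of what that setup looks like. Again, the class and method names are hypothetical illustrations, not our actual task library API; the point is that these methods only perform actions and record what they did - they never decide pass or fail.

```python
# Hypothetical sketch of the task library's setup duties.
# The methods act and record what happened; none of them judges
# success or failure - that is the test script's job.

class TaskLibrary:
    def __init__(self):
        self.log = []

    def start_onenote(self):
        self.log.append("onenote started")

    def start_logging(self):
        self.log.append("logging started")

    def create_notebook(self, name):
        self.log.append(f"notebook '{name}' created")

    def create_section(self, name):
        self.log.append(f"section '{name}' created")

# The setup a test script would invoke before step 1:
tasks = TaskLibrary()
tasks.start_onenote()
tasks.start_logging()
tasks.create_notebook("Test Notebook")
tasks.create_section("Napkin Math")
# Control now returns to the test script; nothing has been judged yet.
```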
The first thing my test will do is create a new page. I will invoke a task library command to put focus on an outline of the page (as opposed to the title). The task library will ensure there is an outline on the page for the insertion point and ensure the focus is placed there.
But there is no way the task library can know whether the action I wanted succeeded. It may be pretty obvious what I want (focus on an outline), but imagine I want to run another test that starts like this: "Password protect and lock a section. Now put focus on a new page." This test should not allow focus on the page (since the section never got unlocked, OneNote should not show a page in the view in which to put focus). So the task library just returns control to the test script after performing its actions, and it is up to me, as the script writer, to judge whether the end result of the task library is a success or a failure. What succeeds in one test may very well be a failure in another.
I like to think of the task library as neutral. It merely attempts to do what the script writer asks without judging success or failure. As a result, the task library becomes filled with try/catch statements and other error handling. For instance, the task library must be able to deal with failing to create an outline to hold focus (in the password-protected section case) and return this state to the test script, so the author of the test script can judge whether that part of the test should pass or fail.
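The same action producing a pass in one test and a fail in another can be sketched like this. The `focus_page_outline` helper and its result object are hypothetical; the sketch just shows the task library catching its own errors, reporting state, and leaving judgment to the script.

```python
# Hypothetical sketch: a neutral task that reports what happened
# instead of deciding pass/fail itself.

class FocusResult:
    def __init__(self, succeeded, detail):
        self.succeeded = succeeded
        self.detail = detail

def focus_page_outline(section_locked):
    """Try to put focus on a page outline; report the outcome either way."""
    try:
        if section_locked:
            # Simulates the locked-section case: no page view to focus.
            raise RuntimeError("section is password protected and locked")
        return FocusResult(True, "focus placed on outline")
    except RuntimeError as err:
        return FocusResult(False, str(err))

# Test 1: ordinary test - focus SHOULD land on the outline,
# so the task succeeding means the test passes.
result = focus_page_outline(section_locked=False)
test1_passed = result.succeeded

# Test 2: locked-section test - focus should NOT land anywhere,
# so the very same "failed to focus" state means this test passes.
result = focus_page_outline(section_locked=True)
test2_passed = not result.succeeded
```

The design choice here is that the try/catch lives inside the task library, but the verdict (`test1_passed`, `test2_passed`) is computed by the script.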
This description of two of the components of our automation system took a little longer than I expected. I'll go over the work item I am completing next time. I hope this isn't too ambiguous.
Questions, comments, concerns and criticisms always welcome,