The other bug in our test code for spelling


Last Thursday I posted some snippets of the code we had been using to automate spell checking and verify it had been implemented correctly in OneNote.  Before I could check in the code, though, there was one more bug I needed to verify did not exist, and the code I posted could have let it slip by unnoticed.


When I call the routine to get the dialog named “Microsoft Office OneNote,” the routine returns the first dialog it finds with that caption.  In the positive case (the case in which spell check is functioning normally) this test passes without error.  There is a negative test I need to perform, though.  If OneNote somehow launched two spell check dialogs, my test would miss the second one.  And if OneNote decided to spam me with hundreds of these dialogs, I would miss that error condition as well.
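To make that failure mode concrete, here is a minimal sketch (in Python, not our actual test code) of a first-match lookup like the one described above.  The `find_dialog` name and the list of (handle, caption) pairs are illustrative assumptions; the real routine walks the live window list.

```python
def find_dialog(windows, caption):
    """Return the handle of the first window whose caption matches, or None.

    windows is a list of (handle, caption) pairs in z-order.  Because the
    search stops at the first match, a second (or hundredth) dialog with
    the same caption goes completely unnoticed.
    """
    for handle, title in windows:
        if title == caption:
            return handle
    return None
```

With two “Microsoft Office OneNote” dialogs in the list, this happily returns the first handle and never notices the duplicate.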


So I added a new function, very similar to the first, which returns the count of dialogs with the caption “Microsoft Office OneNote.”  I check the count before starting spell check (it should be zero, but does not have to be), then check again once I expect spell check to have finished, at which point the count should have gone up by exactly one.  After dismissing the dialog, the count should be back to zero, or whatever it was before this all started.
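The counting version and the three-phase check could be sketched like this (again in Python with hypothetical names; the real code queries the live window list rather than taking snapshots as parameters):

```python
def count_dialogs(windows, caption):
    """Count every window whose caption matches, not just the first."""
    return sum(1 for _handle, title in windows if title == caption)

def verify_spellcheck_dialog(before, during, after,
                             caption="Microsoft Office OneNote"):
    """Check the dialog count at the three phases of the test:
    before spell check starts, after it should have finished,
    and after the dialog has been dismissed."""
    baseline = count_dialogs(before, caption)
    # Exactly one new dialog should have appeared: no more, no fewer.
    if count_dialogs(during, caption) != baseline + 1:
        return False
    # Dismissing it should bring us back to the starting count.
    return count_dialogs(after, caption) == baseline
```

A spammed run (two dialogs instead of one) or a dialog that refuses to go away now fails the check instead of slipping past the first-match lookup.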


Oh, and the loop breaks out gracefully when there is no Next Window to get, so that’s why the original function failed.
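That graceful exit looks something like the sketch below (Python again; the platform's next-window call is modeled as a `next_of` mapping, which is purely an assumption for illustration):

```python
def walk_window_chain(first, next_of):
    """Yield window handles starting at `first`, following each window's
    Next Window link, and break out gracefully when there is none."""
    handle = first
    while handle is not None:
        yield handle
        handle = next_of.get(handle)  # None means no Next Window: stop
```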


Even after all of these checks, the test can still fail to verify that spell checking was successful.  In the time it takes my code to find the dialog and dismiss it (either by clicking OK or pressing Escape), another dialog from any source can pop up and steal focus.  Such is the nature of UI-based automation.  Since all of our other scripts are being migrated to UI-less automation, we will have a nice set of UI-based tests to compare them against, because spell check will still have to deal with the UI.  The stability numbers for the two types of tests will be very interesting to see.


Questions, comments, concerns and criticisms always welcome,