Developing high-quality test cases

I think it may surprise a lot of people in the industry that Microsoft (and most high-visibility software vendors) invests a lot of time and effort in testing. Quality assurance is so key to the success of our products that every team in the company has personnel whose lives are dedicated just to ensuring that quality is in the product. We can debate the success or failure of this effort; from my perspective, even though we find bugs after the fact and people have issues with our products, with tens of billions of lines of code and trillions of permutations of interoperability, that effort has largely been successful.

If you are focusing on testing your product, then you have to do some basics before you can even begin testing. The first is having a test plan, followed by a schedule that includes testing, then some automation and unit tests. Each of these is so well known and talked about that entire books, specifications, standards, best practices, etc. have been devoted to them. In fact, so much so that I usually go into skim mode when I just run through the list. It's just too long, and it's sometimes hard to understand the titles or their relevance. That should be an indicator of how much software has both evolved and diverged over the past 50 years. I'm pretty sure I could list 20 sub-specialties within software development that deserve unique attention, without much effort.

So, to the topic of developing high-quality test cases: where do you begin? You cannot develop test cases without a feature specification; in fact, I claim that if you are testing without a specification, your testing will be largely unsuccessful. However, thinking about test cases will tell you more about your tools and automation needs than anything else. Assuming solid unit tests are implemented, where do you go from there?
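
As a point of reference for spec-driven unit testing, here is a minimal sketch in C++. The function ClampVolume and the requirement it encodes ("volume shall be clamped to the range 0-100") are hypothetical; the point is simply that every assertion traces back to a sentence in the feature specification.

```cpp
// Minimal sketch: a unit test derived directly from a (hypothetical)
// specification statement: "Volume shall be clamped to the range 0-100."
#include <cassert>
#include <iostream>

// Hypothetical feature code under test.
int ClampVolume(int requested)
{
    if (requested < 0)   return 0;
    if (requested > 100) return 100;
    return requested;
}

int main()
{
    // Each assertion maps to a clause of the specification.
    assert(ClampVolume(-5)  == 0);    // below range clamps to minimum
    assert(ClampVolume(0)   == 0);    // lower boundary
    assert(ClampVolume(50)  == 50);   // nominal value passes through
    assert(ClampVolume(100) == 100);  // upper boundary
    assert(ClampVolume(250) == 100);  // above range clamps to maximum

    std::cout << "PASS: ClampVolume meets the specified range behavior\n";
    return 0;
}
```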

Here are some questions to ask (it is clearly not an exhaustive list, but should give you ideas):

- Am I testing a kernel-mode device driver? If so, do I have specific test cases set up so that Driver Verifier is turned on? Do I define the expected results (almost invariably a bug check, but think about others)? Do I have IOCTLs that are exposed to the application layer, and do I test those (see the IOCTL sketch after this list)? Does my driver control hardware, act as a filter, or something else? Network?

- If you are shipping hardware, are you doing internal testing of interfaces via simulation? Are you developing tests to simulate corner-case environments? What are the extremes?

- Do I understand, and more importantly have, the hardware that we are targeting? Which platform, such as x86 or IA64? Do we have architectural boundaries? Do I know the memory or CPU boundaries? How does it work with other devices? Is PnP involved (it probably is)?

- Am I testing in user mode? Do I use Application Verifier in testing? (If not, I would add test cases that run under Application Verifier.)

- Does my feature have any exposed entry points? Do I use fuzzing (a fuzzing sketch follows the list)? Are my entry points public or private? LDAP? RPC? DCOM? Win32? COM? Etc.

- If it is a Windows application, do I test in safe mode? Is safe mode defined, and is it relevant? It may be if you need UI in safe mode.

- Do I have UI, and do I need to validate all of the controls? Can multiple users interact with my feature's UI, or does access need to be exclusive? What languages or locales do I support? What globalization issues are my tests going to cover?

- Do I test for accessibility? Are there any accessibility issues with my product? Have I thought about all the dimensions of accessibility?

- Am I testing web pages? Web interfaces? Advanced HTML, Scripts? Java classes? ASP pages?

- Do I validate data? Database tables? Remote data? How is it accessed? Is it raw, and can it be accessed outside my application? Is it formatted, and is there a specification for the format (see the data-format sketch after this list)?

- What is my security plan? Do I have a threat model, and do I validate the known potential boundaries? Am I a security-oriented tester? Have I evaluated the cost of having to release a security update? Am I testing least privilege?

- What are the specific functional items from the specification that I need to validate? Do I have to cover any government standards (an area Microsoft has historically been silent on)? Do I have to meet any other well-defined standards as well? This is where your organization must make the hard decisions.

- What types of input do I need to test? Hardware, software, text, keyboard, sound, physical elements? Can I enumerate them all? Can I cover most with a few generic items (see the table-driven sketch after this list)? That last one is key to minimizing your test burden.

- What environments will I operate in, and do I have clear specifications for those environments? Are there customer requirements to validate?

- Do you know what the product life cycle is and do you have a plan to test updating for bug fixes? Do you test installing new updates? This area is largely untested in virtually every environment I have been exposed to outside of Microsoft. Is there a plan for servicing your feature?

- Do you know how to test versioning and private and public fixes? Have you evaluated how those cases can go wrong? Do you have a plan for those scenarios?

- Do you have an uninstall option after install? Do you validate that uninstall works correctly? Can you reinstall, and does the feature still work (see the uninstall-verification sketch after this list)?
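
For the IOCTL question in the driver item above, here is a minimal user-mode sketch built around DeviceIoControl. The device name \\.\MySampleDevice and the control code are hypothetical placeholders; the pattern is what matters: open the device, issue the control, and compare both the return value and the output buffer against the expected result defined in the test case.

```cpp
// Minimal sketch of a user-mode IOCTL test (hypothetical device and control code).
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

// Hypothetical control code; a real test would share this definition with the driver.
#define IOCTL_SAMPLE_GET_VERSION CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main()
{
    HANDLE device = CreateFileW(L"\\\\.\\MySampleDevice",        // hypothetical device interface
                                GENERIC_READ | GENERIC_WRITE,
                                0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (device == INVALID_HANDLE_VALUE)
    {
        printf("FAIL: could not open device (error %lu)\n", GetLastError());
        return 1;
    }

    DWORD version = 0;
    DWORD bytesReturned = 0;
    BOOL ok = DeviceIoControl(device, IOCTL_SAMPLE_GET_VERSION,
                              nullptr, 0,                    // no input buffer
                              &version, sizeof(version),     // output buffer
                              &bytesReturned, nullptr);

    // The expected result comes from the test case definition, not from guessing.
    if (ok && bytesReturned == sizeof(version) && version == 1)
        printf("PASS: driver reported expected version %lu\n", version);
    else
        printf("FAIL: ok=%d bytes=%lu version=%lu error=%lu\n", ok, bytesReturned, version, GetLastError());

    CloseHandle(device);
    return ok ? 0 : 1;
}
```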
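
For the fuzzing question, here is a minimal sketch of dumb, random-byte fuzzing against an exposed entry point. ParseMessage is a hypothetical stand-in for whatever your feature actually exposes; a real effort would add structure-aware mutation, crash triage, and coverage feedback, but even this loop makes the pass criterion explicit: no crash, no hang.

```cpp
// Minimal sketch of random-input fuzzing against a hypothetical entry point.
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical entry point under test; returns false on rejected input.
bool ParseMessage(const uint8_t* data, size_t length)
{
    if (length < 4 || data[0] != 'M') return false;  // placeholder behavior
    return true;
}

int main()
{
    std::mt19937 rng(12345);                       // fixed seed so failures reproduce
    std::uniform_int_distribution<int> byteDist(0, 255);
    std::uniform_int_distribution<size_t> lenDist(0, 4096);

    for (int iteration = 0; iteration < 100000; ++iteration)
    {
        std::vector<uint8_t> input(lenDist(rng));
        for (auto& b : input) b = static_cast<uint8_t>(byteDist(rng));

        // The pass criterion is simply "no crash, no hang, no corruption";
        // the return value itself is allowed to be accept or reject.
        ParseMessage(input.data(), input.size());
    }

    printf("PASS: 100000 fuzz iterations completed without a crash\n");
    return 0;
}
```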
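
For the data-validation question, here is a minimal sketch that checks stored data against a hypothetical documented format: a four-byte magic value followed by a version field. The file name and layout are illustrative only; the idea is that the test encodes the format specification, not whatever the code happens to write today.

```cpp
// Minimal sketch: validate a data file against a hypothetical documented format.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    // Hypothetical data file produced by the feature under test.
    std::ifstream file("feature_data.bin", std::ios::binary);
    if (!file)
    {
        printf("FAIL: data file missing\n");
        return 1;
    }

    std::vector<char> bytes((std::istreambuf_iterator<char>(file)),
                            std::istreambuf_iterator<char>());

    // Format per the (hypothetical) spec: "FDAT" magic, then a uint32 version >= 1.
    bool ok = bytes.size() >= 8 && std::memcmp(bytes.data(), "FDAT", 4) == 0;
    if (ok)
    {
        uint32_t version = 0;
        std::memcpy(&version, bytes.data() + 4, sizeof(version));
        ok = (version >= 1);
    }

    printf(ok ? "PASS: data matches documented format\n"
              : "FAIL: data does not match documented format\n");
    return ok ? 0 : 1;
}
```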
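
For the input-coverage question, here is a minimal table-driven sketch: rather than enumerating every possible input, a handful of generic equivalence classes (empty, nominal, boundary, oversized) are listed in one table and driven through the same check. The function NormalizeName and its 16-character limit are hypothetical.

```cpp
// Minimal sketch of table-driven input coverage using a few generic classes.
#include <cstdio>
#include <string>

// Hypothetical function under test: trims to 16 characters, rejects empty input.
bool NormalizeName(const std::string& input, std::string& output)
{
    if (input.empty()) return false;
    output = input.substr(0, 16);
    return true;
}

struct Case
{
    const char* description;   // which input class this row covers
    std::string input;
    bool        expectAccepted;
};

int main()
{
    const Case cases[] = {
        { "empty input is rejected",      "",                    false },
        { "nominal input is accepted",    "widget",              true  },
        { "boundary length is accepted",  std::string(16, 'a'),  true  },
        { "oversized input is truncated", std::string(500, 'b'), true  },
    };

    int failures = 0;
    for (const Case& c : cases)
    {
        std::string output;
        bool accepted = NormalizeName(c.input, output);
        bool pass = (accepted == c.expectAccepted) && (!accepted || output.size() <= 16);
        printf("%s: %s\n", pass ? "PASS" : "FAIL", c.description);
        if (!pass) ++failures;
    }
    return failures == 0 ? 0 : 1;
}
```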
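
For the install/uninstall questions, here is a minimal sketch of the verification step that runs after an uninstall: confirm that the feature's install directory and registry key are gone. The paths and key names are hypothetical placeholders, and the actual setup and uninstall invocations are assumed to happen outside this check.

```cpp
// Minimal sketch: verify that uninstall removed the feature's files and registry state.
// The directory and registry key below are hypothetical placeholders.
#include <windows.h>
#include <cstdio>

int main()
{
    bool clean = true;

    // 1. The install directory should no longer exist.
    DWORD attributes = GetFileAttributesW(L"C:\\Program Files\\MySampleFeature");
    if (attributes != INVALID_FILE_ATTRIBUTES)
    {
        printf("FAIL: install directory still present after uninstall\n");
        clean = false;
    }

    // 2. The feature's registry key should no longer exist.
    HKEY key = nullptr;
    LONG result = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\MySampleFeature",
                                0, KEY_READ, &key);
    if (result == ERROR_SUCCESS)
    {
        printf("FAIL: registry key still present after uninstall\n");
        RegCloseKey(key);
        clean = false;
    }

    if (clean)
        printf("PASS: uninstall left no files or registry state behind\n");
    return clean ? 0 : 1;
}
```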

I could write a book on just these questions (and add more), and indeed, there are books that ask, and in a general way answer, them. Evaluating all of these will help you define quality test cases. For each of the areas above, you have some more criteria to define, some tests that measure the results, and then some actions to make sure that, during development and even after you ship, you have clearly built test cases that will bring quality to your feature.

You can “test” your test case by answering these questions:

- Do I define the expected results of the test?

- Does the test clearly define what it is testing?

- Can the test be modified to produce variations?

- Can the test validate both positive and negative inputs?

- Will the test remain viable over time?

- Does the test validate the functional specification?

- Will the test always report success or failure clearly, with no ambiguity?

In other words, a quality test case has the following properties (a minimal sketch follows the list):

- It is well defined and has unique relevance to your feature

- Builds upon existing, proven tests, or will itself lay the foundation for other tests

- Is based on clearly defined features, functions, or requirements

- Has some property that clearly demarcates it from other tests; it is mutually exclusive

- Can be understood by the engineer and is relevant to their domain knowledge, yet can be executed by the least specialized personnel with full confidence in understanding the results

- Can be run under all predefined architectural boundaries as well as hardware boundaries

- Clearly measures as well as reports a result. There is no ambiguity about the results or the measure.

- Most importantly, on failure, it will generate an actionable result that ensures the failure is addressed: a bug, a design change, a schedule modification, etc.
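
To make those properties concrete, here is a minimal sketch of a test-case record and runner that reports an unambiguous PASS or FAIL and attaches the actionable detail (which requirement is affected, what was expected, what actually happened). The structure and field names are illustrative, not a prescribed framework.

```cpp
// Minimal sketch of a test case that reports unambiguous, actionable results.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct TestCase
{
    std::string id;            // unique, so the case is clearly demarcated from others
    std::string requirement;   // the spec clause or feature requirement it validates
    std::function<bool(std::string& detail)> run;  // fills 'detail' on failure
};

int main()
{
    std::vector<TestCase> suite = {
        { "TC-001", "Spec 3.2: volume is clamped to 0-100",
          [](std::string& detail)
          {
              int result = 100;  // stand-in for calling the real feature
              if (result != 100) { detail = "expected 100, got " + std::to_string(result); return false; }
              return true;
          } },
    };

    int failures = 0;
    for (const TestCase& test : suite)
    {
        std::string detail;
        bool pass = test.run(detail);
        // One line per case; a failure names the requirement so it is actionable.
        printf("%s %s (%s)%s%s\n",
               pass ? "PASS" : "FAIL",
               test.id.c_str(), test.requirement.c_str(),
               pass ? "" : ": ", detail.c_str());
        if (!pass) ++failures;
    }
    return failures == 0 ? 0 : 1;
}
```

The key property here is that a failure names both the test and the requirement, so the resulting bug, design change, or schedule discussion has something concrete to point at.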

So, building test cases really comes down to the simplest of concepts: understanding your feature well enough, and in enough detail, that when you look over your test cases you have a high degree of confidence that you have mitigated customer risk and provided your staff with a well-defined set of criteria by which they can judge that they have done their job. Good engineering is a team effort; the role of you, or whoever develops the test cases, is engagement and building confidence in how you test. This will go far in helping bring a high-quality product to market.

Test case development is by no means trivial; it requires special knowledge about software and also about software environments, which can range from hardware to the operating system and are not unique to Windows. It is essential that test personnel be experts in both disciplines in order to be successful. You cannot hope to succeed in testing software without strong knowledge of both software development and the execution environment. There are ample, well-documented examples where this was not adhered to and disastrous results ensued.

Here are some books I have read on the subject; of course, each is an example and not my final word on the subject. Also, I am not paid to advertise these, and I'm sure that you can find more relevant documentation that will help you if you are struggling with test case development. Note that I mix and match between MS Press and others so as not to appear biased; however, "How We Test Software at Microsoft", if anything, should open your eyes to the systematic and sometimes frustrating process we go through to bring quality to our products.

Essential Software Testing: A Use-Case Approach by Greg Fournier

How We Test Software at Microsoft by Alan Page, Ken Johnston, and Bj Rollison

Software Test Automation: Effective Use of Test Execution Tools by Mark Fewster and Dorothy Graham

Testing Computer Software by Cem Kaner, Jack Falk, and Hung Q. Nguyen

Testing Object-Oriented Systems: Models, Patterns, and Tools by Robert V. Binder

-Hazel

Technorati Tags: XPe,Standard 2009