I find myself getting more and more frustrated whenever I see test code that is best described as quick and dirty. Now, I do acknowledge that the world is not perfect (it never has been and never will be), so I realize that what follows is my personal ideal and might not fully work in the real world. However, I don't see why that should be an excuse for not trying to get as close as possible. Furthermore, I do realize that some of you will consider parts (or all) of this posting highly controversial. That is indeed on purpose and meant as an invitation to comment or, better yet, to start a conversation.
So here is my personal set of rules for successfully building test infrastructure and tooling (e.g. runtime environments and automation frameworks) and automated tests:
Every test team has a test architect (this can be an additional role for one of the testers on a team, especially if the number of team members is less than or equal to one). The responsibilities of the test architect include:
- The overall architecture of the test infrastructure and automation
- The quality of the code base
- Defining and enforcing the development process used by the test team
Consequently, the test architect has the final say in all matters.
All development starts by specifying, in writing, the behavior of the software to be created; the kind and scope of the specification depend on the chosen development process. An exception can be made only for projects completely owned by one individual on which no other project depends. In that case it is acceptable to create the specification in parallel with the code.
All code meets the following criteria:
- It compiles without any warnings with the warning level on the highest setting
- It passes static code analysis (e.g. FxCop) without any warnings
- Suppressing individual occurrences of compiler or static code analysis warnings is acceptable if there is a technical reason for it and the reason is properly documented (preferably in code)
- It is checked into a source control system
- It is reviewed before check-in
- Test infrastructure and tooling code is covered by an automated test suite; code paths that cannot (easily) be reached through automated tests are covered by manual test cases or code inspection
- It is commented at the type and member level, and additional comments within member implementations are added for all non-trivial implementations
Additional requirements can be imposed at the architect's discretion.
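The suppression rule above can be sketched briefly. Python and pylint stand in here for the compiler and static-analysis tool (FxCop would be the .NET analogue); the module, function, and issue number are hypothetical:

```python
# Hypothetical module illustrating a documented, narrowly scoped suppression.
# The suppression is attached to one function only, and the justification is
# recorded in code, next to the pragma, as the rules above require.

def parse_legacy_record(raw: str) -> dict:
    # pylint: disable=broad-exception-caught
    # Justification: the upstream legacy feed emits malformed records of
    # unpredictable shape; any parse failure must degrade to an empty record
    # rather than crash the harness. Tracked in (hypothetical) issue TOOL-123.
    try:
        key, value = raw.split("=", 1)
        return {key.strip(): value.strip()}
    except Exception:
        return {}
```

The point is the scope and the comment, not the particular tool: a blanket, project-wide suppression with no recorded reason would violate the rule even if the warning is genuinely benign.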
The reliability of test infrastructure and tooling is proven by the associated test suite. Regression tests are added for all defects found.
Automated tests are reliable when there are no false positives and no false negatives. This implies that rerunning unreliable tests until they pass is unacceptable. Test failures are always tracked down to a product or test defect and fixed accordingly.
Usability is a primary goal for all infrastructure and tooling development. User interfaces are as self-explanatory as possible, and the output of all software clearly indicates success or failure. For software that produces large amounts of output, such as test harnesses or deployment utilities, the output ends with a summary. The summary contains enough information to enable the user to fix any issues in case of failures; note that this can be as simple as a URL to a website with step-by-step instructions. Any software that requires more than copying files from one location to another in order to install it, or more than deleting files in one location in order to uninstall it, ships as a proper setup package with uninstall capability.
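A minimal sketch of the output rule above, assuming a toy Python harness; the per-test lines, the summary format, and the triage URL are placeholders, not a real tool:

```python
def run_suite(tests):
    """Run (name, callable) pairs, print per-test lines, and end with an
    unambiguous summary so the last thing in the log states the result."""
    failures = []
    for name, test in tests:
        try:
            test()
            print(f"PASS {name}")
        except AssertionError as exc:
            failures.append((name, exc))
            print(f"FAIL {name}: {exc}")
    # The summary comes last, after all detail output.
    print(f"SUMMARY: {len(tests) - len(failures)} passed, {len(failures)} failed")
    if failures:
        # A pointer to step-by-step triage instructions (hypothetical URL).
        print("See https://example.test/triage for how to investigate failures.")
    return 0 if not failures else 1
```

The exit code mirrors the summary, so both a human scanning the log and a script checking the return value get the same unambiguous verdict.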
The documentation of any project contains the following:
- User manual
- Sustained engineering documentation
In addition, the documentation for any software that is extensible or a library includes an API reference and sample projects.
Any legacy code that was written with lower or no quality standards is replaced or refactored.
This posting is provided "AS IS" with no warranties, and confers no rights.