"Testable" is one of those words with a bajillion meanings. Dave Catlett, a Test architect here at Microsoft, defines testability as "[t]he degree to which components and systems are designed and implemented to make it easier for tests to achieve complete and repeatable code path coverage and simulate all usage situations in a cost efficient manner." He also has a much shorter definition: "How easy is it to test?" My own definition is "What do I need in order to be sure I know when my app breaks?"
Everybody wants their app to be testable, but figuring out how to achieve that goal is not a simple matter. How you go about it depends on what kind of application you are building, but not as much as you might think.
If you have a super simple program, testability usually just happens. Take attrib, for example: a small set of switches plus the path or file to act on form the entire input set, output is minimal and in a well-defined format, verification is simple, and there aren't any complicated internal algorithms to verify.
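To make that concrete, here's a sketch (in Python, with an invented toy model of attrib rather than the real tool) of why a tiny input set makes testing almost free: you can enumerate every combination of switches exhaustively and check each one against an independent oracle.

```python
import itertools

# Toy model of an attrib-style tool: "+x" sets flag x, "-x" clears it.
# The names and flag set here are invented for illustration.
SWITCHES = ["+r", "-r", "+h", "-h"]

def apply_attrib(switches, attrs=None):
    """Apply each switch in order to a set of attribute flags."""
    attrs = set(attrs or ())
    for sw in switches:
        op, flag = sw[0], sw[1]
        if op == "+":
            attrs.add(flag)
        else:
            attrs.discard(flag)
    return attrs

def last_state(combo):
    """Independent oracle: a flag ends up set iff the last switch
    mentioning it was a '+'."""
    state = set()
    for flag in "rh":
        mentions = [sw for sw in combo if sw[1] == flag]
        if mentions and mentions[-1][0] == "+":
            state.add(flag)
    return state

# Exhaustive test: every ordering of every subset of switches -- only a few
# dozen cases, so complete coverage of the input space is trivial.
for n in range(len(SWITCHES) + 1):
    for combo in itertools.permutations(SWITCHES, n):
        assert apply_attrib(combo) == last_state(combo)
```

With a real program you would drive the actual binary and inspect the file system, but the shape of the test is the same: small input space, simple verification.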
The problem, of course, is that most programs aren't this simple. If you handle any amount of user input (regardless of whether it's via a UI or just the command line) your life will be much simpler if you can separate testing the UI from testing your app's internal actions and computations. In many (perhaps even most) applications the UI logic is inextricably intertwined with the kernel logic, and so the UI and kernel testing must be as well because the only way for UI tests to verify their actions is to ask the API or kernel, and often the only way to drive the kernel is via the UI. (A public object model can help here, but it often takes very different paths through the kernel than the UI does.)
The optimum solution is to detangle the UI and the kernel. The kernel becomes just another object model (it may even be your public OM, or your public OM may be used here rather than exposing the raw kernel) and can be tested as such. Testing the UI becomes much simpler as well because you no longer have to figure out how each UI action affects the kernel. Instead, you can wrap that UI around a Mock Object.
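Here's a minimal sketch of that arrangement, with all names (the kernel interface, the mock, the UI control) invented for illustration: the UI talks only to a kernel-shaped object, so tests can hand it a mock that records calls instead of the real kernel.

```python
class KernelInterface:
    """The detangled kernel, reduced to an object model the UI consumes."""
    def save(self, path): ...
    def word_count(self): ...

class MockKernel(KernelInterface):
    """Records every call so UI tests can verify the UI's actions
    without needing to inspect the real kernel afterward."""
    def __init__(self):
        self.calls = []
    def save(self, path):
        self.calls.append(("save", path))
    def word_count(self):
        self.calls.append(("word_count",))
        return 42  # canned answer the test controls

class SaveButton:
    """Stand-in for a UI control; real code would wire this to a toolkit."""
    def __init__(self, kernel):
        self.kernel = kernel
    def click(self):
        self.kernel.save("untitled.doc")

# The UI test verifies "clicking Save calls the kernel correctly" directly,
# rather than round-tripping through the kernel to see what happened.
kernel = MockKernel()
SaveButton(kernel).click()
assert kernel.calls == [("save", "untitled.doc")]
```

The same mock also lets you feed the UI canned kernel responses (including errors) that would be awkward to provoke through the real kernel.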
Test hooks go far beyond Mock Objects, however. Hooks are vital anytime you need to peer into the details of a process. Exotic, complicated, or just plain lengthy calculations are a perfect example. Mock Objects are great when you're testing whatever consumes these calculations, but you also need to test the calculations themselves. If your system is nice and modular you can drive the calculations from a custom harness (often this is nothing more than creating an instance of the calculator class and putting it through its paces), but if not you need a way to tunnel through the UI and other intervening layers and get a direct line to the calculations. A test hook can be useful here even if you are driving the calculations directly; for example, you may expect specific intermediate results at certain points, or you may want to know how far along the calculator thinks it is, compare that with how far along you think it should be, and compare both with the progress bar the app is displaying.
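One common shape for such a hook is an optional callback the calculator invokes at each step. This sketch (the function and its parameters are invented for illustration) shows a test using the hook to check intermediate results and the calculator's own notion of progress, not just the final answer:

```python
def running_sum(values, hook=None):
    """Hypothetical lengthy calculation. The optional hook is the test hook:
    it reports each intermediate result and how far along the calculator
    believes it is, so a test (or a progress bar) can check both."""
    total = 0.0
    for i, v in enumerate(values, start=1):
        total += v
        if hook is not None:
            hook(step=i, fraction=i / len(values), intermediate=total)
    return total

# The test records every hook callback and verifies the intermediate
# states, which a final-answer-only test could never see.
seen = []
result = running_sum([1, 2, 3, 4], hook=lambda **kw: seen.append(kw))
assert result == 10.0
assert seen[1]["intermediate"] == 3.0  # after two steps: 1 + 2
assert seen[-1]["fraction"] == 1.0     # calculator thinks it's done
```

Production callers simply pass no hook, so the hook costs almost nothing when it isn't in use.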
Good test hooks are one of those "I'll know it when I see it" things. Basically, if it gives you the data you need when you need it, it's good.
The hardest part of designing a test hook is deciding whether to leave it in the release build or not. Test hooks can be just the thing for figuring out that nasty crash that only happens when your customer prints a particular document to a particular printer when they are connected over their bizarre network configuration. Likewise, allowing your customers to turn on debug logging and send you the results will save you many a field trip. (Most customers are perfectly willing to do this, as long as it's under their control and they can see what they're sending you. Cultivate a good relationship with your users and you'll be amazed what they'll do for you.)
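A sketch of that kind of customer-controlled logging, using Python's standard logging module (the environment variable name and logger name are invented): the switch and the log file location are both under the user's control, so they can see exactly what they would be sending you, and the hook costs nothing when it's off.

```python
import logging
import os

def make_logger():
    """Diagnostic logging left in the release build, off by default.
    The user opts in by setting MYAPP_DEBUG_LOG to a file path of
    their choosing; otherwise nothing is written anywhere."""
    logger = logging.getLogger("myapp.diagnostics")
    logger.handlers.clear()
    path = os.environ.get("MYAPP_DEBUG_LOG")
    if path:  # user opted in and picked the destination file
        logger.addHandler(logging.FileHandler(path))
        logger.setLevel(logging.DEBUG)
    else:     # release default: a handler that discards everything
        logger.addHandler(logging.NullHandler())
    return logger

# In the field: the user sets MYAPP_DEBUG_LOG, reproduces that
# printer-over-bizarre-network crash, reads the file, and mails it in.
```

Because the user both flips the switch and owns the file, there's no question of the app quietly phoning home.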
If you do leave your test hooks in, treat them as the feature they are. Test the stuffing out of them just like you would any other feature -- not just functionality testing, but accessibility, internationalization, etc. etc. Don't even think about leaving them in but hiding them -- they will be found.
Examples of test hooks I've encountered in the past include:
- A magic key sequence that displayed a special dialog box chock-a-block full of controls each of which dumped all sorts of arcane data about the application's innards when invoked with various obtuse key chords.
- A simple interface that tunneled directly into the calculation engine, allowing said engine to be driven directly and allowing tests to avoid the overhead of the UI.
- A custom language for driving the UI from test scripts. "Real" user input and this language were both a thin veneer over the application's UI event handlers, which enabled writing reliable "do what the user will do" test scripts whilst avoiding the usual hassles of UI testing.
- A parser that sucked in equations written in pseudo-English form, translated them into the internal format, ran them through the kernel, then compared the actual results against expected results listed in the test script and logged any differences.
- A fully tested but undocumented plugin architecture that allowed test cases to run in the application's process and provided read-only access to pertinent details of the app's internal state.
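The last of those is easy to sketch. In Python, a read-only view of internal state falls out of `types.MappingProxyType`; everything else here (the class, event names, state keys) is invented for illustration:

```python
from types import MappingProxyType

class App:
    """Hypothetical app with an in-process plugin hook. Plugins get
    events plus a read-only view of internal state: they can look,
    but they can't touch."""
    def __init__(self):
        self._state = {"open_docs": 1, "dirty": False}
        self._plugins = []
    def register_plugin(self, plugin):
        self._plugins.append(plugin)
    def _notify(self, event):
        view = MappingProxyType(self._state)  # read-only proxy, not a copy
        for plugin in self._plugins:
            plugin(event, view)

# A test case registered as a plugin observes internal state in-process.
app = App()
observed = []
app.register_plugin(lambda event, state: observed.append((event, state["dirty"])))
app._state["dirty"] = True
app._notify("document-edited")
assert observed == [("document-edited", True)]
```

The proxy is what keeps the hook honest: a buggy or malicious plugin that tries `state["dirty"] = False` gets a `TypeError` instead of silently corrupting the app it's supposed to be observing.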
Most developers are happy to build in whatever test hooks you desire. Just be sure to talk with them about it early, when they're designing the feature. Retrofitting test hooks can be done, but doing so is often much more complicated and expensive than building them in from the start would be.
*** Comments, questions, feedback? Want a fun job on a great team? Send two coding samples and an explanation of why you chose them, and of course your resume, to me at michhu at microsoft dot com. I need a tester and my team needs a data binding developer, program managers, and a product manager. Great coding skills required for all positions.