I was thinking about the role of test this morning. We measure the quality
of the product - but in some sense we have to take an educated guess at what "quality"
means. In the end, the customer determines if it is a quality product, and they
can't make that decision until we ship the product. (Well, we do have betas
and previews to help out.) Waiting for summary quality rollups from PSS after
shipping seems like a pretty slow and cumbersome way to have a "quality connection"
between test and the customer.
So what can we do to tighten the customer-test information cycle? Clearly,
looking at beta feedback is critical, but is there anything more "automatic" that
we could do?
How about if we had a sampling profiler that ran automatically when people run the
beta? Sampling profilers are low-impact, and with samples spread across all beta
testers we'd get solid coverage numbers. Upload and integrate the data, and compare it to
our test code coverage data. Bang! We have an automated method to compare
how we test the product against how customers actually use it. Heck, if we have
internal customers using our stuff, I'd love to get their data too!
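To make the idea concrete, here's a minimal sketch in Python (the post doesn't name a platform or tool, so everything here is an assumption): a background thread periodically samples the main thread's call stack - that's the "low-impact" part, since nothing is instrumented - and the resulting set of functions the "customer" actually hit is diffed against a made-up set of functions our tests covered. The function names and the `test_covered` set are hypothetical; `sys._current_frames` is CPython-specific.

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(stop_event, samples, interval=0.005):
    """Toy sampling profiler: every `interval` seconds, record which
    functions are on the other threads' stacks. Nothing in the profiled
    code is instrumented, which is why the overhead stays low."""
    me = threading.current_thread()
    target_ids = {t.ident for t in threading.enumerate() if t is not me}
    while not stop_event.is_set():
        # CPython-specific: snapshot of every thread's current frame.
        for tid, frame in sys._current_frames().items():
            if tid in target_ids:
                f = frame
                while f is not None:
                    samples[f.f_code.co_name] += 1
                    f = f.f_back
        time.sleep(interval)

def busy_feature():
    # Stand-in for a feature the beta customer actually exercises.
    total = 0
    for i in range(5_000_000):
        total += i
    return total

samples = Counter()
stop = threading.Event()
profiler = threading.Thread(target=sample_stacks, args=(stop, samples))
profiler.start()
busy_feature()          # the "customer workload"
stop.set()
profiler.join()

# Hypothetical function-level coverage gathered from our test runs.
test_covered = {"busy_feature", "rarely_used_feature"}
customer_used = set(samples)

# The interesting diffs: gaps in testing, and tests with no customer backing.
untested_but_used = customer_used - test_covered
tested_but_unused = test_covered - customer_used
```

A real version would aggregate uploads from many beta machines before diffing, and would key on something stabler than bare function names, but the shape of the comparison - two coverage sets and their differences - is the whole idea.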
I wonder what the next step would be? Maybe learn more about sampling profilers...
I'll see if I can find some time for that...