Is test listening to customers?

 I believe the ultimate arbiter of quality is the customer.  In my imaginary
world, I wish I could get a list of all the bugs customers will find in our product,
go back in time, and fix them before release.  Needless to say, there are a few
small flaws in this plan.

In theory, as experienced testers we anticipate the areas and types of bugs our customers
will find, and proactively test those areas.  While this is certainly better
than just guessing, and is a major focus of test here at Microsoft, I've always harbored
some doubt in my heart.  My inner child (inner tester?) says "How confident does
that make you?  Are you so sure of your (collective) testing prowess that you
would put your software in an autopilot or medical equipment?  Have you really
covered *all* the interesting cases?"

So it occurs to me that we can in some small way achieve my imaginary world above. 
We have this thing called "milestones", you see.  These milestones are releases,
and they have customers, both internal and external.  We can and will take the
bugs found from customer use of our M3 bits, for example, and fix them in M4.

More than that, for each of these bugs that test *didn't* find, we can write a new
test.  My team does this in an organized fashion for QFE bugs in COM+; every
hotfix gets a new test, if possible.  It happens on a more ad-hoc basis in the new
product - if I see a relevant bug filed outside of test, I'll sometimes mail the associated
test owner saying "hey, looks like there might be a test hole here."
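To make the pattern concrete, here's a minimal sketch in Python of what "one
regression test per hotfix" looks like.  Everything in it (the bug number, the
function under test, the failure scenario) is invented for illustration; the
real tests live in whatever harness the product actually uses.

    import unittest

    # Hypothetical stand-in for the code path a hotfix touched.
    def parse_config_line(line: str) -> tuple[str, str]:
        # Bug 31415 (hypothetical): a trailing newline used to leak into
        # the value and corrupt downstream comparisons.  The fix: strip it.
        key, _, value = line.partition("=")
        return key.strip(), value.strip()

    class HotfixRegressionTests(unittest.TestCase):
        """One new test per hotfix, named for the bug it guards against."""

        def test_bug_31415_trailing_newline_stripped_from_value(self):
            # Distilled from the customer's repro in the bug report.
            self.assertEqual(parse_config_line("timeout=30\n"),
                             ("timeout", "30"))

    if __name__ == "__main__":
        unittest.main()

The test name carries the bug number, so if it ever fails again it points
straight back at the original report.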

But when I sit down and think about what I really want, what really gets me closer
to my imaginary world - I would like to see not just single bugs covered, but entire
classes of failures.  At the end of a milestone, I would like to go through every
single bug filed by a non-tester, every single bug not found by our planned test process,
and ask the question "how do we test, not just for this failure, but for every failure
like it?"
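Concretely, that review could start with something as simple as a tally.
Here's a minimal sketch in Python, assuming a flat CSV export from the bug
database; the file name, column names, and the idea of a pre-assigned
"failure class" field are all assumptions for illustration.

    import csv
    from collections import Counter

    def tally_escaped_bugs(path: str) -> Counter:
        """Count bugs *not* found by test, grouped by failure class.

        Assumes a CSV export with (hypothetical) columns:
        id, opened_by_team, failure_class.
        """
        classes = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["opened_by_team"].lower() != "test":
                    classes[row["failure_class"]] += 1
        return classes

    if __name__ == "__main__":
        for failure_class, count in tally_escaped_bugs("m3_bugs.csv").most_common():
            print(f"{count:4}  {failure_class}")

The interesting part is the top of that list: each over-represented category
is a candidate class-level test hole, not just a single missed bug.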

My intuition says that over the course of several milestones, repeating this type
of review could result in substantial changes in our test focus and methodologies,
in good, customer-focused ways.  Of course, you'd have to be willing to bite
the bullet and actually change your testing direction if the review gives you good
reason to do so.

I know that bug analysis, and looking for test holes, is a known technique. 
I haven't heard of it being applied at the milestone level, but that is probably just
my ignorance.  I'd love to see some examples and hard data where this technique
has been applied - is my intuition correct as to its usefulness?

[update]  My test team actually did something like this for our most recent milestone.
One of our testers went over the entire set of bugs found outside the test team
and sorted them into categories.  For the next milestone we’ll
be careful to hit those categories of faults in our testing.  It should be interesting
to see what a similar analysis exercise will show at the end of the *next*
milestone…