Here is what I hope is a constructive response to Brian Marick's thoughts, in his post on Exploration Through Example, about fine-grained guidance in exploratory testing.
One heuristic I've had some success with is aimed more at security defects, but you might be able to adapt the idea to exploratory testing for conventional defects. One of the problems with security penetration testing is that you can have a bunch of smart people banging around and coming up with some cool bugs, but at the end you have no idea how much of the system was tested, no guesses about the relative security of its various pieces, and no sense of what your blind spots might be. The problem I was seeing with some of my own security testing was that without a systematic approach that scales to a large team, the work is hard to measure (a really smart tester might have no results to show after a few days of effort, and it's hard for someone else to pick up where they left off) - and I'm a believer in the principle that what gets measured gets done. The tricky thing about building a systematic process here is that you don't know where the bugs are going to be anyway. You do know, to a degree, what you consider to be a bug. In security testing, that is generally your "threat" collection. In conventional testing, it might be one of the great bug "taxonomies" already out there in the literature.
The idea is as follows: flush out security threats in a formal "threat modeling" meeting where you list the attack targets and what you would consider a successful attack on each. This is still too high-level to be of use in finding bugs; it just sets the stage so that you can eliminate duplication later. It also engages everyone on the team - developers and managers too, not just testers. Once the main threats are identified, you build a threat tree representing the paths by which each threat might be accomplished. Each path needs to be mitigated or fixed so that exploiting the threat becomes impossible. Divide and conquer each threat: backtrack and brainstorm every possible way you could accomplish it, down to the point where you can't think of a way to do it but suspect it might still be possible. Don't let the fact that you can't think of a way during the threat analysis meeting stop you from writing it down. You end up with a massive tree of possibilities for accomplishing each major objective, and testers have a rudimentary map of where to focus their exploratory efforts. A lot of their actual work is spent trying things out, doing code inspections, getting more details from developers - all sorts of things that aren't documented in the threat tree to begin with. By the end, for each path, they either verify that it is mitigated (with a regression test to make sure the mitigation holds), conclude after their best analysis that there is no known way to exploit it, or find an exploit and log a bug. This, in my opinion, is also the window of opportunity where white-hat insiders participating in the software development lifecycle can gain the upper hand over black hats, because they don't have to black-box everything - even though sometimes they end up doing so anyway, because they are scared away from code inspection or from talking to devs or whatever. Argh.
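To make the structure concrete, here is a minimal sketch of a threat tree in Python. The tree shape and outcome states (mitigated with a regression test, analyzed with no known exploit, exploit found and logged) come straight from the process above; the specific threat, its sub-paths, and all names in the code are hypothetical examples of mine, not from any real model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Status(Enum):
    """Outcome of exploring one path in the threat tree."""
    UNEXPLORED = "unexplored"
    MITIGATED = "mitigated"                # verified, regression test in place
    NO_KNOWN_EXPLOIT = "no known exploit"  # analyzed, nothing found
    EXPLOIT_FOUND = "exploit found"        # bug logged


@dataclass
class ThreatNode:
    """A threat, or one way of accomplishing the parent threat."""
    description: str
    status: Status = Status.UNEXPLORED
    children: List["ThreatNode"] = field(default_factory=list)

    def leaves(self):
        """Leaf nodes are the concrete paths testers explore."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()


# A tiny, hypothetical tree for one top-level threat.
root = ThreatNode("Attacker obtains user credentials", children=[
    ThreatNode("Sniff credentials off the wire", children=[
        ThreatNode("Login form submits over plain HTTP"),
        ThreatNode("Session token leaks in URL or referrer header"),
    ]),
    ThreatNode("Extract credentials from storage", children=[
        ThreatNode("Passwords stored unhashed in the database"),
    ]),
])

# As exploration proceeds, each leaf path gets a recorded outcome.
paths = list(root.leaves())
paths[0].status = Status.MITIGATED      # regression test added
paths[2].status = Status.EXPLOIT_FOUND  # bug logged
```

The point of recording an outcome on every leaf, rather than only logging bugs, is that the tree then doubles as the test team's coverage record.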
Anyway, this is the point where all the exploratory fun happens, guided by the overall threat tree framework. There is plenty of documentation out there on threat trees. Once testing is complete, you of course have no guarantee that all the bugs were found. But you do have a way to measure the test team's progress, status to report to management, and a record of what activity happened - so that when a bug is found in the future, you can look back, see where you failed to identify it, and then rapidly sweep through other areas of the product where similar bug patterns might exist. The documentation actually helps people and serves as a record for those who come after. It's not documentation that keeps testers from testing: it taps the collective knowledge of the group (which an individual tester might not initially have) and then lets a tester focus in and drill on a path, without being so overly prescriptive that a monkey could do it.
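The progress measurement falls out of the tree almost for free: tally the recorded outcome of each leaf path. A minimal sketch, with hypothetical paths and outcomes of my own invention:

```python
from collections import Counter

# Each tuple: (leaf path through the threat tree, recorded outcome).
# Outcomes follow the post: "mitigated" (regression test in place),
# "no known exploit" (analyzed, nothing found), "exploit found"
# (bug logged), or "unexplored". All data here is hypothetical.
results = [
    ("steal credentials -> sniff wire -> plain-HTTP login", "mitigated"),
    ("steal credentials -> sniff wire -> token in URL",     "unexplored"),
    ("steal credentials -> storage -> unhashed passwords",  "exploit found"),
]

tally = Counter(outcome for _, outcome in results)
explored = len(results) - tally["unexplored"]

# Progress is simply the share of leaf paths with a recorded outcome -
# exactly the status you can report to management.
print(f"{explored}/{len(results)} paths explored: {dict(tally)}")
```

Because every path carries an outcome, the same record later tells you which path you missed when a bug escapes, and where similar patterns might lurk.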
This idea might carry over in some way to what Brian is looking for. By engaging the collective intelligence of the group at design time (not just testers) about what would constitute the worst bugs, and then working backwards, you can backtrack down from the key threats and create a tree with the big bug categories at the top and a multitude of leaf nodes at the bottom, each representing a path that might take a tester from start state to bug state. Breaking through groupthink and getting people to shift to this mentality has been difficult in some teams, but it is possible for security bugs, since everyone treats security bugs as a top priority. In theory, an approach like this - perhaps toned down a little - could yield enormous bug harvests for conventional testing as well. I have watched conventional (non-security) product quality improve from these security design meetings, before any bugs needed to be logged. That is the ultimate win, even if you don't get credit for it in the bug database or whatever - in the same way that the best way to win a war is to avoid it in the first place.
Come to think of it, I have used this technique in a rudimentary way during bug bashes, and my bottleneck when I get to be unleashed like that is typically just being able to enter bugs fast enough. I keep Appendix A (a bug taxonomy) from the back of "Testing Computer Software" (2nd edition, by Cem Kaner, Jack Falk, and Hung Quoc Nguyen) in front of me and blitz through the product with an idea ahead of time of what types of bugs I think I can find. Having these bug patterns in mind completely removes any mental block for me.
Just some thoughts - sorry I didn't have time to polish this better.