One of the planned testing events we complete for many of our areas is a "Bug Bash." While there is plenty of planned testing scheduled, once in a while we all get together as a test team and just slam on a feature for a day.
To get a feature ready for a bash, the tester needs to make sure the feature is stable enough for this type of free-form testing. All the planned testing should be completed and known bugs already logged. For instance, a new feature might require new icons. If the bash is held before the designers complete the icons, the ribbon may show an orange dot as a placeholder icon. This doesn't block testing, but it will be logged and made known to everyone at the beginning of the bash. That way, if I had wanted to test the UI at different screen resolutions, I would know not to bother with the icon - it simply is not ready for testing yet.
We had an icon missing in the Tech Preview of OneNote 2010 that some early adopters may have noticed.
The Hide Page Title icon just wasn't ready. This was not enough to delay the preview, so if you wondered why that dot was there, now you know.
Bug Bashes are generally fun to do. I like being able to simply push a feature to its limits. Stress cases are run (unplug the machine while making a huge edit!), the UI is pushed (change screen size and DPI constantly), and so on. In-depth testing is also fair game - boot up Netmon and monitor the network IO during the test. Ensure minimal bytes go over the wire, look for errors that crop up and can be avoided, and so on.
For this type of testing, no one cares about duplicate bugs or statistics like that. Normally, when I find a bug, I look in our database to see if anyone else has reported it already. This can take anywhere from a few seconds to much longer (think an hour or so in some cases). If I find it already reported, and it's the exact same bug (like the icon being missing), I move on. I may add a different set of repro steps that cause the bug to occur if that is appropriate, but the point is that I don't want to enter duplicate bugs if I can avoid it. That just creates extra work for someone to sort out later. For bashes, though, I don't care. Everyone is encouraged to simply get the bug logged and then move on to more testing. The goal is to get testing done, not to worry about the overhead of reporting procedures.
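As a rough illustration of the kind of duplicate check I do by hand, here is a small sketch of a title-similarity pass. To be clear, this is not how our bug database actually works - the bug titles, the helper name, and the 0.8 threshold are all made up for the example; it just uses Python's standard difflib to flag titles that look close to an existing report.

```python
import difflib

def find_likely_duplicates(new_title, existing_titles, threshold=0.8):
    """Return existing bug titles whose text closely matches the new report's title."""
    return [
        title
        for title in existing_titles
        if difflib.SequenceMatcher(None, new_title.lower(), title.lower()).ratio() >= threshold
    ]

# Hypothetical existing reports in the database.
existing = [
    "Hide Page Title icon missing from ribbon",
    "Crash when editing huge page during power loss",
]

# A newly filed report that is really the same icon bug.
print(find_likely_duplicates("Hide Page Title icon is missing on ribbon", existing))
```

A real dedup pass would match on more than the title (repro steps, affected build, component), but even a crude similarity check like this shows why the manual search can take anywhere from seconds to an hour.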
When the testing is over, the tester who was in charge of the bash wades through all the bugs logged. She looks for duplicates and resolves them to get them off the radar. If the steps taken to reproduce the bug are not clear, she assigns it back to the tester for more info. If the bug report otherwise looks actionable, she hands it off to developers to fix.
We've been doing a few of these recently and have a few more scheduled. They make testing fun!
Questions, comments, concerns and criticisms always welcome,