Whether we’re running automated or manual test cases, whenever we come across a failure it is often a good idea to log the issue in the bug database. In the Test Results window after a test run, you can use the list of failures to investigate a failure, rerun a test under the debugger, and finally associate the failure with a work item.
When you activate this feature, the product does a little bit of the work for you by opening a new bug form and filling in some of the fields.
If you have access to a Team Foundation Server, I recommend trying this out. First execute a test case that will fail, publish it, and then execute the “Create Work Item” menu item off of the context menu from the failed case.
[TestMethod]
public void TestMethod1()
{
    Assert.Fail("Misc. bug in this code");
}
What you’ll see is a new bug form open with a bug title prefix (in my case “TestMethod1: “). You’ll also see in the Comment and History section the error message text (in my case “Assert.Fail failed. Misc. bug in this code”).
Note: Alternatively, if the bug you want to associate this failure with already exists, you can execute another menu item: Add to Work Item. This helps you add failure information from this case to an existing bug.
The rest, and arguably the real value, comes from you – we’ve just tried to automate some of the process that slows you down.
So, what do you put into the bug? Another way to ask this question is: what will the developer need to see in order to fix the bug efficiently? What information will increase the likelihood of a bug fix? How can you reduce the number of bugs that get resolved as Not Repro? A process that helps you enter bugs that achieve all those things could be called the Perfect Bug.
The Perfect Bug is a good thing to achieve. Others will more quickly understand it. Management will make quicker, but more informed and accurate decisions for the product. The developer will be less likely to misinterpret the bug and provide the wrong fix. You have supplied the developer with critical information that makes fixing the issue as quick and painless as is possible. Everyone will spend less time on the bug (reading, comprehending, etc).
The absolute perfect bug is not usually attainable. There is obviously a limit to the amount of time you should put into a bug. There has to be a corresponding benefit to the time you put into it. However, we can first focus on entering really good bugs and work our way up.
A high quality bug is easy to read. It is concise, yet contains additional crucial information. How can we communicate so concisely and clearly?
Some of the items below are specific rules to remember, but in general they can be wrapped up in a set of principles:
Make it concise and easy to parse.
Large data that would break the previous principle can be attached elsewhere and simply referred to.
The bug should be easy for another person to find (a tester looking for duplicates, others looking for a bug they once saw).
The bug entry should make the best and most accurate case for fixing.
A perfect bug has…
An accurate and concise title
A bug title should not be too generic. The reader won’t understand what the bug really is. Bad example: App doesn’t work. What app? How doesn’t it work? What is specific about this scenario that causes it to happen?
A bug title should not be too long. The longer the title is, the more the reader has to concentrate; they may have to reread it several times. Bad example: Leave defaults in a new test and run it. Test outcome is "Failed". Should it be "Error" or "Not Runnable" instead? This really could be said in a lot fewer words. Most of that title belongs in the Repro Steps section. Try Defaults for new test yields a ‘Failed’ result.
A bug title should include relevant error messages or crash address. This makes it easier for others who are searching for duplicates. Good example: FileNotFound Exception in ObjectStore.css line 47 when opening file with .xxx extension. If I get this error when testing and search on it, this bug title will immediately pop out at me. Perfect!
A bug title should have all words spelled correctly, especially error messages. Take the time to make sure you spell the words in your title correctly (do I hear an endorsement for built-in spell check?). Otherwise, people looking for duplicates will not find yours, and this causes more work for everyone.
Severity and Priority that are accurate
Really think through the severity and priority you set for a bug. Consider other bugs you have entered and compare this bug to those. Know that developers (hopefully) use priority to set the order in which they fix bugs.
If you enter a bug with what may seem like an unusual sev/pri or if the sev/pri are high, it would be very helpful to others if you explain your case for it. Especially do this if you change pri/sev.
As a product group, define what a bug means to be sev 1 and so forth. The work item tracking solution has a feature to show a tool tip if you hover over the labels which can be used to reinforce the definitions. Here are some typical definitions used by Microsoft for your benefit:
Sev 1: Critical Failure. Completely breaks the product or a large set of features. Unusable. Significant risk or liability if released (security, legal).
Sev 2: Major Impact / Functionality Broken. Breaks major functionality, contributes to overall instability in this area, or causes non-fatal assertions. E.g., Statement Completion not active at all, or memory leaks. Regression from a prior release.
Sev 3: Minor Impact / Functionality Impaired. Breaks major functionality in a minor way or breaks minor functionality completely. E.g., missing item from list for statement completion.
Sev 4: Little / No user impact. Can still use the product / features. Minor functionality problems, UI blemishes, or other issues that do not impact customers' use or perception.
Pri 0: “NOW” Bug. Work stoppage, no workaround. Blocking further progress in an area or by a group. Fix immediately! 24-hour turnaround expected!
Pri 1: Showstopper. This is deeply impacting customers OR internal progress. Worthy of a Service Pack or QFE. Fix Soon! Also, required to fix for RTM.
Pri 2: Important Bug. Required fix for RTM. Can be fixed any time before RTM.
Pri 3: Something we would like to fix but not required to fix to ship the product.
Pri 4: An unimportant bug or request. A bug that will likely not be fixed.
One manager at Microsoft puts bug priority in terms that might resonate better with you:
Pri 1: Will slip the product indefinitely to get this in.
Pri 2: Will slip our date within limits to get this in. Painful cut if not in.
Pri 3: Will not even think about slipping the product for any of these.
If you are unsure about pri/sev, chat with others to level set your expectations.
If you notice that management or others regularly reset your bugs’ sev/pri, ask them to discuss why they see it differently.
Filled in fields (customize your bug form)
These are all examples of custom fields we use at Microsoft. You can also add these to your own custom bug form.
The idea is that the tester indicates how they found the bug, as in via what kind of testing. Was it automation? Ad hoc testing? Customer feedback? A bug bash? A test pass?
Your organization can use this field for metrics. If you know you found 30% of your bugs via automation, that’s a compelling reason for your organization to invest more heavily in it. If you find that bug bashes result in more bugs, then you’ll know to plan more of those. You get the idea.
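As a rough sketch of the kind of metric this enables (the class, field values, and counts below are invented purely for illustration, not taken from any real bug database):

```csharp
using System;
using System.Linq;

class HowFoundMetrics
{
    // Hypothetical sample: the "how found" value recorded on each bug.
    static readonly string[] Bugs =
        { "Automation", "Ad hoc", "Automation", "Bug Bash", "Automation", "Customer Feedback" };

    // Percentage of all bugs found via the given source.
    public static string Share(string source) =>
        $"{source}: {100.0 * Bugs.Count(b => b == source) / Bugs.Length:F0}%";

    static void Main() => Console.WriteLine(Share("Automation")); // 3 of 6 bugs -> "Automation: 50%"
}
```

In practice you would pull the field values from a work item query rather than a hard-coded array; the grouping-and-percentage step is the same.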
An environment field can be very helpful with reproducing a bug. Enter the OS, processor (32/64-bit), and product flavor.
If this is a blocking bug, you should definitely mark it as such and explain why in the description.
A blocking bug usually is around one of these:
Precludes a build from being generally testable
Prevents testing of other features in the area
Precludes a build from being generally safe for dogfooding
Breaks defined user scenario
Degrades a feature area’s quality bar so that it is not meeting expectations
You can add a large, multi-line pane next to Comment and History to hold the repro steps. Unlike Comment and History, which can only be appended to, this field can be updated at any time.
This is where you can make the biggest difference. Repro steps tell the reader a LOT about the bug. How easy to repro is this? Is the customer likely to run into it? Does it require the planets to align?
The repro field should include three sections: Repro Steps, Results, and Expected Results.
If your actual steps to repro seem long, you will confuse the reader, and they will think the likelihood of the bug impacting most customers is slight. My suggestion is to include two sets of repro steps: one describing how most customers will hit the bug, and a second describing how a developer or another tester can actually reproduce it from start to finish. Blurring the line between these two often masks how likely a bug is to be encountered. Make a good case for your bug to be fixed by describing the customer scenario separately from the straight steps to repro.
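One way to lay this out (the section names follow the structure described above; the step content is placeholder):

```
Customer Scenario:
  1. <how a typical customer is likely to encounter the problem>

Repro Steps (exact, start to finish):
  1. <step one>
  2. <step two>

Results:
  <what actually happened, including the exact error text>

Expected Results:
  <what should have happened instead>
```

Keeping the customer scenario at the top makes the case for fixing the bug before the reader even reaches the detailed steps.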
Extra info in the description
Here is where you can put any other input you have. Your expertise is highly valued. Did you debug into the code a bit and find something relevant? Put that here!
Your bug may include a reference to the offending source. Excellent example: Incorrect permission looked up - uses UPDATE_DATA permission instead of PUBLISH_DATA. The description could say, “Note: This is because <path to file>\Service.cs:80 is requesting "UPDATE_DATA" rather than the publish permission.”
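A deliberately simplified, hypothetical reconstruction of that permission bug may make the description concrete. All of the type and member names below are invented; only the UPDATE_DATA/PUBLISH_DATA constants come from the example above:

```csharp
using System;
using System.Collections.Generic;

public enum Permission { UPDATE_DATA, PUBLISH_DATA }

public static class PublishService
{
    // Hypothetical sketch of the offending check (the Service.cs:80 from the example):
    // publishing demands UPDATE_DATA, so a caller holding only PUBLISH_DATA is wrongly rejected.
    public static bool CanPublish(ISet<Permission> granted) =>
        granted.Contains(Permission.UPDATE_DATA);      // actual (bug)
        // granted.Contains(Permission.PUBLISH_DATA);  // expected (fix)
}

class Demo
{
    static void Main()
    {
        var granted = new HashSet<Permission> { Permission.PUBLISH_DATA };
        Console.WriteLine(PublishService.CanPublish(granted)); // prints False: publisher is rejected
    }
}
```

Even a few lines like this in the description, contrasting the actual call with the expected one, can save the developer a debugging session.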
You could include a detailed description of the architectural flaws that led to the problem.
Do you have any root cause analysis? Do you have any proposed fixes and comments about pros/cons of each fix?
Are there caveats to the bug that you think should be known? Is the problem easily avoided or resolved? Does the issue never go away? What about after reopening the solution or restarting the IDE?
This is also where you can comment on your sev/pri rating. If this is a regression, here is where you should mark it. A regression is a set of repro steps that used to work, in a previous release or a previous build, but no longer does. Management may be more likely to take a bug if quality has regressed.
Is this issue a crashing bug?
If the OS has crashed (blue screen), do you have a Kernel Dump or a Kernel Mode Debugger attached to a repro machine?
UI crashing bugs should have a call-stack and mini-dump.
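If producing a mini-dump isn't practical, an exception's ToString() output is a cheap way to get the message and full call stack into the bug. A minimal sketch (the class and method names here are ours, not part of any product):

```csharp
using System;

public class CrashLogger
{
    // ToString() on a thrown exception includes the type, message, and call stack --
    // the core of what a crashing bug report should carry.
    public static string Describe(Exception ex) => ex.ToString();

    static void Main()
    {
        try
        {
            throw new InvalidOperationException("simulated crash for the bug report");
        }
        catch (Exception ex)
        {
            // Paste this text into the bug description, or save it to a file and attach it.
            Console.WriteLine(Describe(ex));
        }
    }
}
```

A mini-dump is still strictly better when you can get one, since it preserves state beyond the single stack.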
ICE (internal compiler error) bugs should have a preprocessed file. If a CHK build is available does the crash repro on CHK, and are there any Asserts hit? When in doubt, keep the debug session active and contact the owning dev.
You could include a list of test cases to run to verify the fix. Oooh, ahhhh.
Still very important… make all of the above readable. Format it in such a way that one can read through the bug and quickly identify relevant information.
There are several different kinds of files that may be very relevant. Perhaps the most helpful is a screenshot of UI that is bugged, especially in localization issues. Often you can say a lot more with a picture than you can accurately convey in text. Indicate that you’ve attached a picture so others know to look in the Files tab.
The picture should be in JPG or PNG format. BMP and other uncompressed formats are generally too big.
Crop the picture to what is needed to show.
If you have a lot of code to include in a repro step, it might be best to simply attach a file and refer to it in the bug Repro Steps.