Categories of Testing at Microsoft

Humans seem to have a natural tendency to categorize. Software Test Engineers are no exception, especially when it comes to breaking down our work items. Different “types” of testing include the following:

  • Functional Testing

  • Specification Testing

  • Security Testing

  • Regression Testing

  • Automation Testing

  • Beta Testing

A popular Microsoft Test interview question involves asking a candidate to test “Object X”, where that object is something like a salt shaker. We don’t expect candidates to blindly rattle off categories of testing, of course. How much fun would you have listening to someone recite categories from memory? Not much. Interviewing is our change of pace from the daily Microsoft routine, and we’re hoping to watch a great mind solve a problem. Ideally, we’ll come away impressed and entertained. Hopefully you’ll be inventing great, innovative, applicable test cases from each of the categories above!

So, here’s the fun part. The list above is intentionally incomplete. There are some important categories of testing that are still missing. Fill in some of the blanks in the comments on this post… I’ll try to throw in a little prize to the people with the best answers!


Comments (13)

  1. Woon Kiat says:

    Usability Testing

  2. Yadong says:

    Unit/API testing

    Acceptance testing

    Stress/load testing

    Performance benchmark testing

    Release testing

  3. mal says:

    With respect to the salt shaker you need the Ges(t)alt test and the sa(li)nity check. Other tests in general include localization, so that the rest of the world can continue to pay inflated royalties. The stress test, which allows for continued service while being Slashdotted. Compatibility testing, ensuring your own software works with your older and newer software/hardware. Non-destructive testing – found by noting the point at which your hardware explodes. Destructive testing – see non-destructive testing. WHQL testing – a vendor-specific form of testing. Trial by fire – an open-source version of WHQL testing. Revenue – an executive’s version of testing. Unit testing – one of those buzzwords you hope to learn someday in your spare time – if they’ll ever let you go home.

  4. Andrew says:

    This list could be almost endless; however, to get the ball rolling, here are a few of the more interesting categories I spend time thinking about in my current job:

    – performance testing (does the application meet its performance goals, i.e. sustained throughput, instantaneous response time, user-perceived performance, etc.),

    – stress testing (does it crash or otherwise act incorrectly under conditions that stretch its capabilities)

    – ‘smoke’ testing (does the application start up and perform basic operations without crashing, i.e. the type of testing most developers do before claiming their feature is complete 🙂 – a minimal sketch of this one follows after the list),

    – interoperability testing (does the application communicate correctly with other implementations of the same or complementary application – as an engineer who works mostly on wireless protocol design & development I spend a lot of time thinking about and doing this type of testing),

    – conformance testing (does the implementation meet an industry standard, both documented and undocumented – again something I think about a lot),

    – static testing (test the code without executing it, i.e. run lint, participate in design reviews & code inspections),

    – maintainability testing (is the code easily maintainable and extensible by somebody other than the original author – the longer I’ve been a software engineer the more this category has risen in importance for me).
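    To make the ‘smoke’ category concrete, here is a minimal sketch in Python (the executable name myapp and its --version flag are made up for illustration): launch the program, do one trivial thing, and require a clean exit.

        import subprocess

        # Hypothetical smoke test: start the application, ask for its version,
        # and require a clean exit with something on stdout.
        def test_smoke_app_starts():
            result = subprocess.run(
                ["myapp", "--version"],
                capture_output=True, text=True, timeout=30,
            )
            assert result.returncode == 0, result.stderr
            assert result.stdout.strip(), "expected a version string on stdout"

        if __name__ == "__main__":
            test_smoke_app_starts()
            print("smoke test passed")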


  5. AT says:

    In addition to the types in the article and the feedback:

    If there is Beta Testing, there must also be Alpha and Release Candidate Testing ;o)

    Do not forget about Automated (Batch Verification) testing – writing and understanding automated test cases is something truly valuable.

    Thinking about Microsoft, with its whole range of products, Integration (Compatibility) Testing is important. Products sometimes behave really badly when different versions are used or migrated, like NT4 vs. Win2000.

    And finally, something not mentioned anywhere yet – Documentation testing.

    Checking that user/developer documentation is correct and free of errors.

    Producing complete and correct documentation can help users work around or prevent a lot of bugs and misconfigurations.

  6. Brian says:

    Oddly enough, in all the (CSG position) interviews I’ve been through at MS, I’ve never had the saltshaker example come up. I’ve managed to get hit with the "Test a toaster" example plenty of times though.

    A somewhat related aside: One thing I’ve learned from my interviewing is that it’s rarely a good idea to just rattle off test cases right and left. Although a lot of the "how to survive an interview" docs I’ve read indicate that interviewers will give some consideration to how long you can keep rattling off cases, they’re also going to want you to show some semblance of organization, lest you come across as disorganized in your thinking. I tend to start answering a question about testing an object like this by stating some basic areas, then trying to put 3-4 cases in each, and working from there. This keeps you from getting stuck, and helps keep things in order.

    As for areas missed, here are a couple:

    Invalid condition testing: The "try to toast a 2×4 while plugged into a 220V outlet underwater" scenarios. Obviously, you’re not going to want to expend a whole lot of effort on these cases, but you should consider some of these, probably as part of your ad hoc testing, as they tend to be a good way to check the error handling. You do want to stay within reason on these scenarios; there generally isn’t a whole lot of need to test your software’s ability to operate after the server is impacted by a 16-ton Acme Discount Anvil.

    Expected failure points: A good number of the bugs filed against the product I’m working on right now are the result of unhandled exceptions coming from interactions between different UI elements. For example, one recent bug came from selecting an item in a list view while a pull-down menu was visible on screen, which resulted in an unhandled exception.
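    A rough sketch of how both of these areas can show up in an automated test, assuming a hypothetical Toaster class and using pytest: invalid inputs should fail with a defined error, never with an unhandled exception.

        import pytest

        from toaster import Toaster, ToasterError  # hypothetical module under test

        # Invalid-condition tests: feed the object things it should reject.
        # The point is not that these calls succeed, but that they fail with a
        # defined error instead of an unhandled exception.
        @pytest.mark.parametrize("item", ["2x4 lumber", "", None, "x" * 10_000])
        def test_rejects_invalid_items(item):
            toaster = Toaster(voltage=110)
            with pytest.raises(ToasterError):
                toaster.toast(item)

        def test_rejects_wrong_voltage():
            with pytest.raises(ToasterError):
                Toaster(voltage=220).toast("bread")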

    I could probably think of more, but I’ll leave those to other people.

  7. Greg says:

    Great comments. I’m impressed at the number of areas that I didn’t even think of myself – specifically Documentation testing (even though I do this myself : ), WHQL (Windows Hardware Quality Labs, specific to MS vendors), Revenue testing (I guess I’m usually too deep into my software to chat with the execs to learn about this one), and conformance and maintenance testing. These are all great, valid tests. Judging by how much I’ve covered in my first two years as a Tester, there’s still an astounding number of areas I haven’t ventured into. Very cool.

    Brian – excellent comment about having organized test cases. I’ve interviewed a large set of contractors (CSGs) here at Microsoft, and this is the main factor that sets candidates apart. A candidate who organizes their test cases into these areas will come up with more exhaustive test cases, and won’t need to ask questions like "Did I already try that one?" If you’re not organized in your thought process, you WON’T remember all 100 cases you’ve invented at the end of 20 minutes. That’s an interview no-no.

    Those of you who answered: if you contact me, I’ll forward your name for nomination in the Encarta 2005 Beta. This comes with a couple of cool perks, like a free copy of the released version of Encarta, and eligibility for prizes like an Xbox and Office for the best testers. This beta is for 18+ residents of the US, unfortunately – I’ll try to think of something good for the international participants as well.

  8. Randy says:

    Here’s an alphabetical list of testing methodologies that I haven’t seen yet:

    Accessibility Testing: Verifying a product is accessible to people with disabilities (deaf, blind, cognitively impaired, etc.).

    Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

    Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

    Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

    Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

    Dependency Testing: Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

    Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution (a rough sketch of this one follows after the list).

    Integration Testing: Testing of combined parts of an application to determine if they function together correctly. This type of testing is especially relevant to client/server and distributed systems.

    Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power outages.

    Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

    Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

    Usability Testing: Testing the ease with which users can learn and use a product.

    Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
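    To put one of these into code, here is a rough endurance-style sketch in Python (process_record, the myapp module, and the memory budget are made up; tracemalloc gives only a crude leak check, not a proof):

        import tracemalloc

        from myapp import process_record  # hypothetical function under test

        # Endurance-style check: run the same operation many times and make
        # sure memory use does not keep climbing (a crude leak detector).
        def test_no_obvious_leak(iterations=50_000, budget_bytes=5 * 1024 * 1024):
            tracemalloc.start()
            for i in range(iterations):
                process_record({"id": i, "payload": "x" * 256})
            current, _ = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            assert current < budget_bytes, f"still holding {current} bytes after {iterations} runs"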

  9. Randy says:

    Of course, one thing I could have added under Recovery Testing would be: "or buffer overflows" 🙂

  10. Greg says:

    Interesting comment. I’d argue that a buffer overflow is a scenario where you don’t want to recover. What could I mean by that? Well, first, do whatever you can to prevent any buffer overflows from entering your app. But suppose you miss one, which can realistically happen in any piece of software – what should you do when you encounter it? Your app can either:

    -Shut down gracefully

    -Surrender the Instruction Pointer to a (cr/h)acker.

    I’d pick shut down gracefully any day of the week.

  11. Mitch Denny says:

    When I’m not coding I am usually teaching folks new to .NET (but not new to programming). One of the key points I get across is that if a global exception handler gets hit, you’re in a bad way and there is nothing left to do but grab as much diagnostic information as you possibly can and die off – hopefully gracefully (provided that graceful path doesn’t execute code that makes durable changes), but if not gracefully, then at least die.
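    The comment above is about .NET, but the same last-resort idea can be sketched in Python (crash.log and the exit code are arbitrary choices for illustration): record what you can, then stop.

        import faulthandler
        import logging
        import os
        import sys

        # Last-resort handler: capture diagnostics, then die. Once the process
        # is in an unknown state, doing more work risks making durable changes.
        def last_resort(exc_type, exc_value, exc_tb):
            logging.critical("unhandled exception, shutting down",
                             exc_info=(exc_type, exc_value, exc_tb))
            faulthandler.dump_traceback(file=sys.stderr)  # stacks of all threads
            logging.shutdown()
            os._exit(1)  # hard exit; no cleanup code runs past this point

        logging.basicConfig(filename="crash.log", level=logging.INFO)
        sys.excepthook = last_resort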

  12. Shraddha says:

    Could you please help me? I have an interview at Microsoft tomorrow for an STE 2 contracting position. They need stress and performance testing. Could you please send me some stress and performance testing questions? My email address is


  13. Greg says:

    Hi Shraddha – I hope your interview went well. Some hints for dealing with stress and performance testing:

    Stress testing – this is a great area for your imagination to run wild. Test every extreme you can think of. For the salt shaker example, how does it handle extreme environmental conditions? Repeated abuse over long periods of time?
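    As a rough illustration (handle_request and the myapp module are made up), a stress loop can be as simple as hammering one entry point with oversized and pathological inputs for a fixed period and requiring it to keep answering:

        import random
        import string
        import time

        from myapp import handle_request  # hypothetical entry point under test

        # Crude stress loop: pound the same entry point with extreme inputs
        # for a fixed period and require that it keeps responding.
        def stress(duration_seconds=60):
            deadline = time.monotonic() + duration_seconds
            extremes = ["", "x" * 1_000_000, "\x00" * 1024,
                        "".join(random.choices(string.printable, k=4096))]
            calls = 0
            while time.monotonic() < deadline:
                handle_request(random.choice(extremes))
                calls += 1
            print(f"survived {calls} calls in {duration_seconds}s")

        if __name__ == "__main__":
            stress()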

    Perf – Make sure your performance tests are "repeatable". If your control case is a moving target, you can’t measure increases or decreases in perf. Also, think of ways to help identify bottlenecks in the code. As the rule of thumb goes, 20% of the code is run 80% of the time (I find these numbers to be very conservative in practice).
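    A sketch of that repeatability idea in Python (process_batch and the frozen workload are made up): same input every time, several runs, report the median, then profile to find the hot 20%.

        import cProfile
        import statistics
        import time

        from myapp import process_batch  # hypothetical function under test

        # Frozen workload so every run measures the same thing.
        FIXED_INPUT = [{"id": i, "payload": "x" * 256} for i in range(10_000)]

        # Repeatability first: same input, several runs, report the median so
        # one noisy run does not move the baseline.
        def measure(runs=5):
            samples = []
            for _ in range(runs):
                start = time.perf_counter()
                process_batch(FIXED_INPUT)
                samples.append(time.perf_counter() - start)
            print(f"median {statistics.median(samples):.4f}s over {runs} runs")

        if __name__ == "__main__":
            measure()
            # Then find the hot 20%: the profile shows where time actually goes.
            cProfile.run("process_batch(FIXED_INPUT)", sort="cumulative")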