End The Positive/Negative Schism!

For any specific feature there are of course an infinite number of possible tests. Humans don't generally deal with infinity very well, so we have devised a number of categories that allow us to think in terms of a much smaller set of tests. Categories that immediately come to mind include:

  • build verification / feature verification / exit scenario / basic functionality / comprehensive functionality
  • scripted / ad-hoc / directed / exploratory
  • manual / automated
  • broad / deep
  • positive / negative
  • functional / stress / performance / localizability / accessibility / usability / regression / security
  • dev-written / tester-written / customer-written
  • unit / component / integration / system
  • white box / grey box / black box
  • useful / useless

The distinction between positive and negative tests splits your tests into "those that verify correct behavior when you do things you should" and "those that verify bad things don't happen when you do things you shouldn't". Or to put it another way, "things that should work" and "things that should fail". For example, a positive test for a Save As dialog would be "Save As using a filename that does not already exist" while a negative test would be "Save As using a filename that does exist". As another example, if I were testing cut-and-paste in Sparkle I would have a positive test for "cut when items are selected" and a negative test for "cut when items are not selected".

Implementation-wise, positive and negative tests aren't really any different. They both put the application into a specific state, do something, and then check that what should have happened did. But test libraries often require different methods to be used for the two scenarios. This seems to be at least partly due to the complexity involved in making helper methods deal with every possible outcome.
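To make the split concrete, here is a minimal sketch of the two-helper style such libraries tend to impose. All names here are invented for illustration (including the fake application), not taken from any real test library:

```python
class FakeEditor:
    """Tiny stand-in for the application under test."""
    def __init__(self):
        self.files = set()
        self.overwrite_prompted = False

    def save_as(self, name):
        # The app prompts before clobbering an existing file.
        if name in self.files:
            self.overwrite_prompted = True
        else:
            self.files.add(name)

def save_as_should_work(editor, name):
    """Positive helper: the caller must know the name is new."""
    editor.save_as(name)
    assert name in editor.files, "file was not created"

def save_as_should_fail(editor, name):
    """Negative helper: the caller must know the name already exists.
    Note how most of the logic duplicates the positive helper."""
    editor.save_as(name)
    assert editor.overwrite_prompted, "expected an overwrite prompt"
```

The test case author now has to pick the right helper for each situation, which is exactly the burden the next paragraphs argue against.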

Our automation stack removes the need to make this distinction. We don't have positive tests. We don't have negative tests. We just have tests. When I write tests for cut-and-paste I don't have to remember to use CutThatShouldWork when something is selected and CutThatShouldFail when nothing is. I just call Cut in both cases and know that the right thing will happen for both execution and verification.

Decoupling execution from verification simplifies both sides. Execution doesn't need to worry about whether there is something to cut or not but can just focus on the act of cutting. Since we baseline application state at the start of each action, verification can focus on the something-should-change case (e.g., something is selected) and the nothing-should-change case (e.g., nothing is selected) is taken care of automatically. And the test case can ignore both how the execution works and what's involved in verification and just focus on the steps for the test.
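One way to sketch this unified approach (again with invented names, as a stand-in for our actual stack): a single `cut` action baselines the relevant state, executes, and then verifies whichever outcome should have occurred, so the test case never chooses between a should-work and a should-fail variant.

```python
class FakeCanvas:
    """Stand-in for a Sparkle-like drawing surface."""
    def __init__(self, items):
        self.items = list(items)
        self.selection = []
        self.clipboard = []

    def select(self, item):
        self.selection = [item]

    def do_cut(self):
        # Raw execution: cutting with nothing selected is a no-op.
        if self.selection:
            for item in self.selection:
                self.items.remove(item)
            self.clipboard = self.selection
            self.selection = []

def cut(canvas):
    """One action for both cases: baseline, execute, verify."""
    baseline_items = list(canvas.items)
    baseline_selection = list(canvas.selection)
    baseline_clipboard = list(canvas.clipboard)

    canvas.do_cut()

    if baseline_selection:
        # Something-should-change case.
        assert canvas.clipboard == baseline_selection
        assert all(it not in canvas.items for it in baseline_selection)
    else:
        # Nothing-should-change case falls out of the baseline for free.
        assert canvas.items == baseline_items
        assert canvas.clipboard == baseline_clipboard
```

The test case just calls `cut(canvas)` in either state and trusts the action to execute and verify correctly.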

Simplicity all around - that's the goal we are reaching for. Let me know how you are achieving it!


*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.

Comments (1)
  1. Jared says:

    My observation is that the main purpose of distinguishing between positive and negative tests for many groups was that the ‘positive’ tests were considered the minimum level of functionality that could ship. That is, if the system did what was expected for ‘normal’ inputs, it would be of use.

    I think this distinction still has some value for groups developing software for internal use, relying on manual testing. I suspect not everyone can give it up just yet.

Comments are closed.
