Missing States

Every tester forms a model of each application they test. Sometimes this model is explicit, such as when the tester is doing model-based testing. Other times it is unconscious, such as when a tester does not know why they do what they do to find problems.

Similarly, every application functions according to a model. Sometimes this model is explicit, such as when the application is built on a state machine. Other times it is unconscious, such as when it is built by a developer who does not know why they do what they do.
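To make "built on a state machine" concrete, here is a minimal sketch of what an explicit state-machine model might look like. The states, actions, and media-player framing are illustrative assumptions, not any particular application's actual model:

```python
# A minimal, hypothetical state machine for a media player.
# The states and actions are illustrative assumptions only.
TRANSITIONS = {
    ("Stopped", "play"): "Playing",
    ("Playing", "pause"): "Paused",
    ("Playing", "stop"): "Stopped",
    ("Paused", "play"): "Playing",
    ("Paused", "stop"): "Stopped",
}

def step(state, action):
    """Return the next state, or fail loudly when the model has no such transition."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"No transition for {action!r} from {state!r}")

# Walk one expected path through the model.
state = "Stopped"
for action in ["play", "pause", "play", "stop"]:
    state = step(state, action)
print(state)  # -> Stopped
```

Note that the model answers only the questions it was built to answer: ask it about a (state, action) pair nobody anticipated and it simply errors out, which is exactly where the testing discussed below gets interesting.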

Every customer forms a model of each application they use. Sometimes this model is explicit, such as when the customer explains to someone else their view of why the application works the way it does. Other times it is unconscious, such as when the customer accretes a set of magic steps which get the application to do what they want it to do.

Sometimes these models match up. More often they are wildly different.

The customer's model matching the developer's model tends to be a Good Thing. If the customer's understanding of how and why the application does what it does matches the way the developer designed the application, the customer will know how to make the application do what they want it to do, and they will rarely be surprised by it. The extent to which the customer's model diverges from the developer's model tends to indicate the extent to which the customer will be annoyed and frustrated as they attempt to use the application.

The tester's model matching the developer's model is a Good Thing insofar as it indicates each has a similar understanding of how the application is supposed to function. It can also be an Undesirable Thing insofar as the tester gets trapped by model blindness.

Model blindness occurs when a tester forgets that a model can only contain those states which are expected to happen. Some of the states will be standard behavior; others will be for error conditions and other rare occurrences. All of them have been considered, else they wouldn't be part of the model. While verifying these expected states and the transitions between them is important, a tester who limits themselves to only these states is, as Shrini reminded me, missing a target-rich environment.

This is one reason that cross-feature and integration testing tends to find copious gnarly issues - while most developers are able to maintain a model of their own feature in their head, doing the same for every other feature as well becomes progressively more difficult as the number of other features grows. So they miss things.

One way to avoid model blindness is to look at only the states, ignoring the transitions, and search for ways to get from any one state to any other state. Another is to look for additional paths for reaching those states. Your favorite get-out-of-the-box-I-am-in technique likely works well here too.
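That search for off-model paths can be mechanized: enumerate every (state, action) pair, subtract the pairs the model covers, and treat each leftover pair as a candidate probe. A hedged sketch, using a hypothetical three-state model whose names are my own invention:

```python
from itertools import product

# Hypothetical model under test; all names here are illustrative assumptions.
STATES = {"Stopped", "Playing", "Paused"}
ACTIONS = {"play", "pause", "stop"}

# The (state, action) pairs the model expects to occur.
MODELED = {
    ("Stopped", "play"),
    ("Playing", "pause"),
    ("Playing", "stop"),
    ("Paused", "play"),
    ("Paused", "stop"),
}

# Every pair the model does NOT cover is a candidate off-model probe:
# what does the application actually do when you try it?
off_model = sorted(set(product(STATES, ACTIONS)) - MODELED)
for state, action in off_model:
    print(f"Try {action!r} while in state {state!r}")
```

Each printed line is a question the model cannot answer - pressing pause while stopped, pressing play while already playing - and exactly the sort of place where applications do something nobody designed.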

Model blindness can occur even when your model is different from that of your developer. If you are working from a model, you have excluded certain items. Unexcluding one or more of these will probably find issues.

Assuming you're working from a model.

Which you always are.

*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.

Comments (3)

  1. Michele says:

    Preconceived notions about the product being tested, whether from the developers, documentation, or even the tester herself, can truly hinder the process of finding information.  If testers are not careful, they could simply be subconsciously providing evidence to fit whatever model they have selected.  

    It is also not a bad idea, if you have the time, to provide yourself with more than one model for testing the same product.  For instance, after receiving a change list on the product in test, I may use a development model for testing, then walk away and defocus.

    I will deliberately change my focus by asking myself questions about how I have been viewing the product.  What information have I been finding?  What am I possibly overlooking?  The questions generally depend on the product.  But I find this exercise very helpful in checking my current model of testing and the bias it may contain.  When I return to the product in test, it is essentially with a new set of eyes.  This provides me with a new model for testing the same product, which enables me to find new information and/or problems with the software.

    Since the models that we use are always linked to our past experiences and possible biased opinions that lie in our minds, it is a good idea to question ourselves and our models as much as we question the product.

  2. Shrini says:


    I am happy to see this post dealing with models. I would have loved to see a post preceding this on the lines of "What is a model (definition, types, modeling techniques)".

    When you say "Every tester forms a model of each application they test" – I would ask “what is a model” to start with. That is probably a great place to start discussion on software models when we talk about testing.

    When you talk about models by developer, customer – you are bringing an interesting dimension to modeling in testing.

    Cem Kaner in one of his presentations mentions that "Testers look for different things for different stakeholders (a stakeholder is someone who is impacted by the action/inaction, success/failure of a product/service)"

    Extending that – A tester would not only need to model the software system in his/her own thinking but relentlessly look for other models of the software (of the other stakeholders) and check the software usage patterns, behaviors, claims, anomalies etc.

    That is why a tester's job becomes interesting, vast, and an “open ended search for problems”.

    >>> Sometimes this model is explicit, such as when the tester is doing model-based testing.

    Did you mean Finite state model based Test design? When people say “model based testing” (in Harry Robinson’s style) – they actually mean [finite state machine] model based test [design]ing.

    This means – Finite state machine model based test design = Model based testing.

    Note that the words in [] are conveniently ignored. This gives the impression that FSMs are the only way of doing explicit or “formal” or mathematical modeling.

    >>> every application functions according to a model.

    Probably it is the other way round. How about this rephrase – “A model helps us to understand some observable behavior”. A model in simplistic terms is a “view”, a description or representation. A model is not an absolute thing, whereas the application is.

    A software application has many views (infinite). A model helps us to comprehend some set of application behavior.

    Every model has boundaries. A good tester creates many models and is aware of boundaries of those models. An important thing about knowing the boundaries is about understanding things that lie outside that specific model.

    When we talk about different models – developer, tester, customer – the value lies less in having a match between these models (that would be more of a confirmatory thing) than in thinking about as many models as possible in a given time frame and checking with customers and developers regarding those models.
