When you take on a feature to test, how do you prove it is right? Generally you might work off a checklist of buckets or areas like functional, globalization, security, boundary, localization, performance, stress, etc. Then, once you have run all the tests you came up with & made sure they passed, it must be right... right? As alluded to earlier, there is also the question of whether you built the right feature for the customer.
The picture below has circulated before, showing how the various components of product development looked at a simple problem.
One might add to this “How the tester tested it”
Part of test’s responsibility is to prove that their feature is correct. This includes not only that it is functionally correct but that it is actually what the customer wants\needs.
Customer Correctness - Involve your customers
I covered this in the previous post about knowing your customer, but it is worth reiterating as it is highly important. Involving your customer as early & as often as possible will increase the likelihood of delivering what they really need. There can be obstacles: maybe you are creating something brand new, so your customer has not seen anything like it before, or maybe you have competitive requirements that limit disclosure. Yet these are usually not insurmountable; at the bare minimum you can solicit feedback from co-workers on the work that you are doing. If what you are working on is targeted at a less technical customer & all your co-workers live & write code, then what about your business admin?
Functional Correctness - Quod erat demonstrandum (QED)
A typical question in a product release is: how do we know that the product is done & ready for release? This can fill up many meetings & even books on the subject. One aspect of this I wanted to focus on is thinking beyond the procedure of manually coming up with a set of test cases, building them, & then running them. Here at Microsoft many testers have degrees in Computer Science\Engineering, and many have advanced degrees, including Masters or PhDs. As such we should borrow from our math backgrounds & think about how we might prove the feature is correct (i.e. QED).
This is more aspirational, but I have seen concepts & ideas used in test to attempt this exact thing; it is one of the hard problems in test. More broadly these are collectively known as Formal Methods, an area in which MSR has made considerable investment.
Model Based Testing
One example of how you can work toward proving a feature correct is model based testing. I am personally still learning more about model based testing & how we can use it in our testing tool belt when developing products, so this is just an intro to the concept for those unaware.
The basic concept is that if you can map out a conceptual model of your product, then you can use an algorithm to walk your model, which will generate many of the test cases for you. Just taking the time to create a model will greatly help in understanding your product and will typically surface questions which then feed back into your dev & PM specs. For example, what happens if we are in X state & then Y happens?
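To make the idea concrete, here is a minimal sketch in Python of walking a model to generate test cases. The media-player states & actions are hypothetical stand-ins for a real feature's model, and a plain breadth-first walk stands in for the smarter exploration strategies real tools use:

```python
from collections import deque

# Hypothetical model of a small media-player feature:
# each state maps the actions available there to the next state.
MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def generate_tests(start, max_steps):
    """Walk the model breadth-first, emitting every action sequence
    up to max_steps as a test case. The expected end state travels
    with each case, acting as a built-in oracle."""
    tests = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if len(path) == max_steps:
            continue
        for action, next_state in MODEL[state].items():
            case = path + [action]
            tests.append((case, next_state))  # (steps, expected state)
            queue.append((next_state, case))
    return tests

tests = generate_tests("stopped", 3)
# Each entry, e.g. (["play", "pause"], "paused"), is a ready-made
# test: drive the product through the steps, assert the end state.
```

Even this toy walk surfaces exactly the kind of question mentioned above: if we are in the paused state & a stop arrives, what should happen?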
Microsoft Research has done some work around Model Based Testing (MBT), which has since been released as a power tool for Visual Studio 2010 called Spec Explorer. You can use Spec Explorer to convert your understanding of the spec into a programmatic model; then, when the model is explored, you can find invalid transitions that occur based on some constraints, helping you find bugs in your spec & design early on.
One key consideration when building out a model of a system is the scope of a given model. If you attempt to model a complex system in detail, you will run into state explosion, where there are far too many permutations & combinations to be useful. Instead, focus the model on the specific area you are testing to minimize this problem.
In order to verify any test result, a tester must develop an oracle to determine whether the result of a test should pass or fail. Sometimes these oracles are extremely simple, the simplest being hard coding the answer to each test. Yet this limits the number of tests to the tester's ability to generate tests & answers to those tests, and those answers must be maintained as the product changes over the course of development. At the other extreme is performing complex calculations which simulate the product. The problem here is the classic one: if test is testing the product code, who is testing this complex test code?
MBT addresses this by using the model you create of your product. Tools like Spec Explorer explicitly capture state in the model and allow the model to return values, which can then be mapped to the actual values the system will return. So tests generated by Spec Explorer essentially have the oracle embedded in each generated test case. To do this, though, the model may have to closely reflect the algorithmic complexity of the system being tested, which can make model building costly. MBT approaches suggest solving this with a collection of models, including composite models (i.e. a more complex model built on top of simpler ones).
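Here is a sketch of that embedded-oracle idea, again in Python rather than Spec Explorer's models: the model action returns the value the system should return, so every step carries its own expected result. The bounded counter & its limit of 3 are invented for illustration:

```python
class CounterModel:
    """Model of a hypothetical counter that refuses to go past 3.
    Its return values act as the oracle for the real implementation."""
    LIMIT = 3

    def __init__(self):
        self.value = 0

    def increment(self):
        if self.value < self.LIMIT:
            self.value += 1
            return self.value  # what the system should return
        return None            # model says: rejected at the bound

class CounterSystem:
    """Stand-in for the real implementation under test."""
    def __init__(self):
        self.value = 0

    def increment(self):
        if self.value < 3:
            self.value += 1
            return self.value
        return None

# Drive model & system in lockstep; the model's return value maps
# directly onto the value the system is expected to return.
model, system = CounterModel(), CounterSystem()
for step in range(5):
    assert system.increment() == model.increment(), f"diverged at step {step}"
```

Note the trade-off the text describes: this model is nearly as complex as the system it checks, which is why keeping each model small & composing them matters.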
There are also other possibilities. Another example of leveraging mathematical ideas is testing something like Query & Set calculation in Forefront Identity Manager (FIM). Given a complex problem to test, the test writer needs to create an oracle, a way to verify the results that the product is producing. The complexity of tests can be limited by the imagination & ability of the tester to validate their results. But sometimes you can leverage one thing as an oracle to verify the correctness of another. In the case of query, if we can express our dataset in another form & translate our query against that dataset into a different form, then we can create a simple oracle. Taking Query in FIM, we know that the query is given in XPath. Conveniently enough, XPath is typically used for querying an XML document. So if we can represent our dataset as XML, then we can use XPath with .Net as that oracle.
This leverages the concept of transitivity in mathematics: if A=B & B=C, then A=C. Or in our case, if it is true that .Net XPath returns a valid search result for an XML document, and we represent our dataset as that XML document, then this gives us our answer\oracle. This has the advantage of letting us issue complex tests & leverage existing work to validate our new work. Otherwise we would have to hard code our own test inputs & expected outputs, or create a verification algorithm which could approach the complexity of the product's algorithm.
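The FIM harness used .Net's XPath engine over an XML document; as a rough illustration of the same transitivity trick, here is a Python sketch using xml.etree.ElementTree, which supports a limited XPath subset. The records & the trivial filter standing in for the product are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical dataset: the records the product stores.
records = [
    {"name": "alice", "dept": "sales"},
    {"name": "bob",   "dept": "hr"},
    {"name": "carol", "dept": "sales"},
]

# Step 1: represent the dataset as an XML document.
root = ET.Element("people")
for record in records:
    person = ET.SubElement(root, "person", dept=record["dept"])
    person.text = record["name"]

# Step 2: run the query as XPath against the XML -- this is the oracle.
oracle_result = {p.text for p in root.findall(".//person[@dept='sales']")}

# Step 3: compare against what the product under test returned
# (a trivial in-memory filter stands in for the product here).
product_result = {r["name"] for r in records if r["dept"] == "sales"}

assert product_result == oracle_result
```

Because the XPath library is built & tested independently of the product, agreement between the two results gives real confidence without hard coding expected outputs for every case.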
There are of course some assumptions & limitations here. First, we are assuming that .Net XPath over an XML document returns the correct answer, but there may be bugs that cause it not to. Second, we may have data or queries that we are unable to represent appropriately. These are items the tester has to evaluate as they determine their approach, & they may choose to do additional testing around the limitations of that approach.
Now that we have Query's oracle, we can apply transitivity again and use Query as an oracle for Sets. Sets performs a bunch of complex calculations on the fly. We can build a test case generator based on some descriptive language & then verify that all the Sets calculated on the fly are correct by walking the Set memberships & checking them with a simple query.
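Here is a sketch of that second transitivity step, with the same caveat that the people, the descriptive criteria format, & the stubbed product results are all invented for illustration: each Set's product-computed membership is checked against the already-trusted query oracle.

```python
# Hypothetical records the product manages.
people = [
    {"name": "alice", "dept": "sales", "fte": True},
    {"name": "bob",   "dept": "hr",    "fte": False},
    {"name": "carol", "dept": "sales", "fte": True},
]

# Set definitions in a simple descriptive form: attribute -> value.
set_definitions = {
    "SalesStaff": {"dept": "sales"},
    "FullTimers": {"fte": True},
}

def query(criteria):
    """The trusted query oracle: recompute membership from scratch."""
    return {p["name"] for p in people
            if all(p[k] == v for k, v in criteria.items())}

# Memberships as the product calculated them on the fly (stubbed here).
product_sets = {
    "SalesStaff": {"alice", "carol"},
    "FullTimers": {"alice", "carol"},
}

# Walk every Set & check its membership against the query oracle.
for name, criteria in set_definitions.items():
    assert product_sets[name] == query(criteria), f"Set {name} is wrong"
```

The descriptive definitions double as the input to a test case generator: invent criteria, let the product build the Sets, then let the query oracle judge them.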
Note: I don’t claim credit for the above solutions used in testing Query & Sets in FIM. These were great ideas that other testers on the team developed for tackling these hard testing problems.
As a tester you should consider not just whether your feature conforms to the spec you have been given, but whether the spec & design are correct to start with. This must include rationalizing them against your customer’s needs. Additionally, a tester should move beyond simplistic functional testing of a complex system and think about how to prove the correctness of their feature. Proving correctness should move beyond static validation of 1+1=2 and into leveraging testers’ backgrounds in mathematics and other algorithmic approaches where applicable.
Ultimately, when developing your test strategy, here are a few questions you should strive to answer:
- Is this the right feature for my customer?
- To be right it must solve a customer problem & be delivered in a way that they can consume it.
- How am I proving that the feature is functionally correct?
- What are the limitations & assumptions of my approach?
- Am I testing my feature in a way to cover how my customers expect to use it?