I run a virtual (i.e., via SharePoint and email) tutorial on application design. Recently we started looking at model-based testing. I'm starting out with a simple home-grown vaguely-smart monkey that's not much more than this:
public void Model() {
    // Populate the list of actions.
}

public void Run() {
    for (ulong iterationCount = 0; iterationCount < 100000; ++iterationCount) {
        ulong indexOfNextCommand = /* randomly choose an index */;
        // Execute that command.
    }
}

abstract class Command {
    // Private fields for all verified state.

    public void Execute() {
        BaselineState();
        CalculateExpectedState();
        ExecuteAction();
        VerifyActualState();
    }

    protected abstract void CalculateExpectedState();
    protected abstract void ExecuteAction();

    private void BaselineState() {
        // Initialize state fields to current state.
    }

    private void VerifyActualState() {
        // Get the current value for each state variable and compare
        // against its expected value.
    }
}
You can see why I call this a vaguely-smart monkey. It doesn’t have any notion of what actions are valid when but rather just flails around trying things. The verification must be intelligent enough to look at the current state of the application and know what the results of its action should be, but I haven’t found that limitation to be a problem.
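To make the sketch concrete, here is a minimal, compilable version of one command. The application under test here is a hypothetical counter that silently clamps at ten; the app, that clamping rule, and every name in this sketch are illustrative inventions for the example, not my actual tutorial code. The point is the shape: the verification logic in `CalculateExpectedState` has to know the clamping rule, even though the command itself just flails:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical application under test: a counter that silently clamps at 10.
class CounterApp
{
    public int Value { get; private set; }
    public void Increment() { if (Value < 10) Value++; }
}

abstract class Command
{
    protected readonly CounterApp App;
    private int baselineValue;    // verified state captured before acting
    protected int ExpectedValue;  // what the model predicts the state will become

    protected Command(CounterApp app) { App = app; }

    public void Execute()
    {
        BaselineState();
        CalculateExpectedState();
        ExecuteAction();
        VerifyActualState();
    }

    protected abstract void CalculateExpectedState();
    protected abstract void ExecuteAction();

    private void BaselineState() { baselineValue = App.Value; }

    private void VerifyActualState()
    {
        if (App.Value != ExpectedValue)
            throw new Exception(
                $"Expected {ExpectedValue}, got {App.Value} (baseline was {baselineValue})");
    }
}

// The verification must be smart enough to know Increment clamps at 10.
class IncrementCommand : Command
{
    public IncrementCommand(CounterApp app) : base(app) { }
    protected override void CalculateExpectedState()
        => ExpectedValue = Math.Min(App.Value + 1, 10);
    protected override void ExecuteAction() => App.Increment();
}

class Monkey
{
    static void Main()
    {
        var app = new CounterApp();
        var commands = new List<Command> { new IncrementCommand(app) };
        var random = new Random();
        for (int i = 0; i < 1000; i++)
            commands[random.Next(commands.Count)].Execute();
        Console.WriteLine("1000 iterations, no mismatches");
    }
}
```

If the developer "fixed" the clamp without telling the model, the very first Execute past ten would throw, which is exactly the kind of cheap, continuous check this monkey buys you.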
This is a step up from a dumb monkey, which just randomly pokes around hoping for a crash, but it's nowhere near as smart as something like AsmL that actually understands the app's state graph. If you understand the state graph you can generate traversals guaranteed to hit every node, along with other such interesting walks. My monkey doesn't give you any guarantees except that it will run for a while and try to do stuff.
As simple as this model is, though, the second command I implemented found a bug in the tutorial code. This is exactly the kind of tool I love: small and fast to implement, with a large potential impact. If it ends up not finding anything, it's no big deal, because I haven't put a lot of time into it.
Not long after I did this for my tutorial I found myself needing to test serialization (e.g., save/close/reopen/is everything still correct) of my product feature. I talked with my developer about various scripted tests I could write and pairwise tests I could attempt, but what we really wanted to do was test serialization after random actions. This monkey immediately came to mind. My dev loved the idea, so I took a day to implement it. I had most of it done in a morning, actually, but a few of the commands turned out to be somewhat complicated and took a while to tune. As a bonus, it was simple to tack on undo/redo testing as well.
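To give a rough idea of how the undo/redo check bolts on: since each command already captures its baseline state and calculates its expected state, undoing should restore the baseline and redoing should restore the expected state. Here is a toy, self-contained sketch of that pattern; the `App` class, its undo/redo stacks, and the `Increment` action are all made up for the example, and the real feature's commands are of course more involved:

```csharp
using System;
using System.Collections.Generic;

// Toy app with a counter plus undo/redo stacks, purely to illustrate the pattern.
class App
{
    private readonly Stack<int> undo = new Stack<int>();
    private readonly Stack<int> redo = new Stack<int>();
    public int Value { get; private set; }
    public void Increment() { undo.Push(Value); redo.Clear(); Value++; }
    public void Undo() { if (undo.Count > 0) { redo.Push(Value); Value = undo.Pop(); } }
    public void Redo() { if (redo.Count > 0) { undo.Push(Value); Value = redo.Pop(); } }
}

class UndoRedoMonkey
{
    static void Main()
    {
        var app = new App();
        for (int i = 0; i < 1000; i++)
        {
            int baseline = app.Value;      // BaselineState
            int expected = baseline + 1;   // CalculateExpectedState
            app.Increment();               // ExecuteAction
            Check(app.Value == expected);  // VerifyActualState
            app.Undo();
            Check(app.Value == baseline);  // undo restores the pre-action state
            app.Redo();
            Check(app.Value == expected);  // redo reapplies the action
        }
        Console.WriteLine("undo/redo checks passed");
    }

    static void Check(bool ok) { if (!ok) throw new Exception("state mismatch"); }
}
```

The nice part is that no new modeling work is required: the baseline and expected state the command already tracks for verification are exactly what undo and redo need to be checked against.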
One day’s effort and now I have a trained monkey banging on my feature for as long as I let it. Using AsmL or some other modeling tool would have saved a little bit of time (since I wouldn’t have had to build the model infrastructure), but the vast majority of the time was spent building the commands, which I would have had to do regardless of the technology driving the whole shebang. I’m generally against building code that someone else has already built — I would much rather spend my time doing a bit of customization to code someone else maintains than building all that code myself — but sometimes doing it yourself is the way to go.
*** Comments, questions, feedback? Can you test? Want a fun job on a great team? Send two coding samples and an explanation of why you chose them, and of course your resume, to me at michhu at microsoft dot com. I need a senior tester and my team needs a dev lead, program managers, and a product manager. Great coding skills required for all positions.