Two Testing Apps


Last week I expressed my love for .Net. I figure this week I should talk about what I was doing that caused those amorous feelings.


Recall that my team is in the early stages of development. We're working on an existing application (Office shared code), so it's not like we're totally starting from scratch. But the new things we have don't quite fit in yet. Because of this, the environment I test in isn't very good: it's a cobbled-together mess of one-off helper apps, demo apps, and other things that host the production code we're writing but fill in the holes with whatever works for now.


If you've been involved in application development, it's likely you've been in a similar place. It's just a reality sometimes at the start, before you get all the pieces put together into an end-to-end working system.


Once we got all these random tools together to a point where I could start testing, I was excited, since that meant I could, you know, do my job. But then I got a little less excited, since the stuff I need to test has a huge coverage matrix, and all these tools cobbled together are not the most user-friendly environment. I thought to myself: “an annoying process with tons of needed coverage - that sounds like a good place for automation.”


In my Test Automation Article I talked about lots of different approaches to the two steps of automation: driving your application and validation. For each I offered a cop-out scenario: a quick and dirty way to do it that mostly worked, but had problems. Well, the automation I wrote last week was a perfect example of a place to use both of those.


I'll explain a bit about the problem space (I can't say too much, sorry). The eventual goal of our stuff is to render drawings. For now this means I take a current Office drawing, run it through a converter program, then run it through a demo app that will draw it with some of the new code. There are lots of random settings on these apps, and some other complexity, but those are the key points.


I decided that I didn't care about the reliability or long-term effectiveness of driving this process, since the process will be changing all the time and soon enough we'll just be doing it in the real apps instead of these fake apps. So driving it by doing hard-coded command line calls and lots of SendKeys calls would be fine. Yeah, it's fragile, but who cares? I just set it off on a machine and don't touch it until it's done.
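To give you an idea of the shape of it, a minimal sketch of that driving code might look like the following. The tool names, arguments, and key sequences here are made up for illustration, since I can't share the real ones:

    // Hypothetical sketch - tool names, arguments, and key sequences are made up.
    using System.Diagnostics;
    using System.Threading;
    using System.Windows.Forms;   // SendKeys lives here

    class Driver
    {
        static void ConvertAndDraw(string drawingPath)
        {
            // Hard-coded command line call to the (made-up) converter
            Process converter = Process.Start("converter.exe",
                "\"" + drawingPath + "\" /out:converted");
            converter.WaitForExit();

            // Launch the demo app and blindly poke at its UI with SendKeys
            Process.Start("demoapp.exe", "\"" + drawingPath + ".converted\"");
            Thread.Sleep(3000);               // crude wait for the window to come up
            SendKeys.SendWait("%vr");         // made-up menu accelerator
            SendKeys.SendWait("{ENTER}");
        }
    }

Fragile, like I said, but when nobody is touching the machine while it runs, that kind of fragility is cheap.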


The driver application ran through all the collateral I had (old Office drawings), converted them, then opened them up in the demo app a few times with various options and took a picture of the output. That's it. It was actually a pretty simple little program (note that the ease of things like file I/O, Windows API calls, and iterating lists in .Net is what allowed it to be a simple little program). That was fantastic, and allowed me to just have another computer chug away and take care of all those menial steps involved in the conversion and drawing of the test file.
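The only other mildly interesting bit is taking the picture. A rough sketch of that outer loop, again with made-up file extensions, option names, and a fixed capture size, looks something like this:

    // Hypothetical sketch - extensions, option names, and the capture size are placeholders.
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;

    class ScreenGrabber
    {
        static void RunAll(string collateralDir, string outputDir)
        {
            foreach (string drawing in Directory.GetFiles(collateralDir, "*.drawing"))
            {
                foreach (string options in new[] { "default", "alternate" })
                {
                    // the convert-and-draw step from the earlier sketch would go here

                    // Grab the screen and save it, named after the source file and options
                    using (Bitmap shot = new Bitmap(1024, 768))
                    using (Graphics g = Graphics.FromImage(shot))
                    {
                        g.CopyFromScreen(0, 0, 0, 0, shot.Size);
                        string name = Path.GetFileNameWithoutExtension(drawing)
                                      + "." + options + ".png";
                        shot.Save(Path.Combine(outputDir, name), ImageFormat.Png);
                    }
                }
            }
        }
    }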


Now I'm stuck with a couple hundred images to look through. Sure, I could use the Windows picture viewer. But I wanted a little more functionality than that. So I wrote a little Windows app that opens up all the picture files in a directory and lets me scroll through them. The added functionality is that I could group the files by type (I used file naming conventions between the two apps to know the type) and have a button to quickly launch the same source file with the same options in the demo app if I saw something that looked wrong. At this point I'm using that fake validation step I talked about. Instead of programmatically validating the output of my test automation, I'm doing it by looking at screenshots. But with this system I can cruise through about 100 test cases a minute (I'm only looking for big, ugly problems). If I was doing it all by hand it would take much longer.
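The plumbing behind the viewer is equally unglamorous. Assuming the screenshots follow the same made-up sourcefile.options.png convention as the sketch above (again, not the real convention), the grouping and the "reopen in the demo app" button boil down to something like this:

    // Hypothetical sketch - the naming convention and demo app arguments are made up.
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;

    class ResultBrowser
    {
        // Group screenshots by the option set encoded in the file name
        static Dictionary<string, List<string>> GroupByOptions(string screenshotDir)
        {
            return Directory.GetFiles(screenshotDir, "*.png")
                .GroupBy(f => Path.GetFileNameWithoutExtension(f).Split('.').Last())
                .ToDictionary(g => g.Key, g => g.ToList());
        }

        // Hooked up to the button: relaunch the same source file with the same options
        static void ReopenInDemoApp(string screenshotPath)
        {
            string[] parts = Path.GetFileNameWithoutExtension(screenshotPath).Split('.');
            Process.Start("demoapp.exe", "\"" + parts[0] + "\" /options:" + parts[1]);
        }
    }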


You wouldn't believe how helpful these two things have been. Sure, it's not completely automated, but it's saved me a ton of time and, perhaps more importantly, a ton of frustration. Running through tests is easy now, instead of a huge amount of copying and pasting and modifying command line parameters and all kinds of other crap like that.


I guess the moral of the story is that you don't have to build a perfect, pretty, end-to-end testing system. Sometimes a quick and dirty throw-away app is perfect for the problem.


Chris


Comments (1)
  1. Matt says:

    Hi Chris,

    I have read your blogs on automation, but I am confused on certain topics; my questions are below.

    Disregarding implementation details, how would you design a way to automate the results of your testing program?

    I read your Test Automation Article and the best way to automate the verification would be to use a ‘visual comparison tool’.

    Say you had such a tool. Forgetting implementation, what would this tool do here? Your Automation article says visual comparison tools just compare against a master image, but is that a plausible verification?

    My question is, is it even possible to properly automate verification of complex programs using visual techniques (from say the field of computer vision and pattern recognition)?

    Verification is clearly the hardest part of test automation. My guess is that the only proper tool that verifies correctly is one that uses a mix of computer vision and artificial intelligence. The computer vision code would find objects within the image, and the AI code would construct a neural net, backpropagate from the expected positioning of the objects, and calculate the offset of the perceived objects.

    FINAL QUESTION:

    Do you think computer vision and ai are the proper computer science concepts to use for automated verification?

    Thanks,

    Matt

Comments are closed.
