QA Gauntlet

Hello.  I’m Rob Huyett, an SDET on the VC Libraries team.  My topic for today isn’t really anything about new technology, or the wonders of Orcas, or anything like that.  It’s really just to give you all a little view of how things work around here.  I’m writing today to talk about one of the tools we use to help keep our test quality high, our QA Gauntlet or “QuAG”.  What exactly is a “gauntlet” tool, you ask?  Well, a gauntlet puts any changes through a battery of tests to make sure that the change is robust and doesn’t break anything.  The name comes from the term “running the gauntlet,” though I’ll leave it up to you whether you’d like to look up the historical origins of the phrase.

QuAG is one of several gauntlets in use around here.  It differs from the others in that it’s been customized to test our tests, rather than to test VC itself.  It’s pretty common for us to have to update our tests.  For instance, as VC itself changes, sometimes the tests also need to change to accommodate the new VC behavior.  Other times, the test simply has a bug that needs to be fixed.  Regardless, we want to ensure that updating our tests doesn’t cause any unexpected problems.  Running all of the required test permutations by hand would be horribly time-consuming, though, and that’s where QuAG comes in.

QuAG consists of a server and a dozen or so client machines.  The client machines include representatives from each of the three supported architectures (x86, x64, and ia64).  After an SDET makes changes to a test and is ready to check them into source control, they submit the changes to QuAG.  The QuAG server then assigns different test scenarios to different client machines.  The number of test scenarios will vary a bit depending on the change being made, but a typical breakdown of scenarios might be x86-native, x64-native, ia64-native, x86-pure, x64-pure, etc.
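To make the fan-out concrete, here’s a minimal sketch of how a server might expand a submission into scenarios and spread them across clients.  The scenario names follow the x86-native/x64-pure pattern above, but the mode list, the round-robin assignment, and all function names are my own illustrative assumptions, not QuAG’s actual implementation.

```python
from itertools import product

# Illustrative values -- the real QuAG scenario matrix isn't public.
ARCHITECTURES = ["x86", "x64", "ia64"]
MODES = ["native", "pure"]

def build_scenarios(architectures=ARCHITECTURES, modes=MODES):
    """Cross architectures with build modes: 'x86-native', 'x86-pure', ..."""
    return [f"{arch}-{mode}" for arch, mode in product(architectures, modes)]

def assign_round_robin(scenarios, clients):
    """Spread scenarios across client machines, one at a time, in order."""
    assignments = {client: [] for client in clients}
    for i, scenario in enumerate(scenarios):
        assignments[clients[i % len(clients)]].append(scenario)
    return assignments
```

With two clients, the six scenarios split three apiece, which is roughly why a dozen machines can cover the full matrix in one pass.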

The client machines all do their work in parallel, and the server keeps track of them all.  When all of the clients report that they have finished (which can take anywhere from a few minutes to a few hours, depending on the tests being evaluated and the state of the client machines), the server looks through the results to see if everything is as it should be.  If all is well, then the change is automatically checked in to source control, and an e-mail is sent out informing the team of the checkin.  If something doesn’t meet expectations, then e-mail is sent to the SDET who submitted the test as well as to the QuAG admins, so that the problem can be investigated and the change resubmitted.
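The decision step above boils down to a simple rule: all-pass means auto-checkin and a team-wide mail, anything else means a mail to the submitter and the admins.  Here’s a sketch of that logic; the recipient names ("team", "submitter", "quag-admins") are placeholders of mine, since the post doesn’t spell out the actual notification details.

```python
def evaluate_results(results):
    """Decide the outcome once every client has reported.

    `results` maps scenario name -> "pass" or "fail".  Returns a tuple of
    (action, recipients, failed_scenarios).
    """
    failures = sorted(s for s, status in results.items() if status != "pass")
    if not failures:
        # Every scenario passed: auto-checkin, notify the whole team.
        return ("checkin", ["team"], [])
    # Something failed: hold the change, notify submitter and admins.
    return ("investigate", ["submitter", "quag-admins"], failures)
```

Returning the failed scenarios alongside the action keeps the notification e-mail actionable: the submitter sees exactly which architecture/mode combinations to investigate before resubmitting.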

Of course, all of this takes quite a bit of work to maintain.  There are over a dozen machines that need to be maintained and kept up-to-date.  The QuAG software consists of a lot of small (and not-so-small) scripts and programs in a variety of languages (mostly batch files and Perl scripts).  In fact, one of our goals for once Orcas is out the door is to take a look at QuAG and try to give it a bit of a tune-up to reduce the maintenance overhead.

As always, your comments and questions are appreciated.  Thanks!


Rob Huyett

VC Libraries Team