When To Automate

My team has a goal of 110% automation, but not every test is worth automating. What's more, the value of automating a particular test case changes depending on where you are in the product cycle. Or as I said in a presentation on this topic:

 What should you automate? Everything!

 What should you automate when you are three months from shipping? Much less!

How do you choose? When it comes right down to it, you have to make a judgment call. But there are some guidelines that will help you decide:

  • Time Spent v. Time Saved. If a test will be quick and easy to automate and doing so will save you lots of time, then go for it! On the other hand, if a test will be complicated, difficult, or time consuming to automate, and automating it isn't going to save very much time, you probably shouldn't bother. For example, when I was faced with testing a major security-related change across three languages, five operating systems, and seven different file formats, I didn't hesitate to automate the process (see the sketch after this list). On the other hand, render testing is often not worth the trouble.
  • Sanity Spent v. Sanity Saved. If a task is tedious, repetitive, or error prone, then it's worth automating. But if you only have to do it once, maybe not. For example, acceptance testing is always worth automating, but a quick pass through a smallish checklist probably isn't. At a previous job we decided that a one-time pass through several hundred files to check their versions wasn't worth automating; instead we divided the files up amongst us so that each of us only had to check thirty files or so.
  • Likelihood of Breakage. Is the feature being tested fragile? Has it historically had a high regression rate? Is it brand-new code, or being changed a lot, or being changed by many developers? If any of these are true then automation is probably worthwhile. Examples from my time at Visio: After Microsoft bought us we replaced our custom menu/toolbar implementation with the standard Office MSO menus and toolbars. This meant a lot of changes throughout the application, especially in the VBA object model. Automating all that testing was definitely worthwhile. On the other hand, solutions that were in hard maintenance - that is, only being touched to fix certain bugs - tended not to have much new automation written for them.
  • Likelihood of Regressions. Spending time to automate a regression test for a bug you found is A Good Thing, but take all the other factors into account. Also consider how likely the fix is to break, and how quickly you need to know if it does.
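
To make the Time Spent v. Time Saved arithmetic concrete: three languages times five operating systems times seven file formats is 105 configurations, so even at ten minutes per configuration a single manual pass eats more than two working days - and you'll run that pass more than once. Here is a minimal sketch of what a matrix-driven test for that kind of coverage might look like, written in present-day Python/pytest terms; every name in it is invented for illustration, and the real work would live in whatever harness actually drives the product on each configuration.

    # A minimal sketch, not the original harness: all names below are
    # invented for illustration. The point is that a 3 x 5 x 7 matrix
    # collapses into one parametrized test instead of 105 manual passes.
    import itertools

    import pytest

    LANGUAGES = ["en", "de", "ja"]                           # three language builds
    OPERATING_SYSTEMS = ["os1", "os2", "os3", "os4", "os5"]  # five target OSes
    FILE_FORMATS = ["fmt%d" % i for i in range(1, 8)]        # seven file formats


    def save_and_reopen(language, os_name, file_format):
        """Placeholder: replace with whatever actually exercises the
        security-related save/open path on the target configuration."""
        return True


    @pytest.mark.parametrize(
        "language,os_name,file_format",
        itertools.product(LANGUAGES, OPERATING_SYSTEMS, FILE_FORMATS),
    )
    def test_secure_save_roundtrip(language, os_name, file_format):
        # pytest generates all 105 combinations; each one would take
        # minutes to set up and verify by hand.
        assert save_and_reopen(language, os_name, file_format)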

"I don't believe you - of all people - are really saying this!" I am most definitely not saying that 110% automation is not a worthy goal. Having every last test case automated would vastly increase your confidence in your application, and it would really pay off come sustained engineering time. But you know as well as I do that there's never enough time to do as much testing as we would really like, and so we have to prioritize. Deciding what tests to execute manually rather than to automate is just as important as deciding what tests to do in the first place.

Having trouble prioritizing? Try this: If you could only run ten automated tests - and not do any manual testing - what would you automate?

*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.