Fast Today Or Fast Tomorrow – Your Choice

My trainer has me timing my morning workouts as one way to track my progress. One day I realized that I had rushed through my workout, paying more attention to beating my previous time than to my form. Needless to say, that workout wasn't very effective.

It wasn't until I was talking about it with my trainer that I realized that experience was just like many others I have had and have watched others have. What's the point of rushing to beat time if quality suffers as a result?

Back in the day when I was first learning Windows programming, I specifically hunted down books that explained how to use the wizards, because all those nitty-gritty details the wizards glossed over would only slow me down. Until, of course, I needed to do something the wizards didn't handle (which never seemed to take very long), at which point I was completely lost, mired in ropes of wizard-generated code I didn't understand.

I find that unit tests save me enormous amounts of time debugging my code and fixing regressions, even factoring in the cost of modifying the tests to keep pace with product changes. I'm not always good at explaining their benefits, however, and so I have seen people skip unit testing in order to get their code into production faster, only to spend double or triple the time they "saved" debugging that code once it's in production. And I have seen people spew out huge numbers of test cases which they then proceed to completely ignore, so that the number of test cases failing due to "test issues" (i.e., test case bugs) grows and grows and grows.
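To make that concrete, here is a minimal sketch of the kind of unit test I mean. The function and its test are hypothetical, not from any real product: the point is that a small test pins down an edge case so that a later "faster" change that breaks it fails immediately, rather than after it ships.

```python
def parse_version(text):
    """Split a dotted version string like '1.2.3' into a tuple of ints.

    Hypothetical example function; strips surrounding whitespace first.
    """
    return tuple(int(part) for part in text.strip().split("."))


def test_parse_version():
    # The obvious case...
    assert parse_version("1.2.3") == (1, 2, 3)
    # ...and the edge case a hurried rewrite is most likely to drop.
    assert parse_version(" 10.0 ") == (10, 0)


test_parse_version()
```

A few minutes writing the second assertion is the "slow" part; rediscovering that whitespace bug in production is the part that takes double or triple the time.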

One afternoon I was talking with a colleague who was rewriting some code he had written at four in the morning. "That's why I don't write code at four in the morning", I said. "But I have to get my work done", he replied. "So you don't have time to do it right the first time but you have time to do it over?" I asked. "Yes."

I may as well have not done my workout that morning, for all the good it didn't do me. The wizards got me to a skeleton app quickly but slowed me down soon thereafter. The team that skipped unit tests spent time debugging their production code later, and the team that ignored their failing test cases spent time fighting product bugs. My friend may as well have not written that code, seeing as how he had to write it all a second time.

Slow and steady wins the race. Doing it right the first time helps. Keeping it right thereafter helps too. All this "slowness" actually helps you go faster. At least that's what I've found. How about you?

*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.

Comments (3)

  1. Harris says:


    I enjoyed your (two) posts – this one and "Speed Trap".  My thoughts on the matter were a bit lengthy for a comment so I blogged about it.

    Thanks for keeping things in perspective regarding pace, maintainability, design and testing…


  2. Anutthara says:

    Hi – that was a neat perspective on "faster or better?" In a tester’s life, we are often faced with such choices when building the test fx – do you just want to wrap it up or give it the TLC it deserves and take the extra EE pain. And I must confess, I have moved from the "faster" to the "better" group gradually.

    I agree with the unit testing part and I suppose it is kinda universally agreed upon (at least in our div) that writing unit tests is really faster and better in the long run.

    But I am faced with these choices in other matters…like, say, TDD. All the case studies that I have seen so far confess a 15-50% increase in development time, but also claim that their quality is 150-300% better. But the metrics used to quantify quality are so wild! It is then that the problem enters a difficult domain. How am I to convince myself or others that expending this extra effort initially will lead to benefits later, especially when there is no documented result that proves so? This often presents itself as a difficult choice to make, and the temptation is always to follow the faster route unless you are 120% sure that the other route is going to be better.

    On a tangential note, I guess this is where most testers face the greatest amount of confusion. Out of all the metrics that we have – CC, BVTs, bug numbers, OGFs…how do you give one solid quantifier for quality? I would love to hear more about this from you.

  3. Shaun Bedingfield says:

    I think that doing it right the first time is almost always the right way to go.  However, it is not always possible.  Often analysis is incomplete and it is hard or impossible to know that you are creating the wrong code.  In addition, people have to be convinced that the small initial extra time up front really pays for itself and if your company has bought into the "good-enough" software philosophy this can be a hard buy.  People tend to judge productivity on what they see today not tomorrow.

    That said, things rarely get better, as programmers in a "good-enough" environment start learning to code to the lowest common denominator, and by the time management has been sold on "quality-first", the programmers are already set on low-quality code.

    As a developer, the best route I can usually see is to keep the overall design on a "quality-first" track but be willing to compromise and generate prototype-level code where necessary to meet the expectation of early code, fast, with the understanding that this code will have to be thrown out (i.e., the backend is 20% complete but I have to write a user interface on top of it and get the functionality that really isn't there yet to just work). Low-quality code looks great when your project is small but is a disaster when it becomes large, as code tends to do.

    Throwing out an entire system and rewriting it is almost never practical, and code tends to live a long time (at least 10 years). Coding without discipline generates fast code today and then makes later development exponentially harder. Diseconomies of scale make high-quality code necessary. The bigger the project you are working on, the more important "quality first" is. A project that is 50 kloc can be done with a good-enough approach. A project that is several hundred or thousand kloc is hard or impossible.

    I really would like some good arguments to present to management to eventually convince them of the need for white-box testing. To me, black-box testing is just too little, too late. It is like calling in an expert to fix a problem when it is unmanageable rather than starting to test from the first line of code. More information earlier increases visibility and lets me know if there is a design flaw that needs to be corrected today, rather than forcing me to rewrite tens of thousands of lines of code tomorrow.

    Testing is a visibility aid that lets one understand how the system is really working and what it can really handle. In a world of idealism, it is cold hard reality. I heard that the Sysinternals people joined the team, and my first thought is: I hope that you can use their products to lend better visibility to what is going on in the test process. Their code is used a lot to find hotspots or examine the behavior of programs, and this makes it a good test aid.

    Though it was not asked, here is my opinion on quality. Quality is a complex metric, partly related to testing and partly not. To simplify things, look at development as in the waterfall model. Problems "earlier" in this hierarchy cost more and are more important.

    Analysis must be tested by fanning a project out to users and making sure it is what they really need. Plans should be made to pilot the software and confirm analysis suspicions. The pattern continues through every level, with every level being testable, bringing more visibility into the process.

    Ultimately, quality is this: how well does the product sell, and how well am I minimizing development costs while ensuring customer satisfaction? The customer and the dollar are the final quality metrics. Any other metric is just a predictor. The closer a metric reflects the bottom line, the more important it is. If you want to show that a metric is useful, show how it correlates with the information needed at a high level. A thousand bugs in a part of the system that isn't used are less important than ten bugs in a part that is heavily used.
