The Power of Internal Tools

Why is it that people let limitations in internally developed tools prevent progress? Yes, that’s more of a rhetorical question than anything else. But there is a part of me that would love the definitive answer to this question. When a team goes through a reengineering process, they often let the existence and limitations of internal tools drive the newly defined process – why is that? After all, it’s just software – shouldn’t it change to support the business needs? Have you really developed the most optimized process if you make compromises due to limitations in the software tools that supposedly support the process?

Here in SQL we’re revamping our development process to make it more Agile-like. We have come up with the concept (OK, not very original, but work with me) of improvement teams. These teams are responsible for modifying the product in support of a particular scenario – scenarios may have multiple improvement teams building functionality. The basic gist is that an improvement team is responsible for adding a particular set of capabilities to the product; once the improvement is complete, the code can be checked in and the team can move on to something else. An improvement is defined as:

  • An improvement is a unit of work which:
    • Is complete and shippable
    • Does no harm to the product
    • Adds value to the product

Sounds pretty cool and very Agile- and Scrum-like: when an improvement is complete, it is completely complete. It’s ready to go, and the team can move on to the next improvement. We can ship the product once we believe we have enough improvements complete. Well, most things sound great in theory, but when put into practice it all starts to break down.

When I read the above definition I was certain that Books Online (BOL) content had to fall within the definition of “complete and shippable.” Well, guess what: the tool the BOL authoring team uses doesn’t seem to support this process. Don’t ask me what it means to not support the process, because I don’t know. I mean, we can author BOL content – we did it for Yukon – so what’s the problem? What I do know is that the product is not complete and shippable without BOL content. So how can an improvement be complete and shippable without BOL content? The answer I get is that each improvement will need an exception for delivering the appropriate BOL content at a later date. A later date! Doesn’t that mean we’ve changed the definition of an improvement? Shouldn’t we update it to say something like:

  • An improvement is a unit of work which:
    • Is complete and shippable – except for those things the team wants to complete later
    • Does no harm to the product
    • Adds value to the product – but not all of the expected value

It’s a slippery slope. Today it’s BOL, and tomorrow it’ll be something else, like passing all automated test cases – “Gee, it passed 85%; isn’t that good enough to check it in? We don’t think it’ll do any harm…” Maybe I’m just a purist, or maybe idealist is a better word, when it comes to things like this. Either we stick to the definition or we change the definition. But trying to live in a world of exceptions leaves the door open just enough for more and more exceptions to walk on through. And what really chaps my hide is that people believe delivering BOL content with the improvement is the right thing, but they claim the internal tools used to author BOL content don’t support this. “Don’t support this” – how can that be? Don’t we own the tools? Can’t we create an interim manual process to handle this – such as requiring the improvement team to show demonstrable proof that the BOL content is complete prior to gaining sign-off to check in?

If we make it painful for teams to get check-in approvals, they’ll go and fix the BOL tooling issue. If we give them exceptions for each and every little problem, the problems will never get fixed, and over time we’ll end up with a Swiss-cheese, exception-driven process – a process that will have to be re-engineered in a year or so.