Seven deadly sins of programming - Sin #1

So, the time has come for the worst sin.

Just to recap - and so there is one post that lists them all - here are the ones that I've covered so far:

Some people have remarked that all of these are judgement calls, and really more a matter of aesthetics than actual sins.

That is true. I didn't include things like "naming your variables i, j, & k" as sins, because I don't think that's a real problem in most of the code I'm likely to have to deal with, and there really isn't much argument over whether it's a good idea or not.

It perhaps would have been better to title this series, "Seven things Eric would really prefer that you don't do in code that he has to work with", but that is both ungainly and lacking the vitality of a post with the term "sin" in it.

It's all marketing, you see - or you would if you were actually reading this post, but given my track record on the last six, it's probably a good idea to cut your losses now and spend your time more productively, like switching your entire codebase from tabs to spaces (or spaces to tabs...)

When I was a kid, I was fairly interested in WWII. I read a lot of books about it, from general histories about the war, to books on the warfare in the Pacific, to books about the ground war in Europe.

One of the interesting features of the military during that time - one that I didn't appreciate until much later - was how they balanced the desire for advancement in their officer corps vs the need to only advance the most talented and capable. There were really two schools of thought at the time.

The first school advocated an approach where a lower-ranked officer - say, a colonel - would be promoted to fill a vacancy directly, on the theory that it made the chain of command cleaner, and you'd quickly find out if he had "the right stuff".

The second group advocated using "field promotions", in which a colonel would be temporarily promoted to see if he could perform in the job. The theory here was that the service would end up with only the best colonels promoted, and that it was much easier (and better for both the officer and the service) to let a field promotion expire rather than demote an officer already given a full promotion.

Over time, the approach advocated by the second group was borne out as having far better results, and the danger of the first approach was recognized.

Which brings us on our roundabout journey to our final sin:

Sin #1 - Premature Generalization

Last week I was debugging some code in a layout manager that we use. It originally came from another group, and is the kind of module that nobody wants to a) own or b) modify.

As I was looking through it, I was musing on why that was the case. Not to minimize the difficulty in creating a good layout manager (something I did a bit of in a previous life), but what this module does really isn't that complex, and it has some behavior that we would really like to change.

The problem is that there are at least three distinct layers in the layout manager. I write a line of code that says:

toolbarTable.SetColMargin(0, 10);

and when I step into it, I don't step into the appropriate TableFrame. I step into a wrapper class, which forwards the call on to another class, which forwards it on to yet another class, which finally does something.

Unfortunately, the relation between the something that gets done and the TableFrame class isn't readily apparent, because of the multiple layers of indirection.
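
To make the complaint concrete, here's a rough sketch of the kind of call chain I'm describing. The wrapper and adapter names are invented for illustration - only TableFrame and SetColMargin come from the actual code:

public class TableFrame
{
    // the code that actually adjusts the layout lives down here
    public void SetColMargin(int col, int margin)
    {
        // ... real work ...
    }
}

public class LayoutAdapter
{
    private TableFrame frame = new TableFrame();

    // layer 2: forwards again, renaming the call along the way
    public void SetColumnMargin(int col, int margin)
    {
        frame.SetColMargin(col, margin);
    }
}

public class ToolbarTable
{
    private LayoutAdapter adapter = new LayoutAdapter();

    // layer 1: what the calling code sees - it just forwards
    public void SetColMargin(int col, int margin)
    {
        adapter.SetColumnMargin(col, margin);
    }
}

Nothing in the middle does any work; it's just plumbing between the caller and the one class that does.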

Layers of indirection that, as far as I can tell (and remember that nobody wants to become the owner of this code by showing any interest in it or, god forbid, actually making a modification to it...), aren't used by the way we use the layout manager. They're just mucking things up...

Why is this the #1 sin?

Well, as I've been going through the sins, I've been musing on how I ranked them. One of the primary factors that I used is the permanence of the sin.

And this one is pretty permanent. Once something is generalized, it's pretty rare that it ever gets de-generalized, and in this case, I think it would be very difficult to do so.

<Agile aside>

This might be slightly different if there were full method-level tests for the component - one could consider pulling out that layer. But even with that, it would be hard to approach in a stepwise fashion - it could easily turn into one of those 3-hour refactorings that make you grateful that your source code control system has a "revert" feature.

</Agile aside>

Or, to put it another, fairly obvious way:

Abstraction isn't free

In one sense this seems obvious - when you develop a component that is used by multiple clients, you have to spend a little extra effort on design and implementation, but then you sit back and reap the benefits.

Or do you?

It turns out that you only reap the benefits if your clients are okay with the generalized solution.

And there's a real tendency to say, "well, we already have the ListManager component, we can just extend it to deal with this situation".

I've known teams where this snowballed - they ended up with a "swiss army knife" component that was used in a lot of different scenarios. And like many components that do a lot, it was big, complex, and had a lot of hard-to-understand behavior. But developing it was an interesting technical challenge for the developers involved (read that as "fun and good for their careers"...)

The problem came when the team found that one operation took about 4 times as long as it should. But because of the generalized nature of the component doing the operation, there was no easy way to optimize it.

If the operation had been developed from scratch without using the "uber-component", there would have been several easy optimization approaches to take. But none of those would work on the generalized component, because you couldn't just implement an optimization in one scenario - it would have to work for all scenarios. You couldn't afford the dev cost to make it work everywhere, and in this case, even if you could, it would cause performance to regress in other scenarios.

(At this point, I'm going to have to have anybody thinking "make it an option" escorted out of this post by one of our friendly ushers. How do you think it got so complex in the first place?)
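
To make up a small example of the bind this puts you in (none of these names come from the real component; this is just a sketch of the general problem):

using System;
using System.Collections.Generic;

// The generalized component: it has to work for every client, so it can't
// assume anything about the data it's handed.
public class UberList<T>
{
    private List<T> items = new List<T>();

    public void Add(T item)
    {
        items.Add(item);
    }

    // A linear scan is about the best it can safely do for everyone, because
    // some callers pass unsorted data and arbitrary predicates.
    public T Find(Predicate<T> match)
    {
        foreach (T item in items)
        {
            if (match(item))
                return item;
        }
        return default(T);
    }
}

// The purpose-built version for one scenario is free to require sorted input
// and use a binary search. You can't push that optimization into UberList.Find
// without breaking the callers who hand it unsorted data.
public static class SortedIdLookup
{
    public static int Find(int[] sortedIds, int id)
    {
        return Array.BinarySearch(sortedIds, id);
    }
}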

At that point, you often have to think about abandoning the code and redeveloping in the next version. And in the next cycle, this group *did* learn from their mistakes - instead of the uber-component, they built a small and simple library that different scenarios could use effectively. And it turned out that, overall, they wrote less code than before.

HaHA. I make joke.

What they really did was build *another* uber-component that looked really promising early (they always do), but ultimately was more complex than the last version and traded a new set of problems for the old ones. But, building it was a technical challenge for the developers involved, and that's what's really important...

How do you avoid this sin?

Well, YAGNI is one obvious treatment, but I think a real treatment involves taking a larger view of the lifecycle costs of abstraction and componentization.

Is that one general component really going to be better than two custom solutions?

(if you didn't understand the story, look here and see what rank is above a colonel...)