Performance Problems Survey — Results


OK, here are the responses to my highly non-scientific survey.

Your project is "In Trouble."  The Big Boss has just walked into your office and told you that he needs you to save the day.  Which of the following is the more likely culprit of this crisis that has your boss in a panic:

1) A developer failed to code a good algorithm in some module and now a very smart person must come along and replace that module with something much better.

2) The project has far too many layers of abstraction, and all that nice readable code turns out to be worthless crap that never had any hope of meeting the goals, much less being worth maintaining over time.

And the results are:

Of 27 responses at this writing:

  • 8 didn't specify #1 or #2 at all or were balanced
  • 4 went clearly for #1
  • 15 went clearly for #2

Which, if you like, means only 4 of the 19 decisive votes (21%) were for #1, with 8 abstaining.

I suppose I shouldn't be too surprised, given how I asked the question, but let me say what point I was trying to make, and it is this:

Number 1 isn't a crisis, it's normal.  It happens all the time and it isn't, or shouldn't be, an especially big deal.  In fact, if you try too hard to prevent all your #1s you are probably going to waste a whole lot of effort on "Premature Optimization."  Number 2, on the other hand, is really a crisis: like some other quality factors, over-abstracting can render the code fundamentally unsuitable for the job at hand.  No amount of clarity of intent or maintainability can save it.  These are true crises, where significant effort is totally wasted and will have to be re-done.  Arguably, preventing those kinds of crises is why I have a job at all.

Sometimes people say that "usability" is the key factor driving the long-term cost of code development.  I think I would agree with that statement, but I may have a different definition of usability than you expect.  I don't mean elegant, simple, easy-to-read, convenient, etc.  Those things certainly help, but they aren't sufficient.  The literal meaning of the word is closer to what I want -- I have to be able to use it.  Goodness knows Win32 isn't the most elegant thing on the planet, but it has a high degree of "fitness for purpose."  Which is to say, it's good at what it does.  It couldn't possibly succeed if it were not, and it's arguably the most successful API set ever created.

So, while a hard-to-understand API does lose some points of "usability," an easy-to-understand API that does its job very poorly isn't worth the magnetic particles it exists on.  "Usable first, then re-usable" was something I quoted a while ago, but really I could say: usable first, or else nothing matters at all.

A bad module can be fixed.  A bad design has to be tossed.  Localized problems aren't the real enemy, they're just the regular bumps along the way.

Comments (11)

  1. James Kovacs says:

    I would hazard to point out that elegant code that demonstrates clarity of intent isn’t the problem here. The fundamental problem is not measuring early and often to ensure that your code meets the performance required for the task at hand given the intended deployment environment. I would hate to see people avoiding indirection because they’re too concerned with finding themselves in a "situation #2". That said, indirection for indirection’s sake is also the wrong way to go. It all comes down to "measure, measure, measure".
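
"Measure, measure, measure" can be as simple as timing candidate implementations against each other before declaring a crisis. A minimal sketch of that habit (the function names here are illustrative, not from the post):

```python
# Hypothetical sketch of "measure, measure, measure": time two candidate
# implementations instead of guessing which one is the problem.
import time


def concat_naive(items):
    # Quadratic string building -- the kind of module a "#1" fix replaces.
    s = ""
    for it in items:
        s += it
    return s


def concat_join(items):
    # Linear alternative using str.join.
    return "".join(items)


def measure(fn, data, repeats=5):
    """Return the best-of-N wall-clock time for fn(data), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return best


data = ["x"] * 10_000
print(f"naive: {measure(concat_naive, data):.6f}s")
print(f"join:  {measure(concat_join, data):.6f}s")
```

Best-of-N is used rather than an average so that one noisy run doesn't skew the comparison; the point is only that the decision is made with numbers in hand.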

  2. ricom says:

    What James said 🙂

    You know I could just post blogs once a day that say "measure, measure, measure" but I figure it would get boring.

    Instead I post blogs that can be loosely interpreted as "measure, measure, measure" without actually saying it.

    I think here what I’m saying is elegance is great but not for its own sake.  And Usability isn’t necessarily what you think it is 🙂

  3. ~AVA says:

    When you look at the code you wrote yesterday, it seems usable. When you look at your old code, or code by another person, it does not. Often you start with a guess at what these functions do. Once you’ve found the function that appears to do the task, you start thinking about how to call that candidate function: meet prerequisites, use results, process errors, and so on. If you’re lucky, you make the function work, and then start performance measurements. Unfortunately, the research process often stops earlier :). So usability as "good interface" and usability as "good runtime behavior" are related to each other and to discoverability. The good behavior hidden behind a bad interface often goes unnoticed.

  4. Erik Madsen says:

    I suspect many of the readers of this blog work at companies whose business is to develop and sell software.  I understand why #2 would be more common at those businesses.  I work as a programmer in a corporate IT department at a manufacturing company and can say the greater problem in my experience is #1.

    I rarely see overly academic designs that fall into the second category.  Usually the challenge is getting developers to understand the benefits that OO abstraction can provide, rather than preventing them from over-doing abstraction.

    My vote for #1 means that a more experienced developer must put a design into a module that hardly had one to begin with.  In other words the problem was not abstraction per se.  The problem was poor resource utilization, such as database I/O, regardless of whether it was implemented procedurally or with OO techniques.  The poor resource utilization killed performance.

  5. Coconut says:

    I am wondering if anyone has any real stories, with facts, about how #2 can hurt performance?

  6. Norman Diamond says:

    > Number 1 isn’t a crisis, it’s normal.  It happens all the time and it isn’t or shouldn’t be especially a big deal.  In fact if you try too hard to prevent all your #1s you are probably going to waste a whole lot of effort on "Premature Optimization."

    The way you phrased #1 originally, I interpreted it as non-working code that needs to be replaced by working code.  That _is_ a crisis in addition to being normal.  It’s a bigger problem than either the problem that you’re describing now as #1, or #2.  By the way to Erik Madsen, the problem of non-working code is just as big at companies whose business is to develop and sell software.

    [About the Win32 API]

    > It couldn’t possibly succeed if it was not, and it’s arguably the most successful API set ever created.

    While half-biting off my tongue, the other half of my question is:  If the Win32 API is the most successful because the .Net API is built on it, then why isn’t the NT Native API also designated the most successful?

    > an easy-to-understand API that does its job very poorly isn’t worth the magnetic particles it exists on.

    I can stop biting off my tongue now because you said it for me.  Thank you.  (Though in the case of WinCE, the particles aren’t magnetic.)

  7. ricom says:

    I wrote:

    "Goodness knows Win32 isn’t the most elegant thing on the planet but it has a high degree of "fitness for purpose."  Which is to say it’s good at what it does.  It couldn’t possibly succeed if it was not, and it’s arguably the most successful API set ever created."

    Which is to say, warts and all it’s pretty good at what it does which is why so many people use it.

    I didn’t mention .NET — though I suppose the fact that we can build .NET on top of it is as good a testimony to "fitness" as any.

    The NT Native API is good at what it does for its customers but few can relate because there are not that many customers for the NtDoThisAndThat family of APIs.  It should be only the Windows developers.

    I think we actually agree on all of this, except maybe the relative value of Win32 which isn’t the point anyway, it’s only an example. 🙂

  8. Rico (who I am currently listening to on a new episode of "Behind the Code") posted an interesting

  9. Fred says:

    "elegant code that demonstrates clarity of intent (probably) isn’t the problem here. The fundamental problem is not measuring early and often to insure that your code meets the performance required"

    I read #2 to mean that the code worked until the requirements were changed, however slightly. The problem is that changing the elegant, clear code even slightly requires massive effort. Maybe because that’s what I face every day at work. We have code that works; it’s just that no one really understands how it works, so changes are quite tricky.

    But having had to rework two cases of #1 in the last few months, I know which I prefer to do. Even a month of solid work to rebuild a scalable multi-threaded app that wasn’t is better than "squeeze another 5% out of this mess of, uh, ‘stuff’".

  10. DAL says:

    re: Rule#1 "Measure Measure Measure…".

    I’d sure like to see you write some posts about tools and techniques for measuring code, e.g. performance counters, instrumenting code, high-perf logging, interpreting results, etc. I know you’ve written about many aspects of this in the past, but we need more (write a book!). Most people want to do the right thing, but not everyone knows what the right thing is.

    For example, I just got involved in a Win32 project that’s been in development for 7 years (count ’em, 7) and I was asked to look at a particular scenario/use case because the results were "too slow". The first questions I asked were "what wouldn’t be too slow?" and "how are you measuring performance?". The answer to the first was "…don’t know" and to the second was "we’re not". So I just spent two weeks instrumenting the code. Next I start looking at code paths, modules, etc.

    IMO part of the problem is that people don’t know what tools/techniques are available or the tools are too hard to use, so it never makes it into the code.
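
The kind of instrumentation DAL describes can start very small: a scoped timer that records how long each named code path takes, so "too slow" becomes a number you can compare against a budget. A minimal sketch (all names here are illustrative, not from the comment):

```python
# Hypothetical minimal instrumentation: a context-manager timer that
# records wall-clock durations per named scenario/code path.
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(list)  # scenario name -> list of durations (seconds)


@contextmanager
def timed(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name].append(time.perf_counter() - start)


# Usage: wrap the suspect code path, then report the recorded samples.
with timed("load_orders"):
    sum(range(1000))  # stand-in for the real work being measured

for name, samples in timings.items():
    worst = max(samples) * 1000
    print(f"{name}: {worst:.3f} ms worst of {len(samples)} run(s)")
```

Recording every sample (rather than just a total) keeps the data useful later, when the question shifts from "is it slow?" to "is it slow every time, or only sometimes?".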
