Premature standardization

I used the phrase 'premature standardization' in an earlier post today. I'm rather pleased with it, as it is a crisp expression of something that has vexed me for some time: the tendency of standards efforts in the software space to transform themselves into research efforts of the worst kind, namely those run by a committee. I have certainly observed this first hand, where what seemed to be happening was not the standardization of technologies that existed and were proven, but instead paper designs for technology that might be useful in the future. At the time I was an academic researcher, so I was quite happy to contribute, in the hope that my ideas would have a better chance of seeing the light of day as part of an industry standard than buried deep in a research paper. I also valued the exposure to the concentration of clever and experienced people from the industry sector. But now, as someone from that sector developing products and worrying every day about whether those products will solve difficult, real problems for our customers, I do wonder about the value of trying to standardize something that hasn't been tried and tested in the field, and in some cases hasn't even been prototyped. To my mind, efforts should be made to standardize a technology only when:

  • There are two or more competing technologies which are essentially the same in concept but different in concrete form
  • The technologies are proven to work - there is sufficient evidence that the technologies can deliver the promised benefits
  • There is more value to the customer in working out the differences than would be gained through the innovation that stems from the technologies competing head-to-head

Even if all these tests come up positive, it is rarely necessary to standardize all aspects of the technology, just the part that is preventing the competing technologies from interoperating: a square plug really will not fit in a round hole, so my French electrical appliance cannot be used in the UK, unless of course I use an adaptor...

If we apply the above tests to technologies for the development of DSLs, I'd say that we currently fail at least two of them, which means that IMHO standardization of metamodelling and model transformation technologies is premature. We need a lot more innovation, a lot more tools, and, above all, many more customer testimonials that this stuff 'does what it says on the tin'.

Comments (5)

  1. Interesting…

    I’m sure you’ve put something into words that many people have been thinking for some time.

  2. Ralf Haug says:

    Very good comments, couldn’t agree more.

  3. You do raise a good point. And certainly, modeling could use a little head-to-head competition in some respects (as you noted, metamodeling and model transformation and generation are excellent examples).

    Heck, even UML just as a modeling language is young and will still evolve. I think the standardization effort did help bring much-needed standard notation for certain things into play. (We don't need 5 different-looking class diagrams, and still don't.)

    Of course, the fear here may be similar to when UML went to the OMG, that a big company (back then, Rational) would force a standard on everybody else, like it or not.

    This time, it’s looking like IBM and MS. Competition is good, but I wonder if smaller vendors and third-party solution providers are afraid of getting pushed out. One thing about standards processes is that good ideas can at least be heard, regardless of who is saying them.

    Also, some people do use standardization committees to promulgate new ideas. Look at all the stuff in the W3C around web services, the semantic web, etc. And in some cases, that is required, when you need agreement on the "playing field" to make any progress.
