Two approaches to standardization


This is an important topic that I’ve been meaning to blog about for
months, but can’t summon the energy to write the dissertation-length
post that would do it justice.  I think I’ll take the typical
blogosphere approach of throwing out random thoughts as they occur.



Today’s random thought was triggered by “Sun starts thinking about the Next Java. Microsoft delivers it”: “on the other side of the fence, you have Microsoft, which used its own process
to find an heir to the very successful C#: work behind closed doors and, when
done, publish it and tell the world ‘this is it.’”
It’s quite true that there are two polar opposite approaches to
standardization in the software industry today.  One might be
called “design by committee” (apologies to Jonathan Marsh),
where one starts with a cross-industry group of experts, takes various
existing technologies as raw inputs, and over time creates a new
specification that somehow balances the various contending technical
perspectives and business realities.  As Michael Rys and I have
discussed previously, XQuery is a rather good example of this process
working slowly but ultimately effectively. There are of course plenty
of examples of it working quickly and badly (I would argue that
Namespaces in XML is an example), slowly and badly (XML Schema, W3C
XLINK …).  There might be some “quickly and well” examples out
there, but I can’t think of any!



The other extreme, I suppose, is “work behind closed doors, publish it
and tell the world”.  That’s not viable in today’s world, however,
and that’s not what Microsoft does even with critical specs such as C#.
The more viable approach is to start behind closed doors,
then gradually work outwards: get internal stakeholders on board with the
vision and have them provide feedback and suggestions for improvement;
take the result to selected external stakeholders and get their
feedback; publicize a draft spec and get widespread feedback; and
finally take a fairly well-polished result to a standards body for
bug-checking and ratification. Only at that stage does anyone say “this
is it.”  C# 3.0 is at something like the third stage – getting
feedback from external stakeholders.  I can attest that the
feedback we heard at PDC, from MVPs, from bloggers, etc. is being
incorporated into the LINQ / XLinq features previewed a couple of
months ago.



The second approach is definitely the one in favor at Microsoft these days, and the process has to some extent been documented by IBM.  I can certainly appreciate the cynicism of
those who only see the process from the outside, but I’ve not heard of
qualified people being excluded from the public review process.  My
sense, however, is that it is the best hope for producing specs via a
process that is reasonably open — no single company can impose its own
view or break existing specs for nefarious business reasons — but also
has someone ultimately responsible for ensuring the conceptual
integrity of the result. 



Comments (4)

  1. As someone who does work in standards bodies, I understand a lot about what is wrong with them. They are slow, they tend to compromise, and there is a lot of pressure by influential-yet-ignorant people to adopt things (WS-RF) that don’t make sense.

    However, I don’t think that the WS-* process is the right one, no matter how much MS and IBM proclaim its value. Case in point: WS-Addressing.

    Spec status: nearly at 1.0

    Number of tests: 0 as of 1/oct

    Probability of an implementation complying with the spec when there are no formal tests for compliance: 0

    Probability of WS-* interop given that WS-A interop is so low: 0

    All those interop festivals may help the vendors make sure their implementations are vaguely consistent, but the cost of participating is steep, and they do nothing to ensure that any of the implementations actually meet the original standard.

    Standards need to go test driven.

    Also, there is a third way of developing: open source. A single codebase is developed in the open, with tests. No need for committees of architects. No need for multiple codebases. One codebase, whose unit tests provide the formal definition of behaviour.

  2. Bill Donoghoe says:

    Isn’t XML 1.0 an example of "quickly and well"? I have often wondered why this worked. My thoughts on the factors at play are:

    a. the ability of the editor of the specification to "get on with it".

    b. the fact that the marketing divisions within software companies hadn’t discovered "standardisation" (cynical, I know); and

    c. the significance of the standard was underestimated by nearly everyone (it was seen as not important, so it was left to those who were only interested in the result and had no other agendas).

  3. mikechampion says:

    Rick Jelliffe http://www.oreillynet.com/pub/wlg/8313 calls this the "donut line method of standardization" (conversations in the donut line at PDC presumably substitute for a rigorous standards review process). That seems to miss a point that I admittedly did not make clear: once a spec is given the imprimatur of a standards organization, it is the *organization* that owns the standard, which prevents the originator from exploiting its ownership, and generally prevents the kind of stuff that happened in the Bad Old Days. The process we prefer, where specs are pretty well cooked and field-tested *before* being submitted to a standards group, is not how things have worked in the Web standards world, but it requires few if any changes to the standards orgs or their formal processes.

  4. Rick Jelliffe says:

    "That seems to miss a point that I admittedly did not make clear"

    I would like to address some other points that you also did not make: your unstated recommendation that all kittens should be drowned, in particular: MS is going too far, in my opinion. Not to mention your unstated announcement of Windows Live’s policy that everyone must have a first name beginning with "M" and a funny red hat. These are hardly credible, and I never would have raised these issues except for their admitted non-presence!