Domain-Specific Modelling: Is UML really the best tool for the job?

This is a reaction to a recent posting by Grady Booch on his blog (May 21st 2004, “Expansion of the UML”). Before homing in on particular statements, here's a good chunk of the posting to set some context:


"I was delighted to see today's report that Sun has announced support for the UML in their tools. This comes on the heels of Microsoft telegraphing their support for modeling in various public presentations, although the crew in Redmond seem to trying to downplay the UML open standard in lieu of non-UML domain-specific languages (DSL). […] There is no doubt that different domains and different stakeholders are best served by visualizations that best speak their language - the work of Edward Tufte certainly demonstrates that - but there is tremendous value in having a common underlying semantic model for all such stakeholders. Additionally, the UML standard permits different visualizations, so if one follows the path of pure DSLs, you essentially end up having to recreate the UML itself again, which seems a bit silly given the tens of thousand of person-hours already invested in creating the UML as an open, public standard."


Let's start with the statement "There is no doubt that different domains and different stakeholders are best served by visualizations that best speak their language". This seems to imply that a domain-specific language is just about having a different graphical notation - that the semantics underpinning different DSLs are in fact the same, and only the notation changes. This view is further reinforced by the statement "but there is tremendous value in having a common underlying semantic model for all such stakeholders".


How can one disagree with this? Well, what if the semantic model excludes the concepts that the stakeholders actually want to express? If you examine how folks use UML, especially those who are trying to practice model driven development, you'll see that exactly the opposite is happening. Model driven development forces people to get very precise about the semantics of the models they write (in contrast to the sometimes contradictory, confusing and ambiguous semantics that appear in the UML spec). This precision is embodied in the code generators and model analysis tools that are being written. And, surprise, surprise, there are differences from one domain to another, and from one organization to another. Far from there being a common semantics, there are significant, concrete differences between the interpretations being placed on models. And how are these semantic differences exposed notationally? Unfortunately, UML gives you rather limited scope for adapting the notation: about all you can do is decorate your diagrams with stereotypes and tagged values. This leads to (a) ugly diagrams and (b) a significant departure from the standard UML semantics for those diagrams (insofar as that semantics can be pinned down).


I speak from experience. I once worked on the Enterprise Application Integration (EAI) UML profile, which has recently been ratified as a standard by the OMG. The game there was to find a way of expressing the concepts you wanted using UML diagrams decorated with stereotypes, while aligning with the semantics of those diagrams as best you could. It boiled down to finding a way of expressing your models in a UML modelling tool - at first just to visualize them, and then, if you were brave enough, to get hold of the XMI and generate code from them. So in the EAI profile, we bastardized class diagrams to define component types (including using classes to define kinds of port), and used object diagrams to define the insides of composite components, representing the components and ports as objects, and wires as links. You can hardly say that this follows the "standard" semantics of UML class diagrams and object diagrams.
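To make the contrast concrete, here is a hedged sketch (all names hypothetical, not taken from the EAI profile itself) of what a purpose-built component-and-wire metamodel might look like, with component types, ports and wires as first-class concepts rather than stereotyped classes and objects:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal metamodel for component-and-wire concepts of the
# kind the EAI profile had to encode as stereotyped classes and objects.
# In a purpose-built DSL these are first-class concepts, not decorations.

@dataclass
class Port:
    name: str
    direction: str  # "in" or "out"

@dataclass
class ComponentType:
    name: str
    ports: list[Port] = field(default_factory=list)

@dataclass
class Wire:
    source: tuple[str, str]  # (component name, port name)
    target: tuple[str, str]

@dataclass
class CompositeComponent:
    name: str
    parts: list[ComponentType] = field(default_factory=list)
    wires: list[Wire] = field(default_factory=list)

    def validate(self) -> None:
        """Domain rule: a wire may only connect an out-port to an in-port."""
        ports = {(c.name, p.name): p for c in self.parts for p in c.ports}
        for w in self.wires:
            assert ports[w.source].direction == "out", f"{w.source} is not an out port"
            assert ports[w.target].direction == "in", f"{w.target} is not an in port"

if __name__ == "__main__":
    reader = ComponentType("Reader", [Port("data", "out")])
    writer = ComponentType("Writer", [Port("data", "in")])
    pipeline = CompositeComponent("Pipeline", [reader, writer],
                                  [Wire(("Reader", "data"), ("Writer", "data"))])
    pipeline.validate()  # passes: out-port wired to in-port
```

The point is not the code itself, but that a rule such as "wires connect out-ports to in-ports" lives in the metamodel, rather than in a convention layered over class and object diagrams.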


And this gets to the heart of the matter. People are using UML to express domain-specific models, not because it is the best tool for the job, but because it saves them having to build their own visual modelling tool (which they perceive as something requiring a great deal of specialist expertise). And provided they can get their models into an XML format (XMI), however ugly that is, they can at least access them for code generation and the like. Of course, people can use XML for this as well, provided they don't care about seeing their models through diagrams, and they are prepared to edit XML directly.


So, rather than trying to force everyone to use UML tools, we should be making it easy for people to build their own designers for languages tailored to work within a particular domain or organization's process, not forgetting that these languages will often be graphical and that we'll want to get at and manipulate the models programmatically. And this does not preclude vendors building richer designers to support those (often horizontal) domains where it is perceived that the additional investment in coding is worthwhile. Indeed, Microsoft are releasing some designers in this category with the next release of Visual Studio.

Comments (13)
  1. Rod says:

    Give it up

    You are not Booch

  2. Stuart is indeed not Grady, but his points are valid. UML has a good set of semantics for describing O-O programs. It also has a notation, 90% of whose symbols are unrelated to the semantics they represent (the Actor stick figure being the most obvious exception). That, along with its ubiquity, is the reason people try to use it to model entirely different things – with predictably unsatisfactory results.

    In domain-specific modelling, most of the symbols will represent something recognizable from the domain – giving benefits Booch too accepts. The main point though is in the rules – e.g. how are you allowed to connect these symbols – and their semantics: what does a given symbol or connection mean in the system.

    Most views of DSM (some in Microsoft appear to differ) include the idea of generating full code automatically from the models, using domain-specific code generators. Industrial experience has consistently shown such use to be 5-10 times faster than current ‘standard’ practices.

    The challenge is to make building DSMs, and their tool support, as quick and easy as it should be. Microsoft’s envisaged Whitehorse SDK, Eclipse’s EMF+GEF, and metaCASE tools all offer ways to do this, the last even without any coding.

  3. David Webber says:


    Exactly on the money. Check out VisualScript as an environment that allows you to build models and then create XML.

    I’ve just used it to do BPSS models – and for the exact reason that UML cannot handle the richness and precision needed. And especially *context*. Context is the missing part to UML IMHO. These newer XML standards we’re building over in OASIS have context mechanisms at the heart of them.

    Anyway – check out the VisualScript model for BPSS – view the JPG, the XML, and then download and try the model. There’s also a tutorial on using the model.

    Enjoy, DW

Comments are closed.
