Leonardo has an interesting post over at the himalia site, wondering whether abstractions in general (and DSLs in particular) move the testing burden away from DSL users and onto DSL authors. There's certainly a lot of truth here where implementation testing is concerned, although experience tells me that many DSL authors won't test a very high number of language variants against their code generators - some room for more tooling there, I think.
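To make the tooling gap concrete, here's a minimal sketch of what testing many language variants against a generator might look like. Everything here is hypothetical - a toy three-operation DSL, a `generate_code` function standing in for a real code generator, and a hand-written reference semantics - but the shape is the point: enumerate DSL variants mechanically and check the generator's output against the reference for each one.

```python
import itertools

# Hypothetical toy DSL: a program is a pipeline of named arithmetic ops.
# The reference semantics says what each op is supposed to mean.
OPS = {"double": lambda x: x * 2, "inc": lambda x: x + 1, "neg": lambda x: -x}

def generate_code(pipeline):
    """Stand-in code generator: emit Python source for a pipeline of ops."""
    templates = {"double": "({} * 2)", "inc": "({} + 1)", "neg": "(-{})"}
    body = "x"
    for op in pipeline:
        body = templates[op].format(body)
    return f"def run(x):\n    return {body}\n"

def reference_semantics(pipeline, x):
    """Interpret the pipeline directly, independent of the generator."""
    for op in pipeline:
        x = OPS[op](x)
    return x

# Exhaustively test every pipeline up to length 3 against the generator.
failures = 0
for n in range(1, 4):
    for pipeline in itertools.product(OPS, repeat=n):
        namespace = {}
        exec(generate_code(pipeline), namespace)
        for x in (-2, 0, 7):
            if namespace["run"](x) != reference_semantics(pipeline, x):
                failures += 1
print(failures)  # 0 when generated code agrees with the reference semantics
```

A real DSL's variant space is too large to enumerate exhaustively, which is where randomized or property-based generation of model instances would come in - but even this brute-force version is more coverage than many generators get today.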
This reminds me of a conversation I was having with Peter Provost at TechEd about the notion of test-driven modeling and what it might mean as a process for creating models. I'm inclined to think we need an abstraction of the outcomes we want, which we can use to express tests against the model of the implementation. We might then even manage to have a set of constraints between the two that would give us early warning that a particular implementation model wouldn't meet the outcomes, without actually going down as far as code.
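As a rough illustration of what constraints between the two models could look like, here's a sketch under entirely assumed names: an `Outcomes` model capturing required qualities, an `ImplementationModel` capturing coarse architectural choices, and a `check` function encoding the cross-model constraints. None of this reflects any real product; the idea is just that the check runs at the model level, before any code exists.

```python
from dataclasses import dataclass

# Hypothetical outcomes model: qualities the system must exhibit.
@dataclass
class Outcomes:
    max_latency_ms: int
    durable_writes: bool

# Hypothetical implementation model: coarse design choices.
@dataclass
class ImplementationModel:
    storage: str          # e.g. "in_memory" or "replicated_log"
    cache_enabled: bool

def check(outcomes, impl):
    """Cross-model constraints: return a list of early warnings."""
    problems = []
    if outcomes.durable_writes and impl.storage == "in_memory":
        problems.append("in-memory storage cannot provide durable writes")
    if outcomes.max_latency_ms < 10 and not impl.cache_enabled:
        problems.append("sub-10ms latency target likely needs a cache")
    return problems

warnings = check(Outcomes(max_latency_ms=5, durable_writes=True),
                 ImplementationModel(storage="in_memory", cache_enabled=False))
print(warnings)  # both constraints fire for this implementation model
```

The interesting part is that a failing check here is cheap: you learn the implementation model can't meet the outcomes while it's still a model, not after generation or deployment.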
Of course, the next observation is that in some cases a model of the outcomes should be enough to create a complete implementation, without needing a separate model of the implementation. However, I think in most cases you'd need a more mature product line than we've all got to achieve that lack of human input into the implementation choices. It certainly should be possible to generate a skeleton implementation, though.
It's interesting to compare how an outcomes model might differ from a detailed model of business requirements and non-functional requirements. Would they be the same, or would the outcomes contain more of the assumptions about implementation styles than the requirements model would? In practice, I've often found that business requirements have been reverse-engineered from an undocumented mental model of what an implementation could look like, so it'd be interesting to see those assumptions exposed in formally expressed outcomes. It would certainly have flagged up a lot of erroneous assumptions a lot earlier in the process on quite a few projects I've worked on.