M is for… M

M is for M:  How cool is that?

You really can’t start talking about M without first mentioning Oslo, the code-name for Microsoft’s model-driven development infrastructure.  Oslo was formally introduced at Microsoft PDC back in October 2008, and its overarching goal is to reduce the disconnect between developer intent and implementation.   

In model-driven development, the model is the application, rather than just a design artifact.  You build a model via some tools, and you execute that model via a runtime environment, thus enhancing the transparency, flexibility, and productivity of the development process.  Consider XAML (eXtensible Application Markup Language), for instance.  XAML provides a model – or more precisely a domain-specific language, or DSL – for describing user interfaces.  That XAML can then be consumed by the Windows Presentation Foundation (WPF) or Silverlight runtime to ‘execute the model.’

Oslo itself comprises three main components:

Repository – a SQL Server database to store models and metadata,

Quadrant – a visual design tool, and

M – a declarative language for modeling.

M, the topic of interest for this post, is a family of declarative, textual languages for building and working with models.  Within this family are

MSchema – for describing constraints and types,

MGraph – for defining instances of structured data, and

MGrammar – for defining DSLs.

To get started, you’ll want to download the Oslo SDK; the current version is the January 2009 CTP.  In addition to providing assemblies, samples, and documentation, the SDK sets up the SQL Server 2008 repository and provides a tool called Intellipad for authoring M documents.  You can also work with M and Oslo in Visual Studio, via the “M” Project template installed by the SDK.

To make this all a bit more real, I’ll work with a simple domain (here, universities I’ve attended) to create a model, add some data, and develop a simple domain-specific language.  In the process, I’ll touch on M’s three variants.



Let’s get started by defining the model with MSchema.  Below I’m using Intellipad, which ships with the Oslo CTP, to define my model.
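The screenshot doesn’t reproduce well here, so here’s a sketch of what the MSchema looked like.  The type and field names are my own reconstruction (not the exact screenshot contents), and the syntax is approximate January 2009 CTP MSchema:

```
module University
{
    // A state, identified by its name
    type State
    {
        Name : Text;
    } where identity Name;

    // Extent (collection) of State instances
    States : State*;

    // A university, with an auto-generated identity
    type School
    {
        Id : Integer32 = AutoNumber();
        Name : Text;
        City : Text;
        State : Text;
        Students : Integer32;
    } where identity Id;

    // Extent of School instances, constrained so that every
    // School.State value must appear in the States extent
    Schools : (School where value.State in States.Name)*;
}
```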


The model includes two types, one representing a school and one representing a state.  Collections of these types (referred to as extents) are also defined; the * indicates a collection of zero or more occurrences.  The Schools extent is further constrained to those universities with State values in the States collection, thus creating a relationship not unlike a primary-key/foreign-key relationship in a database.  In fact, it correlates to just such a relationship, as you can see when bringing up the T-SQL preview:


The preview shows how the model will be instantiated within the SQL Server repository.  When using Intellipad you can see the SQL dynamically updated in this view as the M model is modified, thus giving you some insight into how the model is translated into relational database concepts.
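For a model like the one above, the generated T-SQL looks something like the following sketch.  This is simplified and hand-written from memory of the CTP’s output – the actual preview includes additional schema and constraint details:

```sql
create schema [University];
go

create table [University].[States]
(
    [Name] nvarchar(255) not null,
    constraint [PK_States] primary key ([Name])
);
go

create table [University].[Schools]
(
    [Id] int not null identity,
    [Name] nvarchar(max) not null,
    [City] nvarchar(max) not null,
    [State] nvarchar(255) not null,
    [Students] int not null,
    constraint [PK_Schools] primary key ([Id]),
    -- The 'where value.State in States' constraint becomes a foreign key
    constraint [FK_Schools_States] foreign key ([State])
        references [University].[States] ([Name])
);
go
```

Note how each extent becomes a table, the identity declarations become primary keys, and the collection constraint becomes a foreign key.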

To load the model into the repository, I first compile it into a binary M image file using the M compiler, which ships with the SDK.  The m command also has a number of additional switches that can be used to verify syntax, generate T-SQL scripts, and reference metadata from other image files.

 M command line
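The compile step in the screenshot was along these lines (the switch names are from the January 2009 CTP and may differ in later builds – run m.exe with no arguments for the current list):

```
rem Compile univ.m into a binary image, using the basic T-SQL transformation
m.exe univ.m /target:TSQL10
```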

Now that I have an image file, univ.mx, I can use the mx command to load the model into the repository.  The command below loads the metadata and model into the default server and the tempdb database, but there are additional command-line parameters available to specify the destination.  mx can also be used to export an image file from an existing schema in the database.


MX command line
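The load step looked roughly like the following.  I’m reconstructing the switch names from memory, so treat them as placeholders and check mx.exe’s built-in help for the exact parameters in your CTP:

```
rem Install the image file into the tempdb database on the default server
mx.exe -i:univ.mx -d:tempdb -s:.
```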


View of Repository

Using SQL Server Management Studio, I can check out the schema that was added to the database.  In this case, I used the basic TSQL10 transformation option (specified via the target parameter of the M compiler).  If I had used the /target:Repository option to create the image file, additional views and triggers would have been created to support the Oslo repository design pattern.



So, now that I have a model in the repository, I need to add some data.  To do so I’ll use MGraph inside of Intellipad to populate a couple of states as well as the universities I’ve attended.  Here you can see the SQL statements, namely INSERTs, that are generated from the model.
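The MGraph instance data looked roughly like this (the values shown here are illustrative, not the exact figures from the screenshot):

```
module University
{
    States
    {
        { Name = "North Carolina" },
        { Name = "Massachusetts" }
    }

    Schools
    {
        { Name = "North Carolina State University", City = "Raleigh",
          State = "North Carolina", Students = 31130 },
        { Name = "University of Massachusetts", City = "Amherst",
          State = "Massachusetts", Students = 25000 }
    }
}
```

Each instance in an extent becomes an INSERT statement in the generated T-SQL.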


To update the repository, I’ll again use the m and mx commands:

m and mx command line

As a result, I now have data populated in my repository, as you can see from the results of the select query in SQL Server Management Studio below.
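The verification query is just an ordinary SELECT against the generated table, for example:

```sql
SELECT [Name], [City], [State], [Students]
FROM [University].[Schools];
```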



While I was able to populate my model with data, the format of that data in MGraph was a bit opaque.  Actually, it looks a lot like JavaScript Object Notation (JSON), and not something that really connotes the domain of the data.  What I’d like to be able to do is define a grammar that’s a bit more user friendly (read: no curly braces and commas!) to people who have domain knowledge but not necessarily implementation knowledge (e.g., analysts versus developers).

In the past, many of us have written grammars in Backus-Naur Form (BNF) and used tools like lex and yacc to build the corresponding parsers.  Here I’ll use MGrammar to accomplish a similar goal.

First, though, I’ll create a file that provides examples of my DSL, for instance, something like the following.  Note, the data really isn’t important at this stage.

DSL sample
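The sample file contained lines in the format I wanted my DSL to support, along these lines:

```
"North Carolina State University" in "Raleigh",
"North Carolina" has 31130 students.
```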

In Intellipad, I next create the start of my grammar by creating a module definition and saving it as a file with the .mg extension.  That extension signals my intent to Intellipad, enables the MGrammar Mode > Tree Preview menu option (where I browse to my example file), and splits my Intellipad view into four panels:

  1. my example,
  2. the MGrammar specification,
  3. a preview of the syntax tree for my grammar, and
  4. an errors view.

Intellipad in MGrammar mode
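The grammar itself was along the lines of the following sketch.  The rule and token names are mine, and the projection syntax is approximate January 2009 CTP MGrammar, so don’t treat this as a copy-paste-ready specification:

```
module University
{
    language UniversityLanguage
    {
        // The document is a sequence of School entries
        syntax Main = s:School* => Schools { valuesof(s) };

        // One entry: "Name" in "City", "State" has N students.
        syntax School = n:QuotedName "in" c:QuotedName ","
                        st:QuotedName "has" num:Number "students" "."
            => School { Name => n, City => c, State => st, Students => num };

        // Note: this token keeps the surrounding quotes in the value,
        // which is exactly the "extraneous quotation marks" issue
        // discussed below
        token QuotedName = '"' (^'"')* '"';
        token Number = ('0'..'9')+;

        // Ignore whitespace between tokens
        interleave Whitespace = ' ' | '\t' | '\r' | '\n';
    }
}
```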

With the example and grammar specification side-by-side, you can tweak either to arrive at the desired results.  The third pane on the right provides a representation of how your grammar will parse the DSL into your domain model.  This particular grammar could use a bit more tweaking; for instance, note the extraneous quotation marks.  That’s a bit beyond the scope of a blog post, though, so for more information I refer you to the links at the end of the article, and in particular to Shawn Wildermuth’s series on Textual Domain Specific Languages for Developers.

Now that I have a grammar (the .mg file), I can create an mgx file from it and use that image file to convert other instances of my DSL into MGraph files.  In other words, I’m using the grammar file to translate files written in my DSL to the underlying MGraph format, which then allows them to be added into the repository just as I did before.  The difference now though is that the source document is in my domain-specific language, not one with curly braces and commas.

So assume I have a document (unc.txt) with the following line:

"University of North Carolina" in "Chapel Hill",
"North Carolina" has 28136 students.

Using the mg and mgx command line utilities, I can transform my file in the DSL to an MGraph format:

mg and mgx command line
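The two steps were along these lines; again, I’m reconstructing the switch names from memory of the CTP, so treat them as placeholders and check each tool’s built-in help:

```
rem Compile the grammar into an image file (Universities.mgx)
mg.exe Universities.mg

rem Parse the DSL document against the grammar, producing MGraph output
mgx.exe unc.txt -r:Universities.mgx
```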

At this point, I’m at the same place I was in the MGraph section above, where I used the m and mx commands to generate the requisite SQL statements to update my model in the repository.  Note that I will have a consistency problem here, though, because of the extraneous quotation marks delimiting my state values.  In the new record for the University of North Carolina, the state will be interpreted as "North Carolina", with quotes, whereas the current database has the quote-less version.  The remedy, of course, is to not gloss over the need for tweaking the grammar as I did a few paragraphs ago 😉



This posting turned out to be much more of a tome than I expected, and we haven’t even talked about how to ‘execute’ the model!  If you’re left feeling a bit confused, wondering how and why all these steps are a good thing, don’t despair.  The “Oslo” concept is really in its nascent stages, and we can expect the tooling and methodologies to be refined and streamlined as the technology matures.  If you like to be on the bleeding edge, you may already have tinkered with Oslo and its related technologies; if not, and you have ‘real work’ to do, just put it on your radar and check back from time to time to keep your finger on the pulse of where Microsoft is going with this model-driven development paradigm.

To further whet your appetite, here’s some more information on Oslo and M:

