D3: Release 0.0010

OK folks, after way too much delay, I’ve finally gotten all my ducks in a row and begun the process of rewriting DPMud.  Since my goal is to completely rebuild it from scratch, the process is going to take a while, and I intend to let you all look over my shoulder a bit as I go.  So don’t expect too much from this first release—all we have is a basic solution with a very simple model and some unit tests.  You can download the release from: d3-0.0010

First, let’s take a look at a few decisions that came up along the way, and then we’ll dig into some specific code areas.

Decisions

  • Source Control and Licensing: It is very important to me to make this project as relevant and useful as possible for folks trying to build applications with the EF, so we decided to release the code under the MS-PL, which is a very permissive license.  I’m no lawyer, but my understanding is that this basically means you can copy the source and use any part or all of it in your own projects.  Originally my hope was to put the project on CodePlex and use it for source control, so that anyone interested could enlist and sync changes down to their local machine as the project progresses, but there were various hang-ups with that approach.  In the end I decided to use local source control and periodically push releases up to Code Gallery, which provides a simple project site and automatically licenses everything under the MS-PL.
     
  • Version Numbers: One of those mundane but necessary decisions along the way is how to create version numbers for the releases.  Here’s what I decided: until we have the thing up and running, everything is version 0.something.  The last four digits of the version are in the form MMDD, where MM is the number of months since I started releasing (so 00 is June ‘09) and DD is the day of the month for that release.  I haven’t done it yet, but this gives me a very simple algorithm to automate version stamping and such as I make subsequent releases (see the sketch just after this list).
     
  • EF Versions and Dependencies: One critical set of decisions was around which EF features to take dependencies on, particularly while EF4 is not yet complete.  In the end I decided that the main principle is to avoid depending on EF features that aren’t yet available outside the team, so that you can keep up with me if you like.  This means, for instance, that I’m not going to take a dependency on foreign-key support even though I have access to an internal build where it’s working great.  When FKs are publicly available, we’ll look at the project again and evaluate whether it’s worthwhile to refactor the app to take advantage of them (almost certainly we will).  Similarly, I’m not yet going to take a dependency on self-tracking entities because they aren’t available yet.  They will become available much sooner than FKs, though, so I plan on taking advantage of them soon.  A harder question was code-only.  The first CTP of it will also be available soon (in the same download as self-tracking entities), but I already know of some areas where I’ll need more control over model and database generation than code-only will support in its first CTP, so we’ll have to wait for a later CTP before doing that refactoring.  So the basic approach will be to use EF4 Beta 1 with model-first, which makes it efficient to design the model and generate the database and code from it.
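
As promised in the version-numbers note above, here’s a minimal sketch of what that version-stamping algorithm could look like.  Nothing like this is in the project yet, and the names are made up; it’s only here to illustrate the scheme:

using System;

internal static class VersionStamp
{
    // June 2009 counts as month 00 in the scheme described above.
    private static readonly DateTime FirstReleaseMonth = new DateTime(2009, 6, 1);

    internal static string ForDate(DateTime releaseDate)
    {
        int monthsSinceStart = ((releaseDate.Year - FirstReleaseMonth.Year) * 12)
                               + (releaseDate.Month - FirstReleaseMonth.Month);

        // Everything stays at major version 0 until the game is up and running.
        return string.Format("0.{0:00}{1:00}", monthsSinceStart, releaseDate.Day);
    }
}

So VersionStamp.ForDate(new DateTime(2009, 6, 10)) produces "0.0010", which is this release.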

Initial Model

Like every other part of the system, we’re going to start with a simple model and expand.  Here’s a first look:

[D3ModelDiagram]

A few things to notice:

  • Each entity has an Id property which is a server-generated integer.  
     
  • Each of these entities is a “real” thing which players will interact with in the virtual environment of the game, and as such we have given them all a few common properties, including Name and Description.  Eventually we’ll define an interface for these common properties so that we can reason about what these objects have in common (a sketch of what that might look like appears just after this list).  We could have a common abstract base class for these entities instead, but we want each of them in its own entity set, and the designer doesn’t support more than one entity set with the same base type, so using an interface gets us the key aspects we want while still allowing us to use the designer (we don’t really need implementation inheritance here anyway).
     
  • The two associations between Room and Exit have each had one of their navigation properties removed, so that Room has only a single collection of Exits (it doesn’t also have a collection of entrances, so to speak), and Exit has only the TargetRoom navigation property.  This makes our entities follow the intended model, where exits go only one way.  If you want two rooms to be connected symmetrically, you need two exits: one going from room A to room B, and another going back from room B to room A.  This is important because it means we can easily give different names to the exits (from room A you go East to room B, but from room B you have to go West to get back to room A), and it means that paths aren’t always symmetrical (when you jump off a cliff you can’t necessarily get back to the top the same way you came down, etc.).
     
  • We don’t yet have version fields for concurrency checking.  That will come later, because it requires some additional customization to the database generation.
     
  • Item has a relationship with Actor and another with Room, but logically an item can only be related to one of them at a time (and it must always be related to one or the other, so that every item has a location).  We will represent these constraints in the database rather than in the conceptual model.  They aren’t enforced yet, but they will be once we make additional customizations to database generation.
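
As mentioned above, the plan is to eventually pull the common properties into an interface.  Here’s a minimal sketch, assuming illustrative names (nothing like this is in the release yet):

public interface IGameObject
{
    int Id { get; }
    string Name { get; set; }
    string Description { get; set; }
}

// The designer-generated entity classes are partial, so each entity can pick up
// the interface in a separate hand-written file without touching generated code.
public partial class Room : IGameObject { }
public partial class Exit : IGameObject { }
public partial class Actor : IGameObject { }
public partial class Item : IGameObject { }

With something like that in place, game logic can treat any of these objects as a named, describable thing without caring about the concrete type.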

Customizing “Model First”

The feature we call model first shows up in the entity designer (aka Escher) as the option “Generate Database Script from Model” on the context menu when you right-click on an open part of the designer surface.  Out of the box this generates a SQL DDL script that you can use to create a database for persisting the entities in your model.  It also automatically generates the SSDL and MSL which correspond to that model.  This is great because it means I can concentrate on the thing that matters to my program: the conceptual model.  If I make a change to the model, once that change is complete, I’m done; I don’t also have to modify the database and the mapping to reflect that change.  Since I’m also generating entity classes from the model, I get a nice “DRY” (Don’t Repeat Yourself) development experience: even though various parts of the system do contain repetition, I don’t have to engage in that repetition myself.  Naturally we want an even more DRY experience, and code-only will lead us there for folks who want to do everything in code, but as a design-time experience this is not half bad.

Naturally, though, there are times when I want more control over the database generation process, which is why there are built-in customization hooks.  The process of generating the database is orchestrated by a customizable workflow with a couple of activities: one generates the MSL and SSDL for the CSDL, and another generates SQL from that SSDL.  The second step (SSDL -> SQL) is further customizable because it is accomplished with a T4 template.  Eventually we will take advantage of these customization mechanisms to automatically generate rowversion columns for concurrency, constraints that keep item relationships straight, and other things.

All of this seemed somewhat academic, though, for the first few phases of the project, until I started thinking through my testing strategy.  Eventually we’ll test parts of the system using fakes, mocks, and the like, but we also need integration tests that send data all the way down to the database and back, and I wanted to start writing those tests right away and deliver them with my initial stab at the model.  To make that work I really wanted APIs for creating and dropping the database, somewhat like the ones you get from L2S, so that my tests can create a temporary version of the database (separate storage but the same schema) and exercise pushing data in and out of it.  Eventually this will also be useful for deployment, import/export of data, version-to-version upgrades, and the like.

So, I decided to customize the db generation process now by modifying the SSDL->SQL template to generate a C# method for creating the database schema rather than a SQL script.  The process to set things up for customization looks like this:

  1. I copied the workflow XAML file from %ProgramFiles%\Microsoft Visual Studio 10.0\Extensions\EntityFrameworkTools\Workflows\DbGen.xaml and the SQL generation template from %ProgramFiles%\Microsoft Visual Studio 10.0\Extensions\EntityFrameworkTools\Templates\SsdlToSql10.tt into my project. 
  2. I renamed the template to SsdlToCode.tt and modified the XAML file’s TemplatePath to have just that name rather than the full path to the original template. 
  3. I added both files to my project in VS, but because they aren’t actually used at compile time (they are only used by the designer when I choose the menu option to generate the database), I made sure the Custom Tool property for each of them is blank and the Build Action property is set to None.
  4. I clicked on a blank part of the Entity Designer surface and set the “Generate Database Script Workflow” property to just DbGen.xaml (just the file name, because the XAML file is in the same project and directory as the EDMX).

Now when I choose the option to generate the database, it runs the workflow specified in my project, which in turn uses the template from my project rather than the default versions.

Next, I modified the template file to output C# code rather than SQL.  You can find the template file in the release and diff it against the default template yourself, but the basic approach was to wrap each batch of SQL statements (everything up to a “GO”) in an ADO.NET command, using a C# @"" (verbatim) string, and execute it.  I eliminated the chunk of the template that would drop existing tables, because my approach is to drop the whole database and recreate it, and I put all of these execute statements into a method on an internal partial class that I added to my model DLL.  That way most of the class is defined outside the template, while this one method is defined by the code generated from the template.  I then created a partial class file for my generated EF context and added methods to check for the existence of the database, drop the database, and create it (using the generated method).
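
To give a feel for the output without reproducing the whole template, here is a rough sketch of the kind of method the customized template emits.  The real generated code is in the release; the class name, method name, and DDL below are only illustrative:

using System.Data.SqlClient;

internal partial class SqlDb
{
    // One ADO.NET command per "GO"-delimited batch from the original script,
    // each batch wrapped in a C# verbatim string.
    internal void CreateDatabaseObjects(SqlConnection connection)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = @"
CREATE TABLE [dbo].[Rooms] (
    [Id] int IDENTITY(1,1) NOT NULL,
    [Name] nvarchar(max) NOT NULL,
    [Description] nvarchar(max) NOT NULL
);";
            command.ExecuteNonQuery();

            // ...one more command per remaining batch in the script...
        }
    }
}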

One current limitation in the designer is that the generate-database command always creates a file whose name is based on the name of the EDMX file, with .sql for the extension.  I’ve already started discussions with the designer folks about adding some customization hooks here, but for now you just have to remember to rename the file each time you generate the database.

All of this enabled me to write the following simple test:

[TestMethod]
public void DropAndCreate()
{
    using (var ctx = new D3Context("name=D3TestContext"))
    {
        ctx.DropDatabase();
        Assert.IsFalse(ctx.DatabaseExists());
        ctx.CreateDatabase();
        Assert.IsTrue(ctx.DatabaseExists());
    }
}

Not only is it simple to perform these operations, but everything is driven by the connection string you give the context.  So I just created two different connection strings in my app.config file: one called D3Context (the default) and another called D3TestContext, which is exactly the same except that it has a different initial catalog pointing at my test database.
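
For reference, the two entries look roughly like this in app.config.  The metadata resource names and server settings shown here are placeholders; check the release for the real values:

<connectionStrings>
  <add name="D3Context"
       providerName="System.Data.EntityClient"
       connectionString="metadata=res://*/D3Model.csdl|res://*/D3Model.ssdl|res://*/D3Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=D3;Integrated Security=True&quot;" />
  <add name="D3TestContext"
       providerName="System.Data.EntityClient"
       connectionString="metadata=res://*/D3Model.csdl|res://*/D3Model.ssdl|res://*/D3Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=D3Test;Integrated Security=True&quot;" />
</connectionStrings>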

The SqlDb Internal Class

It’s probably also worthwhile to take a brief look at the internal class I called SqlDb, which does the heavy lifting for these database interactions.  The first interesting piece of the puzzle is that in order to create or drop a database with SQL Server (or in this case SQL Express), we need to make sure we don’t use a connection string whose initial catalog is the database in question.  So we extract the connection string from the StoreConnection property on the EntityClient connection instance created by the context and create a SqlConnectionStringBuilder from it.  SqlDb’s constructor takes a DbConnection parameter, since that’s what ObjectContext exposes, and creates the builder like this:

 var builder = new SqlConnectionStringBuilder(((EntityConnection)entityConnection).StoreConnection.ConnectionString);

We store the InitialCatalog property in a string field so we know the name of the database we should be working with, and then replace it with “master”, since that database is present on every SQL Server instance.

The constructor then extracts the new connection string from the builder and uses it to create a new SqlConnection instance.  We use this connection to execute commands against the server without connecting directly to the database we’re creating or dropping.  The class also implements IDisposable so that its Dispose method can dispose of the SqlConnection.

The other interesting trick came up when my initial runs of the above test succeeded while stepping through in the debugger, but running the whole test straight through failed: the first part of the DropDatabase method executes a command to check whether the database exists, and after that check the drop command would fail with an error saying the database was in use.  I eventually fixed this by adding a ClearAllPools method to the class, which closes the connection, calls SqlConnection.ClearAllPools() to make sure that no connections in the connection pool are holding onto the database, and then re-opens the connection.  This method is called after the existence check and before the command to drop the database.
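
Putting those pieces together, here is a hedged sketch of how SqlDb might hang together, based on the description above.  The actual class is in the release; the member names and details here are assumptions:

using System;
using System.Data.Common;
using System.Data.EntityClient;
using System.Data.SqlClient;

internal partial class SqlDb : IDisposable
{
    private readonly string _databaseName;
    private readonly SqlConnection _connection;

    internal SqlDb(DbConnection entityConnection)
    {
        // Pull the store connection string out from under the EntityClient connection.
        var builder = new SqlConnectionStringBuilder(
            ((EntityConnection)entityConnection).StoreConnection.ConnectionString);

        // Remember which database we're managing, then point the connection at
        // master so we can create or drop that database without connecting to it.
        _databaseName = builder.InitialCatalog;
        builder.InitialCatalog = "master";

        _connection = new SqlConnection(builder.ConnectionString);
        _connection.Open();
    }

    // Close the connection, flush the pool, and reconnect so that no pooled
    // connection is still holding the database open when we try to drop it.
    internal void ClearAllPools()
    {
        _connection.Close();
        SqlConnection.ClearAllPools();
        _connection.Open();
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}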

Conclusions

We’re finally off and running!  We have an initial model, a simple way to deploy the database, and some initial tests.  (Those tests have already found a couple of issues in my first crack at the model, by the way, so the testing is already paying off.  Yay!)  Next I’ll probably tackle some further customizations of the generated database.  Soon we’ll look at replacing the default code gen with a POCO template, and in the not-too-distant future I hope to fill out more pieces of the architecture so we can get a small end-to-end slice going with a WCF service and Silverlight, but all in good time…

- Danny