Intrinsic Database Management with Database Projects (Chris Rummel)

One of the new LightSwitch features in the Visual Studio 2013 Preview is support for SQL Server Data Tools projects (here’s the MSDN SSDT documentation and their blog).  When you add a database project to your LightSwitch project, its contents are incorporated into the intrinsic database that LightSwitch deploys when you F5 or publish your application.  This enables a few interesting scenarios in your LightSwitch projects.  To get started, right-click on your LightSwitch solution, select “Add > New Project”, and choose the SQL Server Database Project template.

[Screenshots: Add New Project dialog with the SQL Server Database Project template]

Then you can add items to the database project – start with a post-deployment script.  This script runs as the last step in creating the intrinsic database, so your tables will already exist when it runs.  The last step in using the database project is to let LightSwitch know about it: open the LightSwitch project properties and select your database project.

[Screenshot: selecting the database project in the LightSwitch project properties]

Note that you won’t be able to select a database project if you don’t have any intrinsic tables, so if you’re trying this out in a new project you’ll have to create a table first.

Test Data for Your Development Database

During development you probably want some data to use in your application.  With a database project, you can load test data en masse instead of creating it by hand.  Another nice aspect is that the database project can be checked in, so you can share sample data with others on your team.  To start out, create an entity called “Person” with FirstName, LastName, and LastBlogPost properties:

[Screenshot: the Person entity in the table designer]
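For reference, in the intrinsic database this entity becomes a People table, which is what the post-deployment script below writes to.  A rough sketch of that table (LightSwitch generates the actual schema, so the exact column types and constraints may differ):

    -- Sketch of the LightSwitch-generated People table targeted by the script below
    -- (the real generated schema may differ in details such as string lengths).
    CREATE TABLE [dbo].[People] (
        [Id]           INT            IDENTITY (1, 1) NOT NULL PRIMARY KEY,
        [FirstName]    NVARCHAR (255) NOT NULL,
        [LastName]     NVARCHAR (255) NOT NULL,
        [LastBlogPost] DATETIME       NOT NULL
    );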

Then back in the post-deployment script, you can add this code:

    SET IDENTITY_INSERT People ON;

    MERGE INTO People AS Target
    USING (VALUES
      (1, 'Chris', 'Rummel', '2013-03-28T08:01:00'),
      (2, 'Matt', 'Evans', '2013-04-15T11:44:00'),
      (3, 'Andy', 'Kung', '2013-07-01T09:00:00'),
      (4, 'Brian', 'Moore', '2013-05-14T08:07:00'),
      (5, 'Matt', 'Sampson', '2013-05-03T06:07:00'),
      (6, 'Steve', 'Lasker', '2013-04-22T09:38:00'),
      (7, 'Heinrich', 'Wendel', '2013-04-10T08:13:00')
    )
    AS Source (Id, FirstName, LastName, LastBlogPost)
    ON Target.Id = Source.Id
    -- update matched rows
    WHEN MATCHED THEN
      UPDATE SET FirstName = Source.FirstName, LastName = Source.LastName, LastBlogPost = Source.LastBlogPost
    -- insert new rows
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (Id, FirstName, LastName, LastBlogPost)
      VALUES (Id, FirstName, LastName, LastBlogPost)
    -- delete rows that are in the target but not the source
    WHEN NOT MATCHED BY SOURCE THEN
      DELETE;

    SET IDENTITY_INSERT People OFF;

    GO


Next add a BrowsePeople screen.  Then when you F5, you’ll see the data from the script.

[Screenshot: the BrowsePeople screen showing the data from the script]

Using MERGE INTO with these match options, the development database is reset to the same state each time the post-deployment script runs (see also the documentation on MERGE).  If we used INSERT instead, we could end up with duplicate rows, which isn’t what we want in this case – though it might fit your application.  One thing to note is that the post-deployment script isn’t run every time you start debugging.  After the first time, it only runs when the intrinsic database changes – for example, when you change an entity in the designer, change authentication settings, or edit a file in the database project.  If you want to force the script to run again, you can dirty a file just by adding whitespace or a comment and saving.
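If MERGE feels like overkill for a handful of rows, a guarded INSERT is another way to keep reruns from duplicating data – a minimal sketch, with the caveat that unlike the MERGE above it won’t refresh or remove existing rows:

    -- Seed the row only if it isn't there yet; reruns are then harmless,
    -- but existing rows are left untouched (unlike the MERGE above).
    IF NOT EXISTS (SELECT 1 FROM People WHERE Id = 1)
    BEGIN
        SET IDENTITY_INSERT People ON;
        INSERT INTO People (Id, FirstName, LastName, LastBlogPost)
        VALUES (1, 'Chris', 'Rummel', '2013-03-28T08:01:00');
        SET IDENTITY_INSERT People OFF;
    END
    GO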

SSDT also has a blog post on Including Data in a SQL Server Database Project with more details.

Conversion of Existing Data

There are also a couple of things you can do if you already have data that you want to update.  One case where you might want to do this is moving a choice list into the database.  I created an Orders table that has a string choice list for the order status:

[Screenshots: the Orders table and its Status choice list]

Now we want to change the order status to an integer foreign key into an OrderStatus table, but we’d like to preserve the existing data.  Let’s set up the OrderStatus table using our database project:

[Screenshot: the OrderStatus entity]

    SET IDENTITY_INSERT OrderStatusSet ON;

    MERGE INTO OrderStatusSet AS Target
    USING (VALUES
      (1, 'Received'),
      (2, 'Payment Cleared'),
      (3, 'Packed'),
      (4, 'Shipped'),
      (5, 'Delivered'),
      (6, 'Backordered')
    )
    AS Source (Id, StatusText)
    ON Target.Id = Source.Id
    -- update matched rows
    WHEN MATCHED THEN
      UPDATE SET StatusText = Source.StatusText
    -- insert new rows
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (Id, StatusText)
      VALUES (Id, StatusText)
    -- delete rows that are in the target but not the source
    WHEN NOT MATCHED BY SOURCE THEN
      DELETE;

    SET IDENTITY_INSERT OrderStatusSet OFF;

    GO

And then we can rename the old status column and add our OrderStatus relation:

[Screenshot: the Orders table with the renamed Status_OLD column and the OrderStatus relationship]

Then we can add the code that does the conversion to our post-deployment script:

    UPDATE Orders SET Order_OrderStatus = OrderStatusSet.Id
    FROM Orders
    JOIN OrderStatusSet ON Orders.Status_OLD = OrderStatusSet.StatusText

    GO
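If you’d like to sanity-check the conversion before building a screen, a quick query along these lines (a sketch using the column names from the scripts above) shows the old text status next to the new foreign key:

    -- Compare the old text status with the newly populated foreign key.
    SELECT o.Id, o.Status_OLD, o.Order_OrderStatus, s.StatusText
    FROM Orders AS o
    LEFT JOIN OrderStatusSet AS s ON o.Order_OrderStatus = s.Id;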

Next we can add a BrowseOrders screen and see that we now have a foreign-keyed OrderStatus column that matches the old Status column:

[Screenshot: the BrowseOrders screen with the new OrderStatus column]

Now we can delete the old status column but still have all our old data.

Build Scripts

Another way to use SQL scripts is with a build script.  Build scripts are included as part of the build of the database project and are parsed during database deployment rather than executed directly.  Add a build script to your database project:

[Screenshot: adding a build script to the database project]

Let’s say you have a query on Orders that filters by OrderStatus, and you’ve done some SQL profiling and this query is taking up a large chunk of time.  One possible performance improvement you can make is adding an index to the OrderStatus column.  Normally you could do this by checking the “Include in Unique Index” box in the entity properties, but OrderStatus isn’t unique in the table, so this won’t work.  Luckily we can use the database project to add a non-unique index to the column.  All you need is one line in your build script:

    CREATE INDEX OrderStatusIndex ON Orders(Order_OrderStatus)

If you used this line in your post-deploy script, it would cause an error after the first time you F5 or deploy the project, because the index would already exist.  But because the build script is parsed and compared against the existing schema instead of just executed, we won’t have that problem here.
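For comparison, if you did want to create the index from a post-deploy script instead, you’d have to make it idempotent yourself – roughly like this sketch:

    -- Post-deploy version: guard against the index already existing on later deployments.
    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE name = 'OrderStatusIndex'
                     AND object_id = OBJECT_ID('Orders'))
    BEGIN
        CREATE INDEX OrderStatusIndex ON Orders(Order_OrderStatus);
    END
    GO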

Database projects also have pre-deploy scripts, which run as the first step of the database deployment and can similarly be used to help resolve schema changes.  Another thing pre- and post-deploy scripts are useful for is type conversions that you want to handle differently from the LightSwitch default – today, you could use this to narrow a column from integer to short but set any entry over the short maximum value to zero, for instance.  Future LightSwitch features might make this even more useful, so stay tuned.
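As a concrete sketch of that last case, a pre-deployment script could clamp out-of-range values before the column is narrowed (Quantity here is a hypothetical integer column being changed to short):

    -- Hypothetical example: Quantity is being narrowed from INT to SMALLINT.
    -- Zero out anything over the SMALLINT maximum so the narrowing succeeds.
    UPDATE Orders
    SET Quantity = 0
    WHERE Quantity > 32767;
    GO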

So that’s some of the cool stuff you can do with database projects.  Let us know on the forums how you’re using database projects or if you run into any problems.

Thanks,
Chris Rummel, LightSwitch SDET