ASP.NET 2.0 #7: Data Access

Given the monumental success of Dan Brown’s novel, The Da Vinci Code, I must assume that many of you have read it. For the handful of you who haven’t read it yet, it is a reinterpretation of Western history and sexuality, as well as a theory of the true nature and location of the Holy Grail, all thinly disguised as a fast-paced thriller. We in the programming business are in pursuit of several Holy Grails ourselves, one of which concerns data access.


The vision we have is of being able to read data from our data stores into instances of our own classes, and to bind those instances directly to our user interfaces. What we typically resort to today is almost medieval by comparison. We write considerable code to retrieve data from our data stores, and what that code yields is generic DataSet objects rather than instances of classes representing the concepts of our application's business logic, and it is generally those generic DataSet objects that get bound to the controls on our user interfaces. Here is another way of describing the problem. When we design our applications, we like to identify the key concepts expressed in the requirements, specify classes representing them, and describe how users will work with the solution in terms of operations they will perform on those classes. Yet when it comes to actually building our applications, Visual Studio does little to make any of that easier, and instead encourages us to set our own classes aside and simply push DataSets back and forth between the screens and the database.
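
To make the contrast concrete, here is a rough sketch of what feeding a drop-down list from a DataSet typically looks like in a code-behind class. It is illustrative only: the page class, connection string, and table name are invented for the example, and only the Key and Description fields echo the real example later in this article.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;

public class DocumentTypesPage : Page
{
    // Declared in the page's mark-up.
    protected DropDownList FormTypes;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (IsPostBack) return;

        // Hand-written plumbing that deals entirely in generic rows and columns
        // rather than in classes drawn from our own design.
        DataSet documentTypes = new DataSet();
        using (SqlConnection connection = new SqlConnection(
            "server=(local);database=Saler;integrated security=SSPI"))  // placeholder
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT [Key], [Description] FROM ContractDocumentType",  // assumed table
                connection);
            adapter.Fill(documentTypes, "ContractDocumentType");
        }

        // It is the generic DataSet, not a business object, that gets bound to the control.
        FormTypes.DataSource = documentTypes.Tables["ContractDocumentType"];
        FormTypes.DataValueField = "Key";
        FormTypes.DataTextField = "Description";
        FormTypes.DataBind();
    }
}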


A better infrastructure for our applications would need to provide two things: object-relational mapping and object data binding. Visual Studio 2005 was originally intended to provide both of these facilities, but now provides only one. In a moment, however, we'll show you how you can still have both.


We said that what we need are object-relational mapping and object data binding. What are those?


Well, object-relational mapping, or ORM, as it is customarily known, refers to having a layer in one's application that serializes its objects to and from a relational database. That layer might well decompose a single object into rows in several database tables. Because developing an ORM system is quite a complex undertaking, and because an ORM layer can be built that is general enough to suit most applications, one would typically buy an ORM facility rather than build one, and there are several vendors of ORM facilities.
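
To make the idea concrete, here is a small, hypothetical sketch. The Customer class, the IOrmSession interface, and the repository below are not the API of any actual product; they merely illustrate the division of labor that an ORM layer provides.

using System.Collections.Generic;

// Hypothetical domain class: its addresses might live in a separate table,
// and it is the ORM layer's job to decompose the object into rows in both
// tables when saving, and to reassemble it when loading.
public class Customer
{
    public int Key;
    public string Name;
    public List<string> Addresses = new List<string>();
}

// Hypothetical interface of an ORM layer.
public interface IOrmSession
{
    T Load<T>(object key);      // read rows and build the object graph
    void Save(object entity);   // decompose the object graph back into rows
}

// Application code works purely in terms of its own classes; it never sees
// SQL, DataSets, or table names.
public class CustomerRepository
{
    private readonly IOrmSession session;

    public CustomerRepository(IOrmSession session)
    {
        this.session = session;
    }

    public void Rename(object key, string newName)
    {
        Customer customer = session.Load<Customer>(key);
        customer.Name = newName;
        session.Save(customer);
    }
}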


If that makes the situation with respect to ORM sound straightforward, unfortunately it is anything but. The original Enterprise JavaBeans specification brought ORM into vogue with its concept of container-managed persistence for entity beans, which gave the vendor of an Enterprise JavaBeans implementation the option of incorporating an ORM facility. Then Sun released a specification called Java Data Objects, which defined the interface of an ORM layer intended primarily for use outside of Enterprise JavaBeans solutions. The release of that specification left several existing Java ORM facilities non-standard, including, most notoriously, Castor, which misleadingly called its product Castor JDO. Late last year, Sun said that it would be releasing a new specification to supersede both the JDO specification and the container-managed persistence section of the Enterprise JavaBeans specification. With so much churn in the specifications, it is hard to see the business justification for providing an implementation of the Java ORM specifications.


In the Microsoft world, things have been no less confusing. Microsoft announced that it would be releasing an ORM facility called ObjectSpaces as part of Visual Studio 2005. Then Microsoft said that ObjectSpaces would be delayed, and would be included not in Visual Studio 2005 but in the subsequent version of Visual Studio. Later, in May 2004, Microsoft declared that ObjectSpaces overlapped in its intended functionality with WinFS, a technology originally slated to be part of the Longhorn version of the Windows operating system, and that instead of both technologies being released, a single ORM facility would be released as part of Longhorn. Then, later last year, Microsoft announced that that component would be delayed beyond the release of Longhorn itself. Microsoft has continued to say, though, that it is committed to providing an ORM technology!


So, the history of ORM is about as long and divisive as the history of the Catholic Church that Dan Brown revises in The Da Vinci Code. Happily, a perfectly good ORM facility, provided by IdeaBlade, is already available for use with .NET.

ORM is just half of what we need in order to read data from our data stores into instances of our classes and bind those instances directly to our user interfaces. The other half is object data binding: the ability to bind custom objects to the controls of user interfaces. ASP.NET 2.0 has built-in support for object data binding.

Here are object-relational mapping and object data binding at work together:

This mark-up in our ASPX file uses an object data source that pulls data from a custom type and binds it to a drop-down list:

<asp:Content ID="FormTypesContent" ContentPlaceHolderID="FormTypesPlaceHolder" Runat="Server">
<asp:ObjectDataSource
id="ContractDocumentTypesDataSource"
runat="server"
selectmethod="FindAll"
typename="Saler.Business.ContractDocumentType"
>
</asp:ObjectDataSource>
<asp:dropdownlist id="FormTypes" runat="server" CssClass="Normal" DataSourceID="ContractDocumentTypesDataSource" DataValueField="Key" DataTextField="Description"></asp:dropdownlist>
</asp:Content>

Here is the code in the custom type's FindAll method, which uses IdeaBlade's ORM tool:

public static ArrayList FindAll()
{
    // Ask the IdeaBlade PersistenceManager for every ContractDocumentType entity.
    Data.ContractDocumentType[] documentTypesData =
        (Data.ContractDocumentType[])PersistenceManager.DefaultManager.GetEntities(
            typeof(Data.ContractDocumentType));

    // Wrap each entity in our business-layer ContractDocumentType class.
    ArrayList documentTypes = new ArrayList(documentTypesData.Length);
    foreach (Data.ContractDocumentType documentTypeData in documentTypesData)
    {
        documentTypes.Add(new ContractDocumentType(documentTypeData));
    }

    return documentTypes;
}

All of the data access happens in the single line that calls PersistenceManager.DefaultManager.GetEntities.


Here is a more complex application of the IdeaBlade ORM facility, in which it handles users adding batches of contract documents to the database. Their input is handled by a method that invokes the Add method of the ContractDocumentSet class. If any serial numbers in the batch being added match the serial numbers of documents of the same type already in the database, the list of duplicate serial numbers is returned so that they can be displayed to the user in an error message. So the Add method of the ContractDocumentSet class needs to access the database twice: first, to determine whether there are duplicates, and second, if there are none, to add the new serial numbers to the database. Here is ALL of the code required:

public int[] Add()
{
    PersistenceManager manager = PersistenceManager.DefaultManager;

    // Query for existing documents of the same type whose serial numbers
    // fall within the range of the incoming batch.
    RdbQuery query = new RdbQuery(
        typeof(Data.ContractDocument),
        Data.ContractDocument.ContractDocumentTypeKeyRdbColumn,
        RdbQueryOp.EQ,
        this.documentType.Key);
    query.AddClause(
        Data.ContractDocument.ContractDocumentNumberRdbColumn,
        RdbQueryOp.Between,
        new int[] { this.startingSerialNumber, this.endingSerialNumber });
    Data.ContractDocument[] duplicates =
        (Data.ContractDocument[])manager.GetEntities(query);

    if ((duplicates == null) || (duplicates.Length <= 0))
    {
        // No duplicates: create a new document for every serial number in the batch.
        ContractDocumentStatus status = ContractDocumentStatus.FindByCode("N");
        for (int currentDocumentNumber = startingSerialNumber;
             currentDocumentNumber <= endingSerialNumber;
             currentDocumentNumber++)
        {
            Data.ContractDocument.Create(
                manager,
                this.documentType.Key,
                currentDocumentNumber,
                this.branch.BranchKey,
                status.Key,
                this.additionDate);
        }
        return new int[] { };
    }
    else
    {
        // Duplicates found: return their serial numbers so the caller
        // can report them to the user.
        ArrayList duplicateSerialNumbers = new ArrayList(duplicates.Length);
        foreach (Data.ContractDocument contractDocument in duplicates)
        {
            duplicateSerialNumbers.Add((int)contractDocument.ContractDocumentNumber);
        }
        return (int[])duplicateSerialNumbers.ToArray(typeof(int));
    }
}

It is the initial lines of code, the ones that check for duplicates, that are especially interesting. Consider what the code has to find out from the database: whether, among the contract documents of the same type as those in the incoming batch, any serial numbers fall between the starting and ending serial numbers of the batch. Now let us look at what the code does. It instantiates an IdeaBlade query object, specifying that the query is to retrieve all of the contract documents whose contract document type identifier matches that of the documents the user is attempting to add. The code then adds a clause to the query specifying that only those contract documents are to be retrieved whose serial numbers are equal to or higher than the starting serial number of the batch and equal to or lower than the ending serial number of the batch. The query is then submitted for execution by the IdeaBlade PersistenceManager.

If no contract document objects are returned by the query, then the serial numbers of the batch that the user is inserting do not duplicate the serial numbers of any documents of the same type already in the database, and the new serial numbers can be added to the database. The code that accomplishes that loops through the serial numbers in the batch, simply calling the Create method of the ContractDocument class that was generated automatically by the IdeaBlade Object-Relational Mapping Tool.
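
To round out the picture, here is a sketch of how a page might consume the Add method. Only the Add method and its int[] return value come from the code above; the page class, control names, constructor arguments, and helper methods are assumptions made purely for illustration.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Saler.Business;   // assumed to contain ContractDocumentSet and its related classes

public class AddContractDocumentsPage : Page
{
    // Controls assumed to be declared in the page's mark-up.
    protected TextBox StartingSerialNumber;
    protected TextBox EndingSerialNumber;
    protected Label ErrorLabel;

    protected void AddBatchButton_Click(object sender, EventArgs e)
    {
        // The constructor shown here is an assumption; the article only shows
        // the fields that Add uses (document type, serial number range, branch,
        // and addition date).
        ContractDocumentSet batch = new ContractDocumentSet(
            GetSelectedDocumentType(),
            int.Parse(StartingSerialNumber.Text),
            int.Parse(EndingSerialNumber.Text),
            GetCurrentBranch(),
            DateTime.Today);

        int[] duplicates = batch.Add();

        if (duplicates.Length == 0)
        {
            // The whole batch was added.
            ErrorLabel.Text = String.Empty;
        }
        else
        {
            // Report the clashing serial numbers back to the user.
            string[] numbers = Array.ConvertAll<int, string>(
                duplicates, delegate(int number) { return number.ToString(); });
            ErrorLabel.Text =
                "These serial numbers already exist for this document type: " +
                String.Join(", ", numbers);
        }
    }

    // Hypothetical helpers standing in for however the page tracks the
    // selected document type and the user's branch.
    private ContractDocumentType GetSelectedDocumentType() { return null; }
    private object GetCurrentBranch() { return null; }
}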

What should be apparent from this is that with ASP.NET 2.0's provision for object data binding and IdeaBlade's object-relational mapping facility, it is possible to accomplish two crucially important things. First, we can code our solutions in a way that closely corresponds to how we customarily design them, passing around business objects that reflect our analysis of the users' requirements, instead of translating those objects to and from generic types like .NET's DataSet. Second, we can drastically increase the proportion of our code that is devoted to the business logic unique to the system we are constructing, rather than to the generic problems of connecting to databases and putting data on screens.