One of the frequently asked questions is about the lifetime of a DataContext. Should it be a long-lived, application-scoped object or should it be a short-lived, request-scoped object? Let’s get to the answer by considering the key parameters:
DataContext is ideally suited for a “unit of work” approach: retrieve a set of objects through one or more queries, make changes to the resulting object graph based on user input (databound to controls) or some other request, and then call SubmitChanges(). Where applicable, this can be very efficient, since all the changes in the unit of work are computed at once and the cost of the query (or queries) is amortized over all the CUD operations. This pattern fits a 2-tier app, or an SOA-style app where the service is sufficiently coarse-grained to allow it.
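For concreteness, here is a minimal sketch of the unit-of-work pattern, assuming a hypothetical Northwind-style generated context with a Customers table (the type and member names are illustrative, not from any particular app):

```csharp
// Unit of work: query, mutate, submit — all within one short-lived context.
// NorthwindDataContext and Customer are illustrative generated types.
using (var db = new NorthwindDataContext())
{
    // Retrieve the object graph through a query.
    var customer = db.Customers.Single(c => c.CustomerID == "ALFKI");

    // Make changes based on user input or some other request.
    customer.ContactName = "Maria Anders";

    // All the changes in the unit of work are computed and submitted at once.
    db.SubmitChanges();
}
```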
DataContext is also designed for “stateless” server operation: in ASP.NET apps, it is important to minimize state. Furthermore, the only scalable mechanism for maintaining state is to serialize it, and DataContext is (by design) not serializable. Hence, we spent considerable effort on making DataContext lightweight to construct and dispose. For example, you can use pre-cooked mapping (MappingSource) and cache compiled queries, and then use them with a request-scoped DataContext. Here, even the DataContext instances used for a query and for a CUD (Create, Update, Delete) operation will be different. This is how a web app using LinqDataSource works.
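As a sketch of this request-scoped pattern (again assuming a hypothetical Northwind-style context), a compiled query can be cached in a static field and reused across many short-lived DataContext instances:

```csharp
// Compiled once and cached for the lifetime of the app.
static readonly Func<NorthwindDataContext, string, IQueryable<Customer>>
    CustomersByCity = CompiledQuery.Compile(
        (NorthwindDataContext db, string city) =>
            db.Customers.Where(c => c.City == city));

// Per request: construct a fresh, lightweight DataContext, reuse the
// compiled query, and dispose the context when the request completes.
using (var db = new NorthwindDataContext())
{
    var londoners = CustomersByCity(db, "London").ToList();
}
```

The expensive part — translating the query expression to SQL — is paid once at Compile time, which is why the per-request DataContext can stay cheap.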
With the two “patterns” in mind, let’s look at some caveats, if not antipatterns (YMMV):
Long-lived usage: DataContext does not itself overwrite objects once you retrieve them through queries. So as time passes, the retrieved objects can become stale if they are frequently changed in the database. Hence, the longer the elapsed time since the query (or queries), the greater the chance of running into an optimistic concurrency exception when you eventually call SubmitChanges(). Of course, how long is too long depends entirely on the characteristics of your data and application. This caveat is not very relevant for reference data that is infrequently updated.
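When staleness does bite, it surfaces as a ChangeConflictException. A hedged sketch of handling it follows (the resolution policy shown — overwriting in-memory values with database values — is just one option, and the types are illustrative):

```csharp
using (var db = new NorthwindDataContext())
{
    var product = db.Products.Single(p => p.ProductID == 1);
    product.UnitPrice += 1m; // may conflict if another user updated the row

    try
    {
        // Process all changes and collect every conflict, not just the first.
        db.SubmitChanges(ConflictMode.ContinueOnConflict);
    }
    catch (ChangeConflictException)
    {
        // Replace the stale in-memory values with current database values,
        // then let the user review and retry.
        db.ChangeConflicts.ResolveAll(RefreshMode.OverwriteCurrentValues);
    }
}
```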
Life after SubmitChanges(): DataContext can be used after SubmitChanges(), but one has to be careful. SubmitChanges() does all the hard work of figuring out the changes you have made to the object graph. It orders the CUD operations for you and performs an optimistic concurrency check at the granularity of each changed object. However, by design, it does nothing about the objects that you have only read but not changed. If those objects have changed in the database, then you have stale data that cannot be easily refreshed. Most applications can tolerate submitting changes without a check of the “read-set”. However, as time passes, the staleness of the “read-set” can become a problem.
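If you do keep a DataContext alive past SubmitChanges(), Refresh() can re-fetch specific objects on demand; note that it only targets the objects you pass in, not the whole “read-set”. A sketch with illustrative types:

```csharp
using (var db = new NorthwindDataContext())
{
    var order = db.Orders.Single(o => o.OrderID == 10248);

    // ... other work, SubmitChanges(), time passes; 'order' may now be stale ...

    // Explicitly overwrite the in-memory copy with current database values.
    db.Refresh(RefreshMode.OverwriteCurrentValues, order);
}
```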
In a nutshell,
- It is better to err on the side of a shorter lifetime – a unit of work, or even a single request for stateless servers, is a good pattern to start with.
- If you have reference data that doesn’t get stale, then by all means consider using a long-lived DataContext instance. Again, what counts as a “long life” will depend on your app.
- The dominant cost is likely to be the queries rather than the creation of a new DataContext instance. So use compiled queries and see if you can keep the reference data around. Don’t sweat the overhead of creating a DataContext instance for making a set of changes unless you have hard data from your app indicating that it is an issue.
- If you want to use a DataContext instance for a long time or for more than one SubmitChanges(), take the time to understand the semantics (described above). It is not the best “default” usage.
- Above all, first think about correctness and the acceptable level of “staleness”. Then see if the DataContext lifetime is even on the critical path for perf. Otherwise, the cost of instantiating a DataContext is not terribly relevant. (OK, that is a platitude, but if I had a dollar for every attempt at premature optimization I have seen, I would be a rich man! Yes, even though a dollar buys a lot less these days, premature optimization seems to have become even more plentiful 😉)