The Anatomy of a LightSwitch Application Part 3 – the Logic Tier

A running LightSwitch application consists of three tiers: presentation, logic, and data storage. In the prior post we covered the presentation tier. In this post we take a deeper look at the logic tier.

As a recap, the presentation tier is a Silverlight client application that can run as a desktop application (out-of-browser) or as a browser-hosted application. LightSwitch uses a 3-tier architecture in which the client sends all requests to fetch and update data through the middle tier.

The primary job of the logic tier is data access and data processing. It is the intermediary between the client and each of the data sources that have been added to the application. LightSwitch creates a data service in the logic tier for each data source. Each data service exposes a set of related entities via entity sets. There are query operations for fetching entities from the entity sets, and a single submit operation for sending added, modified, and deleted entities for processing.

Each data service talks to its corresponding data source via a data source provider. These providers are implemented by LightSwitch. The LightSwitch developer doesn’t need to know about the underlying data access technologies employed in order to work with data or to write business logic.

Let’s dig into the details of data services, business logic, query operations, and submit operations. We’ll also look at some implementation details: data providers and transaction management.

Data Services

A data service encapsulates all access to a data source. A LightSwitch logic tier hosts any number of data services which are exposed as public service endpoints at the service boundary. Each data service exposes a number of queryable entity sets with operations for querying entities and an operation for submitting a change-set of added, updated and deleted entities. An entity set is a logical container for entities of the same entity type. Operations that fetch entities always fetch them from a given entity set. Likewise, operations that add, update, or delete entities update them through a given entity set.

If this terminology is new to you, you can think of an entity set as analogous to a SQL table. An entity instance is then analogous to a SQL row, and the properties of the entity type match the SQL columns. (Note that these are not equivalent concepts, but will help ground you in something familiar.)

At a more formal architectural level, LightSwitch chose to follow the Entity Data Model (EDM) for defining our data services. You can think of a LightSwitch data service as implementing an entity container having entity types, entity sets, association types, and association sets. We do not yet support complex types or function imports, which are also part of the EDM. The LightSwitch developer needn't be bothered with all these concepts. The LightSwitch tool makes importing or defining entity types and relationships straightforward, and LightSwitch takes care of the rest.

To get a better understanding of the structure of a data service, let's look at an example. The following diagram shows an example Northwind data service with two entity sets, Customers and Orders. Each entity set has the default All and Single queries, and Customers has an additional modeled query that selects the active customers. The service also has the default SaveChanges operation.

[Diagram: the Northwind data service with its entity sets, queries, and SaveChanges operation]

Let’s now look at this in more detail. We’ll go over query operations, submit operations, custom operations, and how custom business logic is associated with the entity sets and operations.

Query Operations

A query operation requests a set of entities from an entity set in the data service, with optional filtering and ordering applied. Queries can have parameters and can return multiple or single results. A query can define specific filtering and ordering intrinsically. In addition to passing parameters, the caller can pass additional filter and ordering predicates to be performed by the query operation.

For each entity set, LightSwitch provides a default "All" and "Single" query. For example, for a Customers entity set, LightSwitch will generate Customers_All to return all of the customers and Customers_Single to return one customer by key. You can define additional query operations for an entity set, each of which defines its own parameters, filters, and ordering.
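Server-side business logic can invoke these generated queries through the data workspace. As a rough sketch (the northwind data service property and the exact method names here are illustrative, not a verbatim API):

```vb
' Sketch: invoking the generated default queries from server-side code.
' "northwind" stands in for the data service property on the data workspace.
Dim allCustomers = northwind.Customers_All()       ' every entity in the set
Dim oneCustomer = northwind.Customers_Single(42)   ' one entity, fetched by key
```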

LightSwitch queries are composable, so new queries can be based on an existing one. For example, you can define an ActiveCustomers query based on the built-in Customers_All, and then define ActiveCustomersByRegion(string region) based on the ActiveCustomers query.

Query operations can have parameters that are used in the filter (where) clauses. LightSwitch also supports optional parameters, which are a special case of nullable parameters. If the parameter value is null at runtime, LightSwitch omits the query clause that uses it. Optional parameters are useful in building up screens where the end-user may or may not provide certain filter criteria. For example, say you are defining a search query for Customers. You want to return all the customers, but if the "active" parameter is specified, you return only the active or inactive customers. You define the filter clause in the designer as follows, designating the parameter as an optional Boolean type.

[Screenshot: the query designer filter clause with an optional "active" parameter]

LightSwitch generates a nullable Boolean parameter and interprets the where clause as follows:

where (!active.HasValue || customer.IsActive == active.Value)

A query can be designated as a singleton query by specifying "One" as the number of entities to return. A singleton query operation returns just one entity or null, and the query is not further composable. The runtime behavior matches LINQ's SingleOrDefault: it throws an exception at runtime if the query returns more than one record.

Although a LightSwitch query operation can define filtering and ordering within the query, a client can compose additional query operators at the call site. For example, say you've defined the ActiveCustomers query, which returns only active customers ordered by customer ID. Any LightSwitch business logic can then use LINQ to invoke this query and pass additional filters or ordering.

Dim query = From c As Customer In northwind.ActiveCustomers Where c.State = "WA" Order By c.ZipCode

Note that the additional Where and Order By clauses get executed in the data service—not on the client. Not all of the IQueryable LINQ operators are supported—only those that we can serialize over WCF RIA Services. (See IDataServiceQueryable in the LightSwitch runtime API.)

During execution of the query, the operation passes through the query pipeline, during which you can inject custom server-side code. Here is a quick summary of the query pipeline.

1. Pre-processing

a. CanExecute – called to determine if this operation may be called or not
b. Executing – called before the query is processed
c. Pre-process query expression – provides the base query expression

2. Execution – LightSwitch passes the query expression to the data provider for execution

3. Post-processing

a. Executed – after the query is processed but before returning the results
b. OR ExecuteFailed – if the query operation failed

CanExecute, Executing, Pre-process query expression, Executed, and ExecuteFailed are the points in the pipeline where you can customize the query by adding custom code. In CanExecute, you can test user permissions to access the operation, or any other logic to determine if the operation is available. (LightSwitch also checks the CanRead status for the underlying entity set.) In the Executing phase, you can modify the transaction scope semantics. This is very advanced and typically not necessary. During the pre-processing phase, you can append additional query operators using LINQ. This is helpful because some data providers support more complex query expressions than can be modeled in LightSwitch or serialized from the client.
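To make the interception points concrete, here is a sketch of what the CanExecute and pre-process hooks might look like for a query named ActiveCustomers. The method-name pattern follows LightSwitch's QueryName_Hook convention; the Permissions.ViewCustomers permission and IsArchived property are hypothetical, and the exact signatures should be treated as illustrative:

```vb
' Sketch: query-pipeline interception methods for an ActiveCustomers query.
Private Sub ActiveCustomers_CanExecute(ByRef result As Boolean)
    ' Gate the operation, for example on a permission check.
    result = Me.Application.User.HasPermission(Permissions.ViewCustomers)
End Sub

Private Sub ActiveCustomers_PreprocessQuery(ByRef query As IQueryable(Of Customer))
    ' Append additional LINQ operators to the base query expression
    ' before LightSwitch hands it to the data provider.
    query = query.Where(Function(c) Not c.IsArchived)
End Sub
```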

During the Execution phase, LightSwitch transforms the query expression into one that can be used directly by the given data provider. In some cases this involves transforming the data types used from LightSwitch specific entity types to entity types used by the underlying provider. More on this in the section below on data providers.

Submit Operations

Each data service has a single built-in operation called SaveChanges. This operation sends changed entities from the client to the data service for processing.

SaveChanges operates on a change set. The client’s data workspace tracks one change set per data service. (See data workspace in prior post.) The change set includes all of the locally added, updated and deleted entities for that data service. The change set is serialized, sent to the data service, and is deserialized into a server-side data workspace. (Note that locally changed entities associated with a different data source are not included in the change set.) Execution in the data service follows the submit pipeline, which looks like this.

1. Pre-processing

a. SaveChanges_CanExecute – called to determine whether this operation is available or not
b. SaveChanges_Executing – called before the operation is processed

2. Process modified entities

a. Entity Validation – the common property validation is called for each modified entity (the same validation code that runs on the client)
b. EntitySet_Validate – called for each modified entity
c. EntitySet_Inserting – called for each added entity
d. EntitySet_Updating – called for each updated entity
e. EntitySet_Deleting – called for each deleted entity

3. Execution – LightSwitch passes all of the changes to the underlying data provider for processing

4. Post-process modified entities

a. EntitySet_Inserted
b. EntitySet_Updated
c. EntitySet_Deleted

5. Post-processing

a. SaveChanges_Executed
b. OR SaveChanges_ExecuteFailed

Each modified entity in the change set gets processed at least once through the save pipeline.

If the validation phase fails for any entity, the pipeline stops and returns the error to the client. If the business logic in the Inserting, Updating, or Deleting phases makes additional changes, LightSwitch ensures that any newly added, modified, or deleted entities also pass through the pipeline. This ensures that business logic is applied uniformly to all entities and entity sets.
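As a sketch of what the per-entity-set hooks in steps 2b–2c might look like for a Customers entity set (the CreditLimit and CreatedBy properties are hypothetical, and the exact signatures should be treated as illustrative):

```vb
' Sketch: submit-pipeline interception methods for the Customers entity set.
Private Sub Customers_Validate(entity As Customer,
                               results As EntitySetValidationResultsBuilder)
    ' Server-side validation, run for each modified entity.
    If entity.CreditLimit < 0 Then
        results.AddEntityError("Credit limit cannot be negative.")
    End If
End Sub

Private Sub Customers_Inserting(entity As Customer)
    ' Runs for each added entity, before the provider executes the insert.
    entity.CreatedBy = Me.Application.User.Name
End Sub
```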

In addition to the pipeline functions above, LightSwitch evaluates the per-entity-set CanRead, CanInsert, CanUpdate, and CanDelete methods for entity sets affected by the change set. LightSwitch rejects the change set if a requested change to an entity set is not currently allowed.
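These access-control checks follow the same naming pattern. A minimal sketch, with a hypothetical ManageCustomers permission:

```vb
' Sketch: per-entity-set access control for deletes on Customers.
Private Sub Customers_CanDelete(ByRef result As Boolean)
    result = Me.Application.User.HasPermission(Permissions.ManageCustomers)
End Sub
```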

If the processing succeeds, any modified entities are serialized back to the client. In this way, the client can get updated IDs, or observe any other changes made by the data service code.

LightSwitch uses optimistic concurrency for updates. If a concurrency violation occurs, the concurrency error is returned to the client, which can inspect the proposed, current, and server values.

Custom Operations (future)

Many real-world scenarios require general purpose operations that cannot be classified as either an entity query or a submit operation. These operations may be data-intensive, long-running, or require access to data or services that are not otherwise accessible from the client.

Although custom operations are definitely part of the LightSwitch architectural roadmap, they did not make it into the first release. This was a difficult trade-off. I'm writing this now to assure you that we haven't forgotten its importance, and that we intend to publish some workarounds that are possible in v1.

Transaction Management

Transactions in the data service are scoped per data workspace. Each operation invocation gets its own data workspace instance and, in the default case, each data workspace uses its own independent transaction and its own DB connection. 

If an ambient transaction is available, the data workspace will use it and compose with other transactions within that transaction scope, but by default, there is no ambient transaction. The default isolation level for query operations is IsolationLevel.ReadCommitted. The default isolation level for submit operations is IsolationLevel.RepeatableRead. In neither case is DTC used by default.

The LightSwitch developer can override the default transaction scope, including enrollment in a distributed transaction, by creating a transaction scope in the "Executing" phase of the pipeline and then committing it in the "Executed" phase.
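A sketch of that pattern for the submit pipeline, using System.Transactions; the isolation level shown is just an example, and error handling (disposing the scope in ExecuteFailed) is reduced to a comment:

```vb
' Sketch: replacing the default transaction scope for SaveChanges.
Private currentTransaction As TransactionScope

Private Sub SaveChanges_Executing()
    Dim options As New TransactionOptions With {
        .IsolationLevel = IsolationLevel.Serializable
    }
    ' Created in Executing, so the submit pipeline runs inside this scope.
    currentTransaction = New TransactionScope(TransactionScopeOption.Required, options)
End Sub

Private Sub SaveChanges_Executed()
    currentTransaction.Complete()
    currentTransaction.Dispose()
    ' A matching SaveChanges_ExecuteFailed should also dispose the scope
    ' (without calling Complete) so the transaction rolls back.
End Sub
```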

It is possible to transact changes to multiple different data sources simultaneously, but LightSwitch does not do this by default. We only save changes to a single data service, and thus to a single data source. If you need to save changes to a different data source within the current submit operation, you can create a new data workspace, make the changes there, and call its SaveChanges method. In this case, you can also control the transaction scope used by the new data workspace.
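A sketch of saving to a second data source from within a submit operation; the CreateDataWorkspace call, the LoggingData data service, and its AuditEntries entity set are illustrative names, not a verbatim API:

```vb
' Sketch: writing to a second data source with its own data workspace.
Dim workspace = Me.Application.CreateDataWorkspace()
Dim entry = workspace.LoggingData.AuditEntries.AddNew()
entry.Message = "Order change set processed"
' This SaveChanges runs in the new workspace's own transaction,
' independent of the current submit operation's transaction.
workspace.LoggingData.SaveChanges()
```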

Note that changing the transaction scope within the query pipeline does not work in this release. This is an implementation limitation but may be fixed in a future release.

Data Providers

LightSwitch data workspaces aggregate data from different data sources. A single LightSwitch application can attach to many different kinds of data sources. LightSwitch provides a single developer API for entities, queries, and updates regardless of the kind of data source. Internally, we have implemented three strategies to provide data access from the logic tier to the back-end storage tiers.

ADO.NET Entity Framework for access to SQL Server and SQL Azure
WCF Data Services for access to SharePoint 2010 via the OData protocol
A shim to talk to an in-memory WCF RIA DomainService for extensibility

LightSwitch uses existing .NET data access frameworks as a private implementation detail. We are calling them “data providers” in the abstract architectural sense. LightSwitch does not define a public provider API in the sense of ADO.NET data providers or Entity Framework providers. LightSwitch translates query expressions and marshals between the LightSwitch entity types and those used by the underlying implementation. (For access to SQL using Entity Framework, we use our entity types directly without further marshaling.)

For extensibility, LightSwitch supports calling into a custom RIA DomainService class as a sort of in-memory data adapter. LightSwitch calls the instance directly from our data service implementation to perform query and submit operations. The custom DomainService is not exposed as a public WCF service. (The [EnableClientAccess] attribute should not be applied.) Using this mechanism, a developer can use Visual Studio Professional to create a DomainService class that exposes entity types and implements the prescribed query, insert, update, and delete methods. LightSwitch infers a LightSwitch entity model based on the exposed entity types and infers an entity set based on the presence of a “default” query.
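A minimal sketch of such a DomainService, using the WCF RIA Services types; the ProductService and Product classes and the LoadProducts helper are hypothetical, and only the default query is shown fleshed out:

```vb
' Sketch: a custom RIA DomainService used as an in-memory data adapter.
' Note: no <EnableClientAccess> attribute—it is not exposed as a WCF service.
Imports System.ComponentModel.DataAnnotations
Imports System.ServiceModel.DomainServices.Server

Public Class Product
    <Key>
    Public Property Id As Integer
    Public Property Name As String
End Class

Public Class ProductService
    Inherits DomainService

    ' The "default" query; LightSwitch infers an entity set from its presence.
    <Query(IsDefault:=True)>
    Public Function GetProducts() As IQueryable(Of Product)
        Return LoadProducts()   ' hypothetical helper for the back-end store
    End Function

    Public Sub UpdateProduct(product As Product)
        ' Persist the change however the back end requires.
    End Sub
End Class
```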


Summary

The LightSwitch logic tier hosts one or more data services. The LightSwitch client goes through a data service to access data in a data source, such as a SQL database or a SharePoint site. Data services expose entity sets and operations. Query operations fetch entities from entity sets; the submit operation saves modified entities to entity sets. The data service implements a query pipeline and a submit pipeline in which application-specific business logic runs.

In the next post we’ll take a quick look at the storage tier. This will detail the data sources we support, which storage features of the data sources we support, and how we create and update the data schema for the application-owned SQL database.