Is the Business Tier Dead?

With the PDC in mind, I was experimenting with two of the most broadly advertised
features of Yukon earlier this week: CLR hosting in the database engine, and ObjectSpaces.
If you missed the TechEd announcements and haven't seen these features at VSLive!
or elsewhere, there's some detail in this chat transcript. To summarise briefly,
with the Yukon release, the plan is
to integrate the CLR directly into the database engine, allowing .NET assemblies to
be catalogued in a database. This will allow stored procedures, functions and triggers
to be built using a language such as C# in addition to, or instead of, Transact-SQL.
ObjectSpaces, a technology demonstrated at the last PDC two years ago, will provide
an object persistence framework; with the right code, you'll be able to use
ObjectSpaces to serialise objects directly into one or more tables in the underlying
database.

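To make the first of those a little more concrete, here's a rough sketch of what a stored procedure written in C# might look like. The plumbing follows the Microsoft.SqlServer.Server in-process hosting API; the table and column names are invented for the example, so treat the details as illustrative rather than definitive:

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
    // A stored procedure written in C# rather than T-SQL. The special
    // "context connection" runs in-process against the hosting database,
    // so no network round-trip is involved.
    [SqlProcedure]
    public static void OverdueInvoices()
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT InvoiceId, DueDate FROM Invoices WHERE DueDate < GETDATE()",
                conn);
            // Stream the result set back to the caller, just as a T-SQL proc would.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```

Once the assembly is catalogued in the database, the procedure is invoked like any other stored procedure.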
A long discussion with Eric Nelson, one of my colleagues, and a shorter discussion
with Niels Berglund from DevelopMentor
led to a few initial musings. As ever with a blog, these are my personal opinions
rather than the considered corporate view of Microsoft, and there's a lot of water
to go under the bridge before the release.

At a superficial level, it seems that ObjectSpaces does away with the need for a complex
data access layer (DAL) to a greater or lesser extent. If you can tell an object to
go persist itself into the database, and by providing some extra metadata ensure that
it knows how to go about achieving that, the work of a database tier is generally
complete. Of course, there are plenty of questions that a developer needs to consider,
such as security, the database schema itself, and performance and scalability issues;
but none of these are insurmountable given a good enough supporting infrastructure.
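To illustrate the idea (and only the idea: this is not the ObjectSpaces API itself, whose mapping metadata lives in external files rather than in the class), here's a toy mapper where the "extra metadata" is a custom attribute and the object can generate the SQL needed to persist itself. The `ColumnAttribute`, `Customer` and `TinyMapper` names are all invented for the sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical mapping attribute: this stands in for the external mapping
// metadata a framework like ObjectSpaces would use.
[AttributeUsage(AttributeTargets.Property)]
public class ColumnAttribute : Attribute
{
    public readonly string Name;
    public ColumnAttribute(string name) { Name = name; }
}

public class Customer
{
    [Column("CustomerName")] public string Name { get; set; }
    [Column("City")] public string City { get; set; }
}

public static class TinyMapper
{
    // Builds a parameterised INSERT statement for any attributed object,
    // sorting the column names so the output is deterministic.
    public static string InsertSql(object entity, string table)
    {
        var cols = new List<string>();
        foreach (PropertyInfo p in entity.GetType().GetProperties())
        {
            var attr = (ColumnAttribute)Attribute.GetCustomAttribute(
                p, typeof(ColumnAttribute));
            if (attr != null) cols.Add(attr.Name);
        }
        cols.Sort(StringComparer.Ordinal);
        return "INSERT INTO " + table + " (" + string.Join(", ", cols)
             + ") VALUES (@" + string.Join(", @", cols) + ")";
    }
}
```

A real framework would also execute the command, track object identity and handle updates, but the division of labour is the same: the metadata, not hand-written DAL code, drives the SQL.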

Let's park that thought for a moment, and consider the other big new feature: SQLCLR.
Over the last eighteen months I've been involved in a number of labs in which ISVs
have brought their .NET development projects in to the UK office and we've worked
with them on any issues they're experiencing with moving to .NET.
I've observed that many of them have quite lightweight business logic layers (BLLs)
- because their applications are fundamentally pretty data-centric and they've been
able to create a design that has a pretty close mapping between business objects and
data entities. For a while I've wondered how much of that layer exists primarily to
avoid the need to write large quantities of T-SQL.

Indeed, many business layers have two major functions. Firstly, they support a workflow
that matches the underlying business process; secondly, they provide some business-level
consistency checking for data input via the UI. I've always been uncomfortable with
the latter function as a database bigot: as far as I'm concerned anything that relates
to the consistency of the data should be done as close to the data itself as possible.
Sometimes I have conversations with developers that leave me worried that a user bypassing
the business application could insert data that might break the application itself.

Now that we've got SQLCLR, it's possible to move that business-level consistency checking
directly into the database as triggers or stored procs written in C#. The devs are
happy because they don't have to use T-SQL - and the DBAs are happy that the ACID
principle hasn't been violated (once they've got their heads around compiled code
being stored on their server). So where does that leave the business tier? Greatly
diminished, I'd argue, particularly when other factors that will no doubt be revealed
at the PDC are taken into account.
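The kind of consistency check I have in mind might look something like this hedged sketch of a C# trigger. The trigger plumbing follows the Microsoft.SqlServer.Server hosting API; the table, column and rule are invented for the example:

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class Triggers
{
    // A business-level consistency rule living next to the data: reject
    // any order with a non-positive quantity, regardless of which client
    // (or which bypassed business application) performed the insert.
    [SqlTrigger(Name = "ValidateOrder", Target = "Orders", Event = "FOR INSERT")]
    public static void ValidateOrder()
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            // "inserted" is the usual trigger pseudo-table of affected rows.
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM inserted WHERE Quantity <= 0", conn);
            if ((int)cmd.ExecuteScalar() > 0)
            {
                // Raising an error aborts the statement, so the offending
                // insert is rolled back before it ever becomes visible.
                throw new Exception("Orders must have a positive quantity.");
            }
        }
    }
}
```

The same rule could be written in T-SQL, of course; the point is that developers who would rather not write T-SQL no longer have that excuse for keeping the check out of the database.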

So, are we going to be able to eradicate one of the three logical application layers
within the next few years? I wonder...

Comments (12)

  1. Anonymous says:

    "Business Tier" – I don’t think the business tier will disappear, but rather it will change. Rather than being a workflow layer and a domain objectmodel it will become a service layer (with a workflow layer and domain objectmodel as implementation) with messages floating around.

    "Yukon" – For products that require database independence, this is not an option (remember ANSI SQL?).

    "Business-level consistency checking for data input via the UI" – Theirs different kinds of validation: UI validation to enhance the user-experience, Business-level validation (belongs in the business layer), Data-consistency validation (belongs in the data layer). The only reason to push ‘Business-level consistency’ down a layer, is performance.

  2. Anonymous says:

    I think we have a completely different perspective here.

    Our business tier offers a range of services to consumers (be they other services, external services, the GUI layer, etc.). Some of those services *sometimes* require us to persist some information into our own database. Equally often, we use implementations of those services that either temporarily cache information in some persistent form, pass the information on to another service (such as a third-party app), or retrieve the information from such an external service. If we used the "bind the logic to our database" approach, we’d be very brittle in the face of changes to our persistence model (for example, we couldn’t migrate our ‘Customer Demographics’ service to talk to a third-party CRM without completely re-writing all the business rules that lived in our data storage layer).

  3. Anonymous says:

    Yves, quite agree that the conclusions I’ve drawn here presume the use of SQL Server as a database.

    As far as your last comment goes, I’m not sure that I agree that what you call business-level validation belongs in the business layer. My contention is that this kind of validation ensures that the data is consistent – not in the traditional ACID sense of the term "consistency", but consistent in a _semantic_ sense. I’d make a general assertion that the database itself should be consistent in both these senses. SQLCLR makes this a feasible proposition without sacrificing performance, since you can move those business rules into the database rather than leaving them in an external component and simply hoping that all accesses to the database occur via that component.

    But I’m quite ready to be proved wrong (and looking forward to it!) by the majority of you who spend more time actively dealing with this stuff in the real world rather than in the unreality that is MS…

  4. Anonymous says:

    Doesn’t this completely ignore scalability? If all the logic is placed in the database (or some unintelligent object mapping layer), there is no way to scale out…

  5. Anonymous says:

    There is no business tier. There are only services and messages.

  6. Anonymous says:

    Business tier scalability was always rather difficult because most interaction with a business tier was synchronous, so it didn’t actually have an impact on end-to-end scalability. The real reason seemed to me the inability of databases to be easily clustered for transaction processing scalability, thus requiring a "funnel" to front-end them. Unfortunately this led to a difficult set of tradeoffs over how much data to pull across the wire (i.e. what should be processed in triggers and stored procedures vs. the business tier’s domain model).

    With the advent of Oracle 9i/10g and SQL Server Yukon, here’s hoping the database itself can scale in the same way the business tier could.

    The advent of C# triggers will also be a wonderful addition – for example, perhaps no more need for temp tables to act as a transient data structure.

  7. Anonymous says:

    "Business Tier" doesn’t automatically imply physical location of business logic. In the case of Yukon (and be advised my opinion exists w/o actually having been exposed to Yukon), I imagine a reasonable architecture would seperate business logic / services from persistance services (aka data services), with both services existing in SQLCLR. So a Contract object may exist in Yukon, which performs validation and calculations, and then calls a ContractPersistance object to persist the data.

    Such an architecture would be fairly easy to refactor. Deploying a new database could be accomplished by re-writing the persistence services to utilize Oracle or DB2.

    From what I understand, SQLCLR and Yukon will provide namespaces that parallel System.Data.SqlClient. This could theoretically render the refactoring of the persistence service a search-and-replace operation on namespaces.

    In conclusion, Yukon’s SQLCLR makes it possible to couple what is considered "business logic" to persistence services. But that doesn’t mean you should. Writing a single method that, for example, ensures that a zip code is valid for the specified state, and then inserts or updates the zip code and state in the customer table, is akin to writing code to do field validation in a button’s click event. Microsoft makes it very easy to do so, but it’s bad design.

    Yukon SQLCLR allows you to avoid crossing process or machine boundaries when calling between business and persistence services, and this is the true value. Architects should take advantage of this option, but at the same time not paint themselves into a corner.

    Eric F. Vincent
    Director, IT
    Volvo Financial Services, Insurance Group

  8. Anonymous says:

    One of the main reasons NOT to put business code in a database is cost. You have to buy a SQL Server (and NT Server) licence for each CPU, and the cost per CPU is higher on bigger boxes.

    E.g. do I have a single box with 4 CPUs and 4 SQL Server licences, or a database box with 2 CPUs and 2 SQL Server licences plus 6 ASP.NET/business boxes with 1 CPU each running NT Web Edition? The second option costs less, and even more so if I want to cluster my database box.

  9. Anonymous says:

    If you are using a table-module architecture and can guarantee an all MSFT architecture now and in the foreseeable future, I agree. This seems to be the standard MSFT approach, or has been until ObjectSpaces.

    If you are using a domain-model architecture, ObjectSpaces is finally an answer to CMR/CMP from J2EE. I personally think the domain model is the way to go because it maps real-world "things" into code and provides the increased flexibility necessary to adapt to a changing business environment. The biggest pain has always been doing OO analysis and design only to then have to map it to the E/R, data-driven reality of relational databases.

    With ObjectSpaces, I can remove much of the headache of the data layer coding. I can’t wait to see how well it stacks up to CMP.

  10. Anonymous says:

    In a data-heavy application, sometimes the database is just going to be more efficient at finding and manipulating the data than transferring large percentages of it over the wire for processing. It may seem counter to scaling, but the lifetime of the business objects will be shorter with this approach; it just moves the bottleneck somewhere else. Build pooling and queuing with connection objects to account for the complex database procedures.

  11. Anonymous says:

    ::With ObjectSpaces, I can remove much of the headache of the data layer coding. I can’t wait to see how well it stacks up to

    You can do this today, and you can do this today with tools way more powerful than ObjectSpaces, which (sadly) is going to be a basic mapper, much as the MS datagrid is a basic grid.
