Perhaps it is time to declare victory in the battle of Rules Engines vs. Dependency Injection

I watched from the sidelines, not long ago, as a team of architects carefully examined different technologies for managing a rules engine.  I found it interesting, but not terribly pertinent, because... well... to be honest... rules engines tend to create more problems than they solve.

So let's look at the problem we are trying to solve with a rules engine. 

When an application is written, it encapsulates some business rules related to the problems it is trying to solve.  Assuming the application is valuable, continued investment will occur, and as it does, more business rules will be placed in the application.  Unfortunately, these rules can be applied at different points in the system's architecture (user interface, composition layer, service layer, data layer, and data migration processes).

Some rules apply in the user interface.  We may decide that a field should be implemented as a drop-down list.  Why?  Because the data entered in that field must already be "known" to the application, either for integration purposes or just to reduce data entry error.  That is a business rule.  We could instead allow free-form input in a user interface field... and then apply editing rules (like date formatting, or putting the spaces and dashes in phone numbers).  That is a rule as well, especially if the user interface is data-driven.
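To make that concrete, here is a minimal sketch of one such editing rule expressed as code.  The class and method names are mine, chosen for illustration; they don't come from any particular product.

```java
// A hypothetical "editing rule" for phone numbers: accept free-form input,
// normalize it to the format the application expects.
public final class PhoneNumberEditingRule {

    // Strip everything that is not a digit, then re-insert the dashes
    // for a 10-digit North American number.
    public String apply(String rawInput) {
        String digits = rawInput.replaceAll("\\D", "");
        if (digits.length() != 10) {
            throw new IllegalArgumentException("Expected a 10-digit phone number: " + rawInput);
        }
        return digits.substring(0, 3) + "-" + digits.substring(3, 6) + "-" + digits.substring(6);
    }
}
```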

We may decide to validate data at the middle tier or services layer.  For example, in some situations, I can only submit an order for products if I provide the number of the agreement I have signed with the vendor.  That agreement specifies lots of things, like terms of payment and perhaps pricing and rebate rules.  So what if I enter an order but provide an invalid or expired agreement number?  There would be rules for this as well.
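That kind of service-layer rule might look something like the sketch below.  The types (Agreement, AgreementRepository) and the class name are assumptions made up for this example, not part of any real product.

```java
import java.time.LocalDate;

// Hypothetical domain types, defined here only so the example stands alone.
interface Agreement {
    LocalDate expiresOn();
}

interface AgreementRepository {
    Agreement findByNumber(String agreementNumber); // returns null if unknown
}

// The service-layer rule: an order may only be submitted against an
// agreement number that is known and has not expired.
final class AgreementNumberRule {
    private final AgreementRepository agreements;

    AgreementNumberRule(AgreementRepository agreements) {
        this.agreements = agreements;
    }

    void check(String agreementNumber) {
        Agreement agreement = agreements.findByNumber(agreementNumber);
        if (agreement == null) {
            throw new IllegalArgumentException("Unknown agreement number: " + agreementNumber);
        }
        if (agreement.expiresOn().isBefore(LocalDate.now())) {
            throw new IllegalStateException("Agreement " + agreementNumber + " expired on " + agreement.expiresOn());
        }
    }
}
```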

There are a few problems that rules engines are designed to solve:

  1. If a business rule needs to change, and it is implemented in many places, potentially in many applications, then it is tedious and expensive to change it.  This slows down the business and increases the cost of agility. 
  2. If an existing application will take on new needs, then new business rules may need to be added to it.  This can increase the complexity of the application substantially.
  3. Business rules often drive tight coupling between systems, especially in an integrated environment.  If I am going to pass data from system FOO to system BAR, and I want to make sure that the data will be acceptable to system BAR, I may be tempted (or required) to validate the data in FOO using the business rules from BAR.  The person who wrote the code for those rules in BAR is long gone from the department.  The expense of making sure those rules are in sync, and kept in sync, can be high.

These are valid problems, and rules engines propose to solve them by providing interesting mechanisms.  They include the ability to pass values to the engine and have it calculate a result that can be understood by the caller.  This allows isolation, to a point, because the calculation itself can be changed without touching the caller.  You can also place process rules into a rules engine, so that you pass in the state machine information and one or more inputs or events, and the state machine reacts by sending out events and changing state.  This is the core concept of workflow components.
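The calling pattern looks roughly like this.  The RulesEngine interface below is hypothetical, standing in for whatever product API you would actually invoke; the point is the shape of the interaction: the application bundles up facts, ships them out, and interprets the verdict that comes back.

```java
import java.util.Map;

// A stand-in for a generic rules engine API. The real thing would be a
// vendor-specific client, but the interaction pattern is the same.
interface RulesEngine {
    Map<String, Object> evaluate(String ruleSetName, Map<String, Object> facts);
}

class OrderService {
    private final RulesEngine engine;

    OrderService(RulesEngine engine) {
        this.engine = engine;
    }

    boolean isOrderAcceptable(Map<String, Object> orderFacts) {
        // The rules themselves live outside the application; we only see the result.
        Map<String, Object> result = engine.evaluate("order-validation", orderFacts);
        return Boolean.TRUE.equals(result.get("accepted"));
    }
}
```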

That said, I think there are very narrow uses where rules engines are actually a good idea.  Many folks argue that workflow engines are essentially a subclass of rules engines, and workflow is a good thing to isolate.  Why?  Because writing parallel workflow capabilities into your code, unless you are an expert in Petri nets, is HARD.  What you are really encapsulating is not the data, or even the process, but the capability of executing the process properly.  Given that, I'm not even sure I consider workflow engines to be a subclass of rules engines, and the remainder of this post specifically excludes workflow engines and any other 'rules engine' where "how" you execute is more difficult to manage than "what" you execute.

The generic rules engine, on the other hand, is not so specific.  More often than not, rules engine proponents say "use the engine for encapsulating the rules, and allow them to be executed here."  Nice idea.  Too bad it doesn't work.

The problem with making it work is, as always, in the details.  In order to delegate the execution of rules, they have to be rules that are efficient to delegate (there go the user interface editing rules), not tied up in data coupling (there goes the drop-down box, which would require passing the domain data across), and describable as an algorithm or formula (there go the error handling rules).  In addition, the algorithm sometimes has to be encoded in a programming language that is executed as script, which is slow and inefficient.

A much better approach is to create a set of strategy patterns to be used for rules validation, write code that implements those patterns, and inject that code, at run time, into the executing environment.
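Here is a minimal sketch of what that strategy shape can look like.  The names (ValidationRule, RuleRunner) and the shared-context map are illustrative assumptions, not the API of any particular framework.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Every rule is a small strategy behind one standard interface.
interface ValidationRule<T> {
    // Return null when the candidate passes, or a human-readable failure message.
    // The shared map lets rules exchange "global" values if they need to.
    String validate(T candidate, Map<String, Object> shared);
}

// The application owns execution: it simply walks whatever rule strategies
// were injected into it and collects the failures.
final class RuleRunner<T> {
    private final List<ValidationRule<T>> rules;

    RuleRunner(List<ValidationRule<T>> rules) {
        this.rules = rules;
    }

    List<String> run(T candidate, Map<String, Object> shared) {
        List<String> failures = new ArrayList<>();
        for (ValidationRule<T> rule : rules) {
            String message = rule.validate(candidate, shared);
            if (message != null) {
                failures.add(message);
            }
        }
        return failures;
    }
}
```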

Write the rules as small bits of code, carefully controlled, each adopting an interface that is called in a standard manner.  Your data drives your system to use the right code module.  Inside the module, you have a good bit of freedom to figure out what you want to accomplish.  You can even share 'global' values across code modules if the framework is put together well.
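One such module, written against the interface sketched above, might look like this.  The Order type and the "country" key are made up for illustration.

```java
import java.util.Map;

// A tiny illustrative domain type so the example stands alone.
record Order(String postalCode) {}

// One concrete rule module. It reads a shared value ("country") that an
// earlier rule, or the framework itself, may have placed in the context.
final class PostalCodeRule implements ValidationRule<Order> {
    @Override
    public String validate(Order order, Map<String, Object> shared) {
        String country = (String) shared.getOrDefault("country", "US");
        if ("US".equals(country) && !order.postalCode().matches("\\d{5}(-\\d{4})?")) {
            return "Postal code " + order.postalCode() + " is not a valid US ZIP code";
        }
        return null; // rule passes
    }
}
```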

Note: this is not a rules engine.  It is a rules framework.  A rules engine executes the rules for you.  A framework merely gives you the ability to control how the rule instances are created and wired in.  Your app executes them directly.

This is the basic idea behind event-driven programming!  Nothing new there.  I'm just suggesting that you use a framework to do it, so that systems can change at run time by changing configuration files.

For those folks who don't know what I mean by 'inject,' it means that you set up configuration in text files (presumably XML) that declares which code modules contain your rule classes, all of which implement the proper interfaces.  Then your system uses that configuration data to load those modules at the right time and keep them around for rules validation.
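A hedged sketch of that loading step, under the same assumptions as the earlier examples.  The configuration format and class names are invented for illustration; in practice a dependency injection container (Spring, for example) does this wiring for you, and the code below only shows that there is no magic in it.

```java
import java.util.ArrayList;
import java.util.List;

// Assume a configuration file that lists fully qualified rule class names, e.g.:
//
//   <rules context="order-entry">
//     <rule class="com.example.rules.PostalCodeRule"/>
//     <rule class="com.example.rules.RequiredFieldsRule"/>
//   </rules>
//
// Once those class names have been read from the file, loading them is plain reflection.
final class RuleLoader {

    @SuppressWarnings("unchecked")
    static <T> List<ValidationRule<T>> load(List<String> configuredClassNames) throws Exception {
        List<ValidationRule<T>> rules = new ArrayList<>();
        for (String className : configuredClassNames) {
            // Each configured class is expected to implement the standard rule interface
            // and to expose a no-argument constructor.
            Class<?> ruleClass = Class.forName(className);
            rules.add((ValidationRule<T>) ruleClass.getDeclaredConstructor().newInstance());
        }
        return rules;
    }
}
```

Change the file, and the set of rules the application enforces changes with it; nothing gets recompiled, and the application still executes the rules itself.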

This is far better than using a rules engine.  What's odd is that I haven't seen many comparisons of the two, yet dependency injection has clearly won the battle.  Over the years, a lot more code has been written to be injected than to call out to external systems for execution.

So why bother to revisit this topic?  To declare victory for dependency injection and kill the generic 'rules engine' concept completely.  Dependency injection won.  Let's not waste any more time discussing generic rules engines.