One of the more interesting aspects of supporting Business Intelligence applications is the opportunity to investigate behaviors that are unexpected or not intuitively understood. Proactive Caching is one such aspect of Analysis Services: it is not well understood by many and can present some interesting issues, especially when data quality errors occur. One behavior encountered recently involved Proactive Caching entering a loop. When cube partitions are configured for Proactive Caching using either Client Notifications or SQL Notifications and the Proactive Caching operation fails, the operation is immediately re-executed, fails again, and continues executing in a loop. Once this condition occurs, the only way to terminate the loop of Proactive Caching operations is to stop and restart the Analysis Services service.
The key point of this discussion is that, under the default ErrorConfiguration settings, data quality errors (e.g., orphaned fact rows, referential integrity violations) and other data errors cause the processing operation to fail without canceling the notification that triggered the Proactive Caching operation. While this may seem unexpected, it is the designed behavior of Proactive Caching. For these reasons, Proactive Caching should be used with care and closely monitored. Eliminating data quality issues in the relational source would be the most desirable way to prevent the problem, but doing so is not always feasible. The recommended practice is to define a custom ErrorConfiguration for any partition that has Proactive Caching enabled with either Client Notifications or SQL Notifications. The custom ErrorConfiguration should be set to report and ignore errors, so that processing completes and the Proactive Caching operation does not enter a loop.
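As a sketch of what such a custom ErrorConfiguration might look like, the fragment below shows the ErrorConfiguration element within a partition definition (as it would appear in an XMLA script generated from the partition in SQL Server Management Studio). The partition ID shown is a placeholder, and the specific property values are one reasonable choice, not the only one: the key error limit is removed and key errors are reported but allowed to continue, so a data quality error no longer fails processing.

```xml
<!-- Hypothetical partition fragment; the ID is a placeholder.
     Only the ErrorConfiguration element is shown in full. -->
<Partition>
  <ID>FactInternetSales_2024</ID>
  <!-- ... other partition properties (Source, StorageMode, etc.) ... -->
  <ErrorConfiguration>
    <!-- -1 removes the key error limit (default is 0, which stops processing
         on the first key error and re-triggers the Proactive Caching loop) -->
    <KeyErrorLimit>-1</KeyErrorLimit>
    <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
    <!-- Report the errors in the processing results, but keep going -->
    <KeyNotFound>ReportAndContinue</KeyNotFound>
    <KeyDuplicate>ReportAndContinue</KeyDuplicate>
    <NullKeyNotAllowed>ReportAndContinue</NullKeyNotAllowed>
    <NullKeyConvertedToUnknown>ReportAndContinue</NullKeyConvertedToUnknown>
  </ErrorConfiguration>
</Partition>
```

Because the errors are reported rather than silently ignored, the processing log still surfaces the underlying data quality problems for follow-up, while the partition itself processes successfully and the notification loop is avoided.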