In most production solutions, change is considered risky. So, typically one reduces risk by introducing a solution that will:
- Limit the number of code changes to as few places as possible – ideally inside the caching layer only
- Limit the performance impact of the code additions
- Make the changes configurable, so that every feature described can be switched off
- Instrument the changes, so that their effects can be monitored
The steps below describe a custom caching mechanism that improves on the existing ASP.NET 1.x Cache object by adding a custom synchronization layer. Moving to ASP.NET 2.0, with its many more caching options – including cache notifications for rapidly changing data – would be preferable, but if you can’t change or upgrade, or are limited in what you can modify, then these few steps may alleviate load on your database (and take some of it back to your application boxes):
1 – Blocking
The idea here is to block subsequent threads while the first thread is reading from the database on a cache miss. The rationale is that additional requests for the same information are unnecessary and quite likely a source of stress on the database. This design change can be introduced via a new linked Hashtable representing the status of the cached items: a CacheStatus object exists for every Cache entry. The status object’s “IsReading” flag is set to true when the cache is first read but no data is found. When the data is added to the cache, any other blocked threads are released and the “IsReading” flag is reset to false. The blocking and releasing can be done through the .NET Monitor class, as outlined in this synchronization options discussion site.
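The blocking read above can be sketched roughly as follows. The sketch is in Java for illustration only – Java’s `synchronized`/`wait`/`notifyAll` correspond closely to the .NET Monitor class’s Enter/Wait/PulseAll. The names CacheStatus and isReading come from the text; everything else (BlockingCache, the loader parameter) is an assumed shape, not the original implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class BlockingCache {
    // Mirrors the text's CacheStatus object, one per cache entry.
    private static class CacheStatus {
        boolean isReading; // true while the first thread loads the data
    }

    private final Map<String, Object> cache = new HashMap<>();
    private final Map<String, CacheStatus> status = new HashMap<>(); // the linked status table

    public Object get(String key, Supplier<Object> loader) {
        CacheStatus s;
        synchronized (this) {
            Object value = cache.get(key);
            if (value != null) {
                return value; // cache hit: no blocking needed
            }
            s = status.computeIfAbsent(key, k -> new CacheStatus());
            if (s.isReading) {
                // Another thread is already loading: block until it finishes.
                while (s.isReading) {
                    try {
                        wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException(e);
                    }
                }
                return cache.get(key);
            }
            s.isReading = true; // this thread becomes the first reader
        }
        Object value = loader.get(); // the expensive database read, outside the lock
        synchronized (this) {
            cache.put(key, value);
            s.isReading = false; // reset the flag ...
            notifyAll();         // ... and release any blocked threads
        }
        return value;
    }
}
```

Note that the database read happens outside the lock, so blocked threads only wait for the data, not for each other.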
2 – Stale data
The problem with stale data is deciding how long to keep it. Naturally, keeping data indefinitely presents some performance issues (cache size and retrieval time) and some data integrity issues (i.e. when to expire and scavenge). This is where an assumption can be made: keep stale data for the same period as the original cache entry. For example, data being cached for 30 seconds will remain in the cache, marked as stale, for a further 30 seconds. The first expiry event handler implements the delegated function to re-add the item to the cache; the second event handler switches the stale flag off and removes the status item from its cache.
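A minimal sketch of that stale-window rule, again in Java for illustration. To keep it testable it checks expiry lazily on read, where the text’s version uses two cache-expiry event handlers – but the timing rule is the same: serve the stale value for one further TTL, then scavenge and reload. StaleCache, Entry and the explicit `now` parameter are all assumed names.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class StaleCache {
    private static class Entry {
        final Object value;
        final long insertedAt;
        Entry(Object value, long insertedAt) { this.value = value; this.insertedAt = insertedAt; }
    }

    private final Map<String, Entry> cache = new HashMap<>();
    private final long ttlMillis;

    public StaleCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // 'now' is passed in explicitly so the timing rule is easy to test.
    public synchronized Object get(String key, long now, Supplier<Object> loader) {
        Entry e = cache.get(key);
        if (e != null) {
            long age = now - e.insertedAt;
            if (age < 2 * ttlMillis) {
                return e.value;   // fresh for the first TTL, stale-but-served for the second
            }
            cache.remove(key);    // past the stale window: scavenge
        }
        Object value = loader.get(); // reload from the database
        cache.put(key, new Entry(value, now));
        return value;
    }

    public synchronized boolean isStale(String key, long now) {
        Entry e = cache.get(key);
        return e != null && (now - e.insertedAt) >= ttlMillis;
    }
}
```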
3 – Exception management
The design decision here is to ensure that, if an error occurs during a read from the database, the reading process neither blocks other threads nor returns stale data indefinitely.
Adding exception management to every data-access method would mean amending each one – which goes against the design principles above. The following alternative could be implemented instead:
While waiting for the data to be put into the cache, if a (configurable) sleep-wait limit is reached, the secondary thread takes over the responsibility of reading. This way a failure in the first reader does not result in an unacceptable wait. If a (configurable) maximum read time is reached, the thread throws an exception. In both cases the reading responsibility is effectively revoked from the first reader.
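The takeover rule might look something like this Java sketch. It covers only the sleep-wait limit; the maximum-read-time exception is noted in a comment. `waitLimitMillis` and the shared `isReading` flag are hypothetical names for the configurable limit and cache-status flag described above.

```java
import java.util.function.Supplier;

public class TimedRead {
    // Wait up to waitLimitMillis for the first reader to finish; if the limit
    // is reached, revoke its responsibility (clear the flag) and read ourselves.
    public static Object waitOrTakeOver(Object lock, boolean[] isReading,
                                        long waitLimitMillis,
                                        Supplier<Object> loader) {
        synchronized (lock) {
            long deadline = System.currentTimeMillis() + waitLimitMillis;
            while (isReading[0]) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    // Sleep-wait limit reached: the first reader may have
                    // failed, so this thread takes over the read.
                    isReading[0] = false;
                    break;
                }
                try {
                    lock.wait(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                }
            }
        }
        // A full implementation would also bound this read with the
        // (configurable) maximum read time and throw if it is exceeded.
        return loader.get();
    }
}
```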
4 – Nulls
One has to be careful that nulls do not get cached directly (the ASP.NET Cache won’t let you), but null results do happen, and the use of a null identifier – a sentinel object stored in place of null – may be the solution here.
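The null-identifier idea can be sketched as below (Java for illustration): a private sentinel object is stored in place of null and translated back on the way out, so “we looked this up and found nothing” is cached without putting an actual null in the cache. NullSafeCache and NULL_SENTINEL are assumed names.

```java
import java.util.HashMap;
import java.util.Map;

public class NullSafeCache {
    // Stands in for a cached null; never leaks outside this class.
    private static final Object NULL_SENTINEL = new Object();

    private final Map<String, Object> cache = new HashMap<>();

    public void put(String key, Object value) {
        cache.put(key, value == null ? NULL_SENTINEL : value);
    }

    public boolean contains(String key) {
        return cache.containsKey(key); // true even when the cached result was null
    }

    public Object get(String key) {
        Object v = cache.get(key);
        return v == NULL_SENTINEL ? null : v; // translate the sentinel back
    }
}
```

With this in place, a cached “not found” is distinguishable from “never looked up”, so the database is not re-queried for rows that are known not to exist.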
If you have scope to move to a new pattern, you could do worse than look at the Enterprise Library Caching Application Block. Check out the following article: Caching Architecture Guide for .NET Framework Applications.
Incidentally, I used this code to test my custom ASP.NET cache. I thoroughly recommend doing the same: create a VS2005 test project with plenty of threads (name them) and log each action out to the console.