Transactions and in-memory data (part 1 of n)

[Maybe I'll fill in the "n" when I'm done.  I haven't had time to plan out the entire series...]

In yesterday's post, I covered two basic approaches to achieving transactionality.  One is to build up a log of what to do when the commit is requested; the other is to make changes immediately but keep track of what you have to undo if a failure or rollback occurs.
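
To recap those two shapes in code, here's a minimal sketch against a toy key/value store.  The names (RedoLog, UndoLog, Store) are mine and purely illustrative - a sketch of the pattern, not how any real database spells it:

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    using Store = std::map<std::string, int>;

    // Approach 1: log what to do; nothing touches the store until commit.
    struct RedoLog {
        std::vector<std::pair<std::string, int>> pending;

        void write(std::string key, int value) {
            pending.emplace_back(std::move(key), value);
        }
        void commit(Store& store) {
            for (const auto& p : pending)   // the dangerous part: these writes
                store[p.first] = p.second;  // had better all succeed
            pending.clear();
        }
        void rollback() { pending.clear(); }  // trivial: store was never touched
    };

    // Approach 2: change the store immediately, but remember how to undo it.
    struct UndoLog {
        std::vector<std::pair<std::string, int>> saved;  // prior values

        void write(Store& store, std::string key, int value) {
            saved.emplace_back(key, store[key]);  // snapshot the old value first
            store[key] = value;                   // (treats "absent" as 0 for brevity)
        }
        void commit() { saved.clear(); }  // changes are already in place
        void rollback(Store& store) {
            for (auto it = saved.rbegin(); it != saved.rend(); ++it)
                store[it->first] = it->second;  // restore in reverse order
            saved.clear();
        }
    };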

Astute readers will recognize that the first approach doesn't actually solve The Big Problem of persistence and transactionality, which is that you need a guaranteed way to go backwards once you've started making changes to persisted state.  At commit time, there's no guarantee in the world that the n I/Os required to apply the logged changes will all succeed.

I haven't followed up on verifying my claim that a major database did do this, but you can see that (a) it does solve the problem for voluntary rollback and (b) if you're careful and allocate space before making any dangerous changes, the only damaging failure modes left are basically hardware failure or excessive system stress.  As for the latter, a wide variety of databases more-or-less assume that they own the system while they're running, and their installation guides probably highly recommend that you not run anything else on the hardware, so the likelihood of a failed write due to lack of kernel-mode address space or the like is pretty low.

This technique also has some real advantages over the other approach.  Since all changes are buffered in memory, if you update a value repeatedly within a transaction, you're only ever modifying the in-memory image.  Subsequent reads can also be guaranteed to be satisfied from memory.  And once you start worrying about concurrency, you may find you want to capture even state you merely read from the persistent store.  If you're on dedicated hardware and have the physical resources, hey, maybe this is a huge perf win!  At the same time, the physical memory needed to back that state can grow excessively - and if you're relying on the pagefile to back it, you (probably) have no perf advantage over having written the data back to the file in the first place - so if you support concurrent transactions you can get into trouble giving each transaction an arbitrary write-back quota.
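
Here's a rough sketch of that buffering behavior, again against a made-up key/value store: repeated writes to one key just overwrite the buffered entry, and reads consult the transaction's own buffer before falling through to persisted state.

    #include <map>
    #include <optional>
    #include <string>

    using Store = std::map<std::string, int>;

    struct BufferedTxn {
        const Store& persisted;             // committed state (read-only here)
        std::map<std::string, int> buffer;  // uncommitted writes live only in memory

        // Updating the same key repeatedly only touches the in-memory image;
        // nothing hits the persistent store until commit.
        void write(const std::string& key, int value) { buffer[key] = value; }

        // Reads must see our own uncommitted writes first.  A real system
        // would also have to cap the buffer's growth per transaction - the
        // write-back quota problem above.
        std::optional<int> read(const std::string& key) const {
            if (auto it = buffer.find(key); it != buffer.end()) return it->second;
            if (auto it = persisted.find(key); it != persisted.end()) return it->second;
            return std::nullopt;
        }
    };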

So that's interesting, but let's map it back to the in-memory problem I'm focusing on.  If you have no in-memory state, there's a lot you can leverage from the great work done by database vendors.  But like I said, for a lot of business-critical code you can't do this - imagine if every keystroke in your word processor turned into a database transaction.  It's just a non-starter.

It seems there are three ways you could go about this:

  1. Go the "functional" route, where the buffered state is a complete copy of whatever network of objects you're trying to modify in memory
  2. Build all your data structures so that they have a way to participate in an "apply log" where they do the hard work (comparisons, allocations etc.) up front but don't change anything until you commit
  3. Somehow play this same game at the memory access level

1. The functional route

This clearly doesn't scale.  If you have a large graph of objects that you're making changes to, you can't really imagine making a copy of the entire graph for every operation.
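
To make that concrete, here's a sketch with a made-up Node type: the "transaction" deep-copies everything reachable, mutates the copy, and publishes it by swapping the root.  Rollback is free (just drop the copy), but every transaction pays for the full O(n) clone up front.

    #include <memory>
    #include <vector>

    struct Node {
        int value = 0;
        std::vector<std::shared_ptr<Node>> children;
    };

    // Deep-copy the whole structure (assuming a tree; a general graph
    // would need a visited map to preserve sharing and handle cycles).
    std::shared_ptr<Node> clone(const std::shared_ptr<Node>& n) {
        if (!n) return nullptr;
        auto copy = std::make_shared<Node>();
        copy->value = n->value;
        for (const auto& child : n->children)
            copy->children.push_back(clone(child));
        return copy;
    }

    // A "transaction": copy it all, mutate the copy, publish on commit.
    void transact(std::shared_ptr<Node>& root) {
        auto working = clone(root);  // O(n) copy before touching anything
        if (!working) return;
        working->value += 1;         // some arbitrary mutation on the copy
        root = working;              // commit is a pointer swap; rollback
    }                                // would mean simply dropping `working`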

2. Defer updates into a commit-time log

This is really tricky.  Consider a binary tree implementation.  In the insert() function, you'd have to allocate the memory for the new node, record where it would go, and stash away enough state to commit the change later.  That part doesn't sound hard, but now consider that reads against the mutated-but-uncommitted state must be satisfied against the log.  Tomorrow, when I discuss the other approach (make changes eagerly, allowing for no-fail rollback), hopefully it will be obvious that this one is much, much harder to get right.
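
To see why, here's a sketch of what this forces onto even a toy binary search tree (all the names are mine): insert() does the failure-prone allocation up front and appends to a log, but contains() now has to merge the log with the real tree - and every other read path would need the same treatment.

    #include <memory>
    #include <vector>

    struct TreeNode {
        int key = 0;
        std::unique_ptr<TreeNode> left, right;
    };

    struct LoggedTree {
        std::unique_ptr<TreeNode> root;              // committed structure
        std::vector<std::unique_ptr<TreeNode>> log;  // pre-allocated, uncommitted inserts

        // Do the hard work (allocation) now; don't touch the committed tree yet.
        void insert(int key) {
            auto node = std::make_unique<TreeNode>();
            node->key = key;
            log.push_back(std::move(node));
        }

        // Reads must be satisfied against the log as well as the tree.
        bool contains(int key) const {
            for (const auto& n : log)
                if (n->key == key) return true;  // found in uncommitted state
            for (const TreeNode* cur = root.get(); cur;
                 cur = key < cur->key ? cur->left.get() : cur->right.get())
                if (key == cur->key) return true;
            return false;
        }

        // Commit splices the logged nodes in; nothing here can fail for
        // lack of memory because all allocation happened in insert().
        void commit() {
            for (auto& n : log) {
                std::unique_ptr<TreeNode>* slot = &root;
                while (*slot)
                    slot = n->key < (*slot)->key ? &(*slot)->left : &(*slot)->right;
                *slot = std::move(n);
            }
            log.clear();
        }

        void rollback() { log.clear(); }  // the committed tree never changed
    };

Even this toy shows the trouble: every read operation (lookup, traversal, min/max, and so on) needs its own log-merging logic, and supporting delete would mean putting tombstones in the log too.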

3. Memory access level

Yeah, right.  That's a research topic for someone to earn a Ph.D. on and then eventually productize 30 years later, when the hardware and compilers are at the point where it's feasible.  Basically the same pattern GC has followed. :-)

My general conclusion: while there are some compelling benefits to this pattern (rollback is easy!), it's very hard to get right for general-purpose usage.