Had lunch today with Joel P and Franky at the best Yum Cha restaurant this side of The Great Wall, and we got onto the topic of concurrent .NET. That opened up the discussion around an idea Nigel Watson and I cooked up one day while talking pie, and chatting with Joel P moved it one step closer from the sky to the IDE.
See, the idea goes by the name of Blanket.NET, and is based on a concept we call “spongy” interfaces (I'm quite sure the concept already exists under another name somewhere in the galaxy). So, it goes a little something like this:
Blanket.NET is, firstly, a runtime that is deployed to one or many multi-core, multi-proc servers. The runtime supports a declarative framework that lets developers decorate a class as being “spongy” (we'll get to what “spongy” means later). So the class ends up marked with something like [Blanketed].
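None of this exists yet, but a rough sketch of what the declarative side might look like (the BlanketedAttribute and the Customers class here are entirely made up for illustration):

```csharp
using System;

// Hypothetical marker attribute the Blanket.NET runtime would scan for.
// Decorating a class with it opts that class in to running under the blanket.
[AttributeUsage(AttributeTargets.Class)]
public class BlanketedAttribute : Attribute { }

[Blanketed]
public class Customers
{
    // Ordinary business logic; no distribution plumbing in sight.
    public decimal LifetimeValue(int customerId)
    {
        return 0m; // placeholder
    }
}
```

The point being: the developer writes a plain class, and the runtime discovers the attribute via reflection and does the rest.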
You then deploy the component to any server “under” the blanket. The runtime immediately picks up a change under the blanket and communicates it via some kind of channel (this should be abstracted so as to offer both binary and text protocols, but all should be stingy as) to the other “nodes” under the blanket, describing the composition of the component. Note that it doesn't transmit the whole component, just a set of lightweight instructions that let the other runtimes compose a copy of the object; it won't be the identical object, just a composition (or imitation). Why? Because most of the code in a component is made up of repeating patterns, so transmitting the same line of code 15 times (or 15 instances of the same text pattern) is a waste; instead you assemble a way of recreating the component from those repeated patterns. So, each time something changes under the blanket, the runtimes sync, and while they're syncing, all calls into the blanket get queued by the “stitching”, the part of the runtime that handles queueing messages while the blanket is unavailable.
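A minimal sketch of that wire idea, with a made-up CompositionMessage shape (this is not a real protocol, just the dedupe concept in code): each distinct pattern travels once, and a sequence of indexes tells the receiving runtime how to stitch the imitation back together.

```csharp
using System.Text;

// Hypothetical wire format: rather than shipping the whole component,
// ship a table of repeated patterns plus a sequence of indexes into it.
public class CompositionMessage
{
    public string TypeName;    // e.g. "Customers"
    public string[] Patterns;  // each distinct pattern transmitted once
    public int[] Sequence;     // indexes into Patterns, in original order
}

public static class BlanketWire
{
    // Recompose an imitation of the component on the receiving node.
    public static string Recompose(CompositionMessage msg)
    {
        var sb = new StringBuilder();
        foreach (int i in msg.Sequence)
            sb.Append(msg.Patterns[i]);
        return sb.ToString();
    }
}
```

So a pattern repeated 15 times costs one table entry plus 15 small indexes, instead of 15 full copies on the wire.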
Next, what is this spongy business? Well, you make a call into the blanket for 1..N component instances of a class or type, say Customers. When you make the call, you tell the blanket how many copies you want working, and whether they should be exclusive or shared. If they are shared, the areas decorated with [BlanketShared] are synchronised; if exclusive, each component runs in isolation. Communication is all done through delegate callbacks.
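Here's one way the calling side might feel; the Blanket facade, InstanceMode enum and the stand-in Customers class are all invented for the sketch:

```csharp
using System;

public enum InstanceMode { Exclusive, Shared }

public class Customers { } // stand-in for a [Blanketed] component

// Hypothetical client-side facade over the blanket runtime.
public static class Blanket
{
    // Ask for 'count' working copies of T; each one comes back
    // through the delegate callback as it becomes available.
    public static void Acquire<T>(int count, InstanceMode mode, Action<T> onReady)
        where T : new()
    {
        // Sketch only: a real runtime would hand these out across nodes,
        // honouring exclusive vs shared semantics.
        for (int i = 0; i < count; i++)
            onReady(new T());
    }
}
```

Usage would be something like `Blanket.Acquire<Customers>(5, InstanceMode.Exclusive, c => { /* work */ });` — the caller never knows or cares which node each copy lands on.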
OK, now this is what I reckon is the cool stuff. The blanket is hooked into the low-level hardware, and can build heat maps of core and processor performance (and workload). When you ask for, say, 15 StockPriceCalculator instances and give them some values to compute, the blanket looks to the “under” nodes and instructs components across those servers to perform the work. During this process, each component instance registers a set of service level counters with its local machine's blanket runtime, and should the server that instance is executing on be unable to meet that SLA, the blanket looks for another node and simply passes the execution context and callback ref over to that node to complete processing. Why? ‘Cos then you have a highly dynamic, hot-scalable, transparent way of managing your workhorse components. And all developers have to do is mark their code with some attributes. Also, for your projects, you simply invoke a VS add-in that queries the blanket for its current components and builds the dynamic collars into your project's App_Code folder so you can make the calls. This could be designed to refresh each time the blanket environment changes.
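Sketching the SLA-driven hand-off (again, ServiceLevel, IBlanketNode and BlanketScheduler are all hypothetical names): an instance registers its counters, and when its node can't keep up, the runtime walks the heat map for another node that can.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical SLA a component instance registers with its local runtime.
public class ServiceLevel
{
    public TimeSpan MaxLatency;
    public double MinCpuHeadroom; // fraction of a core that must stay free
}

// What each node "under" the blanket would need to expose.
public interface IBlanketNode
{
    bool CanMeet(ServiceLevel sla);
    void Execute(object executionContext, Action callback);
}

public static class BlanketScheduler
{
    // Hand the execution context and callback ref to the first node
    // that can still meet the SLA; if none can, the call would sit
    // in the "stitching" queue until one frees up.
    public static bool Dispatch(ServiceLevel sla, IEnumerable<IBlanketNode> nodes,
                                object context, Action callback)
    {
        foreach (var node in nodes)
        {
            if (node.CanMeet(sla))
            {
                node.Execute(context, callback);
                return true;
            }
        }
        return false; // left for the stitching in the real thing
    }
}
```

The nice property is that migration is invisible to the caller: the callback fires the same way no matter which node ends up finishing the job.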
Phew! Anyway, at this stage it’s just an idea a couple of us are kicking around, but I’d love to hear from anyone and everyone about what this could mean to them, and how it could be built upon. Just some Friday funsies to end the week if nothing else 🙂