Well it’s high time I gave you some numbers for the new stuff.
In the original benchmark the Linq version ran at 13.62% of the original time. While I’m discussing that result, Sekiya Sato pointed out an error in my original benchmark (see the comments of the above posting): I had one of my IsDBNull() checks backwards. That error made the “nolinq” version run 3.6% faster than it should have, so the 13.62% I reported should actually have been 14.09%. Let me restate that result for clarity: in May 2006, DLinq was running at 14.09% of the underlying provider speed in this (harsh) test case on my hardware, not 13.62% as previously reported.
I have in my hands a nice fresh build, similar to what you’re going to get when you adopt Beta 2. The results below include my original test plus some quick insert and update tests I added; I’ll describe those in the next installment. What we want to talk about right now is the select cases. The regular select is as originally described. The syntax for the compiled select (and this really helps) is this:
```csharp
var fq = CompiledQuery.Compile(
    (Northwinds nw) =>
        from o in nw.Orders
        select new OrderDetail
        {
            OrderID = o.OrderID,
            CustomerID = o.CustomerID,
            EmployeeID = o.EmployeeID,
            ShippedDate = o.ShippedDate
        });
```
Note that with the nice type inferencing you never have to see the generic types in your code, but it’s still strongly typed. To use this query you simply write:

```csharp
foreach (var detail in fq(nw))
    sum += detail.OrderID;
```
Now let’s have a look at the numbers. The units are test iterations per second, so bigger is better:

| Test | DLinq (iter/s) | Direct provider (iter/s) | DLinq relative | |
|---|---|---|---|---|
| update | 20.67 | 4.92 | 420.19% | (DLinq is faster) |
| compiled update | 20.71 | 4.92 | 421.00% | (DLinq is faster) |
| insert | 16.12 | 4.57 | 352.66% | (DLinq is faster) |
Wow, that’s pretty good. If you do nothing to your code, raw internal improvements alone take you from 14.09% of the underlying provider’s speed to 53.56%, a 3.8x improvement. But look at what you can do with compiled queries: compiling the select statement got me 93.06% of the underlying provider’s raw speed, which is 6.6x faster than what I got back in May of 2006. This is a truly great result because, as I’ve mentioned before, this is a harsh test. With the normal overheads associated with actual business logic and data transfer, this result basically means you may not be able to measure any throughput degradation at all if you use compiled DLinq queries in your program.
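If you want to check the improvement ratios for yourself, here is the arithmetic in a few lines of Python. The percentages are the ones quoted above; nothing else comes from the benchmark.

```python
# Percentages of the underlying provider's speed, as quoted in the text.
may_2006 = 14.09       # corrected May 2006 result
new_plain = 53.56      # new build, uncompiled select
new_compiled = 93.06   # new build, compiled select

print(round(new_plain / may_2006, 1))     # 3.8 -- "a 3.8x improvement"
print(round(new_compiled / may_2006, 1))  # 6.6 -- "6.6x faster"
```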
I think I’ll let Matt talk about the details of how we did this, since he did the work, but I can give you the high-level points if you haven’t already guessed them from the previous postings:
- create custom methods that bind the data perfectly using lightweight code generation
- create reusable SQL with parameters to avoid generating the SQL query again
- provide read-only contexts to avoid any unnecessary entity management (this isn’t needed in my case anyway because I new up an OrderDetail object with only part of the data)
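To give some flavor for the first bullet, here is a loose Python analogy, entirely my own sketch and not DLinq’s implementation: rather than resolving column names on every row, you build a specialized binder once up front and then reuse it for every row, which is the spirit of binding the data with generated code.

```python
from collections import namedtuple

def make_binder(columns, wanted):
    """Compile a row-binding function once; reuse it for every row.

    columns: the column names the data source returns, in order.
    wanted:  the subset of columns our result type needs.
    """
    OrderDetail = namedtuple("OrderDetail", wanted)
    idx = [columns.index(c) for c in wanted]  # resolve positions once, not per row
    return lambda row: OrderDetail(*(row[i] for i in idx))

# Hypothetical Northwind-style row for illustration.
columns = ["OrderID", "CustomerID", "EmployeeID", "ShippedDate", "Freight"]
bind = make_binder(columns, ["OrderID", "CustomerID"])
print(bind((10248, "VINET", 5, "1996-07-16", 32.38)))
# OrderDetail(OrderID=10248, CustomerID='VINET')
```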
When I modelled this on paper last summer it looked like we could get about 95% of the underlying provider speed plus or minus measurement error and we seem to have landed at 93%.
Now you may ask: why is DLinq doing better at updates than my code that writes directly to the underlying provider? I’ll talk about this a bit more next time, but the short answer is this: the code I wrote to do the updates looks like pretty typical SQL sent to the database in batches. However, I didn’t go to the trouble of creating prepared statements for the update and insert cases, and DLinq gives you this automatically. So despite my more complicated logic, the savings DLinq got from superior SQL trumped my technique.
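The prepared-statement win is a general one, not specific to DLinq. Here is a tiny illustration using Python’s sqlite3 module (my own example; the actual benchmark is C# against SQL Server): because the parameterized SQL text is identical on every call, the engine can compile the statement once and reuse the plan, instead of parsing freshly built SQL strings each time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, Freight REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(i, 0.0) for i in range(100)])

# Parameterized update: one SQL string, many parameter sets. The statement
# is prepared once and reused, which is the effect DLinq gives you for free.
update = "UPDATE Orders SET Freight = ? WHERE OrderID = ?"
conn.executemany(update, [(1.5, i) for i in range(100)])

row = conn.execute("SELECT Freight FROM Orders WHERE OrderID = 42").fetchone()
print(row[0])  # 1.5
```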
And lastly, as always, this doesn’t necessarily translate to any specific numbers for your application but it sure bodes well. I’m very pleased indeed.