Whidbey ADO.NET Beta 1 has shipped: favorite features

Well, we have shipped Whidbey Beta 1. This is a great release for the ADO.NET team: we have managed not to break away from the existing data model while adding great new features. Everything you have will continue working as-is, we have fixed old problems, and we have added features to address the biggest shortcomings of previous releases. There are a few features that I am really excited about. I would like to take the time to blog on individual features, one per week for the next few weeks, so drop me some feedback on the order you would like to see these in, or with suggestions for ADO.NET features that I might have missed.

  • Distributed transactions with System.Transactions. Finally, managed distributed transactions are easy to use: leaner, meaner, faster, and as robust as always. This is a really good feature.
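For the curious, here is a minimal sketch of what the new model looks like. The connection strings, table names, and SQL are placeholders, not real servers:

```csharp
using System;
using System.Transactions;
using System.Data.SqlClient;

class TransactionDemo
{
    static void Main()
    {
        // Both connections enlist in the same ambient transaction automatically;
        // if a second server is involved, the transaction is promoted to a
        // distributed transaction behind the scenes.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn1 = new SqlConnection("Data Source=server1;Integrated Security=SSPI"))
            {
                conn1.Open();
                new SqlCommand("INSERT INTO Orders (ShipCity) VALUES ('Seattle')", conn1).ExecuteNonQuery();
            }
            using (SqlConnection conn2 = new SqlConnection("Data Source=server2;Integrated Security=SSPI"))
            {
                conn2.Open();
                new SqlCommand("INSERT INTO Audit (Note) VALUES ('order added')", conn2).ExecuteNonQuery();
            }
            // If Complete() is never called, everything rolls back on Dispose.
            scope.Complete();
        }
    }
}
```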

  • SqlBulkUpdate. Bulk update is nothing new, but now you can do it from managed code and it is faaast. Not able to insert 100,000 rows of data into SQL Server with ADO.NET 1.1? Take one of these SqlBulkUpdates and call me in the... hmm, never mind, you are probably done by now <g>
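A sketch of the usage, written against the SqlBulkCopy class as it appears in the 2.0 class library (the connection string and table name are made up):

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkDemo
{
    // Streams every row of the DataTable to the server in one bulk operation,
    // instead of one INSERT roundtrip per row.
    static void BulkLoad(DataTable rows)
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=(local);Integrated Security=SSPI;Initial Catalog=Northwind"))
        {
            conn.Open();
            using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.TargetTable";
                bulk.WriteToServer(rows);
            }
        }
    }
}
```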

  • Adapter batch update. Nothing like a good batch to wake you up in the morning. Hey, seriously, this is not rocket science, but it is about time that adapter update worked in batches.
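In practice it is a one-property change. A minimal sketch (connection string and query are placeholders; the command builder is just the quickest way to get update commands for an example):

```csharp
using System.Data;
using System.Data.SqlClient;

class BatchDemo
{
    // Pushes the changed rows of ds to the server in batches of 100 rows
    // per roundtrip, instead of one roundtrip per row.
    static void SaveChanges(DataSet ds)
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=(local);Integrated Security=SSPI;Initial Catalog=Northwind"))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT OrderID, ShipCity FROM Orders", conn);
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            adapter.UpdateBatchSize = 100;  // 1 = old row-at-a-time behavior,
                                            // 0 = largest batch the server allows
            adapter.Update(ds, "Orders");
        }
    }
}
```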

  • Provider SDK. So you want to be a... managed provider writer? Do you hate writing a lot of code? Do you want pooling and performance counters to just work? Boy, do I have a deal for you! How much would you pay for this once-in-a-lifetime opportunity? Well, I think you know where I am going with this: we have made common base classes for all of our providers and abstracted pooling and perf counters. This is not a vanilla SDK; all of our providers derive from this common model, and so can you.
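The shape of it, sketched as a toy provider deriving from the common base classes (member names as in the 2.0 System.Data.Common; everything here is stubbed out, not a working backend):

```csharp
using System.Data;
using System.Data.Common;

// A toy third-party provider: derive from the same base class our own
// providers use, and override the abstract members.
public class MyConnection : DbConnection
{
    string _connectionString = "";

    public override string ConnectionString
    {
        get { return _connectionString; }
        set { _connectionString = value; }
    }

    public override void Open() { /* connect to the backend here */ }
    public override void Close() { /* disconnect here */ }
    public override void ChangeDatabase(string databaseName) { }

    public override string Database { get { return ""; } }
    public override string DataSource { get { return ""; } }
    public override string ServerVersion { get { return "1.0"; } }
    public override ConnectionState State { get { return ConnectionState.Closed; } }

    protected override DbCommand CreateDbCommand() { return null; }
    protected override DbTransaction BeginDbTransaction(IsolationLevel il) { return null; }
}
```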

  • Provider factory. This is not really one of my favorite features, but writing about it allows me to bash on UDA (which is, imo, a bad thing). One of the biggest requests when ADO.NET shipped was UDA, universal data access. I did not like it then and I don't like it now (I have to put in the "imo" any time I say something like that, but trust me, UDA is evil "imo"). This feature looks like UDA and smells like UDA, but it is (thankfully) not UDA. Thanks to the common classes you can now use a provider factory to write provider-independent code; at runtime you can connect to SQL Server or to Oracle with the same provider-independent code. Performance? The exact same performance you would get by connecting with OracleClient or SqlClient directly. Why is this not UDA? Well, this factory model breaks down any time you do _anything_ (I forgot to mention in the post below that I love _underscoring_ too) that is backend dependent. Not UDA, but an interesting feature.
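A sketch of what provider-independent code looks like with the factory; only the invariant name picks the backend (the helper name and its arguments are made up for the example):

```csharp
using System.Data.Common;

class FactoryDemo
{
    // Runs the same code path whether providerInvariantName is
    // "System.Data.SqlClient" or "System.Data.OracleClient".
    static object ScalarQuery(string providerInvariantName,
                              string connectionString,
                              string sql)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerInvariantName);
        using (DbConnection conn = factory.CreateConnection())
        {
            conn.ConnectionString = connectionString;
            conn.Open();
            DbCommand cmd = conn.CreateCommand();
            cmd.CommandText = sql;
            return cmd.ExecuteScalar();
        }
    }
}
```

The moment the SQL text or connection-string keywords become backend specific, the abstraction stops helping you, which is exactly why this is not UDA.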

  • SqlNotifications. I am bluffing: I don't know anything about these. I am putting this in here hoping nobody will notice.

  • Pooling improvements. I am not kidding about this one; I know about pooling. If anybody votes for this one, they are going to get hit with more information than they can shake a stick at.

Hey, it's Friday; calling it a day.

Rambling out.

Standard disclaimer, this post (and everything in this blog) is just my opinion, information here is provided “as is” and confers no rights.


Comments (9)

  1. John Lewicki says:

    Thanks for the provider factory feature. You call it an 'interesting feature', but I think it's a very important one. I've missed something like that in the framework since the 1.0 release, and promptly built my own version. I guess I will now have to retire that code once 2.0 comes out (which goes for a lot of code I've written, actually…)

  2. C. v. Berkel says:

    Well, I'm especially keen on learning the inner workings of the distributed transaction capabilities in System.Transactions, and how this API/model changes compared to the one we have with Enterprise Services / DCOM.

    Another topic which also has my interest is SQL notifications, so I'm calling your bluff 🙂 But hey… if you'd rather write about those other dull topics… be my guest.

    Just kidding; anyway, looking forward to reading your blog. Cheers.

  3. Luke Stevens says:

    OK, I’m game, why is UDA evil?

  4. Angel says:


    I agree that it is an important feature, especially because we thought we had shipped this in v1.0 with the interfaces. Since you have already cooked your own, you are aware that the interface story was fool's gold. I am really looking for feedback on this feature to make sure we have gotten it right this time.


    I keep looking for people who use distributed transactions in strictly managed components. Are you telling me that all I had to do was ask? If you are using distributed transactions, make sure to drop me a line; I would love to follow up with you.

    I guess I had better hustle on the notifications thing; I know just who to bug for information <g>


    I was afraid I was opening a can of worms with this one. It's not something I can answer fully here; I'll try to give it a go in blog form.

  5. Angel,

    Re: Adapter batch update. Nothing like a good batch to wake you up in the morning, hey seriously this is not rocket science but it is about time that adapter update worked

    I've been testing batch updates and have found that batching in Beta 1 (with any batch size greater than 1) causes a performance hit, not an improvement, using networked and local SQL Server 2000 instances.

    There’s a thread about this on the microsoft.private.whidbey.windowsforms.databinding newsgroup ("Performance Issues with Batched Updates").


  6. Angel says:


    I have been following the thread in the private newsgroup for some time. Here are my thoughts on the trade-offs.

    What we are trying to accomplish with this feature is to reduce the number of roundtrips to the server. The current behavior of adapter update is to make a single roundtrip for each row modified in the DataSet. In an extreme case, if you have a table of one column and you insert 2,000 rows, you will make 2,000 roundtrips to the server. With batch update you can send all 2,000 rows in one roundtrip.

    Ok, so back to your problem. If the client and the server are on the same machine, or in virtual PCs running on the same machine, the cost of each roundtrip can be very, very small. In the opposite extreme case, a table with 1,000 columns and two rows, the cost of batching the two rows into one roundtrip is going to be considerably higher than making two individual roundtrips to a local server.

    When the server you are connected to is not local, the per-roundtrip cost rises dramatically. This greatly offsets the client-side cost of batching rows, and the more roundtrips we can avoid, the bigger the win.


    Batching limits for SQL Server 2000: 2,000 parameters, 250 MB command.

    Batching will work better when the server is not local.

    Batching will be more beneficial on tables with fewer columns.
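    To make the trade-off concrete, here is a rough (hypothetical) way to cap the batch size by the parameter limit quoted above; the helper and its inputs are made up for illustration:

```csharp
class BatchSizing
{
    // paramsPerRow: parameter count of the generated update command.
    // serverParamLimit: the server's per-command parameter cap.
    static int SafeBatchSize(int paramsPerRow, int serverParamLimit)
    {
        if (paramsPerRow <= 0) return 1;
        int size = serverParamLimit / paramsPerRow;
        return size < 1 ? 1 : size;
    }

    // Example: ~40 parameters per row against a 2,000-parameter limit
    // caps the batch at 50 rows per roundtrip.
}
```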

    Hope this helps.

  7. Hi, Angel,

    Thanks for the response.

    If you use value-based optimistic concurrency and the table has several columns, it doesn't take a lot of updates to reach 2,000 parameters. In Beta 1, timestamps with SQL statements are far more efficient because they send only the updated value(s) and the timestamp value(s). (Unfortunately, I can't get stored procedures with timestamps to work, per that thread.) Is there a plan to do the same for value-based updates?

    My value-based tests with the Northwind Orders table show 40 params per update. Failure at 52 (not 51) updates/batch indicates that the max looks like 2,048 params. It seems easy to limit the batch size based on (2047 / DataAdapter.Parameters.Count).

    The number of params for timestamp-based concurrency management varies with the number of columns updated. Is there a foolproof method of determining the maximum batch size in this case?

    Thanks in advance,


  8. Angel says:


    You are correct; this feature becomes hard to use. It sounds like we need to find a way around the parameter issue.

    Thank you for your feedback. I have forwarded it appropriately, and we will be looking carefully at this feature.

