ObjectSpaces’ performance requirements


A comment on my “ObjectSpaces Myths” post asked:


“It's NOT a myth that ObjectSpaces is spec'ed to be 40% slower than DataSets, correct?”


The comparison is actually not that simple.  There
are two basic scenarios for reading from the data store:  1)
streaming and 2) filling a cache.


For streaming, the ObjectSpaces goal is to be within 30% of SqlDataReader.  That
is, if you streamed out the same set of relational data through the SqlDataReader versus
the ObjectReader, the goal is to see about a 30% performance difference.  In
general, this performance difference comes from materializing the objects and
some overhead from the mapping layer.  Note
that if one were to materialize their own objects over the SqlDataReader, they probably
wouldn’t see a significant difference between ObjectSpaces and their custom solution.
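To make that overhead concrete, here is a minimal sketch (in Python, with made-up names and data — not the ObjectSpaces API) of what “materializing” adds on top of raw row streaming: for each row, the mapper applies a column-to-constructor mapping and allocates an object, rather than just handing back field values.

```python
# Hypothetical sketch: raw streaming vs. object materialization.
# The extra per-row work (mapping lookup + object construction) is
# where the ~30% overhead versus a raw reader comes from.

class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

# A stand-in for rows streamed from a SqlDataReader.
rows = [(1, "Alice"), (2, "Bob")]

# Raw streaming: just read field values off each row.
raw = [row[1] for row in rows]

# "Materialized" streaming: apply a column->constructor mapping per row,
# allocating one object per row.
mapping = lambda row: Customer(customer_id=row[0], name=row[1])
objects = [mapping(row) for row in rows]

print([c.name for c in objects])  # same data, one object allocated per row
```

The data coming out is identical; the difference is purely the per-row allocation and mapping work, which is also why hand-rolled materialization over SqlDataReader lands in roughly the same place.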


When filling the ObjectSet cache with the results of an ObjectQuery, the performance goal
is to be on par with filling a DataSet from a SqlDataReader, particularly for hierarchical
cases with several levels.
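For contrast with the streaming case, here is a minimal sketch (Python, hypothetical data — not the actual ObjectSpaces or DataSet API) of filling a two-level cache from a single server-side joined rowset: the repeated parent columns have to be de-duplicated (“normalized”) client-side while the child rows accumulate.

```python
# Hypothetical sketch: filling a two-level cache (orders under customers)
# from one server-side joined rowset. The parent columns repeat on every
# child row and must be de-duplicated while filling.

# (customer_id, customer_name, order_id) -- a joined master/detail rowset
joined_rows = [
    (1, "Alice", 10),
    (1, "Alice", 11),
    (2, "Bob", 12),
]

cache = {}  # customer_id -> {"name": ..., "orders": [...]}
for cust_id, name, order_id in joined_rows:
    # setdefault creates the parent entry only on first sight of the key.
    entry = cache.setdefault(cust_id, {"name": name, "orders": []})
    entry["orders"].append(order_id)

print(cache[1]["orders"])  # [10, 11]
```

The redundant parent data on the wire, and the de-duplication pass, are the costs the MARS-based approach described next is meant to avoid.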


Along
the same lines, ObjectSpaces will leverage the MARS (Multiple Active Result Sets) support in
SQL Server.  Therefore, for master-detail type queries,
the overall performance will tend to be better, since the query engine will perform
a merge join over the streamed results.  So
unless one does a join on the server and then normalizes the results themselves, it
will be hard to beat ObjectSpaces’ performance.  As
the hierarchy becomes deeper and more complex, this performance gain will be more
noticeable.
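As a rough illustration of that merge join (a Python sketch over made-up data, not ObjectSpaces code): with two independently streamed result sets, each sorted by the join key, each stream is read exactly once, in order, and children are stitched under their parent as both streams advance.

```python
# Hypothetical sketch of a merge join over two independently streamed
# result sets (the kind of concurrent streams MARS allows), both sorted
# by customer id. Each stream is consumed once; no server-side join and
# no client-side de-duplication of repeated parent columns is needed.

customers = iter([(1, "Alice"), (2, "Bob"), (3, "Carol")])  # sorted by id
orders = iter([(1, 10), (1, 11), (3, 12)])                  # sorted by customer id

result = []
order = next(orders, None)
for cust_id, name in customers:
    # Consume all order rows belonging to the current customer.
    child_orders = []
    while order is not None and order[0] == cust_id:
        child_orders.append(order[1])
        order = next(orders, None)
    result.append((name, child_orders))

print(result)  # [('Alice', [10, 11]), ('Bob', []), ('Carol', [12])]
```

The same pattern extends to deeper hierarchies with one sorted stream per level, which is why the gain grows as the hierarchy gets deeper.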



Comments (6)
  1. peter says:

    Can you further explain what you mean by a ‘merge join’? How many SQL statements will OS submit in that case?

  2. Read up on MARS on the internet. It is a pretty interesting technology. I am still not really convinced, though, that this "primitive" approach is right, and whether a good cache could not just kill the advantages of MARS. Not saying it is so – just thinking that a query that is answered from a cache and never goes to the database is simply way more efficient.

    ::As the hierarchy becomes deeper and more complex, this performance gain
    ::will be more noticeable.

    This sounds like the mistake here. I can hardly imagine any real condition where:
    * I would preload more than two layers deep of hierarchy.
    * None of the layers would actually gain from a CACHE, totally avoiding the hit on the database.
    You naturally can optimize the data access, but normally my approach is that the further "up" in the application you can cache, the faster the app is. Returning a complete object from a cache is faster (especially when multiple DBs are involved) than going to the database, returning a precalculated cached page is faster than generating it, etc.

    And in this area, ObjectSpaces is totally lacking. No sensible caching system at all.

    You also bring up another good point. I see a large mistake in ObjectSpaces actually submitting SQL statements. I seriously consider moving the complete data access layer INTO the database using the new C#-written procedures. Not saying either that this is more efficient – just thinking out loud. At least for non-queries (meaning the CRUD operations), putting the object metadata/schema INTO the database can significantly improve performance. ESPECIALLY if you don't have to use this bad little thing named "SQL stored procedure" but can really use C#-written procedures. Our own mapper is already terribly efficient in generating nice reusable SQL code for CRUD operations, but the main problem is the database side 🙂 Looks like a move into the db.

  3. Paul Gielens says:

    Thanks for clearing that up. You guys got some great stuff coming up, but why so late? The majority of .NET acceptors already made significant investments in or-mapping technology. Let alone the companies filling the gap.

  4. ::The majority of .NET acceptors already made significant investments in or-
    ::mapping technology. Let alone the companies filling the gap.

    Very interesting statement. As a matter of fact, in my last session with our lawyers (which we just do every couple of months – you always have to clear things up, have them go over the standard contracts, etc.), I was advised that there could be some potential avenues heading in MS’ direction.

    We DO have significant investments ourselves in O/R technology, offering one of the better frameworks in the market. Interestingly enough, this is an investment I am not yet determined to write off. I know a lot of other companies in a similar position.

    Interestingly enough, compared to the Java camp (JDO), where the persistence layer was defined as a replaceable layer, MS has actually integrated everything in such a way that replacing is not an option.

    I do smell a "coming late and then destroying competitors" here. Isn’t this what happened to our little friends at Netscape, sort of? Hm. Any of our "competitors" have an appetite for some legal intervention to actually have MS NOT destroy other companies using a monopoly? 🙂

    Just joking, naturally. At least for now.

  5. Thomas, I don’t get your point: you will still be able to use EntityBroker or any other ORM tool for that matter in V2, right? – You still use ADO.NET, .NET Framework and/or other parts of the operating system/environment with no penalty, right? – I don’t even see the slightest hint of MS ‘forcing’ OS onto the developers, not even jokingly…

Comments are closed.
