Maintaining balance [A versatile red-black tree implementation for .NET (via Silverlight/WPF Charting)]


This blog has moved to a new location and comments have been disabled.

All old posts, new posts, and comments can be found on The blog of dlaa.me.

See you there!

Comments (12)
  1. Thank you for submitting this cool story – Trackback from DotNetShoutout

  2. obsid says:

    You know, it would be nice if we could “automatically” use these kinds of cool algorithms just by using an OrderedDictionary.  Clearly the current OrderedDictionary works great for reading values/keys, but is horrible for insert/delete or memory size.  It would be nice if there were a way to do

    var myDictionary = new OrderedDictionary(0, 1, 0, 10000);

    where there is an overload of the constructor taking PerfMemory, PerfRead, PerfWrite, and PerfSize; the first three expect a float between 0 and 1, and PerfSize is an estimated item count.  In the case above, I do almost all read operations, don’t care about memory or insert/delete, and expect about 10000 items.  Based on which things are important, it would select an algorithm that would be good for that situation.  Most (good) algorithms involve tradeoffs between situations which weight each of the above differently.  In this way, rather than selecting the algorithm that will be used, the programmer is just expressing the importance of various perf goals.
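A minimal sketch of what such a hint-driven factory might look like (every name here is hypothetical; this illustrates the suggestion, not an actual .NET API):

```csharp
using System;
using System.Collections.Generic;

static class CollectionFactory
{
    // Hypothetical factory sketching the suggestion above: pick a dictionary
    // implementation from performance hints rather than by concrete type name.
    // perfMemory, perfRead, and perfWrite are weights in [0, 1];
    // estimatedSize is an expected item count used for pre-allocation.
    public static IDictionary<TKey, TValue> Create<TKey, TValue>(
        double perfMemory, double perfRead, double perfWrite, int estimatedSize)
    {
        // Write-heavy ordered workloads favor a balanced tree
        // (SortedDictionary: O(log n) insert/delete); read-heavy ones can use
        // a compact sorted array (SortedList: O(log n) read, O(n) insert).
        if (perfWrite > perfRead)
        {
            return new SortedDictionary<TKey, TValue>();
        }
        return new SortedList<TKey, TValue>(estimatedSize);
    }
}
```

With this sketch, the example call above becomes `CollectionFactory.Create<string, int>(0, 1, 0, 10000)`, which would return the read-optimized, memory-compact choice.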

  3. David Anson says:

    obsid,

    This is an interesting idea! I’m not sure how practical it is to try to collapse what can sometimes be pretty complicated performance decisions into just 4 parameters, but I see your point and think this might not be so hard at the low end (i.e., for simpler scenarios). I’ll pass this on to a couple of folks I work with to see what they think. Thanks very much for the feedback!

  4. David Anson says:

    obsid,

    I passed this on to someone internally, and their reaction was similar to mine. 🙂 The suggestion was to create an issue for this on Connect (http://connect.microsoft.com/) so that other customers can see it and vote for it. If enough people start asking for this, that will increase its chances of being incorporated into a future version of .NET.

    Hope this helps! 🙂

  5. hintzen says:

    Have you looked at using a SkipList instead of a binary tree?

    http://en.wikipedia.org/wiki/Skip_list

    It has efficiency right up there with a binary tree (number of probes proportional to log n) along with super fast insert/delete and a simple implementation.

    I really don’t understand why the SkipList is not looked at as the first alternative to binary trees; Microsoft sponsored the original work on this theory, I believe.
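For readers curious about the structure described above, a minimal skip list (search and insert only) can be sketched as follows. This is an illustrative implementation of the general idea from the linked article, not code from Charting; all names are mine:

```csharp
using System;

// Illustrative skip list: expected O(log n) search and insert.
// A fixed seed keeps the level choices deterministic for this sketch.
class SkipList<T> where T : IComparable<T>
{
    const int MaxLevel = 16;
    readonly Random _rand = new Random(0);

    class Node
    {
        public T Value;
        public Node[] Next;
        public Node(T value, int level) { Value = value; Next = new Node[level]; }
    }

    readonly Node _head = new Node(default(T), MaxLevel);
    int _level = 1;

    // Coin-flip level assignment: each level is half as likely as the last.
    int RandomLevel()
    {
        int level = 1;
        while (level < MaxLevel && _rand.Next(2) == 0) level++;
        return level;
    }

    public void Insert(T value)
    {
        var update = new Node[MaxLevel];
        var node = _head;
        // Record the rightmost node before the insertion point at each level.
        for (int i = _level - 1; i >= 0; i--)
        {
            while (node.Next[i] != null && node.Next[i].Value.CompareTo(value) < 0)
                node = node.Next[i];
            update[i] = node;
        }
        int newLevel = RandomLevel();
        if (newLevel > _level)
        {
            for (int i = _level; i < newLevel; i++) update[i] = _head;
            _level = newLevel;
        }
        // Splice the new node into every level it participates in.
        var newNode = new Node(value, newLevel);
        for (int i = 0; i < newLevel; i++)
        {
            newNode.Next[i] = update[i].Next[i];
            update[i].Next[i] = newNode;
        }
    }

    public bool Contains(T value)
    {
        var node = _head;
        // Descend from the sparsest level to the densest, moving right as far
        // as possible at each level before dropping down.
        for (int i = _level - 1; i >= 0; i--)
            while (node.Next[i] != null && node.Next[i].Value.CompareTo(value) < 0)
                node = node.Next[i];
        node = node.Next[0];
        return node != null && node.Value.CompareTo(value) == 0;
    }
}
```

The appeal is visible in the code: no rotations or recoloring, just forward-pointer splicing, which is why skip lists are often cited as simpler to implement than balanced trees.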

  6. David Anson says:

    hintzen,

    I’d been only peripherally aware of skip lists before reading that article – thank you very much for sharing it! The ideas there sound interesting and the claim that skip lists are simpler than competing data structures is worth looking into. In our case with Charting, because we already had code that was working with a binary tree, switching to a red-black tree seemed like a pretty simple and risk-free move. The code for the left-leaning red-black tree ended up being relatively compact, so I’d be interested to see how a skip list implementation compares. At any rate, if we end up revisiting this decision some day, I’ll definitely give skip lists some consideration.

    Thanks again!

  7. isaiah says:

    Interesting implementation of an RB tree. One question: which variant are you implementing, a 2-3 tree as a red-black tree or a 2-3-4 tree as a red-black tree? In Robert Sedgewick's paper http://www.cs.princeton.edu/…/RedBlack.pdf, the delete operation is for the 2-3 tree variant; the assumption is that there are no 4-nodes. Your delete implementation seems almost identical, but it seems in your version there can be 4-nodes. Would you be able to clarify whether you are in fact using the 2-3-4 tree to implement the RB tree?

  8. David Anson says:

    isaiah,

    The way I went about this was that I started by creating a direct C# implementation of Dr. Sedgewick's algorithm as best I could. Then I subjected it to a bunch of testing. In the process of testing, I identified a handful of bugs. As I recall, some of them *seemed* like they were present in the reference implementation as well. But however they came about, I attempted to fix them all as simply as possible – and in doing so did *not* spend time mapping things back to 2-3-4 trees for context. So, to answer your question, I believe that my implementation is the same as what's described in the research paper I link to *plus* some bug fixes. In the specific example you give, it *may* be the case that his delete didn't handle 4-nodes, but I found they were possible in practice and tweaked my implementation to handle them properly.

    Hope this helps!
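For context on the 2-3 vs. 2-3-4 question, the standard left-leaning red-black insert fix-up from Sedgewick's paper can be sketched as below. This is a simplified illustration with integer keys, not the actual Charting source; the color flip in the last fix-up step is exactly where a temporary 4-node gets split:

```csharp
using System;

class LlrbSketch
{
    // Simplified left-leaning red-black tree node (illustrative only).
    public class Node
    {
        public int Key;
        public Node Left, Right;
        public bool Red;
        public Node(int key) { Key = key; Red = true; } // new nodes start red
    }

    static bool IsRed(Node n) { return n != null && n.Red; }

    static Node RotateLeft(Node h)
    {
        Node x = h.Right; h.Right = x.Left; x.Left = h;
        x.Red = h.Red; h.Red = true;
        return x;
    }

    static Node RotateRight(Node h)
    {
        Node x = h.Left; h.Left = x.Right; x.Right = h;
        x.Red = h.Red; h.Red = true;
        return x;
    }

    static void FlipColors(Node h)
    {
        h.Red = !h.Red; h.Left.Red = !h.Left.Red; h.Right.Red = !h.Right.Red;
    }

    public static Node Insert(Node h, int key)
    {
        if (h == null) return new Node(key);
        if (key < h.Key) h.Left = Insert(h.Left, key);
        else if (key > h.Key) h.Right = Insert(h.Right, key);
        // Fix-up on the way back up the recursion:
        if (IsRed(h.Right) && !IsRed(h.Left)) h = RotateLeft(h);      // lean left
        if (IsRed(h.Left) && IsRed(h.Left.Left)) h = RotateRight(h);  // rebalance
        if (IsRed(h.Left) && IsRed(h.Right)) FlipColors(h);           // split a 4-node
        return h;
    }
}
```

After each top-level insert the caller recolors the root black (`root.Red = false`), which preserves the red-black invariants.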

  9. isaiah says:

    Thanks for your reply, David. I am in the process of implementing one as a learning exercise. I will write a post detailing my results if I come across anything interesting.

  10. sebas says:

    Hi,

    I am trying to use your library, but I have a question: why did you write separate code for the multi-dictionary implementation? Would it not be enough to have a comparison function that handles keys with the same value to solve the issue?

  11. David Anson says:

    sebas,

    I'm not sure I understand the question – but if you haven't done so already, please read the binary tree post I link to in the introduction, because it explains a bit about the need for an ordered multi-dictionary.
