Recall the original question asked a bit of trivia: could you name 5 implementations of GetHashCode in the framework that do things you might not expect to see in a hash function? It’s a bit of a vague question and it requires a good deal of arcane knowledge of the CLR. The most interesting thing about the discussion, I think, is just what people considered “unusual.” But, as always, there was a method to my madness.
So without further ado, here’s my list of “unusual” hash functions; there are 53 of them.
And you can already see why I think they are unusual: they allocate. In my opinion any “normal” hash function should be able to do its job without creating any temporary objects. Given the frequency with which things can be hashed, I think that’s important. That doesn’t mean I’m going to go to Red Alert over any of the above, but it does mean we should think more carefully before creating big hash tables keyed on them.
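The pattern is easy to illustrate even without naming any of the 53. The framework in question is .NET, but the idea is language-agnostic, so here is a hypothetical Java sketch (the `Point` type and both method names are mine, not from the framework) contrasting a hash that allocates temporaries with one that doesn’t:

```java
// Hypothetical example: two ways to hash the same pair of fields.
final class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // "Unusual": allocates a temporary String (and a StringBuilder
    // behind the scenes) on every call, just to compute a hash.
    int allocatingHashCode() {
        return (x + "," + y).hashCode();
    }

    // "Normal": pure arithmetic on the fields, no temporary objects.
    int allocationFreeHashCode() {
        return 31 * x + y;
    }
}
```

Both versions are correct hash functions; only the second is free to be called millions of times inside a hash table without feeding the garbage collector.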
OK, so, great, an interesting observation. What’s the point?
Well, while I believe that the particulars of this question are not profound, the notion that you can have a cost metric applied universally to the runtime — even one that is as simple-minded as the one I just proposed — *is* profound.
Even though my “allocation complexity” metric may be feeble, it is already useful. In fact, it is my hope that many people will find it feeble and write about how their metric is much better, and, by the way, here it is, the much better metric, in text format, etc. etc. Imagine if we had methods scored in a variety of ways, all readily available and pluggable into IntelliSense. You could have scores relating to synchronization, I/O, memory, algorithmic complexity, even measured data where appropriate. Other engineering disciplines have abundant information about their raw materials, but for those of us in the software business it’s often either guesswork or our own hard-won data.
Let’s look at what you can do with even my feeble metric. In my very first quiz I asked (in part) whether it would be better to call Write three times on a stream or create a format string and call it once. Can we answer that question with the published metric?
Here are the relevant two lines from my costs file:
That’s not as good as the actual measurement but, wow, that’s something now isn’t it? Rough guidance to give you a clue! A great “tip” that the var-args formatting option has some underlying cost.
Even these rough costs are very powerful in terms of helping you write code. Suppose you’re writing a new “Foo” and you want it to have roughly the same cost as the existing “Foo”. You could go look at how the existing “Foo” methods score, which gives you some idea what costs you can afford in your new “Foo”.
Importantly, you must observe Rico’s Rule: if you are writing a new “Foo” method and you want that method to have a cost of X (on any metric), then you must not use any methods whose cost is known to be greater than X. That simple rule alone can save you from many costly mistakes.
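Rico’s Rule is mechanical enough to sketch as a lint-style check. Everything below is hypothetical: the cost categories and the two-entry cost table are made-up illustrations of what a published, pluggable costs file could feed into such a check:

```java
import java.util.Map;

// Hypothetical sketch of Rico's Rule as an automated check.
// The categories and the sample cost table are illustrative only.
class CostCheck {
    // Ordered from cheapest to most expensive on the allocation metric.
    enum Cost { NONE, ALLOCATES, ALLOCATES_HEAVILY }

    // A tiny stand-in for a published costs file.
    static final Map<String, Cost> KNOWN_COSTS = Map.of(
        "Writer.write(String)", Cost.NONE,
        "String.format(String, Object...)", Cost.ALLOCATES);

    // Rico's Rule: a method budgeted at cost X may only call methods
    // whose known cost is <= X. Unknown methods fail the check until
    // someone scores them.
    static boolean allowedWithin(Cost budget, String callee) {
        Cost c = KNOWN_COSTS.get(callee);
        return c != null && c.ordinal() <= budget.ordinal();
    }
}
```

With real data behind the table, a tool (or IntelliSense itself) could flag the violation the moment you type the call, rather than after a profiling session.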
Finally, you have some way to answer the question: is it even remotely reasonable to use a given method in a given context? Certainly this particular metric is imperfect, maybe even feeble, but it’s something. And it will only get better if we all work on it.