Even though I’ve been doing general architecture work on Visual Studio for nearly a year now, my friends in DDPERF are still plugging away on performance problems and finding some interesting results.
This most recent thread is very interesting because it shows yet another example of how the consequences of hardware changes can be subtle and very hard to predict.
Just today I was working with Cameron McColl again; this time we were trying to understand why a particular benchmark was sometimes mysteriously slower than normal for no apparent reason. To our delight we found (well, mostly Cameron found; I was the “consultant” <grin>) that the problem was in how the timing was being triggered, so the bulk of the variability seems to be measurement error rather than actual test variability. But to our chagrin, the slower time seems to more accurately reflect reality. Well, at least now we know.
How are these things related?
They remind us all that it’s very important to track down anomalies in your reported results, because otherwise you have little understanding of what is actually making things faster: what works and what doesn’t.
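The distinction between measurement error and real variability is easier to spot when the harness reports a distribution rather than a single number. Here's a minimal sketch in Python of that idea; the workload and run count are hypothetical stand-ins, not the team's actual benchmark:

```python
import statistics
import time

def benchmark(fn, runs=30):
    """Time fn() repeatedly and summarize the distribution.

    Reporting min, median, and spread instead of a single number
    makes anomalies visible rather than hiding them.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return {
        "min": min(times),
        "median": statistics.median(times),
        "stdev": statistics.stdev(times),
    }

# Hypothetical workload standing in for the real code under test.
def workload():
    sum(i * i for i in range(10_000))

stats = benchmark(workload)
# A stdev that is large relative to the median is exactly the kind of
# anomaly worth chasing down: it may reflect how the timing is being
# triggered (measurement error) rather than the code under test.
print(stats)
```

When the spread collapses after fixing the harness but the median moves, that's the situation described above: the variability was measurement error, but the slower number was the honest one.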
In the words of Alastor “Mad Eye” Moody: “Constant Vigilance!”
P.S. If you’re looking for the further adventures of the devdiv perf team, you could do worse than subscribe to their blog.