I’ve been talking to a lot of people internally and externally about performance, and have observed something very interesting: while people run the debugger over their code every day — stepping through it to see whether it works even when they don’t have a bug — very few people profile regularly. If the code crashes, people will usually restart it in a debugger rather than try to figure out what is wrong by inspection. But if the code is slow, people still hesitate to profile and prefer to guess at what is slow.
But guessing why something is slow (or fast) rarely works. Modern software and hardware are too complex, and what is going on in the caches, or in a large framework like .NET, is often counterintuitive. This weekend I was agonizing over how to write a new chunk of code in the most efficient way…and decided, before I proceeded, to profile the simplest, least performant option. It turned out that code was so fast it was hard to find either the CPU time it was taking or the memory it was allocating. On the flip side, I recently code reviewed a checkin, noticed nothing that made me question its performance, and the next day the performance tests showed a major regression. The coder who wrote that checkin is also on our performance virtual team…so between us you’d think we might have noticed it.
So here’s my rule: the profiler is the debugger for performance. If the program crashes, look at it in the debugger. If it is slow, profile. I guarantee that without this, NOTHING else will make your code perform.
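The post doesn’t show any code, but the rule is easy to put into practice. As a minimal sketch — in Python, using the standard-library `cProfile` and `pstats` modules, with a hypothetical `build_report` function standing in for whatever code you suspect is slow — profiling is only a few lines:

```python
import cProfile
import io
import pstats

def build_report(n):
    # Hypothetical workload: naive repeated string concatenation,
    # a classic source of hidden cost you might guess at but should measure.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Measure instead of guessing: wrap the suspect call in a profiler.
profiler = cProfile.Profile()
profiler.enable()
build_report(10000)
profiler.disable()

# Summarize the results, sorted by cumulative time, top 5 entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The printed report shows, per function, how many times it was called and where the time actually went — which is exactly the evidence that guessing cannot provide. (.NET developers would reach for their platform’s profiler instead; the habit, not the tool, is the point.)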