I get many opportunities to review documents and processes in the course of my job, and sometimes they’re not even about performance. 🙂
About two years ago I started seeing a goodly number of security-related documents, and now I increasingly see things about the Security Development Lifecycle. The thing that struck me then, as now, is that in many of these documents you could basically do a mass substitution of “performance” where it says “security,” change a few metrics, and the document would still read fairly well. I suppose this shouldn’t be too terribly surprising because, after all, performance and security are both quality attributes, and many of the processes that you use for one are perfect for the other.
If you have a moment, pick up your favorite security process literature – you do have some handy, right? You don’t? Oops, shame on you; better go get some, it’s important stuff – and have a look at how the security best practices you’re using compare to your performance practices. It’s interesting to see just how much cross-fertilization is possible.
I think one of the most important notions that performance engineers can borrow from the security lifecycle is that there are many different activities, that each stage of the project has activities appropriate to it, and that concentrating massive effort at any one stage is the wrong idea.
If you’ve ever heard me speak you’ll probably remember me cautioning that the old cliché “Premature Optimization is the Root of All Evil” often leads to very bad thinking like “Just make it work the easy way first and worry about making it fast later.” Now of course the reason that this is bad is that the most egregious performance mistakes are generally not in execution but in design, and those mistakes are made very early in the project – it’s important to have a design that is sound from a performance perspective. That means at least some consideration should be given to performance right away.
I’m quite certain that Tony Hoare (whom Don Knuth was quoting when he made the phrase popular) didn’t mean that all performance work should be done at the end. His caution, and it is a good one, is that doing a lot of micro-tuning in the early stages of a project, without good data to support it, is much more likely to introduce unneeded complexity than it is to actually help your performance.
This brings us full circle. Hoare’s admonition is just one example of doing the wrong performance activity at the wrong time in your project’s lifecycle. There is a time to micro-tune, and it isn’t on the whiteboard on the first day of your project. At that point you should be thinking about your overall goals: what you will measure and how, how you will track performance regressions, what the key resources will be, what dependencies you can afford, and other overarching issues. As you move along in the project you’ll find that different activities start to become appropriate and rewarding.
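To make the “track performance regressions” idea concrete, here’s a minimal sketch of the kind of check a team might run from early in a project: compare fresh timings against recorded baselines and flag anything that slipped past a tolerance. This is purely illustrative – the scenario names, baseline values, and 10% threshold are all assumptions of mine, not anything from the documents discussed here.

```python
import time

# Hypothetical baselines (seconds) recorded from an earlier, accepted build.
BASELINES = {"parse_config": 0.050, "render_page": 0.200}

# Allow this much slowdown relative to baseline before flagging (10%).
TOLERANCE = 0.10

def measure(func, *args, repeats=5):
    """Return the best wall-clock time over several runs of func.

    Taking the minimum of several runs reduces noise from the OS scheduler.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best

def check_regression(name, measured):
    """True if the measured time exceeds the baseline by more than TOLERANCE."""
    baseline = BASELINES[name]
    return measured > baseline * (1 + TOLERANCE)
```

For example, a 0.060 s measurement for “parse_config” is 20% over its 0.050 s baseline and would be flagged, while 0.205 s for “render_page” sits inside the 10% band and would pass. The point isn’t this particular script; it’s that the decision about what to measure and what counts as a regression is cheap to make on day one and expensive to retrofit later.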
If you expend all your performance effort at any one stage in your project’s life then you can expect a disaster. Instead balance the time you have available and invest in performance in appropriate ways throughout the cycle.
To see more of these similarities, have a look at the lifecycle approaches being taken by the Patterns and Practices team. Look at the Security Engineering documents and the by-design similarities between Improving Web App Security and Improving .NET Performance and Scalability.