Day one of CAST 2008 consisted of day-long tutorials. I attended Scott Barber's Analyzing Performance Test Data session. I chose this session because in our next product cycle my team will need to do more performance testing and I don't have a lot of experience in that space.
The session started with introductory material about the various aspects of performance testing, which include context, criteria, design, installation, scripting, execution, analysis, reporting, and iteration. After that "setting of the stage," we looked at some charts with various arrangements of data points and were asked to speculate on what the data was saying. This was very interesting - I was surprised at what could be deduced just from the data itself without any context. There was some discussion of the danger of extrapolation based on performance data gathered from a non-production test system.
The meat of the session consisted of several experiential exercises that got us thinking about how to make sense of data with multiple dimensions. The culmination was playing the Set game (which I had never played before but is highly addictive). This got us in the mode of considering multiple dimensions of data simultaneously in real time.
Finally, Scott wrapped up by talking about effective reporting. In general, effective reporting considers the audience - what they will and won't pay attention to, and how your reporting can help them.
I didn't get any "nuts and bolts" guidance about how to do specific types of performance testing out of the session. Rather, I got an appreciation of the complexity of analyzing and reporting performance results: pulling meaning out of complex data and presenting that information in a meaningful way to stakeholders. I think those are valuable considerations that will lead to more effective performance testing, but there are prerequisites in the space that I don't have and still need to acquire. The session, to me, was a reasonable return on my investment.