As a technical writer, I find performance a scary thing to write about. You can do it, but you will invariably come under an absurd amount of scrutiny. I mention performance once in a while in an article, such as this one, and invariably I get review feedback asking about all the rigors of my test configuration, where my code is, and so on. It's usually too much to deal with.
I'm still inclined, once in a while, to mention in passing that approach A performs better than approach B, if I know B involves an added layer of processing that A skips. For instance, in the article linked above I imply that DIME performs better than SwA, since its messages are smaller and it has some processing efficiencies. Needless to say, I received feedback asking for the rigors of my testing scenario. I managed to ignore that feedback.
So when it comes to actually comparing one of our products to the performance of someone else's, I stay away from it, since you have to be so thorough. When Microsoft does this, they usually hand it over to a third party, as they did with the Pet Shop benchmark. Publishing your tests' source code is certainly one part of what should happen. Having someone other than yourselves write the test code and run the tests is another. Really good comparisons allow the platform vendors to provide input on squeezing the best performance out of their platform.
I do take issue with Microsoft's assertion that sending a minimal message does not make a good comparison. It does make a good comparison in the sense that it measures our Web service infrastructure directly against the J2EE infrastructure. But Microsoft's point is that this infrastructure accounts for only a certain percentage of your actual Web service processing. So if J2EE on Tomcat is twice as fast as .NET on IIS at handling a minimal SOAP message, and that part of the processing is only 5% of your entire Web service workload, then you are only saving yourself 2.5% overall. Add the claim that .NET is more efficient in other areas, and when you look at the whole picture you are better off on .NET.
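That arithmetic is just the Amdahl's-law observation that speeding up a small slice of the work yields a small overall gain. A quick sketch, using the hypothetical numbers from the paragraph above (the 5% and 2x figures are illustrative, not measured):

```python
# Sketch of the savings arithmetic above, with the hypothetical numbers
# from the text (not real benchmark data).

def overall_saving(fraction, speedup):
    """Fraction of total time saved when `fraction` of the work runs `speedup`x faster."""
    return fraction - fraction / speedup

# Minimal-SOAP-message handling is 5% of the work; one stack does it 2x faster.
saving = overall_saving(0.05, 2)
print(f"{saving:.1%}")  # prints 2.5%
```

Even a 10x speedup on that same 5% slice would save only 4.5% of the total, which is the heart of the objection to minimal-message benchmarks.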
The better point, in my mind, is that Tomcat is a bare-bones Web server compared to IIS, and that JavaBeans and other technologies were not incorporated into the tests. The .NET approach makes a ton of functionality available to you, and there is a certain cost associated with that. Of course, the full-fledged Java Web servers aren't part of Java itself, so I'm not sure a real comparison is possible. It should just be noted that there is a difference.
Here's The Server Side .NET's post on the topic.