If you were to compare the taste of the oranges in the fruit basket at work to the taste of the apples you bought on your way home, and you realized that they taste different, what would your conclusion about the difference in taste be?
a) They taste different because one is an apple and one is an orange
b) The apple tastes better because you eat the apple at home, relaxed on your sofa, and the orange while you are coding at work
c) The apple tastes better because it is fresh from the store (which may or may not actually be fresh, but anyway :)), and the orange has been sitting in the fruit basket for a week
d) The apple tastes better because it wasn't sprayed with some bug-killing spray
e) Any of the above could be affecting the difference in taste
I'm guessing that your answer will probably be e). At least that would be mine.
The reason I am bringing up this question, which at first glance doesn't seem to have anything to do with debugging or troubleshooting, will become very apparent in a few seconds...
We often get questions like this one: "My ASP.NET application worked really well on 1.1, but when we moved to 2.0 the request execution time for my ASP.NET pages went through the roof. Can you provide me with a list of changes between 1.1 and 2.0, or tell me what changes may have caused this performance degradation?"
Don't get me wrong, there are probably many changes between 1.1 and 2.0 that could potentially cause a performance degradation in a given scenario. But even if I did have a ready-compiled list of the almost infinite number of changes between 1.1 and 2.0, and you had time to go through them all and see which ones applied to your situation, that would probably not help much.
The problem is that in most of these cases the tests fail a very important criterion for performance comparisons:
"make sure that you are comparing apples to apples, and that you know what variables changed between your tests, so you know where to put your efforts"
In fact, when you start probing, you often discover that one or more of the following statements are true:
- The observations are made in production, so no proper performance test has been done that would let you compare actual data from both versions
- When moving to 2.0 a substantial amount of code was rewritten as part of the upgrade
- When moving to 2.0, the new application was put on a different server with different configuration settings and different hardware, and in some cases the OS was upgraded at the same time, or at least some service packs or security updates were applied
- The 1.1 observations are made in production, while the 2.0 observations are made on a lab machine with different hardware, connecting to a test database, and so on
Still, it is often assumed that the thing that caused the difference was changes between 1.1 and 2.0... which, going back to the apples-and-oranges question, would be like assuming that only d) could apply.
Unless you have a fairly small subset of code that performs differently on 1.1 and 2.0 under controlled conditions (i.e. the exact same code, the same stress test, the same number of users, on the same machine with the same configuration, etc.), your best bet for figuring out what is causing the slowdown is probably to troubleshoot it as a performance issue or hang, rather than to troubleshoot it based on the assumption that the difference is caused by differences between 1.1 and 2.0.
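To make the "apples to apples" idea concrete, here is a minimal sketch of what a controlled comparison looks like: time the same workload repeatedly, take the median, and change exactly one variable between the two runs. The workload function here is a made-up CPU-bound stand-in, not real ASP.NET page code, and the numbers are purely illustrative.

```python
import statistics
import time

def measure(fn, runs=5):
    """Run fn several times and return the median elapsed seconds.

    Using the median of several runs smooths out one-off noise
    (GC pauses, caching effects on the first run, etc.).
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def workload(n):
    # Hypothetical stand-in for a page request: some CPU-bound work.
    return sum(i * i for i in range(n))

# Change exactly ONE variable (here, the input size) between the
# two measurements; everything else -- machine, code, run count --
# stays identical, so a difference in timing is attributable to it.
baseline = measure(lambda: workload(10_000))
candidate = measure(lambda: workload(20_000))
print(f"baseline:  {baseline:.6f}s")
print(f"candidate: {candidate:.6f}s")
```

If your real comparison changes several things at once (framework version, hardware, OS, rewritten code), no amount of timing will tell you which one is responsible.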
Just my 0,02€