Profiling is great! … What does it do?


The profiler has been gone from Visual Studio for a while.  I think… 6.0? Or maybe even 5.0 was the last incarnation.  The core idea is to provide a tool that finds the parts of your application preventing it from meeting performance goals, and helps document where the critical time is being spent in the application.


As such, I would like anyone reading this to leave a comment and let me know what you expect a profiler to be, and what you expect it to do.


Should it only tell you about your CPU so you can find bad algorithms? Should it integrate other system information? And if so, what? Thread interactions? Perfmon data? System event tracing logs? What needs to correlate? What doesn’t?  Does it have to work for .NET code? Native code? Mixed? ASP.NET? Code running on my local machine? Code running in a deployed environment?


Obviously the answer is: do all of it, all the time.  But let’s be realistic: what information needs to be surfaced quickly and easily for you to make sure your projects have the best performance they can, and what would just be ‘nice to have’?


[jrohde]


Comments (7)

  1. Eric says:

    I’m happy to answer the question, based upon my extensive use of profilers for 20 years or so.

    A profiler needs to find bottlenecks, so they can be fixed. The bottlenecks are often algorithms. Profilers don’t work well when the bottleneck is outside the code, e.g. disk I/O or the database engine. This is an area where some good advances could be made.

    Effective use of profilers requires repeatable tests. A profiler should ideally come with some sort of UI or event-capturing tool with identical playback, e.g. one that records all keystrokes.

    To summarize:

    * profiler highlights bottleneck

    * developer fixes code

    * re-run profiler with same test, see if bottleneck fixed

    * repeat until no more bottlenecks
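
    That measure → fix → re-measure loop can be sketched with nothing more than a wall-clock timer. This is a hypothetical illustration (the `time_call` helper and `slow_sum` hotspot are made up, not part of any profiler API); the key point is Eric’s repeatability requirement — use the same input every run so the before/after numbers are comparable.

    ```cpp
    #include <chrono>
    #include <cstdio>

    // Hypothetical helper: time one call to f and return elapsed milliseconds.
    template <typename F>
    double time_call(F f) {
        auto start = std::chrono::steady_clock::now();
        f();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    // Candidate bottleneck: a naive O(n^2) sum of pairwise products.
    long long slow_sum(int n) {
        long long total = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                total += (long long)i * j;
        return total;
    }

    int main() {
        // 1. Measure with a fixed, repeatable input (the same "test" every run).
        double before = time_call([] { slow_sum(2000); });
        std::printf("slow_sum: %.2f ms\n", before);
        // 2. Fix the algorithm, 3. re-run the identical measurement,
        // 4. repeat until the hotspot no longer dominates.
        return 0;
    }
    ```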

  2. Hey Eric, thanks a lot for the feedback. If you are at TechEd, try to catch the VSTS demo general session tomorrow. One scenario they will be demonstrating is integration with Web stress testing. You can record a load test, then start up the profiler and start running stress and you will have a report showing where your bottlenecks are for that load. We also provide command line tools that will allow you to collect data while running the test harness of your choice. (May we recommend the testing tools bundled with the Team System. 😉 )

  3. Eric says:

    Sorry, not at TechEd. I’ve had a brief look at the VSTS docs; looks good.

    A few more thoughts:

    Profilers fall down in isolating particular bottlenecks. Ideally I want to measure just the throughput of the 3D card, ignoring memory/disk I/O. In this case, the question is where the bottleneck is – card I/O, card memory, card rendering, etc.

  4. Waldemar Sauer says:

    I really missed the profiler from VS.NET & VS.NET 2003, so I really would welcome it back. If you’re trying to cut down on development time, I would say that as far as speed is concerned, profiling native applications is much more important than profiling anything managed. The usual design philosophy to which I subscribe (and I suppose many others do too) is this: Java/C#/any of those are sluggish and slow; use them to cut development time because they are very high level and easy to use, but they are by no means known for being fast. Whenever you do have an algorithm that uses an awesome amount of CPU, write it in C++, and if you need more power, use assembly: possibly embedded as a sequence of __asm { … } blocks in the C/C++ code, or in a separate file if there’s a lot of it. I don’t see MMX, etc. being incorporated into C# anytime soon. Code coverage is enough for the managed family, because it’s used to verify code correctness.

    Oh yes, and something that would be really nice is if you can have a C++ function header, and just right click on it, and say: "Profile this function". I remember in the VS6 profiler, whenever you wanted to do this, you had to first compile a map file for your project, look up the function’s symbol name with all the "@@!1212" and what not, and then you had to look up the profiler’s weird command-line syntax, and only then could you profile the function. Quite a mess.

    Hope this helps

    – Waldemar

  5. Don’t worry, Waldemar. I assure you we are not skimping on native code. We are going to deliver native, managed, and mixed mode profiling.

  6. Joe Rohde [MSFT] says:

    I agree, Waldemar. While we balance our efforts, I am of the opinion that in native code, you really want to know about CPU. With reference to the 3D card, we should be able to tell you how much time you spend in each D3D call (or any other external call), but we may not be able to break down what’s happening inside that call. I can pretty much guarantee you that at this point we won’t (in V1!) be allowing profiling of shader code.

    For Managed, I think we want to do a bit more than code coverage. There is a big issue around ‘What!?! How many objects got allocated/collected during that method call?!?’. So we put extra effort on this side to try to help the developer get their code streamlined to the environment.
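
    The ‘how many objects got allocated during that call’ question can be illustrated even in native C++ by replacing the global allocator. This is a hypothetical sketch (`allocations_during` and `churn` are invented names), not how the CLR or the VS profiler actually tracks allocations:

    ```cpp
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Global counter bumped by every heap allocation in this program.
    static std::size_t g_allocs = 0;

    void* operator new(std::size_t size) {
        ++g_allocs;
        if (void* p = std::malloc(size))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void* p) noexcept { std::free(p); }

    // Hypothetical per-call measurement: allocations made while fn runs.
    std::size_t allocations_during(void (*fn)()) {
        std::size_t before = g_allocs;
        fn();
        return g_allocs - before;
    }

    int* g_sink = nullptr;  // escapes the pointer so allocations aren't elided

    void churn() {
        for (int i = 0; i < 10; ++i) {
            g_sink = new int(i);  // ten short-lived heap objects
            delete g_sink;
        }
    }

    int main() {
        std::printf("allocations during churn: %zu\n", allocations_during(churn));
        return 0;
    }
    ```

    A managed-code profiler answers the same question without any instrumentation in the user’s code, which is exactly why it needs to report more than code coverage.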

    The system-level events that Eric mentions are tricky. Ideally we would like to incorporate system-level events (ETW), but without accurate cycle values for when those events occurred, trying to interleave that information with a trace of function calls risks being misleading. 🙁
