ASP.NET Performance By Design: Takeaways From PDC

Alik Levin

During PDC, there were five dedicated sessions on improving performance in .NET, titled "Performance By Design". The presenters were Rico Mariani, Vance Morrison, and Mark Friedman - these guys live and breathe performance. Although I did not make it to PDC, I was following what was going on there. Fortunately, Vance published all the slides on his blog. These are my takeaways from them.

Performance by Design Intro

By Rico Mariani

My favorite is the first slide that sets the expectations - Performance is about Culture:

  • Part 1 - Teaching Performance Culture
  • Part 2 - General Topics about Managed Code

I am so happy to see it! There is a perception in the field that performance is about tools. There is another perception: "we can fix performance after we build the app". I've been through a few situations where such an approach resulted in extra budget to "fix the performance issues", missed deadlines, and frustrated customers (and stakeholders). Rico says there are very few rules to follow:

  • Rule #1 - Measure
  • Rule #2 - Do your homework

Rico explains Performance Culture simply but in a powerful way:

  • Budget. An exercise to assess the value of a new feature and the cost you’d be willing to pay:
    • Begin by thinking about how the customer thinks about performance
      • Responsiveness
      • Capacity
      • Throughput
      • Cost of Entry
    • Identify the resource the customer views as critical to this system
    • Choose the level of performance we want to deliver (do we need an “A+” or is a “D” good enough)
    • Convert this into what resource usage needs to be to succeed
    • Don’t think about the code, think about the customer
  • Plan. Validate your design against the budget, this is a risk assessment
    • You can’t plan without a budget, so get one
    • Use best practices to select candidate algorithms
    • Understand their costs in terms of the critical resource
    • Identify your dependencies and understand their costs
    • Compare these projected costs against the budgets
    • If you are close to budget you will need much greater detail in your plans
    • Identify verification steps and places to abort if it goes badly
    • Proceed when you are comfortable with the risk
  • Verify. Measure the final results, discard failures without remorse or penalty, don’t make us live with them
    • The budget and the plan drive verification steps
    • Performance that cannot be verified does not exist
    • Don’t be afraid to cancel features that are not meeting their budgets – we expect to lose some bets
    • Don’t inflict bad performance on the world

Bottom line - performance is about culture; tools only support it. Performance costs you either way: you either invest in it up front or pay for it later. Manage it as you'd manage any other risk.


CPU Optimization for .NET Applications

By Vance Morrison

Vance supports Rico's Rule #1 - Measure. He provides several practical options:

  • Low Tech: System.Diagnostics.Stopwatch
  • Medium Tech: MeasureIt (Automates Stopwatch)
  • Medium Tech: Use ETW (Event Tracing for Windows)
  • Higher Tech: Sample Based Profiling.
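The "low tech" option is simple enough to sketch. This is my own minimal example, not code from the slides: `DoWork` is a stand-in for whatever you are measuring, and the warm-up call keeps JIT compilation out of the numbers - exactly the kind of pitfall MeasureIt automates away.

```csharp
using System;
using System.Diagnostics;

class StopwatchDemo
{
    static void Main()
    {
        // Warm up so JIT compilation doesn't skew the first measurement.
        DoWork();

        const int iterations = 1000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            DoWork();
        }
        sw.Stop();

        // Report the average cost per call, not just the total.
        Console.WriteLine("Total: {0} ms, per call: {1:F4} ms",
            sw.ElapsedMilliseconds,
            sw.Elapsed.TotalMilliseconds / iterations);
    }

    static void DoWork()
    {
        // Stand-in for the code under measurement.
        string s = string.Empty;
        for (int i = 0; i < 100; i++)
            s += i;
    }
}
```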

Vance admits that ETW is "not easy to use End-to-End":

  • We are working it
  • We will have more offerings in next release
  • It is a complete talk just by itself
  • If you need logging NOW you CAN use EventProvider, xperf
  • If you can wait a year, it will be significantly nicer. 
  • If there is interest, we can have an ‘Open Space’ discussion"

My biggest takeaway from this session is that I know unforgivably little about ETW - I need to ramp up on it.


Memory Optimization for .NET Applications

By Vance Morrison

My favorite slide is #14:

"Fixing Memory Issues: Prevention!

  • Fixing Memory Issues is HARD
    • Usually a DESIGN problem: Not Pay for Play
    • Using every new feature in your app
      XML, LINQ, WPF, WCF, Serialization, Winforms, …
    • Initialize all subsystems at startup
  • GC Memory Are your Data Structures
    • Tend to be designed early
    • Hard to change later
  • Thus it Pays to Think about Memory Early!"

What does it mean to think about memory early? The slide deck is packed with explanations of the measurement tools and the theory behind the GC. I'd also expect to see a few code samples - both patterns and anti-patterns. A drill-down on mid-life crisis would work for me.
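To illustrate the kind of sample I mean (my own sketch, not from the deck): the classic string-concatenation anti-pattern churns the GC with a fresh allocation on every append, while `StringBuilder` grows an internal buffer so allocations are amortized instead of per-append.

```csharp
using System;
using System.Text;

class MemoryPatterns
{
    // Anti-pattern: each += allocates a brand-new string, creating
    // garbage proportional to the number of appends.
    static string BuildReportBad(int lines)
    {
        string report = "";
        for (int i = 0; i < lines; i++)
            report += "line " + i + Environment.NewLine;
        return report;
    }

    // Pattern: StringBuilder reuses and grows one buffer, so the GC
    // sees far fewer short-lived objects.
    static string BuildReportGood(int lines)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < lines; i++)
            sb.Append("line ").Append(i).AppendLine();
        return sb.ToString();
    }

    static void Main()
    {
        // Both produce identical output; only the allocation profile differs.
        Console.WriteLine(BuildReportGood(3) == BuildReportBad(3)); // True
    }
}
```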


Parallelism for .NET Applications

By Vance Morrison

The only takeaway (besides the good tips) is that parallel computing is going mainstream. From the field I see more and more demand for multithreaded work; I observe customers buying powerful servers but not utilizing them to capacity while asking to improve performance. I liked the structure of the session, especially the "How .NET Can Help..." slides that offer practical tips and implementation suggestions for improving performance through parallelism.
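For readers who have not seen the Parallel Extensions (the CTP of what became the Task Parallel Library in .NET 4), here is a minimal sketch of the kind of data parallelism the session covers - my example, not Vance's. `Parallel.For` partitions independent iterations across the available cores; since each index writes only its own slot, no locking is needed:

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        const int n = 1000000;
        double[] results = new double[n];

        // Each iteration is independent (no shared mutable state),
        // so the runtime can split the range across cores freely.
        Parallel.For(0, n, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        // Sanity check: the last element is sqrt(999999), just under 1000.
        Console.WriteLine(results[n - 1] > 999.0); // True
    }
}
```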


ASP.NET Web Application Performance

By Mark Friedman

This is a really huge slide deck - 110 slides covering tons of stuff.

I was looking for unusual stuff, and I found it. Turns out the Visual Round Trip Analyzer (VRTA) was released to the web. The tool was internal for some time and is now available to the masses. Good news!

I also liked the slides about ETW for IIS, especially the ETW trace reporting tool, which is Excel ;). One statement put me on alert: "Caching the same data in multiple places tends to be wasteful". Not the statement itself, but rather its relation to Velocity, Microsoft's distributed cache mechanism - I need to dig deeper. Overall, the slide deck is packed with very useful and practical recommendations spanning multiple technologies like ASP.NET, AJAX, and WCF.
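The "don't cache the same data in multiple places" point can be shown with a tiny sketch - mine, not Mark's, and in a real ASP.NET app you'd reach for `HttpRuntime.Cache` or Velocity rather than a hand-rolled dictionary. The idea: one process-wide cache that every caller shares, so the expensive load happens once and everyone references the same copy instead of each layer stashing its own:

```csharp
using System;
using System.Collections.Generic;

class SharedCache
{
    // One process-wide cache: every caller reads the same copy of the
    // data rather than each layer caching it again.
    static readonly Dictionary<string, string[]> Cache =
        new Dictionary<string, string[]>();
    static readonly object Gate = new object();

    static string[] GetCountries()
    {
        lock (Gate)
        {
            string[] value;
            if (!Cache.TryGetValue("countries", out value))
            {
                value = LoadCountries();   // expensive load runs once
                Cache["countries"] = value;
            }
            return value;
        }
    }

    static string[] LoadCountries()
    {
        Console.WriteLine("loading...");   // printed only on the first call
        return new[] { "France", "Japan", "Brazil" };
    }

    static void Main()
    {
        // Both calls return a reference to the same cached array.
        Console.WriteLine(ReferenceEquals(GetCountries(), GetCountries())); // True
    }
}
```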


