Common performance issues on ASP.NET web sites

I spend a lot of my time analysing the performance of web sites and tuning the applications to make the sites run more efficiently and scale better. Over time I’ve pulled together a checklist of some of the more common performance issues that I see and how to resolve them, and I thought it was about time I shared them here.

Most of the issues I’ve identified are straightforward to fix (many are just configuration changes) and can give significant improvements to the scalability and responsiveness of your web site. You may well already be aware of some of them, but I'm still amazed at how many of the more obvious ones don't get implemented as a matter of course; then again, it keeps me in a job!

This post is broken down into three sections; the first is cold start improvements, or “why does my web site take so long to start up?”. This involves looking at what the IIS worker processes (w3wp.exe) are doing during initialisation, prior to completing the initial client request that caused them to launch.

The second section, which is generally more important, looks at the efficiency of processing requests once the server has “warmed up”, and is hopefully the state that the web site will spend most of its time in!

Finally, the third section provides a general discussion around accessing SQL Server and web services from within web applications. These aren't necessarily quick wins and may involve some changes to interfaces (or even the solution architecture) to implement successfully. I'll then reveal the three golden rules for producing fast, scalable applications that I've derived from my investigations.

For each issue, I’ve included a brief description and references to where more information on the issue can be obtained and how to resolve it.

Cold start

  • If the application deserializes anything from XML (and that includes web services…) make sure SGEN is run against all binaries involved in deserialization and place the resulting DLLs in the Global Assembly Cache (GAC). This precompiles all the serialization objects used by the assemblies SGEN was run against and caches them in the resulting DLL. This can give huge time savings on the first deserialization (loading) of config files from disk and on initial calls to web services. A sketch of the build step is shown below.
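
    As a rough sketch of the build step (the assembly names here are hypothetical, and the generated serializer assembly must be strong-named before it can go into the GAC):

        rem Pre-generate the XmlSerializer assembly for a data-contract DLL
        sgen.exe /assembly:MyApp.Contracts.dll /force
        rem Install the generated *.XmlSerializers.dll into the GAC
        gacutil.exe /i MyApp.Contracts.XmlSerializers.dll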

  • If any IIS servers do not have outgoing access to the internet, turn off Certificate Revocation List (CRL) checking for Authenticode binaries by adding <generatePublisherEvidence enabled="false"/> to the <runtime> section of machine.config (see the fragment below). Otherwise every worker process can hang for over 20 seconds during start-up while it times out trying to connect to the internet to obtain a CRL.
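
    A minimal configuration fragment for this setting looks like the following (it can go in machine.config or an individual application's config file):

        <configuration>
          <runtime>
            <!-- Skip the Authenticode publisher-evidence check that needs internet access to fetch CRLs -->
            <generatePublisherEvidence enabled="false" />
          </runtime>
        </configuration>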

  • Consider using NGEN on all assemblies. However, without careful use this doesn't give much of a performance gain, because the base load addresses of all the binaries loaded by each process must be carefully set at build time so that they do not overlap. If the binaries have to be rebased when they are loaded because of address clashes, almost all of the performance gain from using NGEN is lost.
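
    If you do experiment with this, native images are generated per assembly from an elevated SDK or Visual Studio command prompt (the assembly name below is just an example):

        rem Generate and install the native image for one assembly
        ngen.exe install MyApp.Web.dll
        rem Base addresses are fixed at build time, e.g. via the C# compiler's /baseaddress option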

Warm start

  • Turn on IIS Compression (this is off by default on IIS6 and IIS7.x). Here are references for IIS6 and IIS7.x respectively:
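
    For IIS 7.x this can be switched on in web.config; a minimal sketch (the Dynamic Content Compression module must be installed for doDynamicCompression to take effect, and IIS6 is configured through the metabase instead):

        <system.webServer>
          <!-- Compress both static files and dynamically generated responses -->
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>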

  • Turn on IIS content expiry (again this is off by default). See here:
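
    On IIS 7.x a far-future expiry for static content can be set in web.config; the seven-day value below is only an example and should be tuned to how often your static files change:

        <system.webServer>
          <staticContent>
            <!-- Tell browsers they may cache static files for 7 days before re-requesting them -->
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>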

  • Ensure ViewState is turned off wherever possible, and then only enable it for the controls that really need it. ViewState can be huge and is on by default. Turning off ViewState can radically reduce the size of pages and significantly improve page load times over slow links. Read a discussion on ViewState here:
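
    On ASP.NET 4 one hedged way to do this is to disable view state by default with ViewStateMode and opt individual controls back in (on earlier versions a control cannot re-enable view state once a parent has disabled it, so it has to be turned off page by page and control by control instead). The control name here is illustrative:

        <%@ Page Language="C#" ViewStateMode="Disabled" %>

        <!-- only this control keeps its ViewState -->
        <asp:GridView ID="OrdersGrid" runat="server" ViewStateMode="Enabled" />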

  • Merge static content into as few files as possible (e.g. only have one .js file and one .css file). It is faster to download one big file than several smaller ones over the internet. Ideally, merge .js and .css files into a single download as discussed here:

  • Enable ASP.NET output caching wherever possible. This will need to be looked at on a page-by-page basis, but it can yield huge gains for pages that contain mostly static content. See here:
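
    As a minimal sketch, a page whose output can safely be reused by all users for a minute only needs a directive at the top of the .aspx file; Duration and VaryByParam have to be chosen per page, and anything personalised per user should not be output cached wholesale:

        <%@ OutputCache Duration="60" VaryByParam="None" %>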

  • Enable web service output caching where possible. If there are any web services that always return the same results for any given set of input parameters, this should be investigated. See here:
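
    For classic .asmx services the cache duration is set per method; a hedged sketch (service and method names are made up, and the placeholder body stands in for the real backend call):

        using System.Web.Services;

        public class CatalogueService : WebService
        {
            // Responses are cached on the server for 60 seconds per distinct set of
            // input parameters, so repeat calls with the same 'count' skip the backend.
            [WebMethod(CacheDuration = 60)]
            public string[] GetTopProducts(int count)
            {
                // Placeholder for the real (expensive) database or backend call
                return new string[] { "example product" };
            }
        }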

  • If a web application makes web service calls, increase MaxConnections in machine.config to handle the level of concurrency the application needs to support. See "Threading Explained" here: However, don't take the recommended value as an absolute rule; it is fine to go higher if the middle tier makes lots of long-running web service calls. The setting is sketched below.
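
    The setting is the maxconnection attribute of connectionManagement; the value below is only an example (note that with <processModel autoConfig="true"/>, ASP.NET already raises the default to 12 connections per CPU, as Drew points out in the comments):

        <system.net>
          <connectionManagement>
            <!-- Maximum concurrent outbound HTTP connections per remote host -->
            <add address="*" maxconnection="96" />
          </connectionManagement>
        </system.net>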

  • Change the IIS threading configuration in machine.config to support better scaling for long running requests. See "Threading Explained" here:
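
    When tuning by hand (autoConfig="false"), the classic "Threading Explained" guidance uses machine.config values along these lines; treat the numbers as starting points to load test rather than gospel, and note that minFreeThreads and minLocalRequestFreeThreads should be scaled by the number of CPUs:

        <processModel autoConfig="false"
                      maxWorkerThreads="100" maxIoThreads="100"
                      minWorkerThreads="50" />
        <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />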

  • If ASP.NET Ajax is being used, ensure <compilation debug="false"/> is set in web.config. This avoids very costly parameter validation on both the client and server. See a full description of how to do this here:

  • If concurrency locks are needed on commonly accessed resources that are read-mostly (e.g. configuration), make sure ReaderWriterLockSlim is used to protect them (a sketch follows below). Using a standard .NET lock on commonly accessed, read-mostly resources can seriously impact scalability.
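
    A minimal sketch of the pattern (type and key names are illustrative; see Adrian's comment below for a reminder to measure it against a plain lock for your particular workload):

        using System.Collections.Generic;
        using System.Threading;

        public static class ConfigCache
        {
            private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
            private static readonly Dictionary<string, string> Values = new Dictionary<string, string>();

            public static string Get(string key)
            {
                Lock.EnterReadLock();            // many concurrent readers are allowed
                try
                {
                    string value;
                    return Values.TryGetValue(key, out value) ? value : null;
                }
                finally { Lock.ExitReadLock(); }
            }

            public static void Set(string key, string value)
            {
                Lock.EnterWriteLock();           // writers take the lock exclusively
                try { Values[key] = value; }
                finally { Lock.ExitWriteLock(); }
            }
        }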

SQL Server and web services

  • Run SQL Profiler against the solution's database whilst hitting all the key web pages. Identify all SQL operations that have high durations or CPU values and review them with an eye to optimising them. Also identify how many SQL operations are involved in the rendering of each page and see if any of them can be coalesced – aim for the goal of at most one SQL call to render any page. The new Tier Interaction Profiler (TIP) in Visual Studio 2010 is excellent for finding and measuring the SQL calls used to render individual pages:

  • Look at any web services that are called and make sure they are called only once for any given set of inputs; cache any relevant data they return if that data needs to be used again.
  • If several web service calls to the same backend are needed to complete a request, can a new single web service be implemented to return all the data in one call? The fixed overheads of making web service calls can be very high, including the cost of serializing/deserializing the request/response, establishing a TCP session to the server and actually sending the request/response over the wire. For lightweight requests the time spent on this overhead can easily exceed the time spent executing the request itself! To at least partially mitigate this overhead, if WCF, IIS 7.x and private web services are being used, look at implementing endpoints using NetTcpBinding, which avoids the overhead of text-based XML encoding and normally has a smaller footprint on the wire (a sample configuration is sketched below). The following article discusses how to set up a NetTcpBinding endpoint:
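
    A hedged sketch of what the service side of such an endpoint can look like in web.config (the service, contract and address are placeholders, and the net.tcp listener/WAS non-HTTP activation must be enabled when hosting in IIS 7.x):

        <system.serviceModel>
          <services>
            <service name="MyCompany.Orders.OrderService">
              <!-- Binary-encoded, TCP-hosted endpoint instead of HTTP/text -->
              <endpoint address="net.tcp://localhost:8081/OrderService"
                        binding="netTcpBinding"
                        contract="MyCompany.Orders.IOrderService" />
            </service>
          </services>
        </system.serviceModel>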

The golden rules for scalable, high performance applications

So here we are at my three core rules for writing fast, scalable applications. Beneath each rule I've included an example describing a typical scenario the rule applies to. You will find that at least one applies to every scenario I list above (try to work out which rule(s) apply in each case! I leave it as an exercise for the reader…). These rules apply to all applications, not just web apps. If they are kept in mind when designing and implementing any application, you'll be a long way down the road to producing a fast, scalable application out of the starting blocks!

Never do anything more than once.

Cache everything that is likely to be used more than once. It is a cardinal sin to retrieve the same data from a SQL Server, web service or configuration file twice!
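
As an illustration only (the types and method names are invented, and the placeholder stands in for the real data access), the built-in ASP.NET cache makes this rule easy to follow for anything fetched from a database or web service:

    using System;
    using System.Web;
    using System.Web.Caching;

    public class Customer
    {
        public int Id;
        public string Name;
    }

    public static class CustomerLookup
    {
        public static Customer GetCustomer(int customerId)
        {
            string key = "Customer:" + customerId;
            Customer customer = HttpRuntime.Cache[key] as Customer;
            if (customer == null)
            {
                customer = LoadCustomerFromDatabase(customerId);
                // Keep the result for five minutes so the same data is not fetched twice
                // within that window; tune the expiry to how volatile the data really is.
                HttpRuntime.Cache.Insert(key, customer, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return customer;
        }

        private static Customer LoadCustomerFromDatabase(int id)
        {
            // Placeholder for the real SQL or web service call
            return new Customer { Id = id, Name = "example" };
        }
    }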

Don’t do anything you don’t need to.

Don’t be tempted to reuse an existing stored procedure or web service which returns large quantities of data when only a small subset of what it returns is actually needed – create a new stored procedure or web service which gets just the data required.

Get everything in one go.

Don't use lots of fine-grained web service or database calls when one chunky call can be used instead – the fixed overheads of making such calls are very high and repeated use should be avoided. Aim for the panacea of one web page executing at most one web service or SQL call.

  Written by Richard Florance

Comments (16)

  1. Drew Marsh says:

    On the connection and threading configuration, please note that this is no longer an issue (unless you *REALLY* want to fine tune). As long as you're using .NET 2.0+ and have <system.web><processModel autoConfig="true" /> set, it will take care of tuning it for you. …/7w2sway1.aspx

  2. Yaya Rabiu David says:

    I enjoyed your article. Thanks so much for an excellent piece of work. However, in your golden rules, the first one advises us to cache configuration file values. This could be misleading, as .NET already caches that at the first call.

    Overall a good read. Thanks.

  3. Excellent tips for boosting up performance of applications.

  4. Dial Afc says:

    "One web page executes at most one web service or SQL call". I would like to see what web page codes like this. With N-tier architecture there are bound to be several sql-calls. Lets say you at the front-page would like to fetch articles, and maybe user-info, I fail to see how one practically can use only one sql call..

  5. Jagdip says:

    I agree with Dial Afc. I don't see how you can have one web page execute at most one SQL call, especially when you are trying to follow OOP. Maybe an example of this would help?

  6. rtpHarry says:

    I have seen some debate on the caching of configuration files recently (…/web-config-is-cached.aspx). Yaya, I think that in Richard's defense he just says configuration files; this doesn't necessarily mean web.config.

    I would like to know your thoughts on manually caching appsettings variables in static values Richard. Do you think this is a common performance issue or an irrelevant optimisation?

  7. Matt says:

    Personally, I do not think 1 database call can be achieved for complex websites unless large, messy joined sets of data are returned – which in itself is bad practice. I think the author of this article is saying that database calls should be minimised, with the "holy grail" or target to aim for being 1 database call per page load. For simple sites this might be possible; let's not forget that even for complex sites it is possible too with the use of caching. Sure, the page might actually need to make 5 database calls, but if 4 of those only have to happen once and their result sets can be cached, then 1 call is achievable once the site is warmed up.

  8. Zak says:

    We've experienced start up problems on our applications for years that sound a LOT like bullet #2 under your Cold Start section. The request just chews and chews for 20-30 seconds before the page loads, then the app is as fast as expected. However, our applications are fairly run of the mill ASP.NET applications and don't have any signed assemblies I'm aware of. Could this still be our problem?

  9. hamids says:

    NGENing doesn't actually improve cold startup times; in most cases it actually makes them worse. It does, however, improve warm startups.

    The reason is that an NGENed assembly will usually be 30%+ larger than the original assembly, and so is the time spent on DISKREADs to pull that assembly from physical disk into memory after a cold start of the w3wp process, assuming the assembly has not been read since the machine was rebooted.

    So the increased DISKREAD time will be much higher than the time that would have been spent JITing if the assemblies were not NGENed.

  10. Gregor Suttie says:

    Great article – one thing that sticks in my head with this though is the bit where you say "Aim for the panacea of one web page executes at most one web service or SQL call" – could not agree more, but this just about rules out using ObjectDataSources and SqlDataSources with custom validators on your page. I've seen so many web pages using them; they hit the database far too often, and it is near impossible to get your code working without them hitting the database a couple of times each request.

  11. Jitendranath Palem says:

    A=P+R principle… it's better to merge static content into as few files as possible (e.g. only have one .js file and .css file). It is faster to download one big file than several smaller ones over the internet.

    Try doing some research on this and you will get to know.

  12. SoftElegance says:

    Any guides in case my ASP.NET site runs with Mono?

  13. Richard Florance says:

    Regarding multiple database and web service calls: personally (and remember I am looking at this from a purely performance point of view and not the perspective of an aesthetically pleasing application), it's the main reason that I dislike business object and entity frameworks which abstract the database away from developers. If you are not careful, you can end up with a database that maps to your business objects well but is poorly structured from a performance viewpoint, requiring lots of database accesses to retrieve anything useful, and you find that page response times degrade very badly under load as the database server struggles to keep up.

    Remember the database can't easily be scaled out – you can easily add more web servers but you can't add more database servers without major effort – ultimately the database will be the performance bottleneck; the less strain you put on the database, the better the ultimate scalability of your application.

  14. Debug true is evil. says:

    Always set debug to "false" no matter if you use AJAX or not. Setting it to true is evil for production applications.

  15. Adrian Gould says:

    With respect to your issue:

    • If concurrency locks are needed on commonly accessed resources that are read mostly (e.g. configuration) make sure ReaderWriterLockSlim is used to protect them. Using a standard .NET lock on commonly accessed, read mostly resources can seriously impact scalability.

    I have been involved in a large framework development project that was using ReaderWriterLockSlim – but after much performance testing and tuning we eventually moved away from ReaderWriterLockSlim back to using the standard lock mechanism as ReaderWriterLockSlim uses 256 times more memory per lock and is also at least 256 times slower than the standard lock mechanism.

  16. Kalyan says:

    Thanks for the links; they are useful. Many improvements were introduced in ASP.NET 4.5 with tools like Web Essentials. Optimization can be done in each layer (presentation, business logic and database), but this post …/optimizing-the-website-performance-using-asp-net-4-5 outlines the steps for the presentation layer. The check-list below will also help you while building a web application for good performance: …/performance-tricks-to-metro-style-web-applications