ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0


I’d like to briefly explain how ASP.NET uses threads when hosted on IIS 7.5, IIS 7.0, and IIS 6.0, as well as the configuration changes you can make to alter the defaults. Please take a quick look at the “Threading Explained” section in Chapter 6 of “Improving .NET Application Performance and Scalability”. Prior to v2.0 of the .NET Framework, it was necessary to tweak the processModel/maxWorkerThreads, processModel/maxIoThreads, httpRuntime/minFreeThreads, httpRuntime/minLocalRequestFreeThreads, and connectionManagement/maxconnection configuration settings. The v2.0 .NET Framework attempted to simplify this by adding a new processModel/autoConfig setting, which made the changes for you at runtime. With IIS 7.0 and the ASP.NET integrated pipeline, we added another element to the mix: a registry key named MaxConcurrentRequestsPerCPU. Let’s start with a discussion of how things worked on IIS 6.0 before discussing the changes made in IIS 7.0.

When ASP.NET is hosted on IIS 6.0, the request is handed over to ASP.NET on an IIS I/O thread. ASP.NET immediately posts the request to the CLR ThreadPool and returns HSE_STATUS_PENDING to IIS. This frees up IIS threads, enabling IIS to serve other requests, such as static files. Posting the request to the CLR Threadpool also acts as a queue. The CLR Threadpool automatically adjusts the number of threads according to the workload, so that if the requests are high throughput there will only be 1 or 2 threads per CPU, and if the requests are high latency there will be potentially far more concurrently executing requests than 1 or 2 per CPU. The queuing provided by the CLR Threadpool is very useful, because while the requests are in the queue there is only a very small amount of memory allocated for the request, and it is all native memory. It’s not until a thread picks up the request and begins to execute that we enter managed code and allocate managed memory.
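
To make the hand-off concrete, here is a rough user-code analogy (not ASP.NET’s internal implementation, which lives in native code): work is queued to the CLR ThreadPool and the caller returns immediately, much like ASP.NET returning HSE_STATUS_PENDING to IIS, with almost no cost until a pool thread picks the item up.

    // A conceptual sketch only; ASP.NET performs this hand-off internally.
    using System;
    using System.Threading;

    class HandOffSketch
    {
        static void Main()
        {
            ThreadPool.QueueUserWorkItem(state =>
            {
                // Managed memory for the "request" is only allocated here,
                // once a ThreadPool thread begins executing the work item.
                Console.WriteLine("Executing on pool thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            });

            Console.WriteLine("Returned to the caller immediately (pending).");
            Thread.Sleep(1000); // keep the process alive long enough to see the output
        }
    }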

The CLR Threadpool is not the only queue used by ASP.NET when hosted in IIS 6.0. There are also queues at the application level, within each AppDomain. If there is a lot of latency, the CLR Threadpool will grow and inject more active threads. At some point we would either run out of threads, not have enough threads left over for performing other tasks, or the memory associated with all the concurrently executing requests would be too much, so ASP.NET imposes a cap on the number of threads concurrently executing requests. This is controlled by the httpRuntime/minFreeThreads and httpRuntime/minLocalRequestFreeThreads settings. If the cap is exceeded, the request is queued in the application-level queue and executed later, when the concurrency falls back below the limit. The performance of these application-level queues is really quite miserable. If you observe that the “ASP.NET Applications\Requests in Application Queue” performance counter is non-zero, you definitely have a performance problem. These queues were implemented to prevent thread exhaustion and contention related to web service requests. The problem was first described in KB 821268, which I published many years ago. The KB article has been re-written a few times since it was originally published, and I hope nothing has been lost during the translations.
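
If you want to check for this condition from code rather than in perfmon, a minimal sketch like the following reads the counter mentioned above; it assumes the ASP.NET performance counters are installed and uses the aggregate "__Total__" instance name.

    using System;
    using System.Diagnostics;

    class AppQueueCheck
    {
        static void Main()
        {
            // Reads the "ASP.NET Applications\Requests In Application Queue" counter.
            using (PerformanceCounter counter = new PerformanceCounter(
                "ASP.NET Applications", "Requests In Application Queue", "__Total__"))
            {
                float queued = counter.NextValue();
                Console.WriteLine("Requests In Application Queue: {0}", queued);
                if (queued > 0)
                {
                    Console.WriteLine("Non-zero: requests are queuing at the application level.");
                }
            }
        }
    }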

For most usage scenarios, the changes recommended in the KB article are not necessary because v2.0 introduced processModel/autoConfig. However, the autoConfig setting may not work for everyone: it limits the number of concurrently executing requests per CPU to 12. An application with high latency may want to allow higher concurrency than this, in which case you can disable autoConfig and make the changes yourself. If you do allow higher concurrency, keep an eye on your working set. I believe the default works for about 90% of the applications out there. I do wish we had the foresight to name that setting maxConcurrentRequestsPerCPU and allow it to be used to control concurrency, since that would be much easier to configure. I guess this is just another example of when business was just a little bit faster than the speed of thought.
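
Before disabling autoConfig, it can help to see what it actually gave you at runtime. The following is a small diagnostic sketch; to get meaningful numbers, adapt the body into a page or handler so that it runs inside the ASP.NET worker process, since autoConfig only applies there.

    using System;
    using System.Net;
    using System.Threading;

    class AutoConfigProbe
    {
        static void Main()
        {
            int maxWorker, maxIo, minWorker, minIo;
            ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
            ThreadPool.GetMinThreads(out minWorker, out minIo);

            // With autoConfig, the thread and connection limits are set per CPU at runtime.
            Console.WriteLine("Max worker/IO threads: {0}/{1}", maxWorker, maxIo);
            Console.WriteLine("Min worker/IO threads: {0}/{1}", minWorker, minIo);
            Console.WriteLine("System.Net connection limit per endpoint: {0}",
                ServicePointManager.DefaultConnectionLimit);
        }
    }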

When ASP.NET is hosted on IIS 7.5 and 7.0 in integrated mode, the use of threads is a bit different. First of all, the application-level queues are no more. Their performance was always really bad, there was no hope of fixing this, and so we got rid of them. But perhaps the biggest difference is that in IIS 6.0, or ISAPI mode, ASP.NET restricts the number of threads concurrently executing requests, whereas in IIS 7.5 and 7.0 integrated mode, ASP.NET restricts the number of concurrently executing requests. The difference only matters when the requests are asynchronous (the request either has an asynchronous handler or a module in the pipeline completes asynchronously). Obviously if the requests are synchronous, then the number of concurrently executing requests is the same as the number of threads concurrently executing requests, but if the requests are asynchronous then these two numbers can be quite different, as you could have far more requests than threads. So how do things work, exactly, in integrated mode? Similar to IIS 6.0 (classic mode, a.k.a. ISAPI mode), the request is still handed over to ASP.NET on an IIS I/O thread. And ASP.NET immediately posts the request to the CLR Threadpool and returns pending. We found this thread switch was still necessary to maintain optimal performance for static file requests. So although you will take a performance hit if you’re only executing ASP.NET requests, if you have a mix of dynamic and static files, as we see with many large corporate workloads, this thread switch will actually free up threads for retrieving the static files. Finally, once the request is picked up by a thread from the CLR Threadpool, we check to see how many requests are currently executing. If the count is too high, the request is queued in a global (process-wide) queue. This global, native queue performs much better than the application-level queues used when we’re running in ISAPI mode (same as on IIS 6.0). There is very little memory associated with a queued request, and we have not entered managed code yet, so there is no managed memory associated with it. And we respect the FIFO aspect of a queue, something we didn’t do with the application-level queues: if there was more than one application, there was no simple way to globally manage the individual queues. We did, however, have a difficult time trying to come up with a good configuration story for the IIS 7.0 changes.
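
To illustrate why gating on requests rather than threads matters, here is a minimal asynchronous handler sketch (the handler name and backend URL are hypothetical): while the backend call is in flight, the request counts toward the concurrent-request limit, but it holds no CLR ThreadPool thread.

    using System;
    using System.Net;
    using System.Web;

    public class BackendProxyHandler : IHttpAsyncHandler
    {
        public bool IsReusable { get { return true; } }

        public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://backend.example.com/data");
            context.Items["backendRequest"] = request;
            // Returns immediately; no thread is blocked while waiting for the backend.
            return request.BeginGetResponse(cb, context);
        }

        public void EndProcessRequest(IAsyncResult result)
        {
            HttpContext context = (HttpContext)result.AsyncState;
            HttpWebRequest request = (HttpWebRequest)context.Items["backendRequest"];
            using (HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result))
            {
                context.Response.Write("Backend returned " + (int)response.StatusCode);
            }
        }

        // Synchronous entry point required by IHttpHandler; not used when the
        // handler runs asynchronously.
        public void ProcessRequest(HttpContext context)
        {
            throw new NotSupportedException();
        }
    }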

When I discuss how to configure thread usage for ASP.NET/IIS 7.0 integrated mode, please remember that we have a lot of pre-existing code and configuration, and you can’t just create something new the way you would like to without introducing backward compatibility issues. In this new mode, the CLR Threadpool is still controlled by the processModel configuration settings (autoConfig, maxWorkerThreads, maxIoThreads, minWorkerThreads, and minIoThreads). And autoConfig is still enabled, but its modifications to httpRuntime/minFreeThreads and httpRuntime/minLocalRequestFreeThreads do nothing, since the application-level queues do not exist. Perhaps we should have tried to use them to configure the global (process-wide) queue limits, but they have application scope (httpRuntime configuration is application specific), not process scope, not to mention being too difficult to understand. And because of some issues with using the configuration system that I won’t go into right now, we decided to use a registry key to control concurrency. So for IIS 7.0 integrated mode, a DWORD named MaxConcurrentRequestsPerCPU within HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0 determines the number of concurrent requests per CPU. By default, it does not exist and the number of requests per CPU is limited to 12. If you’re curious to see how much faster ASP.NET requests execute without the thread switch, you can set the value to 0. This will cause the request to execute on the IIS I/O thread, without switching to a CLR Threadpool thread. I don’t recommend this, primarily because dynamic requests take a long time to execute relative to static requests, and I believe the overall performance of the system is better with the thread switch. However, and this is important, if your application consists primarily or entirely of asynchronous requests, the default MaxConcurrentRequestsPerCPU limit of 12 will be too restrictive for you, especially if the requests are very long running. In this case, I do recommend setting MaxConcurrentRequestsPerCPU to a very high number.  In fact, in v4.0, we have changed the default for MaxConcurrentRequestsPerCPU to 5000.  There’s nothing special about 5000, other than it is a very large number and will therefore allow plenty of async requests to execute concurrently.  One thing to watch out for is that when concurrency increases, your application will use more memory simply because there are more requests executing in managed code.  The CLR ThreadPool will still do a great job maintaining the number of threads in the ThreadPool, so there should be no concern about this adversely impacting synchronous requests.  I know there are people using ASP.NET 2.0 and developing Comet or Comet-like applications on WS08 x64 servers, and they set MaxConcurrentRequestsPerCPU to 5000 and increase the HTTP.sys kernel queue limit to 10,000 (it has a default of 1000).  The HTTP.sys kernel queue limit is controlled by IIS.  You can change it by opening IIS Manager, opening the Advanced Settings for your application pool, and changing the value of “Queue Length”.
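
If you prefer to script the registry change rather than use regedit, a hedged one-off sketch like the following sets the value described above (run it elevated and restart IIS afterward); a .reg file works just as well.

    using Microsoft.Win32;

    class SetMaxConcurrentRequests
    {
        static void Main()
        {
            // Creates or overwrites the DWORD value; 5000 matches the v4.0 default.
            Registry.SetValue(
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0",
                "MaxConcurrentRequestsPerCPU",
                5000,
                RegistryValueKind.DWord);
        }
    }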

As a final remark, please note that the processModel/requestQueueLimit configuration limits the maximum number of requests in the ASP.NET system for IIS 6.0, IIS 7.0, and IIS 7.5. This number is exposed by the “ASP.NET/Requests Current” performance counter, and when it exceeds the limit (default is 5000) we reject requests with a 503 status (Server Too Busy).

-Thomas

 

UPDATE (Aug-18-2008): .NET Framework v3.5 SP1 released earlier this week and it includes an update to the v2.0 binaries that supports configuring IIS application pools via the aspnet.config file.  The aspnet.config file is not very well known.  It is the CLR Hosting configuration file, and ASP.NET/IIS pass it to the CLR when the CLR is loaded.  The host configuration file (aspnet.config) applies configuration at the process-level, as opposed to the application-level like web.config.  There is a new system.web/applicationPool configuration section which applies to integrated mode only (Classic/ISAPI mode ignores these settings). The new config section with default values is:

    <system.web>
        <applicationPool maxConcurrentRequestsPerCPU="12" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>
    </system.web>

There is a corresponding IIS 7.5 change (Windows Server 2008 R2 only) which allows different aspnet.config files to be specified for each application pool (this change has not been ported to IIS 7.0). With this, you can configure each application pool differently.  The maxConcurrentRequestsPerCPU setting is the same as the registry key described above, except that the setting in aspnet.config will override the registry key value.  The maxConcurrentThreadsPerCPU setting is new, and allows concurrency to be gated by the number of threads, similar to the way it was done in Classic/ISAPI mode.  By default maxConcurrentThreadsPerCPU is disabled (has a value of 0), in favor of gating concurrency by the number of requests, primarily because maxConcurrentRequestsPerCPU performs better (gating the number of threads is more complicated/costly to implement).  Normally you’ll use request gating, but you now have the option of disabling it (set maxConcurrentRequestsPerCPU=0) and enabling maxConcurrentThreadsPerCPU instead.  You can also enable both request and thread gating at the same time, and ASP.NET will ensure both requirements are met.  The requestQueueLimit setting is the same as processModel/requestQueueLimit, except that the setting in aspnet.config will override the machine.config setting.  All of this may be a little confusing, but for nearly everyone, my recommendation is that for ASP.NET 2.0 you should use the same settings as the defaults in ASP.NET v4.0; that is, set maxConcurrentRequestsPerCPU="5000" and maxConcurrentThreadsPerCPU="0".

 

UPDATE (Sep-12-2011):  The only relevant change to .NET Framework v4.0 (as compared to 3.5 or 2.0) is that the default for maxConcurrentRequestsPerCPU was increased to 5000.  5000 is also the value you should use in versions 2.0 and 3.5, which have a default of 12.  Also, IIS 7.5 is identical to IIS 7.0 as far as threading is concerned.  The only difference between IIS 7.5 and 7.0 that is relevant to this blog post is the support to configure different aspnet.config files for each application pool.  You do this by setting the CLRConfigFile attribute for the application pool.  You can then use the system.web applicationPool configuration mentioned above to set different values for maxConcurrentRequestsPerCPU, maxConcurrentThreadsPerCPU, and requestQueueLimit, if desired. 

In general, running with the default configuration works best.  However, applications that have measurable latency, say 100 milliseconds when communicating with a backend web service, will perform better with a few configuration changes.  Let me tell you what configuration changes you should make on IIS 7.0 and IIS 7.5 in integrated mode in order to handle a large number of concurrent requests to an application that has backend latency. By a large number of concurrent requests, I mean between 12 and 5000 per CPU.

  1. For v2.0 and v3.5, set a DWORD registry value at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0\MaxConcurrentRequestsPerCPU = 5000, then restart IIS.
  2. For v3.5, you can alternatively set <system.web><applicationPool maxConcurrentRequestsPerCPU="5000"/></system.web> in the aspnet.config file.  If the value is set in both places, the aspnet.config setting overrides the registry setting.
  3. For v4.0, the default maxConcurrentRequestsPerCPU is 5000, so you don’t need to do anything.
  4. Increase the HTTP.sys queue limit, which has a default of 1000.  If the operating system is x64 and you have 2 GB of RAM or more, setting it to 5000 should be fine.  If it is too low, you may see HTTP.sys reject requests with a 503 status.  Open IIS Manager and the Advanced Settings for your Application Pool, then change the value of “Queue Length”.
  5. If your ASP.NET application is using web services (WCF or ASMX) or System.Net to communicate with a backend over HTTP, you may need to increase connectionManagement/maxconnection.  For ASP.NET applications, this is limited to 12 * #CPUs by the autoConfig feature.  This means that on a quad-proc, you can have at most 12 * 4 = 48 concurrent connections to an IP end point.  Because this is tied to autoConfig, the easiest way to increase maxconnection in an ASP.NET application is to set System.Net.ServicePointManager.DefaultConnectionLimit programmatically, from Application_Start, for example (see the sketch after this list).  Set the value to the number of concurrent System.Net connections you expect your application to use.  I’ve set this to Int32.MaxValue and not had any side effects, so you might try that; this is actually the default used in the native HTTP stack, WinHTTP.  If you’re not able to set System.Net.ServicePointManager.DefaultConnectionLimit programmatically, you’ll need to disable autoConfig, but that means you also need to set maxWorkerThreads and maxIoThreads.  You won’t need to set minFreeThreads or minLocalRequestFreeThreads if you’re not using classic/ISAPI mode.
  6. If your application sees a large number of concurrent requests at start-up or has a bursty load, where concurrency increases suddenly, you will need to make the application asynchronous, because the CLR ThreadPool does not respond well to these loads.  The CLR ThreadPool injects new threads at a rate of about 2 per second.  This is true for all versions of the CLR (v1.0 through v4.0) at the time of this writing.  If concurrency is bursty and the request thread blocks (e.g. on a backend with latency), the injection rate of 2 threads per second will make your application respond very poorly to this work load.  The fix is to stop blocking threads by using asynchronous I/O to communicate with the high-latency backend.  If you cannot make the application asynchronous, you will need to increase minWorkerThreads.  I don’t like to increase minWorkerThreads.  It has a side effect on high-throughput synchronous requests that don’t block on threads, because the thread count is artificially high.
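
As a sketch for step 5 above (assuming a standard Global.asax code-behind), raising the System.Net connection limit from Application_Start looks roughly like this; the value shown is illustrative.

    using System;
    using System.Net;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Overrides the 12 * #CPUs limit that autoConfig applies to maxconnection.
            ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;
        }
    }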

 

-Thomas


Comments (71)

  1. Everyone seems to be coming off vacation and my book is done so now to list the resources to make us

  2. Two Threads per Request In .NET 3.0 and 3.5, there is a special behavior that you would observe for IIS-hosted

  3. Folks were hassling me in the comments for not posting the picosecond that .NET 3.5 SP1 came out (or

  5. sjuranek says:

    This was a great write up that I found very helpful for the application I’m working on. My application was only handling 24 simultaneous requests until I made the registry change. However one thing that hung me up a while is that the registry key must be: maxConcurrentRequestsPerCPU with CPU in all caps. You have ‘Cpu’ several times above and Process Monitor helped me find that w3wp wasn’t finding it.

    Also, I tried using the aspnet.config technique since I have .NET 3.5 SP1 loaded but it doesn’t seem to work.

  6. Registry key names are not case sensitive, so maxconcurrentrequestspercpu will work as well as MaxConcurrentRequestsPerCPU.  In 3.5 SP1, you can use aspnet.config, as shown in the post.  Be sure aspnet.config is located in the same directory as the framework:

    x86: %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet.config

    x64: %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet.config

    Also note that <system.web> must be contained in the <configuration> section.

  7. In a recent SCOM/SSRS-related troubleshooting engagement, I attempted for the first time to debug an IIS process running on 64-bit Windows 2008. Of course, Managed

  8. RiSingh says:

    maxConcurrentRequestsPerCPU setting in the registry has to be spelled correctly.

    I wasted a lot of time in setting it to maxConcurrentRequestsPerCpu and it didn’t work. Once I changed the dword to maxConcurrentRequestsPerCPU, all my 50 concurrent requests started coming in, instead of just 12 earlier.

    I would suggest the author write an official KB article on this.

  9. When hosting an ASP.net web application on IIS7, and you have the application running in integrated pipeline

  12. stombrown says:

    any behavior changes in WS08?… was wondering about the "this change has not been ported to WS08" comment.

  13. In reference to IIS allowing different aspnet.config files to be specified for each Application Pool, I found out today that it was not ported to WS08.  The change was only made to Windows 7.

  14. etrich says:

    About the aspnet.config per application pool setting:

    1) where is it located under Windows 7?

    2) Was it ported to Windows Server 2008 R2?

  15. etrich says:

    Found my answers in this Powerpoint:

    http://download.microsoft.com/download/8/C/2/8C21BAFE-3432-48D1-962A-F7A9DD54A2AC/Extend%20Your%20Web%20Server%20-%20What‘s%20New%20in%20IIS%20and%20the%20Microsoft%20Web%20Platform.pptx

    2008 R2, ApplicationHost.config:

    <applicationPools>

    <add name="DefaultAppPool" CLRConfigFile="c:\myConfig\CLRConfigFile.txt" />

  16. Marek G. says:

    Hi ,

    sorry for my English... I am looking at this ppt but I am not sure I understand it. I have a web service which is called from an app 200x. But in w3wp there are still 30 threads. Where do I set the permitted number of threads or connections?

    thanx

  17. Marek, the number of connections used by System.Net to make web service requests is limited by maxconnection (see connectionManagement configuration section).  In an ASP.NET application, when autoConfig="true" (see processModel configuration section), maxconnection is set to 12*N, where N is the number of CPUs.  I do not think you should change the default values of these settings.

    Thanks,

    Thomas

  18. JohnFring says:

    Hi Thomas,

    Read your post but the one thing I couldn’t figure out is how to change the maximal thread count for Asp.Net 2 with Framework 3.5 SP1 on IIS 6.0.

    Searched all over the web but everything leads back to this post.

    Thanks in advance,

    Jonathan.

  19. In general you should not change the default settings for Asp.Net 2 with Framework 3.5 SP1 on IIS 6.0.  They are automatically set for you, but if you really want to cause yourself some trouble you can disable this by setting processModel/autoConfig to false.  The full list of settings controlled by autoConfig is listed on MSDN in the description for that attribute.  Thread counts are controlled by maxWorkerThreads and maxIoThreads.  Read KB 821268.  Best of luck!

  20. JohnFring says:

    Hi Thomas,

    Thanks for the quick response – in our case I guess that the autoConfig values weren’t working in our favor. Playing around with these values actually solved an issue we’ve had with ASP.Net performance.

  21. gecon27 says:

    Dear Thomas,

    Thank you for the detailed explanation.

    I’d like to ask what is the relation between requests/threads, as described in your article, and the ThreadPool, as described in the following link:

    http://stackoverflow.com/questions/1453283/threadpool-in-iis-context

    More specifically, you mention about 12 requests/threads per CPU (assuming synchronous operations), while in the link above it says that there is one thread pool per process (w3wp.exe) and this thread pool has a default size of 250 worker threads per available processor.

    Moreover, for a machine with 2 Intel Xeon E5450 processors, what is the total number of requests allowed by default? 2×12, or 2x(4 cores each)x12?

    Thank you,

    Ioannis

  22. Hi Ioannis,

    I think your question about ASP.NET requests and threads is answered in my post at http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx.  I also mention in that post that ASP.NET sets the maximum number of threads (both worker and i/o) in the ThreadPool to 100 per CPU.  

    On your dual quad-core server, there are a total of 8 cores.  The number of CPUs (or cores) is never hard to determine, just open Task Manager and select the Performance tab.  It will show you how many CPUs (or cores) you have.  In classic mode, ASP.NET will use at most 12 threads per CPU concurrently, so ASP.NET will use at most 8 * 12 = 96 threads.  If the requests are non-blocking (e.g. you don’t make web service requests to a remote server from your ASPX page), this will be more than enough.  If the requests are blocking, you should switch to making the web service request asynchronously as described in the other post.  You can execute a lot more requests than the maximum number of threads, just like you can juggle more than two balls with your two hands.  In integrated mode, (note on v2.0 and 3.5 you should set MaxConcurrentRequestsPerCPU = 5000, the new default in v4) ASP.NET does not have a limit on its thread usage by default, only on the number of requests.  The ThreadPool is very good at adjusting the number of active threads for the particular workload.

    Thanks,

    Thomas

  23. dzhafar says:

    Hi Thomas,

    thanks for the write up - very useful for us. I have a question about using some of the recommendations in your blog on our web site. Our website is configured as a WCF service using jsonpBinding. Many of the requests to our service result in our service calling an external XML web service synchronously. During peak times we start running into scalability issues and many other requests are taking longer to execute. The WCF service is configured to run .NET 3.5 SP1 and running on Windows 2008 R2 servers (dual quad core - 8 CPUs) with a dedicated application pool. I was thinking of following your recommendation to set MaxConcurrentRequestsPerCPU = 5000.

    My question is, would it help if I configure my application pool to run with multiple worker processes? How might multiple worker processes help to resolve these problems?

    Thanks in advance,

    Geoff D.

  24. I do not like web gardens, which is what we call it when you have multiple processes running for the same application pool.  Technically, if you’re running out of a resource that has process scope, then adding more processes will give you more of those resources, and perhaps help.  The reason I don’t like web gardens is because they require more memory and don’t really solve any problem–at best they might work around a problem that should really be fixed in a different way.

    You need to determine why your web server doesn't scale under load.  I'd start with performance counters.  Is "Processor\% Processor Time" at or greater than 90%?  Is ".NET CLR Memory\% Time in GC" above 10%?  If the value of "ASP.NET Applications\Pipeline Instance Count" and/or "ASP.NET Applications\Requests Executing" is pegged at 12 * #CPUs, in your case 12 * 8 = 96, then the pages that make synchronous web service requests might be preventing any other requests from running.  Hard to tell for sure without looking in a debugger or digging through the W3SVC logs.

    I think you should make the web service requests asynchronous.  Also, it is possible that you need to set the TcpTimedWaitDelay registry setting, which has a default of 2 minutes, as well as the MaxUserPort setting, to more appropriate values for your load.  You can determine whether this is necessary by running "netstat -n" and looking to see how many connections are in the TIME_WAIT state.  See KB 328476 for a better description of this.

    Please do set MaxConcurrentRequestsPerCPU = 5000, which is the new default in v4.0.  The v2.0/3.5 default of 12 is too low.  However, you also need to make your web service requests asynchronous.

  25. dzhafar says:

    Thomas,

    Thanks for the prompt response.

    Regarding your questions about performance counters - I can tell you that memory usage is very low; however, Processor Time at those times is almost 100%. I do not know about the other counters - we rolled back our code changes, but surely I will collect these counters when the code is deployed again.

    We will make calls async, this is on our to do list for some time.

    Thanks again

  26. Pierre says:

    Hi Thomas,

    Thanks for writing this great article. I'm currently developing a few web-services using ASP MVC 2 and i'm getting really poor load-testing performance. My web-services will mostly be IO bound (execute a couple HTTP back end calls + DB write per request), so just for the heck of it i created a very basic service that just "Thread.Sleep" for 1.5 seconds and returns a line of text.

    I deployed that service on a (dual-core) Win2008 machine running IIS7 and when i load test it, no matter how many threads (users) i put on i cannot get above 6-7 req/s. I've looked at various perf counters on the server while stress testing and the only thing i see is the request processing time increasing as the number of users grows - for instance, with 50 users sending requests without "think time", request time averages 7.65 seconds, yet i do not see any queued requests (unless i'm not looking at the right counter) and the cpu is far from maxed out…

    I tried setting the "maxConcurrentRequestsPerCPU" to 5000 as you recommend but that didn't change anything.

    Do you have any ideas that could help me troubleshoot?

    Thank you,

    pierre

  27. Pierre,

    First, you might want to eliminate client issues by applying a load to a very simple Hello World page, just to make sure that you're able to maximize the CPU.

    Now if you're not having any client issues, then usually you can determine why the CPU is not maximized by applying the load and then breaking into the process with a debugger.  You will often see where the requests are blocked just by looking at stack traces for all the threads.  I would use windbg and sos for this, which you can learn about by doing a search online.  This should be quite easy if the % Processor Time is really low.

    Alternatively, you could use a call attributed profiler, but this will be harder to setup, and probably cost you some money unless you've already purchased a profiler.  A sampling profiler often won't help in this situation, although in general, I prefer them over call attributed profilers.  Anyhow, when I can't maximize the CPU, I'll just break in with a debugger and see what the threads are doing.

    Thanks,

    Thomas

  28. Pierre says:

    Thanks for you response Thomas.

    Yes, when testing with a very simple page, i can easily get > 1,000 req/s (CPU close to be maxed out). What i'm testing against though, is a stub service that just sleeps (Thread.Sleep) for 1,500ms and then returns, without  processing anything.

    When i run perfmon on the server, i see the "Request Execution Time" to be slightly above 1500ms which makes sense, but my load tester (runs on a separate machine, same network) cannot get above 6 or so req/s, no matter how much load i set.

    I can't explain it. If i run the stub and load-test in local and break as you suggested, i always end up on the Thread.Sleep(1500) line, nothing in the call stack ([external code]). The CPU is sleeping, no request queued but my load tester shows a Page Response Time of >80 seconds (which explains the low output).

    pierre

  29. Pierre,

    If all threads are in Sleep and you're only seeing 7 rps on this dual core server, my guess is that your load client is only issuing about ten requests concurrently.  Otherwise, the thread pool work item queue would have entries and the thread count would grow (because the CPU utilization is so low) along with the number of concurrently executing requests.  I have a few questions:

    1) What's the value of the "ASP.NET\Requests Current" performance counter when you see 7 rps?

    2) Is the application pool using integrated or classic mode?

    3) What version of ASP.NET is this?

    4) Is the web service ASMX or SVC?

    Thanks,

    Thomas

  30. Pierre says:

    Hi Thomas,

    I understand what you are saying, that's why I'm confused in the first place. To make sure it wasn't something in my application, I created a brand new ASP MVC 2 project (project -> new -> "ASP.NET MVC2 Web Application", based on .NET 3.5 SP1), added a "TestController" with the following action:

    public class TestController : Controller

    {

       [HttpGet]

       public ContentResult LoadTest1()

       {

           Thread.Sleep( 1500 );

           return Content( "Done" );

       }

    }

    I deployed this on my dual-core test server, win2008 running IIS7, app pool set to integrated mode. I have the following in my C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Aspnet.config:

    <system.web>

    <applicationPool maxConcurrentRequestsPerCPU="5000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="5000"/>

    </system.web>

    I verified i was able to access my test action from a regular browser and started a load test from another machine on the same network: user load 500, no "think" time.

    The following is what I observe on the machine running the loadtest after about 8 minutes:

    * req/s: 5.98

    * avg response time: 78.4 seconds

    * no errors whatsoever

    * note that the local CPU is 1-2%

    On the test server, this is what I observe using perfmon:

    * request current: flat line at 501

    * request execution time: flat line at 1506 ms avg => this makes sense though I don't understand why I see the request coming back after 78 seconds at the other end

    * % proc time avg < 1%

    * requests queued as well as all other asp.net v2.0.50727 counters at 0

    I don't get it. I believe the expected behavior would be to see about 333 req/s (500/1.5) with an avg response time of 1.5xx seconds…

    Any ideas? I can send you the source code if needed (it's really easy to reproduce) 🙂

    Thank you,

    pierre

  31. Milind Amin says:

    Hi, We have server 2008 x64. so we have to add DWORD (32 bit) or QWORD (64bit) for maxConcurrentRequestsPerCPU?

  32. Pierre,

    Can you set this DWORD registry value, restart IIS, and run the test again?

    HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0\MaxConcurrentRequestsPerCPU = 5000

    It's not clear if you have v3.5 SP1, which is required in order to set the value in aspnet.config, so please try the registry change and restart IIS.

    Thanks,

    Thomas

  33. Pierre says:

    Thomas,

    I do have .NET 3.5SP1 installed but in any case already had that registry entry during my previous tests (I tried the registry before changing the aspnet.config).

    Also, in one of my tests I tried to prove that the aspnet.config was indeed being used by setting maxConcurrentThreadsPerCPU to 1 and keeping the user count at 500: I immediately saw 500 requests queued when I started the load test, which is a very different behavior than the test I described in my previous post.

    I know it's a lot to ask and I thank you for keeping up so far, but if you have a load tester handy I would really appreciate it if you could try to reproduce this. I can send you my test project if needed but it would be almost faster to create it from scratch (the extent of the code is the one action I pasted in my previous post) 🙂

    Thank you,

    pierre

  34. I can't help, but I bet you can narrow this down yourself.  You might alternatively try Microsoft Product Support, or one of the forums, like those on http://www.asp.net or http://www.iis.net.

    Thanks,

    Thomas

  35. Randhir says:

    Hi Thomas,

    Does IIS 7 integrated mode ignore maxConnection attrib (under connectionManagement) as well? Is it possible that Pierre's number of outgoing TCP/IP connection is throttled at 2?

    I am trying to make sure the optimum settings for my app, other than setting the maxConcurrentRequestsPerCPU.

    Thanks!

    Randhir

  36. In all ASP.NET applications, the maxConnection attribute is set automatically to 12 * #CPUs when autoConfig="true" in the processModel configuration section of machine.config.  By default, autoConfig="true".  So for example, on a dual core server, it is set to 24 automatically.  

  37. Mikhail says:

    Hi Thomas,

    The 3-rd paragraph of the article begins with the following sentence:

    "The CLR Threadpool is not the only queue used by ASP.NET when hosted in IIS 6.0. There are also queues at the application level, within each AppDomain."

    The 5-th one also says:

    "When ASP.NET is hosted on IIS 7.0 in integrated mode, the use of threads is a bit different. First of all, the application-level queues are no more. … So how do things work for IIS 7.0 integrated mode? Similar to IIS 6.0, the request is still handed over to ASP.NET on an IIS I/O thread. And ASP.NET immediately posts the request to the CLR Threadpool and returns pending. … Finally, once the request is picked up by a thread from the CLR Threadpool, we check to see how many requests are currently executing. If the count is too high, the request is queued in a global (process-wide) queue. This global, native queue performs much better than the application-level queues used when we’re running in ISAPI mode (same as on IIS 6.0)"

    So paying attention to what I quoted, I realized that both IIS 6 and 7 implement the following logic:

    Get the request on an I/O thread -> queue it to the CLR thread pool (the same one available from C# code as the static ThreadPool class) -> if some threshold of threads in the pool is exceeded, newly accepted requests are put into a) in the case of IIS 6, the application-level queue, and b) in the case of IIS 7, the global process-wide queue.

    What confused me a bit:

    1) What is the application-level queue you wrote about in the case of IIS 6 (I can't find any article where it is described or even mentioned on the Internet)? Also, I thought the CLR thread pool you wrote about is the same thread pool that can be accessed from C# code as the static class 'ThreadPool', and that it is AppDomain-wide. Am I right here?

    2) Am I right that processModel/requestQueueLimit and applicationPool/requestQueueLimit restrict the total number of requests received by IIS but not yet processed, that is, for IIS 6 the sum of the object counts in all queues (thread pool and application level), and for IIS 7 the thread pool and process-wide queues respectively?

    3) And I'm just wondering how you identify the threshold at which you switch to the process-wide queue in the case of IIS 7, via maxConcurrentRequestsPerCPU (and maxConcurrentThreadsPerCPU). You wrote:

    "Finally, once the request is picked up by a thread from the CLR Threadpool, we check to see how many requests are currently executing. If the count is too high, the request is queued in a global (process-wide) queue." but I still can't understand the details.

    Tons of thanks in advance!

    Mikhail Mikheev

  38. Mikhail,

    1) On IIS 6 and in IIS 7 classic mode, each application (AppDomain) has a queue that it uses to maintain the availability of worker threads. The number of requests in this queue increases if the number of available worker threads falls below the limit specified by <httpRuntime minFreeThreads=/>.   When the limit specified by <httpRuntime appRequestQueueLimit=/> is exceeded, the request is rejected with a 503 status code and the client receives an HttpException with the message "Server too busy."   There is also an ASP.NET performance counter, "Requests In Application Queue", that indicates how many requests are in the queue.  Yes, the CLR thread pool is the one exposed by the .NET ThreadPool class.

    2) The requestQueueLimit is poorly named.  It actually limits the maximum number of requests that can be serviced by ASP.NET concurrently.  This includes both requests that are queued and requests that are executing.  If the "Requests Current" performance counter exceeds requestQueueLimit, new incoming requests will be rejected with a 503 status code.

    3) On IIS 7 integrated mode, if the total number of executing requests is greater than maxConcurrentRequestsPerCPU * #CPUs, then new incoming requests will be inserted into the queue.

    Thanks,

    Thomas

  39. Mikhail says:

    Thomas, thanks for the explanation, it's quite clear now.

    Mikhail Mikheev

  40. Randhir says:

    Thanks Thomas. So essentially, as I understand it, if app A connects to a web service S, then there cannot be any more than 12 calls (per CPU core) in flight from A to S. Other calls to S from A will block on the acquire-connection call until either a timeout happens or one of those 12 calls completes and closes the connection.

    Is it the same case if A is calling S asynchronously? Can a 13th call be placed on the network while the other 12 are waiting on an I/O completion port?

  41. Randhir,

    It is unfortunate that we have the constraint set by the maxconnection attribute (in connectionManagement section).  It is really a pain to scale with System.Net.HttpWebRequest.

    To answer your question, ASP.NET sets maxconnection to 12 times the number of CPU cores automatically when processModel/autoConfig is set to true.  It does not matter if you use System.Net.HttpWebRequest synchronously or asynchronously, the number of connections between a client and a server is limited by the value of maxconnection.  To change the config value, you have to disable autoConfig or set System.Net.ServicePointManager.DefaultConnectionLimit programmatically, once, after the ASP.NET AppDomain starts.

    To scale with System.Net.HttpWebRequest, you will also probably need to change TcpTimedWaitDelay (to 30 seconds) and MaxUserPorts (to 0xFFFF), but before you do that you might want to determine if this is an issue.  See blogs.msdn.com/…/powershell-script-troubleshooting-for-port-exhaustion-using-netstat.aspx for some diagnosis steps.

    Once you've solved the network bottlenecks, you're probably going to run out of threads next.  Keep an eye on the thread count and the ThreadPool thread limits.

    Finally, if you've removed all these bottlenecks, you will very likely run into memory issues.  Keep an eye on % Time in GC and if it is above 5% (it most likely will be), use the CLR Profiler to reduce allocations on a per request basis.  One of my first blog posts explains how to use the CLR Profiler.

    Making web requests to a backend tends to be so expensive and complicated to configure that I suspect most people scale by adding additional web servers to a web farm.

    Thanks,

    Thomas

  42. John Lemp says:

    Thomas, Do you know if  the maxConcurrentRequestsPerCPU settings and/or defaults changed in .Net 4 and/or Server 2008 R2?

  43. John, The default for maxConcurrentRequestsPerCPU in v4.0 is 5000.  With WS08 R2, you can also set the value in the aspnet.config file as described in the blog post.

    Thanks,

    Thomas

  44. TomW says:

    Hi Thomas. I wanted to verify that I have the correct steps in applying the settings

    Is this value also needed in the registry? – maxConcurrentThreadsPerCPU="0".  

    Do I need to add the DWORD in the registry AND update the ‘aspnet.config’ or is the registry the only location needed?

    Thanks!

  45. RandomEngy says:

    Just wanted to thank you for writing this article. This was very enlightening to me and helped me get to the bottom of a couple of troublesome bottlenecks in our code.

  46. Just want to ask two questions. I am using ASP.NET 4.0, IIS 7.0 integrated mode. 1) If I need data from a remote database which takes a lot of time to respond, should I use the synchronous API or the asynchronous API? Do my other requests get some benefit if I use the asynchronous API for the current request? 2) How do you relate ThreadPool.GetMaxThreads (or ThreadPool.SetMaxThreads) with maxConcurrentRequestsPerCPU? I mean, when maxConcurrentRequestsPerCPU=5000 and ThreadPool.GetMaxThreads=100, then what happens?

  47. TomW, sorry for the very long delay, for Windows Server 2008 R2 you can set the registry key or aspnet.config.  aspnet.config will override the registry key.  Settings that you do not specify will assume their default values.

  48. anonymious: It would be better not to block threads while waiting for a slow backend (more than ~100 milliseconds) to respond.  Asynchronous applications will respond better to backend latency and fluctuations in client load.  Your user experience will be better with an asynchronous application if the backend has latency.  The downside is that asynchronous programming is more complex and more difficult to debug and maintain, but once written you don't have to revisit it until it breaks or you need to update it.  To answer your question about the thread pool, when the application is asynchronous it will not use threads while waiting for the backend to respond, so you can have higher request concurrency and use very few threads.

  49. scott says:

    Have been suffering from a restriction on the number of REQUESTS that will process at any one time against a single WORKER PROCESS. Loading IIS, WORKER PROCESS, VIEW CURRENT REQUESTS, I would generally see 0-10 concurrent. I use Performance Monitor to view CONCURRENT CONNECTIONS against the WEB SERVICE process, and when they hit 600+ the number of CURRENT REQUESTS jumps from 0-10 to 100s or 1000s, slowing the application. (IIS7, ASP.NET, mixture of synchronous and asynchronous requests).

    Have tried amending machine.config, the registry and aspnet.config as below:

    ——————————————————————————-

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0

    MaxConcurrentRequestsPerCPU (SET TO 0)

    ——————————————————————————-

    ASPNET.CONFIG AMENDMENTS:

    C:\Windows\Microsoft.NET\Framework\v2.0.50727\ASPNET.CONFIG

    C:\Windows\Microsoft.NET\Framework64\v2.0.50727\ASPNET.CONFIG

    <system.web>

      <applicationPool

         maxConcurrentRequestsPerCPU="0"

         maxConcurrentThreadsPerCPU="0"

         requestQueueLimit="500000" />

    </system.web>

    ——————————————————————————-

    machine.config =

    <system.web>

    <processModel

    autoConfig="true"

    maxIoThreads="50000"

    maxWorkerThreads="50000"

    minIoThreads="500"

    minWorkerThreads="500"

    />

            <httpRuntime

                minFreeThreads="176"

                minLocalRequestFreeThreads="152"

            />

    ——————————————————————————-

    But to date these changes have not resolved the issue or have appeared to influence it at all.

    Have attempted to increase the MAXIMUM WORKER PROCESSES against the app pool to 2 (creating a web garden) and now see 2 x app pools as expected. Have enabled all web site features in the hope that this will resolve it, but reading this blog suggests it probably won't.

    Thanks for any help.

  50. Liming says:

    Hello Thomas:

    I know you explained the differences of IIS on different .NET framework. My question is, could you shed some light in terms of IIS in the context of different OS?

    For instance, I just read this blog

    blog.stevensanderson.com/…/measuring-the-performance-of-asynchronous-controllers

    and it stated

    "Don’t even bother trying to load test your asynchronous controllers using IIS on Windows XP, Vista, or 7. Under these operating systems, IIS won’t handle more than 10 concurrent requests anyway, so you certainly won’t observe any benefits."

    So are you talking about IIS 7 in say Windows Server 2008, but not in the context of say Windows Vista or 7?

    Thanks a bunch.

  51. Hi Liming,  

    The server operating systems are configured and tuned to run services, and the client operating systems are configured and tuned for the user application experience.  The client also has limitations on the number of remote connections, such as a maximum of 10 TCP/IP connections.  Yes, you would not load test a client operating system.  Do your scale, performance testing, and production deployment on a server operating system.

    Thanks,

    Thomas

  52. Jim says:

    Thomas,

    I have been exercising the concurrency settings and processModel in both IIS 6 and IIS 7 and have learned a lot.  Thank you for a great explanation.  However, I am still unclear about how to view the queue depth in the native global queue.

    For IIS 7 with an integrated app pool under .NET 2.0, if I set MaxConcurrentRequestsPerCPU to 5000 in the registry I can see I am no longer bound by this concurrency setting but instead by the asp.net processModel.  However, what I've noticed is that in the default processModel config on a single core VM, where there is 100 maxWorkerThreads and 1 minWorkerThread (as obtained by System.Threading.ThreadPool.GetMaxThreads and System.Threading.ThreadPool.GetMinThreads), IIS becomes slow to respond as a large amount of traffic goes up (using WCAT); essentially I'm waiting for the CLR Thread Pool to allocate threads (I know how to set minWorkerThreads to account for a burst of traffic, but please read on as this is not what I'm confused about).  I'll see "ASP.NET Apps v2.0.50727\Requests Executing" slowly increase but no queuing in "ASP.NET v2.0.50727\Requests Queued".  I can see the .NET CLR worker threads slowly being allocated using System.Threading.ThreadPool.GetAvailableThreads AND IIS is slow to respond until more threads are allocated, but no queue depth is visible.

    That said, if I understand correctly, requests are queuing in the native global queue until the .NET CLR can spin up the required worker threads, but I don't know what perfmon counter I can view to see this global native queue depth, if there is one available.  So, until IIS 7 hits max worker threads we're slow, but I can't see how many are queued.  (In IIS 6, while threads are slowly being allocated in the .NET CLR Thread Pool, I can see the queuing.)

    PS.  If I do the same test using MaxConcurrentRequestsPerCPU set to 12 or another low value, once I hit the request concurrency limit, the "ASP.NET v2.0.50727\Requests Queued" counter starts going up by the expected delta.

    Hopefully the info provided is clear.  I really want to understand how I can see the queue depth of the global native queue you referred to.

    Thanks!

  53. Jim,

    In v2/v3.5 on IIS 7, the "Requests Queued" performance counter is not incremented when we post incoming requests to the CLR ThreadPool.  It is only incremented when we put requests in our native queue.  In other words, we receive the request on an IIS thread and call CorQueueUserWorkItem to switch to a CLR thread, but we don't increment "Requests Queued" before we call CorQueueUserWorkItem.  For v2/v3.5 on IIS 6, we do, so on IIS 6 you'll see "Requests Queued" increase if there is a sudden burst load but no available CLR threads to invoke the callback.  When the callback is invoked on IIS 6, it is then that we decrement "Requests Queued".

    In v4.0 this was "fixed".  So in v4.0 and later on IIS 7, you will see "Requests Queued" increment for the situation when CorQueueUserWorkItem is called and decremented when the callback is invoked.  We also increment "Requests Queued" if we add a request to the native queue.  

    Note that the IIS 7 native queue is not used unless the MaxConcurrentRequestsPerCPU or MaxConcurrentThreadsPerCPU limits are exceeded.  So these burst loads that you're experimenting with are not going to cause the request to be inserted in the IIS 7 native queue.  First the requests are put into the CLR ThreadPool work item list by calling CorQueueUserWorkItem.  It could take a while for the callback to be invoked if there aren't enough threads.  When the callback is invoked, the concurrency limits are checked and the request will either execute or be put into the IIS 7 native queue.

    Thanks,

    Thomas

  54. Tobias says:

    (continued questions from previous comment)

    3) From your question response of 8 Dec 2011, I understand it that “Requests Queued” is used both to measure the number of requests that are waiting as well as the number of executing requests in the ThreadPool, plus those requests that possibly (unlikely if correctly configured) hit the native queue. It seems to me then, that it is quite difficult to distinguish the scenario when there actually are requests in the native queue.

    — (a) Would that be possible in .NET 4.5 / IIS 7 integrated mode? (how if so? 🙂

    4) You mentioned in comment 22 april, 2010 and 28 oct, 2010 that if you run out of (ephemeral) ports it can be good to decrease the TcpTimedWaitDelay registry setting (as well as MaxUserPort). For us I don’t think this is a problem since we use TCP keep-alive (persisted) connection for the web service requests.

    — (a) However, in order to find out if this was a problem, would it not be enough to check the performance counter [TCPv4.Connection Established] and see if this value is close to the maximum amount of available ports (which should be quite high on Windows 2003/2008 by default).

    Furthermore, I think decreasing TcpTimedWaitDelay is a good option, although perhaps one needs to consider other types of TCP-reliant, high-latency I/O requests running on the server (e.g. web services and even database calls) when lowering this number.

    — (b) Do you agree?

  55. Tobias says:

    I am awfully sorry.. Since I tried posting two comments one after another, this blog engine seems to have chosen to post only the second follow-up comment. Here's the starting story as well:

    Based on my reading here, it seems the solution is to set the ServicePointManager.DefaultConnectionLimit programmatically in production code as you suggest (since we use autoConfig set to true). However before I try this in our production environment (initiating a redeploy) I have some remaining categorised questions with regards to what I’ve read (sublabeled (a), (b) etc to make your answering easier):

    1) appRequestQueueLimit config:

    — (a) Do I understand it correctly that the httpRuntime setting appRequestQueueLimit configuration only is applicable for IIS 6 and 7 when running in classic mode? I.e it sets the limit for the old application scoped queue (with “miserable performance” as you write)?

    2) (questions a-f below)

    When our servers gets over loaded and don’t scale (CPU only at 20%), we get 503 error code in response to the original request + that the async. web service requests ends up with quite a few exceptions: “SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full”. A problem with this type of error message hint is of course  that you are not sure which queue may be full (if it is the queue).

    — (a) So how many queues exist on the server in the (total) request pipeline?  From your description, in our scope, there is both a (1) process-wide native queue as well as the (2) CLR ThreadPool that naturally can be regarded as a queue. But you also mention the HTTP.sys kernel queue.

    — (b) Is the HTTP.sys kernel queue the same queue as the (1) process-wide queue?

    — (c) If not, could you perhaps explain briefly what the purpose of this HTTP.sys queue is?

    — (d) Are there any other lower-level queues that can become problematic? (e.g. the network card driver)

    However, in our case, if the problem is actually due to connectionManagement/MaxConnections, I guess the web service error message is quite logical. If there is no possibility for the web service to get access to the connection (keep-alive), the number of requests waiting to execute in the ThreadPool should of course increase until the maximum number of threads is hit. A poor solution would be to increase the number of maxThreads in the ThreadPool…

    — (e) Is there a performance monitor for showing the current number of threads in the ThreadPool?  

    Also, there seems to be a performance counter called [Web Service.Current Connections] that should really hit a max pegged value (12*core count) if this is the problem.

    — (f) Is this correct (below the (e) question)?

  56. Tobias says:

    …and here is the very, very start of the second comment… My cut'n paste bad I'm afraid. 🙁

    Thanks for some great explanations and for taking the time to answer all these questions. It really clarifies things. In our case, we have a problem in production with an .NET 4.5 / IIS 7.5 web server running integrated mode when we get a high load of requests that utilises a high degree of outgoing asynchronous web service requests (stack trace reveals HttpWebRequest usage beneath the hood of the HttpClient usage).

    Based on my reading here, it seems the solution is to set the ServicePointManager.DefaultConnectionLimit programmatically in production code [see comments above for rest]

  57. Tobias says:

    I have reposted the three comments above with nice markup in the IIS forum here: forums.iis.net/…/1. If I find any answers on my own, I'll write them there instead.

  58. Tobias, in response to the post you gave on the IIS forum:

    1) Yes, appRequestQueueLimit only applies to IIS 6 (also 7 when running in classic mode).

    2a) IIS 7 and later have the queues that you mention.

    2b) The HTTP.sys kernel queue is not the same as the ASP.NET process-wide queue.

    2c) The HTTP.sys kernel queue is essentially a completion port on which user-mode (IIS) receives requests from kernel-mode (HTTP.sys).  It has a queue limit, and when that is exceeded you will receive a 503 status code.  The HTTPErr log will also indicate that this happened by logging a 503 status and QueueFull.

    2d) I do not know the details of how HttpClient or HttpWebRequest are implemented.  You need to ensure that you are closing/disposing all System.Net objects properly.  You likely need to increase connectionManagement/maxconnection in the config file or increase it programmatically via ServicePointManager.DefaultConnectionLimit.  You may also need to modify the default registry values for TcpTimedWaitDelay and MaxUserPorts if your connections are sitting in the TIME_WAIT state or you do not have enough ports available.  Be careful with these registry values; you need to know what you're doing, and why you're doing it.  Perhaps the System.Net folks have a forum?

    2e) "Process(w3wp)\Thread Count" and the ".NET CLR LocksAndThreads" performance counters will help a little, but ultimately you will need to resort to the debugger (windbg) and the sos.dll debugger extension.  It has a !ThreadPool command that will tell you how many threads are active in the pool and what the maximum limits are.

    2f) "Web Service\Current Connections" is the number of connections to IIS.  This has nothing to do with your outbound System.Net connections.

    3) ASP.NET v4.5 has a performance counter in the "ASP.NET" category specifically for the native queue.  This is new to v4.5.

    4a) Perhaps, but I'm not familiar with the "TCPv4\Connection Established" performance counter.

    4b) Yes, I would be careful about changing TcpTimedWaitDelay and/or MaxUserPort.  You need to know what you're doing, and why you're doing it.

    Thanks,

    Thomas

    PS. I moved away from the ASP.NET team several years ago and am not actively posting to this blog.  I recommend that you post questions on the ASP.NET forums, which is actively used by many ASP.NET folks.

  59. Chintan Jhaveri says:

    Hi,

    When we talk about any of this setting which needs to be applied per CPU, is it physical or logical CPU?

    for e.g. minfreethreads = 88 * # CPU

    Here # CPU is physical or logical?

  60. daveblack says:

    The number of CPUs recognized by .NET, and by extension ASP.NET, is the number of *logical* CPUs.

  61. Krunal Maniar says:

    Hello,

    I have an application running on IIS and I want to make it capable of handling 50,000 requests per minute.

    Currently, when the number of requests goes above 30 thousand requests per minute, I am getting error 503.2 Service Unavailable, which is due to the request limit being exceeded.

    I have an HP DL series server with 2 processors with 12 cores each.

    Can anyone suggest the best IIS settings for this application?

    Thanks in advance.

  62. Marthinus Botha says:

    Greetings, I would like to know what the HTTP.sys queue limit should be when running a 32-bit application pool on IIS 7.0 with Server 2008 running SQL Server 2008. Any advice will be appreciated.

    Thanks in advance.

  63. Andrew says:

    Thank you very much for this great article. It gave my team and me the opportunity to fix a production issue we were facing for almost one year.

  64. Niranjan says:

    thanks for writing this good article.

    I am setting the following configuration and the HTTP.sys queue length to 10.

       <applicationPool

           maxConcurrentRequestsPerCPU="13"

           maxConcurrentThreadsPerCPU="0"

           requestQueueLimit="15" />

    I was expecting to get 503s after 25 or 35 requests, but it was able to accept more than 1000 requests; the "Requests Current" perf counter was showing more than 1000. I also modified the processModel element of machine.config but it did not work. Actually, maxConcurrentRequestsPerCPU and maxConcurrentThreadsPerCPU are working, but requestQueueLimit is not being honored.  Am I doing anything wrong? Any suggestions for debugging?

  65. Niranjan says:

    Ignore my above comment, I was able to verify it. It was accepting more than the queue length the first time; later on everything went fine.

  66. bjg says:

    Note: The max connections limit for ASP.NET under autoConfig changed in .NET 4.5. Go to the MS Reference Source website (for the .NET 4.5 source code) and find the "SetAutoConfigLimits" private function of the HttpRuntime class in System.Web. You can see that the maxconnection limit under autoConfig now defaults higher, to Int32.MaxValue (previously 12 * cores as per the update above).

  67. pregunton says:

    Sometimes, on IIS 7.5 / Windows Server 2008 R2, in a web site running ASP.NET 4.5.1 with a Classic AppPool (CLR 4.0), I get "Server Too Busy".

    What are good patterns and practices for dealing with this? Is there a web.config configuration that avoids performance issues?

    Thx a lot.
