I seem to get this question a lot, and I come across many customer environments where
web gardening has been enabled in the belief that it will automagically improve performance
for their site or application.
Most of the time, that is not the case. The funny thing is that once I finally convince
them that web gardening is not the way to go, they try to apply that same knowledge
to other sites and applications in their environment. When this happens, I’ll
get an e-mail or phone call asking for some guidelines on when to enable web gardening.
We typically recommend using Web Gardening as a stop-gap (or workaround) for when
a customer has a core issue that is limiting their website and web application scalability.
For example, if a customer has a memory issue that is causing OutOfMemoryExceptions
in their main website, we may recommend web gardening to spread the load across multiple
worker processes while we assist them in resolving the core memory issue. Please
note that this also increases memory and processor utilization on the server,
which in some cases makes it non-viable.
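For reference, a web garden is simply an application pool whose processModel.maxProcesses setting is greater than 1 (the default). A minimal sketch of enabling one with appcmd — "MyAppPool" below is a placeholder name, not from any specific environment:

```shell
REM Turn the pool into a web garden by allowing 3 worker processes.
REM Substitute your actual application pool name for "MyAppPool".
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:3

REM Verify the current worker-process count for the pool.
%windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:processModel.maxProcesses
```

Setting maxProcesses back to 1 returns the pool to a single worker process once the underlying issue is resolved.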
As a best practice, create Web gardens only for Web applications that meet the following
criteria (taken from here):
The application runs multi-instantiated, so that a different instance of the application
can be assigned to each worker process.
The Web application is not CPU-intensive. If the CPU is the bottleneck, then adding
worker processes cannot help improve performance.
The application is subject to synchronous high latency. For example, if an application
calls a back-end database and the response is slow, then a Web garden supports other
concurrent connections without waiting for the slow connection to complete.
A good discussion of why not to use Web Gardening can be found here as well: http://blogs.technet.com/b/mscom/archive/2007/07/10/gardening-on-the-web-server.aspx