How do the ServicePointManager.Expect100Continue and ServicePoint.Expect100Continue properties change the behavior of my HttpWebRequest? This is a question that seems to come up from time to time and I thought I would give a little more information.
First, let's talk about the difference between the two versions of this property. ServicePointManager is the master object that controls the creation and lookup of ServicePoints. As a general rule, properties set on ServicePointManager apply only to ServicePoints created from that point forward. If you set the property on ServicePointManager after a ServicePoint has already been created, the new value will not be propagated to the existing ServicePoints; to change those, set ServicePoint.Expect100Continue on each one directly.
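The distinction between the two properties can be sketched as follows; this is a minimal illustration, and the URI is a hypothetical placeholder:

```csharp
using System;
using System.Net;

class Expect100Sample
{
    static void Main()
    {
        // Applies to ServicePoints created from this point forward.
        ServicePointManager.Expect100Continue = false;

        // Affects only the ServicePoint for this host, including one
        // that already existed before the ServicePointManager setting
        // changed.
        Uri uri = new Uri("http://example.com/upload"); // hypothetical endpoint
        ServicePoint sp = ServicePointManager.FindServicePoint(uri);
        sp.Expect100Continue = true;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        // This request follows its ServicePoint's setting (true), so
        // the "Expect: 100-continue" header will be sent.
    }
}
```

In practice, setting ServicePointManager.Expect100Continue early in application startup, before any requests are made, is the simplest way to apply one policy everywhere.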
Now, having said that, let's look at what this property does. In a nutshell, it tells the HttpWebRequest object whether or not to send the "Expect: 100-continue" header with the request on the wire. What does this mean? When this header is sent, it tells the server that the client is going to delay sending the body of the request for some period of time because it wants the server to give it the OK (a 100 Continue response) to upload the data. In current implementations, the HttpWebRequest object waits 350 milliseconds. Why is this important? Many things can go wrong during the HTTP request process: the server could redirect the client to a different location, it could require authentication, the connection could have been dropped between requests, and so on. In the case of redirection or authentication, this allows the client to avoid uploading data it knows is useless, if possible. One example of where the client can avoid uploading the data is when it sets the HttpWebRequest.SendChunked property to true. This means that the client will not tell the server the upload size up front, but will send the data in chunks. Each chunk begins with information on how big it is, and the client sends a zero-length chunk to advise the server that there is no more data coming.
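To make the exchange concrete, here is roughly what the traffic looks like on the wire when the header is in play (host, path, and body size are placeholders):

```http
POST /upload HTTP/1.1
Host: example.com
Content-Length: 1048576
Expect: 100-continue

    <-- client pauses here (up to ~350 ms) waiting for the server -->

HTTP/1.1 100 Continue

    <-- only now does the client send the 1 MB body -->
```

If the server instead answers immediately with a redirect or a 401 challenge, the client learns this before any of the body has gone out and can skip the wasted upload.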
What about when the connection is dropped between requests? Because HttpWebRequest keeps a pool of connections to any given server, the remote server could close a connection without the client knowing it has been dropped. Because upload requests cannot be assumed to be idempotent, the client cannot safely retry the request if an error occurs beyond a certain point in the request process. If the client has sent even one byte of the upload body on the wire, HttpWebRequest will fail the request if a connection error occurs. How does HttpWebRequest know whether it has sent the body on the wire? It uses the delay between sending the headers (including the "Expect: 100-continue" header) and sending the body as its way of figuring that out. If Expect100Continue functionality is enabled, this gives a small window in which upload requests can be retried safely. If it is disabled, then an error that occurs while sending the request will cause the entire request to fail. This is true even if the data never hits the wire -- all HttpWebRequest knows is that it handed the data down to the underlying socket layer; it cannot tell whether the error occurred before or after the data went out on the wire.
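A sketch of how this plays out from the caller's side, assuming a hypothetical upload endpoint; the comments mark where the safe-retry window described above opens and closes:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class UploadSample
{
    static void Main()
    {
        Uri uri = new Uri("http://example.com/upload"); // hypothetical endpoint
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.SendChunked = true; // no Content-Length; body goes out in chunks

        byte[] body = Encoding.UTF8.GetBytes("payload");
        try
        {
            using (Stream stream = request.GetRequestStream())
            {
                // Between sending the headers and writing the first body
                // byte (the Expect: 100-continue window), HttpWebRequest
                // can transparently retry on a fresh connection if the
                // pooled one turns out to be dead.
                stream.Write(body, 0, body.Length);
                // Once any body byte has been handed to the socket layer,
                // a connection failure surfaces as a WebException instead.
            }
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine((int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // Uploads are not assumed idempotent, so whether a retry is
            // safe at this point is the caller's decision, not the stack's.
            Console.WriteLine(ex.Status);
        }
    }
}
```

Note that the automatic retry inside the window is internal to HttpWebRequest; the catch block here only handles the failures the stack refuses to retry on your behalf.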
I typically suggest that developers leave Expect100Continue set to true unless they are desperate for performance gains or are trying to work around some issue with the server or client.