What’s New in WCF 4: Channels

As we get closer to the release of .NET 4, it becomes easier to talk about exactly which new features we’re planning to provide. This list differs in places from what you saw in the Beta 1 release, and there’s always a chance that things will change again.

The first part of the list comes from the area of WCF channels.

  • Non-destructive queued receive: the current support for queuing offers a choice between transacted messaging and purely best-effort messaging. It’s hard to reason about systems that are best effort, and it’s hard to achieve high volume and scale while using transactions. We’ve introduced a new queued receive pattern that allows multiple consumers to work off of a single queue while retaining the ability to do transactional processing in your application.
  • Extended protection of secure channels: extended protection is a new mechanism in Windows for binding one security system, such as an SSL stream, to another security system, such as Kerberos or HTTP authentication. By associating the two security systems together and ensuring that the association is preserved over time, it becomes harder for an attacker to get a foothold even when they’re able to intercept and modify your network traffic.
  • Simple byte stream encoding: when talking to something other than a web service you often need to be able to directly influence the stream of bytes being exchanged with the network transport. The byte stream encoder is a low-overhead way of enabling your application to work with messages that are essentially binary objects.
  • Automatic HTTP client decompression: compression is an approach for reducing the amount of network traffic in exchange for spending more CPU time on the client and server. Many web servers support compression modules, but the client and server first need to agree to exchange data in a compressed format. We’ve made that easier when using HTTP by allowing the client to automatically negotiate gzip or deflate compressed streams and then automatically decompress them.
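
The non-destructive queued receive pattern above is driven from binding configuration. Here is a minimal sketch, assuming the MSMQ binding exposes the mode through a receiveContextEnabled-style attribute (the attribute and binding names are illustrative, not a confirmed final schema):

```xml
<!-- Sketch: peek-lock style receive on the MSMQ binding.
     receiveContextEnabled is an assumed attribute name; the message
     stays on the queue until the application explicitly completes it. -->
<system.serviceModel>
  <bindings>
    <netMsmqBinding>
      <binding name="lockedQueue"
               exactlyOnce="true"
               receiveContextEnabled="true" />
    </netMsmqBinding>
  </bindings>
</system.serviceModel>
```

With the lock in place, each service operation can complete or abandon the message explicitly, which is what lets several consumers drain one queue without giving up transactional processing.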
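
Extended protection of secure channels is switched on through the transport security settings of a binding. A hedged sketch (the extendedProtectionPolicy element and its policyEnforcement values are assumptions based on the Windows extended protection model):

```xml
<!-- Sketch: binding Windows authentication to the underlying SSL channel.
     policyEnforcement="WhenSupported" opts in when the peer also supports
     channel binding; "Always" would make it mandatory. -->
<basicHttpBinding>
  <binding name="channelBound">
    <security mode="Transport">
      <transport clientCredentialType="Windows">
        <extendedProtectionPolicy policyEnforcement="WhenSupported" />
      </transport>
    </security>
  </binding>
</basicHttpBinding>
```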
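
The byte stream encoder slots into a custom binding in place of the text or binary SOAP encoders. A minimal sketch (element names assumed from the custom-binding configuration schema):

```xml
<!-- Sketch: raw byte messages over TCP. Each message body is handed
     to the application as a single block of bytes rather than a
     SOAP envelope. -->
<customBinding>
  <binding name="rawBytes">
    <byteStreamMessageEncoding />
    <tcpTransport />
  </binding>
</customBinding>
```

The same encoder could sit on top of the HTTP transport instead when the peer is a plain HTTP endpoint rather than a web service.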
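
The negotiation behind automatic decompression is ordinary HTTP content coding; conceptually the exchange looks like this (host and path are placeholders, headers abbreviated):

```http
GET /service.svc HTTP/1.1
Host: example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip

<compressed entity body>
```

The client advertises the codings it accepts, the server picks one and labels the response, and the client transparently decompresses before handing the message to the application.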
Comments (8)
  1. Richard Collette says:

    I am wondering whether, in this release, some additional consistency around the handling of the address and listenUri endpoint attributes is employed across the various Microsoft-provided behaviors.

    For example, if you create a service with the enableWebScript behavior, the main endpoint sets up correctly.  However, the two metadata endpoints for /js and /jsdebug only use the address attribute.  They also do not pick up the addressFilterMode that was set on the service endpoint.  The result is that it does not appear possible to have an address of:


    and a listenUri of


    Note that the public address is HTTPS while the internal address is HTTP, and the external address also introduces a virtual directory.  All of this has to do with the use of SSL load balancers and authentication managers for the public-facing address.

    In this scenario, even manually applying AddressFilterMode.Any to the metadata endpoints is not enough, because no listener is being set up for the listenUri.

  2. flalar says:

    Hi there,

    Firstly, I must say that adding gzip and deflate compression is a much needed feature! This used to be easy with the old asmx/IIS model.

    I’m a bit curious about the decompression features. Will these only be available for the client side? Further, we have a scenario where we would like to use the compressed message before it is deserialized or decompressed and return the compressed XML to the client. Will this be easily achievable with the new features?


  3. Kevin says:

    Today I can use compression to send an HTTP response from ASP.NET to the client (browser).

    In WCF, would compression apply to the request too, or only to the response?

  4. Hi Kevin,

    The compression feature only supports automatic decompression of responses.

  5. Hi flalar,

    Automatic decompression will only be available for the client and happens before the client has any ability to see the data.  Therefore, you won’t be able to get access to the compressed stream or see how the message was transferred.  The change to the HTTP accept header will also be automatic.

  6. Hi Richard,

    Since you’re using IIS I believe all messages need to have an internal address that is a subpath of one of the service base addresses (not necessarily the base address of the corresponding service endpoint though).  Otherwise, IIS won’t deliver the message to us.  We’ll also have some additional flexibility around metadata to make it easier to configure separate internal and external addresses.

  7. flalar says:

    Hello Nicholas,

    Thanks for your reply! Out of curiosity, I’m wondering why you are not planning to support automatic compression on the server side? Cheers

  8. Hi flalar,

    The decompression feature is primarily intended for use with web servers that can dynamically compress application responses.  We might add support for server compression in the future to cover other scenarios, such as when you have your own host or when you’re not using HTTP.

Comments are closed.