The TFS "extranet" ISAPI filter mechanics

What's this ISAPI filter chupathingy you're talking about?

The quick version?  The filter lets you support/require Basic (or Digest) authentication for particular clients (usually ones coming "from the internet").

 

Why do you need to use Basic or Digest, though? What's wrong with good ol' NTLM (Integrated Windows Authentication) over that there intarweb?

Ostensibly, nothing.  CodePlex is an obvious example of it working fine, as is Dev'garten, who goes through an ISA reverse proxy.  However, there are cases where the particular network topology causes problems with an all-NTLM solution.  Since our own Bill Essary provided me with a great description of the problem, I'll pass along his explanation:

The TechNet article that is linked from Rob’s blog post has a good explanation of the issues:

https://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/523ae943-5e6a-4200-9103-9808baa00157.mspx?mfr=true

The problem is not NTLM through an ISA Proxy per se, but through network devices that do not respect the stateful nature of the protocol.  NTLM will work in some cases, but it will fail in others.  We had examples at Company X, for instance, where 4 out of 5 people coming in through private ISPs could access a TFS server running NTLM behind an ISA reverse publishing proxy.  That same server was completely inaccessible when coming out of the Company X corporate network through a particular proxy device to hit the server.

So, I don't expect that everyone hosting their TFS in a DMZ or on the internet will need this ISAPI filter to help out their "extranet" scenario, but many will, due to limitations like these.

Ok, great, now I get what it does. But how does it do it?

While you can configure the ISAPI rules to be enforced on pretty much any given set of IP addresses, the most typical case is that these are client requests coming "from the internet".  For the purposes of this example, "from the internet" has a specific meaning: the client IP addresses that you've configured the ISAPI filter to consider "external".  Mechanically, those are the clients that get the restrictions enforced (moved to Basic/Digest and potentially required to come in over a secure port).

Another thing to keep in mind is that with this implementation, you configure the relevant web sites to support *both* NTLM (Integrated Windows Auth) and the "other" method (Basic/Digest), not either/or.  Why do we need to leave NTLM on?  One reason is that we require NTLM in our own internal TFS calls (we call other TFS components through their web service interfaces), including connections from (for instance) the TFS version control proxy.  Another is that for many cases, you'll still want your intranet (internal) clients to use NTLM, as they're already on your (hopefully secure) corporate intranet and won't be going through the network proxies that would typically cause NTLM problems.

Ok, with that out of the way...

If a client request is "coming from the internet", then the ISAPI filter does a couple of things:

  1. If the RequireSecurePort setting is turned on, then we verify that the request is coming over a secure port.  What does that mean?  Typically, it means that the request came over https instead of http, but strictly speaking, our check is against the SERVER_PORT_SECURE server variable that IIS uses.  If it fails this check, we don't continue processing the request, and the client will get a failure back.
  2. Assuming #1 passed, then when we're sending our 401 response back to a client (use Fiddler or Ethereal or whatever to check an NTLM-auth'd web request some time if you get the chance), we strip NTLM out of the list of authentication mechanisms sent back via the WWW-Authenticate header.  As a reminder, the web site is actually configured to support *both* NTLM and the "other" method (Basic/Digest); stripping NTLM out of the list of supported mechanisms forces the client to respond with non-NTLM (Basic or Digest) authorization.

Through these two mechanisms, we can enforce that our "internet-based" clients are both a) coming in over secure (encrypted) ports (very important if we're using Basic authentication) and b) using Basic or Digest (non-NTLM) authentication (based on how we configured the web sites).  This way, we get around any problems our intermediate proxies may have with NTLM.
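
To make that concrete, here's a minimal sketch of the decision logic in Python.  The real filter is a native ISAPI DLL, so the function and parameter names here are invented purely for illustration:

def process_external_request(require_secure_port, server_port_secure,
                             www_authenticate_headers):
    # 1. Optionally require a secure port; the real check reads the
    #    SERVER_PORT_SECURE server variable from IIS.
    if require_secure_port and not server_port_secure:
        return None  # stop processing; the client gets a failure back

    # 2. Strip NTLM from the 401's WWW-Authenticate headers so the client
    #    has to answer with Basic or Digest instead.
    return [h for h in www_authenticate_headers if not h.startswith("NTLM")]

# A 401 from a site configured for both Integrated and Basic auth:
print(process_external_request(True, True, ["NTLM", 'Basic realm="tfs"']))
# ['Basic realm="tfs"']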

 

What mechanism are you using to determine where the client is coming from?

For the purposes of this filter, we're using the REMOTE_ADDR server variable that IIS provides.

 

Ok, now I get why it's useful and what it does. Now, how do I configure it?

The walkthrough is a great doc to read for this (the "Configuring the ISAPI Filter" section in particular), so I'll just focus on the contents of the configuration file.  Quick hint: when you create the configuration file, make sure it can be read by IIS.  If we don't have permission to open the file, we can't read out the configuration very well. :)

The configuration file assumes you'll fall into one of three scenarios.

  1. "Reverse Proxy" scenario: your TFS server sits inside a corporate intranet, and one or more "reverse proxy" machines (ISA has reverse proxy support, as does Apache and many others) bring in client requests "from the internet" to your TFS server.
  2. "DMZ" scenario: your TFS server can be accessed "directly" (not through a proxy server, although perhaps through a firewall which may be doing NAT or port forwarding) by both internal (should be NTLM) and external "from the internet" clients.
  3. "Out on the internet": your TFS server is *only* being accessed by external clients.  In this scenario, the only calls we want to be considered "internal" are the ones coming from the server itself (the TFS-internal web service calls)

Reverse Proxy scenario

For the "Reverse Proxy" configuration, we're telling the ISAPI filter how to identify "from the internet" clients by use of the ProxyIPList configuration value.  The IP address(es) listed in the value should contain the (internal) addresses for the proxies.  Since proxies typically have both internal and external IP addresses, it's important to configure this as the internal IP address(es).  This is because when the request comes into the web site, the REMOTE_ADDR will be filled with the internal IP of the reverse proxy, since that's the IP address that the request appears to be coming from.

As an example, let's say we had reverse proxies set up at internal IP addresses of 10.11.9.110 and 192.168.3.22:

[config]
RequireSecurePort=true
ProxyIPList=10.11.9.110;192.168.3.22

Important points about this:

  • the separator for multiple proxy IP addresses is the semi-colon (';' character)
  • the SubnetList configuration value isn't specified; even if it had been, it would have been ignored, because SubnetList is ignored entirely whenever ProxyIPList is present.  If you're wondering why, the next section should help make it clear.
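
To make the ProxyIPList classification concrete, here's a small Python sketch of the lookup described above (illustrative names only; the actual parsing lives inside the native filter):

def is_external(remote_addr, proxy_ip_list):
    # A request counts as "from the internet" when REMOTE_ADDR matches one
    # of the semicolon-separated internal proxy addresses.
    proxies = {ip.strip() for ip in proxy_ip_list.split(";") if ip.strip()}
    return remote_addr in proxies

print(is_external("10.11.9.110", "10.11.9.110;192.168.3.22"))  # True: via a proxy
print(is_external("10.1.2.3",    "10.11.9.110;192.168.3.22"))  # False: internal client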

DMZ scenario

For the "DMZ" configuration, we're telling the ISAPI filter how to identify "from the internet" clients by opposite logic as we did in "Reverse Proxy".  This is an extremely important point, and is the cause for most of the confusion people have in trying to just throw in the configuration values they expect to "Just Work".

Why is this the case?  Well, back in the "Reverse Proxy" world, all requests came from actual internal IP addresses.   We just needed a list of the ones that we should treat as if they were external, and we did that by knowing which ones were reverse proxies.

In the DMZ world, things are different.  Now we're getting hit by "real" internet IP addresses.  This means that the set of client IP addresses we might see isn't just the handful of internal subnets we have defined.  No, now it's exploded to a huge set.  There's no way we can define all those external IP addresses that may be hitting our server.

So, instead, we don't try to.  Instead we go the opposite route and define what isn't coming "from the internet" - we define all the subnets internal to our company (that may hit this server).  We're going to tell the ISAPI filter "ok, assume EVERYONE is coming from the Evil Internet, and only let them off the additional-restriction (Basic/Digest/RequireSecurePort) hook if they're in our list of configured-as-internal subnets."  In a perfect world, this setting might be called InternalSubnetList instead, but as-is it's called SubnetList.

As an example, let's say our TFS server is in the DMZ and our internal subnets are 10.0.0.0 and 192.168.0.0:

[config]
RequireSecurePort=true
SubnetList=10.0.0.0/255.0.0.0;192.168.0.0/255.255.0.0

Important points about this:

  • the separator for multiple subnets is (surprise!) the semi-colon (';' character)
  • ProxyIPList must NOT be included.  Since the two scenarios take completely opposite logical approaches, mixing them together doesn't make sense.
  • the second part of each subnet is the netmask that should apply.
  • We are NOT currently supporting CIDR addresses.  If you put in something like 10.10.1.32/27, we're going to attempt to parse the "27" as a netmask (27.0.0.0) and not as 255.255.255.224, which is what you intended.

As with most IP subnet checks, we apply the mask to the client IP address and then see if the result matches the specified subnet.  If it matches any subnet in the list, the client is considered internal and we leave the request alone (no secure-port check, no NTLM stripping).
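
In code terms, the check might look something like the following Python sketch (assuming, per the above, that SubnetList entries are always subnet/netmask pairs; the helper names are invented):

import socket
import struct

def ip_to_int(ip):
    # Dotted-quad IPv4 string -> 32-bit integer.
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def is_internal(remote_addr, subnet_list):
    # Entries are "subnet/netmask" with a dotted-quad netmask (not CIDR).
    addr = ip_to_int(remote_addr)
    for entry in subnet_list.split(";"):
        subnet, mask = entry.split("/")
        m = ip_to_int(mask)
        if addr & m == ip_to_int(subnet) & m:
            return True  # matched an internal subnet; leave the request alone
    return False         # treated as "from the internet"

subnets = "10.0.0.0/255.0.0.0;192.168.0.0/255.255.0.0"
print(is_internal("10.42.1.7", subnets))     # True: internal
print(is_internal("64.233.160.1", subnets))  # False: external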

Out on the internet scenario

Offhand, you'd think this would be the easiest: you could just have a config file of "RequireSecurePort=true" and be done.  Unfortunately, the TFS 1.0 SP1 version of the ISAPI filter doesn't automatically know which calls are TFS-internal (this is already fixed for Orcas), so we need to add one more line to tell it that those internal calls should not be moved to Basic/Digest auth.  Because of that, assuming the public IP address of our TFS server is 1.2.3.4, the correct config is:

[config]
RequireSecurePort=true
SubnetList=1.2.3.4/255.255.255.255

All traffic from everywhere will get challenged with this configuration, *except* for the TFS-internal calls (or, strictly speaking, anyone using the Team Foundation client while logged into the application tier itself).  As mentioned above, in Orcas you won't need this SubnetList line, as we figure out the TFS-internal calls automatically and the ISAPI filter always leaves them alone.
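
Plugging that config into the subnet math from the previous section shows why it works: the all-ones mask leaves an address unchanged, so only the server's own 1.2.3.4 can ever match (a quick Python check, same caveats as before):

import socket, struct

to_int = lambda ip: struct.unpack("!I", socket.inet_aton(ip))[0]
subnet, mask = "1.2.3.4", "255.255.255.255"

print(to_int("1.2.3.4")      & to_int(mask) == to_int(subnet))  # True: internal
print(to_int("64.233.160.1") & to_int(mask) == to_int(subnet))  # False: challenged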

 

How do you handle "weird" / broken configurations?

  • If neither ProxyIPList nor SubnetList is configured, we warn and don't have anything to do - the filter will effectively not be there.
    • Note that in Orcas, this will be a valid scenario, as it will be the configuration for "out on the internet"
  • If both ProxyIPList and SubnetList are configured, we ignore SubnetList.  See above for details.
  • If entries are configured with invalid values, they get ignored.
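
Taken together, the precedence rules boil down to something like this hypothetical summary (not actual filter code):

def effective_mode(config):
    if config.get("ProxyIPList"):
        return "reverse-proxy"    # any SubnetList present is ignored
    if config.get("SubnetList"):
        return "dmz-or-internet"
    return "no-op"                # warn; the filter effectively isn't there

print(effective_mode({"ProxyIPList": "10.11.9.110", "SubnetList": "10.0.0.0/255.0.0.0"}))
# reverse-proxy
print(effective_mode({}))
# no-op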

How can I see what the filter's doing?

We output some debug strings during the initialization and configuration of the filter.  To view them, you can use DebugView from Sysinternals, a very useful utility for seeing debug strings from all kinds of programs.
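
DebugView works by capturing strings sent through the Win32 OutputDebugString API.  If you want to sanity-check that your DebugView session is capturing at all, you can emit a test string yourself; for example, from Python on Windows (a hypothetical test, obviously not part of the filter):

import ctypes

# Emit a test string for any attached debug-string listener (e.g. DebugView).
ctypes.windll.kernel32.OutputDebugStringW("extranet filter config test")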

In Orcas, we added a ton of additional useful debug strings so you can see what the filter is doing during each web service call.  It should be far easier to debug bad configurations with that version.