Authorizing Services


If you look at the default authorization model for WCF, you will notice that it expects you to implement centralized authorization. While centralized and pluggable authentication makes a lot of sense to me, that’s not the case with authorization.


Does it really make sense to decouple authorization from implementation? Are you only going to check access rights at the perimeter and leave the rest of your code wide-open? Can you even fully authorize an operation at the perimeter?


My claim is that for a secure and maintainable solution you should answer no to all these questions. In this article, I’ll explain why I think so.


While it makes a lot of sense to decouple authentication (who you are) from implementation, authorization (what you can do) is tightly coupled to what you are trying to do. Put another way: authenticating a user requires no knowledge of the context; either the user’s credentials are acceptable, or they are not. Authorizing an action, on the other hand, requires knowledge not only of the identity, but also of the operation being attempted. As documented, the OperationContext provided to a ServiceAuthorizationManager contains information about which service and operation are being invoked, so you could certainly implement centralized authorization logic; basically, it’s going to be one big switch statement.


Working with centralized authorization means having to switch back and forth between the authorization manager and the implementation code. To me, that’s not particularly productive, when you can tie authorization and implementation together as simply as this:


[PrincipalPermission(SecurityAction.Demand, Role = "Administrator")]
public void MyMethod()
{
    // Implementation goes here...
}

With the PrincipalPermission attribute, you can define role-based security declaratively. This has the benefit of succinctly defining security in the most intuitive place. You could implement the same authorization logic in a centralized authorization manager, but it would require imperative code inspecting message properties. From a code maintainability perspective, decoupling authorization from implementation doesn’t make a lot of sense to me.
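For contrast, here is a minimal sketch of what such a centralized check might look like in a custom ServiceAuthorizationManager; the service name, operation action URI, and role are invented for the example:

```
using System.Security.Principal;
using System.ServiceModel;
using System.Threading;

public class MyAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // The SOAP action identifies which operation is being invoked.
        string action = operationContext.IncomingMessageHeaders.Action;
        IPrincipal principal = Thread.CurrentPrincipal;

        // Centralized authorization degenerates into one big switch statement.
        switch (action)
        {
            case "http://tempuri.org/IMyService/MyMethod":
                return principal.IsInRole("Administrator");
            default:
                return false;
        }
    }
}
```

Every new operation means another case in this switch, far away from the code it protects.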


Is checking access rights only at the perimeter a good idea, then? While the case could be made for turning away unauthorized calls at the service boundary, securing a system only at its perimeter is not considered to be particularly effective. In their book Writing Secure Code, Michael Howard and David LeBlanc endorse defense in depth as an important principle in secure systems. Multiple checkpoints should exist at separate layers of an application. This helps thwart a would-be attacker, who would otherwise have full control of a system once he or she gets past the perimeter.


Pluggable authorization logic implemented at the perimeter doesn’t follow this principle. What is worse, since a WCF authorization manager is pluggable, a service operator may even accidentally deploy a service without correctly configuring the authorization manager.


Another scenario deals with extensibility. Imagine that your service delegates all work to a business logic component, and that you would like to reuse the business logic in other contexts: a web site, administrative application, ETL job, etc. In these scenarios, your business logic will probably not be hosted by WCF, but rather accessed through in-process calls, so a WCF authorization manager will be bypassed in these cases.


Authorization logic should, in my opinion, be implemented as close as possible to the resource it’s protecting; most likely in the data access layer itself, so that even if an application bypasses the default business logic component, authorization is still checked. For that reason, checking only at the perimeter is not particularly secure.


As it turns out, it’s not even possible to perform every sort of authorization check at the perimeter, since additional data may be required. A simple role-based access check like the example given above can be performed at the perimeter, but more complex access rules cannot. Consider this example:


Imagine a service providing details about registered users. This service contains a GetUserData operation that returns user data for the requested user ID. The authorization logic for such an operation could be something like this:



  • A member of the Administrator role can request data on any user.

  • A member of the User role can only request his or her own data.

However, in this example, user IDs are particular to the application and not equal to the callers’ credentials. The callers’ credentials may represent a Windows account, while user IDs are defined by a table key in a database. While there’s a one-to-one correspondence between credentials and user IDs, this relationship is defined at the data level. As such, the message available to a perimeter-based authorization manager contains the requested user ID as well as the caller’s credentials, but no data indicating any relationship between the two is available at that level.


In imperative code, however, you could implement the above authorization rules like this:


public UserData GetUserData(Guid userId)
{
    UserData user = DataAccess.FindUser(userId);
 
    PrincipalPermission administratorPermission =
        new PrincipalPermission(null, "Administrator");
    PrincipalPermission selfPermission =
        new PrincipalPermission(user.DisplayName, "User");
    administratorPermission.Union(selfPermission).Demand();
 
    return user;
}

In this case it’s necessary to first get the data from the data store, since the security decision requires more data than the caller supplies. Notice that administratorPermission allows any user to make the call if he or she is a member of the Administrator role. On the other hand, selfPermission only allows through users whose credentials match the DisplayName of the user they requested. The Union of these two IPermission objects is then demanded, ensuring that at least one of them is satisfied.


Authorization logic like this example cannot be implemented at the perimeter, since required data is not available.


By now, I hope I have demonstrated why I think centralized authorization managers don’t make a lot of sense in a complex, n-layer architecture.


Note that the use of PrincipalPermission and PrincipalPermissionAttribute requires that Thread.CurrentPrincipal is populated with an instance of IPrincipal representing the caller. Using IPrincipal and thread local storage to represent and flow information about the caller is considered a .NET best practice, since the framework itself contains support for it (such as declarative role-based security).
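As a concrete illustration, here is a minimal sketch of populating Thread.CurrentPrincipal so that a subsequent demand succeeds; the user name and role are invented for the example:

```
using System.Security.Permissions;
using System.Security.Principal;
using System.Threading;

// Populate the current principal, typically done once per request
// at the service boundary after authentication.
IIdentity identity = new GenericIdentity("alice");
Thread.CurrentPrincipal =
    new GenericPrincipal(identity, new[] { "Administrator" });

// This demand now succeeds, because the current principal
// is in the Administrator role; otherwise it would throw
// a SecurityException.
new PrincipalPermission(null, "Administrator").Demand();
```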


Interestingly, a WCF ServiceAuthorizationManager is a very appropriate place to map foreign user credentials to IPrincipal objects, as I’ve described in an earlier post.
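One way WCF supports this mapping is a custom IAuthorizationPolicy combined with PrincipalPermissionMode.Custom, which makes WCF pick up the "Principal" property and assign it to Thread.CurrentPrincipal. A minimal sketch, with the role lookup invented for the example:

```
using System.Collections.Generic;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.Security.Principal;

public class PrincipalMappingPolicy : IAuthorizationPolicy
{
    public string Id
    {
        get { return "PrincipalMappingPolicy"; }
    }

    public ClaimSet Issuer
    {
        get { return ClaimSet.System; }
    }

    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        // Map the authenticated identity to an application-defined principal.
        // A real implementation would look up roles in a data store.
        object identitiesObj;
        if (evaluationContext.Properties.TryGetValue("Identities", out identitiesObj))
        {
            var identities = (IList<IIdentity>)identitiesObj;
            evaluationContext.Properties["Principal"] =
                new GenericPrincipal(identities[0], new[] { "User" });
        }
        return true;
    }
}
```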

Comments (4)

  1. Garry Trinder says:

    Nice post.

    With the design I use for message-based communications, it is quite easy to expand the abilities of perimeter checking. Some examples are described here (http://udidahan.weblogs.us/2007/04/01/service-layer-separation-of-concerns/)

    "The use of this strength is that it allows for a strong separation of concerns in the message handling logic. Need to do some pessimistic lock checking first? No problem – have a separate message handler class that does that. Want to add some custom auditing before and after all other processing, configure in a couple more message handlers. Have some complex validation logic that you’d like to keep separate from the rest of the business logic? Put it in its own message handler class."

  2. ploeh says:

    Hi Udi

    Thank you for your comment. Depending on implementation, the design you describe sounds like the Pipeline or Chain of Responsibility patterns? It makes a lot of sense when you need to implement cross-cutting concerns in a configurable manner.

    Obviously, you could implement even ACL-based authorization using your design approach – I think the keywords in your post are "Content Enrichment" 🙂

    However, what I’m trying to say is that I don’t think authorization should be pluggable, since security should be considered a part of the functional requirements for a library.

  3. Garry Trinder says:

    "I don’t think authorization should be pluggable"

    What if you did some custom compression on the message first? You’d need to unwrap that before you could get at the authorization information. The same is true for custom streaming behavior.

    There are scenarios where controlling when and where security checks take place is very important both from a functional and performance perspective.

  4. ploeh says:

    It may be that my mindset is too hooked up on WCF these days, but I’d say that stuff like compression, encryption, streaming, etc. should be happening in the channel stack, and I don’t think authorization fits into the channel very well.

    At the very least, authorization should take place as close as possible to the resource you are trying to protect. That’s not at the service boundary, but at the resource access point (typically in a data access component).

    That doesn’t preclude authorization logic at other levels of an application as well. In fact, that would fit very well with the principle of Defence In Depth, where you have security checkpoints at multiple levels of the application.

    While this makes sense to me, pluggable authorization still doesn’t – mostly because if you can plug in authorization logic, you can also unplug it, and what would be the point of that?

    Although that’s my opinion on authorization, a pluggable model for authentication makes a lot of sense to me, particularly in a service-oriented world where Federation is starting to play a part. In such cases, it makes a lot of sense to map heterogeneous authentication data to a common shape (e.g. IPrincipal) at the service boundary layer, but authorization should still be implemented as close to the resource as possible to prevent anyone from circumventing the security checks.

    Since authentication and authorization are often discussed together as two facets of software security, there’s a tendency to model those facets together – and I think the phonetic similarities of the words help confound the issue. My point is that although both are important, you should deal with them in quite different ways, at different places in the application. They are both cross-cutting concerns, but they are also cross-cutting concerns in respect to each other.