When you provide an assembly that will be called by partially trusted callers, you need to make sure you do a thorough security audit of that assembly, especially if it's an APTCA assembly. One of the primary reasons this review is required is that you don't want your assembly unwittingly allowing partially trusted code to elevate its privileges and perform an operation it would not otherwise be allowed to do.
Unfortunately, if your assembly's object model grows to any decent size, this review can be very time consuming. And as the object model grows, reviewers become more likely to overlook a potential problem. Accidentally satisfying a LinkDemand falls into that last category: reviewers need to make sure they understand the LinkDemands on every API being called, and determine whether they need to do a demand themselves or whether it's OK to satisfy that particular demand.
Tools such as FxCop (with rules such as “Do not indirectly expose methods with link demands”) help to solve this problem by automatically finding certain types of potential problems. In addition to an updated and improved set of FxCop rules, Whidbey introduces a powerful new tool to make life even easier for library developers. This feature is known as transparency.
Transparent code is code which voluntarily gives up its ability to elevate the permissions of the call stack. That means that the following rules apply:
- Transparent code cannot Assert for permissions to stop the stack walk from continuing.
- It cannot satisfy a LinkDemand. Instead, any LinkDemands on APIs called by the transparent assembly will be automatically converted into full demands.
- Transparent code cannot automatically use unverifiable code, even if it has SkipVerification permission. Instead, any method that contains unverifiable code will have a demand for UnmanagedCode permission injected into it.
- Similarly, calls to P/Invoke methods that have been decorated with the SuppressUnmanagedCodeSecurityAttribute will cause a full demand for UnmanagedCode permission.
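To make the first rule concrete, here's a minimal sketch of what a transparent assembly looks like (the `Careless` class and the path are hypothetical; the attribute is the assembly-level transparency marker):

```csharp
using System.Security;
using System.Security.Permissions;

// The assembly-level attribute opts every method in this assembly out
// of elevating the permissions of the call stack.
[assembly: SecurityTransparent]

public static class Careless
{
    public static void ReadConfig()
    {
        // In a security critical assembly, this Assert would stop a
        // FileIOPermission stack walk at this frame. In a transparent
        // assembly the Assert is ineffective, and any demand continues
        // walking up into our callers.
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config").Assert();
        // ... read the file ...
    }
}
```

The same applies to the LinkDemand rule: if `ReadConfig` called an API protected with a LinkDemand, the transparent frame could not satisfy it at JIT time; it would be converted into a full demand against the whole stack.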
Due to these restrictions, transparent code has the effect of running with either the set of permissions it was granted or the set of permissions its callers were granted, whichever is smaller. Because of that, fully trusted transparent code essentially runs in the same security context as its callers, since the callers' permissions are necessarily less than or equal to FullTrust. (You can see where the name came from: from a security perspective, this code is transparent on the call stack.)
Note that even though a transparent stack frame will not be able to elevate the permissions of the call stack, it can still cause the stack walk to fail. For instance, given the following call stack (with the partially trusted App.exe at the root):

    App.exe (partial trust)
      -> Util.dll
        -> Trans.dll (transparent, fully trusted)
          -> Dangerous.dll
If the method being called in Dangerous.dll did a demand for FullTrust, the demand would fail even though the only stack frame that would not satisfy the demand was transparent. This goes back to the rule that transparent code runs with the lesser permission set of what it was granted and what its callers have.
In this instance, the Trans.dll stack frame would be unable to prevent the FullTrust demand from hitting App.exe and failing — Trans.dll is running with the effective permissions of its caller.
From the security system’s perspective the opposite of transparent is critical: a stack frame that is not security transparent is considered to be security critical. So in the above examples, the Trans.dll stack frame is transparent while the frames from App.exe, Util.dll and Dangerous.dll are all critical.
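Transparency doesn't have to be all-or-nothing per assembly. As I recall from the Whidbey bits (treat the exact spellings as an assumption until the follow-up posts cover them), you can mark an assembly so that it defaults to transparent while leaving specific members critical:

```csharp
using System;
using System.Security;

// A mixed-transparency assembly: code defaults to transparent, and
// only members explicitly marked critical keep the ability to elevate.
[assembly: SecurityCritical]

public static class Mixed
{
    // Transparent by default: cannot Assert, cannot satisfy a LinkDemand.
    public static int Add(int x, int y) { return x + y; }

    // Explicitly security critical: this method may Assert, call
    // SuppressUnmanagedCodeSecurity P/Invokes, and so on.
    [SecurityCritical]
    public static void Elevate() { /* asserts, unverifiable code, etc. */ }
}
```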
Even though they complement each other nicely, transparency and APTCA are independent concepts — you can have a transparent assembly that is not marked APTCA just as easily as you can have an APTCA assembly that is not transparent.
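Since the two concepts are orthogonal, an assembly can carry either attribute, both, or neither; a library that wants both behaviors simply stacks them (hypothetical assembly shown):

```csharp
using System.Security;

// APTCA: allows partially trusted assemblies to call into this library.
[assembly: AllowPartiallyTrustedCallers]

// Transparency: this library voluntarily gives up the ability to
// elevate the permissions of the call stack.
[assembly: SecurityTransparent]
```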
Finally, and I can't emphasize this enough, even though transparency makes life easier for code reviewers, it does not eliminate their jobs. You should still ensure that you run FxCop over all your shipping code and do code reviews of any assembly that could be used by external callers. Even though transparent code cannot elevate the permissions of the stack from a CAS perspective, it could still be doing dangerous operations from a different viewpoint (such as role-based security or your application's security model). Exposed code still needs to be audited for correctness, ability to handle bad input, and so on.
Now that we understand the basics of transparency, next time I’ll go into detail showing you exactly how to use it. Also, Stephen Fisher, one of the guys behind the transparency work, has promised a series of blog entries in the coming weeks describing the details of exactly how transparency works under the covers. I’m looking forward to those posts, and I’ll make sure to provide links here when he gets them up.