Some tips on testing managed code Security.

<Disclaimer>

The text below represents the author's personal opinion and does not necessarily reflect Microsoft recommended best practices. The author does not assume any responsibility for consequences caused by the use of the following information.

</Disclaimer>



So now you are developing or testing some managed application. You've heard a lot about .NET Security but, unfortunately, have not had much time to read enough about it. You are starting to worry whether Security will cause you any trouble and what you should do to verify you are OK here. In this case, this article may help you. It will not teach you how to become a real Security expert, and the steps outlined below are definitely not enough to ensure that your code is fully secure, but they will at least help you make sure you've covered some of the most typical issues.


Introduction:

Managed code has Security built in as an integral part of it. Almost every API call goes through some of the Security APIs. This represents several essential differences from how old unmanaged code works; moreover, ensuring that Security is properly involved in the right places at the right times is really critical for any application.

For non-experts in .NET Security, though, there are quite a few things to keep in mind.

First, the idea of Security in general is that not everything that compiles is always allowed to run. Different restrictions may apply in different situations. For example, traditional Windows NT Security decides whether or not to permit some execution based on who the current user is.

.NET Security [and this is the second thing] makes its decisions based on the identity of the code, not the user [that's why it is named CAS -- Code Access Security]. That means, in order to determine what this or that piece of code is allowed to do, it looks at such things as: where the code came from; whether it has anything pointing to its author [e.g., an Authenticode certificate]; what the hash value of the code is, and so on. This does not mean, though, that Windows security gets overridden -- no, it still applies. .NET Security is simply another, independent dimension of defense against attacks.

There is a virtually infinite number of real-life Security configurations that may affect the execution of managed code, and obviously we won't be able to cover all of them. However, there are several simple scenarios where Security most likely gets involved.


Typical issues:

"I built an application which updates Registry. It works fine when I run it from C:\, but fails if I start it from the network share. What am I supposed to do, my customers are waiting?"

This is probably the most common issue. And, unfortunately for the author, it is expected. Any code that comes from the network [and a share is considered network] gets less trust than code started from the local machine. For instance, by default such code is not allowed to access the Registry, and this seems reasonable: why would you let somebody you don't even know write to a sensitive part of your machine?
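
Just to make this concrete, here is a minimal sketch of what that failure looks like from inside the code [the subkey name is only an illustration]:

using System;
using System.Security;
using Microsoft.Win32;

class RegistryProbe
{
    static void Main()
    {
        try
        {
            // CreateSubKey demands RegistryPermission; under the reduced
            // trust of a network share, this demand fails.
            RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyTestApp");
            key.SetValue("LastRun", DateTime.Now.ToString());
            key.Close();
            Console.WriteLine("Registry updated -- running with enough trust.");
        }
        catch (SecurityException e)
        {
            // This is what the customer from the quote above is seeing.
            Console.WriteLine("No Registry access: " + e.Message);
        }
    }
}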

Another kind of thing you might see is that some DLLs allow only callers that are digitally signed with a key known to them. So no matter what, you won't get access if you don't bear evidence that you come from, for example, Microsoft. This is harder to test, but it should at least be kept in mind.


How to test:

Here I assume that your primary area is not Security itself but rather something else. In that case, we can outline a set of the most basic things to verify.

First, let's introduce the concept of Trust Level. Normally, a level of trust is applied to an application or a function and is expressed in terms of what it is permitted to do. For example, some code may be allowed to do File IO on a given file, perform unlimited networking operations, and read environment variables, but not be allowed to write to the Registry.

It is very important to realize that every API in .NET has its own requirements on the trust level needed to invoke it. For instance, if you are trying to open a file via the FileStream class, before doing anything it will verify that the caller of the API is allowed to do so; otherwise, it will fail.
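
In fact, you can issue the same kind of check yourself. Here is a rough sketch of what FileStream effectively does before touching the file [the path is an arbitrary example]:

// Roughly the check FileStream's constructor performs internally:
public void OpenWithExplicitCheck()
{
    FileIOPermission perm = new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data\input.txt");
    perm.Demand();   // throws SecurityException if any caller on the stack lacks this right

    // If the Demand above succeeded, this FileStream will pass its own identical check:
    FileStream fs = new FileStream(@"C:\data\input.txt", FileMode.Open, FileAccess.Read);
    fs.Close();
}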

The granularity of the different trust level requirements is really tremendous. However, for the sake of simplicity, we can split them into three major categories:

1. FullTrust: with this trust, everything is permitted by CAS. If some API is protected by a FullTrust requirement, it most likely may be doing something really dangerous. An example is the Process class, which needs FullTrust to be created. By default, everything that runs from the local machine gets FullTrust, so if you run your application from C:\ only, you are not really testing its Security.

2. Partial trust: this can be really granular, ranging from the ability to show UI to calling private methods through Reflection or doing File IO. Usually any function that does such things demands the corresponding ability from its callers [but not FullTrust -- it is not an "all or nothing" model]. Quite a few functions in the .NET libraries have various partial trust requirements on them. Normally, applications that come from the Internet or an intranet run with various degrees of partial trust.

3. No special requirements (Execution only): anybody can call such a method, as it does not represent any Security risk. An example of such an API is Math.Sqrt(). [All three buckets are contrasted in the sketch below.]
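
To make the classification concrete, here is a little sketch contrasting the three buckets [the program and file names are arbitrary examples; the demands named in the comments follow the discussion above]:

using System;
using System.Diagnostics;
using System.IO;

public void ThreeBuckets()
{
    // 1. FullTrust required: creating a Process is potentially dangerous.
    Process p = Process.Start("notepad.exe");
    p.Close();

    // 2. Partial trust requirement: FileStream demands FileIOPermission
    //    for this particular path only.
    FileStream fs = new FileStream(@"C:\TMP\log.txt", FileMode.OpenOrCreate);
    fs.Close();

    // 3. Execution only: no Security risk, no special demand.
    double root = Math.Sqrt(2.0);
}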

Using the classification above, it should be quite easy to achieve a good basic level of Security testing coverage, even for non-Security experts, in three easy steps:

1. Figure out what the actual trust level required by the design of the API, scenario, or application you are testing is. In most cases, just finding which of the three buckets above it corresponds to is enough; however, if you have a more specific idea of what is needed [e.g., File IO], that is even better.

2. Run your target with the required trust level or higher. If it does not run, you have a Security over-enforcement bug.

3. Run it with a trust level below the required one. If it runs, this is a Security hole.

The next question is: how do I change that trust level?


Tools:

There are several relatively simple ways to manipulate Security for testing purposes. In fact, these are tips rather than a systematic approach, but for quick testing they should help:

1. [Very simple]: suppose that your machine name is Box1, that its IP address is 111.111.111.111, that your App.exe application lives on the C:\ drive, and that your .NET Security policy and IE Security settings are in the default state. In this case, the following command

\\Box1\C$\App.exe

will effectively run App.exe in the LocalIntranet zone, with a greatly reduced trust level. This will give you an idea of how your application would behave if somebody ran it from a share -- a very quick and cheap test.

Further, the following line:

\\111.111.111.111\C$\App.exe

will start your application as if it were run from the Internet zone, which gets even less trust. In this situation, you'll get an execution environment pretty close [although not exactly the same!] to what it would be if the application were run in Internet Explorer. Again, this is just a nice way to check basic things right away.

2. More complex, but more flexible: change the Security policy using caspol.exe or the .NET Framework Configuration tool. To get the most out of them, you might need to know Security more deeply, but some sandboxing scenarios are quite easy to accomplish.

For example, the following steps will assign any permissions you want to any application that starts from the C:\TMP directory:

2.1. Start the .NET Framework Configuration tool [either by running mmc.exe and adding the proper snap-in, or through the shortcut in Administrative Tools];

2.2. Expand the "Console Root -> .NET Framework Configuration -> My Computer" nodes if they are not already expanded.

2.3. Expand "Runtime Security Policy -> Machine -> Code Groups" nodes.

2.4. Right-click the "All_Code" node and choose "New...".

2.5. Give the code group some name in the wizard that comes up and press "Next".

2.6. From the drop-down menu, choose "URL" as the membership condition.

2.7. In "URL" box, type

file://C:\TMP\*, press "Next".

2.8. Either use one of the existing permission sets, or create a new one. I'd really encourage you to play with creating new sets, as it lets you test your application with all the kinds of Security settings you might be interested in and learn what permissions ship with .NET. Creating a new set goes through the wizard and is a really easy, self-descriptive process.

2.9. After you've finished creating the group, find it under the "All_Code" hierarchy, right-click it, choose "Properties", and check the box "This policy level will only have the permissions from the permission set associated with this group".

Now, if your application starts from C:\TMP, it will get those -- and only those! -- permissions that you have granted to it.
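
By the way, the same kind of code group can also be created from the command line with caspol.exe. A sketch [the group name here is arbitrary, "1." is assumed to be the label of the machine-level All_Code group, and the built-in Internet permission set is used instead of a custom one]:

caspol.exe -machine -addgroup 1. -url "file://C:\TMP\*" Internet -name TmpTestGroup -exclusive on

caspol.exe -machine -listgroups

The second command simply lists the machine-level code groups so you can verify that the new group is in place.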

Don't forget to reset the policy back to its default state after you have finished testing. This can be done either through the .NET Framework Configuration tool or by running "caspol.exe -pp off -all -reset".

However, what if you need to test not the whole application but rather one DLL, or even one function? Is there any way to easily tweak its trust level? The answer is a definite Yes.

3. Assembly level requests are our friends.

These are constructions that live at the beginning of an assembly and look as follows:

[assembly: {Some permission | PermissionSet}Attribute(SecurityAction.Request{Minimum|Optional|Refuse}, ...)]

For testing, the most interesting action is RequestRefuse, which basically tells the Policy: "this application should never be granted the named permission". So, for example, if the following line is there:

[assembly: RegistryPermissionAttribute(SecurityAction.RequestRefuse, Unrestricted = true)]

that would mean that the assembly containing it will not be granted any form of Registry access, even if it is run from an environment that would otherwise allow it.
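
A tiny self-contained sketch of how this plays out [the subkey name is arbitrary]:

using System;
using System.Security;
using System.Security.Permissions;
using Microsoft.Win32;

// Even when started from C:\ with FullTrust otherwise available,
// this assembly refuses Registry access up front.
[assembly: RegistryPermissionAttribute(SecurityAction.RequestRefuse, Unrestricted = true)]

class RefuseDemo
{
    static void Main()
    {
        try
        {
            Registry.CurrentUser.CreateSubKey(@"Software\RefuseDemo");
            Console.WriteLine("Unexpected: Registry access was granted.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("Expected: RequestRefuse removed Registry access.");
        }
    }
}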

One interesting note here: FullTrust with any permission, even the smallest one, "subtracted" this way is not FullTrust anymore. So everything that requires FullTrust must start failing now -- a good test!

An even more useful technique is combining RequestMinimum with RequestOptional. If your assembly has such requests [say, RequestMinimum for set A and RequestOptional for set B], that means:

a) The assembly will NOT start if the environment grants it less than A, AND

b) It will NEVER be granted more than the union of A and B [so, when run from the local machine, it ends up with a grant set equal to the union of A and B].

So, for instance, these lines will make sure your assembly runs with the smallest privileges possible in the Runtime -- the right to execute only:

[assembly: SecurityPermissionAttribute(SecurityAction.RequestMinimum, Execution = true)]

[assembly: PermissionSetAttribute(SecurityAction.RequestOptional, Unrestricted = false)]

The technique above is quite powerful and covers many of the Security testing scenarios you may encounter in real life. However, to use it one needs to be familiar with the syntax of assembly level requests and of the permissions used with them. Fortunately, this is not a problem, as MSDN has plenty of information on this, at least for the most common cases.
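
Put together in compilable form, a test for this could look like the following sketch [assuming the environment would otherwise grant more than Execution]:

using System;
using System.Security;
using System.Security.Permissions;

[assembly: SecurityPermissionAttribute(SecurityAction.RequestMinimum, Execution = true)]
[assembly: PermissionSetAttribute(SecurityAction.RequestOptional, Unrestricted = false)]

class MinimalGrant
{
    static void Main()
    {
        // Execution permission is enough to get this far.
        try
        {
            // Reading an environment variable demands EnvironmentPermission,
            // which this assembly never requested -- so this must throw.
            Environment.GetEnvironmentVariable("PATH");
        }
        catch (SecurityException)
        {
            // Expected: the grant set is Execution only.
        }
    }
}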


4. Stack walk modifiers.

This is actually something more advanced that allows you to alter the trust at the class or method level. The modifiers useful for testing are Deny() and PermitOnly(); they are methods that live on the Permission and PermissionSet classes. The example below shows how to make sure that everything that executes inside the Foo() method, and everything that is called from Foo(), gets only the right to execute and to pop up File Save/Open dialogs:

// This is the method we control and use to call into Foo:
public void Bar()
{
    PermissionSet pSet = new PermissionSet(PermissionState.None);
    FileDialogPermission FP = new FileDialogPermission(PermissionState.Unrestricted);
    SecurityPermission SP = new SecurityPermission(SecurityPermissionFlag.Execution);
    pSet.AddPermission(FP);
    pSet.AddPermission(SP);
    pSet.PermitOnly();
    Foo();
}

// Method that we test
public void Foo()
{
    //...
}
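
Deny() works the other way around: it subtracts just the named permission and leaves everything else in place. A sketch along the same lines, reusing the Foo() above [the method name is made up]:

// Foo keeps whatever trust it had, minus any form of Registry access.
public void BarWithDeny()
{
    RegistryPermission noRegistry = new RegistryPermission(PermissionState.Unrestricted);
    noRegistry.Deny();

    Foo();   // any Registry access inside Foo now fails with a SecurityException

    CodeAccessPermission.RevertDeny();   // restore normal checks for the rest of the method
}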

However, there are several caveats to keep in mind here, such as:

a) In some cases a modifier's effect can be overridden by applying other modifiers [see the sketch after this list];

b) There are types of Security checks [like LinkDemand] that are not affected by these modifiers.
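
To illustrate caveat a): a trusted helper can Assert the very permission you have Denied, which terminates the stack walk before your Deny is ever consulted. A sketch, assuming the helper's assembly is itself granted that permission plus the right to assert [method and subkey names are made up]:

public void Caller()
{
    new RegistryPermission(PermissionState.Unrestricted).Deny();
    TrustedHelper();   // the Deny above does NOT stop the Registry write below
}

public void TrustedHelper()
{
    // Assert stops the stack walk right here, so Caller's Deny is never reached.
    new RegistryPermission(PermissionState.Unrestricted).Assert();
    Registry.CurrentUser.CreateSubKey(@"Software\AssertDemo");
}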

So to use these modifiers, some level of Security expertise is actually required; it can be gained by reading such MSDN topics as "Code Access Security", "PermissionSet", and "SecurityAction", and the materials on the various Permissions used.