Thanks for joining us for the final day of our series on cryptography in ASP.NET 4.5! Up to now, the series has discussed how ASP.NET uses cryptography in general, including how the pipelines are implemented in both ASP.NET 4 and ASP.NET 4.5. We introduced APIs that give developers fuller control over the cryptographic pipeline and widen the "pit of success" for consumers. In today’s post, I’ll discuss advanced usage scenarios and answer some common questions that we anticipate developers might have. The series outline is copied below for quick reference.
- Background regarding the use of cryptography in ASP.NET 4.
- Changes that were introduced in ASP.NET 4.5.
- Usage notes and miscellaneous Q&A (this post).
Throughout the series I’ll refer to a sample solution. This Visual Studio 2012 solution contains projects that demonstrate many of the core concepts mentioned here. It can be downloaded from http://sdrv.ms/T4aMyg.
We have tried to create a system that leads to success automatically. However, it is always good to be aware of the right way to use these systems, and to that end we want to arm you with the knowledge to be successful. You may find these tips helpful when developing your applications.
- Use the 4.5 project templates to get the new runtime behaviors automatically. If you are migrating an existing application to 4.5, decide whether opting in to the new behaviors is right for your application.
- If possible, move away from the MachineKey.Encode / Decode APIs and call the Protect / Unprotect APIs instead.
- If you do call the Protect / Unprotect APIs, we strongly recommend that you provide a purpose string. Remember: MSDN provides good guidance on choosing an appropriate purpose string for your call site.
- Auto-generated keys are stored in the HKCU registry hive, so they are tied to the particular user account the web server process runs under. If you’re running multiple applications on the same box, application isolation matters! Applications that run under the same identity can read each other’s keys or otherwise affect one another. KB2698981 offers guidance on how to isolate applications on the same system from one another.
- In 4.5 compatibility mode, the auto-generated machine key contains 256 bits of entropy for the symmetric encryption algorithm and 256 bits of entropy for the message authentication algorithm. The KDF can generate an appropriate amount of key material for the algorithm in use, but the KDF cannot increase the amount of entropy. If your application uses an algorithm that takes a longer key, e.g. HMACSHA512, consider using an explicit key instead of an auto-generated key.
- In 4.5 compatibility mode, the PRF used in the KDF is HMACSHA512, which has an output size of 512 bits. Configuring the system to use a symmetric encryption or message authentication algorithm which takes keys greater than 512 bits in length will not significantly increase the security of the system. See FIPS PUB 198 [PDF link], section 3 for more information.
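Tying the Protect / Unprotect and purpose-string tips above together, a call site might look roughly like the following minimal sketch. The `AuthTokenProtector` class and the purpose strings `"MyApp.AuthToken"` / `"v1"` are hypothetical; choose purpose strings unique to your own call site, per the MSDN guidance.

```csharp
using System;
using System.Text;
using System.Web.Security;

// Hypothetical helper that protects an authentication token for one
// specific consumer. The purpose strings bind the payload to this call
// site, so a payload protected here cannot be replayed elsewhere.
public static class AuthTokenProtector
{
    public static string Protect(string token)
    {
        byte[] plaintext = Encoding.UTF8.GetBytes(token);
        byte[] protectedData = MachineKey.Protect(plaintext, "MyApp.AuthToken", "v1");
        return Convert.ToBase64String(protectedData);
    }

    public static string Unprotect(string protectedToken)
    {
        byte[] protectedData = Convert.FromBase64String(protectedToken);
        // Throws CryptographicException if the payload was tampered with
        // or was protected under different purpose strings.
        byte[] plaintext = MachineKey.Unprotect(protectedData, "MyApp.AuthToken", "v1");
        return Encoding.UTF8.GetString(plaintext);
    }
}
```

Note that `Unprotect` fails loudly rather than returning corrupted data, which is exactly the behavior you want when a payload has been tampered with.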
There is one other project in the solution – AdvancedDataProtectorDemos – which demonstrates using DPAPI:NG for data protection. One advantage of DPAPI:NG is that key management can be pushed to Active Directory, which eliminates the need for the application to maintain this information. (There is a caveat – recall the ScriptResource.axd / WebResource.axd exclusions mentioned yesterday – but these payloads are generally less sensitive than ViewState, forms authentication, and the like.) DPAPI:NG is available on Windows 8 / Server 2012 machines, and AD key management requires that the application identity be a domain identity and that the AD servers be running Windows Server 2012.
Why not use PBKDF2 for key derivation?
The .NET Framework has two built-in KDF types. One is PasswordDeriveBytes (PBKDF1), which is insufficient for our needs since PBKDF1 can only derive keys of at most 160 bits in length. The other built-in type is Rfc2898DeriveBytes (PBKDF2). Unfortunately this particular implementation of PBKDF2 is hardcoded to use HMACSHA1. While this is perfectly sufficient for low-entropy input sources such as passwords, it’s unacceptable for the high-entropy input sources that we would be providing to it and could actually end up weakening the cryptographic key material. See RFC 2898, appendix B.1.1 for further discussion.
At this point we resigned ourselves to the fact that we’d have to implement the KDF in our own layer rather than calling any existing APIs. (Windows provides BCryptDeriveKeyPBKDF2, but that function doesn’t exist on all platforms that ASP.NET 4.5 targets.) We ended up choosing NIST SP800-108 [PDF link] due to the fact that it is designed to derive a new key from an existing high-entropy key rather than from a low-entropy password, and this behavior more closely aligns with our intended usage.
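To make the construction concrete, here is an illustrative sketch of the SP800-108 KDF in counter mode with HMACSHA512 as the PRF. It follows the construction in the spec (each block is `PRF(key, [i] || Label || 0x00 || Context || [L])` with big-endian 32-bit integers); it is a simplified sketch, not ASP.NET’s internal implementation.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Sketch of the NIST SP800-108 counter-mode KDF using HMACSHA512 as the PRF.
public static class SP800_108Sketch
{
    public static byte[] DeriveKey(byte[] key, byte[] label, byte[] context, int numBytesRequested)
    {
        using (var hmac = new HMACSHA512(key))
        {
            var output = new MemoryStream();
            uint counter = 1;
            uint bitsRequested = checked((uint)numBytesRequested * 8);

            while (output.Length < numBytesRequested)
            {
                // PRF input for block i: [i]_32 || Label || 0x00 || Context || [L]_32
                var input = new MemoryStream();
                WriteUInt32BigEndian(input, counter++);
                input.Write(label, 0, label.Length);
                input.WriteByte(0x00);
                input.Write(context, 0, context.Length);
                WriteUInt32BigEndian(input, bitsRequested);

                byte[] block = hmac.ComputeHash(input.ToArray());
                output.Write(block, 0, block.Length);
            }

            // Trim to the exact number of bytes requested.
            byte[] derived = output.ToArray();
            Array.Resize(ref derived, numBytesRequested);
            return derived;
        }
    }

    private static void WriteUInt32BigEndian(Stream stream, uint value)
    {
        stream.WriteByte((byte)(value >> 24));
        stream.WriteByte((byte)(value >> 16));
        stream.WriteByte((byte)(value >> 8));
        stream.WriteByte((byte)value);
    }
}
```

Because the input key is already high-entropy, no iteration count is needed; the KDF’s job is expansion and context separation, not password stretching.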
Why not use an authenticated encryption mode of operation?
Authenticated encryption modes of operation (CCM, GCM, etc.) are generally geared toward streaming protocols, and the particular constraints that these modes of operation put on inputs like nonces reflect this. Consider a high-traffic cloud-hosted web site using AES-GCM (see NIST SP800-38D [PDF link]). A purely random nonce would be unacceptable since a high-traffic web site can serve 2^32 requests in mere days, and a (machine ID, invocation ID) tuple-based nonce would likewise not suffice since machines in the cloud may be transient, precluding the ability to uniquely identify machines.
However, this problem is not unsolvable. One possible solution is to have a small cluster of machines responsible for cryptographic services and to have the web servers delegate to this, allowing the tuple-based nonce to work. Another solution is to have each server responsible for rolling its own keys at application startup (thus the nonce can be a simple counter since no other machine uses that specific key) and on a regular basis thereafter, and provide each server in the cluster the ability to infer which key was used for a particular payload. We prototyped both of these solutions using custom DataProtector-derived classes, so we are confident that developers have the extensibility hooks necessary to go this route if desired. But when compared to the standard encrypt-then-MAC implementation, authenticated encryption modes of operation proved to be too ill-suited for us to provide an in-box implementation.
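For reference, the encrypt-then-MAC pattern mentioned above can be sketched as follows. This is a minimal illustration (AES-CBC for confidentiality, then HMACSHA256 over the IV and ciphertext for authenticity); the `IV || ciphertext || tag` wire format shown here is an assumption for the sketch, not ASP.NET’s actual serialization format.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Sketch of encrypt-then-MAC: encrypt first, then MAC everything the
// recipient will need to decrypt (IV || ciphertext).
public static class EncryptThenMacSketch
{
    public static byte[] Protect(byte[] plaintext, byte[] encryptionKey, byte[] validationKey)
    {
        using (var aes = Aes.Create())
        using (var hmac = new HMACSHA256(validationKey))
        {
            aes.Key = encryptionKey;
            aes.GenerateIV(); // fresh random IV per payload

            byte[] ciphertext;
            using (var encryptor = aes.CreateEncryptor())
            {
                ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
            }

            // Compute the tag over IV || ciphertext, then append it.
            var payload = new MemoryStream();
            payload.Write(aes.IV, 0, aes.IV.Length);
            payload.Write(ciphertext, 0, ciphertext.Length);
            byte[] tag = hmac.ComputeHash(payload.ToArray());
            payload.Write(tag, 0, tag.Length);
            return payload.ToArray();
        }
    }
}
```

The corresponding unprotect routine must verify the tag (with a fixed-time comparison) before attempting any decryption, so that tampered payloads are rejected outright.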
What about membership?
There has been a recent resurgence of discussion about the appropriate way to store passwords on a server, precipitated by breaches of several high-profile sites. Many of these discussions have focused on ASP.NET’s built-in routines, like those in SqlMembershipProvider.
We support configuring custom hash algorithms specifically for membership, and there exist demonstrations of plugging PBKDF2 into this pipeline. We could have codified this in SqlMembershipProvider for 4.5, but as it turns out that wouldn’t have been terribly useful. The majority of applications using SqlMembershipProvider going forward are likely those that require compatibility with earlier versions of ASP.NET, so they are bound by the algorithms supported by those versions. On the other hand, the Microsoft.AspNet.Providers types ship out-of-band, so they are free to improve on their own schedule rather than being tied to .NET Framework releases. The 4.5 project templates all use Microsoft.AspNet.Providers instead of SqlMembershipProvider.
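For those rolling their own password storage, the PBKDF2 approach referenced above can be sketched with the built-in Rfc2898DeriveBytes type. (Its hardcoded HMACSHA1 PRF is fine here, since passwords are low-entropy inputs.) The `PasswordHasher` class, storage format, and parameter values below are illustrative choices, not a prescribed implementation; tune the iteration count to your hardware.

```csharp
using System;
using System.Security.Cryptography;

// Sketch of PBKDF2-based password storage via Rfc2898DeriveBytes.
public static class PasswordHasher
{
    private const int SaltSize = 16;    // 128-bit random salt
    private const int SubkeySize = 32;  // 256-bit derived subkey
    private const int Iterations = 10000;

    public static string HashPassword(string password)
    {
        // This overload generates a random salt of the requested size.
        using (var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            byte[] salt = deriveBytes.Salt;
            byte[] subkey = deriveBytes.GetBytes(SubkeySize);
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(subkey);
        }
    }

    public static bool VerifyPassword(string password, string stored)
    {
        string[] parts = stored.Split(':');
        byte[] salt = Convert.FromBase64String(parts[0]);
        byte[] expected = Convert.FromBase64String(parts[1]);
        using (var deriveBytes = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            byte[] actual = deriveBytes.GetBytes(SubkeySize);
            // Production code should use a fixed-time comparison here.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}
```

The salt ensures identical passwords hash differently, and the iteration count is what slows down offline brute-force attempts.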
As always, the best protection is for users to follow good password selection criteria, such as choosing strong passwords and not reusing passwords between sites. If a user chooses a weak password, the server’s mitigations act only as a hurdle: they might slow an attacker down, but a sufficiently determined attacker will crack it eventually.
Will this work be backported?
This work is unlikely to be backported to ASP.NET 4 or earlier. For starters, much of the code takes advantage of having the full range of .NET 4.5 types at our disposal, and we’re also making assumptions about the underlying OS’s base capabilities. These assumptions aren’t necessarily valid when running on Windows XP / Server 2003. Furthermore, while we believe the new crypto stack is better than the legacy crypto stack, that doesn’t mean the legacy stack is bad. We simply saw an opportunity to improve on it and took it.
Does this obsolete protected configuration?
Not at all! Protected configuration is still a valuable tool if you want to store cryptographic keys in the application’s Web.config but don’t want the keys themselves to appear as plaintext within that file. MSDN has a walkthrough on enabling this feature.
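As a quick illustration of what that MSDN walkthrough covers, encrypting (and later decrypting) the machineKey section of a site’s Web.config with the aspnet_regiis tool looks roughly like this; the site path `C:\inetpub\wwwroot\MyApp` is hypothetical.

```shell
rem Encrypt the <machineKey> element in place using the default RSA provider:
aspnet_regiis -pef "system.web/machineKey" "C:\inetpub\wwwroot\MyApp"

rem Decrypt it again if you need to inspect or edit the keys:
aspnet_regiis -pdf "system.web/machineKey" "C:\inetpub\wwwroot\MyApp"
```

The runtime transparently decrypts the section when the configuration is read, so application code does not change at all.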
I hope I have successfully conveyed the utility of the cryptographic changes. These posts have demonstrated how the new code paths promote good practices, improve usability, and offer even greater extensibility than before. And we already have consumers of the new APIs: MVC’s anti-XSRF helpers and the ASP.NET OAuth package call into the new APIs when running on 4.5. WIF also calls into the new routines in certain cases.
Finally, by being transparent with these changes, we hope to offer some insight as to how the team operates when security issues are reported to us. Our goal is not to develop in a vacuum, nor is it just to patch and move on. We want to take the time to learn from our mistakes, revisit our assumptions, and continually improve both ourselves and our product. It allows us – and hopefully you – to be confident in the quality of the framework.
Thanks to Angela, Barry, Suha, Nazim, Bob, Tolga, and others for all your feedback! I really appreciate you guys slogging through this. Also, thanks to Brock and Mike for getting some early coverage of the new MachineKey APIs in their blogs. We love our developer community! 🙂