I've designed quite a few software systems in my career, and there's an interesting dynamic that happens every time. We're there with the Direct Project now, and I'm pretty confident the only thing that will get us beyond it is real live projects successfully exchanging messages (which should start early next year --- HealthVault users will have their addresses by the end of January!).
Here's the thing. The basics of Direct --- like most software problems --- aren't that complicated. We want to securely exchange health information between endpoints that have a legitimate need and existing right to share (e.g., for a referral or to share information with patients). There are a zillion (I counted) ways to make that happen.
The hard part is wading through all of those possibilities to find the approach that best optimizes all of the tradeoffs that show up in the real world. Just a sampling of these often-competing considerations:
- Re-use as much existing/proven technology as possible
- Usable by small practices with limited HIT and large practices with huge existing HIT investments
- Inclusive of both structured and unstructured information
- Flexible enough to coexist in multiple overlapping policy environments
- Able to scale with the state of the art, e.g., as individual providers are routinely granted secure credentials
- Attractive to developers
- Etc. etc. etc... the list goes on and on
Now this is actually the fun part, because it's just like solving a big puzzle. A great thing about Direct was that since its earliest days, we've had a core group of people who were really involved in the details of these discussions --- which is critical, because what inevitably happens is that when you push on one of these issues, side effects ripple through the others. Only by keeping the whole mess in your head can you find that best-fit point of optimization.
But of course, the rest of the world hasn't taken that journey with us. And because we've moved so quickly on this one, our documentation still leaves a lot to be desired, so the system can feel more "messy" than it really is. Arien, Janet, Rich and the team are working hard on this --- a thankless but essential task. During this limbo period, it's not always obvious to folks that the tradeoffs we chose are the right ones. It's reasonable for folks to feel that way --- and the only definitive way to prove out the system is to get it working and deployed for real. Until that happens, we should expect a lot of questions that just take time to answer.
One such discussion is the choice between mutual TLS and S/MIME for message transport. John Halamka raised the issue today in a blog post, and Arien did a great job of responding with our thinking (which was all about optimizing for the first and fourth bullets in the list above).
Another one that I am particularly passionate about is the use of DNS for certificate discovery. I actually don't care specifically about DNS. But I really care about having a single, universal mechanism for certificate discovery --- without that, I fear we're going to be in a world of hurt.
In order for S/MIME to work between you and me, I need to have your public key and you need to have mine (I might have my own key, or there may be one for my whole organization --- that's not really important here). Somehow we have to exchange those public keys. One of the core weaknesses of S/MIME is that it doesn't specify how that happens. As a result, it's a total pain for me to manage all my relationships.
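To make that pain concrete, here's a toy sketch (not anything from the Direct spec --- the store, addresses, and file paths are all made up) of what life looks like when every sender has to hand-maintain their own map of address to certificate, the S/MIME equivalent of a pre-DNS "hosts" file:

```python
# Hypothetical illustration: a hand-maintained local certificate store.
# Every entry here had to be exchanged out-of-band, one relationship at a time.
local_cert_store = {
    "drsmith@direct.example.org": "certs/drsmith.pem",  # made-up example entry
}

def find_certificate(address):
    """Look up a recipient's public certificate in the local store only."""
    cert = local_cert_store.get(address)
    if cert is None:
        # No universal fallback exists: if we never traded keys,
        # we simply can't encrypt a message to this person.
        raise LookupError("no certificate on file for " + address)
    return cert

print(find_certificate("drsmith@direct.example.org"))
```

The failure mode is the point: any address you haven't manually provisioned is unreachable, which is exactly the problem a universal discovery mechanism removes.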
Once upon a time, we had this problem with web addresses. TCP/IP doesn't know anything about names like www.dartmouth.edu --- it works off of IP addresses (such as 18.104.22.168 for Dartmouth's web site). Before DNS, every computer that wanted to talk to Dartmouth needed to have a "hosts" file that mapped the name to its IP address. If you didn't have that entry in your hosts file, you were out of luck.
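For anyone who never saw one, a hosts file is just a flat text file of hand-entered mappings. An entry for the example above would look something like:

```
# /etc/hosts --- one hand-maintained line per destination
18.104.22.168   www.dartmouth.edu
```

Every machine needed its own copy, and every copy went stale the moment an address changed.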
This was obviously unacceptable, because if I told my buddy to go visit www.dartmouth.edu --- there was no guarantee her computer would be able to find it. As a result, a bunch of hacky and brittle mechanisms for trading around hosts files started to emerge. Happily, DNS came along and made this problem go away. DNS is really a magical thing --- a distributed service that we all agree on and quietly sits behind the scenes, making sure that when I type ANY web address into my browser I get to the right place. Even if Dartmouth changes their IP addresses, DNS is smart enough to make sure I always have the right one. It's pretty cool.
We need the same thing for Direct! This is the kind of issue where under-specification is a big negative. If a third of the country decides to use DNS, and another third uses LDAP, and the final third just shares certificates by hand --- our goal of universal connectivity is gone before we even get started.
And for what? We've shown in our threat models that DNS is a secure means of exchanging certificates for Direct. We know it scales --- boy does it scale. The specification we're using has been around since 1999, so we're not pushing any envelopes there. And it's already supported by the dominant DNS software package out there (BIND). We even built up our own "baby" DNS server to make it easier for folks who don't want to think about it.
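For the curious, publishing a certificate this way is just another line in a DNS zone file. A hypothetical BIND zone entry using the CERT resource record (RFC 2538, the 1999 specification referenced above --- the name, and the truncated base64 blob standing in for a DER-encoded certificate, are illustrative only) might look like:

```
; hypothetical zone entry publishing an S/MIME certificate as a DNS CERT record
; format: owner IN CERT type key-tag algorithm certificate-data
drsmith.direct.example.org. IN CERT PKIX 0 0 MIIC...base64-certificate-data...
```

Once it's published there, any sender's system can resolve it the same way it resolves a web address --- no hand-trading of keys required.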
If there is a reason to reject DNS --- ok, let's have that conversation. But we have to get to a universal mechanism if we want to succeed.
This kind of thing will become obvious in practice. But as the length of this post attests, in the abstract it's a complex conversation. I appreciate the great critical review that HITSC and others are giving to our work ... but hope that those conversations won't derail things before we see what happens in the real world --- I am really confident that we're all going to be super-happy with the results.