Would a DMARC reject record have prevented Donald Trump from getting elected?


One of the reasons I just wrote that four part series on where email authentication is helpful against phishing, and where it is not-so-helpful, is because I wanted to examine the John Podesta email hacks.

In case you’re not aware, John Podesta was the chair of Hillary Clinton’s campaign for President of the United States. Earlier this year, his email account was hacked by an unknown party, and his emails were leaked to Wikileaks. This sent Hillary Clinton’s election campaign into a tailspin.

Opponents of Clinton seized upon some of the more sensitive (?) emails that showed the party colluding against Bernie Sanders in the primary, and that purportedly showed some of the negatives of the Clinton campaign overall. Proponents of Clinton sought to downplay it, arguing that the content wasn’t that bad (that’s just how politics works), that the criticisms were overblown, or that the Trump campaign benefited from not having been hacked by a foreign power and thus never having its own inner workings exposed.

Some (perhaps many) believe that this affected the outcome of the election by demotivating enough voters to not show up and vote, thus giving the election to Donald Trump. While there are other factors that contributed to the result, it’s probably true that removing some of them could have caused a different result. And it may be true that removing this one may have caused a different result.

Thought Bubble

I understand that after the results of the 2016 US Presidential election, some of you reading this blog reacted like this:

2016-12-23-homer-simpson-celebrate

But others of you reacted like this:

2016-12-23-homer-simpson-depressed

In this blog post, I’m not going to debate the merits or drawbacks of the results of the election.

Similarly, depending on what side of the fence you are on:

  • If you were a Clinton supporter, you probably believe that the hacking of various high-level Democratic operatives and the leaking of their emails to Wikileaks (while no Republican dirty laundry was exposed) played a pivotal role in swinging a handful of swing states to Trump instead of Clinton.
  • By contrast, if you are a Trump supporter, you may not even believe that Democratic leaders were hacked by an Advanced Persistent Threat. If you do believe it, you may think it played little to no role in flipping the election results (that is, it didn’t make enough of an impact); or, you may indeed believe they were hacked by a foreign adversary but think the attackers did a public service by exposing the inner workings of the other party, and thus tipped the election in your favor.

I’m not going to debate the pros or cons of that, either.

So there’s no need to post ideological rants in the comments; there’s a whole rest of the Internet for that.

Thanks, Thought Bubble.

Let’s assume for a moment that had Podesta not been hacked, Hillary Clinton would have won [1]. How could Podesta have avoided being hacked?

When I first started reading in my Facebook feed [2] that Podesta had probably clicked on a phishing scam, entered his username and password, and that’s how the hackers got into his account, I saw someone post “If the spoofed domain had published a #DMARC record, he would have never been hacked.”

Is that true?

I went and started doing some investigation.

First, I assumed that the message Podesta presumably clicked on was a direct phishing message. That may not be the case. Instead, here’s what happened:

  1. Podesta got a phishing message from “Google <no-reply@accounts.googlemail.com>” indicating that someone had his password, that Google had blocked a sign-in from an IP address geo-located to Ukraine, and that he should change his password immediately. There was then a link to a bit.ly URL that redirected to a phishing page. It is not clear that Podesta acted on this email, although it sure looks like a real Google notification.
  2. An email thread then ensues involving an IT representative of the Clinton campaign, with the above phishing message forwarded inline. His advice is that it is “a legitimate email” [3] and that Podesta should change his password immediately. He then advises changing the password at https://myaccount.google.com/security. In other words, he provided the correct advice.
  3. The reply got forwarded around, eventually reaching Podesta as well as another Clinton staffer, who replies that they will get Podesta to change his password and also use two-step verification to sign in.
  4. At some point, someone (Podesta, in all likelihood) clicked on the link to reset his password, but it appears he clicked the bit.ly link, and not the actual Google link.

Let’s look to see how technology could have helped.

First, DMARC wouldn’t have helped

I couldn’t find the original email message (the direct phishing) that was sent to Podesta; I could only find the email chain that contained the forwarded phishing message. Thus, I don’t know what IP address it was sent from.

However, we can see that it was spoofing accounts.googlemail.com.

As of today, accounts.googlemail.com does not publish a DMARC record. However, the parent domain googlemail.com publishes a DMARC record with a policy of quarantine, including an explicit subdomain policy of quarantine:

googlemail.com | “v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:mailauth-reports@google.com”
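A DMARC record like the one above is just a semicolon-separated list of tag=value pairs. As a minimal illustrative sketch (not how any particular receiver implements it), it can be parsed like this:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record into a dict of tag -> value."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        # partition() splits at the first "=", so values like
        # "mailto:user@example.com" survive intact
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:mailauth-reports@google.com"
tags = parse_dmarc(record)
print(tags["p"])   # quarantine
print(tags["sp"])  # quarantine
```

The p= tag is the policy for the domain itself, sp= is the policy for its subdomains, and rua= is where aggregate reports go.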

I did a quick search of our own email logs, and on March 19, 2016, googlemail.com had a DMARC record published. So, Google didn’t just add it after this hack was announced, it was in place at the time of the original phish.

Since this was a spoofed message, it would have failed DMARC and gotten marked as spam. So, unless the recipient of the message went digging through their spam folder and thought it was a real message, Podesta should never have seen it in the first place.

Now we move into speculation territory. I don’t know why I can’t find the original email; I can only find the forwarded version between the campaign staffers. How did this message even cross anyone’s eyes to begin with?

I know that sometimes with senior executives in corporations, both an administrator and the executive have access to the exec’s inbox. They do this so they can sort through their messages and separate out the less important ones, so that the exec is only focused on the important messages. I haven’t bothered to do the research in this case (I’m just a blogger on the Internet), but if this is the case here, then did a staffer dig into the spam folder, find this message and mistake it for a real message, and advise Podesta to change his password?

People digging through spam folders, rescuing malicious messages, and getting compromised is extremely common. That’s why we add messaging to our Safety Tips in Office 365 about why we marked it as spam or phish.

The only way DMARC would have helped is if, instead of publishing a subdomain policy of sp=quarantine, the domain had published sp=reject (or p=reject with no subdomain policy at all, so that any *.googlemail.com domain would inherit the parent domain’s policy of p=reject). But then again, Google doesn’t necessarily reject all messages that fail DMARC under such a record (neither does Office 365); they sometimes go to the Junk folder. So even that is not a guarantee.
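To make the sp=/p= inheritance concrete, here is a hypothetical sketch of how a receiver picks the disposition for a subdomain like accounts.googlemail.com that publishes no record of its own. Per RFC 7489, sp= applies to subdomains when present; otherwise the parent’s p= is inherited:

```python
def effective_policy(org_record: dict, is_subdomain: bool) -> str:
    """Resolve the DMARC disposition for mail that fails authentication.

    org_record  -- parsed tags from the organizational domain's DMARC record
    is_subdomain -- True if the From: domain is a subdomain with no record of its own
    """
    if is_subdomain:
        # RFC 7489: use sp= if present, otherwise fall back to p=
        return org_record.get("sp", org_record.get("p", "none"))
    return org_record.get("p", "none")

# googlemail.com's record at the time of the phish (per the post)
googlemail = {"v": "DMARC1", "p": "quarantine", "sp": "quarantine"}
print(effective_policy(googlemail, is_subdomain=True))  # quarantine

# Had Google published p=reject with no sp= tag, subdomains would inherit reject
hardened = {"v": "DMARC1", "p": "reject"}
print(effective_policy(hardened, is_subdomain=True))    # reject
```

With quarantine, a spoofed message lands in the spam folder where someone can still dig it out; with reject it would never have been delivered at all.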

Second, I do think that the IT department made a big mistake

The one big mistake I do think the IT department made (assuming that the message was not originally in the spam folder and subsequently rescued and forwarded [or even if it was]) was not “defanging” the malicious URL.

“Defanging” is my term for making a dangerous URL not dangerous. For example, suppose this was a malicious URL:

http://malicious.example.com

A defanged URL might be this:

http://malicious [dot] example [dot] com

The above link is no longer clickable. You can see that the IT person did provide the correct URL to Google’s password reset page, but Podesta clicked on the wrong one. The IT person no doubt thought he was providing the right advice about changing the password, but he left the dangerous content still in the message. There was still room for error, and in this case it mattered.

Before forwarding the message, he should have either deleted the link entirely, or defanged it. That would have totally prevented Podesta from doing the wrong thing.
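As a sketch, defanging can be as simple as a couple of string replacements. The hxxp:// scheme rewrite is a common extra step on top of the [dot] convention described above:

```python
def defang(url: str) -> str:
    """Make a URL unclickable before forwarding it for analysis."""
    # Neuter the scheme so mail clients don't auto-link it
    url = url.replace("http://", "hxxp://").replace("https://", "hxxps://")
    # Replace dots with the " [dot] " convention so the host no longer parses
    return url.replace(".", " [dot] ")

print(defang("http://malicious.example.com"))
# hxxp://malicious [dot] example [dot] com
```

The forwarded message keeps enough of the URL for a human to recognize it, but nothing a mail client will render as a clickable link.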

It’s unclear whether two-factor authentication was ever set up. Many (most?) people don’t use it, but right from Day 1 there ought to have been a policy in place to require it, especially for executives.

Third, I don’t blame Podesta for clicking on the URL

I was reading on Slashdot and some of the commenters were calling Podesta an idiot for ignoring the actual URL and instead clicking on the bit.ly link.

Yet if he were an average Internet user on a mobile device, advised by people on his own team to change his password, it’s natural to assume he would scroll down the page, see the Google sign-in page, and gloss over the details in the middle. We all rely upon mental shortcuts, and all of us also know that high-ranking executives don’t read email in detail (I spend a long time editing my emails when I want an executive to weigh in on something).

Besides which, on a mobile device, it’s not like he can hover over the link to see where it goes.

2016-12-23-gmail-phish

So for someone who has been told to change his password, and who sees that image while scrolling down quickly, it’s not a stretch for most of us to click it.

Fourth, even if nobody fell for this hack, there’s still plenty of other ways to get hacked

My guess is that this original message was marked as spam due to email authentication, but somehow it was rescued and still managed to trick the user. But even if the phisher hadn’t spoofed googlemail.com, they could have spoofed Google in any number of other ways, such as random IT phishing attacks, weakly protected domain attacks, and impersonation attacks.

Would Podesta himself have fallen for this? Would his staff? It’s unclear.

But one thing we know for sure: the attackers would have kept hacking until they finally did get access. If not through Podesta himself, then through someone else.

Fifth, this is not the first time I have seen a hack like this, and a combination of technologies is required, along with a security policy

Earlier this year, I saw an attack where a phisher sent a message with a malicious link to an executive, and it got through to him. He forwarded it to his assistant, who clicked on the link and got infected with malware. The original target wasn’t compromised, but someone else within the organization was.

This Podesta phishing attack doesn’t seem to have fooled the recipient, but still succeeded by accident.

Thus, an attack has multiple paths to success.

One thing we do at Microsoft is apply policy. I can’t check my corporate email on my phone without two-factor authentication; I have an iPhone SE [4], and I had to install an authenticator app from Microsoft and enter a PIN, which was verified with a phone call. I have to renew that authenticator app every so often. I can’t access my work email on my laptop unless I am using Windows 10, which forces me to log in using my fingerprint. So there’s multifactor authentication that way, too.

You can see my IT department has taken the decision out of my hands, and that it is a corporate policy. It’s still possible to hack me, but it’s way harder.

People in high-ranking positions need to be aware that they are under attack, and their security departments need to implement policies that make it easy for them to get their work done securely. This is my personal recommendation to all government departments – I preach the virtues of email authentication, and that’s important. But securing the endpoint is also important, because attacks can succeed indirectly.

Even by accident.


[1] Yes, yes, I know that’s not necessarily true. See the Thought Bubble.

[2] As we all know, our Facebook feeds are not the most reliable source of accurate news.

[3] There’s a story floating around that the staffer who wrote “This is a legitimate email” meant to write “This is an illegitimate email,” and that’s the reason why Podesta clicked on the link. Had he written “illegitimate,” Podesta never would have clicked. I doubt that; the crux of the message was that he had to change his password, not whether or not the original message was legitimate. I think the URL should have been defanged.

[4] Yes, Microsoft employees can have iPhones.


Comments (5)

  1. TMiq says:

    Excellent article. I just finished implementing SPF, DKIM, and DMARC with the “reject” policy (in Office 365) for our company, and I feel much better about hindering spoofing, but risks in handling email remain (as your article indicates). So even though we’ve improved our domain’s legitimacy through DMARC, I also realize my users “still” need to be continually reminded that risks in processing remain.

    Also, your article underscores that options such as multi-factor authentication really are critical to tightening security. I also glean from this article that regardless of whatever technology gets implemented, user mindset remains a very important factor. I’ll bet that Podesta did not “think” he was a target per se. This sentiment seems to be constant among high-profile individuals; they feel that someone else, not they, is the target. A careless approach to processing email has proven to create problems for many. For sure it ultimately increases security costs.

    I hate to admit this, but I was the victim of one of the Nigerian phishing bank scams. Our company controller received an email purporting to be from our CEO, which requested that funds be wired to some charity’s bank account in Texas (from our bank in California). As was our company’s custom, I was Cc’d on this request, and our controller forwarded this email request to our bank. Our bank called me on the phone to verify it was legit. I approved the transaction because the fraudulent message, in tone and action, “sounded” like something our CEO would do. Stupid assumption on my part (even though similar legit transactions had occurred in the past). Not only was email technology a problem in this case, our internal process for such transactions sucked. Fortunately, the CEO called me a few moments later (he happened to see the email on his smartphone) to ask me what this was all about. I briefly explained what had happened, immediately called the bank back, and successfully canceled the transaction. Again, even though I consider myself careful with this stuff, I was not careful enough in this instance.

    Your article reinforces the idea that improving security is multifaceted and there is currently no one magic method…yet. Someday maybe that will happen, but for now keeping on top of the incremental improvements and implementing new tools is key. That, along with continually educating the end user is what is needed.

    1. tzink says:

      Thanks for your story, TMiq. Really informative.

      It’s true there is no magic method… yet. Maybe in the next couple of years, Artificial Intelligence will come up with something.

      I don’t know how I feel about user education. I think it works in some capacity, I have lots of friends and family who ask me if a certain email is secure. I usually answer “yes”, but the problem is that legitimate emails can be malformatted, and illegitimate ones can look good. There is no simple heuristic to differentiate between the two. If human intelligence can’t differentiate, can AI do better?

  2. TMiq says:

    Tzink, Can I ask you if your response (to my comments) represents an evolution in your thinking that is not insignificant? You replied saying, “I don’t know how I feel about user education.” That seems to be a bit of a departure from comments you’ve made in the past, and may reflect the dynamic change taking place in the complexity of determining what email is authentic and what is not.

    In the article you wrote back on September 12, 2014, titled:
    “Why does spam and phishing get through Office 365? And what can be done about it?” Here’s the link: https://blogs.msdn.microsoft.com/tzink/2014/09/12/why-does-spam-and-phishing-get-through-office-365-and-what-can-be-done-about-it/

    you seemed to emphasize the importance of user education, strongly.

    In that 2014 article, you state, “#4 – Invest in User Education – User education is one of the most important aspects of anti-phishing. While technology is one component, users need to be aware of the risks.” And you go on to say, “A combination of technology plus user education is the best method of preventing falling for phishing scams.”

    Your response to me today seems to show a change from the above. Today you state, “The problem is that legitimate emails can be malformatted, and illegitimate ones can look good. There is no simple heuristic to differentiate between the two. If human intelligence can’t differentiate, can AI do better?”

    Your 2017 view seems to have changed from the view you held in 2014.
    • In 2014, you said, “User education is one of the most important aspects of anti-phishing.”
    • Today, in 2017, you are grappling with and say, “…[there is] no simple heuristic to differentiate between the two. If human intelligence can’t differentiate, can AI do better?”

    It seems as if humans, at some point, cannot differentiate and so the solution must ultimately be technical.

    I know you’ve diligently grappled with the complexity of email security for the last 10+ years, and your comments seem significant. I hope I am not drawing inferences from you that are not there, but the above comments do seem to show the dynamic at work in the complexity of email security as time marches on, and even helps to emphasize the importance of technical solutions to our future, even our nation’s future.

    I’ll close by saying that your question about AI almost provides some relief to me as an admin. Why? Because trying to train people to be careful is a never ending battle. And that’s because people are not machines; you cannot “set them and forget them.” Thus, it is ever more important for me as an admin to focus on utilizing newer technologies and methods to improve email safety for my users.

    Ultimately, AI does seem to be the ultimate answer. In a simple analogy, if credit card transactions can be determined to be legit or not (and that industry’s security does seem to be improving), then it seems email should eventually be elevated to that same sort of level of authentication, too…someday.

    I know this conversation can quickly rise to a level of which I am not qualified to speak. However, this is the kind of stuff that you may want to consider addressing in your future articles; it is fascinating stuff.

    Thanks for your blog and comments. I find them very useful and insightful.

    1. tzink says:

      The big change between 2014 and 2017 is how we deployed our antispoofing feature (which I talk about here).

      The history of that feature is that any domain can prevent spoofing by setting up a DMARC record and going to p=quarantine/reject. If that’s too much, they can do p=none and create an ETR (Exchange Transport Rule) for inbound email to mark it as Junk except for allowed senders. Yet despite all the calls I had with customers and how we advertised the feature, we got too little uptake. DMARC itself has minimal uptake, even at large organizations.

      So we deployed antispoofing and opted everyone in automatically, doing the necessary legwork behind the scenes so that domains would experience as little degradation as possible (and making use of existing overrides). This allowed us to go from protecting a single-digit percentage of domains against Exact-Domain spoofing to protecting 100% of them. This was a technical solution that didn’t require customers/users to take action.

      That’s the big change in my thinking. User education does have its place, but it requires users to change their behavior, and it is optional. Making it optional means you’ll never get full compliance, even if it is 100% effective, because some people will think they don’t need it and others will believe it’s too expensive or inconvenient.

      Contrast this with the antispoofing feature, which requires nobody to opt in (only to provide overrides when the system makes mistakes). Coverage is far better. So the shift is explained by how effective we think a solution can be. Humans do change behavior, but counting on people to modify their behavior in order to reduce risk gets mixed results; hopefully Artificial Intelligence will evolve to the point where it accounts for human behavior and reacts accordingly. I still think user education has its place, though.

      I liken it to how my brain works: I don’t have to think about whether I need food, my body/brain sends me signals through hunger pangs. I don’t have to think about whether my body is being damaged, my body/brain sends me pain signals. And so forth. So I lean towards automating security as much as possible without requiring the user to take action. We can’t always get away with this, and there’s sometimes an upfront cost.

      1. TMiq says:

        Tzink, interesting stuff. Thanks for the response.
