Code Repurposing

[Ed: I’ve now posted a follow-up entry to this blog that talks about some strategies you can use to mitigate the kinds of problems outlined in this blog entry]

<sigh>

Code repurposing really blows.
And it sucks. It sucks and it blows. And not necessarily in that order!

This blog could be short, or it
could be long. It depends on how much I ramble and how many side-topics I need
to bring in to talk about this most heinous of concepts. Hope you can come along for the ride.

So what is code repurposing? I talked about it a bit in a previous blog, but I’m just going to brain-dump again so please forgive me if I repeat myself.

Myself.

But before I start, I want to talk a bit about the difference between dangerous code and malicious code. Eric touched on this subject in his blog a while ago, but in case you didn’t read it or in case you’re still not sure, I’ll start with a bit of a story. I’m doing this so I can talk about the difference between installing bad code and having good code get repurposed (something that a lot of really smart people don’t understand).

[Ed: Word just crashed so I have to re-type this paragraph… argh!]

If I told you that the other
day I discovered some code on my machine that could erase all the files on my
computer and any network shares I have access to, you might be a bit alarmed.
If I told you that I also found some code that could send e-mails to all the
people in my Outlook contacts, you might start getting nervous. If Mr. Paranoid can’t even keep his machine virus-free, then how can the rest of the world? Run to the hills!

Well… someone just jumped to
a conclusion :-). I never said I had a virus; I said I had software that could
delete files (it’s called the DEL command),
and I said I had software that could send e-mail (it’s called Outlook). The DEL
command is pretty dangerous — it deletes data! — but its operation is well
documented and only trusted entities can use it to delete files. It’s not
malicious. Same for all recent versions of Outlook — it can send mail, but
only when I want it to.

Installing vs Repurposing

At work when we’re talking about security problems, we often invoke a mythical ACTIVEX CONTROL OF ULTIMATE DESTRUCTION (ACOUD), a control that will proceed to format the user’s hard drive as soon as it is initialised.

This control is talked about in
two different contexts:

1) The ACOUD is not installed on my machine, but I visit a web site or open a document that attempts to download the control for malicious purposes

2) The ACOUD is installed on my machine and I visit a web site or open a document that attempts to invoke the control for malicious purposes

There is a significant
difference between these two scenarios, although quite often I have to expend
large amounts of time and effort trying to explain this difference to people to
show why they do (or do not) have a security problem with their designs.

In the first case, we assume
that the control is 100% pure evil and has been developed by a hax0r. The
purpose of the control is to be downloaded to your machine and do as much
damage as possible as quickly as possible. When you visit the web page, the
browser attempts to download the control, and one of three things can happen:

i. The control is blocked by your security settings

ii. You are prompted to download the control

iii. The control installs without prompts

The first one is most likely to
occur if the code is unsigned — it’s the default behaviour of IE — and it’s A
Good Thing. It will also happen if you browse the net in “High”
security mode, or if you are not running as an Administrator. In this case you
are safe from the ACOUD because it will never be installed and hence it can
never do its dirty work.

The second one is likely to occur if the control is signed (remember, the bad guys can get certificates, too!) or if you have lowered your security settings to allow unsigned controls to be downloaded with a prompt (not a good idea!). In this case, you are prompted for your permission / consent to download and install the control. If the browser can determine any interesting security properties about the control (ie, its signature) then it will show you this information; otherwise it will just kind of shrug its shoulders and go “eh?” at you. Now if you decide to install and run the ACOUD, then it’s your own fault! Yes, that’s right! You and you alone are responsible for deciding whether or not to install code on your computer. If you make a bad trust decision (trusting unsigned code, or code from an untrustworthy publisher, or code from someone you’ve never heard of like “Permissioned Media”) then there’s nothing Windows (or any other OS, for that matter) can do to protect you.

Aside: It’s quite funny in a sad kind of way how all the ABMers (Anything But Microsoft) think that the point of Palladium (aka NGSCB, the Next-Generation Secure Computing Base) is to control what software people install on their machines. They think it’s some nefarious plot to stop them installing Linux or Mozilla or OpenOffice.org or whatever on their PCs.

[Ed: Oh dear, he just provided links to competitors’ sites.
He’ll probably get fired tomorrow!!!]

Boy would I sleep well at
night if I knew people couldn’t install bad software on their PCs. No more
viruses or buggy user-written code to worry about! Of course I’d also be out of
a job, because a PC is useless if you can’t install arbitrary code on it. Of course the public just likes to talk about an OS, a browser, and a productivity suite (usually in the context of Windows / Internet Explorer / Microsoft Office vs. Linux / Mozilla / OpenOffice) but any given corporation could have tens or hundreds or even hundreds of thousands (no, that’s not a typo) of custom applications that they need to run their business every day. And if they couldn’t install or use those applications, they would never move to the next version of Windows, which would mean I’d be out on my ear and deported back to Australia.

Sure, we could definitely make the experience much better than the dog-ugly and confusing Authenticode dialog we have today, and it’s something we really need to work on in the future, but
at the end of the day the user has to take on some responsibility. Driving a dump
truck into a bank vault and then telling the police “I didn’t know how to
use the brakes!” doesn’t work very well (go on, prove me wrong!), so installing
malicious software and claiming “I didn’t know how to stop myself from doing
it!” shouldn’t work well either. (I’m deliberately being harsh here — I
really do believe we must do a better job of helping users make informed
decisions when it comes to their computer use — but the fundamental problem
remains: if people want to take actions of questionable merit and / or don’t
want to take the time to understand even the basics of computer security,
there’s not much we can do to help them).

Anyway, back to the story. If
you tell IE to install the ACOUD then you are toast, plain and simple, and it’s
not our fault. Time to reformat your machine
(oh wait, the control already did that for you — how helpful!) and start
again.

And of course if the third one happens, it’s just like the second one except you were saved the inconvenience of clicking “Yes” :-).

So in this case, the problem is getting the control on the user’s machine. If the bad guy can trick the user into installing the bad code, or they can trick the computer into installing it automatically, they’ve already won. What we try to do with our product designs is of course make it impossible for the attacker to do this without the user being made aware of the possible consequences.

So far, so good.

[Ed: I just had a really bad espresso experience — note there is no “x” in “espresso”! Although I have a pretty decent coffee machine, I just switched beans and so the grind wasn’t right. Grrrr]

Now the second big scenario I
mentioned was when the ACOUD is already installed on your machine. In this
scenario, we assume that the ACOUD is actually a well-designed piece of software
that you installed and perhaps regularly use. This might sound shocking, but
remember the example of the DEL command.
I as a user may have a common requirement of formatting hard drives (especially
if I’m in a tech support role at a large company) and in order to make my job
easier, I may have written a tool that automatically formats drives without any
kind of prompting or warnings, because only I should be able to access the
tool, and I presumably know what I’m doing. It’s actually VERY HARD to convince people — even really smart people — that the ACOUD is a legitimate piece of software in this instance. It intuitively seems to go against everything we know about security. “You mean as soon as I call into this control it formats my hard drive? Without prompting?!? How can that be a good idea?!?” But trust me, there’s nothing wrong with such a control if it is properly designed and protected. (This also applies in the case where you have some slightly less risky control that just happens to have an exploitable bug in its initialisation routines, but again the absolute coolness of the ACOUD trumps such a boring example in this case).

So anyway, we have the ACOUD sitting on our machine, and now the attack we are worried about is a web page or Office document that somehow manages to create and initialise an instance of that control without our consent. Holy whack exploitable software, Batman. We’ve got a problem. This is where the Safe for Initialization (SFI) attribute is used by Internet Explorer — by default it will not try to initialise controls with data embedded in the web page because, in general, controls are not very good at protecting themselves against malicious input (overly large values that cause buffer overruns, spoofed URLs that end up receiving leaked information, etc). So unless a control says “Yep, it’s OK to feed me random garbage!” IE will not initialise a control with untrusted data.

In this case, the problem is allowing unauthorised agents to access more-privileged software in unexpected and dangerous ways.

If we design software that
somehow fails to honour the semantics of SFI then we are in deep, deep trouble.
And this is very hard to explain to someone —
yes the user installed the ACOUD, but it doesn’t pose a
security problem until YOUR CODE
decides to activate it on behalf of www.hackmeplease_noreallyimbeggingyou.com.
Eventually the point is made, often via analogies with knives, scissors, axes,
teddy bears (yes, teddy bears) and other real-world objects, but it takes a
tremendous amount of effort. Which is one of the reasons why I’m writing this.

Another aside — I initially tried searching for Safe for Initialisation (the correct spelling) and ended up with a bunch of links to research papers, presumably written by people in Microsoft UK :-).

So there is a big difference
between a hacker installing malicious code of their own design, and a hacker
coopting otherwise benign software for evil doings of their own design. In one case we throw up our hands and say “the user made a bad trust decision!” and there’s really nothing else we can do (other than provide
additional mitigations via a defence-in-depth strategy), and in the other case
we throw up our hands and say “Ouch, we gotta fix that before we
ship!” (and also consider other mitigations as part of a defence-in-depth
strategy). For some reason I’ve started writing “defence” as
“defense”; I don’t know why. I wish I would stop doing it, but at
least Word corrects it for me so I don’t look silly when this finally goes up
on the web.

[Ed: Don’t kid yourself — you DO look silly]

Anyway, since I’ve rambled
about random stuff for quite some time now, it only seems fair that I should ramble
some more. Here’s a common thread that you might have found on one of the
Microsoft scripting newsgroups a few years ago:

FrustratedDev: I built a cool ActiveX control for my company, but it won’t load in the browser. I keep getting security errors. Help!

HelpfulPoster1: Just lower your security settings and it will work!

Me: Nooooo, don’t do that! You’ll open yourself up to malicious code in the future.

HelpfulPoster2: Just sign your control with a certificate!

Me: Nooooo, don’t do that! Signatures have nothing to do with whether a control will load in IE or not; they only determine whether the control can be downloaded and installed in the first place.

HelpfulPoster3: Just mark it “Safe for Scripting”!

Me: Nooooo, don’t do that! Your code almost certainly isn’t safe for untrusted callers.

The point being, well…. I forget now. But the moral of the story is don’t sign ActiveX controls or mark them as Safe for Scripting unless you really REALLY need to enable arbitrary users to download your code and have it run against arbitrary web pages. And unless you’re Microsoft or Macromedia or Apple or a similar type of company trying to reach millions of desktops across the world — with the resources and expertise to thoroughly review your code for security vulnerabilities — that probably means “not you”. (And even if it is you, you probably missed a bug or two along the way).

In case you care, more info on
building secure ActiveX controls can be found here.
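And as a rough illustration of the mechanics (not anything from that article): besides IObjectSafety, a control can make the “Safe for Scripting” / “Safe for Initialization” claim statically by registering itself under two well-known component categories. Everything in this C# sketch is hypothetical except the two CATID values, which are the documented ones:

    // Hypothetical registration code; the CATIDs are the documented component
    // categories IE checks, but the control and its CLSID are placeholders.
    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Win32;

    [ComVisible(true)]
    [Guid("DEADBEEF-1111-2222-3333-444455556666")] // placeholder CLSID
    public class MyInnocuousControl
    {
        const string CatidSafeForScripting    = "{7DD95801-9882-11CF-9FA9-00AA006C42C4}";
        const string CatidSafeForInitializing = "{7DD95802-9882-11CF-9FA9-00AA006C42C4}";

        [ComRegisterFunction]
        static void RegisterSafetyCategories(Type t)
        {
            // The mere existence of these keys is the "Yep, I'm safe!" claim;
            // nothing verifies it, which is exactly why you shouldn't make the
            // claim unless your control really can take arbitrary garbage.
            string baseKey = @"CLSID\" + t.GUID.ToString("B") + @"\Implemented Categories\";
            Registry.ClassesRoot.CreateSubKey(baseKey + CatidSafeForScripting).Close();
            Registry.ClassesRoot.CreateSubKey(baseKey + CatidSafeForInitializing).Close();
        }

        [ComUnregisterFunction]
        static void UnregisterSafetyCategories(Type t)
        {
            // Sketch only: this assumes the key exists (i.e. we registered first).
            Registry.ClassesRoot.DeleteSubKeyTree(
                @"CLSID\" + t.GUID.ToString("B") + @"\Implemented Categories");
        }
    }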

Get to the point already!!!

[Ed: Yeah, you rambling fool! Get to the point!]

OK, so something about repurposing ActiveX controls. Let’s start with an example — a rather extreme one that is used quite often internally here in our team at Microsoft to get across the general idea of the badness of repurposing.

Let’s say that OTG (our
internal IT department) builds a “Format your Drive” web page, using
a signed control that is marked “Safe for Scripting” (so that it
loads without any errors). When you go to the Helpdesk web site and click on
the “I need to format my drive” link, it downloads the helpful web
page for you. The document has some informational text about what formatting
means, why it is a dangerous thing to do, and so on. It also contains two buttons
labelled “Yes” and “No”, and it links to the code that does
the actual formatting. The code is signed with the internal corporate
signature, so all users at Microsoft implicitly trust it. When the user clicks
the “Yes” button, it calls into the code to format the drive, which is perfectly acceptable and the right thing to do, since the user has been made aware of the consequences and has made an informed decision.

The problem is that some pesky
trickster creates a copy of the web page and replaces all the text about hard
drive formatting with some text about downloading pictures of famous
celebrities in various stages of undress. They maintain a link to the original
formatting ActiveX control (which, remember, is signed and trusted), and send a
link to the page around to all the people in their team. An unsuspecting user or
three hundred decides to open the link and click the “Yes” button, in
eager anticipation of seeing Margaret Thatcher in her winter underwear. (My
apologies to those readers who just passed out). This of course invokes the
original FormatDrive() function in
the trusted control, and the user is toast.

A more subtle (but perhaps more
realistic) scenario is where a less obviously dangerous piece of code gets
repurposed from another document that is expected to contain code. For example,
let’s say that the IT department builds a new kind of budget forecasting web page that links to a custom ActiveX control to implement certain functions similar to those of an Excel workbook. Let’s also say that the HR department has built a
web page (using another control) that enables users to balance their benefits
(for the non-US people: that means your non-salary compensation, such as a
medical plan, gym / health club membership, life insurance, etc.). And let’s
also assume that the HR control takes the URL of the web server it should
contact as a parameter to the <object> tag in the HTML document.

Now assume that malicious user Bob wants to know how much money Alice earns, so he modifies a copy of the HR web page so it looks like the budget web page (by formatting it to look like a budget spreadsheet), but he updates the <PARAM> tag to point to his own web server and sends the document to Alice with a note saying “Please check these estimates and get back to me”.

Now we see the problem. Alice opens the web page,
and instead of the Budget code being executed, the HR code starts executing and
uploads her personal information — salary, stock options, health coverage,
etc. — to the web server that Bob controls. Remember the HR code is already
signed and trusted; this is not a malicious code injection attack, but a
repurposing attack. Even assuming that Alice’s machine was set up to prompt before executing code, she expects the budget web page to execute code and so will most likely consent to the request, even though it’s executing the wrong code.
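Just to pin down where the defect lives (none of this is real HR code — the control, the “ServerUrl” parameter name and the corporate domain below are all invented for illustration): the broken pattern is a control that believes whatever server the hosting page hands it; the fix is to refuse any server the control doesn’t already trust, no matter what the page says.

    using System;

    public class BenefitsControl
    {
        string serverUrl;

        // Imagine IE handing us the <PARAM NAME="ServerUrl"> value at load time.
        public void LoadServerUrlParam(string paramValue)
        {
            // VULNERABLE: the page's author — i.e. possibly Bob — picks where
            // Alice's salary data gets uploaded.
            //     serverUrl = paramValue;

            // SAFER: only talk to hosts we already trust, regardless of the page.
            Uri candidate = new Uri(paramValue);
            bool trusted = candidate.Scheme == Uri.UriSchemeHttps
                && candidate.Host.EndsWith(".corp.example.com", StringComparison.OrdinalIgnoreCase);
            if (!trusted)
                throw new InvalidOperationException("Refusing to send data to an untrusted server.");
            serverUrl = candidate.ToString();
        }
    }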

I’m sure you can imagine other
scenarios. The possibilities are endless! 🙂

[Ed: I think that should be “:-(“]

As I explained in my previous blog entry, insofar as VSTO goes we try somewhat to stop this attack from e-mail attachments and random web sites, but it doesn’t help if users copy documents to their desktop or start publishing such malicious documents to trusted internal web sites.

And now you know why all my
blogs are posted past midnight
— I can’t sleep with all these nightmares about code repurposing going on in
my head!

One last thing for the evening
— a real life tale of a security bug that could have been… in my very own
WordBlogX!

The current version of WordBlogX
on GotDotNet
is pretty bad, mostly because it is lacking useful features. One feature I’ve
already added in the build on my machine is the ability to put the URL of the
blog server in a config file, thereby enabling people to post blogs to servers
other than GotDotNet without recompiling the source code. Another feature I
really want to add (but haven’t done so yet) is a “remember my
password” feature that securely
stores my credentials
on the local machine so I don’t have to type them in
every time I make a post. I’m going to implement this feature RSN (Real Soon Now; often used
sarcastically to mean “never” but in this case I really do hope to do
it soon!).

Can anyone see the problem with this design? Anyone? Bueller?

The problem is that anyone who gets a copy of the assembly can host it on a server they control, modify the config file to point to a web server under their control, send an arbitrary document that links to this code to anyone who uses WordBlogX, and then wait for the cached passwords to flow in. The attacker simply needs to have a button on their document with the programmatic name of cmdPost and somehow socially engineer the user into clicking it (“Free life-time supply of dental floss — click here now!”).

[Ed: Note that the root badness here is trusting the URL in the config
file; sending usernames and passwords is just a bad side-effect that drives the
point home]

Is this a bad design? Yes, it
is.

Would the average developer,
faced with similar requirements, come up with the same design? Quite probably.

Would this same developer threat model their code and try to mitigate this attack? I don’t think so.

Luckily for me I have the
assembly for WordBlogX “installed” in a particular directory on my
system, and it will only ever try to load the configuration file from that
directory. The assembly is not signed and I do not trust it to load from any
other locations (eg, malicious web sites), so I should be safe from this kind
of attack for the time being (as should anyone else who installs WordBlogX in
the future). But if I was the IT department in a large corporation, I’d
probably just sign the code with the corporate certificate, and the code would
be trusted from any location inside the LocalIntranet Zone (or perhaps on a
server where malicious users have some degree of write access), in which case
it wouldn’t be long before the code was exploited.

I will probably add a
confirmation dialog box to WordBlogX that displays the remote URL before retrieving
credentials or sending any information to the server just to make sure, but
this will no doubt annoy some users
and they will turn off the confirmation.
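For the curious, here’s roughly what those two mitigations look like together — a sketch, not the actual WordBlogX source, and the config file name, method names and dialog text are all invented:

    using System;
    using System.IO;
    using System.Reflection;
    using System.Windows.Forms;

    static class PostHelper
    {
        static string GetConfiguredServerUrl()
        {
            // Load the config only from the assembly's install directory, so a
            // doctored config sitting next to a malicious document is never used.
            string installDir = Path.GetDirectoryName(
                Assembly.GetExecutingAssembly().Location);
            using (StreamReader reader = new StreamReader(
                Path.Combine(installDir, "WordBlogX.config")))
            {
                // Illustrative config format: first line is the server URL.
                string url = reader.ReadLine();
                return (url ?? String.Empty).Trim();
            }
        }

        public static void Post(string entry)
        {
            string url = GetConfiguredServerUrl();

            // Make the trust decision visible: exactly which server is about
            // to receive my credentials and my post?
            DialogResult answer = MessageBox.Show(
                "Post this entry (and your stored credentials) to:\n\n" + url +
                "\n\nContinue?",
                "WordBlogX", MessageBoxButtons.YesNo, MessageBoxIcon.Warning);
            if (answer != DialogResult.Yes)
                return;

            // ... retrieve the stored credentials and POST the entry here ...
        }
    }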

(A lot of people downplay the likelihood or severity of internal attacks because, well, you only ever hear about high-profile public attacks in the news. But don’t kid yourself — attacks from people inside the firewall are much more common and much more likely to cost you big money than attacks from faceless hackers sitting halfway across the world.)

Thus ends tonight’s effort.
It’s been long and random, and I probably scared the willies out of some
people, but that’s the way life is.

P.S. I managed to talk about
teddy bears in the context of security when talking to a fellow PM about
trusting the “Safe for Scripting” flag on controls. His argument was that SFS buys you nothing because hackers will always just set that bit, and therefore you are hosed when the control loads in IE. This is a concrete
example of failing to understand the differences between installing malicious
code and repurposing trusted code, as I tried to outline in the ACOUD example.
Anyway, we had been talking about knives and how he wouldn’t trust his young
child with a big kitchen knife, so in a moment of desperation I came up with
“The Teddy Bear Defence”. When you buy a teddy bear from the shop, it
comes with a safety tag that says “suitable for children 3 years and
older” or something similar. As part of the purchasing act, you
(implicitly) trust that the manufacturer is not lying about the toy being
suitable for anyone 3 years of age or older. If the toy was manufactured by an
evil company, all bets are off as to whether it is even safe for ANYONE to have
the teddy bear, no matter what their age — maybe it spontaneously combusts on
the third Sunday of the month. In this case you made a bad purchasing / trust
decision. But if the toy was manufactured by a responsible company, you can use
the information printed on the warning label to deduce that it is OK to give the
teddy bear to your 5-year-old but not your 2-year-old. And there you have it.

[Ed: This guy is ccccrrrraaaaaaaazzzzzyyyy!]

P.P.S. Who’s this
“Ed” fellow, and why does he keep popping up in my blog?

[Ed: Mmmmmuahahahahahaaaaa…….!!!!!]

Comments (11)

  1. Valery Pryamikov says:

    Excellent story, Peter!
    Have you ever thought about writing a book? I’ll buy it!
    -Valery.

  2. Bob says:

    Why does everyone always use me as an example of a malicious user? 🙁

  3. Jeroen says:

    Excellent piece man, keep it up!

  4. Alice says:

    Why is Alice always the dumb one who blindly does stuff?

    (It’s not always Bob that’s malicious. Sometimes, it’s Charlie. And, he’s one mean mutha.)

  5. Peter Torr says:

    Thanks Valery 😉 Eric Lippert already gave some good reasons why writing a book just isn’t a very appealing idea in one of his blog entries; so did Raymond I believe. There’s less room to be creative and you have annoying things like deadlines and they expect what you write to actually _make_sense_. I’ve sometimes thought about maybe publishing a book of all my most useful newsgroup posts, but it would only sell 3 copies (including me buying Christmas presents for my family).

    Yeah, Charlie is the bad dude always trying to break in on Alice and Bob’s conversations. That dude should really get a life! 🙂

  6. Eric Lippert says:

    Raymond had a blog entry on the subject; my comments that you’re thinking of were in an email, not a blog entry. But to keep you from being a liar, I shall do so.

  7. Siew Moi Khor says:

    Other notable players:
    Eve — eavesdropper
    Mallory — malicious attacker
    Trent — trusted third party

    I’ve a fondness for cryptosecurity protocols. If you do too, you might find the following entertaining:
    http://downlode.org/etext/alicebob.html

    I sleep better with you doing all the worrying for me 😉 I’ve always wondered when you’re going to let Ed emerge into the open. Glad to see that you even named him!
    Another great post Peter! Thank you!