You Want Salt With That? Part Three: Salt The Hash

Last time we were considering what happens if an attacker gets access to your server's password file. If the passwords themselves are stored in the file, then the attacker's work is done. If they're hashed before being stored, and the hash algorithm is strong, then there's not much the attacker can do other than hash candidate strings one at a time and look through the password file for a matching hash. If there's a match, the attacker has discovered that user's password.

You don't have to look through the vast space of strings in alphabetical order of course. An attacker will start with a dictionary of likely password strings. We want to find some way to make that attacker work harder. Setting a policy which disallows common dictionary words as passwords would be a good idea. Another technique is to spice up the hashes a bit with some salt.

System #3

For every user name we generate a random unique string of some fixed length.  That string is called the “salt”.  We now store the username, the salt and the hash of the string formed by concatenating the user’s password to the salt. If user Alpha's password is "bigsecret" and the salt is "Q3vd" then we'll hash "Q3vdbigsecret".
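The idea can be sketched in a few lines. This is illustrative only: SHA-256 and a four-character alphanumeric salt are arbitrary choices here; the article doesn't prescribe a particular hash algorithm or salt length.

```python
import hashlib
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

def make_salted_record(password: str, salt_len: int = 4) -> tuple[str, str]:
    """Generate a random salt and return (salt, hash of salt + password)."""
    salt = "".join(secrets.choice(ALPHABET) for _ in range(salt_len))
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest
```

The table then stores the username alongside the returned salt and digest; the salt itself need not be kept secret.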

Since every user has their own unique random salt, two users who happen to have the same password get different salted hashes.  And the dictionary attack is foiled; the attacker cannot compute the hashes of every word in a dictionary once and then check every hash in the table for matches anymore.  Rather, the attacker is going to have to re-hash the entire dictionary anew for every salt.  A determined attacker who has compromised the server will have to mount an entire new dictionary attack against every user’s salted hash, rather than being able to quickly scan the list for known hashes.
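To make that concrete, here is a tiny illustration; SHA-256 stands in for whichever strong hash the system uses, and the second salt value is made up.

```python
import hashlib

def salted_hash(salt: str, password: str) -> str:
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Two users share the password "bigsecret" but have different salts,
# so their stored hashes are completely unrelated; a single precomputed
# dictionary of unsalted hashes matches neither entry.
alpha = salted_hash("Q3vd", "bigsecret")
beta = salted_hash("x9Tk", "bigsecret")
assert alpha != beta
```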

Salting essentially makes it less feasible to attack every user at once when the password file is compromised; the attacker must start a whole new attack for each user.  Still, given enough time and weak passwords, an attacker can recover passwords.

In this system the client sends the username and password to the server, the server appends the password to the salt, hashes the result, and compares the result to the salted hash in the table. 
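The server-side check might be sketched as follows, again with SHA-256 as a stand-in hash:

```python
import hashlib

def verify(password_attempt: str, salt: str, stored_hash: str) -> bool:
    """Prepend the user's salt, hash, and compare to the stored salted hash."""
    candidate = hashlib.sha256((salt + password_attempt).encode()).hexdigest()
    return candidate == stored_hash
```

With Alpha's record above, `verify("bigsecret", "Q3vd", stored_hash)` recomputes the hash of "Q3vdbigsecret" and succeeds only if it matches the stored value.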

This answers the original question posed by the JOS poster; the salt can be public because it is just a random string. Ideally, both the salt and the salted hash would be kept private so that an attacker would not be able to mount a dictionary attack against that salt.  But there is no way to deduce any information whatsoever just from the salt.

And of course, it's better to not get into this situation in the first place -- don't allow your password list to be stolen! But it's a good idea for a security system to not rely on other security systems for its own security. We call this idea "defense in depth". You want to make the attacker have to do many impossible things to compromise your security, so that if just one of those impossible things turns out to be possible after all, you're not sunk. 

But what about the fact that the password goes over the wire in the clear, where anyone can eavesdrop? That's now the weak point of this system. Can we do something about that? Tune in next time and we'll see what we can come up with.


Comments (21)

  1. Mr Blobby says:

    Defense in Depth?

    That would be, for example, not allowing a web browser to access operating system functionality?

  2. Lance Fisher says:

    Blobby, you could unplug your computer from the network if you want the ultimate security, but most people care about functionality more than security. So we as programmers need to provide the functionality with the best security feasible.

  3. mike says:

    I believe Mr. Blobby is referring to running under least privilege.

  4. Carlos Beppler says:

We concatenate the "user name" before the password before hashing it. This way even users with the same password have different hashes.

What is the advantage of using "salt" over our algorithm?

  5. Eric Lippert says:

    The "Principle of Least Privilege" and the "Principle of Defense In Depth" are different principles of secure design.

    By "Least Privilege" I mean that a system should grant the smallest number of privileges possible. Some examples:

    * If you’re running a bank, only give the combination to the safe to people who absolutely positively need it — not the tellers, the security guards, the customers…

    * If you’re running a script from a web page, the web browser should restrict that script to the smallest possible set of functionality that allows the script to get safe work done.

    * If you’re logging in to check your email, log in from a "regular user" account, not from an "administrator" account. If you don’t need a privilege, don’t be in a situation where it’s going to be granted; if an attacker manages to get you to run a Trojan horse, you’ll run it with regular user privileges and not administrator privileges.

    * If you’re writing a managed application that needs to assert a privilege, then don’t assert Full Trust. Just assert the privilege you’re going to need, and revert the assert when you’re done with it. That way if an attacker takes advantage of a security hole in your program, they get the right to, say, read an environment variable, rather than full trust.

    The principle of Defense In Depth by contrast, is the principle that secure systems are built in layers, and each layer is designed so that if every other layer has failed, it is still reasonably secure. You want to keep a resource safe, so you protect it with a password. What if an attacker gets onto the server? Protect the password file so that it can only be read by administrators. What if the attacker defeats that protection? Encrypt the file. What if he breaks the encryption? Store hashes rather than passwords. What if he runs a dictionary attack against the hashes? Salt them to make dictionary attacks harder. What if an individual user’s password is being attacked? Ensure that all users choose long passwords that are hard to brute-force. Etc, etc, etc.

    The idea of a defense-in-depth security system is to make the system sufficiently strong that the expense of mounting a successful attack is higher than the value of the resource being protected.

  6. Carlos – actually that user name would work as a "salt" value for your algorithm. But, you have the potential problem that if you change the user name through other means, you won’t be able to successfully evaluate your hashed password anymore.

  7. Eric Lippert says:

> What is the advantage of using "salt" over our algorithm?

    You are using a salt, you’re just using the user name rather than a random salt.

    The problem with using user names is that they are highly predictable. And if they’re highly predictable, that means that attackers can precompute large dictionaries of username-followed-by-password hashes which can then be used to attack the system.

    Also, if you use a random salt then you can choose how much entropy is added to the password by choosing the salt length.

  8. William says:

"We concatenate the "user name" before the password before hashing it. This way even users with the same password have different hashes."

    The only problem some may have with that is if multiple sites use the same method, then you have the user’s password equiv on all those sites too (i.e. banks, etc). It is just as easy to use a random salt and store the salt with the user. This does not slow down dictionary attacks on that one user, but does make all user/pwd combos unique *and unique to your site only. The algo could then become:


    PWEquiv = SHA1(username + pw)


    SHA1(DBSalt + PWEquiv)

    Note hashes should *only be sent encrypted on the wire as it is too easy to dictionary attack simple hashes off wire. Even passwords like "Sunlight;12" can be cracked in seconds. So the encryption protects the wire all the way down to your verifier logic which does the decryption. A strong pw policy is also required to help protect against the inside dictionary attack.
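    For illustration only, that two-step scheme might look like the sketch below. SHA-1 is used because William names it; the function names and the site-wide salt value are invented for the sketch.

```python
import hashlib

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode()).hexdigest()

def stored_value(username: str, password: str, db_salt: str) -> str:
    # Step 1: a per-user "password equivalent" -- same password,
    # different username => different value.
    pw_equiv = sha1_hex(username + password)
    # Step 2: fold in a site-wide salt so the stored value is unique
    # to this site even if another site uses the same scheme.
    return sha1_hex(db_salt + pw_equiv)
```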


  9. Mr Blobby says:

    You obviously have a good understanding of the basics of security design, Eric. Why doesn’t Microsoft actually implement such designs?

    Why does so much of Microsoft’s own software require administrator access to run when it could easily be adapted to only affect userspace, thus encouraging people to run as Administrator? Why does XP default to ‘Administrator’ status when setting up a new account?

    Why does IE need to access operating system functionality?

    Why is there no easy, simple way for a user to securely encrypt a file or directory for themselves?

XP SP2 had a security focus and did quite well, but the underlying principles need changing. Yeah, I’m sure it will all be rosy in Longhorn, if it ever comes out…

  10. G. Man says:

    Mr Blobby, Why doesn’t Linux have any concept of a domain? Why doesn’t Linux support any concept of group policies or SMS? Sure Linux may make for a good web or FTP server, but in the IT enterprise… sadly lacking.

Anyway, I can see this series going for a while. Sure you can hash the password on the client and send that, but you are still susceptible to replay attacks. At this point it gets interesting, for me at least. I seem to remember something using digitally signed timestamps.

  11. Eric Lippert says:

    > Why doesn’t Microsoft actually implement such designs?

    I reject the premise of the question. Microsoft implements such designs all the time.

    Every product now must go through threat modeling during design and security-specific code reviews after coding.

    I will be the first person to admit that the Microsoft coding culture at large was not very security-conscious in the past. This has changed massively in the last few years.

    > Why does so much of Microsoft’s own software require administrator access to run when it could easily be adapted to only affect userspace,

    I reject the premise of the question. It is NOT easy to design software that works in partial trust situations.

    I work on Visual Studio. We have done a HUGE amount of research into finding ways to make Visual Studio work well for non-administrator developers, but it is a VERY HARD PROBLEM. Fundamentally, debugging a process is one of the most security-breaking things you can do to a machine, and it is therefore a highly privileged operation.

    We are working extremely hard in devdiv to come up with ways to write software developer tools that do not require admin to run, and we will get there eventually because this is a high-priority goal for the company.

    > Why does XP default to ‘Administrator’ status when setting up a new account?

    Because otherwise people can’t install video games.

    Video games are very high on the list of reasons why people buy computers, and if they can’t get their video game to install, do you think their first call is to the video game company to demand that they implement a user-mode installer, or to Microsoft tech support?

    I would very much like to see improvements to our security model so that video games can be installed and run by normal users, but this will require the cooperation of game companies. Historically the game industry has done a very poor job of implementing secure designs.

    > Why is there no easy, simple way for a user to securely encrypt a file or directory for themselves?

    I reject the premise of the question.

    Right click the file. Select properties. Select General. Select Advanced. Check "Encrypt contents".

    That’s five mouse clicks. How much easier do you want it to be?

  12. Mike Dimmick says:

    Blobby: actually, very little of Microsoft’s software – at least, recent software – requires administrator access.

    IE only requires the same access to operating system functionality as any other user mode process. It’s a common myth that IE is somehow integrated into Windows, using private features. The Internet Explorer we all know is pretty much just a frame around the WebBrowser control. The WebBrowser control is one of the components supplied with the OS. It runs on top of the WinInet library, which is also a component supplied with the OS. That in turn runs on top of Windows Sockets, which is a sanitizing wrapper over the raw AFD driver interface which _actually_ drives the TCP/IP stack in kernel mode.

    The WebBrowser control is also used by HTML Help, on Windows 2000 and later the Add/Remove Programs box, Windows Media Player’s built-in browser, WinAmp’s built-in browser, RealPlayer’s built-in browser, a number of third-party browser shells, and the RSS Bandit window I’m currently typing this comment into.

    To encrypt a file or directory, right-click the item in Explorer, choose Properties, click Advanced, tick ‘Encrypt contents to secure data’ and OK, then OK again. I’m not sure why you think that’s complicated. The Administrator gets a recovery key. The technical details are that the file itself is encrypted with a secret key algorithm such as DES, then the secret key is encrypted with your file-encryption public key and the Administrator’s recovery public key. The encrypted secret key blocks are attached to the file as well. If you grant permission to someone else, your encryption block is decrypted, the secret key is encrypted again with the new user’s public key, and the newly encrypted block is added to the file. If you revoke permissions again that user’s key block is deleted.

    If the option isn’t present your disk may not be formatted with the NTFS filesystem, or you’re running some version of Windows prior to Windows 2000.

  13. Eric Lippert says:

    Mike’s comment raises an extremely important point about encryption. Encryption is EASY, but KEY MANAGEMENT is a pain in the rear!

    There’s no easy encryption because encryption is not a simple panacea. To use encryption correctly you need to understand what is being encrypted, why, for how long, what attacks are likely, how you’re going to manage the keys, blah blah blah blah blah. It is just fundamentally a really hard technology to use correctly.

    Part of secure design is understanding your users. Users are not professional cryptographers. They don’t understand what they want, they don’t understand the attacks, they don’t understand any of that stuff. All they understand is "encrypted = safe", which is not even true.

    You can’t just build a general-purpose arbitrary-algorithm encryption system into a consumer-grade operating system and hope that people use it correctly, because they won’t. You have to very carefully look at the scenarios in which they will use it, and design a system which meets those requirements, and steers people down that path. The encrypting file system meets the needs of a very large segment of our customers and has a reasonably misuse-proof key management system so that’s where we focused our efforts.

  14. Fabulous Adventures in Coding assays this week a, well, fabulous adventure, in simple cryptography. I know enough to get myself in deep trouble with this subject, but Eric has put together three short and knowledgeable posts that begin easy and…

  15. Mr Blobby says:

    Thanks for pointing out the encryption thing, I didn’t realise! </blush>

Regarding IE, I don’t mean the process itself, I mean the way ActiveX controls can have control beyond the application.

    Regarding (for example) Visual Studio: Why do you need an Administrator account at all? Why not have a set of granular privileges, and something like su? SELinux is a start in the *nix world but still unfortunately not widely deployed. It’s excellent to hear it’s a priority.

    Linux certainly does have the capability to set group level privileges, but I get your point – yes, it has a lot to learn about security too. I reject the notion that (for example) RHES isn’t enterprise ready, but that is irrelevant here. Why should that stop me pointing out (what I perceive to be) weaknesses in Microsoft security?

  16. Phylyp says:

    <slightly off topic>

Mike: That was a great summary of how encryption is implemented in Win 5.x. I’ve always wondered how it was implemented that transparently.

    Eric: you’re right, the easier the feature is to use, the harder it probably is to implement.


  17. Lance Fisher says:

    This is slightly off topic, but still related to cryptography.

One of the most impressive cryptography things to me is the widespread use of SSL, and the encryption technology it’s built upon. What do you think it would take for secure email to become a reality? Something like all email clients managing keys seamlessly, and all email being encrypted.

    Are the biggest hurdles encryption, key management, adoption, or something political like it makes the NSA’s job harder?

  18. Drew says:

    To clear up some misunderstandings about EFS:

    On Win2k by default an administrator will have a recovery key for encrypted files.

    There is no EFS on XP Home Edition.

    On XP SP1+ and Server 2003 the default symmetric algo is AES 256 (much better than DES).

An unfortunate side-effect of the way EFS converts from plaintext to ciphertext is that a plaintext copy is left on the raw volume. You can use "cipher /w" to wipe clusters that aren’t in use. And we recommend marking a folder for encryption and creating your files inside it – we don’t do the conversion in this case so there’s no risk of plaintext lying around in freed clusters.

    There seems to be some stale or even misleading advice about EFS on the web. Not only on 3rd party sites, but also here on the mothership. This looks fairly accurate:

    And this is informative:

But even those seem a little outdated. The former has dead links and the latter is before we added AES (3 years ago?). Frankly it’s a little embarrassing.

  19. An interesting alternative way to do this is embedded in the RADIUS protocol – there is a shared secret between the RADIUS server and each RADIUS client, and when users request authentication, their password is hashed together with the shared secret, producing a similar effect, but essentially eliminating the random challenge step in favor of a single secret exchange that is done offline.

This has its pros and cons, of course. More can be read in RFC 2865.

  20. Anon Hacker says:

    Some real balls being talked about MD5

Salting only strengthens against rainbow table attacks – not brute force attacks.  With a brute force attack a binary salt of say 32 bits increases the search space by 32 bits… true… but that’s not going to stop a hacker.

Now here’s where I open a whole can of worms.

Conventional wisdom says that rainbow tables are the threat.  That’s balls.  A REAL hacker will brute force because it’s FASTER than a rainbow table lookup.  Yes!  I said it’s faster than a rainbow table … And yes, right now you’re probably all calling me an idiot.  That’s fine – I’m used to that.

    The difference here is that you’re thinking in a Von Neumann architecture.  Offload the task to an FPGA and the situation flips upside down.  Each potential hashed password (with or without SALT) requires only one run through the functional 64 blocks of the MD5 algo.  Due to the way the algo is constructed we get most of the processing for free (ROL’s are free, and the ABCD rearranging is free – since in an FPGA we do this with ONLY the signal path, no gate delay)

    Not only this, by pipelining, we can have 64 passwords in various stages of processing at the SAME TIME in a single MD5 pipeline.  And we can fit perhaps two or four pipelines per FPGA device.  The result is that the device is so much faster at cracking unsalted passwords than a rainbow table lookup… AND it fits in a matchbox.

Now, since we are doing the entire block in one 64th of a stage, the SALT adds no additional overhead to round processing speed.  We can search a few trillion hashes per second with or without SALT… so, instead, we’re talking about a longer search and not a slower search… and since FPGAs are cheap and the whole process is scalable we can build a device based on a 4×4 array of FPGAs and farm out the task across each.  Indeed, there is no limit to how fast we can search the space.

    Then you can stack those 4×4 blades and add to them as funds permit.  For not a great deal of capital you can crack 32bit salted hashes without any problem – in a matter of days.  And the interface ?  Well, we cheated … we seed the entire thing through a four wire JTAG to set up the ranges for each device on the boundary cells before starting her up… takes about 60 seconds to start up a 64x FPGA device ….

After starting it up you get a latency of around 1500 ns before the pipelines fill up and the first hashes are spat out… then one every 20ns for each pipeline.   (That’s 128 MD5 pipelines handling over 8000 MD5s simultaneously and comparing 128 hashes every 20 nanoseconds).  Seriously, salted hashes are no problem at all.

For the helpless, 128 hashes per 20ns is around 6.4 bn per second.  That’s a cool 23,040,000,000,000 per hour.  And once you factor a usable alpha/num/symbol set into it… well, it stretches much further than you’d think.

The reason for this scary performance is that the MD5 algorithm fits a logic based design really well with no iterative steps.  Also, there is no multiplication or division (which are costly in logic) and the adds, well, towards the end of the block most of them are redundant and reduce to single hardwired constant adds.  Essentially there’s a whole lot of XORing and fixed ROTATES … and of course, a fixed rotate is a wiring problem and not a logic problem, so those carry no gate delay.  We’re left then with XORs and ADDs.

With the adds using a carry lookahead the stage timing is real fast, and of course XORs are very cheap in terms of added gate delay.

    So, no…

The people who say MD5 is reasonably safe if salted tend to be looking at the world through Von Neumann’s glasses… and let’s face it, Von Neumann engines, even the most modern multi-cored processors, are slow and a very inefficient use of transistors.  Yes, they may play Half-Life well… they may produce beautifully rendered landscapes in Bryce… but when it comes to raw crunching you can pay less money and crank out a few million hashes in the time it takes a PC to stagger through round one (of sixty-four) of the first hash.

Now, the next argument I’m bound to hit is ‘But who has time and skills to use FPGAs’ … easy.  Hackers.  Not the little script kiddies hanging out waiting for published exploits and coveting other people’s PoC code.  The REAL hackers.  The ones that ain’t scared off by apparent complexity but are attracted by it.

    And as for governments and large corporations.  Well, they can move that same FPGA design over to a much faster ASIC process… crank out a hundred thousand of the buggers, and crack SALTED MD5 in realtime faster and cheaper than a cluster of fully popped Crays can even dream of.

    But the great thing about my FPGA cluster is that it can crack RC4 one day, WEP the next and MD5 the day after… and then I can close the case and take it with me.  Try that with a Cray!

    So sorry, but all your hash are belong to us!

    Anon (Posted via TOR, So, IP me!)

  21. A recent question I got about the .NET CLR’s hashing algorithm for strings is apropos of our discussion
