Pushing the boundaries of cryptography in a security vulnerability report

A security vulnerability report arrived that implied that the finder had made an earth-shattering breakthrough in cryptography. Specifically, the finder claimed to have found an efficient way to factor the large numbers used in RSA cryptography.

This would be a remarkable breakthrough if true, but the description of the algorithm, while mathematically correct (and I bothered to read through it all and understand it), didn't actually produce an efficient algorithm. It boiled down to trying to find a collision among a population whose size is proportional to the smaller prime factor, with an additional optimization that reduced the amount of calculation to the square root of the population size. (I believe it was this additional optimization that the finder considered to be the groundbreaking discovery.)
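For context, this general shape of collision-based factoring is well known. (This is my illustration, not the finder's algorithm, which we don't have.) The textbook example is Pollard's rho, which finds the smaller prime factor p of n in roughly √p steps by hunting for a collision modulo p:

```python
from math import gcd

def pollard_rho(n: int, seed: int = 2) -> int:
    """Find a nontrivial factor of composite n via Floyd cycle detection."""
    if n % 2 == 0:
        return 2
    x = y = seed
    d = 1
    while d == 1:
        x = (x * x + 1) % n      # tortoise: one step of x -> x^2 + 1 mod n
        y = (y * y + 1) % n      # hare: two steps
        y = (y * y + 1) % n
        d = gcd(abs(x - y), n)
    return d  # on rare failure this is n itself; retry with another seed

print(pollard_rho(8051))  # 8051 = 83 x 97; prints one of the factors
```

The √p cost comes from the birthday paradox: you expect a collision modulo p after about √p values, which is exactly the kind of square-root saving the finder was describing.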

Now, a square-root reduction in complexity is a great thing, but it's minuscule compared to exponential growth.

In 2013, Web browsers required a minimum of RSA-2048, whose smaller prime factor is around 2¹⁰²⁴. That means the collision space is around 2¹⁰²⁴, and the birthday paradox tells us that you'll need to generate around 2⁵¹² items to have a 50% chance of finding a collision. Applying the groundbreaking discovery reduces the number of items to 2²⁵⁶.
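The exponent arithmetic is easy to sanity-check. (A back-of-the-envelope sketch; the 50%-chance birthday bound is approximated here as √N.)

```python
# Collision population: proportional to the smaller prime factor of a
# 2048-bit RSA modulus, i.e. about 2**1024.
population_bits = 1024

# Birthday paradox: about sqrt(N) samples for a ~50% chance of a collision.
birthday_bits = population_bits // 2       # 512

# The finder's square-root "optimization" takes another square root.
optimized_bits = birthday_bits // 2        # 256

print(birthday_bits, optimized_bits)       # 512 256
```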

This is even worse than enumerating all the GUIDs. At least there are only 2¹²⁸ of those.

This algorithm for factoring an RSA-2048 number would require storing 2²⁵⁶ values. That's the square of the number of GUIDs.

We calculated earlier that storing all the GUIDs on SSDs would require 100 earth-sized planets. Storing all the values required for this algorithm to factor an RSA-2048 number would require, um, a lot more than that.

Current upper estimates for the mass of the Milky Way put it at 4.5 × 10¹² solar masses, or (rounding up) 10¹⁹ earth masses. If we need 100 earth masses to store 2¹²⁸ 128-bit values, then storing 2²⁵⁶ 1024-bit values (2¹²⁸ times as many values, each 2³ times as large) will require around 2¹³¹ × 100 ≅ 10⁴¹ earth masses ≅ 10²² Milky Way-sized galaxies.
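Checking that arithmetic (my back-of-the-envelope version, using roughly 333,000 earth masses per solar mass):

```python
# Mass of the Milky Way in earth masses (upper estimate).
earth_masses_per_solar_mass = 333_000                  # ~3.33e5
milky_way_earth_masses = 4.5e12 * earth_masses_per_solar_mass
print(f"{milky_way_earth_masses:.1e}")                 # ~1.5e18; round up to 1e19

# Storage growth versus the GUIDs: 2**128 times as many values, each
# 1024/128 = 2**3 times as large, so total storage grows by 2**131.
earth_masses_needed = 2**131 * 100                     # 100 earths held the GUIDs
galaxies_needed = earth_masses_needed / 1e19
print(f"{earth_masses_needed:.1e} {galaxies_needed:.1e}")  # ~2.7e41 ~2.7e22
```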

This seems impractical.

The finder, however, disagreed with our analysis and insisted that their trial runs with smaller values indicated that the running time was linear in the exponent. "I was able to factor numbers up to 64 bits in size, with the largest taking less than a second."

We decided to take a tip from a number theorist who had to deal with factorization algorithms submitted by crackpots: we suggested that the finder use their advanced algorithm to factor the public key of one of the root signing certificates.

The finder replied back, "I know what you're trying to do, but I'm telling you that I cannot run the algorithm on numbers that large on my laptop. But you can certainly run it on one of your more powerful computers. I have demonstrated that the algorithm is linear in the key length, and my personal lack of access to supercomputers does not invalidate that fact. I have contacted the media about this discovery, but fortunately for you, they don't seem to be interested, which gives you more time to address the problem."

If the algorithm were truly linear in the exponent, then going from 64-bit numbers to 2048-bit numbers would take only 32 times as long. The 1-second run time would increase to just 32 seconds. So let it run for a minute. Five minutes just to be sure. Your laptop can certainly handle that.
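The extrapolation takes one line of arithmetic, using the finder's own claim and benchmark (a sketch; the 1-second figure is the finder's reported worst case):

```python
# If running time really were linear in the key length in bits,
# the finder's 64-bit, ~1-second benchmark extrapolates directly.
benchmark_bits = 64
benchmark_seconds = 1.0            # "the largest taking less than a second"

target_bits = 2048
scale = target_bits / benchmark_bits         # 32.0
projected_seconds = benchmark_seconds * scale
print(projected_seconds)                     # 32.0 -- easily laptop territory
```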

But instead of replying, we decided to disengage. Never wrestle with a pig. You get dirty, and the pig likes it.

Comments (21)
  1. Damien says:

    “I cannot run the algorithm on numbers that large on my laptop.” – I guess that also means that their laptop is incapable of verifying any digital signatures generated by such certificates. They, personally, have bigger problems with their machine if that’s true.

    1. DWalker07 says:

      No; VERIFYING digital signatures is not as hard a problem as factoring the large numbers used in cryptography. It is kind of amazing that the product of two “sufficiently large” primes is so hard to factor, but it is.

      1. Damien says:

        But factoring was easy, so this person claimed, if only they had a machine capable of dealing with such numbers. That’s the point I was trying to make.

        I spent a long time lurking on sci.crypt in the past. I’m well aware of the levels of crack-pottery that are out there, thanks.

  2. Mark Charsley says:

    Or to paraphrase:
    I have discovered a truly remarkable proof of this theorem which this laptop is too small to calculate

  3. 12BitSlab says:

    Given the realities of trying to factor very large numbers, the finder would likely have much better success obeying the #1 law of encryption: “Encryption is easy. Key management is very hard.” In today’s world, every programming framework has all of the latest methods built in. One merely needs to call a few routines and, bingo! High-quality encryption is yours. The truly difficult part is managing keys. Keys can be easily compromised by malware. Easier yet is the fact that humans are easily tricked. Social engineering works very well in the arena of penetrating a system.

    I give the finder credit for trying because we can learn quite a bit from our failures. The finder would be better off trying to take the easier path.

  4. kantos says:

    My general understanding of most crypto vulns at the moment is that they fall into several categories: implementation mistakes, side-channel attacks, and, if you’re dependent on large primes, quantum computers as soon as they can figure out how to entangle enough qubits to be useful (this gets stupid complicated quickly). The first two are patchable; the latter is being worked on right now, with the goal of getting a good, common solution in place long before quantum computers get to the point of being a problem. That said, the line in the sand is usually what sort of computer you can afford to buy with a couple million dollars (e.g., what organized crime would have). You should always assume that governments can break any sort of crypto: they have the money and the resources to break it, or to execute the infamous knee-cap attack that works against almost every encryption scheme, because people forget that they are the weakest link in the chain.

    1. Martin Bonner says:

      The classic name for that attack is “rubber hose cryptography” … but yes.

      1. nymv says:

        A similar Russian attack discovered independently is termed thermorectal cryptanalysis.

    2. Richard says:

      Organised crime is significantly more likely to go straight to the rubber hose approach to decryption.

      1. kantos says:

        Local organized crime yes, more “International” organizations have more resources and execute more sophisticated attacks. In fact some groups are hard to separate from some Advanced Persistent Threats and may be intertwined with them.

    3. Aged .Net Guy says:

      AKA the $5 wrench attack: https://www.xkcd.com/538/

  5. pm100 says:

    People don’t get how explosive exponential growth is. Just like the king who promised 1 grain of rice on the first square of the chessboard, 2 grains on the next, then 4, 8, etc.

  6. Antonio Rodríguez says:

    You cannot argue with a stupid person. The argument will soon come down to stupidity, and there, they are going to defeat you.

  7. Kevin says:

    > “I have contacted the media about this discovery, but fortunately for you, they don’t seem to be interested, which gives you more time to address the problem.”

    I’m pleasantly surprised that the more hysterical and less careful “news” agencies were not fooled by this. Perhaps they’ve got a spam folder filled with crackpots claiming to have broken RSA?

    1. Erik F says:

      Either that or the person wrote their “proof” so opaquely that nobody in the newsroom could understand what they said (or, ironically, they sent it as a Word document, and it had a macro virus!) :-)

  8. Keith Patrick says:

    My immediate thought upon reading the first sentence was “Wait, this is the plot to ‘Sneakers’.” (I happened to watch it on Prime a couple of days ago)
    It also reminds me of a 3D algorithm I had brainstormed ages ago, before consumer-grade hardware acceleration. I would precalculate all possible triangles, on the assumption/hope that rotation frames could be derived from some kind of simple equation telling you which “cell” to hop to. I figured it would revolutionize 3D rendering if it worked, but I never even got to the algorithm, because it took some crazy amount of (90s-era) memory to store them all; I optimized it down to something like a third of that size, but it was still around 100 MB, and I was still looking at the strong possibility of major computational expense that probably wouldn’t even work.
    Those rabbit holes are so damn enticing that I can see where the finder is coming from, but you always have to listen at least a little to that voice of doubt telling you that reality is still there.

    1. Brian says:

      @Keith: Now that 100MB RAM is easily acquired, try implementing your algorithm again (either in software or in your brain). If 100MB had been feasible, would your algorithm have helped?

      1. Ben Voigt says:

        In the meantime screen resolutions have gone up, and 100 MB is no longer enough.

  9. cheong00 says:

    I wonder, do you still have to take care of security reports on cryptographic algorithms today? I thought Microsoft would assign them to some “cryptography expert” to do the analysis and report back to whoever in CS is assigned to decipher their output into human-readable language and send it back.

  10. JAS says:

    Is quantum computing MS’s attempt to Enumerate The Guids?

    1. Scarlet Manuka says:

      Just think of the monopoly power MS will be able to exploit if they have all the GUIDs!

      I’m also getting a The Nine Billion Names of God vibe from the idea.

Comments are closed.
