More proof that crypto should be left to the experts

Apparently two years ago, someone ran a static analysis tool named "Valgrind" against the source code to OpenSSL in the Debian Linux distribution.  The Valgrind tool reported an issue with the OpenSSL package distributed by Debian, so the Debian team decided that they needed to fix this "security bug".


Unfortunately, the solution they chose to implement apparently removed all entropy from the OpenSSL random number generator.  As the OpenSSL team comments "Had Debian [contributed the patches to the package maintainers], we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was."


And it IS a terrible idea.  It means that for the past two years, all crypto done on Debian Linux distributions (and Debian derivatives like Ubuntu) has been done with a weak random number generator.  While this might seem geeky and esoteric, it's not.  It means that every cryptographic key that has been generated on a Debian or Ubuntu distribution needs to be regenerated (after you pick up the fix).  If you don't, any data that was encrypted with the weak RNG can be easily decrypted.


Bruce Schneier has long said that cryptography is too important to be left to amateurs (I'm not sure of the exact quote, so I'm using a paraphrase).  That applies to all aspects of cryptography (including random number generators) - even tiny changes to algorithms can have profound effects on the security of the algorithm.   He's right - it's just too easy to get this stuff wrong.


The good news is that there IS a fix for the problem; users of Debian or Ubuntu should read the advisory and take whatever actions are necessary to protect their data.

Comments (41)

  1. Anonymous says:

    As has already been commented several times in these discussions, the Debian maintainer did ask the package maintainers:

  2. Anonymous says:

    Nice bit of Schadenfreude there.

    "More proof that crypto should be left to the experts"

    With experts you mean Microsoft, right? Well, it happens to you guys too. I personally found a flaw in a Microsoft product which resulted in the same private key being generated every time, possibly due to faulty seeding. And then you secretly patched the flaw in a non-related Hotfix.

    Give me the openness of Debian/OpenSSL/etc. anytime.

  3. Anonymous says:

    Um, no.

    It wasn’t a security bug per se.

    There were two calls that were commented out. One call was folding uninitialized memory into random state (which wouldn’t hurt anything as it turns out) and one was being used to actually add to the random state.

    Commenting out the latter was the problem.

    Valgrind complained about the former. The former was "a rare case where you actually don’t mind uninitialized memory."

    However since both calls were very very similar, the maintainer erroneously commented both out.

    Better summary:

    So don’t just blame Debian. It’s more along the lines of "don’t do things you normally wouldn’t do without telling people why."
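    To make the mix-up concrete, here is a simplified C sketch (my own illustration, NOT the actual OpenSSL md_rand.c source) of why the two calls looked alike but played different roles: only one of them was safe to remove.

```c
#include <string.h>

/* Toy stand-ins for OpenSSL's entropy pool and MD_Update(). */
static unsigned char pool[64];
static int pool_len = 0;

static void mix_into_pool(const unsigned char *buf, int n)
{
    for (int i = 0; i < n; i++)
        pool[pool_len++ % (int)sizeof(pool)] ^= buf[i];
}

/* Mixes *caller-supplied entropy* into the pool.  Commenting out
 * the mix call here is what actually broke the PRNG. */
static void rand_add(const unsigned char *buf, int n)
{
    mix_into_pool(buf, n);
}

/* Folds the (possibly uninitialized) output buffer into the pool --
 * harmless, but this is the line valgrind complained about.
 * Removing only this call would have been fine. */
static void rand_bytes(unsigned char *out, int n)
{
    mix_into_pool(out, n);
    memset(out, 0, n);  /* placeholder for real output generation */
}
```

    Because the two mix calls are textually almost identical, it is easy to see how a patch meant to silence the warning about the second could sweep up the first as well.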

  4. Jack, did I EVER say that Microsoft was a crypto expert?  

    We do have crypto experts at Microsoft, and they review all crypto-specific work.  The SDL also has an entire section on cryptography which defines what can and cannot be used in SDL-covered products.

    But the vast majority of the developers at MSFT aren’t crypto experts, that’s why we rely on the crypto team to review our decisions that are crypto-specific.

    I don’t know which team you reported this to, but honestly I’m embarrassed they shipped a security bug fix as a hotfix; that’s simply unacceptable in my opinion (it goes to the core of "trustworthy computing").

    AC: Apparently (as Brian mentioned) they only talked about the first of the two changes with the OpenSSL team.  The change they committed went beyond what they discussed.  

    Brian: IMHO, any bug that affects the strength of a crypto cypher is a security bug.  

    And I’m not "blaming Debian".  I’d be more than happy to blame stupid teams at Microsoft who did something similar.  And I made certain to include a pointer to the Debian security advisory, which very nicely spells out the risks associated with the vuln and the mitigations.  

    Stuff happens, that’s why it’s important to have checks on developers making random modifications to code, especially on developers making modifications to code they don’t own.

  5. Anonymous says:

    "Apparently (as Brian mentioned) they only talked about the first of the two changes with the OpenSSL team.  The change they committed went beyond what they discussed."

    I thought the problem was that the first change was not a big deal, but the second one is the one that had been adding the entropy to the buffer.  The problem was they were both the same line of code, so the change to the first mistakenly was applied also to the second: the first iteration of that line read from uninitialized data, and the second iteration of the line was after the entropy code, and when the second one was commented out…oops.

  6. Anonymous says:

    Slight correction: valgrind is a *dynamic* analysis tool.

  7. Skywing says:

    Yeah, have to say that Debian, or at the very least, this package maintainer, gets a big demerit on security for this one.  It was definitely a fairly bad thing to see the ssh hostkeys for *all* of my Debian boxes showing up in their known-bad-keys database…

    Had a bit of fun today regenerating all my host keys, and changing all passwords and all other sensitive information that had been sent over SSH, ever, on my Debian boxes.  Not a cool thing to wake up to, certainly.

    Part of their security process seems to be a bit questionable here, too:

    – Debian shouldn’t be making functional changes to a package in the first place unless they’re either forking it or the upstream package is dead and the upstream provider isn’t maintaining it at all, anymore, IMO.

    – Somebody who is, as you say, clearly not a crypto expert or an expert in the code base itself seems to be making changes to security-sensitive bits like that.

    Furthermore, this also kind of reeks of "we’re getting a strange toolchain/debugger tool warning, so let’s do whatever we can to silence the warning without really understanding what’s causing it", which is practically always the *worst* possible thing to do in that scenario.

    I’m also a bit wondering how much they bother to test their security fixes, too.  Just last week, Debian put out a fix for a cacti SQL injection bug (DSA-1569-1) that will completely hose a default install of cacti.  They re-released it the next day, but that’s really… highly questionable, in my opinion; even the most basic of regression tests would have picked up on the fact that their security fix completely broke the entire program that it was "fixing".

    In their defense, however, checking for a broken RNG is probably not something that’s likely to be regression-tested, and it is certainly harder to test in a deterministic fashion than most other problems.  Nonetheless, I’ve been a bit less than highly impressed with Debian lately in that respect.

  8. Anonymous says:

    Actually, it looks like they did ask the OpenSSL folks, or am I wrong?

    Also, look how poor the compromised RNG is:

  9. Anonymous says:

    I’m also curious to hear more about this bug Jack. Since it’s fixed, can you give us more specifics on the exact scenario, affected component, what you specifically saw?

    As for the openness of Debian (and Ubuntu and the rest), it clearly did not matter at all – two years of people looking at the code and countless millions (billions?) of compromised keys. And for every 1 admin that regenerates new keys (and new certificates that cover how many SSL webpages, TLS connections, etc), 2 or 3 won’t even know to do it. The volume of this problem is mind boggling, as a simple hotfix doesn’t do *anything* to close your vulnerability – this is very different than most security vulnerabilities. It’s no different than saying you must change every single password in your organization.

    This is a good punctuation on the idiotic idea that "given enough eyeballs, all bugs are shallow".

  10. Anonymous says:

    Ha, ha. The limits of the ‘infinite monkeys’ approach of open source start to become apparent.

  11. Mark, the problem is that they asked the OpenSSL folks, but the change they actually committed went beyond the change they described to the OpenSSL folks.  If they’d submitted the patch to the OpenSSL folks, they presumably would have explained the error.

  12. Anonymous says:

    "What I currently see as best option is to actually comment out those 2 lines of code" is from the original mailing list message, so Kurt Roeckx mentions both changes. The reply from the OpenSSL dev even quotes that bit.

    Ben Laurie’s excuse that he sent it to the wrong list is laughable since his suggested list is mentioned only in a random bug report on OpenSSL’s website, and the website specifically states that openssl-dev is the correct place for such issues. It seems to have been that way since at least 2005.

    Still doesn’t excuse the fact that a proper patch should have been sent upstream following OpenSSL’s guidelines so that the nature of the change would have been explicit and not buried at the bottom of a long message. It’s also bad that the Debian-specific patch was submitted the day after the original query, which shows he didn’t even try to submit it upstream.

  13. Anonymous says:

    Pewnie już słyszeliście o problemach z PRNG w OpenSSL, które są specyficzne dla Debiana (i dystrybucji na nim opartych). Ciekawe jest jednak jak do tego doszło… More proof that crypto should be left to the experts, Vendors Are Bad For Security

  14. Anonymous says:

    Addendum to a previous comment: The OpenSSL dev’s response wasn’t made in the context of "this change will go out to many more people than one" as Kurt’s messages didn’t call out the fact that he was doing this for a package to be distributed by Debian and not just for his own personal testing. It still seems like he should have probed more instead of just saying "uninitialized data doesn’t add much entropy, you should be fine deleting those lines".

  15. Anonymous says:

    Valgrind is a /dynamic/ analysis tool. It detected that OpenSSL was using uninitialized memory as part of its entropy pool (nothing wrong with that, if it’s done carefully), and that this was tainting all data produced by OpenSSL as being "uninitialized" (valgrind’s way of warning about nondeterministic behaviour). Now, in this case, that’s exactly what was wanted, but it made it harder to debug real problems with programs which use OpenSSL.

    No-one thought this use of uninitialized memory was a security bug. But someone messed up the patch, and removed not only the use of uninitialized data, but also another line which happened to look the same, but did something fundamentally different. That’s the first mistake, which could happen to anyone.

    Then, when communicating with the openssl maintainers, the Debian folks didn’t actually provide the patch they were going to use, for critique. That’s the second mistake; the bug should have been caught when reviewed by the original developers — but there are plenty of reasons why this would slip through, expert developers or not.

    Then, millions of Debian users upgraded this security-sensitive package, picking up a patch to help a handful of people debug their own code, which had the side-effect of breaking the pRNG.  This is where the fairy-tale of open-source security software is supposed to kick in. Of the millions of people upgrading, hundreds of people are expected to review the changes and spot the mistake. But not a single person did.

    I hope the open source community learns something from this. In principle their code could get more review, and so could be more secure than corresponding closed-source code, but it requires people to actually do said review.

  16. Anonymous says:

    "As has already been commented several times in these discussions, the Debian maintainer did ask the package maintainers:"

    Don’t defend this guy, Kurt Roeckx:

    i) he touched code without analyzing the consequences (I’m an amateur programmer and I *do* analyze every piece of dependent code before removing anything)

    ii) he committed this patch without consensus from the package maintenance team (see the bug report; Kurt Roeckx unilaterally decided to commit the infamous patch)

    iii) he didn’t run any test to check that the RNG kept working

    etc, etc

    How ironic that the stupidity of one man can trash the (well-earned) reputation of a great distro like Debian.

    The remedy:

    Tighter policies, unit testing, peer review, code audits, and clear consensus before commit.

  17. Anonymous says:


    > a simple hotfix doesn’t do *anything* to close your vulnerability

    Which is why it’s more than a simple hotfix; there are also updates to openssh and other packages which, in openssh’s case, actively blacklists the affected keys. After that update is installed, the compromised keys won’t work anymore.

  18. Anonymous says:

    Ned: Yep, any data ever sent to a box whose TLS cert’s RSA private key was generated on an Etch or recent Ubuntu system must be considered compromised.

    It should be possible, however, to generate a list of all the RSA key pairs that openssl could produce on x86 and x64.  The broken RNG had a whopping 32768 (!) possible output states.

    It’d certainly be an interesting exercise to go around with such a list of known-bad 1024-bit pubkey values and see how many https sites you’ve visited in the past two years match.
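    To put the 32768 figure in perspective, here is a hedged toy model (using srand()/rand() as stand-ins for the real seeding path, and a made-up key_from_pid helper): once every other entropy source was gone, the process ID was effectively the whole seed, so every possible key can be enumerated by looping over every possible PID.

```c
#include <stdlib.h>

#define MAX_PID 32768  /* the classic Linux PID space */

/* Derive a toy "key" from a PID-only seed, mimicking the broken
 * setup.  Real keys were RSA/DSA; this is purely illustrative. */
static unsigned long key_from_pid(int pid)
{
    unsigned long k = 0;
    srand((unsigned)pid);            /* the ONLY entropy left */
    for (int i = 0; i < 4; i++)
        k = (k << 8) | ((unsigned)rand() & 0xffu);
    return k;
}

/* A blacklist builder (or attacker) just walks the whole space:
 *     for (int pid = 1; pid <= MAX_PID; pid++) key_from_pid(pid); */
```

    Since the mapping from seed to key is deterministic, 32768 seeds means at most 32768 keys per key size and architecture, which is why precomputing the entire known-bad-keys database was feasible.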

  19. Anonymous says:

    RNG – Random Number Generator Kada se otkrije propust u modulu koji generiše kripto ključeve, problem

  20. Anonymous says:

    He did include the patch in that thread, albeit not in context.  He clearly said he was going to remove both of them, and it was assented to by OpenSSL devs – Ulf Moler said "[it does not contribute much to the entropy].  If it helps in debugging, I’m in favor of removing them."

    But it very obviously wasn’t tested, as the original check-in didn’t even compile!

    This was a WTF all the way around, and clearly the "many eyes make bugs shallow" theory does not apply if those eyes are not looking at the code, or do not understand the code they’re looking at.

  21. Anonymous says:

    By the way, those advocating unit testing – how do you use unit tests to verify something is /not/ deterministic?

    Mark, read Knuth volume 2 – there are a number of tests that can measure the randomness of an RNG.  Given that this change removed all the randomness of the seed, it should have been pretty easy to tell that something had been broken.

    So yeah, a unit test could have been written to catch this.

  23. Anonymous says:

    >But the vast majority of the developers at MSFT aren’t

    >crypto experts, that’s why we rely on the crypto team to

    >review our decisions that are crypto-specific.

    I think that’s the difference between MS and Debian, this was a failure of change control and the development process, not necessarily a failure of crypto.  The problem arose because a single maintainer decided to make a change in some code that they didn’t understand and that no-one ever reviewed before it was committed.  Presumably at MS a single user wouldn’t be able to slip through a change like that without it being reviewed by a domain expert.

    (Having said that, Benny Pinkas and co showed that the original Windows CryptGenRandom implementation wasn’t so hot either :-).

  24. Anonymous says:

    >Mark, read Knuth volume 2 – there are a number of tests that

    >can measure the randomness of a RNG.  Given that this change

    >removed all the randomness of the seed, it should have been

    >pretty easy to tell that something had been broken.

    Not necessarily, since it was still being postprocessed with MD5.  I can (for example) use MD5 to hash the sequence { 0, 1, 2, 3, 4, 5, 6, … } (entirely predictable) and yet no amount of entropy testing will be able to tell that the output isn’t random.  If it could then you have a distinguisher for MD5, which means you’ve broken the hash function.  This is why you need to do your goodness-testing on the input to the hash function.  However since hash functions are often used to accumulate entropy from low-entropy sources you can’t just reject apparently low-entropy input out of hand either.  In general there’s no easy way to determine something like this, if you specifically want to check for cycles you can use the n/2n-step trick to try and locate them, but if the cycle is too large or the fault is something other than short cycles this won’t detect it.

    (It’s a lot more complex than that, I could go on about this for awhile :-).
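    The point generalizes to any good mixing function and can be demonstrated without MD5 itself.  Below is a hedged illustration using a splitmix64-style finalizer as a stand-in hash (NOT MD5, and not OpenSSL’s code): feed it the utterly predictable sequence 0, 1, 2, … and the output still sails through a naive bit-frequency check.

```c
#include <stdint.h>

/* splitmix64-style finalizer, standing in for a hash like MD5. */
static uint64_t mix(uint64_t x)
{
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

/* Percentage of one-bits over the first n mixed counter values. */
static int ones_percent(uint64_t n)
{
    long ones = 0;
    for (uint64_t i = 0; i < n; i++) {
        uint64_t v = mix(i);
        while (v) { ones += (long)(v & 1); v >>= 1; }
    }
    return (int)(ones * 100 / (long)(64 * n));
}
```

    The output looks statistically unremarkable, yet anyone who knows the input scheme can reproduce every value; this is exactly why statistical tests on whitened output prove nothing about the entropy underneath.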

  25. Anonymous says:

    It doesn’t appear that all the programs that use cryptography on Debian-based systems were affected by this issue. gnupg for example is not likely affected by this particular issue.

  27. Peter, both of your points are totally valid.  This *was* a failure of process, not crypto.  I’m also not saying that it couldn’t happen at MSFT (there but for the grace of G_d go I) – stuff happens, and it sucks bigtime when it does.  I hope that the checks and balances we have in place would have caught this but sometimes problems get missed.

    Also, I hadn’t realized they ran the output of the RNG through MD5; you’re right, that would complicate the ability to detect the amount of entropy in the RNG.  

    And Ryan, again, you’re right – it was only OpenSSL that had the issue, so only SSL and SSH keys were busted, not all crypto on the platform.

  27. Anonymous says:

    It has been interesting to read this discussion, particularly after reading more blogs in the last 2 days than can be considered within healthy limits (most of which have been startlingly ill-informed). Nice to see some informed discussion here. As one of the openssl devs who responded to the original posting from the debian package maintainer, I want to clarify what I think is an important point – because I’ve seen this "but he *did* mail the openssl list and they botched it, so it’s openssl’s fault too" stuff floating about with troubling regularity. Far be it from me to chalk this up to debian fanboyism … <ahem> … but:

    Please consult the openssl FAQ and search the page for valgrind, that may help put this in context. We get this same question *frequently* on the mail-list – some fresh intern or overeager basement-dweller shows up on the list and says "I ran valgrind on <openssl-dependent-app> and found this bug, I’m gonna save the world". More jetsam amidst the flotsam that we all have too little time to adequately respond to. My response (one of the two devs who responded) was to note that one of the quoted lines had a comment mentioning purify and "That’s your first clue, build with -DPURIFY". Another FAQ bites the dust, I think to myself. But oh no, this innocent FAQ posting was to later resurface in the most uncomfortable of fashions… Ulf’s response to the two quoted lines (essentially identical to one another, out of any kind of context, and without any real hint that a review was warranted) was tantamount to saying he’d like to see the uninitialised usage go away, presumably because he was fed up with this question coming up and perhaps based on a suspicion that something daft would, one day, happen. (FWIW, the use of uninitialised data from the caller’s buffer at best adds a little entropy that is caller-pattern-dependent, and at worst won’t do any harm.) There was no patch posted, no indication that this was anything other than a FAQ (that the poster *should* have determined for himself, before or after posting), and most importantly, no indication that this was going to be patched onto a popular distribution affecting thousands of mission-critical systems around the globe.

    It was a giant package-management cockup and I welcome Larry’s comment that patching an application’s code should ideally be left to those who understand that code best. IMHO distributions need to get a little less preoccupied with filling in some delusional (and competitive) "value-add" between their users and up-stream applications, and a little more preoccupied with managing risk and ensuring that the apps their users run match as closely as possible the source code that the rest of the world is reviewing (ie. the up-stream code). This "problem" was not a distribution- or usage-specific thing; the maintainer *thought* it was a fundamental glitch in the code and, had this been true, must have assumed it affected everyone in the same way – not just debian. So why did it stay local, unannounced, and unreviewed? Rather than blaming openssl or even FOSS more generally, one could reasonably argue that from a code-review perspective, this maintainer created a more or less *closed-source* variant of openssl. But this being an MSDN blog, I’m better off leaving that whole discussion alone … 😉
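    For reference, the guard pattern behind the "build with -DPURIFY" advice can be sketched like this (an illustration of the idea, not the verbatim OpenSSL source): the deliberately "dirty" mixing line is wrapped in a compile-time switch, so memory checkers can be pacified at build time instead of by editing security-sensitive code.

```c
#include <stddef.h>

static unsigned char pool[32];

static void mix_into_pool(const unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        pool[i % sizeof(pool)] ^= buf[i];
}

static void add_buffer(const unsigned char *buf, size_t n)
{
#ifndef PURIFY
    /* Folding the caller's (possibly uninitialized) buffer into the
     * pool is deliberate.  Building with -DPURIFY drops this line
     * and keeps valgrind/purify quiet -- no source edits required. */
    mix_into_pool(buf, n);
#endif
}
```

    Compiled normally, the mixing happens; compiled with -DPURIFY, the checker-offending line simply vanishes, which is the sanctioned way to silence the warning.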

  28. Geoff, thanks for stopping by, I appreciate the insightful comment.

    I totally agree that this was a failure of management – while I have made changes to other portions of Windows before (most recently to WMP, the shell and HID control logic), I ALWAYS get the people who actually own the code (and who will ultimately be responsible for my bugs) to carefully review my code.  

    And those developers have found bonehead bugs in my code that they would never have made, and the code review caught them (this works both ways – I’ve found some stupid bugs in changes other developers have made to code I own).

    I don’t blame FOSS for this; it has absolutely nothing to do with the FOSS vs OTS argument, IMHO.

    One advantage our model has over the FOSS model is that there is effectively only one distribution for Windows, so it’s much harder to have long-term variations between components[1].  In FOSS terms, because we’re a closed-source shop, all external patches are submitted to the maintainers of the branch.

    [1] the sustained engineering division has its own set of branches of the Windows source code that they use to create hot fixes and service packs but even those changes eventually get merged back into the main line distribution.

  29. Anonymous says:

    Larry, I totally agree. You make an interesting point, as the "FOSS advantage" of having millions of eyes is very dependent on them all looking at the code that everyone is *using*. If everyone is looking at their own forked version of the code then the value of the code-review is diminished and coordination of any issues that get found is quasi-impossible in any case. Worse still, if everyone’s looking at the same code but everybody is *using* derivations of the code that *nobody* is reviewing, the risks should be clear to even the dullest among us.

    I’m not interested in debating FOSS vs OTS here, in case that wasn’t clear. But I would comment that FOSS, in isolation, has the valuable (nay fundamental) characteristic that users can assume a significant responsibility for identifying problems directly – either preventatively (audit, study, curiosity, etc) or retrospectively (debugging, diagnosis, localised support). So, to some extent the solution-domain is pushed out to meet the problem-domain. You can test and test and test, but it’s always the noob that buys your product that’s going to tickle out those last few bugs after release, I dare say you’re familiar with this. 🙂 FOSS has this strength, but the corresponding weakness is precisely what you identify – it presumes that users do not (yield to human nature and) fork, hoard, and re-badge the up-stream code. Linux distributions straddle this strength and this weakness, and to my despair, all too frequently veer the wrong way. In particular, they raise the bar too high for users to help themselves. Users are typically running a legacy version, patched and built via a pathological packaging system that they are usually shielded from, in which even just identifying the source code that is compiled into their system is *hard*. As you know, if you’re going to give users an essentially closed-source system (in terms of the ease with which users can engage problems or look for them before they even occur), you better have a serious budget for identifying such problems yourself. This debian issue has been an excellent example of this – the widely-reviewed code that users figured they were relying on did not contain the bug that was present on their system, and yet the distributed package had minimal internal review because like most FOSS, it relies on users and peers (and up-stream quality) to meet the appropriate levels of Q/A.

  30. Anonymous says:

    What if your whole business was about trust. More specifically, what if your entire business was about providing a long number (very long) to companies so nobody knows that number except you and the company. And then, suddenly you find out that the number


  32. Anonymous says:

    Where was your security when MSBlast infected millions of Windows copies around the world?

  33. Tom, Blaster (and Sasser) were the tipping points that caused Microsoft to rethink the way we shipped software.

    You’ll note that there hasn’t been a significant internet worm since Blaster – that’s because of the work that MSFT did.

    Yes, we came to the game late, but we get it.  And our current track record by any reasonable measurement is dramatically better than any of the other platforms out there (either OS or application).

  34. Anonymous says:

    IE7 has more unpatched security vulnerabilities than any other web browser (with the exception of IE6, which has 117 unpatched vulnerabilities). The windows server OS has more unpatched vulnerabilities than any OS with the exception of Linux and Solaris.

    I believe that Microsoft has come around in the area of security, and I’ve run many different Windows-based servers without incident since the Code Red worm, including NT 4, 2000, and 2003 systems. However, the fact that you chose to knock Linux to prove your point about the delicateness of crypto code (and not-so-subtly bashing OSS in the process) gives me even more reason to believe that Microsoft is a cult. I’ve worked with coders who were trained in Redmond, and it’s amazing how closed-minded they were.

    If you want to talk about leaving crypto to the experts, why not start with what’s closest to you: (I especially like this one.. error released in a patch that had to be re-patched. Sound familiar?)

  35. IMHO I wasn’t "picking on linux".  I was picking on a team that mucked around with code where they shouldn’t have and introduced a potentially catastrophic security hole because of it.  It happens that it was a Linux distribution, but that’s irrelevant, I would have picked on MSFT if we’d made such a boneheaded mistake in recent history.

    About those "crypto" vulns you pointed to.  The first link wasn’t a crypto vuln, but instead was the fact that cryptoapi’s certificate validation logic could be tricked to attempt to validate certificates against arbitrary hosts.  No question it’s a bug but it had nothing to do with crypto.

    The other two you picked were fixed 6 and 8 years ago (long before the trustworthy computing initiative).  Ancient history.

  36. Igor Levicki says:

    The main issue here is that the OpenSSL team didn’t document the use of uninitialized memory in their code.

    Code comments are abused nowadays to state stupid and obvious things like:

    a *= 2; // we multiply a by 2

    But it is obviously too much to ask for:

    // NOTE: We are using this memory without initialization on purpose

    If the OpenSSL code had been properly documented, the Debian maintainers wouldn’t have made such a mistake.

    You are barking up the wrong tree Larry.

  37. Igor Levicki says:

    >You’ll note that there hasn’t been a significant internet worm since Blaster – that’s because of the work that MSFT did.

    I wouldn’t give you all the credit for that — most likely it was because people started using third-party stateful-inspection-capable firewalls.

  38. Igor, you’re wrong.  The reason we’ve not had a significant worm since Blaster is because SP2 enabled the built-in firewall on hundreds of millions of machines.  

    There isn’t enough market share for 3rd party firewalls to even come close to matching the installed base of XP SP2 machines.

  39. Btw, you’re wrong.  The problem wasn’t the comments.  The OpenSSL team has said that removing the use of uninitialized memory wouldn’t actually make a difference.  The problem was that the Debian developer went and removed a 2nd line of code that DID have cryptographic significance.

  40. Igor Levicki says:

    >There isn’t enough market share for 3rd party firewalls to even come close to matching the installed base of XP SP2 machines.

    Really? What about all those SP2 machines which had that same firewall disabled or even cleanly uninstalled by viruses and malware?

    Let me tell you about Blaster days — I had XP with firewall enabled and I was online via dial-up when Blaster hit me. Machine rebooted and I removed it manually from DOS before booting into Windows again. Then I reconnected and I somehow managed to stay online long enough to find a fix. Windows Firewall didn’t help a single bit.

    Since then I used 3rd party stateful inspection firewalls until I got broadband. Then I completely disabled and uninstalled all firewalls and delegated the protection to the NAT on my router.

    The truth is that many broadband Windows XP users are safe because of NAT and not because of the Windows built-in firewall, because that firewall is a joke.

    >Btw, you’re wrong.  The problem wasn’t the comments.

    I never said that the problem was the comments but the lack of them.

    My point was that non-obvious things in source code (a.k.a. nasty tricks or "clever" hacks) should be properly documented especially if the code is open-sourced.

    I sometimes can’t remember why I did something the way I did in my own code, so I don’t expect others to understand. That is what comments are for.

    Here is an example (error checking removed):

    // Get supported OpenGL extensions from a display driver

    const GLubyte *ext = glGetString(GL_EXTENSIONS);

    int len = strlen((char*)ext);

    GLubyte *buf = (GLubyte *)malloc(len);

    RtlCopyMemory(buf, ext, len);

    // WARNING! extension string is returned by driver

    //          and it may reside in read-only memory

    //          that is why we need to copy the string

    //          before tokenizing it

    // WARNING: strstr() would not work because some

    //          extension names are subset of another

    //          longer extension name

    char *p = strtok((char*)buf, " ");

    while (p != NULL)


    // …


    This is an example of what you should do when you expect that someone could play smart and try to "improve" your code.

  41. Anonymous says:

    Everyone has to do their best work, and learn new technology if they want to. Developers can write cryptography code to train themselves, but they should not publish that code.
