You are all still aware of the bug that was introduced into Debian's random key generation two years before it was discovered. The whole IT world was a victim, since not only was the code used in different systems all over the place, the weak keys it generated spread as well.
In an issue of IEEE Security & Privacy (Sep/Oct 2008) I found an article on the topic (page 70 ff.). While the article is very good, I cannot agree with one of its last parts:
"As noted, the community failed to find this vulnerability for two years for two reasons: there must have been just enough entropy to produce keys that looked random, and there clearly wasn't enough scrutiny of the critical components or key material quality."
First: No, nobody is totally secure, not 100% and absolutely not, neither Microsoft nor anybody else. OK??
What I find interesting here is that, first of all, it reads like a recipe: just bring in enough entropy and nobody will find your bug... The other point is the scrutiny. What this problem really shows is that it is simply not enough to review code (how often that actually happens is another question); it also matters who reviews it. Michael Howard had a nice metaphor: if the thousand eyeballs worked, why not ask the passengers of a plane to inspect it, and if everybody is satisfied, the plane must be safe!?!? Because one must know what to look for, right??
The people who introduced that bug were doing just the right thing to keep an analysis tool (Valgrind) from complaining about the use of uninitialised memory. The problem was that what would have been an error everywhere else was done by intent right in this part of the code. So once again, security is about people, process, and technology... if one fails, forget the other two. Is this what is meant by scrutiny?? So how did the world change after that??
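To see why the resulting keys still "looked random" while being catastrophically weak, here is a minimal sketch (not the actual OpenSSL code, just a toy model I made up for illustration): with the entropy mixing removed, the only varying input left was the process ID, which on a typical Linux system is at most 32768. Each individual output passes any eyeball test, yet the entire keyspace can be enumerated in seconds.

```python
import hashlib

def weak_keystream(pid: int, nbytes: int = 16) -> bytes:
    """Toy model of the broken generator: the process ID is the
    only entropy input, so the output is fully determined by it."""
    state = pid.to_bytes(2, "big")
    out = b""
    while len(out) < nbytes:
        # Each block looks perfectly random on its own...
        state = hashlib.sha1(state).digest()
        out += state
    return out[:nbytes]

# ...but an attacker can simply enumerate every key the generator
# can ever produce: one per possible PID (1..32768 on default Linux).
all_keys = {weak_keystream(pid) for pid in range(1, 32769)}
print(len(all_keys))
```

That is exactly why "enough entropy to produce keys that looked random" hid the bug: no statistical test on a single key reveals that the whole population fits in a lookup table.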