Wow – The code review entry was really popular.
I have to admit that I have never used a code review tool and they may be wonderful. I tend to plough on through the code just to be sure that I haven’t missed anything.
I don’t do that many code reviews, so it comes as something of a break from routine when I do get one. Much of my time is spent reviewing reports generated by one tool or another as part of an incident response case.
IR cases are hacking or malware – basically, a compromise of a customer. These do happen, sometimes because systems are badly patched, sometimes because social engineering gets someone to run a malicious file, or (in theory) because someone has found a new hole and exploited it. I have yet to see an attack through a previously unknown vulnerability and I hope that I never will – a wormable hole is the sort of thing that gives us nightmares. Anyway, however it happens, a customer gets malware on a client or server. What they want is for us to remove it so that they can carry on as before. They are often disappointed when we recommend rebuilding. The decision as to whether to rebuild comes down to acceptable risk.
How much risk is acceptable? In many environments, a bit of malware that just pops up the odd advert for Spong’s footcare products would be acceptable. A keylogger which records every keystroke and sends it to a black hat would not be OK in any environment.
However, how do you know exactly what a bit of malware does? It is possible to analyze malware automatically, but that is always a bit risky – all you know is what it did that one time, while it was being watched. Does it do that every time? Can it do something else? Would it behave differently if it were not being monitored? That isn’t as crazy a question as it seems – there are bits of malware that do exactly that. A good way to be sure is to double-check by getting a good reverse engineer to analyze the malware as well. This is tricky and there are not many people who can do it, especially against the packed malware that we see these days. Still, a decent engineer using the right tools can get a pretty complete map of a simple malware in a few hours; a nightmare malware could take a couple of weeks.

So it is possible… or it would be if only a handful of malwares were released each week, they were not polymorphic, and there were some mechanism to make them wait in a queue to be analyzed. In practice, there are many teams of blackhats and they release dozens of variants of each malware. No-one knows for sure how many variants are out there – certainly more than 200,000 – so let us call it a quarter of a million. If we assume that an expert can analyze and report on one sample per day, and that something like 100 analysts are doing this day in and day out and sharing results (and that would be optimistic), then clearing the backlog would take 2,500 analyst-days – around 10.5 years of working days. That assumes no new malware, of course. Not a very realistic assumption given that the rate of production is increasing. In practice, the industry is hard pressed to get more than an approximate idea of what most malware is doing.
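The back-of-the-envelope arithmetic above can be checked in a few lines. The 250,000 variants and 100 analysts are the assumptions from the paragraph, and I am counting working days rather than calendar days:

```python
# Rough estimate of the malware analysis backlog.
# Assumptions (from the text): ~250,000 variants, ~100 analysts,
# one full analysis per analyst per working day.
variants = 250_000
analysts = 100
per_analyst_per_day = 1
working_days_per_year = 240  # roughly 48 weeks of 5 days

days_needed = variants / (analysts * per_analyst_per_day)  # 2,500 working days
years_needed = days_needed / working_days_per_year         # ~10.4 years

print(f"{days_needed:.0f} working days, about {years_needed:.1f} years")
```

With slightly different assumptions about the working year you land anywhere around the ten-and-a-half-year mark – and remember, that is with production frozen.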
When considering risk, we have to look at the worst case. Unless we know otherwise, malware could drop other malware – Trojan droppers do – or turn off firewalls or disable AV solutions or act as a back door allowing someone else to administer the machine. We know of multiple malwares that do just that.
Now, detection is not all that certain. The best antivirus solutions catch somewhere in the region of 90–95% of known malware. That sounds OK, doesn’t it? But it means they miss between 5 and 10% of known malware, and pretty much anything for which they have no signature. So, if you scan your machine and the AV solution says that it is clean, it may be right. Or the AV may have been subverted in such a way that it is unable to tell. Or there may be a malware that is not in its signatures.
Realistically, there is a significant chance that a system reported clean is still compromised. Even the most careful checking of the kernel and user memory might miss something as humans are fallible. Any machine that is known to have been compromised once is necessarily less trustworthy because holes may have been added in the firewall or user rights changed or a bunch of other things. AV solutions don't generally do more than check for known malware.
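To put a rough number on that “significant chance”, here is a sketch using Bayes’ theorem. The 30% prior, the 10% miss rate and the false-positive rate are all illustrative assumptions of mine, not measured figures:

```python
# Probability a machine is still compromised despite a clean AV scan,
# via Bayes' theorem. All three input numbers are illustrative assumptions.
p_compromised = 0.30       # prior: the machine is known to have been attacked
p_clean_given_comp = 0.10  # AV misses ~10% of known malware (upper bound above)
p_clean_given_ok = 0.99    # assume a small false-positive rate

# Total probability of a "clean" verdict, then invert with Bayes' rule.
p_clean = (p_clean_given_comp * p_compromised
           + p_clean_given_ok * (1 - p_compromised))
p_comp_given_clean = p_clean_given_comp * p_compromised / p_clean

print(f"P(still compromised | clean scan) = {p_comp_given_clean:.1%}")
```

Under these made-up numbers roughly one clean-scanned machine in twenty-five is still compromised – and that calculation does not even consider unknown malware, for which the miss rate is far worse.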
If you are 99% sure that a machine is clean, that is fine for a home user. It is risky for a developer on a network. It is unacceptably dangerous in a financial, military or safety-critical scenario. Of course, most of us are not controlling nuclear reactors for fun and profit, but a lot of us bank online.
The question in the end is simple. I am at heart a developer. Let’s look at it that way.
Let CostOfProbableLoss = ValueOfAsset * Risk, where Risk is the probability of the loss occurring.
If CostOfProbableLoss > CostOfMitigation then RecommendMitigation.
It takes hours to rebuild a compromised SQL Server box. If it contains data that is worth millions… well, the risk doesn’t have to be very high for rebuilding to be the cheapest option.
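The rule above, applied to the SQL Server example. The asset value, rebuild cost and risk figures are made up purely for illustration:

```python
# Sketch of the decision rule from the text. All figures are illustrative.
def recommend_mitigation(value_of_asset: float,
                         risk: float,
                         cost_of_mitigation: float) -> bool:
    """Return True if mitigating (e.g. rebuilding) is the cheaper option."""
    cost_of_probable_loss = value_of_asset * risk  # expected loss
    return cost_of_probable_loss > cost_of_mitigation

# A compromised box holding data worth millions, versus a few hours of
# admin time: even a 1% residual risk makes rebuilding the bargain.
print(recommend_mitigation(value_of_asset=2_000_000,   # data worth millions
                           risk=0.01,                  # 1% chance of loss
                           cost_of_mitigation=1_000))  # hours of rebuild work
```

Flip the numbers around – a throwaway test VM with nothing of value on it – and the same rule happily tells you not to bother.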