Have you ever heard someone ask “Do we need to fuzz this?”
This question comes up quite a bit in the context of reactive security work. There are basically two traditional answers:

- Yes. When you’re attempting to find variants of something like a memory corruption bug, fuzzing is your best friend. It’s a no-brainer.
- No. Er, wait. Sure, uh, go for it... When you’re attempting to find variants of something that looks more like a design bug, fuzzing might at first seem silly, but the answer becomes less clear after thinking about it a little more. With 20/20 hindsight you can usually think up a way in which any particular bug might be caught by an automated process. Would that automated process fit a loose definition of fuzzing? Possibly.
This intellectual discussion usually doesn’t go very far, because of a general perception that fuzzing for design bugs just isn’t going to deliver the ROI that creative hacking, code analysis, and similar techniques can provide in a given period of time. But it’s very hard to say that this is true in all cases. Hence the vague answer above.
Here’s one simple scenario where a technique that could surely be considered fuzzing (and was specifically designed to identify design bugs) did yield a good result.
While testing the Internet Explorer XSS Filter prototype in 2007, SkyLined identified that classic ASP would simply drop invalidly encoded character sequences from HTTP request querystring parameters before forming the HTTP response. As a result, the filter could not properly match requests to responses, and so it could be bypassed for applications running on classic ASP.
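To make the failure mode concrete, here is a minimal sketch of the mismatch. The function names and the specific dropping behavior are illustrative assumptions, not the actual filter or ASP code: the model server silently drops invalid percent-escapes before reflecting input, while a naive filter looks for the exact request string in the response.

```python
# Hypothetical sketch: why dropping invalid byte sequences breaks
# request-to-response matching for a reflection-based XSS filter.
# (server_reflect and naive_filter_match are illustrative names,
# not real IE or ASP internals.)

HEXDIGITS = "0123456789abcdefABCDEF"

def server_reflect(param: str) -> str:
    """Model of the classic ASP behavior described above: invalidly
    encoded sequences (here, broken '%' escapes) are silently dropped
    before the value is reflected into the response."""
    out = []
    i = 0
    while i < len(param):
        if param[i] == "%":
            hex_part = param[i + 1:i + 3]
            if len(hex_part) == 2 and all(c in HEXDIGITS for c in hex_part):
                out.append(chr(int(hex_part, 16)))  # valid escape: decode
                i += 3
            else:
                i += 1 + len(hex_part)  # invalid escape: dropped entirely
        else:
            out.append(param[i])
            i += 1
    return "".join(out)

def naive_filter_match(request_param: str, response_body: str) -> bool:
    """A naive filter that only flags the exact request string."""
    return request_param in response_body

payload = "%zz<script>alert(1)</script>"  # '%zz' is an invalid escape
body = server_reflect(payload)            # server drops '%zz' silently
print(naive_filter_match(payload, body))  # False: the filter is bypassed
```

The request and response no longer contain the same byte sequence, so any filter matching on the raw request value misses the reflected script.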
The XSS Filter was adapted to account for this situation and test cases were created.
Later we developed a fuzzer capable of slightly modifying test cases before running them. As you may imagine, it’s not hard for a simple fuzzer to generate various forms of invalidly encoded character sequences. As it turned out, our fix for the encoding issue missed a corner case that our fuzzer was able to trigger. We were then able to fix the variant and add a new test case to cover any future regressions.
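The mutation step described above can be sketched in a few lines. This is an illustrative toy, not our actual fuzzer; the fragment list and function names are assumptions, chosen to show how easily a simple mutator produces invalidly encoded variants of a known test case.

```python
import random

# Hypothetical sketch of a mutation fuzzer in the spirit described
# above: take a known-good test case and emit slightly broken
# variants, here focused on malformed percent-encoding.

INVALID_ESCAPES = ["%", "%z", "%zz", "%g1", "%%41", "%4"]

def mutate(test_case: str, rng: random.Random) -> str:
    """Insert one invalid escape fragment at a random position."""
    pos = rng.randrange(len(test_case) + 1)
    frag = rng.choice(INVALID_ESCAPES)
    return test_case[:pos] + frag + test_case[pos:]

def variants(test_case: str, n: int = 10, seed: int = 0) -> list[str]:
    """Generate n seeded (reproducible) mutations of a test case."""
    rng = random.Random(seed)
    return [mutate(test_case, rng) for _ in range(n)]

for v in variants("<script>alert(1)</script>", n=5):
    print(v)
```

Each run of the filter's test suite over such variants exercises encoding corner cases a hand-written test case might miss, which is exactly how the residual bug in our fix was triggered.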
Fuzzing for design bugs is not a new idea. With regard to XSS alone, it was mentioned by Alexander Sotirov in 2008, and of course the sla.ckers are well known for putting this approach into practice. What is most interesting to me right now is the question of when and how to apply fuzzing-style techniques to design bugs in general. I don’t recall ever having seen a really good answer to this question.
So I would be interested in your thoughts on classes of design defects that are particularly amenable to some form of fuzzing, as well as classes of design defects where fuzzing is just a waste of time. (Some other questions: What actions must a DOM crawler perform to qualify as a true fuzzer, and does it even matter whether it’s called a fuzzer or not?)
Feel free to hit me up on Twitter or leave a comment on this blog.