Sometimes you have to trust my Jedi instincts

One of the things that annoys me a little bit about the Microsoft revolution is the desire for proof before action.  For example, if I want to change a setting to (increase spam filtering | reduce false positives), I have to go back and gather historical data about whether the change will have the desired effect, what the projected impact is, what the trade-off is, and so forth.  This is quite reasonable, actually.  I think it's an important part of the test-and-roll-out process and should not be overlooked with new technologies.  Usually, it takes quite a while to gather the required information and the changes don't go through as often as I'd like, but that's the price we're willing to pay to avoid making costly mistakes.

On the other hand, when it comes to certain parts of our filtering product, I believe that I have exceptionally good instincts.  I'm like a Jedi.  I can use the Force to foresee the effects of a change I want to make.  Let me share a few examples.

In September 2004, I had only been on the job for two months.  It was during this time that the Dan Rather scandal hit, wherein 60 Minutes reported a story on President Bush's military records based on documents that later turned out to be forged.  Anyhow, there was a flurry of spam going around with the subject line "Dan Rather must be fired!"  I wrote a spam rule based on that subject line, but in the comments of the rule I predicted that it would expire (i.e., not get any more hits) within 10 days.  I created the rule on Sept 16, 2004.  The last time it was hit?  Sept 25, 2004, nine days after I created it.  Even after only processing spam for a couple of months, I had already acquired an intuitive feel for the nature of spam runs.

The second example is a flurry of pharmaspam with the subject line "The Ultimate Online Pharmaceutical."  We were seeing lots and lots of these messages and we blocked them, but I kept seeing false positive reports.  I suggested to my manager that we make the rule that blocked those messages so aggressive that we drop the message and not even deliver it to the users' spam quarantines, but he declined, saying that was too risky.  Yet a week and a half later, on a directive from his manager, he made the rule exactly that aggressive: it dropped the message and didn't deliver it to users' spam quarantines.  In other words, we eventually did what I had suggested ten days earlier.

A more recent example is the time another spam analyst tried to block an obscene word in the subject line and made the rule quite aggressive.  I suggested loosening the score because such a short word can produce unintended matches inside longer words like "peacock", "poppycock" or "Stephen Leacock."  Sure enough, a month later I found a false positive with the subject line "I need your John Hancock on this."  (I've changed what the rule matches to some fictional examples, but you get my point.)  I didn't need to think of a specific example at the time I suggested backing off on the aggressiveness; I just knew (from experience, I guess) what makes a good spam rule and what doesn't.
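To make that concrete, here is a minimal sketch of what I mean, written in Python rather than in our actual rule engine.  The blocked term is a stand-in chosen to fit the fictional examples above, and the two rule functions are my own illustration, not the real rules.  It compares an aggressive raw-substring match against a looser whole-word match.

```python
import re

# Hypothetical short term to block; chosen only to fit the fictional examples
# above (peacock, poppycock, Leacock, Hancock).  Not a real production rule.
BLOCKED_TERM = "cock"

def aggressive_rule(subject: str) -> bool:
    """Fires on any substring match, case-insensitive -- the risky version."""
    return BLOCKED_TERM in subject.lower()

def whole_word_rule(subject: str) -> bool:
    """Fires only when the term appears as a standalone word -- the looser version."""
    pattern = r"\b" + re.escape(BLOCKED_TERM) + r"\b"
    return re.search(pattern, subject, re.IGNORECASE) is not None

if __name__ == "__main__":
    innocent_subjects = [
        "A proud peacock strutted across the lawn",
        "That report is utter poppycock",
        "Stephen Leacock reading group this Thursday",
        "I need your John Hancock on this",
    ]
    for subject in innocent_subjects:
        print(f"{subject!r}: aggressive={aggressive_rule(subject)}, "
              f"whole_word={whole_word_rule(subject)}")
```

Every one of those innocent subject lines trips the aggressive substring version, and none of them trips the whole-word version.  That's the kind of false positive I could see coming without having to dig up a specific example first.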

In his blog and book on trader performance, Brett Steenbarger says that expert performers don't know how they made the transition from novice to professional.  Thus, when they offer advice, it's something not particularly helpful, like "cut your losses short and let your profits run."  That's good advice, but it doesn't help the rest of us who already know we're supposed to do that; the question is how.  Anyhow, experts are good at what they do and they have an intuitive feel for the market.  Similarly, when it comes to some (but not all) anti-spam measures, I have that same kind of intuitive feel.

Thus, when I am asked to go back and provide proof for some of the configuration changes I want to make, it can be frustrating.  I know that nobody is going to believe that I somehow know what I want to do is going to be effective, but nonetheless I need them to take my word for it.  I can't always explain how I know it, but most of the time, when it's implemented, it is effective.  Delays in the fight against spam have a direct impact on user experience, and believe you me, doing all this extra research introduces delays.  The anti-spam industry must react quickly to keep up with the spammers because the threats evolve so fast.  It helps to have somebody on your side who has a feel for the world of spam, even if it covers only a small part of what the spammers are doing.

This post is a bit more of a rant, I suppose, but I needed to get it off my chest.