Real-life security

One of the things I frequently find myself on the lookout for is real-life examples of security breaches and trust models.  I also like to see how these things interact with technology and psychology.

Part of the problem with spam is that the protocol for email, SMTP, was essentially built on a trust model.  It was assumed that everyone with access to email would only use it for legitimate purposes.  Because this was back in the 1970s and early 1980s, it was a model that worked, for the most part.  Access to the technology that could transmit email was beyond the reach of the average user; only researchers and government employees really had access.  The trusted web worked because if someone abused the system, others could find out without a lot of difficulty.  The garden wasn’t walled, but it might as well have been, because the cost of access to the technology was prohibitive for regular ham-and-eggers (that’s you and me).
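That trust assumption is still visible in the protocol today.  As a minimal sketch (the addresses here are hypothetical), here is how easily a message can claim to be from anyone at all, using Python’s standard email library; neither the message format nor classic SMTP verifies the claimed sender:

```python
from email.message import EmailMessage

# Classic SMTP performs no authentication of the sender:
# the client simply *claims* an address, and the server trusts it.
msg = EmailMessage()
msg["From"] = "trusted.researcher@example.edu"  # hypothetical, and unverified
msg["To"] = "anyone@example.com"
msg["Subject"] = "Hello from 1982"
msg.set_content("The protocol takes my word for who I am.")

# The equivalent raw SMTP dialogue would begin roughly like:
#   MAIL FROM:<trusted.researcher@example.edu>   <- accepted without proof
#   RCPT TO:<anyone@example.com>
print(msg["From"])  # prints trusted.researcher@example.edu, unchallenged
```

In a world where only a few trusted researchers had access, this didn’t matter; once anyone could connect, it became the spammer’s opening.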

As the cost of technology fell through the 1980s, the 1990s, and into the 21st century, the security flaws in SMTP became obvious.  Almost anyone could access email and use (or should I say, abuse) it.  It wasn’t difficult to send tens of thousands of spam messages, and worse yet, because the protocol was insecure, it wasn’t that difficult to cover your tracks if you were using it for nefarious purposes.  Greed took over, and the lowered cost of technology is what enabled it.  Lowered technology costs are good, of course, but because there was so much existing infrastructure on the underlying platform, the inherent flaws were now a serious problem (i.e., spammers could send tons of spam with little recourse available to the end user).  These flaws were always there, but they weren’t exploited until the cost of technology fell below the profit potential of spamming.  Nobody really wants to take the time and energy to completely overhaul the existing email infrastructure; the cost of doing that and the service disruption, combined with new technology adoption, is simply too large – at least for now.

And that brings me to my story.  This past weekend, a friend and I were driving down the road and he wanted to stop for fresh corn.  If that sounds odd, well, I guess he likes his vegetables.  We pulled into a driveway and here’s how it works: there is a sign that says 2 cobs for $1, a basket where you put the money, and a pile of ears of corn on the table.  There is nobody manning the table.  You simply drive up, pay for your corn, and take it away.  It is based on the honor system, a model of trust.

I said to my friend, “You know, this is really insecure.  Anyone could just come up here and take whatever they wanted without paying for it.”  And yet, people didn’t.  If they did, the owners of the corn would probably stop providing it.

I couldn’t help but compare it to email.  It’s insecure because it is built on a model of trust.  The garden is not walled, but it acts as if it were: knowledge of the stand is limited to the geographic area around it, not everyone likes to eat fresh vegetables, and not everyone knows the place exists (I didn’t until yesterday).  So why does it work?

I think that the answer has to do with technology.  Computers and the internet have shrunk the world in terms of access to information and the ease of transmitting it.  Technology has made it possible to transport physical goods more efficiently, and to produce raw goods (manufacturing, growing, drilling) more efficiently.  This little corn stand, however, does not take advantage of technology in this manner.  People still have to physically drive to the place and pick their stuff up.  And they have to know about it.  Acquiring the corn is no easier today than it would have been 5, 10, or even 20 years ago.  Well, maybe the road is paved better now, but the point still remains.

And what point is that?

The point is that insecure trust models can work so long as the circle of trust is small.  If something has open access but knowledge of that access is limited… and no one really cares about gaining access… then you have de facto security.  Like the corn stand above, for now, the model of not having anyone verify that customers put in the correct amount of money works.  There are only a few customers, and people understand that if the system is abused, it will be cut off.  And it has worked for years.  Besides, you don’t have to go to this vegetable stand; you could go to the grocery store.  And for the stand’s owners, it is just corn.

If a lot of people knew about this place, the unmanned table wouldn’t work.  The circle of access would grow too large, and people would start stealing the corn.  Perhaps the paradox of technology is that the more successful you are (selling more corn), the more security becomes a concern.  Small players can probably afford to take more risks, but as you get bigger, the risk/reward ratio stops working in your favor.  Technology is a great enabler, but with more power comes more responsibility… and other headaches.
