Think inside the box

[Image: IMG_2201_small]

Yesterday evening I stumbled upon a curious article in an Italian online magazine and, as has become a conditioned reflex, I posted it on my Facebook wall: then I promptly forgot all about it.

This morning I stumbled upon a blog post from a good friend of mine in the identity community, who (not speaking Italian) thought that the above entry was in fact one of those very annoying fb quizzes, and reposted a screenshot snippet on his blog with a humorous comment on it. All normal, right? Those sound like the most unremarkable events ever reported. And yet, think for a moment: something broke, a system failed here. My Facebook entry was set to be visible only to my fb friends & networks, while my friend's blog is available on the public Internet. By analogy with the expression "elevation of privilege attack", I would say I was the victim of an "unintended audience enlargement" attack; but then again, I should probably phrase it differently… ;-)

Let's just say that when I use a system like Facebook I am in fact thinking inside the box: I think in terms of what Fb enforces, and confidently use the tools at my disposal with the implicit assumption that everything will comply with Fb's laws of physics. Guess what, those laws can be eluded: you think that's air you're breathing now? You may think that placing the Bishop there will close the chess match in your favor; however, your adversary may still have a move at their disposal: indeed, smashing the chessboard with a sledgehammer is one such move. Taking a screenshot (or a picture, as the photo on the left suggests) of my fb wall is the sledgehammer that shatters the thin porcelain between the original intended audience and the vast sea of casual internet surfers. Hofstadter (I never, never spell it right at the first attempt) makes a beautiful point about heterarchies in GEB, but I don't want to overstate my case.

My friend certainly didn't enlarge the audience of that piece on purpose, and I really, really don't care whether that specific entry is visible on the wild wild Internet. But it's a very good example of how *hard* it really is to manage rights and access rules on information outside a rigorously secured system. Expressing intentions is complicated: I know that Eve is working hard on the problem, and at the last IIW it was clear that the issue is of interest to many. Having an intuitive understanding of others' intent is complicated as well, as our little example demonstrates: misinterpreting can happen even to identity experts, and once we misunderstand, percolating to the next level in the heterarchy and violating it can be exceedingly easy.

Don't get me wrong: the fact that it is complicated does not exempt us system builders from trying. The fact that many users may choose not to bother to understand and take control does not exempt us either: we should work toward reasonable defaults (good luck defining those) for the ones who can't be bothered, and empower the others to take informed decisions. First law! You know Aenea's leitmotif, "choose again".

At least, IMO that holds for agile applications where the info you share is vacation pics and the latest phone you bought; for certain other applications, however, I do believe we should be ready to kick it up a notch. That means both making sure that the user appreciates the sensitivity of the information being handled (dialog: "Dude! That's your SSN we are talking about. Are you sure you want to print it on a t-shirt?") and providing adequate measures for keeping the info within the intended audience (i.e., encrypting a token containing the SSN with the key of its intended audience, and having the token declare the intended audience in its signed portion).
That does not mean that the info cannot be improperly shared (wait a minute, with U-Prove it may mean exactly that! You can't share what you don't know), but it does mean that doing so requires effort: how much effort should be a function of how much you care about keeping the info inside its proper box.
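To make that last point a bit more concrete, here is a minimal sketch (in Python, using the cryptography package) of the sign-then-encrypt idea: the intended audience is named inside the signed bytes, and the whole thing is encrypted so that only that audience can read the claim. The parties, key handling, and claim names below are hypothetical simplifications of mine; a real system would use a proper token format, but the shape is the same.

```python
# Sketch only: issuer signs a payload that names its audience, then the
# payload + signature are encrypted to the audience's public key.
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical parties: the issuer signs, the intended audience decrypts.
issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
audience_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The audience lives *inside* the signed payload: swapping it out
# invalidates the signature.
claims = {"ssn": "000-00-0000", "aud": "https://rp.example.com"}
payload = json.dumps(claims).encode()
signature = issuer_key.sign(
    payload,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Hybrid encryption: a fresh symmetric key protects payload + signature,
# and that key is wrapped with the intended audience's public key.
sym_key = Fernet.generate_key()
token_body = Fernet(sym_key).encrypt(
    json.dumps({"payload": payload.decode(),
                "signature": signature.hex()}).encode())
wrapped_key = audience_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Only the holder of audience_key can unwrap sym_key and read the claim;
# verify() raises if anyone tampered with the signed payload.
recovered_key = audience_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
envelope = json.loads(Fernet(recovered_key).decrypt(token_body))
issuer_key.public_key().verify(
    bytes.fromhex(envelope["signature"]),
    envelope["payload"].encode(),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print(json.loads(envelope["payload"])["aud"])  # the audience the issuer signed
```

The point of the exercise: even if the token leaks, a casual bystander gets only ciphertext, and the audience cannot be swapped without breaking the signature. It raises the cost of improper sharing, exactly in proportion to how much we care.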

Oh well, 30 minutes of rambling for what? In summary: we can trust each other, but that does not imply that we always understand each other's intentions. The resilience of a system, should a misunderstanding (or abuse) occur, should be proportional to the projected cost of the consequences of such a misunderstanding. The fact that in our daily experience that cost often tends to zero (as in today's example: luckily I was not saying anything exceedingly wrong…) should not lull us into the conviction that it is always zero.

Whoa, it's been a long time since I could afford the luxury of a post deserving the tag "Wild Ideas"; it was nice to unwind a bit. Never mind, I'll get back to business soon enough :-)