In doing a little audience research for my spot at MacDev 2009, I've discovered that the word "security" has a particular meaning for many developers. It seems to be synonymous with "hacker-proof", and as it could take most of my hour to set the record straight in a presentation context, here instead is my diatribe in written form. It's also in condensed form; another benefit of the blog is that I tend to want to wrap things up quickly as the hour approaches midnight.
Security has a much wider scope than keeping bad people out. A system (any system; assume I'm talking about software, but I could equally be discussing a business process or a building or something) also needs to ensure that the "good" people can use it, and it might need to respond predictably, or to demonstrate or prove that the data are unchanged aside from the known actions of the users. These are all aspects of security that don't fit the usual forbiddance definition.
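To make that last aspect a little more concrete: one common way to demonstrate that data haven't been altered behind the users' backs is a message authentication code. Here's a minimal sketch using Apple's CryptoKit framework; the framework is my choice purely for illustration, and the key handling and the record are hypothetical stand-ins, not a recommended design:

```swift
import CryptoKit
import Foundation

// Hypothetical example: 'key' stands in for a secret held by the system,
// and 'record' for some piece of data whose integrity we care about.
let key = SymmetricKey(size: .bits256)
let record = Data("ledger entry: user paid £5".utf8)

// When the record is written, compute a MAC over it...
let mac = HMAC<SHA256>.authenticationCode(for: record, using: key)

// ...and when it's read back, verify that it hasn't been changed by
// anything other than the known actions of the users.
if HMAC<SHA256>.isValidAuthenticationCode(mac, authenticating: record, using: key) {
    print("record is unchanged")
} else {
    print("record has been tampered with")
}
```

Of course, a MAC only speaks to the integrity aspect; it says nothing about whether the "good" people can actually get at the system, which is rather the point of what follows.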
You may have noticed that these aspects can come into conflict, too. Imagine that with a new version of OS X, your iMac no longer merely takes a username and password to log a user in, but instead requires that an Apple-approved security guard - who, BTW, you're paying for - verifies your identity in an hour-long process before permitting you use of the computer. In the first, "hacker-proof" sense of security, this is a better system, right? We've now set a much higher bar for the bad guys to leap before they can use the computer, so it's More Secure™. Although, actually, most users would likely find that this behaviour just gets on their wick really quickly as they discover that checking Twitter has become a slow, boring and expensive process. So in fact by over-investing in one aspect of security (the access control, also sometimes known as identification, authentication and authorisation) my solution reduces the availability of the computer, and therefore the extra security is actually counter-productive. Whether it's worse than nothing at all is debatable, but it's certainly a suboptimal solution.
And I haven't even begun to consider the extra vulnerabilities inherent in this new, ludicrous access control mechanism. It certainly looks more rigorous on the face of things, but exactly how does that guard identify the users? Can I impersonate the guard? Can I bribe her? If she's asleep, or if I attack her, can I use the system anyway? Come to that, if she's asleep, can the legitimate user gain access at all? Can I subvert the approval process at Apple to get my own agent employed as one of the guards? What looked to be a fairly simple case of a straw-man overzealous security solution actually turns out to be a nightmare of potential vulnerabilities and reduced effectiveness.
Now I've clearly shown that having a heavyweight identification and authorisation process with a manned guard post is useless overkill as far as security goes. This would seem like a convincing argument for removing the passport control booths at airports and replacing them with a simple and cheap username-and-password entry system, wouldn't it? Wouldn't it?
What I hope that short discussion shows is that there is no such thing as a "most secure" application; there are applications which are "secure enough" for the context in which they are used, and there are those which are not. But the same solution presented in different environments or for different uses will push the various trade-offs in desirable or undesirable directions, so that a system or process considered "secure" in one context could be entirely ineffective or unusable in another.