It becomes evident, thanks to the mass centralisation of the never-ending September effect that is Stack Overflow, that despite the large number of electrons expended on documenting the retain/release/autorelease reference counting mechanism for managing memory in Cocoa, Cocoa Touch, UIKit, AppKit, Foundation, GNUstep, Cocotron and Objective-C code, very few people are reading it. My purpose in this post is not to re-state anything that has already been said; it is to aggregate the information I've found on managing memory in Cocoa, so I can quote this post in answers to questions like these.
In fact, I've already answered this question myself, in How does reference counting work? As mentioned in the FAQ, what I actually answered was the question "how do I manage object lifecycles in (Cocoa|GNUstep|Cocotron)?". It's a heavily distilled discussion, so it's definitely worth checking out the references (sorry) below.
Apple have a very good, and complete, Memory Management Programming Guide for Cocoa. They also provide a Garbage Collection Programming Guide; remember that Objective-C garbage collection is opt-in on 10.5 and above (and unavailable on iPhone OS or earlier versions of Mac OS X). GNUsteppers reading along should remember that the garbage collector available with the GNU objc runtime is entirely unlike the collector documented in Apple's guide. GNUstep documentation contains a similar guide to memory management, as well as going into more depth about memory allocation and zones. Apple will also tell you how objects in NIBs are managed.
The article which gave me my personal eureka moment was Hold Me, Use Me, Free Me by Don Yacktman. Stepwise has another article, very simple rules for memory management in Cocoa by mmalc, which is a good introduction, though with one caveat. While the table of memory management methods at the top of the article is indeed accurate, it might give you the impression that keeping track of the retain count is what you're supposed to be doing. It's not :). What you're supposed to be doing is balancing your own use of those methods for any given object, as described in rules 1 and 2 of "Retention Count rules" just below that table.
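To make those first two rules concrete, here is a minimal manual retain/release sketch (pre-garbage-collection; the Widget class and its -setName: are my own illustration, not taken from any of the articles above). The point is that every alloc, retain or copy you perform is balanced by exactly one release or autorelease, and you never inspect the retain count itself:

#import <Foundation/Foundation.h>

@interface Widget : NSObject {
    NSString *name;   // an object we own
}
- (void)setName:(NSString *)aName;
@end

@implementation Widget
- (void)setName:(NSString *)aName
{
    [aName retain];   // we want to keep aName around, so claim ownership...
    [name release];   // ...and give up ownership of the old value
    name = aName;
}
- (void)dealloc
{
    [name release];   // balance the retain taken in -setName:
    [super dealloc];
}
@end

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Widget *w = [[Widget alloc] init];                         // +alloc means we own w
    [w setName:[NSString stringWithFormat:@"gadget %d", 42]];  // autoreleased argument; the setter retains what it keeps
    [w release];                                               // balance the +alloc
    [pool drain];                                              // autoreleased objects go away here
    return 0;
}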
James Duncan Davidson's book "Learning Cocoa with Objective-C" has not been updated in donkey's years, but its section on memory management is quite good, especially the diagrams and the "rules of thumb" summary. Luckily, that section is the free sample on O'Reilly's website.
If reading the theoretical stuff is all a bit too dry, the Mac Developer Network have a rather comprehensive memory management training video which is $9.99 for non-MDN members and free for paid-up members.
Finally, Chris Hanson has written a good article on the interactions between Cocoa memory management and Objective-C exceptions; if you're using exceptions, it's a good discussion of the caveats you might meet.
Monday, December 22, 2008
Wikipedia == fail
On the same day that Fark announced the Wikipedia irony, I would like to point out a similar situation I saw just today.
This is from the Susie Dent entry discussion page:
IMDb relies on the contributions of the public, so isn't an overly reliable source.
Sunday, December 21, 2008
better security, not always more security
Today's investigations have taken me to the land of Distributed Objects, that somewhat famous implementation of the Proxy pattern used for intra-process, inter-process and inter-machine communication in Cocoa. Famous, that is, among people who measure whether it's a performance hog, rather than those who merely quote the claim; as a hint, it was indeed a significant overhead when your CPU was a 25MHz 68030 and your network link a 10BASE-2 coaxial wire. These days we can spend our way around those problems freely.
Specifically, I wondered whether I should add discussion of the authentication capabilities in PDO to the FAQ entry. Not that it's frequently asked - indeed, it's a NAQ - but because getting mentions of security into a Usenet FAQ is likely to get newbies thinking about security, which is possibly a good thing (for the world; not so much for my uniquely employable attributes). But I decided against it: though the subject is interesting, it's not because of the technicalities but because of the philosophy.
Distributed Objects works by sending NSPortMessage messages over NSConnection connections. The connections and message-passing bumph are peer-to-peer, but DO adds some client-server distinction by having servers register their vended connections with name servers and clients look up the interesting vendors in said name servers. By default, anything goes; all connections are honoured and all clients serviced. There are two security features (both implemented as delegate methods) baked into DO, though. The most interesting of the two is the authentication.
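To put a concrete shape on that registration dance, here is a minimal sketch of a server vending an object and a client looking it up; the Greeter protocol and the "GreetingService" name are purely illustrative, not anything DO mandates:

#import <Foundation/Foundation.h>

@protocol Greeter
- (NSString *)greet:(NSString *)who;
@end

@interface GreetServer : NSObject <Greeter>
@end

@implementation GreetServer
- (NSString *)greet:(NSString *)who
{
    return [NSString stringWithFormat:@"Hello, %@", who];
}
@end

// Server process: vend the object and register it with the name server.
int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    GreetServer *server = [[GreetServer alloc] init];
    NSConnection *connection = [NSConnection defaultConnection];
    [connection setRootObject:server];               // this object is now vended
    [connection registerName:@"GreetingService"];    // clients find it under this name
    [[NSRunLoop currentRunLoop] run];                // service incoming messages
    [pool drain];
    return 0;
}

// Client process: look up the vendor and talk to the proxy as if it were local.
//   id <Greeter> proxy = (id <Greeter>)[NSConnection
//       rootProxyForConnectionWithRegisteredName:@"GreetingService" host:nil];
//   NSLog(@"%@", [proxy greet:@"world"]);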
The reason that the authentication feature is interesting is that it's implemented in such a way as to make non-security-conscious developers question the security. The end sending the NSPortMessage includes some data based on the constituent parts of the message, and the end receiving the message decides whether to accept it based on knowledge of the constituents and of the data. On the face of it, this looks like shared-secret encryption, with the shared secret being the algorithm used to hash the port message. It also appears to have added no security at all, because the message is still sent in plain text. In fact, what this gives us is more subtle.
All that we know is that given the source information and the sender's authentication data, the receiver gets to decide whether to accept the sender's message. We don't necessarily know the way that the receiver gets to that decision. Perhaps it hashes the information using the same algorithm as the sender. Perhaps it always returns YES. Perhaps it always expects the authentication data to be 42. On the other hand, perhaps it knows the public key of the sender, and the authentication data is a signature derived from the content and the sender's private key. Or perhaps the "authentication data" isn't used at all, but the source material gives the server a chance to filter malicious requests.
Now all of that is very interesting. We've gone from a system which looked to be based on a shared secret, to one which appears to be based on whichever authentication approach we decide is appropriate for the task at hand. Given a presumed-safe inter-process link, we don't need to be as heavyweight about security as to require PKI; whereas if the authentication were provided by a secure tunnel such as DO-over-SSL, we'd have no choice but to accept the cost of the PKI infrastructure. Given the expectation of a safe server talking to hostile clients, the server (or, with some amount of custom codery, a DO proxy server) can even sanitise or reject malicious messages. Or it could both filter requests based on authentication and on content. The DO authentication mechanism has baked in absolutely zero policy about how authentication should proceed, by letting us answer the simple question: should this message be processed? Yes or no? Choose an approach to answering this question based not on what you currently believe could never be circumvented, but on what you currently believe is sufficient for the environment in which your DO processes will live. If a shared secret is sufficient and adds little overhead, then do that, rather than 4096-bit asymmetric encryption.
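The two halves of that question-and-answer live in a pair of NSConnection delegate methods, installed at each end with -setDelegate:. The sketch below uses a toy length-based checksum as a stand-in for whatever scheme you actually choose; the SharedSecretDelegate class and its policy are mine, and the whole point is that the policy inside each method is up to you:

#import <Foundation/Foundation.h>

@interface SharedSecretDelegate : NSObject
@end

@implementation SharedSecretDelegate

// Sending side: derive authentication data from the outgoing message's components.
- (NSData *)authenticationDataForComponents:(NSArray *)components
{
    NSUInteger checksum = 0;
    for (id component in components) {
        if ([component isKindOfClass:[NSData class]])
            checksum ^= [(NSData *)component length];   // stand-in for a real digest or signature
    }
    return [NSData dataWithBytes:&checksum length:sizeof(checksum)];
}

// Receiving side: decide whether to accept the message. Here we recompute
// the same value, but we could equally verify a signature, consult a
// whitelist based on the components, or simply return YES.
- (BOOL)authenticateComponents:(NSArray *)components withData:(NSData *)authenticationData
{
    NSData *expected = [self authenticationDataForComponents:components];
    return [expected isEqualToData:authenticationData];
}

@end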
By the way, the second security feature in DO is the ability to drop a connection when it's requested. This allows a DO server to survive a DoS, even from a concerted multitude of otherwise permissible clients.
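Assuming the hook in question is the -connection:shouldMakeNewConnection: delegate method, refusing connections might look like the sketch below; the fixed client limit (and the fact that it never decrements) is my own simplistic illustration of one possible policy, not something DO prescribes:

#import <Foundation/Foundation.h>

enum { kMaximumClients = 16 };   // illustrative limit

@interface ThrottlingDelegate : NSObject {
    NSUInteger activeClients;
}
@end

@implementation ThrottlingDelegate
- (BOOL)connection:(NSConnection *)parentConnection
        shouldMakeNewConnection:(NSConnection *)newConnection
{
    if (activeClients >= kMaximumClients)
        return NO;      // drop the connection before it gets serviced
    activeClients++;
    return YES;
}
@end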
Thursday, December 11, 2008
Whither the codesign interface?
One of the higher-signal-level Apple mailing lists with a manageable amount of traffic is apple-cdsa, the place for discussing the world's most popular Common Data Security Architecture deployment. There's currently an interesting thread about code signatures, which asks the important question: how do I make use of code signatures?
Well, actually, it's not so much about how I can use code signatures, but how the subset of Clapham Omnibus riders who own Macs (a very small subset, as the combination of overheating batteries in the old G4 PowerBooks and combustible bendy buses means they don't stay around very long) can use code signatures. Unfortunately, the answer currently seems to be "not for much", with little sign of that changing. The code signing process and capability is actually pretty damned cool, and a nice security feature which I'll be talking about at MacDev 09. It's used to good effect on the iPhone, where it and FairPlay DRM are part of that platform's locked-down execution environment.
The only problem is, there's not much a user can do with it. It's pretty hard to find out who signed a particular app; in fact, the only thing you can easily do is discover that the same entity signed two versions of the same app. And even that is by lack of interface, not by any form of dialogue or confirmation. That means that when faced with the "Foobar app has changed. Are you sure you still want to allow it to [whatever]" prompt, many users will be unaware of the implications of the question. Those who are aware, and who (sensibly) want to find out why the change has occurred, will quickly become frustrated. Therefore everyone's going to click "allow", which rather reduces the utility of the feature :-(.
Is that a problem yet? Well, I believe it is, even though there are few components yet using the code signature information in the operating system. And it's precisely that allow-happy training which I think is the issue. By the time the user interfaces and access control capabilities of the OS have developed to the point where code signing is a more useful feature (and believe me, I think it's quite a useful one right now), users will be in the habit of clicking 'allow'. You are coming to a sad realisation; allow or deny?
Monday, December 01, 2008
You keep using that word. I do not think it means what you think it means.
In doing a little audience research for my spot at MacDev 2009, I've discovered that the word "security" has a particular meaning for many developers. It seems to be synonymous with "hacker-proof", and as it could take most of my hour to set the record straight in a presentation context, here instead is my diatribe in written form. Also in condensed form; another benefit of the blog is that I tend to want to wrap things up quickly as the hour approaches midnight.
Security has a much wider scope than keeping bad people out. A system (any system, assume I'm talking software but I could equally be discussing a business process or a building or something) also needs to ensure that the "good" people can use it, and it might need to respond predictably, or to demonstrate or prove that the data are unchanged aside from the known actions of the users. These are all aspects of security that don't fit the usual forbiddance definition.
You may have noticed that these aspects can come into conflict, too. Imagine that with a new version of OS X, your iMac no longer merely takes a username and password to log a user in, but instead requires that an Apple-approved security guard - who, BTW, you're paying for - verifies your identity in an hour-long process before permitting you use of the computer. In the first, "hacker-proof" sense of security, this is a better system, right? We've now set a much higher bar for the bad guys to leap before they can use the computer, so it's More Secure™. Although, actually, it's likely that for most users this behaviour would just get on their wick really quickly as they discover that checking Twitter becomes a slow, boring and expensive process. So in fact by over-investing in one aspect of security (the access control, also sometimes known as identification and authorisation) my solution reduces the availability of the computer, and therefore the security is actually counter-productive. Whether it's worse than nothing at all is debatable, but it's certainly a suboptimal solution.
And I haven't even begun to consider the extra vulnerabilities that are inherent in this new, ludicrous access control mechanism. It certainly looks to be more rigorous on the face of things, but exactly how does that guard identify the users? Can I impersonate the guard? Can I bribe her? If she's asleep or I attack her, can I use the system anyway? Come to that, if she's asleep then can the user gain access? Can I subvert the approval process at Apple to get my own agent employed as one of the guards? What looked to be a fairly simple case of a straw-man overzealous security solution actually turns out to be a nightmare of potential vulnerabilities and reduced effectiveness.
Now I've clearly shown that having a heavyweight identification and authorisation process with a manned guard post is useless overkill as far as security goes. This would seem like a convincing argument for removing the passport control booths at airports and replacing them with a simple and cheap username-and-password entry system, wouldn't it? Wouldn't it?
What I hope that short discussion shows is that there is no such thing as a "most secure" application; there are applications which are "secure enough" for the context in which they are used, and there are those which are not. But the same solution presented in different environments or for different uses will push the various trade-offs in desirable or undesirable directions, so that a system or process which is considered "secure" in one context could be entirely ineffective or unusable in another.
Free apps with MacDev ticket
The Mac Developer Network currently have a special offer running until Christmas Eve: get a free copy of Changes and Code Collector Pro with your ticket. Both are useful apps for any developer's arsenal.