Tuesday, December 29, 2009

Nearly the end-of-year review

My first post (Farkers, feel free to replace that with "boobies") of the year 2009 was a review of 2008's blog and a look forward to 2009. It's time to do the same for the 2009/2010 blogyear bifecta.

Let's start with the recap.



2009 was a comparatively quiet year for iamleeg, with a total of 45 posts (including this one). Although I gave up on LiveJournal, leading to an amount of "mission creep" in the content of this blog, I think it was the vast increase in my use of Twitter that led to the decline in post frequency here. I've come to use Twitter as a replacement for Usenet: it's much easier to share opinions and discuss things on Twitter, where there's more of a balanced conversation and less of iamleeg telling the rest of the world how things should work. The other main contributory factor is that I currently spend my days writing for a living, split between authorship, consultation and the MDN security column. I'm often all written out by the end of the day.

So, the mission creep. 2009 saw this blog become more of a home for ideas long enough to warrant a whole page on the internet, losing its tech focus—directly as a result of dropping LJ, which is where non-tech ideas used to end up. However, statistics show that the tech theme is still prevalent, with only four of the posts being about music or dancing. Security has become both the major topic as well as the popular choice; the most-read article was Beer Improves Perception of Security.

During August and September the focus started to shift towards independent business and contract work, as indeed I made that shift. Self-employment is working well for me; the ability to choose where I focus my effort has let me get a number of things done while still retaining a sense of sanity and a balance with my social life.

So what about next year?



Well, the fact that I have a number of different things to focus on leads to an important choice: I need to either regroup around some specific area or choose to remain a polymath, but either way I need to be more rigorous about defining the boundaries for different tasks. My major project comes to an end early in 2010, and after that it's time to calm down and take a deep look at what happens next. I have a couple of interesting potential clients lined up, and have put my own application, which will definitely see more work, onto the back burner. I also have some ideas for personal development which I need to prioritise and get cracking on. The only thing preventing me from moving on a number of different projects is convincing myself I have time for them.

So the blog will fit in with that time-management strategy; I won't necessarily decide that 9:00-10:14 on a Monday is always blogging time, but I will resolve to put aside some time for writing interesting things. One thing I have found is that working on a single task for a whole day means I don't get much of it done, so factoring some variety into my plans should work in my favour. Half an hour working on a new article at lunchtime could be the stimulus required to get more out of the afternoon. My weapon of choice for organising my work has always been OmniFocus; it's time to be more rigorous about using it. It doesn't actually work well for time allocation, but it does let me see what needs to be done next on the various things I have outstanding.

Obviously what becomes the content of this blog depends on what happens after I've shaken down all of those considerations and sorted out what it means to be leeg. Happy new year, and stay tuned to find out what happens.

Thursday, December 17, 2009

On Operation Chokehold

So Fake Steve Jobs has announced Operation Chokehold, a wireless flashmob in which disgruntled AT&T customers are to use data-intensive apps for an hour in protest at the poor service and reduced investment AT&T provide on their network. At time of writing, Operation Chokehold has 3,000 fans on Facebook - a small fraction of the ∼82M AT&T Mobility subscribers in the U.S. Fake Steve has latterly wondered whether it is illegal (using the "it's now out of my hands" defence, popular with people who don't understand what incitement means), and seemingly back-pedalled, considering aloud whether people might try a shorter duration or physical flashmob of AT&T stores instead. It would appear that the FCC (the U.S. agency responsible for regulating national and international communications) has weighed in, declaring a wireless flashmob to be irresponsible and "a significant public safety concern".

How is it a concern? Due to the way the phones work, you don't need the capacity to support all of the users, all of the time, in order to provide a reasonable service. Think of running a file-sharing service like DropBox or Humyo. If you offered up to 10GB storage per customer and have 10,000 customers, then you need 100TB of storage, right? Wrong. That's the maximum that could be used, but let's say in practice you find average use to be 100MB/customer. It turns out that 1TB of storage would be the minimum you'd need to satisfy current demand; if you had even 1.5TB then you'd comfortably support the current customer base while allowing for some future use spikes or growth. The question most businesses ask, then, is not how risky it is to drop below 100% capacity, but how much risk they can accept in their buffer over average capacity. The mobile phone network operates in the same way. To avoid dropped calls you don't need the bandwidth to support 100% of the phones operating 100% of the time; you need to support the average number of phones for the average amount of time, plus a little extra for (hopefully foreseen) additional demand.
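To make the arithmetic concrete, here's a tiny sketch of that provisioning sum. The figures are the ones from my hypothetical storage service above; the buffer factor is an assumption for illustration, not anything a real provider publishes:

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    double customers = 10000;
    double quotaPerCustomerGB = 10.0;      // what each customer *could* use
    double averageUsePerCustomerGB = 0.1;  // what each customer *does* use, on average
    double bufferFactor = 1.5;             // accepted risk margin over the average

    // Worst case: everyone uses their full quota at once.
    double worstCaseTB = customers * quotaPerCustomerGB / 1000.0;
    // What you actually provision: average demand plus a buffer.
    double provisionedTB = customers * averageUsePerCustomerGB * bufferFactor / 1000.0;

    NSLog(@"Worst case: %.0fTB; actually provisioned: %.1fTB",
          worstCaseTB, provisionedTB);

    [pool drain];
    return 0;
}
```

The business decision is entirely in that buffer factor: a concerted flashmob is an attempt to push demand past the buffer, into the gap between average and worst case.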



The argument by AT&T and the FCC against the wireless flashmob then is that because the network is oversubscribed as an accepted business risk, it would actually be possible for the concerted operation of a large number of users to cause disruption to the network. This eventuality is evinced every year in the early morning of January 1st, as people phone or SMS each other with New Year greetings. People making legitimate calls during this time could be disconnected or unable to place a call at all—while that would undoubtedly make the protest noticed by AT&T it's that aspect of it which makes it a potential public safety concern. Personally, I find it hard to believe that the network doesn't have either dedicated capacity or priority quality of service (QoS) treatment for 9-1-1 calls, but it's possible still that some 9-1-1 calls might not get placed correctly. That's especially true if the caller's handset can't even connect to a tower, which could happen if nearby towers were all saturated with phones making data connections. While it's possible to mitigate that risk (dedicated cell towers for 9-1-1 service, which emergency calls are handed over to) it would be very expensive to implement. There's no business need for AT&T to specially support emergency calls, as they don't make any money from them, so they'd only do that if the FCC mandated it.

But then there are all the non-9-1-1 emergency calls—people phoning their local doctor or hospital, and "business critical" calls made by people who somehow think that their business is critical. Even the day-to-day running of government is at least partially conducted over the regular phone networks, as was seen when the pager traffic from New York on September 11th 2001 got posted to WikiLeaks. These calls are all lumped in with the regular calls, because they are regular calls. The only way to mitigate the risk of dropping these is to increase the capacity of the network, which is exactly the thing that people are saying AT&T don't do to a satisfactory level. If the contracts on AT&T Mobility are anything like the contracts on UK phone networks, then subscribers don't have a service level agreement (SLA) with the provider, so there's no guarantee of provision. The sticking point is that the level of provision subscribers expect doesn't match what they're guaranteed. If the providers operated multi-tier subscription services like the broadband providers do in the UK, then they probably would do QoS management so that preferential customers get better call service—again, assuming the customers can connect to the cell tower in the first place. But again, that's a business issue, and if the guy participating in Chokehold has a more expensive plan than the girl trying to phone the hospital, his connection will win.

Will Chokehold fulfil its goal of making AT&T invest more in its infrastructure? I don't know. Will it actually disrupt public safety services such as 9-1-1? I doubt it. Is it a scale model for a terrorist attack on the communications infrastructure of the US? Certainly not. Much easier to jump down a manhole and snip the cables to the data centres.

Tuesday, December 15, 2009

Consulting versus micro-ISV development

Reflexions on the software business really is an interesting read. Let me borrow Adrian's summary of his own post:

Now, here’s an insider tip: if your objective is living a nightmare, tearing yourself apart and swear never touching a keyboard again, choose [consulting]. If your objective is enjoying a healthy life, making money and living long and prosper, choose [your own products].


As the author himself allows, the arguments presented either way are grossly oversimplified. In fact I think there is a very simple axiom underlying what he says, which if untrue moves the balance away from writing your own products and into consulting, contracting or even salaried work. Let me start by introducing some features missed out of the original article. They may, depending on your point of view, be pros or cons. They may also apply to more than one of the roles.

A consultant:


  • builds up relationships with many people and organisations

  • is constantly learning

  • works on numerous different products

  • is often the saviour of projects and businesses

  • gets to choose what the next project is

  • has had the risks identified and managed by his client

  • can focus on two things: writing software, and convincing people to pay him to write software

  • renegotiates when the client's requirements change


A μISV developer:


  • is in sales, marketing, support, product management, engineering, testing, graphics, legal, finance, IT and HR until she can afford to outsource or employ

  • has no income until version 1.0 is out

  • cannot choose when to put down the next version to work on the next product

  • can work on nothing else

  • works largely alone

  • must constantly find new ways to sell the same few products

  • must pay for her own training and development


A salaried developer:


  • may only work on what the managers want

  • has a legal minimum level of security

  • can rely on a number of other people to help out

  • can look to other staff to do tasks unrelated to his mission

  • gets paid holiday, sick and parental leave

  • can agree a personal development plan with the highers-up

  • owns none of the work he creates


I think the axiom underpinning Adrian Kosmaczewski's article is: happiness ∝ creative freedom. Does that apply to you? Take the list of things I've defined above, and the list of things in the original article, and put them not into "μISV vs. consultant" but "excited vs. anxious vs. apathetic". Now, this is more likely to say something about your personality than about whether one job is better than another. Do you enjoy risks? Would you accept a bigger risk in order to get more freedom? More money? Would you trade the other way? Do you see each non-software-developing activity as necessary, fun, an imposition, or something else?

So thank you, Adrian, for making me think, and for setting out some of the stalls of two potential careers in software. Unfortunately I don't think your conclusion is as true as you do.

Sunday, November 01, 2009

Poke the other one, it's got bells on

Originally the title for this post was to be "Why a Morris organisation should adopt social media (and why they probably won't)", with what is now the title being reduced to the rank of a subtitle. Then I remembered that I am leeg, and as such humour is always better than content (certainly a lot easier). So we have the title you see before you.

Please bear in mind when reading the epistle located below that I'm fairly new to the whole world of Morris, and especially new to its political fiddle-faddle. If anything here seems to stereotype people or their motives, it's based on the experience of someone who can at best be considered an informed outsider. [And for those who are even more of an outsider than I am, Morris dancing is the name for a roughly-related collection of traditional English dancing styles, performed by groups called sides or teams. It's both good fun and a decent workout, give it a go.]

It seems to have been a problem for at least the length of my lifetime that Morris lacks any relevance to what, for sake of pomposity, I shall refer to as the man on the Clapham omnibus — taking the irony fully on board. That your average person sees no reason to engage with or appreciate the Morris. Why? Has the Morris really made much of an attempt to engage with or appreciate the life of the average person? Not, I would argue, at any concerted or large-scale level, leaving the final impression most people have of Morris dancers the same as their first impression: a bunch of old men in silly clothes hitting each other with sticks.

So, shouldn't I have mentioned the social media by now? Yes, I'm just coming to that, it's only a couple of paragraphs away now. First some context. There are three umbrella organisations for the Morris in the UK: the Morris Ring is the oldest, with its infamous men-only rules; the Morris Federation (whose website was broken at time of writing) started life as the less-infamously women-only Women's Morris Federation; then there's Open Morris, which to my knowledge has never had any infamous membership rules and is the youngest of the three organisations. None of these organisations takes on a promotional or advocacy rôle — or at least, if they do, they've done a bad job of it. They represent more of an internal support network, offering guidance, information, training and the like to the sides. If you don't believe that they aren't promotional bodies, take a look at their websites.

As a digression I will use this paragraph to mention the English Folk Dance and Song Society, which does indeed have advocacy, promotion and outreach as its goals. However, I'm not aware what the relationship between the EFDSS and any of the sides or morris umbrella groups is. That's definitely lack of knowledge on my part, rather than indicative of a lack of interaction. Certainly the three groups named above have all contributed to the EFDSS's magazine, ED&S, recently, although I haven't read their contributions. There's a lot of morris-related material in the EFDSS archives, too. By the way, I'm reliably informed that the society's name is pronounced "EFdəs".

There are, then, four groups identified who either do, or could take on if they so chose, a rôle in promoting the morris to the general public. Given the continued and increasing popularity that web-based media, particularly social media, has in the world, let's examine the part each of these groups plays.

  • Twitter Presences: 0.

  • Official Facebook Presences: 2. The EFDSS has a fan page. There's a group for Open Morris too, which looks like it could be run by the real organisation; I think that counts, as user-generated content is part of the world of social media.

  • Official YouTube Channels: 0. Or at least none that I could find.

  • Myspace Presences: 1. EFDSS again.

  • Podcasts: 0.

  • RSS feeds (that scraping noise you hear, it's the bottom of the barrel): 1. It's the Morris Ring's news feed. I nearly fell out of my chair.

  • For completeness, I'll also mention that there's an unofficial mailing list called MDDL (Morris Dancing Discussion List) which is very active with a strong signal-to-noise ratio.



Now, why should any of this be important? Well, as you're reading this, you're consuming a blog. You either thought "I would like to know what leeg is up to, I'll read his blog", "I heard that some guy on the interwebs really lays into morris dancing organisations, I'll check it out", or something else which made you decide to spend time engaging with social media and user-generated content on the web. That's time which the various morris groups could have spent injecting your brain with Morris Dancing. Are they doing that? No. They're writing internal newsletters bemoaning the lack of traditional dancing in the national curriculum, and press releases for the Torygraph to pick up, claiming that morris dancing is dying out because all of the practitioners are getting too old. In fact, it's been nearly a year since that last one was done.

The problem is that you and I and everybody else don't give a shit about the Morris Ring Circular or their press releases, in the same way we give a shit about, say, procrastinating on YouTube, following interesting people on Twitter, catching up with friends on Facebook and so on. The people who complain about morris dancing fading into irrelevance are even managing to complain about it in an irrelevant fashion. Irony. They're doing it right.

And the most galling thing is that the social media and traditional dancing could go together so well. Take a photo like this, which I took at the Morris 18-30 in Wakefield:



You may wonder why everyone's wearing different costumes; well the answer is that they're from different sides. A mash-up could tag the dancers with their side's name, and a link to their website. Your next question would, of course, be the same as mine: "but where the hell is Packington, anyway?" A map showing the location of each side could be available, perhaps even showing proximity of their practice venue to your current location and when they'll next be practising (please, do not get me started on the existing SideFinder pages. You would not like what I would become). Had I remembered what dance was being danced (I think it's Skirmishes, though I'm sure someone would be able to spot it from the photo), then information about that dance and videos of sides performing it could be available.

[Another aside: the relative obscurity of some teams' locations has consequences for those teams' performances. As an example, the fool of Adderbury Morris always wears a hat made of fox skin during the dances. This is because when the first team was formed, the fool went home to tell his wife about his new position. Her response is recorded as being "Adderbury? Wear the fox hat".]

"Aha," you hypothetically cry. "But I would not have seen the photo in the first place, had you not talked about it on your blog." Indeed no, but the point of blogs, Twitter, Tumblr and the like is that you get personal recommendations from people you either trust or consider expert in their field. So I think it's fair to have introduced the photo in that way; and that there's a space online for personal recommendation of trying out morris dancing. And that there ought to be some organisation helping people to do that promotion.

Incidentally, the Morris 18-30 is actually a good example of a group (or phenomenon, I suppose) building a decent website with good amounts of information, and providing an easy way for people to get their photos online. There are many sides which have done the same; Westminster Morris Men's YouTube channel has a decent set of videos including some from the archives. My point is not that this information is not being made available, nor that an effort is not being made — I chose the 18-30 photo for my example because I took the picture and have the right to re-use it in this blog.

I don't think that the fact some sides do well at promoting themselves affects my argument; those who don't have the skills or resources to do this work themselves are not being helped by the national/international bodies. People who might be interested in traditional dancing aren't finding out about it, because the nearest side who have any skill at marketing are in the next county. In a world where information can travel the world in a fraction of a second, that's ludicrous.

So the fact that 18-30 has a good website where people can share information easily is a good thing. The various mash-up suggestions I made would all be good improvements, and were they implemented by an umbrella group then everyone could take advantage of them. My point is that the umbrella organisations should be taking and collating the vast amount of morris-related information out there, and making it easy for people who are not (yet) in the morris to find. They should be using it to promote the dance form and the social activities that surround it, but they aren't. They should be providing a central service to make it easier for sides to share their own material, but they aren't. They should be taking their existing archives and making them available online, but they aren't. They should be looking at the innovations made by some of their members and applying them at the international level, but they aren't.

In addition to existing content, there is plenty of scope for promotional material based on novel content distributed over the internet. The dances, music and even pub sessions could make great segments for a vlog or video podcast. Even mini-instructionals could be presented as video podcasts or on a YouTube channel, so that people can try things out without having to join a side first. "What's the point of dancing a team dance on your own?" — leaving jigs aside as PhD-level morris, some people may just want to find out whether they can do some of the basic stuff on their own before turning up to a practice session. If you're not sure whether you want to take part in an activity, would you have your first try in front of twenty people who've been doing it for years, and are each armed with a big stick? Maybe not.

So there's plenty of space for morris information to be distributed digitally. But, really, who gives a toss? That's where the promotional aspect comes in. I've already mentioned the personal level of promotion through Twitter and the like. There are obvious places where morris could be promoted; what's on guides, tourism sites, tradition-reporting sites and the like. But how about novel audiences? Dancers, like many British people, enjoy a Beer in the Evening. Given decent information feeds like the things I described a few paragraphs ago, morris data could be highly Mashable, featuring in those little Facebook games. If people like what they see, they will Digg it. Were one of the umbrella orgs to hire a dashing, intelligent, handsome developer-dancer they could even promote through the iPhone app store (though where would they find such a person?).

And what Americans like to call "the kicker" is this: real people drink beer, use websites and download apps. Most of the dancers I've met are either from families of dancers or already had interest in folk music; in that sense the person on the street is an "unexploited vertical" for the marketers of morris, and probably has been since the end of the first world war.

OK, so that's what could be done, who should be doing it, and why. Is it fair for me to put words in the mouths of the umbrella orgs in suggesting why they aren't currently doing it? No, but I will anyway. If you think this blog is fair, then I've got a slightly-used iBook I'll sell you for a great price.

I think that for a large part, the people in charge probably just don't use and therefore don't understand the potential of social media. But that doesn't explain why the morris umbrella organisations don't do any promotion whatsoever. At least one of the organisations may be wary of getting too much publicity for themselves; I'm not a lawyer of course, but the equality bill currently awaiting its 3rd reading in the House of Commons could require the Morris Ring to change its membership rules, as it "Extends discrimination protection in the terms of membership and benefits for private clubs and associations". I expect there isn't much in the way of training available to the organisations in the general field of marketing. I'm not sure what kind of budgets these groups run on, but maybe the three of them together could afford a part-time marketer.

Of course, there's some appeal to the idea that you're in a secret society, isn't there? It's quite exciting to think that you do something enjoyed by few others, and it's easier to become important in smaller social groups. Not that I'm suggesting that's a related point, oh no.

I apologise for writing such a long post. I didn't have the time to write a shorter one.

Friday, September 25, 2009

Unfamiliar territory

People who've seen me play music more than once will probably have noticed more than a slight sense of a pattern - give me a fiddle (or if you're really unlucky then I'll bring my own), and I play English folk music. Give me a guitar and I'll play rock and roll. That's just what my fingers are used to - to me the guitar has six strings of rockabilly and the fiddle has four strings of constant billy. No, I didn't write this piece just to get that pun in, though I am glad that I did. I can now go to my grave a happy man, safe in the knowledge that I have compared fishes flying over mountains with blues-derived country music. Classy.

Well, I think it's time to change things around a bit. I want to leave my violin all shook up, and my guitar all in a garden green. It probably sounds quite easy, given that I already know all the tunes, and just want to play them on different instruments, but really it isn't. There are two important issues which make it hard for me to just swap over.

The first is that in many cases I don't actually know the tune at all, I just use muscle memory to get my digits moving in the right places for music to occur. That's particularly true in the case of rock n roll music, where there really aren't tunes at all. There are just 'licks', basic one or two bar figures which are strung together into a twelve bar part. But it's also true in folk music which has a similarly hypnotic repetition but with longer figures. So taking a tune to a new instrument means discovering what it is I'm actually playing on the first instrument, then trying to reproduce that on the second.

The second issue is that just playing the notes from one instrument on another isn't necessarily the correct thing to do, nor even particularly easy. For a start, guitar strings are (mainly) tuned in fourths while violin strings are tuned in fifths, and as the bridges are different shapes the instruments invite playing a different number of strings simultaneously. So what I need to do is not even to work out how to play the same tune on the other instrument, but to work out what that instrument's version of the tune should be and how to play that.

I expect that the outcome of this little experiment will be mainly a cacophony, but with some increased understanding of what the instruments can do and how to play them. If I focus on cacophony then I'll probably get quick results, though.

Friday, September 18, 2009

Next Swindon CocoaHeads meeting

At one time a quiet market town with no greater claim than to break up the journey between Oxford and Bristol, Swindon is now a bustling hub of Mac and iPhone development activity. The coming meeting of CocoaHeads, at the Glue Pot pub near the train station on Monday October 5th, is a focus of the thriving industry.

I really believe that this coming meeting will be a great one for those of you who've never been to a CocoaHeads meeting before. We will be having a roundtable discussion on indie software development and running your own micro-ISV. Whether you are a seasoned indie or just contemplating making the jump and want to find out what's what, come along to the meeting. Share your anecdotes or questions with a group of like-minded developers and discover how one person can design, develop and market their applications.

You don't need to register beforehand and there's no door charge, just turn up and talk Cocoa. If you do want to discuss anything with other Swindon CocoaHeads, please subscribe to the mailing list.

Sunday, September 06, 2009

Unit testing Core Data-driven apps

Needless to say, I'm standing on the shoulders of giants here. Chris Hanson has written a great post on setting up the Core Data "stack" inside unit tests, Bill Bumgarner has written about their experiences unit-testing Core Data itself and PlayTank have an article about introspecting the object tree in a managed object model. I'm not going to rehash any of that, though I will touch on bits and pieces.

In this post, I'm going to look at one of the patterns I've employed to create testable code in a Core Data application. I'm pretty sure that none of the patterns I'll be discussing is novel; however, this series has the usual dual-purpose intention of maybe helping out other developers hoping to improve the coverage of the unit tests in their Core Data apps, and certainly helping me out later when I've forgotten what I did and why ;-).

Pattern 1: remove the Core Data dependence. Taking the usual example of a Human Resources application, the code which determines the highest salary in any department cares about employees and their salaries. It does not care about NSManagedObject instances and their values for keys. So stop referring to them! Assuming the following initial, hypothetical code:

- (NSInteger)highestSalaryOfEmployees: (NSSet *)employees {
    NSInteger highestSalary = -1;
    for (NSManagedObject *employee in employees) {
        NSInteger thisSalary = [[employee valueForKey: @"salary"] integerValue];
        if (thisSalary > highestSalary) highestSalary = thisSalary;
    }
    // note that if the set's empty, I'll return -1
    return highestSalary;
}


This is how this pattern works:

  1. Create NSManagedObject subclasses for the entities.
    @interface GLEmployee : NSManagedObject
    {}
    @property (nonatomic, retain) NSString *name;
    @property (nonatomic, retain) NSNumber *salary;
    @property (nonatomic, retain) GLDepartment *department;
    @end

    This step allows us to see that employees are objects (well, they are in many companies anyway) with a set of attributes. Additionally it allows us to use compile-time checking of properties with the dot syntax, which isn't available in KVC, where we can use any old nonsense as the key name. So go ahead and do that!
    - (NSInteger)highestSalaryOfEmployees: (NSSet *)employees {
        NSInteger highestSalary = -1;
        for (GLEmployee *employee in employees) {
            NSInteger thisSalary = [employee.salary integerValue];
            if (thisSalary > highestSalary) highestSalary = thisSalary;
        }
        // note that if the set's empty, I'll return -1
        return highestSalary;
    }


  2. Abstract out the interface to a protocol.
    @protocol GLEmployeeInterface <NSObject>
    @property (nonatomic, retain) NSNumber *salary;
    @end

    Note that I've only added the salary to the protocol definition, as that's the only property used by the code under test and the principle of YAGNI tells us not to add the other properties (yet). The protocol extends the NSObject protocol as a safety measure; lots of code expects objects which are subclasses of NSObject or adopt the protocol. And the corresponding change to the class definition:
    @interface GLEmployee : NSManagedObject <GLEmployeeInterface>
    {}
    ...
    @end

    Now our code can depend on that interface instead of a particular class:
    - (NSInteger)highestSalaryOfEmployees: (NSSet *)employees {
        NSInteger highestSalary = -1;
        for (id <GLEmployeeInterface> employee in employees) {
            NSInteger thisSalary = [employee.salary integerValue];
            if (thisSalary > highestSalary) highestSalary = thisSalary;
        }
        // note that if the set's empty, I'll return -1
        return highestSalary;
    }


  3. Create a non-Core Data "mock" employee
    Again, YAGNI tells us not to add anything which isn't going to be used.
    @interface GLMockEmployee : NSObject <GLEmployeeInterface>
    {
        NSNumber *salary;
    }
    @property (nonatomic, retain) NSNumber *salary;
    @end

    @implementation GLMockEmployee
    @synthesize salary;
    @end

    Note that because I refactored the code under test to handle classes which conform to the GLEmployeeInterface protocol rather than any particular class, this mock employee object is just as good as the Core Data entity as far as that method is concerned, so you can write tests using that mock class without needing to rely on a Core Data stack in the test driver. You've also separated the logic ("I want to know what the highest salary is") from the implementation of the model (Core Data).



OK, so now that you've written a bunch of tests to exercise that logic, it's time to safely refactor that for(in) loop to an exciting block implementation :-).
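In case it's useful, that refactoring might come out something like the following. This is only a sketch, reusing the GLEmployeeInterface protocol from above; note that -enumerateObjectsUsingBlock: requires the Mac OS X 10.6 SDK.

```objc
- (NSInteger)highestSalaryOfEmployees: (NSSet *)employees {
    // __block lets the block assign to a variable in the enclosing scope
    __block NSInteger highestSalary = -1;
    [employees enumerateObjectsUsingBlock: ^(id <GLEmployeeInterface> employee, BOOL *stop) {
        NSInteger thisSalary = [employee.salary integerValue];
        if (thisSalary > highestSalary) highestSalary = thisSalary;
    }];
    //note that if the set's empty, I'll still return -1
    return highestSalary;
}
```

Because the tests exercise the method through the protocol, they should pass unchanged against either the for(in) or the block implementation.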

Saturday, September 05, 2009

CocoaHeads Swindon is this Monday!

The town of Swindon in the Kingsbridge hundred, Wiltshire is famous for two things. The first is the Wilts and Berks Canal, linking the Kennet and Avon at Trowbridge with the Thames at Abingdon. Authorised by act of parliament in 1775, the canal first passed through the town in 1804 and allowed an explosion in both the industrial and residential capacity of the hitherto quiet market cheaping.

The second is, of course, the local CocoaHeads chapter. Founded by act of Scotty in 2007, Swindon CocoaHeads quickly brought about a revolution in the teaching and discussion of Mac and iPhone development in the South-West, its influence being felt as far away as Swansea to the West and London to the East. Unlike the W&B canal, Swindon CocoaHeads is still thriving to this day. On Monday, 7th September at 20:00 there will be another of the chapter's monthly meetings, in the Glue Pot pub near the train station. Here, Pieter Omvlee will be leading a talk on ImageKit, and the usual combination of beer and Cocoa chat will also be on show. As always, the CocoaHeads website contains the details.

Thursday, August 27, 2009

Indie app milestones part one

In the precious and scarce spare time I have around my regular contracting endeavours, I've been working on my first indie app. It reached an important stage in development today; the first time where I could show somebody who doesn't know what I'm up to the UI and they instinctively knew what the app was for. That's not to say that the app is all shiny and delicious; it's entirely fabricated from standard controls. Standard controls I (personally) don't mind so much. However the GUI will need quite a bit more work before the app is at its most intuitive and before I post any teaser screenshots. Still, let's see how I got here.

The app is very much a "scratching my own itch" endeavour. I tooled around with a few ideas for apps while sat in a coffee shop, but one of them jumped out as something I'd use frequently. If I'll use it, then hopefully somebody else will!

So I know what this app is, but what does it do? Something I'd bumped into before in software engineering was the concept of a User Story: a testable, brief description of something which will add value to the app. I broke out the index cards and wrote a single sentence on each, describing something the user will be able to do once the user story is added to the app. I've got no idea whether I have been complete, exhaustive or accurate in defining these user stories. If I need to change, add or remove any user stories I can easily do that when I decide that it's necessary. I don't need to know now a complete roadmap of the application for the next five years.

As an aside, people working on larger teams than my one-man affair may need to estimate how much effort will be needed on their projects and track progress against their estimates. User stories are great for this, because each is small enough to make real progress on in short time, each represents a discrete and (preferably) independent useful addition to the app and so the app is ready to ship any time an integer number of these user stories is complete on a branch. All of this means that it shouldn't be too hard to get the estimate for a user story roughly correct (unlike big up-front planning, which I don't think I've ever seen succeed), that previous complete user stories can help improve estimates on future stories and that even an error of +/- a few stories means you've got something of value to give to the customer.

So, back with me, and I've written down an important number of user stories; the number I thought of before I gave up :-). If there are any more they obviously don't jump out at me as a potential user, so I should find them when other people start looking at the app or as I continue using/testing the thing. I eventually came up with 17 user stories, of which 3 are not directly related to the goal of the app ("the user can purchase the app" being one of them). That's a lot of user stories!

If anything it's too many stories. If I developed all of those before I shipped, then I'd spend lots of time on niche features before even finding out how useful the real world finds the basic things. I split the stories into two piles; the ones which are absolutely necessary for a preview release, and the ones which can come later. I don't yet care how late "later" is; they could be in 1.0, a point release or a paid upgrade. As I haven't even got to the first beta yet that's immaterial, I just know that they don't need to be now. There are four stories that do need to be now.

So, I've started implementing these stories. For the first one I went to a small whiteboard and sketched UI mock-ups. In fact, I came up with four. I then set about finding out whether other apps have similar UI and how they've presented it, to choose one of these mock-ups. Following advice from the world according to Gemmell I took photos of the whiteboard at each important stage to act as a design log - I'm also keeping screenshots of the app as I go. Then it's over to Xcode!

So a few iterations of whiteboard/Interface Builder/Xcode later and I have two of my four "must-have" stories completed, and already somebody who has seen the app knows what it's about. With any luck (and the next time I snatch any spare time) it won't take long to have the four stories complete, at which point I can start the private beta to find out where to go next. Oh, and what is the app? I'll tell you soon...

Monday, August 17, 2009

On XP mode

This is a reply to @gcluley, who linked to this ZDNet story (which in turn took its quotes from Sophos Podcasts).

The second most crazy thing about the entire "XP mode" issue in Windows 7 is that the feature is entirely unnecessary. Corporate customers of Windows are already, for the most part, comfortable with managing virtual Windows desktops through third-party products with much better management options or at least have trialled such products. Home users of Windows just take whichever version is pre-installed when they buy the PC and if it means buying new versions of some apps, that's what they do. They're used to it. The group of people who could benefit from XP mode - people with a strong need for app compatibility with XP but with no experience of virtualisation - just doesn't exist.

The very existence of the XP mode feature is a microcosmic example of the way Ballmer has been running Microsoft - if there's a market out there that MS isn't in, MS needs to be in it pronto. Bing, Morro, Web-Office, Zune and now virtualisation are all testament to the inability of Microsoft to concentrate on what it does. What Microsoft really does is to sell two things; an enterprise computing environment and an OEM software distribution. Forget that Windows and Office are accounted as two separate products; MS sell Windows+Office to businesses and Windows to computer makers.

Now the interesting question to ponder is which of Microsoft's (real or perceived; remember they aren't necessarily in this market) competitors the "XP mode" feature is a response to. My interpretation is that it's not actually VMware and its ilk at all - Microsoft is once again responding to nonexistent competition from Apple. Boot Camp and the third-party desktop virtualisation offerings on the Mac (including, without hint of irony, VMware) let users use OS X as their shiny new OS with an "XP mode" of sorts for legacy applications. I think what Microsoft are trying to do here is to show that Windows can be the new shiny with XP as the legacy mode, and are therefore positioning XP mode as a counter to the fictitious competition from Apple. Oh, and if you don't believe me when I say that the competition from Apple doesn't exist - Apple sell all of the premium computers while Microsoft take the aforementioned corporate and OEM markets.

OK, so if that was the second most crazy thing about XP mode, what is the most crazy thing about XP mode? It's also that the feature shouldn't exist. Windows has always had a problem with segregating distinct services which other operating systems don't suffer from. While Microsoft's avoidance of this issue has allowed a whole new software industry to spring up around it, the fact that they need to start a second copy of Windows just to get some applications running in Windows 7 doesn't give me much hope for the future.

Friday, August 14, 2009

The next million-dollar iPhone application

I'm constantly surprised by questions such as this one. They invariably go along the lines:

I heard that I need to get a Mac to do iPhone development. I want to do iPhone development but do I have to buy a Mac? Is there any other way to develop iPhone software?


If the projected sales for your app don't meet the cost of a new computer, whatever platform you're developing on, it's time to get a different idea for your app. I speak with the smug self-confidence of one who has yet to get his own app within smelling distance of the store.

Thursday, August 13, 2009

A rap upon the noggin

When a patient may be concussed, it's common for paramedics to ask simple, topical questions to determine whether the patient is confused. Questions such as "Who is the Prime Minister?"

I think somebody may have knocked this poor spammer upside the head (emphasis is mine):

Lloyd's TSB Group plc
25 Gresham Street
London EC2V 7HN

Greetings,

Following the recent announcement by the Chancellor of the Exchequer, Gordon Brown that all assets in accounts that have been dormant for over 15years be transferred to the Treasury i send this mail to you.

There is a dormant account in my office,with no owner or beneficiaries. It will be in my interest to transfer this assets worth 20,000,000 British pounds to an offshore country. If you can be a collaborator/partner to this please indicate interest immediately for us to proceed.

Remember this is absolutely confidential,as i am seeking your assistance to act as the beneficiary of the account, since we are not allowed to operate a foreign accounts. Your contact phone numbers and name will be necessary for this effect.
I have reposed my confidence in you and hope that you will not disappoint me.

My Regards,
Jim McConville
Lloyd's TSB Group plc

Friday, July 31, 2009

Website relaunch!

Today I have re-launched Thaes Ofereode to focus on my new role as an independent Mac boffin. I really like the new design, which was created by the ever-delightful Freya.

edit: Gecko doesn't understand the CSS media selector I was using to provide the iPhone CSS. I've therefore reverted the iPhone design until I can find a way to get Firefox to suck less.


The one thing I added to her design was a more iPhone-friendly look. For those of you without iPhones, the screenshots demonstrate how the mobile version will appear. For those of you who are CSS experts, the following will probably be rather dull; but for those like me who know enough to be dangerous but no more, here's how it's done.

The three-column layout works really well on the desktop, but the iPhone's display is tallscreen-oriented, so there isn't much room for horizontal layout. I therefore chose to put the leftmost, menu column underneath the main content on each page, so iPhone users get to see the heading and then the meat and potatoes. If they are interested enough to get to the end, they'll see the links to the rest of the site.

The links, btw, are just paragraphs with a border, a lot of padding and the magic -webkit-border-radius providing the roundy edges; no messing with JavaScript and funny part-circle images.
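For anyone wanting to crib from it, the whole effect boils down to a couple of rules like these (a sketch only; the selector name here is made up, not the one on the live site):

```css
/* only applied on small (iPhone-sized) screens */
@media only screen and (max-device-width: 480px) {
    .menu-link {
        padding: 1em;
        border: 1px solid #999;
        -webkit-border-radius: 10px; /* the roundy edges */
    }
}
```

Browsers that don't know -webkit-border-radius just draw square corners, so nothing breaks if the property is ignored.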

So, the third column? Well those impressive-looking widgets can't be displayed on the phone anyway, and would be a bit out of place so they're gone for the moment with the mobile CSS. I may code up some JavaScript replacements soon enough, but I'll need to find somewhere else for them to go. In the meantime, I know you read my blog because you're here, and there are many apps which can help you follow me on Twitter.

Tuesday, July 28, 2009

Next CocoaHeads Swindon meet!

So for those of you who didn't manage to enjoy the glories to be found in the town that was the inspiration for one of Legion's more colourful adventures,[*] next Monday, the 3rd of August, offers yet another once-in-a-monthtime opportunity! As ever, the location is in (or just outside) the Glue Pot, a strong man's stone's throw from the Swindon train station. This month's meeting is a recap on QTKit, to allow those who weren't there last time due to the reschedule to catch up on integrating QuickTime into their Cocoa apps.

[*] What am I doing knowing quotes like that? Well, the clue is in the user name. When I was a student my UNIX username was leeg, clearly based on my real name. In short order, I was introduced as "He is Leeg, for he are many" and thus iamleeg.

Tuesday, July 14, 2009

NSConference videos

Scotty and the gang have been getting the NSConference videos out to the public lately, and now sessions 7-9 are available including my own session on security. The videos are really high quality, I'm impressed by the postproduction that's gone in and of course each of the sessions I watched at the conference has some great information and has been well-presented. All of the videos are available here.

I've also put the slides for my presentation up over on slideshare.

Wednesday, July 08, 2009

Refactor your code from the command-line

While the refactoring support in Xcode 3 has been something of a headline feature for the development environment, in fact there's been a tool for doing Objective-C code refactoring in Mac OS X for a long time. Longer than it's been called Mac OS X.

tops of the form



My knowledge of the early days is very sketchy, but I believe that tops was first introduced around the time of OPENSTEP (so 1994). Certainly its first headline use was in converting code which used the old NextStep APIs into the new, shiny OpenStep APIs. Not that this was as straightforward as replacing NX with NS in the class names. The original APIs hadn't had much in the way of foundation classes (the Foundation Kit was part of OpenStep, but had been available on NeXTSTEP for use with EOF), so took char * strings rather than NSStrings, id[]s rather than NSArrays and so on. Also much rationalisation and learning-from-mistakes was done in the Application Kit, parts of which were also pushed down into the Foundation Kit.

All of this meant that a simple search-and-replace tool was not going to cut the mustard. Instead, tops needed to be syntax aware, so that individual tokens in the source could be replaced without any (well, alright, without too much) worry that any of the surrounding expressions would be broken, without too much inappropriate substitution, and without needing to pre-empt every developer's layout conventions.

before we continue - a warning



tops performs in-place substitution on your source code. So if you don't like what it did and want to go back to the original… erm, tough. If you're using SCM, there's no problem - you can always revert its changes. If you're not using SCM, then the first thing you absolutely need to do before attempting to try out tops on your real code is to adopt SCM. Xcode project snapshots also work.

replacing deprecated methods



Let's imagine that, for some perverted reason, I've written the following tool. No, scrub that. Let's say that I find myself having to maintain the following tool :-).

#import <Foundation/Foundation.h>

int main(int argc, char **argv, char **envp)
{
    NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
    NSString *firstArg = [NSString stringWithCString: argv[1]];
    NSLog(@"Argument was %s", [firstArg cString]);
    [arp release];
    return 0;
}


Pleasant, non? Actually non. What happens when I compile it?

heimdall:Documents leeg$ cc -o printarg printarg.m -framework Foundation
printarg.m: In function ‘main’:
printarg.m:6: warning: ‘stringWithCString:’ is deprecated (declared at /System/Library/Frameworks/Foundation.framework/Headers/NSString.h:386)
printarg.m:7: warning: ‘cString’ is deprecated (declared at /System/Library/Frameworks/Foundation.framework/Headers/NSString.h:367)


OK so we obviously need to do something about this use of ancient NSString API. For no particular reason, let's start with -cString:

heimdall:Documents leeg$ tops replacemethod cString with UTF8String printarg.m


So what do we have now?

#import <Foundation/Foundation.h>

int main(int argc, char **argv, char **envp)
{
    NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
    NSString *firstArg = [NSString stringWithCString: argv[1]];
    NSLog(@"Argument was %s", [firstArg UTF8String]);
    [arp release];
    return 0;
}


Looking good. But we still need to fix the -stringWithCString:. That could be just as easy: replacemethod stringWithCString: with stringWithUTF8String: would do the trick. However, let's be a little different here. Why don't we use -stringWithCString:encoding:? If we do that, then we're going to need to take a guess at the second argument, because we've got no idea what the encoding should be (that's why -stringWithCString: is deprecated, after all). However, if we're happy to assume UTF8 is fine for the output, let's do that for the input. We'd better let everyone know that's what happened, though.

So this rule is starting to look quite complex. It says "replace -stringWithCString: with -stringWithCString:encoding:, keeping the C string argument but adding another argument, which should be NSUTF8StringEncoding. While you're at it, warn the developer that you've had to make that assumption". We also (presumably) want to combine it with the previous rule, so that if we see the original file we'll catch both of the problems. Luckily tops lets us write scripts, which comprise one or more rule descriptions. Here's a script which encapsulates both our cString rules:

replacemethod "cString" with "UTF8String"
replacemethod "stringWithCString:<cString>" with "stringWithCString:<cString>encoding:<encoding>" {
replace "<encoding_arg>" with "NSUTF8StringEncoding"
} warning "Assumed input encoding is UTF8"


So why does the <encoding> token become <encoding_arg> in the sub-rule? Well that means "the thing which is passed as the encoding argument". This avoids confusion with <encoding_param>, the parameter as declared in the class interface (yes, you can run tops on headers as well as implementations).

Now if we save this script as cStringNoMore.tops, we can run it against our source file:

heimdall:Documents leeg$ tops -scriptfile cStringNoMore.tops printarg.m


Which results in the following source:

#import <Foundation/Foundation.h>

int main(int argc, char **argv, char **envp)
{
    NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
#warning Assumed input encoding is UTF8
    NSString *firstArg = [NSString stringWithCString:argv[1] encoding:NSUTF8StringEncoding];
    NSLog(@"Argument was %s", [firstArg UTF8String]);
    [arp release];
    return 0;
}


Now, when we compile it, we no longer get told about deprecated API. Cool! But it looks like I need to verify that the use of UTF8 is acceptable:

heimdall:Documents leeg$ cc -o printarg printarg.m -framework Foundation
printarg.m:6:2: warning: #warning Assumed input encoding is UTF8


exercises for the reader, and caveats



There's plenty more to tops than I've managed to cover here. You could (and indeed Apple do) use it to 64-bit-cleanify your sources. Performing security audits is another great use - particularly using constructs such as:

replace strcpy with same error "WTF do you think you're doing?!?"


However, notice that tops is a blunter instrument than the Xcode refactoring capability. Its smallest unit of operation is the source file; refactoring within a single method is not easily achieved. Also, as I said before, remember to check your source into SCM before running a script! There is a -dont option to make tops output its proposed changes without applying them, too.

Finally tops shouldn't be used fully automated. Always assume that you need to inspect the output carefully, don't just Build and Go.

Monday, July 06, 2009

CocoaHeads Swindon tonight!

For those of you who've never explored the delights that the fine city of the Hill of Pigs has to offer, tonight offers an unparalleled opportunity. Come and sit in (or outside, weather permitting) a pub only a short distance from the railway station, and listen to Mike Abdullah speaking about WebKit. As always there'll also be general NSDiscussion, and the occasional pint of beer. Maps etc. at our cocoaheads.org page.

Friday, July 03, 2009

Just because Brucie says it...

Bruce Schneier claims that shoulder-surfing isn't much of a problem these days.

Plenty of people discovered "my password" at NSConference, so I disagree :-) (photo courtesy of stuff mc).

Wednesday, July 01, 2009

KVO and +initialize

Got caught by a really hard-to-diagnose issue today, so I decided to write it down in part so that you don't get bitten by it, and partly so that next time I come across the issue, I'll remember what it was.

I had a nasty bug in trying to add support for the AppleScript duplicate command to one of my objects. Now duplicate should, in principle, be simple: just conform to NSCopying and implement -copyWithZone:. The default implementation of NSCloneCommand should deal with everything else. But what I found was that there's a class variable (OK, there isn't, there's a static in the class implementation file with accessors) with which the instances must compare some properties. And this was empty by the time the AppleScript ran. Well, that's odd, thought I, it's only being emptied once, and that's when it's created in +[MyClass initialize]. So what's going on?

Having set a watchpoint on the static, I now know the answer: the +initialize method was being called twice. Erm, OK...why? It's only called whenever a class is first used. It turns out that there were two classes with the same IMP for that method. The first was MyClass, and the second? NSKVONotifying_MyClass. Ah, great, Apple are adding a subclass of one of my classes for me!

It turns out that TFM has a solution:


+ (void)initialize
{
    if (self == [MyClass class])
    {
        //real code
    }
}


and I could use that solution here to fix my problem. But finding out that that was the problem was a complete pig.

Monday, June 29, 2009

Going indie!

This is sort of a message from the past. I wrote it yesterday, but had people I needed to talk to before I could hit the big old publish button. (Including this bit, so I really wrote it "today", but the earliest you can read it means that "today" will be "yesterday". This is one of those uses of the past-perfect-nevertense that blows up lesser recording equipment.)

Today (the real today that this thing was posted), I handed in my notice at Sophos. Six weeks from now, I will be officially an unemployed starving artist. I'm working on lining up a project to take up most of my time for the first few months of the new era, which really looks like it will work out, and of course need to solve the "marketing presence" problem. Although the fact that you're reading this probably means you already know who I am, and something about what I do. If you want a Cocoa or UNIX developer for a contract - especially one with experience in the worlds of security and scientific computing - then please see my LinkedIn profile to find out a bit more of what I've been up to, and drop me a line - here, at @iamleeg or to iamleeg at gmail dot com. I know there are a load of interesting people out there working on a load of interesting projects, one of the great things about WWDC every year is meeting you all and sharing in the excitement. Well hopefully I'll get to work with you on some of that cool software, too!

So, like many people who go self-employed, I've got little idea of what will happen next :-). I've got some ideas for apps which I'll be working on in the (probably too copious) spare time I'll have. But I'm going to focus on contracting and consulting in the short term. This is going to be an exciting time, if somewhat daunting...but you'll be able to check on my daunt levels right here, dear readers.

I promised at the turn of the year that there would be lots of blog posts on various tech things during the first half of the year. Unsurprisingly that didn't quite pan out, and I'm hoping to rectify that over the next couple of months now that I have fewer (perceived) content restrictions on the blog. And this first project I have lined up should certainly be producing some good'uns, assuming it all works out. I'll be the first to admit that if it doesn't, I'm heading for trouble very shortly. Which is why I know that it, or something very like it, will work out :-).

To fellow Sophists who are hearing about this for the first time here, I'm very sorry. I tried to let people know today but there are hundreds of you, one of me and lots of loose ends to tie by mid-August. But don't worry, there will be beer.

Sunday, June 21, 2009

Reverse-engineering stringed instruments

Despite being able to play some instruments, I probably couldn't do a good job of making any of them. I don't have the patience required to boil a horse for long enough to stick a fiddle together, for instance. Luthiers would probably get incredibly bored by the following post but for fellow apart-takers, it'll hopefully be quite interesting.

I once made a stringed synthesiser, when I was at college. The basic principle is that of an electric guitar running in reverse. A metal string is stretched between two bridges, and sits inside a horseshoe magnet. Now rather than plucking the string and letting the magnetic pickup detect the vibrations, we pass an alternating current through the string and use the magnetic field to cause the string to vibrate. Mount the whole shooting match on a soundbox roughly the shape of an appalachian dulcimer, so it makes a decent amount of noise. And there you go!

Well, there bits of you go, anyway. While you can drive the string at any frequency you like, it'll be really quiet on any note which isn't a natural harmonic - it's being forced at one frequency, and trying to vibrate at a bunch of other frequencies, so can't really resonate. Assuming that the materials of the string aren't up for grabs, it's the length and tension which choose the fundamental frequency. What you currently have is capable of playing bugle tunes. Put a few different bugles together and you can play chromatic scales, so putting a few different strings together increases the likelihood that our synthesiser has the ability to play some notes in the tune we want.
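To put numbers on that: for an ideal string, Mersenne's law gives the frequencies the string is happy to resonate at, in terms of its length and tension:

```latex
f_n = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, 3, \dots
```

where L is the vibrating length, T the tension and μ the mass per unit length; n = 1 is the fundamental, and the natural harmonics are the integer multiples above it.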

In fact, it's still going to have a fairly nasty volume characteristic, because most of the noise doesn't come from the string, it comes from the soundbox. In that regard, violins and pipe organs have quite a lot in common. But they differ in that pipe organs have one soundbox per note - the pipe itself - and violins have a single box. So it'd better be possible to get it to resonate at a whole bunch of interesting frequencies, which is one of the reasons for making them (and acoustic guitars) the shapes that they are. Now what the real acoustic characteristics of a fiddle body are, I'm not entirely sure. But what I do know is that a violin is surprisingly small - the "concert pitch" A pipe in an organ is a little under 40cm and the lowest note a violin usually has (a ninth lower) will therefore be almost a metre long. Even though an enclosed box like a violin needs to be half the length of an organ pipe playing the same note, it will still naturally accentuate the higher frequencies that the strings have on offer because it isn't big enough to do much else. And it's that which gives the instruments their sound. My synth's soundbox was cuboid-ish, so had two characteristic "loud" notes and their harmonics. The z-axis wouldn't have resonated much as the box was on a table which absorbed the momentum.

The remaining interesting point is that the violin is forced to extract the sound from a rubbish part of the string. The bridge is both responsible for translating the motion of the string into the wood and for stopping the string from moving. The string moves most somewhere out toward the middle (it's mainly vibrating at its natural wavelength, which is twice the length of the string) and not at all near the ends. That's why pickups on electric guitars sound "warmer" further away from the bridge. There they pick up more of the lower harmonics, but the nearest pickup (and the bridge) get proportionally more of the higher harmonics.

Tuesday, June 16, 2009

Beer improves perception of security

...at least, it provides for a nice analogy to use when discussing basic security concepts. I don't think people necessarily choose better passwords after a skinful, nor do they usually make improved choices of what information to share on social networking sites when returning from the pub. That's probably why Mail Goggles exists.

So, the attendee beer bash at WWDC. There is beer. There is also the regulatory framework of the state of California, which mandates that minors under 21 years of age may not be supplied with beer. There are also student scholarship places to the WWDC, and no theoretical minimum attendee age. There are also loads of people in the vague area of Yerba Buena Gardens who are not attending the WWDC. But only attendees may visit the bash, and only attendees over 21 may drink beer.

I went to the registration desk at Moscone West on the day before Phil Schiller's keynote, and identified myself as Graham Lee. "Hi, my name's Graham Lee" I said to the lady behind the desk. It turns out that's not sufficient. While I did indeed give the name of someone who was indeed registered to attend the conference, there's no reason for the lady to believe the identification I gave her. She wants to be able to authenticate my claim, and chooses to rely on a trusted third party to do so. That third party is the British government (insert your own jokes here), and she is happy to accept my passport as confirmation of my identity.

Now I am given my attendee badge, a token which demonstrates that I have authenticated as Graham Lee, an attendee at WWDC. When I move around the conference centre to get to the sessions and the labs, the security staff merely need to look for the presence of this token. They don't need to go through the business of checking my passport again, because the fact that I have my token satisfies them that I have previously had my identity authenticated to the required level.

The access token would be sufficient to get me into the attendee beer bash, as it proves that I have authenticated as an attendee. But it does not demonstrate that I am authorised to drink beer. So on the day of the bash I go back to the registration desk and show a different lady my passport again, which indicates that I am over 21 and can therefore be given the authority to drink beer at the bash. I am given a subtle green wristband, which again acts as a token; this time not an identification token but an authorisation token. The bar staff do not care which of the 5200 attendees I am. In fact they do not care about my identity at all, because they know that the security staff have already verified my attendee status at the entrance to the event - therefore they don't need to see my attendee pass. They only care about whether I'm in the group of people permitted to drink beer, and the wristband shows that I am in that group. It shows that I have demonstrated the credentials needed to gain that particular authority.
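The two tokens can be sketched in code. Here's a minimal Python sketch (every name, list and age in it is invented for illustration) of the same separation: one check establishes identity against a trusted third party and issues an identity token, the other checks a property of the holder and issues an anonymous authorisation token.

```python
# Authentication: who are you? Resolved once, against a trusted third party.
REGISTERED_ATTENDEES = {"Graham Lee"}        # the conference's own list
PASSPORT_HOLDERS = {"Graham Lee": 28}        # trusted third party: name -> age

def issue_badge(claimed_name):
    """Registration desk: verify the identity claim, issue an identity token."""
    if claimed_name in REGISTERED_ATTENDEES and claimed_name in PASSPORT_HOLDERS:
        return {"kind": "badge", "name": claimed_name}
    return None

def issue_wristband(badge):
    """Authorisation: may you drink? Checks a property, not an identity."""
    if badge is not None and PASSPORT_HOLDERS[badge["name"]] >= 21:
        return {"kind": "wristband"}  # carries no identity information at all
    return None

# Security staff look only for a badge; bar staff look only for a wristband.
def may_enter(token):
    return token is not None and token["kind"] == "badge"

def may_drink(token):
    return token is not None and token["kind"] == "wristband"
```

Note that the wristband never records who its holder is: like the bar staff, `may_drink` inspects the authorisation token alone.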

So, there we have it. A quick beer of an evening with a few thousand colleagues can indeed turn into a fun discussion of the distinction between authentication and authorisation, and how these two tasks can be carried out independently.

Saturday, June 13, 2009

WWDC wind-down

As everyone is getting on their respective planes and flying back to their respective homelands, it's time to look back on what happened and what the conference means.

The event itself was great fun, as ever. Meeting loads of new people (a big thank-you to the #paddyinvasion for my dishonourary membership) as well as plenty of old friends is always enjoyable - especially when everyone's so excited about what they're working on, what they've discovered and what they're up to the next day. It's an infectious enthusiasm.

Interestingly the sessions and labs content has more of a dual impact. On the one hand it's great to see how new things work, how I could use them, and to realise that I get what they do. The best feeling is taking some new information and being able to make use of it or see how it can be used. That's another reason why talking to everyone else is great - they all have their own perspectives on what they've seen and we can share those views, learning things from each other that we didn't get from the sessions. If you were wondering what the animated discussions and gesticulations were in the 4th Street Starbucks at 7am every morning, now you know.

On the other hand, it makes me realise that OS X is such a huge platform that there are parts I understand very well, and parts that I don't really know at all. My own code spreads a wide path over a timeline between January 1, 1970 and September 2009 (not a typo). For instance, it wasn't until about 2003 that I knew enough NetInfo to be able to write a program to use it (you may wonder why I didn't just use DirectoryServices - well even in 2003 the program was for NeXTSTEP 3 which didn't supply that API). I still have a level of knowledge of Mach APIs far below "grok", and have never known even the smallest thing about HIToolbox.

There are various options for dealing with that. The most time-intensive is to take time to study - I've got a huge collection of papers on the Mach design and implementation, and occasionally find time to pop one off the stack. The least is to ignore the problem - as I have done with HIToolbox, because it offers nothing I can't do with Cocoa. In-between are other strategies such as vicariously channeling the knowledge of Amit Singh or Mark Dalrymple and Aaron Hillegass. I expect that fully understanding Mac OS X is beyond the mental scope of any individual - but it's certainly fun to try :-).

Wednesday, June 10, 2009

Unit testing Cocoa projects in Xcode

Unlike Bill, whose piece on unit testing in Xcode 3.0 is linked from the title, when I started writing unit tests for my Cocoa projects I had no experience of testing in any other environment (well, OK, I'd used OCUnit on GNUstep, but I decline to consider that a separate environment). However, what I've seen of unit testing in Cocoa still makes me think I must be missing something.

The first thing is that when people such as Kent Beck talk about test-driven development, they mention "red-green-refactor". Well, where's my huge red bar? Actually, I sometimes write good code so I'd like to see my huge green bar too, but Xcode 3.1 doesn't have one of those either. You have to grub through the build results window to see what happened.
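For what it's worth, the red-green-refactor loop itself is independent of the IDE. Here's a sketch in Python's unittest (a stand-in for illustration, nothing Xcode-specific): write the failing tests first, then the minimal implementation, and a textual runner reports the result immediately rather than burying it in a build log.

```python
import unittest

def word_count(text):
    """Green: the minimal implementation that makes the tests pass."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Red: these tests were written (and watched fail) before word_count existed.
    def test_counts_words(self):
        self.assertEqual(word_count("red green refactor"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# A plain textual runner prints a pass/fail summary straight away;
# the refactor step then happens with this safety net in place.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```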

Sometimes, a test is just so badly broken that rather than simply failing, it crashes the test runner. This is a bit unfortunate, because it can be very hard to work out which test broke the harness. That's especially true if the issue is some surprising concurrency bug and one test breaks a different test, or if the test manages to destroy the assumptions made in -tearDown and crashes the harness after it's run. Now Chris Hanson has posted a workaround to get the debugger working with a unit test bundle target, but wouldn't it be nice if that "just worked", in the same way that breaking into the debugger from Build and Run "just works" in an app target?

Wednesday, June 03, 2009

Follow-up-and-slightly-over on safety/security

The one thing which makes this a less-than-standard follow-up is that the original was not posted here, but over on paranym Graham Cluley's blog. I originally wrote about the (fictitious) difference between safety and security. For those who didn't clickety the linkelode, I wrote that John Gruber's distinction between safety and security was just a neat lexical game to sidestep popular Mac-press opinion. I also wrote that I wanted to cover the original article, Snow Leopard security is all relative. Well, now I shall.

So Dennis Fisher argues that "very few users will switch to a Mac or from a Mac because of security"…"But if Snow Leopard turns out to be a major security upgrade over the current versions, that's an important step for Apple and its customers." I'm not sure I agree there. As far as I can see, the fundamental place where Fisher and I start to disagree is on what value security marketing has.

I infer from the article that Fisher takes security marketing to be a zero-sum game between the competitors: every time Apple wins, Microsoft necessarily loses; Microsoft have "out-secured" Apple and it's up to Apple to "out-secure" them back. I believe that many consumers consider security a "hygiene factor": invisible most of the time, but unacceptable when it becomes an issue. That would make it hard to market a secure OS, but impossible to sell an insecure one. It's an important distinction, so let me explain. People will not consider security as a factor in any product which seems secure enough, but will not touch any product which does not seem secure enough. Therefore a loss for any one company pulls its rep down with respect to the competitors, but there isn't really any such thing as a security marketing "win".

Where does that leave Apple? Well, a "major security upgrade" could go in one of two directions. Either it means moving from 0 concern over Mac security to a smaller value of 0 concern; or it could lead people to think that there were some security holes in Leopard that Apple decided not to tell us about and patched up in Snow Leopard. It seems to me that there is, at best, no value in marketing based on the security posture of the OS (though security features are, admittedly, different), however there certainly is value in improving the security posture to avoid the negative market perception of vulnerabilities. There is also value in responding openly and quickly to security issues to stem the rep bleeding any problem would cause.

Knowing Apple, though, they'll find the other way; the way of making security posture a winnable marketing game and winning that game.

Fisher's article states that the real question "is whether Snow Leopard will be more secure than the current version of OS X" - whereas for the moment the real question is whether Snow Leopard will continue to be secure enough.

Friday, May 29, 2009

chord graphics

Ha, crossover humour! It's like Core Graphics, which is a Mac thing, only it's chord, because I'm talking about music, but I'm a Mac guy...oh, never mind.

When I'm trying to think of chords in music I always end up with a mental image of a piano keyboard, with the notes that make up the chord pressed. That's all well and good, but I don't have a piano! Apart from some set pieces like barre chords, I can't really think of note combinations in the same way on a guitar, and certainly get flustered trying to harmonise on a violin. What I really need is a piano to sit down and work out harmonies at, which I could then play on the instrument of my choosing (playing a string of notes on any of those instruments isn't so much of a problem).

Unfortunately the biggest piece of floor space I currently have access to is about 1.3m x 0.4m. Maybe some cheap Bontempi would fit there, but not an 88-key upright. If there were a real-space version of the GarageBand digital keyboard, that would certainly fit...in fact it would probably fit in one of my nostrils. One octave doth not a piano make. Some people have suggested Clavinova, MODUS or similar electric pianos before. The thing is, they cost around £2k, whereas an upright is <£500.

Thursday, May 28, 2009

Prepping for WWDC

With the obvious first question being which parties do I go to? See you there?

Wednesday, May 20, 2009

The rokeg blood pie^W^W^Wplot thickens

So, having already discussed Klingon Anti-Virus, the under-researched Klingon threat detection tool made available by Sophos, it seems that more information has been made available. From no less, or indeed more, of a source than the blog of my Clu-ful conym.

This seems to confirm the impression that the tool has been developed for some special internal use and might not be downloadable much longer. It's hard to tell, though; most of the company is being very quiet about it (indeed it wasn't until today that much internal noise was generated about the tool at all).

Of course, maybe I'm being duped. This could be some sort of company experiment to see, well, either how much free marketing they can get or who in the company is responsible for the press leaks. If it's the latter, then I need you all to take a look at my CV as I'll probably be relying on it - and you - soon ;-).

Anyway, take a look at the tool if you're interested, I've had reports that it works well but still haven't heard much feedback about the quality of the translation. BTW, interested in a Mac version of the tool? I can't promise anything but leave a message after the beep and I'll forward requests...

Tuesday, May 19, 2009

Detect the gagh lurking in your system!

Following up on my previous ability to get to the top of a Google search for a Klingon word (that one was chuvmey, as in my post Model, View, chuvmey) here is yet another attempt. At what? Why, at skewing the mental associations between science fiction television and the digital security industry, of course!

Sophos Klingon Anti-Virus is a threat detection tool for Windows computers, but in Klingon. Ever wondered what Conficker and Rokeg blood pie have in common? No, neither have I. In fact, I doubt anyone has. Nonetheless, try out the tool and see what Romulan back-doors have been installed on your box.

(N.B. this means we have to expand our remit from "Enterprise" security software, to include at least the "HMS Bounty" from Star Trek IV)

Saturday, May 02, 2009

Rootier than root

There's a common misconception (one the book I'm reading now suffers from) that single-user mode on a Unix such as Mac OS X gives you root access. Actually, it grants you higher access than root. For example, set the system immutable flag on a file (schg I think, but my iPhone doesn't have man). Root can't remove the flag, but the single user can.

Saturday, April 25, 2009

On dynamic vs. static polymorphism

An interesting juxtaposition in the ACCU 2009 schedule put my talk on "adopting MVC in Objective-C and Cocoa" next to Peter Sommerlad's talk on "Design patterns with modern C++". So the subject matter in each case was fairly similar, but then the solutions we came up with were entirely different.

One key factor was that Peter's solutions try to push all of the "smarts" of a design pattern into the compiler, using templates and metaprogramming to separate implementations from interfaces. On the other hand, my solutions use duck typing and dynamic method resolution to push all of the complexity into the runtime. Both solutions work, of course. It's also fairly obvious that they're both chosen based on the limitations and capabilities of the language we were each using. Nonetheless, it was interesting that we both had justifications for our chosen (and thus One True) approach.

In the Stroustrup corner, the justification is this: by making the compiler resolve all of the decisions, any problems in the code are resolved before it ever gets run, let alone before it gets into the hands of a user. Whereas the Cox defence argues that my time as a programmer is too expensive to spend sitting around waiting for g++ to generate metaprogramming code, so replace the compilation with comparatively cheap lookups at runtime - which also allows for classes that couldn't possibly have existed at compile time, such as those added by the Python or Perl bridge.

This provided concrete evidence of a position that I've argued before - namely that Design Patterns are language-dependent. We both implemented Template Method. Peter's implementation involved a templatized abstract class which took a concrete subclass in the realisation (i.e. as the parameter in the <T>). My implementation is the usual Cocoa delegate pattern - the "abstract" (or more correctly undecorated) class takes any old id as the delegate, then tests whether it implements the delegation sequence points at runtime. Both implement the pattern, and that's about where the similarities end.
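To make the contrast concrete, here's a sketch of the delegate-style Template Method in Python (chosen because it shares Objective-C's dynamic dispatch; all class and method names here are invented). The skeleton of the algorithm is fixed, and the customisation point is resolved at runtime by probing whatever object was supplied - the moral equivalent of -respondsToSelector:.

```python
class Exporter:
    """Template Method with a duck-typed delegate, in the Cocoa style.

    The 'abstract' class is just an undecorated class; the delegate is
    any object at all, probed at runtime for the hooks it implements.
    """

    def __init__(self, delegate=None):
        self.delegate = delegate

    def export(self, records):
        # The fixed skeleton of the algorithm...
        lines = [self._format(record) for record in records]
        # ...with an optional customisation point resolved at runtime.
        hook = getattr(self.delegate, "exporter_did_format", None)
        if callable(hook):
            lines = [hook(line) for line in lines]
        return "\n".join(lines)

    def _format(self, record):
        return str(record)


class ShoutingDelegate:
    # No base class, no conformance declaration: pure duck typing.
    def exporter_did_format(self, line):
        return line.upper()


plain = Exporter().export(["a", "b"])                   # "a\nb"
loud = Exporter(ShoutingDelegate()).export(["a", "b"])  # "A\nB"
```

A C++ realisation would instead fix the customisation at compile time, by passing the concrete subclass as the template parameter; the pattern is the same, the binding time is not.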

Tuesday, April 21, 2009

Did you miss my NSConference talk?

The annotated presentation slides are now available to download in Keynote '08 format! Sorry you couldn't make it, and I hope the slides are a reasonable proxy for the real thing.

Monday, April 20, 2009

I may have not been correct

When I said Apple should buy Sun, whether or not that was a good idea, it seems to have failed to occur. Instead, we find that Oracle have done the necessary. Well, there goes my already-outdated SUNW tag. Presumably they'll keep Java (the licensing revenue is actually pretty good), MySQL (I've heard that Oracle make databases), the OS and a subset of the hardware gear. Then they'll become the all-in-one IT-industry-in-a-box vendors that Cisco have yet to organise, with (presumably x86) servers running the Solaris-Glassfish-Oracle-Java app stack in some insanely fast fashion. I wonder how many of the recent leftfield hardware projects they'll just jettison, and who will end up running the Santa Clara business unit...

Sunday, April 19, 2009

On default keychain settings

After my presentation at NSConference there was a discussion of default settings for the login keychain. I mentioned that I had previously recommended some keychain configuration changes including using a different password than your login password. Default behaviour is that any application can add a secure item to the keychain, and the app that did the adding is allowed to read and modify the entry without any user interaction. As Mike Lee added, all other apps will trigger a user dialogue when they try to do so - the user doesn't then need to authenticate but does have to approve the action.

That almost - but not quite - solves the issue of a trojan horse attempting to access the secure password. Sure, a trojan application can't get at it without asking the user. What about other trojan code? How about a malicious SIMBL hijack or a bundle loaded with mach_override? It should be possible to mitigate those circumstances by using custom code signing requirements, but that's not exactly well documented, and it's not really good usability for an app to just die on its arse because the developer doesn't like the other software their user has.

There's a similar, related situation - what if the app has a flawed design allowing it to retrieve a keychain item when it doesn't need it? Sounds like something which could be hard to demonstrate and harder to use, until we remember that some applications have "the internet" as their set of input data. Using a web browser as an example, but remembering that I have no reason to believe whether Safari, Camino or any other browser is designed in such a way, imagine that the user has stored an internet password. Now all that the configuration settings on the user's Mac can achieve is to stop other applications from accessing the item. If that browser is itself subject to a "cross-site credentials request" flaw, where an attacking site can trick the browser into believing that a login form (or perhaps an HTTP 401 response, though that would be harder) comes from a victim site, then that attacker will be able to retrieve the victim password from the keychain without setting off any alarms with the user.

If the user had, rather than accepting the default keychain settings, chosen to require a password to unlock the keychain, then the user would at least have the chance to inspect the state of the browser at the time the request is made. OK, it would be better to do the right thing without involving the user, but it is at least a better set of circumstances than the default.

Friday, April 17, 2009

NSConference: the aftermath

So, that's that then, the first ever NSConference is over. But what a conference! Every session was informative, edumacational and above all enjoyable, including the final session where (and I hate to crow about this) the "American" team, who had a working and well-constructed Core Data based app, were soundly thrashed by the "European" team who had a nob joke and a flashlight app. Seriously, we finally found a reason for doing an iPhone flashlight! Top banana. I met loads of cool people, got to present with some top Cocoa developers (why Scotty got me in from the second division I'll never know, but I'm very grateful) and really did have a good time talking with everyone and learning new Cocoa skills.

It seems that my presentation and my Xcode top tip[*] went down really well, so thanks to all the attendees for being a great audience, asking thoughtful and challenging questions and being really supportive. It's been a couple of years since I've spoken to a sizable conference crowd, and I felt like everyone was on my side and wanted the talk - and indeed the whole conference - to be a success.

So yes, thanks to Scotty and Tim, Dave and Ben, and to all the speakers and attendees for such a fantastic conference. I'm already looking forward to next year's conference, and slightly saddened by having to come back to the real world over the weekend. I'll annotate my Keynote presentation and upload it when I can.

[*] Xcode "Run Shell Script" build phases get stored on one line in the project.pbxproj file, with all the line breaks replaced by \n. That sucks for version control because any changes by two devs result in a conflict over the whole script. So, have your build phase call an external .sh file where you really keep the shell script. Environment variables will still be available, and now you can work with SCM too :-).

Friday, April 03, 2009

Controlling opportunity

In Code Complete, McConnell outlines the idea of having a change control procedure, to stop the customers from changing the requirements whenever they see fit. In fact, one feature of the process is that it is heavy enough to dissuade customers from registering changes.

The Rational Unified Process goes for the slightly more neutral term Change Request Management, but the documentation seems to imply the same opinion, that it is the ability to make change requests which must be limited. The issue is that many requests for change in software projects are beneficial, and accepting the change request is not symptomatic of project failure. The most straightforward example is a bug report - this is a change request (please fix this defect) which converts broken software into working software. Similarly, larger changes such as new requirements could convert a broken business case into a working business case; ultimately turning a failed project into a revenue-generator.

In my opinion the various agile methodologies don't address this issue, either assuming that with the customer involved throughout, no large change would ever be necessary, or that the iterations are short enough for changes to be automatically catered for. I'm not convinced; perhaps after the sixth sprint of your content publishing app the customer decides to open a pet store instead.

I humbly suggest that project managers replace the word "control" in their change documentation with "opportunity" - let's accept that we're looking for ways to make better products, not that we need excuses never to edit existing Word files. OMG baseline be damned!

Wednesday, March 04, 2009

On noodles

It's usually considered a good idea to keep a blog focused on exactly one subject. Sod that for a game of soldiers! This one's all about music.

Steph, who is a very good musician and knows what she's talking about, wrote that there are two ways to play a harp, a "get the music right" recitation style and a "get the rhythm right and everything else follows" style more suited to improvisation, noodling or folk playing.

That's not only true of the harp, it seems to hold for many instruments. For instance, in a moment of crazed, um, craziness this weekend I bought what's commonly referred to as a lute. In fact, a lute with much in common with this lute. Now I've been playing it for all of about two hours in total since Saturday, and can barely remember what note each course plays, and do a poor rendition of about four different tunes from a book of trivially simple lute tunes. But today marked an interesting transition, as it was the first day that I could make music up on the instrument without either knowing or concentrating on what I was doing. Only a couple of things (the song "Wooden Heart" made famous by Elvis Presley, and that banging dance-floor filler "Parson's Farewell") but this represented the point where I could make music on the lute - a different skill than remembering where and when to stick fingers on some bits of nylon, steel and wood.

And I think that's where the root of Steph's distinction of playing techniques really comes from; the unprepared style relies on having some music that needs to occur, and the ability for your hands (or nose or whatever your instrument is played with) to move around in some way which causes that music to exist. Whereas the prepared style relies on having some performance in mind that must be repeated, and requires that you think about moving $appendage in such-and-such way to recreate that performance. I find the distinction in conscious application to be an important one when playing the fiddle, an instrument I'm marginally better at than the lute. If I'm reading some music, playing solo or otherwise engaged in trying to play music, then I can only play whatever notes the music contained and in a fairly uninteresting manner. It's only if I'm able to relax and not think about the music that I can harmonise, ornament and otherwise play more interesting things than what was written on the page - even if not necessarily particularly well :-).

Monday, February 23, 2009

Cocoa: Model, View, Chuvmey

Chuvmey is a Klingon word meaning "leftovers" - it was the only way I could think of to keep the MVC abbreviation while impressing upon you, my gentle reader, the idea that what is often considered the Controller layer actually becomes a "Stuff" layer. Before explaining this idea, I'll point out that my thought processes were set in motion by listening to the latest Mac Developer Roundtable (iTunes link) podcast on code re-use.


My thesis is that the View layer contains Controller-ey stuff, and so does the Model layer, so the bit in between becomes full of multiple things: the traditional OpenStep-style "glue" or "shuttle" code which is what the NeXT documentation meant by Controller; dynamic aspects of the model which could be part of the Model layer; view customisation which could really be part of the View layer; and anything else which either doesn't fit elsewhere, or which we haven't noticed could. Let me explain.


The traditional source for the MVC paradigm is Smalltalk, and indeed How to use Model-View-Controller is a somewhat legendary paper in the use of MVC in the Smalltalk environment. What we notice here is that the Controller is defined as:


The controller interprets the mouse and keyboard inputs from the user, commanding the model and/or the view to change as appropriate.

We can throw this view out straight away when talking about Cocoa, as keyboard and mouse events are handled by NSResponder, which is the superclass of NSView. That's right, the Smalltalk Controller and View are really wrapped together in the AppKit, both being part of the View. Many NSView subclasses handle events in some reasonable manner, allowing delegates to decorate this at key points in the interaction; some of the handlers are fairly complex like NSText. Often those decorators are written as Controller code (though not always; the Core Animation -animator proxies are really controller decorators, but all of the custom animations are implemented in NSView subclasses). Then there's the target-action mechanism for triggering events; those events typically exist in the Controller. But should they?


Going back to that Smalltalk paper, let's look at the Model:


The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller).

If the behaviour - i.e. the use cases - are implemented in the Model, well where does that leave the Controller? Incidentally, I agree with and try to use this behavior-and-data definition of the Model, unlike paradigms such as Presentation-Abstraction-Control where the Abstraction layer really only deals with entities, with the dynamic behaviour being in services encapsulated in the Control layer. All of the user interaction is in the View, and all of the user workflow is in the Model. So what's left?


There are basically two things left for our application to do, but they're both implementations of the same pattern - Adaptor. On the one hand, there's preparing the Model objects to be suitable for presentation by the View. In Cocoa Bindings, Apple even use the class names - NSObjectController and so on - as a hint as to which layer this belongs in. I include in this "presentation adaptor" part of the Controller all those traditional data preparation schemes such as UITableView data sources.

The other is adapting the actions etc. of the View onto the Model - i.e. isolating the Model from the AppKit, UIKit, WebObjects or whatever environment it happens to be running in. Even if you're only writing Mac applications, that can be a useful isolation; let's say I'm writing a Recipe application (for whatever reason - I'm not, BTW, for any managers who read this drivel). Views such as NSButton or NSTextField are suitable for any old Cocoa application, and Models such as GLRecipe are suitable for any old Recipe application. But as soon as they need to know about each other, the classes are restricted to the intersection of that set - Cocoa Recipe applications.

The question of whether I write a WebObjects Recipes app in the future depends on business drivers, so I could presumably come up with some likelihood that I'm going to need to cross that bridge (actually, the bridge has been deprecated, chortle). But other environments for the Model to exist in don't need to be new products - the unit test framework counts. And isn't AppleScript really a View which drives the Model through some form of Adaptor? What about Automator…?
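As a sketch of that presentation-adaptor role, here's a hypothetical Recipe example in Python (the names Recipe and RecipeListAdapter are invented for illustration; GLRecipe is the analogue in the text). The Model knows nothing about any view toolkit; the adaptor prepares it for whatever happens to display it, whether an AppKit table, a web page or a unit test.

```python
class Recipe:
    """Model: behaviour and data, with no knowledge of any view toolkit."""

    def __init__(self, name, ingredients):
        self.name = name
        self.ingredients = ingredients  # ingredient -> quantity in grams

    def scaled(self, portions):
        # Use-case behaviour lives in the Model, per the Smalltalk definition.
        return {ing: qty * portions for ing, qty in self.ingredients.items()}


class RecipeListAdapter:
    """Controller-as-adaptor: prepares Model objects for presentation,
    the role a table data source or NSObjectController plays, without
    the Model ever learning which toolkit displays it."""

    def __init__(self, recipes):
        self.recipes = recipes

    def number_of_rows(self):
        return len(self.recipes)

    def title_for_row(self, row):
        # Presentation decisions (capitalisation here) belong to the adaptor.
        return self.recipes[row].name.title()
```

Swap the presentation environment and only the adaptor changes; the Recipe class is reusable in any Recipe application, which is the isolation argued for above.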


So let me finish by re-capping on what I think the Controller layer is. It's definitely an adaptor between Views and Models. But depending on who you ask and what software you're looking at, it could also be a decorator for some custom view behaviour, and maybe a service for managing the dynamic state of some model entities. To what extent that matters depends on whether it gets in the way of effectively writing the software you need to write.