Saturday, December 08, 2007

Objective-C Design Patterns

Certain events at work have turned me into a bit of a design patterns geek of late, and as such I stumbled across this DDJ article from 1997 (the title of this post is the link). Not many other people seem to have stumbled across it, but it's a great article. The code listing links seem to 404, even though the listings are at the bottom of page 1 of the article.

Something very important can be learned from this article, something at best covered tangentially in the GoF book: design patterns are not language-independent. Calling the C++ and Objective-C ways of writing code "pattern idioms" misses the point that actually, the code you come up with is more important than the design (customers don't typically want to pay the same cash for your UML diagrams that they do for your executables), and (gasp) different languages require different code! Different patterns can be used in ObjC and Smalltalk than in C++, and different patterns again in object-oriented Perl. Different patterns will be appropriate for working with Foundation than with the NeXTSTEP (pre-4.0) foundation kit or with ICPak101. Designing your solution independently of the language and framework you're going to use will get you a solution, but not necessarily the solution that's easiest to maintain, the most efficient, or one that makes any sense to an expert in that language and framework.

Tuesday, December 04, 2007

FSF membership

I am now an associate member of the FSF. This is a good way to support Free Software development (including GNUstep, and you don't even need to be able to code :-). I've added a referral link to the sidebar - I don't get a kickback obviously, although I do get gifts if enough people are referred by me, and it helps the FSF to track where donors are getting their information from. If you use Free Software, you might consider donating some cash - especially now that the dollar's so crap ;-).

Thursday, November 22, 2007

Post #100!

And to celebrate, we look at the differences between managers and humans, er, programmers.

Upcoming Cocoa nerd stuff

I have organised an NSCoder Night for this coming Tuesday, November 27. It shall be in the Jericho Tavern pub at 8pm; bring yourself, bring an interest in Cocoa, and perhaps bring some code to talk about or work on. There won't be any agenda as such, just a group of NSCoders talking about NSCoding.

In January, PaulHR and I shall be entertaining OxMUG on the subject of Getting Things Done™ - in particular I have now started braindumping my many to-do lists into OmniFocus and I'm finding it very expressive and useful. In fact preparing that talk has just zoomed its way over to my OF inbox :-). That talk shall be Tuesday, January 8th.

Monday, October 29, 2007

Verify your backups

Apple shipped Mac OS X 10.5 this weekend, and three of the features are Time Machine, dtrace, and improved CHUD tools. Time Machine, dtrace, CHUD tools. iPod, mobile phone, web browser. Time Machine, dtrace, CHUD tools.

To spell that out in longhand, it's very easy now to see how various features in the Operating System behave. And in the case of Time Machine, we see that it walks through the source file system, copying the files to the destination. When I last gave a talk to OxMUG on the subject of data availability, it was interesting to notice how the people who had smugly put their hands up to indicate that they performed regular backups became crestfallen when I asked the second question: and how many of you have tested that backup in the last month?

Time Machine is no different in this regard. It makes copies of files, and that's all it does. It doesn't check that what it wrote at the other end matches what it saw in the first place, just like most other backup software doesn't. If the Carbon library reports that a file was successfully written to the destination, then it happily carries on to the next file. Just like any other backup software, you need to satisfy yourself that the backup Time Machine created is actually useful for some purpose.
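To make that concrete, here's a minimal sketch of the kind of spot-check I mean: walk the source tree and compare each file's checksum against the copy in the backup. The /tmp paths, the demo data and the use of cksum are illustrative assumptions, not anything Time Machine itself provides.

```shell
#!/bin/sh
# Sketch: spot-check a backup by comparing per-file checksums.
# The /tmp paths and demo data stand in for a real source and backup.
SRC=/tmp/verify-src
DST=/tmp/verify-dst
mkdir -p "$SRC" "$DST"
echo "important data" > "$SRC/file.txt"
cp "$SRC/file.txt" "$DST/file.txt"        # pretend this was the backup

rm -f /tmp/verify-report
( cd "$SRC" && find . -type f ) | while read -r f; do
  a=$(cksum "$SRC/$f" | cut -d' ' -f1)
  b=$(cksum "$DST/$f" 2>/dev/null | cut -d' ' -f1)
  if [ "$a" = "$b" ]; then
    echo "OK       $f" >> /tmp/verify-report
  else
    echo "MISMATCH $f" >> /tmp/verify-report
  fi
done
```

Nothing clever, but it answers the question the backup software doesn't: does what's on the destination actually match the source?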

Sunday, October 28, 2007


Back in 1992, Robert X. Cringely wrote in Accidental Empires: How the boys of Silicon Valley make their billions, battle foreign competition, and still can't get a date [Oxford comma sic]:

Fifteen years from now, we [Americans] won't be able to function without some sort of machine with a microprocessor and memory inside. Though we probably won't call it a personal computer, that's what it will be.

Of course, by and large that's true; the American economy depends on microcomputers and the networks connecting them in a very intimate way. It's not obvious in 2007 just how predictable that was in 1992, as the "networks connecting them" had nothing like the ubiquity which is now the case. When "Accidental Empires" was written, the impact of a personal computer in an office was to remove the typewriter and the person trained to type, replacing both with someone who had other work to be doing, typing on a system thousands of times more complicated than a typewriter.

What's most interesting though is the (carefully guarded; well done Bob) statement that "we probably won't call it a personal computer," as that part is only partially true. All of the people who have TiVos, or TVs, also have a personal computer. All of the people who have mobile phones and digital cameras also have a personal computer. The people who have PlayStation 3s and Nintendo Wiis also have personal computers. In business, the people who annoy everyone else by playing with their palmtops in meetings instead of listening to what the amazingly insightful Cocoa programmer has to say are also wasting time trying to work out how to sync them with, yup, the personal computer they also have on their desk.

So the question to be asked is not why Cringely got it wrong, because he didn't, but why hasn't the PC already disappeared, to be completely replaced with the "it is a PC but we won't call it that" technology? Both already exist, both are pervasive, and the main modern use of both is remote publishing and retrieval of information, so why do we still tie ourselves to a particular desk where a heavy lump of metal and plastic, which can't do very much else, sits disseminating information like some kind of [note to self: avoid using the terms Oracle or Delphi here] groupthink prophet?

Thursday, October 25, 2007

OmniWeb 5.6 tip of the day

defaults write com.omnigroup.OmniWeb5 WebKitOmitPDFSupport -bool TRUE

Sorry, but it doesn't view properly and doesn't print properly either :-(.

Tuesday, October 23, 2007

The times, they mainly stay the same

bbum displays a graph of the market capitalization (he's American, so the z sticks) of a few of the computer companies, noting that barring surprises in after-hours trading, tomorrow (for Americans, again) the market will open with Apple being the biggest computer manufacturer on the planet. However, these figures fail to show something reasonably interesting.

What have IBM (up 24% y-o-y), HP (up 30%) or Dell (up 21%) done to endear you to their brand lately? If you're anything like me, then they've done nothing at all. Selling the same old Operating Systems on the same old hardware doesn't count as innovative. Compare them with Sun (up 13% year-on-year) or Apple (115%) and you'll see that there's basically no accounting for taste on the stock market. While Apple have been selling the shiny gadgets, Sun have been delivering the most observable operating environment on the planet and Dell have been doing, well, shit-all would be a polite phrase, and yet Dell have outstripped Sun in growing their stock price. In fact, HP have managed to blow their stock price up out of all proportion, while fighting scandals and the complete haemorrhaging of their management staff.

Saturday, October 13, 2007

Nice-looking LaTeX Unicode

Because there was no other single location with all of this written:

\usepackage{ucs} % Unicode support
\usepackage[utf8x]{inputenc} % UCS' UTF-8 driver is better than the LaTeX kernel's
\usepackage[T1]{fontenc} % The default font encoding only contains Latin characters
\usepackage{ae,aecompl} % Almost European fonts/hyphenation do a better job than Computer Modern

There are a couple of characters I need (Latin letter yogh, Latin letter wynn + capitals) which aren't known by UCS, and I don't yet know how to add them. But this is a pretty good start.
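For the record, stock inputenc lets you map otherwise-unknown code points yourself with \DeclareUnicodeCharacter; whether the utf8x driver honours exactly the same macro, and what the right substitute glyphs for yogh and wynn would be, are assumptions on my part, so treat this as an untested sketch rather than a recipe:

```latex
% Sketch: declare fallbacks for yogh and wynn (hex code points).
% The replacement glyphs here are crude placeholders, not proper forms.
\DeclareUnicodeCharacter{021C}{Z}  % LATIN CAPITAL LETTER YOGH
\DeclareUnicodeCharacter{021D}{z}  % LATIN SMALL LETTER YOGH
\DeclareUnicodeCharacter{01F7}{W}  % LATIN CAPITAL LETTER WYNN
\DeclareUnicodeCharacter{01BF}{w}  % LATIN SMALL LETTER WYNN
```

A proper solution would substitute real glyphs from a font that has them, but this at least stops the compile falling over.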

Saturday, September 15, 2007

Still trading as ClosedDarwin

It's not surprising, but while Apple's opensource page now includes a link to the iPhone software release (clicky the title), this only contains links to the WebCore and JavaScriptCore source, which is also available from the WebKit home page. While it is possible that the iPhone is distributed solely with software Apple can distribute without source, I wouldn't be surprised if there isn't just a teensy dollop of GPL code in there somewhere...

Wednesday, September 05, 2007

Old news

So the Inquirer thinks they've got a hot potato on their hands, with this "security flaw" in OS X. I've been using this approach for years (like, since NeXTSTEP): boot into single-user and launch NetInfo manually, then passwd root. Or in newer Mac OS X, nicl means you don't have to launch NetInfo.

Of course, if you give physical access to the computer without a Firmware password, then the 'attacker' may as well just boot from external media and do whatever they want from there. But the solution, as well as setting the Firmware password, is to edit the /etc/ttys file, changing the line:

console "/System/Library/CoreServices/" vt100 on secure onoption="/usr/libexec/getty std.9600"

to:

console "/System/Library/CoreServices/" vt100 on onoption="/usr/libexec/getty std.9600"

Now the root password is required in single-user mode (as the console is no longer considered a secure terminal).
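If you'd rather not hand-edit, the same change can be scripted; the sketch below operates on a throwaway copy of the line (the real edit would target /etc/ttys as root, and you should keep a backup first):

```shell
#!/bin/sh
# Sketch: strip the "secure" flag from the console line of a ttys file.
# Uses a demo copy in /tmp so it is safe to experiment with.
echo 'console "/System/Library/CoreServices/" vt100 on secure onoption="/usr/libexec/getty std.9600"' > /tmp/ttys.demo

sed 's/ on secure / on /' /tmp/ttys.demo > /tmp/ttys.new
# /tmp/ttys.new now has the console marked insecure; inspect it, then
# (on the real system) copy it over /etc/ttys.
```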

Sunday, August 26, 2007

Holding a problem in your head

The linked article (via Daring Fireball) describes the way that many programmers work - by loading the problem into their head - and techniques designed to manage and support working in such a way. Paul Graham makes the comparison with solving mathematics problems, which is something I can (and obviously have decided to) talk about due to the physics I studied and taught. Then I also have things to say about his specific recommendations.

In my (limited) experience, both the physicist in me and the programmer in me like to have a scratch model of the problem available to refer to. Constructing such a model in both cases acts as the aide-memoire to bootstrap the problem domain into my head, and as such should be as quick, and as nasty, as possible. Agile Design has a concept known as "just barely good enough", and it definitely applies at this stage. With this model in place I now have a structured layout in my head, which will aid the implementation, but I also have it scrawled out somewhere that I can refer to if, in working on one component (writing one class, or solving one integral), I forget a detail about another.

Eventually it might be necessary to have a 'posh' layout of the domain model, but this is not yet the time. In maths as in computing, the solution to the problem actually contains the structure you came up with, so if someone wants to see just the structure (of which more in a later paragraph) it can be extracted easily. The above statement codifies why in both cases I (and most of the people I've worked with, in each field) prefer to use a whiteboard rather than a software tool for this bootstrap phase. It's impossible - repeat, impossible - to braindump as quickly onto a keyboard, mouse or even one of those funky tablet stylus things as it is with an instantly-erasable pen on a whiteboard. Actually, in the maths or physics realm, there's nothing really suitable anyway. Tools like Maple or Mathematica are designed to let you get the solution to a problem, and really only fit into the workflow once you've already defined the problem - there's no adequate way to have large chunks of "magic happens here, to be decided at a later date". In the software world, CASE tools cause you to spend so long thinking about use-cases, CRC definitions or whatever that you actually have to delve into nitty-gritty details while doing the design; great for software architects, bad for the problem bootstrap process. Even something like OmniGraffle can be overkill; it's very quick, but I generally only switch to it if I think my boardwriting has become illegible.

To give an example, I once 'designed' a tool I needed at Oxford Uni with boxes-and-clouds-and-lines on my whiteboard, then took a photo of the whiteboard which I set as the desktop image on my computer. If I got lost, then I was only one keystroke away from hiding Xcode and being able to see my scrawls. The tool in question was a WebObjects app, but I didn't even open EOModeler until after I'd taken the photo.

Incidentally, I expect that the widespread use of this technique contributes to "mythical man-month" problems in larger software projects. A single person can get to work really quickly with a mental bootstrap, but then any bad decisions made in planning the approach to the solution are unlikely to be adequately questioned during implementation. A small team is good because with even one other person present, I discuss things; even if the other person isn't giving feedback (because I'm too busy mouthing off, often) I find myself thinking "just why am I trying to justify this heap of crap?" and choosing another approach. Add a few more people, and actually the domain model does need to be well-designed (although hopefully they're then all given different tasks to work on, and those sub-problems can be mentally bootstrapped). This is where I disagree with Paul - in recommendation 6, he says that the smaller the team the better, and that a team of one is best. I think a team of one is less likely to have internal conflicts of the constructive kind, or even to think past the first solution they reach consensus on (which is of course the first solution any member thinks of). I believe that two is the workable minimum team size, and that larger teams should really be working as permutations (not combinations) of teams of two.

Paul's suggestion number 4, to rewrite often, is almost directly from the gospel according to XP, except that in the XP world the recommendation is to identify things which can be refactored as early as possible, and then refactor them. Rewriting for the hell of it is not good from the pointy-haired perspective because it means time spent with no observable value - unless the rewrite is because the original was buggy, and the rewritten version is somehow better. It's bad for the coder because it takes focus away from solving the problem and onto trying to mentally map the entire project (or at least, all of it which depends on the rewritten code); once there's already implementation in place it's much harder to mentally bootstrap the problem, because the subconscious automatically pulls in all those things about APIs and design patterns that I was thinking about while writing the initial solution. It's also harder to separate the solution from the problem once a solution already exists.

The final sentence of the above paragraph leads nicely into discussion of suggestion 5, writing re-readable code. I'm a big fan of comment documentation like HeaderDoc or Doxygen, not only because it imposes readability on the code (albeit out-of-band readability), but also because if the problem-in-head approach works as well as I think, then it's going to be necessary to work backwards from the solution to the pointy-haired bits in the middle required by external observers, like the class relationship diagrams and the interface specifications. That's actually true in the maths/physics sphere too - very often in an exam I would go from the problem to the solution, then go back and show my working.

Saturday, August 25, 2007

Template change

I had to make some minor edits to the Blogger template used here anyway, so I decided to have a little monkey with the fonts. Here's what you used to get:

And here's what you now get:

I think that's much better. Good old Helvetica! I don't understand anything about typography at all, but I think that the latter doesn't look as messy, perhaps because of the spacing being tighter. Interestingly the title actually takes up more space, because the bold font is heavier. Marvellous.

Friday, August 24, 2007

Random collection of amazing stuff

The most cool thing that I noticed today ever is that Google Maps now allows you to add custom waypoints by dragging-and-dropping the route line onto a given road. This is great! I'm going to a charity biker raffle thing in Pensford next weekend, and Google's usual recommendation is that I stay on the M4 to Bristol, and drive through Bristol towards Shepton Mallet. This is, frankly, ludicrous. It's much more sensible to go through Bath and attack the A37 from the South, and now I can let Google know that.

Trusted JDS is über-cool. Not so much the actual functionality, which is somewhere between being pointy-haired enterprisey nonsense and NSA-derived "we require this feature, we can't tell you why, or what it is, or how it should work, but implement it because I'm authorised to shoot you and move in with your wife" fun. But implementing Mandatory Access Control in a GUI, and having it work, and make sense, is one hell of an achievement. Seven hells, in the case of Trusted OpenLook, of which none are achievement. My favourite part of TJDS is that the access controls are checked on pasteboard actions, so trying to paste Top Secret text into an Unrestricted e-mail becomes a no-no.

There does exist Mac MAC (sorry, I've also written "do DO" this week) support, in the form of SEDarwin, but (as with SELinux) most of the time spent designing policies for SEDarwin actually boils down to opening up enough permissions to stop things from breaking - and that stuff mainly breaks because the applications (or even the libraries on which those applications are based) don't expect to not be allowed to, for instance, talk to the pasteboard server. In fact, I'm going to save this post as a draft, then kill pbs and see what happens.

Hmmm... that isn't what I expected. I can actually still copy and paste text (probably uses a different pasteboard mechanism). pbs is a zombie, though. Killed an app by trying to copy an image out of it, too, and both of these symptoms would seem to fit with my assumption above; Cocoa just doesn't know what to do if the pasteboard server isn't available. If you started restricting access to it (and probably the DO name servers and distributed notification centres too) then you'd be in a right mess.

Saturday, August 18, 2007

It's update o'clock

In other words, it's "Graham remembers that someone needs to write the interwebs, too" time. The main reason I haven't written in a while is that I'm enjoying the new job, so it's satiating my hacking desires. No hacking projects = nothing to talk about. I can't really talk about the work stuff, either; this is unfortunate because I did some really cool KVC-cheating in a prototype app I wrote, and it'd be a good article to describe that.

I've more or less dropped off the GNUstep radar recently. I still need to talk to my employer's legal eagles in order to update my FSF copyright assignment, and then find something to hack on. Currently I only have a Mac OS X machine so I'd probably switch to some app-level stuff. I have half a plan in my mind for a SOPE app, but despite reading all the documentation I still haven't worked out how to get started :-|. The code isn't too much of a problem, I've used WebObjects before, but WebObjects has a 'hit this button to build and run your WebObjects code' button, and there's no description AFAICT for SOPE of what to put in the GNUmakefile to get SOPE code built and running, nor even whether you're supposed to provide your own main() or anything. Hmmm...I wonder if there's a sample project around...

Saturday, July 14, 2007

Ich habe mein Handy verloren

Ooops. Left my mobile phone behind in Germany when I came back from holiday. This may explain why some people who were expecting to hear from me in Germany didn't...anyway, could anyone who thinks I ought to know their phone number please either leave a private comment on this LiveJournal entry (all comments are private by default on that post), or e-mail me? Thank you! I promise to buy a filofax which I won't leave in foreign parts.

Thursday, June 21, 2007

The new netiquette

It used to be that netiquette was all about TURNING OFF THE CAPS LOCK, making sure that the subject matched the content, that you didn't go off the wall if someone didn't reply to your e-mail in a few minutes, that sort of thing. In fact, this (RFC1855) sort of thing. But now there are different netiquette requirements, and no obvious guidelines, nor seemingly any common practices. Now I'm just the kind of person who thinks that some de facto ruleset would be useful, so that everyone knows what to expect from everyone else.

For instance, take social networking sites like LinkedIn or Facebook. How well do you know someone before you 'add' them as a friend? Once met at a conference, once read their blog, cohabited for two years? Do you talk to them first, to let them know who you are and that you're not a crazy stalker? If someone adds you, and you don't know who they are, do you accept or reject by default? Do you ask them who they are? If they claim to have met you at $conference or in $pub, do you accept that, ask for a photo, or what?

Tuesday, June 19, 2007

A bit late for that, isn't it?

Just won the above-linked item on eBay: the Amiga SDK :-) I'm probably slightly late to make any money out of Amiga development, but I'm glad I finally get to tinker. Might have to crack open AROS if I get sufficiently involved...

I could probably write a three-page rant about how great the Amiga was and how badly Commodore stuffed up. But of course, none of it would be new ;-).

Wednesday, June 13, 2007

Apple and Google sitting in a tree, f-i-g-h...erm...t-i-ng

This really came out of a throwaway comment I made on Daniel, but it seems popular to pick apart every last iota of Steveness from the WWDC keynote, and I'm nothing if not popular. So here we go.

What is WebClip? In fact, that's not really the question I want to be asking. We know what WebClip is; it's a technology which lets users see only the bits of web pages that those users want to see. The real question is what does that mean? Well, I know which bits of a web page I usually want to see; they're the bits which aren't adverts.

I'm going to go out on a bit of a limb, and guess that the way WebClip works (I'm not a WWDC bod so I don't have any more access to the new stuff than anyone else; in fact I haven't even downloaded the Safari 3 beta) is by observing which DOM elements are within the clipped region, and downloading only media relevant to those elements. If that's the case, then you can ignore the fact that the ads on the page don't get seen; they don't even get downloaded. Therefore if I'm reading, say, the Dilbert strip in a WebClip, I'm effectively getting free Dilbert, even more free than the free website because I'm not upping their ad impression count.

One thing I noticed about the various sites that Steve clipped is that as far as I can remember, none of them features 'Ads by Google'. It would be quite embarrassing for Apple's CEO to demonstrate how to reduce revenue for one of Apple's most prominent board members in a world-broadcast keynote talk. As over 99% of Google's revenue is from online ads, and Eric Schmidt (CEO of Google) is on the Apple board, that is exactly what Steve was showing us, though. There'll be a doughnut fight back at Infinite Loop over that, I expect.

Monday, June 11, 2007


Maybe it's just me who gets annoyed by teeny-tiny miniaturised views which are completely illegible. Even so, I've just uploaded an article I wrote on Miniwindows which can be used in any OpenStep implementation such as Cocoa or GNUstep. It's based on the Hillegass TypingTutor example, but doesn't really use any code from that and isn't (I hope) otherwise reliant on that context, so it should be possible to see what's going on even if you haven't read Hillegass. Which you should ;-)

Wednesday, June 06, 2007

Moving on

There has been some cat-escaping-bagness, which is mainly my fault, but now that it's all official I'm going to 'announce' it myself: I've got a new job! From the end of July, I'll be working at Sophos as Senior Software Engineer, Mac (the post is still up at the linky in the title, for the moment).

This looks like being an exciting time - I've been enjoying the ObjC hacking I do with Brainstorm and this will be an opportunity to do even more of that, and the move from services to user-installed apps will bring its own changes and new experiences.

Erm, that really is all for now. More info as it becomes available, and all that.

Saturday, May 12, 2007

A bit of backup script

Good news - there's a handy tool in OS X called wait4path which can help when writing timed scripts to backup to removable media.

Bad news - it works slightly esoterically, at least in Tiger: even if the path is already present, it will still wait for another mount kevent before exiting. It should therefore be used in a script like this:


if [ ! -d /Volumes/Backups ]; then
    echo "waiting for backup volume..."
    /bin/wait4path /Volumes/Backups
fi

# do some backups

Note, however, that if you do this in a crontab job it could potentially wait for a very long time, so you should wrap all that with a /var/run style semaphore.
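A minimal version of that wrapper might look like the following sketch; the lock path and the proof file are illustrative (a root cron job would more naturally keep its pid file under /var/run):

```shell
#!/bin/sh
# Sketch: guard a cron-driven backup with a pid-file lock, so that one
# run stuck in wait4path doesn't pile up more copies behind it.
rm -f /tmp/backup.ran            # demo only: reset the proof file
LOCK=/tmp/backup.lock            # a root job would use /var/run/backup.pid

if [ -e "$LOCK" ]; then
  echo "backup already running or stuck; lock $LOCK exists" >&2
  exit 0
fi
echo $$ > "$LOCK"
trap 'rm -f "$LOCK"' EXIT INT TERM

# ... the wait4path check and the backup itself would go here ...
echo "backup ran" > /tmp/backup.ran
```

The trap removes the lock on any normal or signalled exit; a crashed machine can still leave a stale lock behind, which is the usual weakness of this scheme.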

Friday, May 11, 2007

Official Google Mac Blog: Measuring performance of distributed notifications

The linked post measures the performance of the distributed notifications used by Google Update: "Just how expensive is it? How many notifications can you broadcast per second? As with all Google client products, we want to be good citizens and not bog down the client machine."

A noble sentiment, but dear Google, answer me this: just how many times per second is each app going to be checking for updates? When does this become an important factor, and not a question of premature optimisation? They decided to go for distributed notifications instead of distributed objects, which seems reasonable - not because of the overhead issues (in fact a DO is probably a lot cheaper, if written properly), but because of the kind of information they're trying to get through this IPC.

Thursday, May 03, 2007

Bye bye data, hello...the same data

Of course it happens to everyone, and yesterday evening it happened to me...my home directory became inaccessible. What seems to have happened is that the FileVault image containing my ~ became corrupted upon unmounting (though notably, I didn't do the 'recover space' thing the last time I logged out before the failure, so it should just have been a straightforward unmount). So the simplest recovery route was to delete the user, re-create it and then recover my data from the backups. I don't keep backups of the Library area, so I lost a few preference files and of course have had to trawl around my email looking for licence keys and the like.

For the moment I've set up the replacement user without FileVault, and am using encrypted disk images for specific data I'd rather keep thus protected. This makes backups harder - I keep my backup drive unencrypted as it doesn't come out with me, so I now need to come up with a script to back up my home dir except for the encrypted images, then mount the images and back up their content, then unmount them. This means that the backup will need to be manually triggered so that passwords don't have to be kept anywhere...or I write my own backup tool, which uses passwords stored in a keychain kept outside the target user account; and then I need to make sure that keychain is also recoverable ;-).
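Here's a rough sketch of how that script might be shaped: copy everything except the encrypted images, then (only where hdiutil exists, i.e. on a Mac) mount each image, copy its contents, and unmount. All the paths and names are made up for illustration, and hdiutil will prompt for each image's passphrase - which is exactly why the job has to be triggered manually.

```shell
#!/bin/sh
# Sketch: back up a home directory, treating encrypted .sparseimage
# files specially. Demo paths stand in for the real home and drive.
SRC=/tmp/demo-home
DST=/tmp/demo-backup
mkdir -p "$SRC" "$DST"
touch "$SRC/notes.txt" "$SRC/secrets.sparseimage"   # demo data

# 1. Everything except the encrypted images goes straight across.
( cd "$SRC" && find . -type f ! -name '*.sparseimage' ) | while read -r f; do
  mkdir -p "$DST/$(dirname "$f")"
  cp "$SRC/$f" "$DST/$f"
done

# 2. Each image is mounted, its contents copied, then unmounted.
#    (Skipped entirely on systems without hdiutil.)
if command -v hdiutil >/dev/null 2>&1; then
  for img in "$SRC"/*.sparseimage; do
    hdiutil attach "$img" -mountpoint /tmp/imgmount   # prompts for the passphrase
    cp -R /tmp/imgmount "$DST/$(basename "$img" .sparseimage)"
    hdiutil detach /tmp/imgmount
  done
fi
```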

A lot of my data was completely unaffected - work stuff is typically stored in subversion on their servers (as well as another local copy on my work laptop), my email is all on remote servers, my calendar is served by and so on. There are some improvements I could make - I could probably use an LDAP server and abxldap to remotify my contact list, and offers subversion hosting which I'm currently not making use of. But it happens that next Tuesday, I'll be talking about data security at the Oxford Mac Users Group, so I will expand on this tale in full and gory detail ;-). St. Cross College, Tuesday 8th May, 7:30 pm.

Update 20070503T1653Z+0000: actually, things look a little more serious than simply a trashed sparseimage:

mabinogi:~/Desktop leeg$ hdiutil attach OmniDazzle-1.0.1.dmg
load_hdi: timed out waiting for driver to load
load_hdi: timed out waiting for driver to load
load_hdi: timed out waiting for driver to load
load_hdi: timed out waiting for driver to load
load_hdi: timed out waiting for driver to load
2007-05-03 15:41:35.535 diskimages-helper[718] ERROR: unable to load disk image driver - 0xE00002C0/-536870208 - Device not configured.

Good news is that when that gets fixed, my old homedir will start working again. Bad news is: um, it looks fairly messed up to me :-(

Wednesday, April 11, 2007

All the more reason to like FOSDEM

So it seems that my half-attendance at FOSDEM paid off more than I could have hoped, as I won a year's subscription to GNU/Linux magazine. The publication is francophone, so this will be a good chance to improve my command of la langue des grenouilles ;-).

Thursday, March 15, 2007

Summer of code

GNUstep has been approved for this year's Google Summer of Code. The title link goes to the GNUstep wiki page outlining possible projects, but I'm sure that if a student had another idea you'd be welcome to talk about it on the gnustep-discuss mailing list, and probably get a mentor!


When most Perl developers (I believe there are still one or two in existence) talk of the "cool one-liner" that they wrote, what they actually mean is that they grabbed a crapload of packages from CPAN, invoked a few use directives and then, finally, could write one line of their own code, which happens to invoke a few hundred lines of someone else's code that they have neither read nor tested.

Modulo testing, my short script (below) to convert mbox mailboxes to maildirs is exactly like that. While it has three lines of meat, these call upon the (from what I can tell, fantastic) Mail::Box module to do the heavy lifting. That package itself is less svelte, at 775 lines of Perl, which is the interface to a C bundle weighing in at 92k (single-arch). But never mind, I still wrote a three-liner ;-)

#!/usr/bin/perl -w

use strict;

use Mail::Box::Manager;

@ARGV = ("~/mail", "~/Maildir") unless @ARGV;

# open the existing (mbox) folders
my $mgr = new Mail::Box::Manager;
my ($srcPath, $dstPath) = @ARGV;
# expand tildes and stuff
$srcPath = glob $srcPath;
$dstPath = glob $dstPath;

opendir MBOXDIR, $srcPath or die "couldn't open source path: $!";

foreach my $file (grep !/^\./, readdir MBOXDIR) {
    my $mbox = $mgr->open(folder => "$srcPath/$file",
                          folderdir => "$srcPath");
    # open a maildir to store the result
    my $maildir = $mgr->open(folder => "$dstPath/$file",
                             type => "Mail::Box::Maildir",
                             access => 'rw',
                             folderdir => "$dstPath",
                             create => 1);

    $mgr->copyMessage($maildir, $mbox->messages);
}

closedir MBOXDIR;

Monday, February 26, 2007

FOSDEM / GNUstep photos

Just came in on #gnustep. Many photos of the GNUstep booth, dev room and of course the famous GNUstep dinner.

Sunday, February 25, 2007

Après ça, le FOSDEM

It appears that I'm sat in Terminal B of l'Aéroport National de Bruxelles, waiting for my flight to board. While there are wireless networks around, the ones to which I can connect seem not to be offering much in the way of DHCP, so this update will come in later than it was written (which was at 13:35), as I will probably post it while I'm on the bus between Heathrow and Oxford. [Update: actually not until I got home]

Irrelevancies such as that aside, I had a great FOSDEM! In fact, a great half-FOSDEM, as I did my tourism today. I met a load of people (of which more below), went to some inspiring talks and discussed many exciting and interesting projects (in multiple languages - I spoke to one person in English, Dutch and French, sometimes in the same sentence). It was a good exercise to see who wasn't present as much as who was - for instance, Red Hat didn't have an official presence although the Fedora Project had a booth (next to the CentOS one ;-); similarly, Novell (one of the big sponsors) was absent but the openSUSE project had Yet Another Small Table. Sun were conspicuously present in that the OpenSolaris and OpenJDK table was being manned seemingly by Sun's salespeople rather than user group members...although maybe that's just my interpretation.

The overriding feeling I got was that the conference was running on l'esprit d'anarchie, and that the resulting adrenaline and enthusiasm drove the conference on. The keynote speeches were really the only regimented aspect of FOSDEM - a necessity given the size of the auditorium, which was packed to the rafters with FLOSSers. I didn't go to the final keynote on open-sourcing Java as I was manning the GNUstep booth, but I learned a lot from the talks on software patents and Free Software, and Jim Gettys' description of the technical challenges in creating the OLPC was very insightful.

So, GNUstep. GNUstep, GNUstep, Étoilé. For a start it was great to meet all the other GNUsteppers, and have some good discussions and debates (as well as some good moules frites and Kwak beer). For anyone who doubts that GNUstep is still alive, the dev room at FOSDEM is one place to allay such suspicions, with many developers, designers, users and supporters presenting their ideas to each other, asking each other questions and generally contributing to the GNUstep camaraderie. Even an impromptu troll by Miguel de Icaza at the GNUstep booth wasn't enough to make us all throw ProjectCenter away and buy a book on C# ;-). Presentations on GNUstep-make v2 (which I've described here before...), the Cairo graphics back-end (which I don't think Fred Kiefer was expecting to present, but made a very fine job of it anyway) and third-party use of GNUstep were all very useful and well-received...I expect today's presentations were too, but I didn't get to go to them :-(. [Instead, I was significantly underwhelmed by the sight of the Manneken Pis.]

Tuesday, February 20, 2007

Fairly cool update to GNUstep

GNUstep-make now supports arbitrary(-ish) filesystem layouts. While the default is still the /usr/GNUstep layout with the various domains, one of the bundled alternatives is to put everything in FHS-compliant locations. Once that hits a release (which I believe will be called gnustep-make 2.0), that should make newbie users and distributors much more comfortable with GNUstep. I also provided a NeXT-ish layout which doesn't clone the NeXT directory hierarchy, but rather mimics it sensibly.

My work laptop just earned itself a reinstall (I'd been uncomfortable with CentOS for a while, but the facts that hotplug isn't configured properly and every time I 'yum update' I have to fix a handful of drivers drove me over the edge), so when I re-create my GNUstep installation I'll do it with the new-style make. I'm currently wavering on the side of installing Midnight BSD, but I might wimp out and dual-boot Ubuntu ;-)

Thursday, February 15, 2007

Where are we going?

Recently, XML seems to have been playing an important role for me...I've been working with various XML-RPC frameworks, reviewing a book on XHTML (actually, a book on HTML 4 which occasionally reminds you to close elements) and also dealing with DocBook. For some reason, this means when I read something like Ten predictions for XML in 2007 I feel like I'm in a position to comment. Don't worry, I don't want to cover all ten points...

If I had to choose one big story for next year, it would be the Atom Publishing Protocol (APP). [...] APP is the first major protocol to be based on Representational State Transfer (REST), the architecture of the Web. Most systems to date have only used a subset of HTTP, usually GET and POST but not PUT or DELETE. Many systems like SOAP and Web-based Distributed Authoring and Versioning (WebDAV) have been actively contradictory to the design of HTTP. APP, by contrast, is working with HTTP rather than against it.

Well, this suits me...a complete bi-directional implementation of HTTP would certainly be pleasant to use. But the point is that you already can do bi-directional transactions with these ugly hack interfaces, which importantly are already in use. This isn't like moving from rcp to HTTP to do your file uploads, this is like moving from HTTP to a version of HTTP which IE doesn't recognise for doing your content publication. WebDAV might be ugly but it already allows people to run their Subversion or calendar (yes, I know...) servers over "the web". APP is good but the activation energy may be too high.
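To make the "working with HTTP" point concrete, here's a minimal sketch (the entry URL is hypothetical, and nothing is actually sent - openConnection() only builds the request object) of how an APP-style client maps create/read/update/delete onto the full HTTP verb set, where SOAP tunnels everything through POST:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppVerbs {
    public static void main(String[] args) throws IOException {
        // Hypothetical APP entry resource; no network traffic happens here,
        // since the connection is never actually opened.
        URL entry = new URL("http://example.com/blog/entries/42");
        HttpURLConnection conn = (HttpURLConnection) entry.openConnection();

        // A RESTful client updates a resource in place with PUT...
        conn.setRequestMethod("PUT");
        System.out.println(conn.getRequestMethod());

        // ...and removes it with DELETE, rather than tunnelling
        // every operation through POST.
        conn.setRequestMethod("DELETE");
        System.out.println(conn.getRequestMethod());
    }
}
```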

2007 is the make-or-break year for the Semantic Web.

s/make-or-//, I believe. I've talked to two people who considered themselves important in the world of the semantic web, and have attended a talk by Sir Tim on the subject. I still have yet to see anything beyond so-called blue skies proposals and views on how much better the web will be once everyone embraces semantics, which none of them appeared to have done. At least, there's nothing yet which has convinced me that Semantic Web is some kind of killer app, and that what it can do can't already be done with a bit of XML/XSLT and some good schemas. Maybe that's the point, that traditional web will just slide into Semanticism by dint of XQuery, XProc and the rest of the X* bunch. I don't think so.

2007 will be the first year in which almost every significant browser fully supports XSLT 1.0 [...] I predict that this will render many of the debates about HTML 5 and XHTML 2 moot. Sites will publish content in whatever XML vocabulary they like, and provide stylesheets that convert it to HTML for browser display.

Yup, this couldn't come too soon. I sorely hope that HTML 5 will be still-born; redundant as soon as it becomes available. I also hope that document-generating frameworks will become better at generating valid markup in the future (which is at least easier to do with XML than SGML)...of course, if browsers really do make a good job of supporting XSLT, then the frameworks won't really be able to generate invalid markup, as they'll be emitting documents in their own vocabulary.

Apple will release Safari 3 along with Leopard. Although it will focus mostly on Apple proprietary extensions, Safari 3 will add support for Scalable Vector Graphics (SVG) for the first time.

That's not much of a prediction, unless predicting things means reading the WebKit changelog. What would be good is for a couple of the more popular also-ran browsers to support MathML (I think Firefox already does in some fashion, WebKit is at the "we're thinking about it" stage, and I really don't know about any others) so that it has a chance of being adopted. I speak here as an ex-physicist who was annoyed at seeing academics putting DVI or PostScript files on the web, or (and this is just as bad if not worse) HTML with graphics of the equations. If LaTeX could be transformed into HTML+MathML and published on the web then we'd be somewhere approaching the original goal of the Nexus.
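To give a flavour of what HTML+MathML publication actually looks like, here's E = mc² marked up by hand in presentation MathML (any LaTeX-to-MathML converter would emit something along these lines):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <!-- E = mc^2 -->
  <mi>E</mi>
  <mo>=</mo>
  <mi>m</mi>
  <msup>
    <mi>c</mi>
    <mn>2</mn>
  </msup>
</math>
```

A browser with MathML support renders this as a typeset equation rather than a bitmap, and the semantics stay searchable and copy-pasteable.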

Wednesday, February 14, 2007

FAQ update

Just a warning about @defs not necessarily being a good idea, some code from Mike Ash to add polish to the forwarding example, and I'm sure I added something else. Oh yes, the fact that _cmd is the name of the SEL argument in method definitions.

I'm quite inclined to refactor the FAQ (because the source is basically the XHTML), as something like a DocBook FAQ Article, which might explain any (further) hiatus in updates. I haven't started working on this yet but it seems more attractive, as I look at the horrendous car crash of formatting produced by a combination of different text editors I've been using.

Thursday, February 08, 2007

Why Java is so damned lame, part exp(I*M_PI)

Gah. Back when I used to work for Oxford University, I had to do the occasional bit of Java programming for a WebObjects app. Being quite a bit more familiar with ObjC than with Java, I always found this a bit of a headache...partly the way Java Foundation is semi-bridged with the "real" Java API meant I was constantly referring to my Tiger book (Java in a Nutshell, not Mac OS X Tiger), and partly because ironically Java required a lot more mystical casting voodoo than ObjC...seriously, if I never see the phrase Session session=(Session)session() again, I'll be a happy man.

Today's "gah, why can't Java be as easy as ObjC" moment came courtesy of an algorithmic problem I was attacking at TopCoder, just for the hell of it. Problem is, to get to their problem definitions you have to use their applet thingy, and while I was there I decided that I may as well type the code into their thingy; after all, Java's not that bad, is it? [They also accept VB, C# and C++, but my recollection of the C++ class syntax is slim and my knowledge of the STL is slimmer...I do know enough to try and avoid it though.] I think their problem definitions are proprietary, but suffice it to say I wanted the ability to compare two strings, returning equal if the strings could be made the same by popping any arbitrary number of characters off the front and shifting them onto the back (i.e. treating them like rings of characters).

Well, that's simple isn't it? In ObjC I'd subclass NSString, override isEqual: and Robert is your parent's sibling. I even know how to do that in Java:

class CircularString extends String {  // error: cannot inherit from final java.lang.String
    public boolean equals(Object otherObj) {
        // compare the two strings as rings of characters
    }
}

You're not allowed to do that, because for some unfathomable reason Gosling and the boys at Oak decided to inflict on us another voodoo keyword, final. This seems to have one purpose in existence: to stop me from subclassing the String class. In the words of Points of View, Why oh why oh why oh why?
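For what it's worth, the rotation test itself doesn't need subclassing at all. A sketch of the usual trick (class and method names are mine): b is a rotation of a exactly when the two have the same length and b occurs somewhere inside a concatenated with itself.

```java
public class RingCompare {
    // true if b can be produced from a by popping characters off the
    // front and shifting them onto the back (equality as rings)
    static boolean ringEquals(String a, String b) {
        return a.length() == b.length() && (a + a).contains(b);
    }

    public static void main(String[] args) {
        System.out.println(ringEquals("abcde", "cdeab")); // true
        System.out.println(ringEquals("abcde", "abced")); // false
    }
}
```

No voodoo keywords required, although it does mean writing a free-standing helper rather than the polymorphic equals override that ObjC would have allowed.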

Tuesday, January 30, 2007

TGD - expanding the field

Some primarily stochastic thought processes which occurred when I tried to apply Richard Dawkins' hypotheses to the Æsir. If your default browsing font doesn't contain a glyph for the ligature, well, tough ;-)

I suppose one of the first things to note is that whereas Xtianity is considered a religion, the Æsir and Vanir (the Norse pantheon) are thought of more often as folklore or mythology.[1] In fact, upon reading the Eddas it's easy to have the impression that the sagas of Odin, Thor, Loki and chums have the same qualities as those associated with, for instance, Siegfried (of Das Nibelungenlied fame), King Arthur or Finn McCool. Essentially, the gods have the air of being erstwhile real blokes (and of course blokesses), who have accumulated stories, feats and powers as people seek to glorify them, in order that when they later claim to be descended from same they can hope to persuade people of the existence of said powers.[2]

What I find interesting is that the only difference between a quirky set of historically interesting tales and gospel truth is how many people believe in what's said. For instance, it would be easy to apply the same distinction above between religion and folklore to the Roman pantheon, which equally was a major European religion relegated to providing saints and fables once Constantine got splashed in the font. Of course, to do so would be to ignore that there were multiple pre-Christian religions in the Roman Empire. Not merely in the same way that the Norse posh nobs worshipped Odin and the thains worshipped Thor, there were actually completely different mythological universes. I'm going to choose one, completely at random.

Between the second century BC and the fourth (maybe fifth) century AD, a particularly popular mythos in the Empire was Mithraism.[3] If there are any modern Mithraists, I don't know about it. Which is not surprising, considering how wacky their religion was. [Update: apparently some Zoroastrians still venerate Mithras.]

Mithras was supposed to have been born around 270BC to a virgin Mother of God (the date of the celebration of his birth was December 25th). He was worshipped as a member of a trinity, as the mediating force between the heaven and earth. In fact, heaven was not only the celestial abode of God but also the place where atoned souls would go when they died, the true believers being absolved of their earthly sins. Those less fortunate were condemned to an infernal hell. Initiates (ceremonies were closed affairs, available only to men who had performed the appropriate rites) were baptised, and Sundays were a sacred time when the Mithraists ate bread, representing the body of their God, and drank wine, representing his blood; these were symbolic of the final supper he shared with his followers before ascending to heaven in about the 64th year of his life. Along with Odin and Osiris, he is supposed to have died and been resurrected before his final ascension.

You'd never get away with that rubbish these days, which is why this is clearly a deluded heathen folk tale as opposed to, um. You can clearly see why Dawkins didn't talk about this one in TGD...

[1] Actually, there is a religion with such a pantheon, called the Ásatrú - the word is Icelandic for Æsir faith. Despite widespread confusion, none of the major organisations of this faith are actually neo-Nazis or supremacy groups.

[2] I suppose this makes Finn and Aragorn the same being.

[3] Just through etymology I am reminded that I haven't yet covered Jainism. I need to.

Monday, January 29, 2007

That syncing feeling

So, iTunes tells me my iPod is up-to-date, and that it won't copy a few songs to my iPod because the iPod software needs updating first. The word which springs to mind is "erk".

Friday, January 26, 2007

Scary stuff

Possibly the scariest diagram anyone will ever have to look at. It's even less penetrable than that Eric Levenez history of unix thing.


As I wrote the @interface to an object today, I found myself wanting for but one thing:

- (NSArray <MyProtocol> *) foo;

Where - as if you hadn't guessed it - the pointy bracket bit (the lengths to which I go to avoid typing out HTML entity names) would specify that all of the objects in the array returned by -foo conform to @protocol(MyProtocol). I then realised that this wouldn't be quite as useful as I might think, but also decided that it wouldn't be too hard even on the existing ObjC runtimes to come up with a nightmare function such as:

Protocol *objc_class_to_protocol(Class cls);

...therefore meaning that my hitherto unattainable pipe dream:

- (NSArray <MythicalNSStringProtocol> *) foo;

may indeed be somewhat closer to realisability. Of course, with all of this being compile-time type checking (as with the similar beasties on Java) there would be no need to frob the runtime.

Update 2007-01-26T13:22: yes, I realise that the snippets above read "an NSArray or subclass, which also conforms to the MyProtocol protocol". I also know what I mean.
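For comparison, the "similar beasties on Java" mentioned above do roughly this today. A sketch (the interface and class names are invented for illustration) showing that the conformance check is entirely compile-time, just as proposed - the runtime never sees the type parameter:

```java
import java.util.ArrayList;
import java.util.List;

public class ProtocolArrays {
    // Hypothetical stand-in for @protocol(MyProtocol)
    interface MyProtocol { String describe(); }

    static class Thing implements MyProtocol {
        public String describe() { return "a thing"; }
    }

    // The Java analogue of the wished-for "- (NSArray <MyProtocol> *) foo;":
    // every element is statically known to conform to the interface.
    static List<MyProtocol> foo() {
        List<MyProtocol> items = new ArrayList<>();
        items.add(new Thing());
        // items.add("a bare string");  // rejected at compile time
        return items;
    }

    public static void main(String[] args) {
        System.out.println(foo().get(0).describe());
    }
}
```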

Friday, January 12, 2007

Eating one's own dog food

I feel foolish for having made this error (especially after having so patiently explained how this stuff works on Mach), but today I did it. I reported the amount of free memory on a Linux system as being the amount reported by free as free.

My own opinion on this is that I suffer from a view of hardware management which was tainted by using micros like the Dragon 32 (a close cousin of the Tandy TRS-80 Color Computer, for my American readers) and the Amiga, where there were basically two types of memory usage: yes and no. A particular block was either in use by the system, or it wasn't. On the Dragon 32 it was even easier than that, of course; all bytes were available for reading and writing, but what happened was context-dependent. Also, because this was the MC6809E, whether such a peek/poke made sense depended on whether you were trying to hit an I/O address, and on what was plugged in. The Amiga had a particularly lame memory allocator which quickly sucked performance like a vacuum of performance suckage +2; but a byte of RAM was either in use or it was available.

But I digress. The point is, that such a simple view of memory availability is no longer sensible but it's hard for me to think around it without a lot of work, just as it was hard for me to become a programmer after I'd been taught BASIC and Pascal. If I were involved in UNIX internals more (and indeed that would be fun, although I think maybe Linux wouldn't be my first choice to open up) I'd probably be able to think about these things properly, just as I had to throw myself into C programming in a big way before I lost my BASIC-isms.

For the record (and so that it looks like this post is going somewhere), both operating systems have an intermediate state for RAM to be in between "used" and "not used" (where I'm ignoring kernel wired memory, and Linux kernel buffers). On Mach, there's the "inactive" state which I've already described at el linko above. On Linux, it's used as an I/O cache for (mainly disk, mainly read-ahead) operations. This means that Linux will automatically take almost all (if not all) of the memory during the boot process. The way that inactive memory gets populated on Mach means that on that system (e.g. Darwin) actually the amount of free memory starts large and inactive starts small, but over time as the active and inactive counts go up, the free count goes down, and it's rare for memory to be re-freed. On both systems, free memory is really "memory it wasn't worth you buying" as it's not being put to any use at all.
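To make the arithmetic behind my mistake concrete: on the Linux of this era, the number worth reporting is roughly free plus buffers plus cached from the Mem: line of free's output, since the cache is reclaimed on demand. A sketch (the column layout is the old-style one - total, used, free, shared, buffers, cached - and the sample line is made up):

```java
public class RoughlyFree {
    // Sum the "free", "buffers" and "cached" columns of an old-style
    // `free` Mem: line; columns are: total used free shared buffers cached.
    static long reclaimable(String memLine) {
        String[] f = memLine.trim().split("\\s+");
        return Long.parseLong(f[3]) + Long.parseLong(f[5]) + Long.parseLong(f[6]);
    }

    public static void main(String[] args) {
        // Invented sample: only 200 is "free", but another 400 is just cache
        // the kernel would hand back under memory pressure.
        System.out.println(reclaimable("Mem: 2000 1800 200 0 100 300"));
    }
}
```

Reporting only column three, as I did, understates what's actually available by the whole cache.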