Outstanding!: December 2008 Archives


December 31, 2008

  • My insurance company (State Farm) bills home insurance on a yearly basis but car insurance on a bi-yearly basis. They actually tell me that they can't bill on a yearly basis. I asked what would happen if I were to send in the whole year's payment: "I'd have to research that. We'd probably give you a refund."
  • HFS+ is case preserving but case insensitive. What the heck?
  • The Roku is great, except that in the logical conclusion of modern A/V gear, it's 100% useless without the remote, just a flat plastic console. Outstanding!
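The HFS+ item above is easy to see for yourself. A minimal demo (the filenames are invented; the verdict depends on the filesystem you run it on):

```shell
# Create a file with mixed case, then probe it with the "wrong" case.
# On stock HFS+ the lowercase lookup succeeds (case-insensitive) while
# ls still shows the capitalization you originally typed (case-preserving).
tmpdir=$(mktemp -d)
touch "$tmpdir/ReadMe.txt"
preserved=$(ls "$tmpdir")          # the stored name keeps its case
if [ -e "$tmpdir/readme.txt" ]; then
    verdict=case-insensitive       # what stock HFS+ reports
else
    verdict=case-sensitive         # what UFS or case-sensitive HFSX reports
fi
echo "$preserved lives on a $verdict filesystem"
rm -r "$tmpdir"
```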

December 18, 2008

This is sort of IETF inside baseball, but also a good example of how things can go really wrong even when you're trying to do the right thing. As you may or may not know, the Internet Engineering Task Force (IETF) is the standards body responsible for most of the Internet standards (TCP, HTTP, TLS, ...) that you know and hate. The IETF is a non-membership organization and participants aren't compensated by the IETF for their contributions. Moreover, most IETF participants are working on IETF standards as part of their job. This all makes the copyright situation a bit complicated, since, at least in the US, companies tend to own work you do for them in the course of your employment.

The IETF has opted to deal with this situation with a combination notify-and-attest model, which works like this. There are three main ways in which people submit "contributions", i.e., text for documents, comments, etc., to the IETF:

  • As actual documents ("internet-drafts").
  • As mailing list messages.
  • As comments at IETF meetings.

The first case is the clearest: every I-D submission is required to have a boilerplate license grant attached to it, or rather a reference to a license grant. It looks (or at least looked until recently) something like this:

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

That's the attest part. But what about submissions on mailing lists, stuff said at the mic at meetings, etc.? It's not really practical to expect people to start every comment with a copyright statement. Instead, the IETF has a general policy on submissions which you're given a copy of when you sign up for an IETF meeting or join a mailing list. (This policy is colloquially called "Note Well" after the title of the document). Contributing once you've read the statement is deemed to be an acknowledgement and acceptance of the policy. Note: I'm not a lawyer and I'm not taking a position on whether this will hold up in court. I'm just reporting how things are done.

OK, so you have to agree to some license terms. But what are those terms? Until recently (confusingly, the date is a bit uncertain), they were approximately that the text in your document could be reused for any IETF purpose, e.g., it could be republished, derivative works could be prepared for the purpose of doing revisions, etc. So far so good. Once documents were in the IETF system you could pretty much do anything IETFy with them, since you could safely (at least that's the theory) assume that the authors had granted the appropriate rights. What you couldn't do, however, was take text out of the documents and use it to prepare documents for some other standards body such as OASIS, ITU, etc. Anyway, IETF decided that it was a good idea to fix this and so a new set of license terms was prepared which involved granting this additional set of rights (the details are actually quite a bit more involved, but not really that relevant). Moreover, the new rules required IETF contributors (i.e., document authors) to attest that they had the right to submit the document under these new terms.

This last bit is where things started to go wrong. Consider what happens if you want to prepare a revision of some RFC that was written before the rule change. Ordinarily, you could do this, but now you need to submit under the new rules, and moreover you need to attest that you've obtained the necessary permissions to do so. But since all you know is that the document was submitted under the old rules, you need to go back to every contributor to the original document and get them to provide a more expansive license grant. That's fairly inconvenient already, but it gets worse. IETF doesn't really keep records of who contributed which piece of each document (remember: if it's in the system it was supposed to be automatically OK), so you don't actually know who you're supposed to contact. Even if you just take the list of people who were acknowledged in the document, this can run to tens of people, some of whom might have changed employers, died, or whatever. So, there's a reasonable chance that some documents can't practically be revised under these terms, and some IETFers are already avoiding contributing documents they would otherwise have submitted. The IETF hasn't totally ground to a halt yet, presumably due to some combination of virgin submissions, participants unaware of the issue, and participants aware of the issue but assuming they're not going to get caught.

Unfortunately, this didn't quite get figured out (for some unknown reason, people aren't really excited about spending a lot of time reading licensing agreements) until it was too late and the terms were already in effect. Next step: figure out how to unscrew things. Outstanding!


December 14, 2008

The thing I love about the Mac is how it just works. Take today (well, really the whole weekend) for example.

For a variety of reasons, I decided it was time to use an encrypted filesystem on my laptop. The natural choice here is FileVault, which a little net research suggests is imperfect, but is, after all, what Apple provides, thus avoiding contaminating a perfect Apple artifact with any un-Jobslike software. That said, I'm not completely crazy, so on the advice of counsel I decided to proceed deliberately:

Step 1: Take a backup
Since encrypted filesystems tend to have less attractive failure modes than ordinary filesystems, it seemed like a good idea to take a backup. Originally, my plan here was to use Time Machine (Apple product, remember), but when I actually went to run it, performance was rather less than great. I suspect the problem here is that it's working file by file because it needs to be able to build a data structure that allows reversion to arbitrary time checkpoints. In any case, I got impatient and aborted it, figuring I'd move back to regular UNIX tools. Unfortunately, dump doesn't work with HFS/HFS+, so this left me with tar. Tar is generally quite a bit slower than dump because it works on a file-by-file basis, which is an especially serious issue with a drive with bad seek time like the 4200 RPM drive in the Air. [Evidence for this theory: dd if=/dev/zero to the USB backup drive did 20 MB/s, so it's probably not a limitation of the USB bus or the external drive.] It's not clear to me that it's actually any faster than Time Machine, but it has the advantage of being predictable and behaving in a way I understand.

Step 2: Turn on FileVault
At this point, I've got a backup and things should be easy, so I clicked the button to turn on FileVault. The machine thought for a while and then announced I needed more free space (as much as the size of my home directory) to turn on FileVault.

Step 3: Clean Up
OK, no problem. I'll just move some of my data off the machine and onto the backup drive [you don't trust the original backup do you?], turn on FileVault and then copy it back. This took a few hours, but finally I managed to clear out 18 G or so and had enough room to turn on FileVault.

Step 4: Turn on FileVault (II)
OK, at this point we really should be ready. I started up FileVault and this time it cheerfully announced it was encrypting my home directory and things would be ready in 12 hours or so. OK, so that's not so bad, it'll be done when I wake up. No such luck. About an hour in, it complained that it had an error copying a file and had aborted. At this point, I was starting to rethink my plan; maybe encrypting my massive operational home directory isn't such a good idea. But I'm still committed to FileVault—more committed since I've put so much time into it!—so this brings us to...

Step 5: The Big Purge
At this point I decided to get serious and delete almost everything off my home directory, turn on FV, and restore from backup. Luckily, I checked my backup only to realize I'd fumble-fingered and deleted the backup file (Doh!). Two hours to pull another backup, and then I need to delete files. At this point, we're talking real data, not just Music and stuff like that, so I need a secure delete. A little reading suggests srm is the tool for the job and I set it to run overnight. Unfortunately, the next morning it's only deleted about 2G, so this is going to take forever [Technical note: I was only using 7-pass mode, not 35-pass mode. I'm paranoid, not insane]. Luckily, there's also rm -P which does a 3-pass delete but seems to be much more than 2x faster than srm. I run that and fairly quickly have my home directory trimmed down to a svelte 2GB, leaving us ready for Step 6.
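For the curious, what rm -P is doing under the hood is roughly the following (the BSD man page says it overwrites with 0xff, then 0x00, then 0xff; this portable sketch uses zeros for all three passes, and is for illustration only — use the real tools for real data):

```shell
# A minimal 3-pass overwrite-then-unlink, in the spirit of rm -P.
f=$(mktemp)
echo "precious precious data" > "$f"
size=$(wc -c < "$f")
for pass in 1 2 3; do
    # conv=notrunc overwrites the file's existing bytes in place
    # instead of truncating it first.
    dd if=/dev/zero of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
done
rm "$f"
```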

Step 6: Turn on FileVault (III)
This time when I turn on FV, things look pretty good. It encrypts everything in about an hour and then announces that it's going to delete my old Home directory (I'd checked the secure delete checkbox, whatever that does). Unfortunately, whatever it does is bad since 4 hours later it's still securely deleting away. A little research suggests it's safe to abort this, so I give it a hard power reset (did I mention there's no cancel button, or rather that there is one but it's grayed out at this point? Also, no real progress bar, just the old spinning blue candy cane.). Anyway, the machine reboots just fine and I now have an allegedly encrypted home directory and a directory that's named /Users/ekr-<random-numbers>. I figure that's the old home directory and hit it with the old rm -P and it vanishes.

Step 7: Nuke the site from orbit. It's the only way to be sure
At this point, I've been doing a lot of deleting, and it's pretty hard to be sure that I haven't typoed or that the filesystem hasn't screwed me somehow and copied some of my precious precious data to some unused partition, so I decide it would be a good idea to run "Erase Free Space" with 7 passes, just to make sure. I set it for 7 pass and started it up about 5 hours ago. I'll let you know when it finishes. The current promise is 12 hours.
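The GUI's "Erase Free Space" has a command-line cousin, diskutil secureErase with the freespace verb, which at least you can run in a terminal and watch. Shown but not executed here, since it scribbles over the live volume for hours (level 2 is the 7-pass wipe used above; level 3 is the 35-pass Gutmann wipe for the truly insane):

```shell
# Command-line equivalent of "Erase Free Space" with 7 passes on the
# root volume. Printed rather than run: it is slow and writes to disk.
cmd='diskutil secureErase freespace 2 /'
echo "$cmd"
```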

UPDATE (5:55 AM): More progress on the progress bar, but still promising 12 hours.


December 2, 2008

Today I had two people send meeting invites (Content-Type: text/calendar) to one of my GMail accounts. Ordinarily, I can read ICS files just fine: OS X knows what to do with them: bring up iCal and add them to my calendar. Unfortunately, GMail has decided to do me a favor: instead of just letting me download the attachment and fire up the appropriate helper app, it fires up its own calendar app and offers to let me add the event to my Google calendar. My what? I don't even recall asking for a Google calendar. Apparently, I can subscribe to that calendar in iCal via CalDAV, but that's not what I want: I just want to add the event to my ordinary calendar.
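For reference, a text/calendar attachment is just a small text file; save it locally and "open meeting.ics" hands it to iCal as usual. A minimal one looks like this (all values below — UID, times, summary — are invented for illustration):

```shell
# Write a minimal iCalendar file of the kind GMail is intercepting.
cat > /tmp/meeting.ics <<'EOF'
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//hypothetical//EN
BEGIN:VEVENT
UID:hypothetical-1@example.com
DTSTAMP:20081202T160000Z
DTSTART:20081202T170000Z
DTEND:20081202T180000Z
SUMMARY:Hypothetical meeting
END:VEVENT
END:VCALENDAR
EOF
```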

OK, this is irritating but workable. I'll just forward the message to one of my other mail accounts which I read with IMAP/Emacs and then download the .ics file and open it with iCal as per usual. But nooo.... When I forward the message, Gmail strips off the .ics attachment and just sends a text version. How, uh, helpful.

Oh, did I mention that the iPhone doesn't seem to be able to handle .ics either? I read these same messages via IMAP on my iPhone but the attachment just sits there. Outstanding!