December 2008 Archives

 

December 31, 2008

  • My insurance company (State Farm) bills home insurance on a yearly basis but car insurance on a bi-yearly basis. They actually tell me that they can't bill the car insurance on a yearly basis. I asked what would happen if I were to send in the whole year's payment: "I'd have to research that. We'd probably give you a refund."
  • HFS+ is case preserving but case insensitive. What the heck?
  • The Roku is great, except that in the logical conclusion of modern A/V gear, it's 100% useless without the remote, just a flat plastic console. Outstanding!
 

December 30, 2008

As part of America's ongoing effort to ensure that sex offenders can never, ever, reintegrate into society (see also), Georgia is now requiring them to hand over their passwords.
"There's certainly a privacy concern," said Sara Totonchi of the Atlanta-based Southern Center for Human Rights. "This essentially will give law enforcement the ability to read e-mails between family members, between employers."

State Sen. Cecil Staton, who wrote the bill, said the measure is designed to keep the Internet safe for children. Authorities could use the passwords and other information to make sure offenders aren't stalking children online or chatting with them about off-limits topics.

Staton said although the measure may violate the privacy of sex offenders, the need to protect children "outweighs a lot of the rights of these individuals."

"We limit where they can live, we make their information available on the Internet. To some degree, we do invade their privacy," said Staton, a Republican from Macon. "But the feeling is, they have forfeited, to some degree, some privacy rights."

Obviously there are privacy concerns, but that's not the only issue: it potentially exposes those subject to these rules to a whole bunch of financial threats via e-commerce, banking, etc. Even if the requirement is limited to communications systems like IM, email, and Facebook, the ability to receive email is used as a generic authentication mechanism for things like password reset. For instance, e-commerce sites like Amazon or Zappos will email you a copy of your password. One wonders what, if any, controls Georgia intends to use to protect sex offenders from unscrupulous state officials who have access to this information.

 
Ever since the original Wang attacks on MD5 in 2005, it's been clear that certificates were the most attractive target. Today, Sotirov, Stevens, Appelbaum, Lenstra, Molnar, Osvik, and de Weger report (slides, writeup) on an attack against a real CA, in this case RapidSSL.

Background
In order to understand what's going on, we first need to recall some basic facts about how certificates work. A certificate is a digitally signed assertion of the binding between a name and a public key. The data to be signed is as follows (I'm simplifying a bit):

  • version: The version number (2)
  • serialNumber: The unique certificate serial number
  • issuer: The name of the CA issuing the certificate
  • validity: The time period when the certificate is valid
  • subject: The name of the entity to which the certificate was issued
  • subjectPublicKeyInfo: The entity's public key
  • extensions: Arbitrary extensions
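
For concreteness, here's the same to-be-signed structure sketched as a Python dataclass (field names follow the list above; the types are simplifications, not the real ASN.1):

    from dataclasses import dataclass, field

    @dataclass
    class TBSCertificate:
        version: int = 2
        serialNumber: int = 0                  # chosen by the CA
        issuer: str = ""                       # name of the issuing CA
        validity: tuple = (None, None)         # (notBefore, notAfter), chosen by the CA
        subject: str = ""                      # name the certificate is issued to
        subjectPublicKeyInfo: bytes = b""      # the subject's public key
        extensions: dict = field(default_factory=dict)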

In order to make a certificate, this data gets serialized using an annoying encoding, the entire mess is hashed, and the resulting hash is digitally signed by the CA. The problem we have here is that the hash, in this case MD5, is weak. More precisely, it's possible to generate a collision: two inputs that hash to the same output. (See here for more background on attacks on hash functions.) We've known for years how to exploit this kind of attack. The basic idea is that the attacker prepares two documents, one "good" and one "bad", that hash to the same value. He then gets the signer to sign the "good" variant and then cuts and pastes the signature onto the "bad" variant, thus producing a valid signature on the bad document.
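
To make the cut-and-paste idea concrete, here's a minimal Python sketch of hash-then-sign. The two certificate byte strings are placeholders and an HMAC stands in for the CA's RSA signature; producing an actual colliding pair is exactly what the collision-finding techniques discussed below provide.

    import hashlib
    import hmac
    import os

    CA_KEY = os.urandom(32)  # stand-in for the CA's signing key (real CAs use RSA)

    def ca_sign(cert_bytes):
        # The CA signs only the MD5 digest of the serialized certificate.
        digest = hashlib.md5(cert_bytes).digest()
        return hmac.new(CA_KEY, digest, "sha256").digest()

    def ca_verify(cert_bytes, signature):
        digest = hashlib.md5(cert_bytes).digest()
        expected = hmac.new(CA_KEY, digest, "sha256").digest()
        return hmac.compare_digest(expected, signature)

    good_cert = b"placeholder: innocuous certificate the CA agrees to sign"
    bad_cert = b"placeholder: certificate the CA would never sign"

    signature = ca_sign(good_cert)
    # With a real MD5 collision, md5(good_cert) == md5(bad_cert), so the
    # signature issued on the good certificate also verifies on the bad one:
    if hashlib.md5(good_cert).digest() == hashlib.md5(bad_cert).digest():
        assert ca_verify(bad_cert, signature)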

So, the way you would use this to attack certificates is that you would generate a "good" certificate signing request that would result in a certificate that had the same hash as a "bad" certificate you had generated locally. You get the CA to sign the request and then substitute the bad certificate. Until now there were two major obstacles to using this technique to attack certificates:

  • It wasn't clear that the serialNumber field was predictable.
  • The techniques for generating collisions weren't very good: they weren't that controllable (they generated a lot of random-appearing data) and were slow; or rather there were techniques for generating fast collisions but they weren't at all controllable.

The relevance of the serialNumber is this: unlike the name and the public key, the serialNumber and validity are generated by the CA. So, you need to know in advance what they will be in order to generate the appropriate colliding "bad" certificate. The validity is typically just generated as something like a year or two from the time of issue, so it's relatively predictable. The CA has a lot of freedom in how to generate the serial number. If it's truly a sequence number, it's quite predictable. However, if it's randomly generated, then it can be made arbitrarily unpredictable, which effectively blocks this kind of collision attack. When MD5 collisions were first discovered, the two standard recommendations were (1) stop using MD5 and (2) generate random serial numbers.
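
For illustration, here's a sketch of the two serial-number strategies (the 16-byte width is an assumption; practices vary): a sequential counter is predictable well in advance, while a random serial leaves the attacker nothing to precompute a collision against.

    import secrets

    def sequential_serial(last_serial):
        # Trivially predictable: seeing one serial tells you the next.
        return last_serial + 1

    def random_serial(num_bytes=16):
        # Unpredictable: the attacker can't know this value before issuance,
        # so they can't bake it into a colliding certificate ahead of time.
        return secrets.randbits(num_bytes * 8)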

This Attack
Which brings us to this new work, which involves two main contributions. First, the authors improved their collision-finding techniques so they need a lot less random-appearing data. Second, they found a CA which still used MD5 and didn't randomize the serial number. Taken together, this allowed them to convince the CA to sign a certificate which was in itself valid but which collided with a certificate that the CA would never have signed, in this case a certificate for a new, subordinate CA. (It could just as well have been a certificate for a specific target web site, but that's less flashy than a CA certificate.) Once in possession of this new CA certificate, it's possible for the authors to sign arbitrary new certificates which will be trusted by anyone who trusted the original CA [subject to some technical limitations which I won't go into here]. Effectively, the authors have made themselves a CA.

There are some interesting technical hacks needed to make this work: although the serial number is somewhat predictable, it's not completely so, and in order to mount the attack they had to guess the serial number in advance. This guess wasn't totally accurate, but they were then able to issue their own CSRs to increment the serial number to where they needed it to be.

Impact
The impact of this is that the authors could in principle mount man-in-the-middle or other impersonation attacks on any Web server, provided that the client trusted this particular CA (most do). The existence of this certificate doesn't allow anybody else to mount impersonation attacks, since ordinary attackers won't have the corresponding private key (unless they break into the authors' machine and recover it, of course). The authors have taken some steps to make the particular certificate they issued less useful for this purpose. In particular, its validity period is set way in the past, so unless your clock is way off, your browser should flag it as expired and you should notice this attack. That's not to say that there's no risk here, since you might not notice the expiration date issue.

Of course, it's possible that an attacker could independently use the same technique to acquire their own CA certificate. In fact, we don't know for a fact that nobody already has. The only real obstacle is that the crypto needed here is fairly involved and the experts on it are mostly respected academics, many of whom are on this paper. So, the sooner that CAs adopt the mitigations mentioned above, the better.

I should mention that this isn't the only way to get a bogus certificate: many CAs don't do a particularly good job of user verification in any case (I'll be posting about one particularly exceptional case shortly). In particular, it's common to use "email confirmation" for identity verification, where the CA sends email to the administrator of the relevant machine to verify the certificate request. There are probably a number of cases in which it's easier to attack that than to build up a whole certificate collision infrastructure.

Containment
There are really two questions about how to contain this vulnerability:

  • What should we do about this specific certificate?
  • What should be done about the class of vulnerability?

The two basic options for this certificate are to ignore it (assume we trust the researchers, especially since the certificate is expired) or to blacklist it. The way the blacklist would work is that the browser manufacturers would just issue a security update with a patch to the certificate validation code telling it not to trust this specific certificate, just as they would patch any other security vulnerability. For perspective, we can think of this as a vulnerability with an exploit that is known only to the researchers—even though we have the CA cert, we can't use it productively, and it's not likely to be reproducible. If I were in charge of a browser, which I'm not, I would probably issue a patch with a blacklist for this certificate. Others' opinions may vary; as far as I know, the browser manufacturers didn't issue mandatory security updates blacklisting all the Debian OpenSSL keys, so that may be a clue to their general attitude.
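
As a rough sketch of what such a patch amounts to, the check is basically a fingerprint lookup during certificate validation (the fingerprint below is a made-up placeholder; real browsers each have their own mechanism for shipping and consulting such a list):

    import hashlib

    # Hypothetical SHA-256 fingerprints shipped in a browser security update.
    BLACKLISTED_FINGERPRINTS = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def certificate_is_blacklisted(cert_der):
        # Run during chain validation, before any trust decision is made.
        fingerprint = hashlib.sha256(cert_der).hexdigest()
        return fingerprint in BLACKLISTED_FINGERPRINTS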

The second question is what to do about this class of vulnerability. Because this attack can only be mounted against a live CA, not against an old certificate, it's very important that the affected CAs either stop using MD5, use randomized serial numbers, or both. Presumably, the news coverage will act as an inducement for them to do so. I've also heard suggestions that the browser manufacturers should disable MD5. There are probably still enough servers with MD5-signed certificates out there that this would be problematic, though it's something to consider for the future.

Bottom Line
As usual, don't panic. In its current state, this is more of a demonstration of a hole than a serious hole. Countermeasures are readily available to the CAs and if the remaining CAs fix their practices fast enough, then it's unlikely that there will be any more bad certificates issued (it takes some time to spin up your infrastructure for this attack). Even if one or two such certificates are issued—even to bad guys— it's not the end of the world. Once they're detected they can be blacklisted. This takes a long time with the current patching rate, but it's not conceptually any worse than a remotely exploitable problem with your browser, or a bug in certificate validation logic, both of which have been known to happen. That said, it is very important that the CAs do fix their practices, since this has the potential to become serious if the capability to mount the attack becomes widespread and convenient.

UPDATE: Some minor corrections due to Hovav Shacham (only controllable MD5 collisions were slow)

 

December 29, 2008

One of Slate's odder sections is the "Green Lantern", where they take on some simple question like "should I buy a natural or artificial Christmas Tree" and try to analyze it from an environmental perspective. The most recent article asks whether you should throw away your leftovers or flush them down the garbage disposal. Unfortunately, the articles tend to be pretty useless: sometimes they have a real answer but often they thrash around for a while giving you the pros and cons of each option and conclude that maybe you should do A and maybe you should do B:
The research is unambiguous about one point, though: Under normal circumstances, you should always compost if you can. Otherwise, go ahead and use your garbage disposal if the following conditions are met: First, make sure that your community isn't running low on water. (To check your local status, click here.) Don't put anything that is greasy or fatty in the disposal. And find out whether your local water-treatment plant captures methane to produce energy. If it doesn't--and your local landfill does--you may be better off tossing those mashed potatoes in the trash.

Or maybe not... Here's another example:

If these ideas don't excite you, the Lantern recommends putting the new cash toward insulating your family's home. Of course, whether this makes sense depends on your local climate and whether you buy or rent. (Likewise, the current state of your home will determine just how much insulation your $100 will buy.) For the rest of you, it might be wisest to replace any antiquated, energy-inefficient appliances you might have--along the lines spelled out here. (Let's put aside the complicated question of carbon offsets, which will be addressed in a future column. Suffice to say that they wouldn't be the Lantern's first choice.)

I'm not saying I can do any better; rather, I think this is reflective of a systemic problem with this kind of overall cost/benefit analysis. While it's possible to measure the power consumption, carbon emissions, etc. of any particular microactivity, it's pretty hard to do an overall cost/benefit analysis of whether you should do A or B when each of them consists of a whole bunch of individual activities, all of which require their own analyses. The economist-type answer is to levy Pigouvian taxes on each individual component (e.g., carbon taxes) and then let the market sort things out. I don't know if that would work any better, but I don't see people being able to do this kind of analysis for each individual purchasing decision either.

 

December 28, 2008

Check out this NYT infographic on the distribution of the world's fastest computers. Of course, the Times doesn't tell you why this matters. They just say:

While the U.S. has the world's fastest supercomputers, it faces increased pressure from countries like India and China.

As I've argued before, it's more or less irrelevant who has the world's fastest computer. Computers are a tool, and a faster computer is just a faster tool, which may or may not be relevant depending on what job you're trying to use the tool for. There are jobs where having a faster computer is important (e.g., climate simulation) but there aren't that many of them (especially as many of these machines get their performance from extreme parallelism and many problems can't be parallelized efficiently) and even then having a fast computer is more a signal that you're mounting a serious research program of the type that requires that kind of computational power than something that's important in and of itself. Instead, though, having a fast computer has become a national prestige issue; sort of the 21st century equivalent of having the world's tallest building.

 

December 27, 2008

I've been watching the new Doctor Who lately. Generally, I'd say it's pretty solid, with much higher production values and a hipper tone than the old series. One thing that drives me nuts, though: the Doctor is constantly getting into various kinds of scrapes where he gets trapped, chased, etc. The Tardis seems to be extremely resistant to nearly all forms of attack, so why isn't it fitted with some sort of homing device so it can jump to him and get him out of trouble? I appreciate that this wouldn't always work—maybe it can't work across different time periods, for instance—but you'd think a simple radio transmitter with a homing beacon wouldn't be out of the question. Obviously, there are plot reasons for this; Gallifreyan technology is already way better than practically everyone the Doctor encounters, and if he could just jump out of trouble, where would be the drama? [Though it's worth noting that Iain Banks's Culture novels seem to do just fine despite a similar set of technological gap challenges.] Still, it would be nice to have some explanation of why this isn't possible.

Bonus gripe: the Roku also has the original Hitchhiker's Guide TV series. In the second episode where Arthur and Ford get picked up by the Heart of Gold I saw a bank of computers with what sure looked like a 9-track tape drive. You'd think the Sirius Cybernetics Corporation could do a bit better.

 

December 26, 2008

Today on NPR's All Things Considered, I heard a segment on Massachusetts Proposition 2, which decriminalized marijuana possession ($100 civil fine for amounts less than one ounce).
All Things Considered, December 26, 2008 · As of Jan. 2, being caught with less than an ounce of pot in Massachusetts will result only in a $100 civil fine. District Attorney David F. Capeless, president of the Massachusetts District Attorneys Association, says the law has many loopholes and is in its present form, in effect, legalization of marijuana.

Note: Capeless was the only guest. They didn't have anyone in favor of Prop 2, and the host just took whatever he said at face value. Always nice to see NPR believes in objective coverage.

Anyway, the complaints appear to be as follows:

  1. That there's no ID requirement so if you get caught with marijuana, you can just lie and avoid the fine.
  2. The law specifically prohibits the state from penalizing police, bus drivers, etc. for using marijuana.

I'm not saying either of these is untrue, but so what? There are other actions (e.g., jaywalking, not having a light on your bicycle) which are nominally forbidden but where, as a practical matter, the cops aren't likely to haul you to jail if you don't have ID. More likely they'll just take your word for who you are, write you a ticket, and let you go. Capeless suggests that this is a "loophole" but really it seems pretty sensible to me. If you have some action with a maximum $100 fine, it doesn't make a lot of sense to haul people off just because they can't prove their identity to your satisfaction. The complaint about stopping the state from punishing people who smoke pot seems even less well founded. I didn't follow the coverage of prop 2, but I doubt that's some unintended consequence. Rather, if you think that smoking pot is like speeding or overstaying your parking meter, why would it disqualify someone from being a police officer or bus driver, or from owning a gun? As far as I can tell, the truth of the matter is that the Mass DAs don't like marijuana and wish prop 2 had never passed (this isn't just speculation; here, for instance, is an article about the Essex County DA opposing prop 2).

While we're on the topic, who cares what the DAs and police think about drug control policy? Their job is to arrest and prosecute people who break the law, but this doesn't give them any special insight into what the effect of some proposed loosening of the laws would be. As a practical matter, their bias seems to be towards "law and order", i.e., stuff should be illegal. Hovav Shacham forwarded me a particularly egregious example: the police in California asking the DEA to help them crack down on medical marijuana, which is legal under California law but not US law.

 

December 25, 2008

Just skimmed Wired's predictably wacky Top Technology Breakthroughs of 2008 (Flash Memory? The Speedo Freaking LZR? I mean, it's cool but it will affect like what, thousands of people worldwide?) and came across the following at #2:
But the G1 scores with its operating system. It runs Android, the free mobile operating system from Google. It's the first mobile OS to make its debut in years and the G1 is just the first of what will be many phones that use it. With its open source base, growing developer community and dozens of cellphone manufacturers pledging to make Android phones, Android has the potential to reshape the wireless industry in significant ways.

And by "in years" we mean "since June 2007 when the iPhone was released".

 
One of the problems with buying running shoes is that it's very hard to get a good sense of fit just wearing them in the store. Obviously this is an issue with any consumer item, but running shoes are especially bad because of the unique combination of break-in time, repetitive friction-induced blistering, and the sheer misery of running in shoes that give you blisters. I've had shoes that took several runs to break in, and similarly I've had shoes which seemed fine at first but which, after several runs, it became clear would never fit properly.

For this reason, really good shoe stores will not only let you take the shoes outside and run with them but will let you return them even after you've taken them home and done several runs. For instance, I usually buy my shoes at The Runner's High in Menlo Park, which would let you return shoes for up to 30 days, even if they were clearly worn from running. [Note: TRH has recently been acquired by Fleet Feet, so I don't know if they still maintain this policy.] [Full Disclosure: I'm a friend of the former owner and, at least before the sale, got a friend discount, which is another reason for me to shop at TRH. I haven't needed to replace my shoes since the sale.] The big running shop Road Runner Sports offers a similar policy:

60-Day Perfect Fit™ Shoe Guarantee
With our unrivaled Perfect Fit™ Guarantee you can run in your new shoes RISK-FREE for up to 60 days (from your purchase date). If they're not a perfect fit simply exchange them and we'll be happy to help you select another pair. No questions asked.

And of course, a number of the big outdoors retailers (REI, Backcountry.com) offer unlimited, no-questions-asked return policies for every purchase.[1]

This is relevant because lately I've gotten interested in trail running, and after trying some of the more common trail shoes and finding they didn't fit well, I started hearing good things about Inov-8s. Unfortunately, neither TRH nor RRS sells them, and REI and Backcountry only have a very limited selection of models and sizes, so I'm stuck buying from someone with a less flexible return policy. I don't really know what goes into setting this kind of policy, but it's kind of a bummer. I'd be happy to (as RRS wants you to do) commit to spending a certain amount of money with a merchant in order to avoid getting stuck with shoes which totally don't fit and which I can't return. Especially since almost everyone carries my standard shoe (ASICS 2130), so I could presumably always just get another pair of them. I'd certainly choose such a merchant if one were available. Unfortunately, apparently not.

[1] REI will even accept returns for items which they have no ability to resell, like climbing ropes and carabiners. They can't tell if you've treated them correctly, and since they're safety equipment, they just have to destroy them.

 
One question a lot of athletes have is whether they can work out when they're sick. Obviously, you don't want to lose training time, but on the other hand you don't want to make yourself too sick by training when you should be resting. The conventional wisdom is the "neck" rule (see for instance this article): if your symptoms are above the neck then you can train; if they're below the neck you can't:
David Nieman, Ph.D., who heads the Human Performance Laboratory at Appalachian State University, and has run 58 marathons and ultras, uses the "neck rule." Symptoms below the neck (chest cold, bronchial infection, body ache) require time off, while symptoms above the neck (runny nose, stuffiness, sneezing) don't pose a risk to runners continuing workouts.

This view is supported by research done at Ball State University by Tom Weidner, Ph.D., director of athletic training research. In one study, Weidner took two groups of 30 runners each and inoculated them with the common cold. One group ran 30 to 40 minutes every day for a week. The other group was sedentary. According to Weidner, "the two groups didn't differ in the length or severity of their colds." In another study, he found that running with a cold didn't compromise performance. He concluded that running with a head cold--as long as you don't push beyond accustomed workouts--is beneficial in maintaining fitness and psychological well-being.

The relevant paper is here. Most of the people I know tend to stick to easy distance and avoid hard workouts like intervals. I don't know of any science supporting this theory, though.

This NYT article, sent to me by Eu-Jin Goh, also describes another study that indicates that colds don't impair exercise performance:

The studies began, said Leonard Kaminsky, an exercise physiologist at Ball State University, when a trainer at the university, Thomas Weidner, wondered what he should tell athletes when they got colds.

The first question was: Does a cold affect your ability to exercise? To address that, the researchers recruited 24 men and 21 women ages 18 to 29 and of varying levels of fitness who agreed to be deliberately infected with a rhinovirus, which is responsible for about a third of all colds. Another group of 10 young men and women served as controls; they were not infected.

At the start of the study, the investigators tested all of the subjects, assessing their lung functions and exercise capacity. Then a cold virus was dropped into the noses of 45 of the subjects, and all caught head colds. Two days later, when their cold symptoms were at their worst, the subjects exercised by running on treadmills at moderate and intense levels. The researchers reported that having a cold had no effect on either lung function or exercise capacity.

This actually is a fairly surprising result. Most athletes certainly feel their performance suffers when they're sick. I certainly feel worse training when sick, and while I haven't taken any measurements of lung capacity, I do notice that my heart rate is significantly higher. If anyone has access to the original paper, I'd be very interested in reading it. (Abstract here). Initial impressions: the sample size is pretty small. I'd be interested in seeing a crossover study. What about performance at strength exercises?

 

December 24, 2008

I recently read Hanna Rosin's piece in The Atlantic about transgender children. The subjects of the piece are children who, from a very young age (< 5) insist that they are—or want to be—the other gender. Even for parents who are basically cool with the concept of the transgendered, this seems to still require some pretty difficult decisions. My take home points from the article go something like this:
  • The current state of sex reassignment (yes, I know that some trans-people prefer the term "gender confirmation surgery", but as far as I know, sex reassignment is still the standard term) technology isn't that great. Certainly, a post-treatment female (i.e., someone who was born male) isn't as much like a biological female as you would like.
  • Sex reassignment treatment works a lot better if you haven't gone through puberty yet.
  • It seems fairly problematic to let children this young make judgements about something as irreversible as having their genitals reconstructed. Moreover, according to this Endocrine Society review, a significant fraction of children diagnosed with Gender Identity Disorder (GID) experience spontaneous remission post-puberty.
  • There are treatments available which will block/delay puberty, so that at least the children will be old enough to have a better chance of making their own decisions, though if it's puberty itself that realigns the child's psychological identity with their biological identity, it's not clear this helps as much as you would like. Then again, if that happens, you can just stop the hormone blockers and let puberty proceed normally.
  • The children in question seem much happier when they're allowed to dress and act as the gender they want to be.
  • There are some psychological treatments which may (or may not) increase the chance that the child will become happier with their biological identity, but they sound pretty uncool (e.g., encouraging extreme traditional gender roles), and after reading the Atlantic article, I came away with the impression that the treated children weren't that happy as adults. But this seems inconsistent with letting them assume their desired gender roles in the interim.

One more note: some of the children in this article seem to have adopted stereotypical opposite sex behaviors incredibly early (like 2-3 years old.) I don't know what that tells us about how preferences for such behaviors get determined, but it's interesting.

 

December 23, 2008

Eszter Hargittai has an odd post complaining about Amazon Prime. As far as I can tell, the story is this. Say there's some item X (the example she gives is this 8 GB SD card) that is sold by both Amazon and some third party seller. Ordinarily, Amazon will offer it to you from the lowest-priced seller, in this case $17.13 ($14.12 + 2.99 S+H). However, if you have Amazon Prime, it will (at least some of the time) offer you the Amazon version, even if it's nominally more expensive than the third party version (currently $17.99). Presumably the theory here is that if you've signed up for Amazon Prime, you want to actually use it. Note that this is just a matter of what appears as the main result: you can always select other sellers.

Anyway, Hargittai is pretty unhappy about this (she calls it a "shady product") but I have to admit that I don't understand what the issue is. She keeps saying she's being billed twice, but I don't understand the argument here: if you didn't have Amazon Prime and you chose to buy the product from Amazon rather than a third party seller, Amazon would charge you their price plus S+H (unless of course you buy $25 worth of stuff and get Super Saver shipping), so with Prime you are getting free shipping (or more precisely, prepaid shipping) from Amazon. It's just that Amazon's price isn't that great, so you would be better off buying it from someone else. What makes this confusing is that you don't need to go to the third party's site to buy it; Amazon will let you buy it from the third party seller through Amazon's site. So, I don't see the problem.

Anyway, Eszter may be unhappy, but I actually prefer things this way: all other things being equal, I'd rather deal with Amazon than some third party seller, and certainly if the total price is identical I'd rather have 2-day shipping than whatever yak-based delivery system the third party seller would otherwise use. I'm even generally willing to pay an extra dollar or so for that. So, I'm happy to have Amazon offer me that option preferentially—though I'm a bit curious how big the difference is before Amazon will show me the cheaper item instead.

Incidentally, I don't know if I've mentioned this, but the combination of one-click selling and Amazon Prime has an amazingly powerful lock-in effect on me. It's just so much easier to buy stuff from Amazon than to bother setting up an account anywhere else, figuring out shipping, etc. If I were some smaller seller, I think I'd be taking a real interest in 3rd-party Internet identity systems so that people could buy stuff from me without having to register for an account any time they want to buy something from someone new.

 

December 22, 2008

A reader pointed me to this article about a driver who has ordered a license plate designed to use similar-looking letters to confuse traffic cameras. See what I mean?

I've actually heard about this plan before, though the version I heard involved Bs and 8s, and the idea was to rely on bad police handwriting. Anyway, I'm not sure how well this will actually work. It's true that the photo above looks like crap, but that's mostly an artifact of massive pixellation. Any reasonable camera should be able to get you a much higher resolution image. Even the not-great-looking picture here looks good enough to me to distinguish those three letters. If you also have the car model and can distinguish a few letters, I suspect you could figure out who the violator was.

UPDATE: Actually, it doesn't look like crap on my page, but it does on the original. Why? Whoever wrote the HTML decided to scale the 130x69 pic to 300 pixels wide and that didn't work out so well. If you scale to integral multiples, it's really quite readable.

 

December 21, 2008

This article describes an interesting hack on license plate cameras:
As a prank, students from local high schools have been taking advantage of the county's Speed Camera Program in order to exact revenge on people who they believe have wronged them in the past, including other students and even teachers. Students from Richard Montgomery High School dubbed the prank the Speed Camera "Pimping" game, according to a parent of a student enrolled at one of the high schools.

Originating from Wootton High School, the parent said, students duplicate the license plates by printing plate numbers on glossy photo paper, using fonts from certain websites that "mimic" those on Maryland license plates. They tape the duplicate plate over the existing plate on the back of their car and purposefully speed through a speed camera, the parent said. The victim then receives a citation in the mail days later.

Obviously, this will work technically: you want to be able to read the license plate numbers even from photos with errors of various kinds. However, if people are actually getting tickets when you do this, then this reveals some pretty lame procedures by whoever's running the photo radar system, since presumably the photo of the driver doesn't match the driver's license photo of the person you're issuing the ticket to, and of course the car model probably doesn't match either. This seems like the kind of thing you should check if you want to make sure that you're issuing the ticket to the right person. Actually, I had thought this was SOP.

 

December 20, 2008

According to recent news coverage [*] [*] [*] Estonia is going to start allowing voters to use mobile phones to authenticate themselves for e-voting. It's a little hard to decipher the coverage, but this article suggests that voters aren't going to use the phone for the entire process but instead are going to use Internet-capable computer terminals for voting and the phones purely for authentication:
Estonia has been at the forefront of electronic voting for a number of years. In 2005 it started using a national ID card for authenticating voters and giving the go-ahead for using mobile phones is a continuation of that, according to Silver Meikar, a member of the Estonian Parliament and a longtime proponent of e-voting.

Voters will be authenticated using a digital certificate stored on SIM (Subscriber Identity Module) cards, which are already available to Estonians.

"You still need a computer and the Internet, but now you will have a choice of using your ID card plus card reader or a mobile ID to authenticate yourself," said Meikar.

Next on the agenda for the parliament following last Thursday's decision to allow mobile-phone authentication is to adapt the Internet voting system, which currently only supports the use of ID cards. "We are now starting to program the system, so at the moment we don't have the technical readiness," said Vinkel. Adding support for mobile authentication will take about six months, he added.

In general, I think it's pretty fair to say that computer security researchers have a pretty negative view of Internet-based voting systems of this type, regardless of the authentication mechanism. This is a fairly complicated topic, but I wanted to try to explain some of the concerns.

First, it's important to be clear what sort of system we're talking about. There are a lot of ways to use the Internet for voting (results transmission, ballot distribution, registration, etc.) and I guess you could call any of them "Internet Voting". For the purposes of this post, however, I'm talking about a system where users vote on their own computers or mobile phones which then transmit the results over the Internet back to a central consolidation point. One example of such a system is Everyone Counts though I don't plan to talk about this system specifically.

There are a number of concerns with any system of this type. A nonexhaustive list would look something like this:

  • How are voters authenticated?
  • How do you prevent remote compromise of the tabulation system/EMS?
  • How do you verify that your vote was correctly tabulated?
  • How do you prevent remote compromise of the voter's terminal?

Voter Authentication
The voter authentication problem is probably the easiest to solve from a technical perspective. First, we understand how to do remote user authentication pretty well (though user interface and user compliance remain serious problems). It's certainly a lot easier if you can force all users to take some sort of authentication token, which seems to be the situation in Estonia. Moreover, the standards for voter authentication seem to be pretty low in any case. When I worked the polls in Santa Clara County, for instance, we were told we couldn't ask for identification unless the voter roll specifically told us to, which was more or less for first-time voters. Given this, it seems like you could use SSL with client certificates based on the smartcard. It's a little hard to tell how the Estonian system works, but it's probably something vaguely like this; given that it's based on cell phones, it might be AKA or some other 3G-type authentication system.
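
For a sense of what "SSL with client certificates" looks like mechanically, here's a minimal server-side sketch using Python's standard ssl module. The file names and CA bundle are assumptions; a smartcard- or SIM-based deployment would keep the private keys in hardware rather than in PEM files.

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("voting-server-cert.pem", "voting-server-key.pem")
    context.load_verify_locations("national-id-ca.pem")  # CA that issued the voter certificates
    context.verify_mode = ssl.CERT_REQUIRED  # refuse connections without a valid client certificate

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # handshake fails if the voter presents no valid cert
            voter_identity = conn.getpeercert().get("subject")  # identity asserted by the client cert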

Remote Compromise of the EMS
Remote compromise of the EMS/tabulation system seems a lot more problematic. Pretty much by definition, there needs to be some Internet accessible server to receive your votes—otherwise it's not Internet voting. This means you need to worry about compromise of that server. How serious such compromise is depends on the way you've constructed your voting system. The naive way to build the system is as a sort of virtual DRE: users send their votes to the server which records them in memory, increments counters, etc. At the end of the election, you just spit out the votes and/or counter values. In such a system, compromise of the central server is extremely serious: an attacker can simply have the system output any election results of his choice. However, there are a variety of cryptographic mechanisms for building systems that are much more resistant to such attack, and in the limit don't require trusting the central server to deliver correct results at all. I'll talk about this very briefly under the next hed.
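
Here's a toy version of that naive "virtual DRE" design, just to make the trust problem explicit: the counters live only on the server, so whoever controls the server controls the reported result. (This is the design being criticized, not a proposal.)

    from collections import defaultdict

    class NaiveVoteServer:
        def __init__(self):
            self.counters = defaultdict(int)  # in-memory counters, nothing voter-verifiable

        def cast_vote(self, candidate):
            self.counters[candidate] += 1

        def results(self):
            # A compromised server can return whatever it likes here, and no
            # voter or observer has any way to detect the substitution.
            return dict(self.counters)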

However, cryptographic voting systems don't provide a complete solution to the problem of server compromise. In particular, while they guarantee correct tabulation (for some value of guarantee), they don't guarantee availability. Consider what happens if the central server goes down on election night and nobody can record their vote. More creatively, an attacker could selectively block voting from specific individuals based on (for instance) their voter registration. Even if an anonymous authentication mechanism were used [technical note: for instance, certificates signed with blind signatures], an attacker could use IP identification and geolocation technology to get a pretty good idea of who voters were or at least where they were and thus selectively disenfranchise certain voters. Sure, in principle the voters could protest and maybe somehow get their votes to count (though this is much more complicated than it looks, since you have to worry about people who didn't vote on election day deciding retrospectively that they should have and then claiming they were denied service), but in practice how many would do so? So, denial of service is a real concern here.

Verifying Correct Tabulation
As I said above, it's possible to produce cryptographic systems which allow the demonstration of correct tabulation without requiring you to trust the tabulator. The details are complicated, but it's easy to see how to do it if you don't mind people's votes being published. You simply submit a digitally signed copy of your vote to the server. The server publishes all the signed votes. Once the election is over, you can verify that your vote was posted and that all the votes add up. Note that this mechanism is deeply flawed: for starters, it's generally not considered OK to post every vote. However, building a system with appropriate privacy guarantees is much harder and requires a fair bit more crypto.
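
Here's a toy version of that publish-everything scheme (with all the privacy problems just noted), using Ed25519 signatures from the third-party cryptography package: the voter signs the vote, the server publishes the signed votes, and anyone can re-verify and re-tally.

    from collections import Counter
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Voter side: sign the vote with the voter's private key.
    voter_key = ed25519.Ed25519PrivateKey.generate()
    vote = b"Jefferson"
    signature = voter_key.sign(vote)

    # Server side: publish every (vote, signature, public key) tuple.
    bulletin_board = [(vote, signature, voter_key.public_key())]

    # Anyone: check that each published vote verifies and recompute the tally.
    tally = Counter()
    for v, sig, pub in bulletin_board:
        pub.verify(sig, v)  # raises InvalidSignature if the entry was tampered with
        tally[v.decode()] += 1
    print(tally)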

I'm not an expert in cryptographic voting, but as far as I can tell, all the known systems have two major drawbacks. First, they require at least some fraction of voters to check that their votes are correctly recorded. It's not clear that voters will do this in practice. Note that the system I described above doesn't have that problem, but only because we've obliterated all the privacy guarantees. The second, more serious, problem is that they're complicated and convincing the average voter that they really prove what they are supposed to prove is extremely difficult. There's a fair amount of skepticism outside the crypto community about the degree to which the public at large is willing to trust systems that they don't really understand. [Note that one could argue that that's true of current computerized systems, but they are more familiar in operation and of course there is widespread distrust of such systems.]

Compromise of the Voter Terminal
Finally, we have to consider remote compromise of the voter's computer. Again, more or less by definition it's on the Internet, and personal computers are notoriously poorly maintained and vulnerable to attack (hence botnets). This threat is the hardest to secure against. A compromised terminal can present any information to the user it pleases. For instance, it could claim you're voting for Jefferson when actually you're voting for Burr. Even if afterwards you check your vote on some other computer and discover the fraud, there's no way for the electoral system to distinguish this from user error or buyer's remorse. As long as consumer operating systems remain as insecure as they currently are, it's pretty hard to see how to deal with this problem adequately.

 

December 18, 2008

This is sort of IETF inside baseball, but also a good example of how things can go really wrong even when you're trying to do the right thing. As you may or may not know, the Internet Engineering Task Force (IETF) is the standards body responsible for most of the Internet standards (TCP, HTTP, TLS, ...) that you know and hate. The IETF is a non-membership organization and participants aren't compensated by the IETF for their contributions. Moreover, most IETF participants are working on IETF standards as part of their job. This all makes the copyright situation a bit complicated, since, at least in the US, companies tend to own work you do for them in the course of your employment.

The IETF has opted to deal with this situation with a combination notify-and-attest model, which works like this. There are three main ways in which people submit "contributions", i.e., text for documents, comments, etc., to the IETF:

  • As actual documents ("internet-drafts").
  • As mailing list messages.
  • As comments at IETF meetings.

The first case is the clearest: every I-D submission is required to have a boilerplate license grant attached to it, or rather a reference to a license grant. It looks (or at least looked until recently) something like this:

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

That's the attest part. But what about submissions on mailing lists, stuff said at the mic at meetings, etc.? It's not really practical to expect people to start every comment with a copyright statement. Instead, the IETF has a general policy on submissions which you're given a copy of when you sign up for an IETF meeting or join a mailing list. (This policy is colloquially called "Note Well" after the title of the document). Contributing once you've read the statement is deemed to be an acknowledgement and acceptance of the policy. Note: I'm not a lawyer and I'm not taking a position on whether this will hold up in court. I'm just reporting how things are done.

OK, so you have to agree to some license terms. But what are those terms? Until recently (confusingly, the date is a bit uncertain), they were approximately that the text in your document could be reused for any IETF purpose, e.g., you could republish it, prepare derivative works for the purpose of doing revisions, etc. So far so good. Once documents were in the IETF system you could pretty much do anything IETFy with them, since you could safely (at least that's the theory) assume that the authors had granted the appropriate rights. What you couldn't do, however, was take text out of the documents and use it to prepare documents for some other standards body such as OASIS, ITU, etc. Anyway, the IETF decided that it was a good idea to fix this, and so a new set of license terms was prepared which involved granting this additional set of rights (the details are actually quite a bit more involved, but not really that relevant). Moreover, the new rules required IETF contributors (i.e., document authors) to attest that they had the right to submit the document under these new terms.

This last bit is where things started to go wrong. Consider what happens if you want to prepare a revision of some RFC that was written before the rule change. Ordinarily, you could do this, but now you need to submit under the new rules, and moreover you need to attest that you've obtained the necessary permissions to do so. But since all you know is that the document was submitted under the old rules, you need to go back to every contributor to the original document and get them to provide a more expansive license grant. That's fairly inconvenient already, but it gets worse. IETF doesn't really keep records of who contributed which piece of each document (remember: if it's in the system it was supposed to be automatically OK), so you don't actually know who you're supposed to contact. Even if you just take the list of people who were acknowledged in the document, this can run to tens of people, some of whom might have changed employers, died, or whatever. So, there's a reasonable chance that some documents can't practically be revised under these terms, and some IETFers are already avoiding contributing documents they would otherwise have submitted. The IETF hasn't totally ground to a halt yet, presumably due to some combination of virgin submissions, participants unaware of the issue, and participants aware of the issue but assuming they're not going to get caught.

Unfortunately, this didn't quite get figured out (for some unknown reason, people aren't really excited about spending a lot of time reading licensing agreements) until it was too late and the terms were already in effect. Next step: figure out how to unscrew things. Outstanding!

 

December 16, 2008

If you work with technology you probably occasionally need to unscrew, pry, or snip something. If it's only occasional, the usual approach here is to carry some sort of multitool, and I've been on a years-long quest for a capable multitool that isn't a huge brick in your pocket. I already have a Leatherman Wave and a Charge XTi, but they're way too heavy, so I thought I would try the new Skeletool CX. The Skeletool is a stripped-down multitool that nominally offers "7 tools":
  • 154CM Stainless Steel Clip Point/Sheepsfoot Combo Knife
  • Needlenose Pliers
  • Regular Pliers
  • Wire Cutters
  • Hard-wire Cutters
  • Large Bit Driver
  • Bottle Opener
  • Carabiner Clip

Yeah, I note that that's 8, but I guess they're not counting the biner.

Really, though, you'll note that four of those tools are part of the pliers, so this is 4 tools: a knife, pliers, screwdriver, and a bottle opener. On the other hand, there are two double-ended screwdriver bits, so you could argue that the screwdriver would count for 4 tools, so maybe this is 7 tools after all.

Anyway, the Skeletool weighs in at 5 ounces and feels pretty slim in your pocket. The first big problem I noticed is that the outer corner of the knife blade is pretty sharp and so every time I jam my hand in my pocket, it scrapes my hand, which is kind of unpleasant. On the suggestion of Kevin Dick, I tried putting it in the change pocket of my 501s, where it just fits and which puts the edge of the knife against the cloth rather than against my hand. This isn't perfect, but it's pretty good and doesn't overweight my pocket and doesn't involve jamming my hand.

This isn't to say that the knife doesn't do a perfectly good job of slicing your hand open, however. I unwisely ignored the advice not to cut towards yourself when cutting apart a pair of socks (you know the little plastic "T" that holds them together? just pulling it out can damage the sock) and managed to slice a cm or so gash in my finger. Some crazy glue on the cut seems to have solved that problem at least temporarily, though. I understand that the ethyl cyanoacrylates sometimes cause irritation, but I haven't found it to be a problem.

 

December 14, 2008

The thing I love about the Mac is how it just works. Take today (well, really the whole weekend) for example.

For a variety of reasons, I decided it was time to use an encrypted filesystem on my laptop. The natural choice here is FileVault, which a little net research suggests is imperfect, but is, after all, what Apple provides, thus avoiding contaminating a perfect Apple artifact with any un-Jobslike software. That said, I'm not completely crazy, so on the advice of counsel I decided to proceed deliberately:

Step 1: Take a backup
Since encrypted filesystems tend to have less attractive failure modes than ordinary filesystems, it seemed like a good idea to take a backup. Originally, my plan here was to use Time Machine (Apple product, remember), but when I actually went to run it, performance was rather less than great. I suspect the problem here is that it's working file by file because it needs to be able to build a data structure that allows reversion to arbitrary time checkpoints. In any case, I got impatient and aborted it, figuring I'd move back to regular UNIX tools. Unfortunately, dump doesn't work with HFS/HFS+, so this left me with tar. Tar is generally quite a bit slower than dump because it works on a file-by-file basis, which is an especially serious issue with a drive with bad seek time like the 4200 RPM drive in the Air. [Evidence for this theory: dd if=/dev/zero to the USB backup drive did 20 MB/s, so it's probably not a limitation of the USB bus or the external drive.] It's not clear to me that it's actually any faster than Time Machine, but it has the advantage of being predictable and behaving in a way I understand.

Step 2: Turn on FileVault
At this point, I've got a backup and things should be easy, so I clicked the button to turn on FileVault. The machine thought for a while and then announced I needed more free space (as much as the size of my home directory) to turn on FileVault.

Step 3: Clean Up
OK, no problem. I'll just move some of my data off the machine and onto the backup drive [you don't trust the original backup, do you?], turn on FileVault, and then copy it back. This takes a few hours, but finally I managed to clear out 18 GB or so and I had enough room to turn on FileVault.

Step 4: Turn on FileVault (II)
OK, at this point we really should be ready. I started up FileVault and this time it cheerfully announced it was encrypting my home directory and things would be ready in 12 hours or so. OK, so that's not so bad, it'll be done when I wake up. No such luck. About an hour in it complained that it had an error copying a file and it had aborted. At this point, I was starting to rethink my plan; maybe encrypting my massive operational home directory isn't such a good idea. But I'm still committed to FileVault—more committed since I've put so much time into it!—so this brings us to...

Step 5: The Big Purge
At this point I decided to get serious and delete almost everything off my home directory, turn on FV, and restore from backup. Luckily, I checked my backup, only to realize I'd fumble-fingered and deleted the backup file (Doh!). Two hours to pull another backup, and then I need to delete files. At this point, we're talking real data, not just Music and stuff like that, so I need a secure delete. A little reading suggests srm is the tool for the job and I set it to run overnight. Unfortunately, the next morning it's only deleted about 2 GB, so this is going to take forever [Technical note: I was only using 7-pass mode, not 35-pass mode. I'm paranoid, not insane]. Luckily, there's also rm -P, which does a 3-pass delete but seems to be much more than 2x faster than srm. I run that and fairly quickly have my home directory trimmed down to a svelte 2 GB, leaving us ready for Step 6.
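
For the curious, here's roughly what a multi-pass delete like rm -P or srm does under the hood, sketched in Python (pass counts and overwrite patterns vary by tool, and on journaled filesystems or SSDs this kind of overwrite is not a reliable guarantee):

    import os

    def multipass_delete(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))  # overwrite the bytes in place (a real tool writes in chunks)
                f.flush()
                os.fsync(f.fileno())       # push this pass out to the disk
        os.remove(path)                    # finally unlink the overwritten file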

Step 6: Turn on FileVault (III)
This time when I turn on FV, things look pretty good. It encrypts everything in about an hour and then announces that it's going to delete my old Home directory— I've checked the secure delete checkbox, whatever that does. Unfortunately, whatever it does is bad since 4 hours later it's still securely deleting away. A little research suggests it's safe to abort this, so I give it a hard power reset (did I mention there's no cancel button, or rather that there is one but it's grayed out at this point? Also, no real progress bar, just the old spinning blue candy cane.). Anyway, the machine reboots just fine and I now have an allegedly encrypted home directory and a directory that's named /Users/ekr-<random-numbers>. I figure that's the old home directory and hit it with the old rm -P and it vanishes.

Step 7: Nuke the site from orbit. It's the only way to be sure
At this point, I've been doing a lot of deleting, and it's pretty hard to be sure that I haven't typoed or that the filesystem hasn't screwed me somehow and copied some of my precious precious data to some unused partition, so I decide it would be a good idea to run "Erase Free Space" with 7 passes, just to make sure. I set it for 7 passes and started it up about 5 hours ago. I'll let you know when it finishes. The current promise is 12 hours.

UPDATE (5:55 AM): More progress on the progress bar, but still promising 12 hours.

 

December 10, 2008

What I find baffling about l'affaire Blagojevich isn't that he tried to sell a senate seat. OK, so that probably wasn't going to work out, but it might have and it sure took chutzpah to try it. No, what puzzles me is that he talked about it on the freaking phone! I mean, I worry about talking business deals on the phone, let alone doing crimes. And given that (1) it was known that he was under investigation and (2) Blagojevich was a former prosecutor, he might have suspected that, you know, the FBI was tapping his phone. Like I said, baffling.

P.S. I think we can now add "they're not willing to give me anything except appreciation. Fuck them." to "when the president does it that means that it is not illegal", "Fuck the Jews, they didn't vote for us anyway", and "the bitch set me up."

 

December 9, 2008

Network World reports on Cisco's plans to deliver a telepresence rig with automatic translation:
Cisco will add real-time translation to its TelePresence high-definition conference technology next year, enabling people in several different countries to meet virtually and each hear the other participants' comments in their own languages.

...

It will include speech recognition in the speaker's native language, a translation engine, and text-to-speech technology to deliver the words in a synthesized voice on the other end. Users will also be able to display subtitles if they choose, he said. Both Asian and Western languages will be represented in the initial set, which will later be expanded.

I don't want to sound reflexively negative, but I'm pretty skeptical that this is going to work in any kind of practical way. As described above, it depends on three separate technologies none of which work particularly well. Domain-specific speech recognition systems sort of quasi-work, though they're quite imperfect—United's IVR can barely recognize my frequent flier number. This is of course partly an artifact of bad phone quality (though it's not clear the remote mikes that these telepresence rigs use will be that much better), but it's much easier to build a domain specific system than a generic system. My understanding is that generic speech recognition systems have pretty high error rates. Wikipedia claims 98-99% for generic, continuous speech systems under "optimal condition", which includes training the system for the speaker.

This brings us to the topic of machine translation. You don't need to read up on this. Just try Google's machine translator. Even when it does a good job, it produces annoying, ungrammatical artifacts on the order of one every other sentence or so. And remember that this is written text, which is actually fairly grammatical to start with. Spoken language contains all sorts of odd artifacts, pauses, etc. that don't make the translation any easier. Quasi-grammatical English passed through an error-prone recognition system and then a not-that-accurate translator does not sound like a recipe for accurate results.

The final stage of the pipeline is text-to-speech, which introduces a whole new level of fun. Again, voice synthesis does work, but it often sounds kind of odd, which is part of why systems tend to use pre-recorded voice rather than synthesis.
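
To put rough numbers on the compounding problem: treat the three stages as independent (which is generous, since a recognition error usually makes the downstream translation worse, not just additively wrong) and multiply out some plausible per-stage accuracies. The figures below are illustrative guesses, not measurements of anyone's actual system:

    # Hypothetical per-stage accuracies; the point is the multiplication, not the values.
    recognition = 0.95   # fraction of words transcribed correctly
    translation = 0.90   # fraction of sentences translated acceptably
    synthesis = 0.98     # fraction of output rendered intelligibly

    end_to_end = recognition * translation * synthesis
    print(f"end-to-end: {end_to_end:.0%}")   # roughly 84%

Even with fairly charitable assumptions, something like one utterance in six comes out mangled, and that's before you account for errors feeding into each other.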

So, this may work at some technical level, but I have a hard time believing that listening to a robotic-sounding, ungrammatical, error-prone partial translation during a teleconference is going to be anything other than annoying.

 

December 7, 2008

I caught The Andromeda Strain via Netflix the other night. It's a pretty by-the-numbers adaptation of the classic SF novel, but despite that I found myself ungripped. The best part about the book is how good a job Crichton does of convincing you that you're reading science history, not science fiction. This is partly a matter of the dry, matter-of-fact tone and partly a matter of the just-out-of-reach-but-maybe-tomorrow feel of the technology. Unfortunately, none of this comes through in the movie, which instead has the sort of antique SF feel of 2001: A Space Odyssey. Partly it's the old clothing and hair styles, but I think more importantly it's that in retrospect technology has developed rather differently from that portrayed in the movie.

Exhibit A here is the "Electronic Body Analyzer", a computer that automatically scans you, diagnoses stuff, administers medication, etc. So far so good: we don't have these yet, but maybe we could make one with some effort (though presumably ours wouldn't distract you with trippy music and then give you an unannounced injection). On the other hand, if we were to build one, it would have a zippy high-res interface with alpha-blending, drop shadows, and motion blur. It would probably not look like this:

That's a foot, by the way.

You also probably wouldn't get transmissions from headquarters by teletype.

This blind spot is all over science fiction of this era, of course (see also Aliens). Remember that in 1969, if a computer had any kind of interactive interface, it was some crappy character-based thing, so SF writers can't really be blamed for not anticipating modern interfaces. For some reason it seems easier to imagine computers being smarter than they are than to imagine them having better interfaces. (See also If Isaac Asimov Designed Your Computer).

What's odd is that none of this bothered me when I recently reread the novel. Sure I knew that it was unrealistic, but somehow reading it instead of actually seeing it play out in all its clumsy ASCII art glory let me suspend disbelief a bit.

Something else that works in the book but not the movie is the pacing. Part of the conceit is that there is this super-elaborate procedure for disinfecting the scientists as they descend into the laboratory. It's not clear why you need this at all; presumably if your scientists are sick you would be better off not letting them into the sterile area at all. Anyway, it's all neat and high-tech and lets Crichton show off his creativity, but the filmmakers seem to feel the need to show you the whole thing and it just drags really badly; they don't start seriously investigating the organism till like 50+ minutes into the movie, and at that point you've kind of forgotten that it killed a town full of people.

I wonder if the miniseries is any better.

 

December 5, 2008

The Times reports that H.M., a name familiar to generations of psych undergrads, has died. H.M. was a patient who underwent surgical treatment for a seizure disorder which left him unable to form new memories (think Memento but without the tattoos and the ultraviolence). This made him a popular subject for the study of memory. One of the most interesting features of H.M.'s condition was that he could learn some new physical skills without being conscious of it. When presented with such a task, he would claim never to have tried it before, but would be able to perform it anyway. The Wikipedia article and the Times obit both make good reading.
 

December 4, 2008

It's definitely starting to look like the melamine-contaminated Chinese infant formula incident was not an accident, but rather deliberate. Science has the story:
Researchers say the adulteration was nothing short of a wholesale re-engineering of milk. Weeks ago, investigators established that workers at Sanlu and at a number of milk-collection depots were diluting milk with water; they added melamine to dupe a test for determining crude protein content. "Adulteration used to be simple. What they did was very high-tech," says Chen. Researchers have since learned that the emulsifier used to suspend melamine--a compound that resists going into solution--also boosted apparent milk-fat content.

Sanlu baby formula contained a whopping 2563 mg/kg of melamine, adding 1% of apparent crude protein content to the formula, says Jerry Brunetti, managing director of Agri-Dynamics in Easton, Pennsylvania. Milk, he notes, is only 3.0% to 3.4% protein. Chen says a dean of a school of food science told him that it would take a university team 3 months to develop this kind of concoction.

Investigators have concluded that as-yet-unidentified individuals cooked up a protocol for a premix, a solution designed to fortify foods with vitamins or other nutrients. In this case, it was deadly. Several milk-collecting companies were using the same premix, Chen says: "So someone with technical skill had to be training them."
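
The quoted numbers hang together, by the way. Crude-protein assays (Kjeldahl and friends) don't measure protein at all; they measure nitrogen and multiply by a conversion factor, which is exactly the loophole melamine exploits, since it's about two-thirds nitrogen by mass. A quick back-of-envelope check, using the standard nitrogen-to-protein factor for milk:

    melamine_mg_per_kg = 2563          # figure quoted above
    n_fraction = 6 * 14.007 / 126.12   # melamine (C3H6N6) is ~67% nitrogen by mass
    protein_factor = 6.38              # standard nitrogen-to-protein factor for milk

    apparent_protein = melamine_mg_per_kg / 1e6 * n_fraction * protein_factor
    print(f"apparent extra protein: {apparent_protein:.1%}")   # about 1.1%

That's right in line with the "adding 1% of apparent crude protein content" figure in the article.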

As I said earlier, it's pretty hard to reliably prevent intentional substitution with any practical inspection regime. There's an arms race between the inspectors and the cheaters, and the cheaters appear to be ahead; indeed, the melamine looks like the first stage in that arms race, since it's intended to defeat a quality control check on the protein content. The cheaters presumably know what tests are being performed, and if you don't care about whether your product kills people, you probably have a lot of flexibility in what kinds of countermeasures, masking agents, etc. you can use.

 
Now that it looks like there's a serious chance of bailing out the car companies, you're starting to hear the suggestion that any bailout should come with conditions. For instance, here's Pat Garafalo at ThinkProgress:

More importantly though - as Pelosi and Reid said - "federal aid should come with 'strong conditions,' such as requirements that car makers build more fuel-efficient vehicles." Bill Scher at OurFuture writes, "With the auto industry in dire straits, we taxpayers have maximum leverage to demand the cars necessary to help lower energy costs, cut carbon emissions and reduce our dependency on foreign oil."

I agree that it seems likely that making cars with lower carbon emissions (which at least for now more or less means more energy-efficient cars) would be a good thing, and absent much higher gas prices (or some Pigouvian tax) it also seems likely that the US auto manufacturers won't do this on their own, and it's obviously true that the moment when the manufacturers are begging for a bailout is a good time to extract concessions. So, this may not be crazy policy. On the other hand, it seems somewhat problematic to put the executive branch in a position where it can just impose this sort of condition without going back to Congress. $700 billion (or whatever) is a huge amount of money, and if the economy gets worse, more and more companies are going to be wanting bailouts. Giving the executive the discretion to impose essentially any conditions it wants in return for a bailout starts to look a lot like creating a command economy with the president in command. Now, that may seem like a good thing if the current President's political views happen to line up with yours, but taking the long view, the US political system is designed to avoid giving any individual actor too much unchecked power. Try imagining this power in the hands of a politician you hate (which ought to be pretty easy, seeing as we're about to see a polar transition in the presidency, so it's likely you hate either the outgoing or the incoming president).

 

December 2, 2008

Today I had two people send meeting invites (Content-Type: text/calendar) to one of my GMail accounts. Ordinarily, I can read .ics files just fine; OS X knows what to do with them: bring up iCal and add them to my calendar. Unfortunately, GMail has decided to do me a favor: instead of just letting me download the attachment and fire up the appropriate helper app, it fires up its own calendar app and offers to let me add the event to my Google calendar. My what? I don't even recall asking for a Google calendar. Apparently, I can subscribe to that calendar in iCal via CalDAV, but that's not what I want: I just want to add the event to my ordinary calendar.

OK, this is irritating but workable. I'll just forward the message to one of my other mail accounts which I read with IMAP/Emacs and then download the .ics file and open it with iCal as per usual. But nooo.... When I forward the message, Gmail strips off the .ics attachment and just sends a text version. How, uh, helpful.
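
One workaround, at least in principle, is to pull the text/calendar part straight off the server over IMAP rather than relying on Gmail's forwarding. Something along these lines should do it; this is a sketch, not something I've tested against Gmail, and the host, credentials, and date filter are placeholders:

    import imaplib, email

    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login("user@example.com", "password")      # placeholders
    conn.select("INBOX", readonly=True)

    # Walk recent messages and save any text/calendar parts Gmail won't forward.
    _, data = conn.search(None, "SINCE 01-Dec-2008")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            if part.get_content_type() == "text/calendar":
                fname = part.get_filename() or "invite-%s.ics" % num.decode()
                with open(fname, "wb") as f:
                    f.write(part.get_payload(decode=True))

    conn.logout()

From there, opening the saved .ics in iCal works as usual. Of course, having to script around your mail provider to read a calendar attachment is kind of the point.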

Oh, did I mention that the iPhone doesn't seem to be able to handle .ics either? I read these same messages via IMAP on my iPhone but the attachment just sits there. Outstanding!