EKR: September 2006 Archives

 

September 30, 2006

Terence and I went to see Jackass Number 2 (not bad by the way) and for the second time in a row, a chain theater refused to take a credit card--they only take cash, they say. Now, I've certainly seen merchants that didn't take credit cards before, but they're usually small ones, not big, capital-intensive businesses like movie theaters. So, what's going on?

Some theories:

Transaction costs
The usual reason that merchants resist taking credit cards is the transaction fee that merchants have to pay on every credit card transaction. Some fraction of this fee is often fixed, which penalizes small transactions--and even though $10 for a movie ticket feels pretty extortionate, it's a fairly small transaction as these things go. I don't give this theory much credence: lots of other merchants which deal primarily in small transactions (fast food restaurants, for instance) take credit cards. I don't see any reason to think movie theaters are any different.

Chargebacks
Another possibility here is that the theaters are concerned about chargebacks by dissatisfied customers. Obviously, this is going to be more of a problem with movies (which are, after all, often unsatisfactory) than fast food. I don't consider this plausible for two reasons. First, I don't expect the credit card company would actually permit you to charge back a movie ticket provided you actually watched the movie--and since this is a card present transaction, it would be pretty hard to claim that it was fraud. Second, you can already buy movie tickets with a credit card through Fandango, so wouldn't they have the same chargeback problem?

Price Discrimination
My suspicion is that the real reason to forbid credit card purchases is to allow for price discrimination. Fandango and Movietickets both levy a surcharge, which would be much harder to do if people were buying in person (the credit card associations prohibit credit card surcharges, although you can offer a "cash discount"; in any case it would be socially quite difficult). Limiting the use of credit cards to ticketing-at-home services like Fandango allows for market segmentation.

The drawback with this theory is that I don't know what kind of financial arrangements the movie theaters have with the ticketing services, so I don't know whether this actually allows them to effectively charge more for credit cards. However, I do know that Fandango is owned by AMC and Regal, so they at least are collecting the money.

 

September 21, 2006

Recent work on attacking HMAC highlights a point about secure protocol design. As I mentioned, the attack requires having a lot of (m,X) pairs. Now, some protocols, like SSL, use what's called authenticate-then-encrypt (AtE) in which you apply the MAC and then encrypt the message (and usually the MAC). Other protocols, like IPsec, use what's called encrypt-then-authenticate (EtA) in which you encrypt the message and then compute the MAC over the ciphertext. A well-known paper by Hugo Krawczyk showed that there were (contrived) ways of using AtE with otherwise secure algorithms which were themselves insecure. As a result, the crypto community generally recommends EtA.

What's interesting to note, however, is that encrypting the traffic (and MAC), as is done in AtE, hides the plaintext (and the MAC) from the attacker. This would seem to preclude mounting a Contini-Yin style attack on the underlying hash function. By contrast, if you use EtA, then the attacker gets a chance to attack the MAC first and then move on to the encryption.
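To make the ordering concrete, here's a minimal sketch in Python. The keystream cipher is a toy stand-in (SHA-256 in counter mode) just so the example runs; the key names and record layout are made up for illustration and aren't taken from any real protocol.

    import hashlib
    import hmac

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy SHA-256-in-counter-mode keystream, purely for illustration.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

    def mac(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    enc_key, mac_key, nonce = b"enc-key", b"mac-key", b"nonce-01"
    m = b"some plaintext record"

    # Authenticate-then-encrypt (SSL/TLS style): MAC the plaintext, then
    # encrypt the message and MAC together, so (m, X) pairs never appear
    # in the clear on the wire.
    ate_record = encrypt(enc_key, nonce, m + mac(mac_key, m))

    # Encrypt-then-authenticate (IPsec style): encrypt first, then MAC the
    # ciphertext, so the attacker sees ciphertext/MAC pairs directly.
    c = encrypt(enc_key, nonce, m)
    eta_record = c + mac(mac_key, c)

The point here is only about what the eavesdropper gets to look at; it says nothing about which ordering is preferable on other grounds.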

 
One of the basic primitives used in communications security protocols is called a Message Authentication Code (MAC). What we want is a function MAC that takes a secret key k and a message m and produces a characteristic fixed-length output X. I.e.,
X = MAC(k, m)

The way that you use a MAC is that you and I share a secret key and when we want to send messages around we attach the MAC value of the message using that key. Anyone who doesn't have that key can't generate matching MAC/message pairs and so can't forge messages that appear to be from one of us. MACs are used all over cryptographic protocols because they're an efficient way of providing an integrity check.
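As a concrete (if minimal) sketch, here's what attaching and checking a MAC looks like using HMAC-SHA-256 from Python's standard library; the key and message are obviously just placeholders.

    import hashlib
    import hmac

    def mac(key: bytes, message: bytes) -> bytes:
        # X = MAC(k, m), instantiated here with HMAC-SHA-256.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(key: bytes, message: bytes, tag: bytes) -> bool:
        # Recompute the MAC and compare in constant time.
        return hmac.compare_digest(mac(key, message), tag)

    k = b"shared-secret-key"             # known only to the two parties
    m = b"transfer $100 to Bob"

    X = mac(k, m)                        # sender attaches X to the message
    assert verify(k, m, X)               # receiver accepts the genuine pair
    assert not verify(k, m + b"0", X)    # a tampered message is rejected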

In order for a MAC to be useful, we would like it to have (at least) the following properties, listed roughly in order of descending strength (note that we assume the attacker doesn't have the key).

  1. Given a set of MAC values X it's infeasible to construct a new (m, X) pair. (If there's any dependence on the key at all, this looks pretty easy to meet.)
  2. Given a set of (m, X) pairs, it's infeasible to recover k.
  3. Given a set of (m, X) pairs, it's infeasible to construct a new (m, X) pair.
  4. Given a set of (m, X) pairs, it's infeasible to construct a message m' that produces any of the known X values.

Because a MAC takes an arbitrary input and produces a fixed-length output which is supposed to be hard to forge, it sort of feels like a hash function, except that it's got a key. It's very natural to try to use hash functions to construct MACs and indeed many of these functions went under the name "keyed hash". It turns out that many of the obvious constructions have problems and eventually one particular hash-based construction called HMAC came to dominate the space, largely based on the fact that it came with proofs of its security—assuming you made certain assumptions about the hashes it was based on. HMAC is used more or less everywhere these days.
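For the curious, the HMAC construction itself is just two nested hash invocations with the key XORed into fixed pads. Here's a from-scratch sketch (checked against the standard library's implementation); real code should of course just use a vetted HMAC library rather than rolling its own.

    import hashlib
    import hmac

    def hmac_sha256(key: bytes, message: bytes) -> bytes:
        # HMAC(k, m) = H((k' ^ opad) || H((k' ^ ipad) || m))
        block_size = 64                       # SHA-256 block size in bytes
        if len(key) > block_size:             # overlong keys are hashed first
            key = hashlib.sha256(key).digest()
        key = key.ljust(block_size, b"\x00")  # then zero-padded to a full block
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5c for b in key)
        inner = hashlib.sha256(ipad + message).digest()
        return hashlib.sha256(opad + inner).digest()

    k, m = b"key", b"message"
    assert hmac_sha256(k, m) == hmac.new(k, m, hashlib.sha256).digest()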

That was all fine before Wang et al. started successfully attacking MD5 and SHA-1 (not too out of date roundup here and update here). These attacks called into question the assumptions underlying the original HMAC proofs. Mihir Bellare produced new proofs of security with weaker assumptions.

Unfortunately, those assumptions don't seem to actually hold for MD5 and SHA-1 in the face of the current attacks, as shown by a new paper by Contini and Yin. They demonstrate attacks on HMAC-MD4, HMAC-MD5, HMAC-SHA-0, and reduced-round HMAC-SHA-1 which lead to recovery of the key (a violation of property 2) and forgery of MACs (a violation of property 3).

That said, this is currently a don't-panic situation, for a number of reasons. First, mounting the attack requires a very large number of (m, X) pairs: 2^47 for MD5 and 2^84 for SHA-0 (if we assume that SHA-1 is stronger than SHA-0, then this tells us something about the strength of SHA-1). So, it requires one of the endpoints being able to do a truly enormous amount of computation. That isn't the sort of thing that would go unnoticed on your SSL/TLS or SSH server; since you seem to need those pairs for each key, it's far more traffic than you'd normally protect under a single key, and it's not clear how you'd force the server to compute the pairs for you. Second, it seems to still require a lot of computation (order 2^45 operations with MD5) to actually forge a MAC. I haven't studied the Contini and Yin paper that carefully, but I'd guess that the cost would be higher with SHA-1.

The bottom line, then, is that this doesn't appear to be a practical threat to current protocols (that's what the authors say too). But it's definitely an interesting result and should make you glad that people are working on transitioning away from MD5 and SHA-1.

UPDATE: "attack" -> "attach" (thanks to "that guy")

 

September 16, 2006

Crooked Timber has a long discussion about whether Word's grammar checker is a good or bad thing. A lot of the discussion seems to start from the premise that students ought to learn how to write Standard Written English (SWE), and then ask whether Word's grammar checker helps or hurts that. Most of the rest of the discussion is complaining about what a bad job Word does of reproducing/enforcing SWE. I'm not sure I think either issue is all that relevant.

First, to the question of whether it's important for students to learn to write SWE. I definitely think it's important for students to learn how to structure their thoughts in order to present them, but it seems to me that it's an open question whether learning the kind of grammar that MS Word is designed to correct is more like that or is more like penmanship—a skill which has been rendered more or less obsolete by modern typesetting.[1] Sure, there are occasions where people prefer the flexibility and beauty of hand-drawn letterforms, but they're getting rarer and rarer, especially as people get used to the look of machine-typeset documents.

This brings me to my second point. The rules of SWE, like the rules of any language, are to some extent arbitrary.[2] So, while it's certainly true that Word isn't that great at reproducing what a copy-editor using the CMS or MLA Handbook would produce, a lot of the informal writing that people come in contact with is going to look like the Word style because it was produced using Word. At some point it wouldn't be surprising to see SWE evolve towards the Word style, at which point complaints about MS's inability to enforce the old arbitrary rules will be irrelevant--as will complaints about people's inability to remember the new arbitrary rules, since the computer will do the remembering for you.

[1] Pre-modern typesetting, since single-font dot-matrix printers were already killing handwriting. Laser printers and programs like Frame just completed the job and killed manual typesetting and layout.
[2] As John McWhorter points out in his otherwise fairly annoying Doing Our Own Thing, written English isn't really that much like spoken English, and in other languages such as Arabic the difference is much greater.

 

September 15, 2006

Gregg Easterbrook is back showing his ignorance of physics in Slate (why do people let Easterbrook write about science, anyway?). What prompted it this time is Lee Smolin's new book on string theory. But it's sort of comforting to see Easterbrook peddling the same line of nonsense:
And consider this. Today if a professor at Princeton claims there are 11 unobservable dimensions about which he can speak with great confidence despite an utter lack of supporting evidence, that professor is praised for incredible sophistication. If another person in the same place asserted there exists one unobservable dimension, the plane of the spirit, he would be hooted down as a superstitious crank.

I hatcheted an almost identical argument about three years ago, and it hasn't gotten any more sensible since. To make things even more fun, Easterbrook has managed to turn the topic to intelligent design (remember, he's some sort of a crypto-creationist).

Really, string theory isn't a theory at all. Creationists who oppose the teaching of Darwin have taken to deriding natural selection as "just a theory," and Darwin's defenders have rightly replied that in science, "theory" does not mean idle speculation. Rather, it is an honored term for an idea that has been elaborately analyzed, has not been falsified, and has made testable predictions that have later proven to be true. The ordering of scientific notions is: conjecture, hypothesis, theory. Pope John Paul II chose his words carefully when in 1996 he called evolution "more than a hypothesis." Yet the very sorts of elite-institution academics who snigger at creationists for revealing their ignorance of scientific terminology by calling evolution "just a theory" nonetheless uniformly say "string theory." Since what they're talking about is strictly a thought experiment (just try proving there are no other dimensions), from now on, "string conjecture," please.

Now, there is one non-stupid thing here: the term "theory" means something pretty different in the context of "theory of evolution" than it does in the context of "string theory", and when we speak of the "theory of evolution" it's not because it's "just a theory". But none of this is because "string theory" is somehow misnamed; it's just that the terms aren't really used that consistently. Moreover, "X theory" isn't really the same as "the theory of X". In my experience "X theory" tends to mean "the stuff we've learned about Xs." Consider decision theory or automata theory. In both cases, they're less a single theory like the theory of relativity and more a generic name for the field of study--and note that at least in the case of decision theory, there are a variety of models, many of which don't map that well to what we know of human behavior, but which are studied anyway.

None of this is to say that string theory doesn't have problems--I'm not expert enough to know one way or the other--but the notion that the problem it has is that it posits extra dimensions in order to make the math work out is... weird.

Oh, and one more thing: can we stop talking about the "teaching of Darwin"? We're not talking about Hillel or Confucius here; sure Darwin had the original idea of natural selection but evolution is a lot more than his "teaching", and our assessment of what's true doesn't depend at all on what Darwin personally thought. That's one of the important differences between science and religion.

 

September 12, 2006

The list of people/organizations investigating the HP pretexting scandal now includes:

Here's the thing: it's reasonably clear what HP's investigators did, and while a lot of people seem to want to get in on the action, what we're a little short on is specifics about exactly what law they think has been broken. Typically, this is a sign that while everyone thinks something bad has happened, it's not clear it's actually illegal. Not that that interferes with getting your name in the newspapers by investigating it, though...

 

September 11, 2006

Economists love to talk about risk homeostasis, the theory that when activities are made safer, people respond by acting in riskier ways. The classic example here is the Munich taxicab study, in which it was found that drivers of taxis equipped with ABS had the same accident rate as those without ABS (see here for a discussion of the experiment by Gerald Wilde, the main proponent of the theory), and drove more aggressively--with the implication being that they expected the ABS to help them recover.

There's a lot of controversy over the extent to which risk compensation is actually a factor in people's behavior, but here's an interesting result from Ian Walker at U. Bath: when riders are wearing helmets, drivers are less careful:

To carry out the research, Dr Walker used a bike fitted with a computer and an ultrasonic distance sensor to find drivers were twice as likely to get close to the bicycle, at an average of 8.5cm, when he wore a helmet.

The experiment, which recorded 2,500 overtaking motorists in Salisbury and Bristol, was funded by the Engineering and Physical Sciences Research Council.

Dr Walker, a traffic psychologist from the University's Department of Psychology, said: "This study shows that when drivers overtake a cyclist, the margin for error they leave is affected by the cyclist's appearance.

"By leaving the cyclist less room, drivers reduce the safety margin that cyclists need to deal with obstacles in the road, such as drain covers and potholes, as well as the margin for error in their own judgements.

"We know helmets are useful in low-speed falls, and so definitely good for children, but whether they offer any real protection to somebody struck by a car is very controversial.

"Either way, this study suggests wearing a helmet might make a collision more likely in the first place," he added.

Dr Walker thinks the reason drivers give less room to cyclists wearing helmets is because they see them as "Lycra-clad street warriors" and believe they are more predictable than those without.

I don't know of any end-to-end controlled trials of mortality with and without bicycle helmets. Looks like I need to spend some time with pubmed.

 
Have you ever wondered what happens when you call customer service? EG reader Nagendra Modadugu pointed me at this 12 minute movie that explains everything.
 

September 10, 2006

After years of getting free diapers in the mail (and way too small for me or Mrs. Guesswork!) I finally get something useful: Gillette sent me a Fusion razor with a single free cartridge, so I figured I'd give it a try.

First, a word about my shaving habits. I've been shaving with a Mach 3 Turbo for years now and it does a good job. I shave both my face and my head, but only every three days or so, and I have a pretty thick beard, so it's pretty important to me for the razor to be able to hack through the growth. I typically make two passes, one with the grain to get rid of most of the hair, and a second against it to get a smooth shave, so it's important to be able to shave against the grain with minimal irritation.

Where the Mach 3 (and Mach 3 Turbo) has three blades, the Fusion has five, plus an extra blade on the back for trimming the edge of your beard, sideburns, etc. The head is fairly bulky-feeling--pretty much the same width as the Mach 3, but about 4 mm taller (16 mm as opposed to 12). This doesn't seem to be blade width, though—the blades appear thinner—but rather the rubber guide directly under the blades. Despite the apparent clumsiness of the bigger head, it feels reasonably sensitive and the blades conform reasonably well to your face—at least as well as the Mach 3 did. The quality of the shave is excellent. It's at least as smooth as—and I think smoother than—the Mach 3, and the blades seem to cut more cleanly with less irritation.

I'm not sure what to think about the trimming blade on the back. It seems like bit of a gimmick, but I suppose it could be useful to keep your beard straight, which I usually do with a trimmer. It's set perpendicular to the main blades, so I kind of expected to cut myself, but it seems like they've engineered it so you don't and it would certainly be convenient to be able to manage your facial hair with a single tool.

So, my first impression is that the Fusion is a better razor than the Mach 3, but I'm going to want to try it a few more times before I decide if it's worth the extra dollar per cartridge.

 

September 9, 2006

One of the justifications of the NSA's phone database program was that traffic analysis wasn't much of an invasion of privacy. The HP pretexting debacle shows just how bogus that is. By simple phone records analysis it was possible to determine which HP board member was leaking records to the press. And thanks to a somewhat confusing legal climate and weak access controls, the phone records are trivial to get.

That said, it's sort of surprising that the leaking board member (George Keyworth) didn't take more precautions to protect himself. It's not like it's difficult to get anonymous cell phones and e-mail accounts. Next time I leak corporate secrets I'll be sure to use one.

 

September 7, 2006

Fresh Air tonight was on the FCC's indecency rules. The last guest was Terry Winter from the Parents Television Council, pitching the usual line about how we need the FCC to restrict indecency on television in order to protect children. Thing is, though, that pretty much every television manufactured since 2000 has a V-chip and so can be programmed to stop children from watching "offensive" programming. So remind me again why we all have to have our programming dumbed down to the level suitable for 12-year-olds?
 

September 6, 2006

California is getting ready to mandate warning labels on WiFi equipment. Here's the text:
22948.6. (a) A device that includes an integrated and enabled wireless access point, such as a premises-based wireless network router or wireless access bridge, that is for use in a small office, home office, or residential setting and that is sold as new in this state for use in a small office, home office, or residential setting shall be manufactured to comply with one of the following:

(1) Include in its software a security warning that comes up as part of the configuration process of the device. The warning shall advise the consumer how to protect his or her wireless network connection from unauthorized access. This requirement may be met by providing the consumer with instructions to protect his or her wireless network connection from unauthorized access, which may refer to a product manual, the manufacturer's Internet Web site, or a consumer protection Internet Web site that contains accurate information advising the consumer on how to protect his or her wireless network connection from unauthorized access.

(2) Have attached to the device a temporary warning sticker that must be removed by the consumer in order to allow its use. The warning shall advise the consumer how to protect his or her wireless network connection from unauthorized access. This requirement may be met by advising the consumer that his or her wireless network connection may be accessible by an unauthorized user and referring the consumer to a product manual, the manufacturer's Internet Web site, or a consumer protection Internet Web site that contains accurate information advising the consumer on how to protect his or her wireless network connection from unauthorized access.

(3) Provide other protection on the device that does all of the following:
(A) Advises the consumer that his or her wireless network connection may be accessible by an unauthorized user.
(B) Advises the consumer how to protect his or her wireless network connection from unauthorized access.
(C) Requires an affirmative action by the consumer prior to allowing use of the product. Additional information may also be available in the product manual or on the manufacturer's Internet Web site.

(4) Provide other protection prior to allowing use of the device, that is enabled without an affirmative act by the consumer, to protect the consumer's wireless network connection from unauthorized access.
(b) This section shall only apply to devices that include an integrated and enabled wireless access point and that are used in a federally unlicensed spectrum.
(c) This section shall only apply to products that are manufactured on or after October 1, 2007.

The basic operating principle of this measure is that consumers would be better off if they secured their wireless networks, if only they weren't too stupid to know what's good for them. The competing theory, of course, is that it's not actually that important to secure your wireless network, so people don't do it.

It should be fairly easy to measure the effectiveness of this measure: each piece of 802.11 equipment has a unique MAC address. Each manufacturer gets a range of addresses and typically assigns them in some predictable order, which makes it reasonably easy to determine roughly when a particular piece of hardware was manufactured. So, starting in mid-2008 or so, you should be able to drive around and see whether equipment manufactured after 2007 has substantially better security deployment (though note that people may be smoothly deploying more security anyway, so you're looking for a change in the trendline, not necessarily just more deployment).
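Here's a sketch of what the tally might look like, with made-up BSSIDs and a made-up OUI-to-era mapping (building the real mapping from manufacturers' address-assignment records is the actual work):

    from collections import defaultdict

    # Hypothetical wardriving scan results: (BSSID, encryption enabled?)
    scan = [
        ("00:0F:66:11:22:33", False),
        ("00:1B:63:AA:BB:CC", True),
        ("00:1B:63:DD:EE:FF", True),
    ]

    # Hypothetical mapping from OUI prefix to rough manufacture period,
    # derived from each manufacturer's address-assignment order.
    oui_to_era = {
        "00:0F:66": "pre-Oct-2007",
        "00:1B:63": "post-Oct-2007",
    }

    counts = defaultdict(lambda: [0, 0])      # era -> [secured, total]
    for bssid, secured in scan:
        era = oui_to_era.get(bssid[:8])
        if era is None:
            continue                          # unknown OUI, skip it
        counts[era][0] += secured
        counts[era][1] += 1

    for era, (secured, total) in sorted(counts.items()):
        print(f"{era}: {secured}/{total} secured ({100 * secured / total:.0f}%)")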

 

September 2, 2006

Lockheed-Martin shareholders will be glad to know that LockMart has been chosen to build the "Orion" moon spacecraft. Here's Tom Lehrer in 1965:
And what is it that will make it possible to spend 20 billion dollars of your money to put some clown on the moon? Well, it was good old American know-how, that's what. As provided by good old Americans like Dr. Wernher von Braun.

I guess we'll provide our own know-how, and it's $104 billion this time, but the bit about the clown is pretty much the same--well, except for the fact that this is clown number thirteen rather than clown number one.

 

September 1, 2006

The IETF has a rather unusual method for selecting its leadership, at the heart of which is the Nominating Committee (nomcom), which selects the candidate lists. The nomcom is randomly selected from a list of eligible volunteers. The actual selection is one of those algorithms that could only have been designed by nerds: a cryptographic public-commitment and verifiable-random-choice scheme. It works as follows:
  1. People volunteer to serve.
  2. Their eligibility is checked.
  3. The filtered list is sorted in alphabetical order and published at time X. Say it has N members.
  4. A bunch of unpredictable data (stock prices) available only at time X + 7 days is used as the seed of a cryptographic pseudorandom number generator.
  5. The output of that CPRNG is used to generate 10 unique numbers from 0 to N-1 (nerds, remember).
  6. The corresponding people on the list are the nomcom members.
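The real procedure is specified in RFC 3797, but the general shape is easy to sketch: hash the public seed data to get a pseudorandom stream and draw indices from the published list without replacement. Everything below (the names, the seed string) is made up for illustration.

    import hashlib

    def select_nomcom(volunteers, seed_data, count=10):
        # Anyone holding the published list and the public seed data can
        # recompute exactly this result, so the selection is verifiable.
        pool = sorted(volunteers)             # the list published at time X
        chosen = []
        counter = 0
        while len(chosen) < count and pool:
            digest = hashlib.sha256(f"{seed_data}/{counter}".encode()).digest()
            index = int.from_bytes(digest, "big") % len(pool)
            chosen.append(pool.pop(index))
            counter += 1
        return chosen

    volunteers = ["alice", "bob", "carol", "dave", "erin", "frank", "grace"]
    seed = "IBM=81.37;GE=34.12;XOM=67.51"     # e.g. closing prices at X + 7 days
    print(select_nomcom(volunteers, seed, count=3))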

So, from a cryptographer's perspective, the nice thing about this design is that it mostly can't be cheated.

  1. Because the list is published before the random numbers are known, there's no way to structure it in order to cheat.
  2. You can of course include fake members, but if they're not selected they don't bias the result, and if they are, they can be challenged. Similarly, you can remove people you don't like, but they can challenge the list in the week between publication and selection.
  3. Because the seed data is public anyone can verify the result.
  4. Everyone on the list has an even chance of being chosen.[1]

In other words, it's a classic cryptographic design in that it involves placing total trust in the math and no trust whatsoever in the people running the algorithm--and that's what went wrong this year.

The person who is responsible for running the algorithm is the nomcom chair, who is selected by the ISOC President. This year, the chair made two errors:

  1. He failed to remove at least one person who was ineligible from the list.
  2. He failed to publish the list ahead of time (step 3 above).

Error (1) wouldn't ordinarily be a big deal, because as I indicated in point (2) above it doesn't affect the outcome. But error (2) is a big deal because it means that the nomcom chair had an opportunity to bias the process. The easy way to do this looks like this:

  1. Collect a bunch of "fake" volunteers. These can be either fake names, or people who are ineligible, or who don't care if they're volunteered or not.
  2. Collect the random number output as described in step (5) above.
  3. Generate a bunch of different volunteer lists with different combinations of fake names.
  4. Each of these lists induces a different nomcom result.
  5. Select the nomcom you like best and publish the corresponding list as the list.
  6. Publish your chosen nomcom.

What stops this attack, of course, is that the list is published (a cryptographer would say "committed to") in advance. But because that wasn't done in this case, there's the possibility that the nomcom chair could have cheated. Now, as a practical matter nobody believes (at least everybody says they don't) that the nomcom chair did anything nefarious, but because the whole process is predicated on the theory that he might cheat and is designed to make it impossible, having a situation where he had the opportunity puts us in an uncomfortable position.
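Publishing the list in advance is exactly a commitment: once it's out, the chair can't quietly substitute a different list after seeing the seed. A hash digest works the same way and makes the "did the list change?" check mechanical. This is just the general idea sketched in code, not the IETF's actual procedure (which publishes the list itself):

    import hashlib

    def commit(volunteer_list):
        # Publish this digest (or the list itself) before the seed data exists.
        canonical = "\n".join(sorted(volunteer_list)).encode()
        return hashlib.sha256(canonical).hexdigest()

    # Before the stock prices (the seed) are known:
    published = commit(["alice", "bob", "carol", "dave"])

    # After the seed is known, anyone can check that the list actually used
    # matches the one committed to, so it can't have been reshuffled.
    assert commit(["alice", "bob", "carol", "dave"]) == published
    assert commit(["alice", "bob", "carol", "mallory"]) != published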

The obvious thing to do, of course, is simply to run the process over, and that's indeed what's happened. The problem is that that isn't really adequate either: we've already seen the current nomcom, so anyone who really hates it benefits from a do-over, even though nothing about the composition of the new nomcom can be predicted other than that it's a fresh random draw. So, given that we have one set of selections in hand, it's no longer possible for the process to be unbiased. And we can't really exclude the possibility that the chair cheated, even though, as I said, nobody believes it.

As I indicated earlier (and as Phill Hallam-Baker just argued on the IETF list), a lot of the problem here is designing a system which is cryptographically secure but really intolerant of human error--and then expecting that system to give 100% assurance of fairness. All such systems involve a human element, and you need to be prepared for what happens when the humans screw up and then be willing to live with less than 100% certainty.

[1] This actually isn't entirely true because the CPRNG produces numbers in the range 1..2^128, which doesn't quite evenly divide by N, so there's a tiny bias towards people with indices < 2^128 % N, but it's too tiny to be important.
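If anyone cared, the standard fix is rejection sampling: throw away draws that fall in the "short" tail of the range so that every index is exactly equally likely. A quick sketch:

    import hashlib

    def unbiased_index(seed: bytes, n: int) -> int:
        # Largest multiple of n that fits in a 128-bit draw; values at or
        # above it come from the short tail and are rejected.
        limit = (2 ** 128 // n) * n
        counter = 0
        while True:
            draw = int.from_bytes(
                hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()[:16],
                "big",
            )
            if draw < limit:
                return draw % n           # now exactly uniform over 0..n-1
            counter += 1                  # reject and redraw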