August 2008 Archives

 

August 31, 2008

I'm not taking any position on the Sarah Palin "Troopergate" incident, but I was intrigued by the story that Trooper Mike Wooten used a Taser on his stepson:
Payton advised that a year ago Mike Wooten had tased him. After further questioning, Payton stated that he was in the living room, when Mike was showing him and his cousin, Bristol, his equipment. He stated that Mike asked him if he wanted to try it (the Taser) and he agreed. Payton then explained that Mike shot the Taser into the box inside the garage. He then taped a probe to his upper left sleeve and another on his right thigh. Mike then had him kneel on the floor.

Payton stated that the first time he pulled the trigger nothing happened because the probe on his sleeve didn't make contact. The second time he doesn't remember what happened but he knows that he tightened up. Payton was asked if it hurt him and he first said that it didn't hurt and then yes it hurt. Payton then advised that he tightened up and it hurt for only a second and was over. Payton related that he had a welt on his arm where the probe was taped.

Payton advised that Bristol Palin watched the event take place, and that she made it known that she didn't want him to be tased. Payton stated that Bristol was scared for him, and that she was [al?]most in tears. Payton related that his mother was aware that he was going to be tased and that [s?]he was upstairs with the children. Payton advised that his mother was yelling not to do it. Payton stated that he wanted to be tased to show that he's not a mommy's boy in front of Bristol. Following being tased he went upstairs to tell his mother that he was fine.

Wooten tells a version of the story in which Payton was a lot more eager to try out the taser, but even Bristol's somewhat more negative version has Payton volunteering to be tased.

Obviously, it's arguable that no matter how willing your kid is, you shouldn't be tasing them, seeing as it's not entirely safe. On the other hand, I tend to think your average 11-year-old kid would think that a father who would let him try his Taser was pretty much the coolest dad ever.

 

August 27, 2008

Wired complains about a "massive iPhone security hole", namely that the keyboard lock does not work as expected:
You're a smart, safety conscious iPhone user, right? You keep the phone set to require a 4-digit passcode every time it wakes up, so if you ever lose your baby, all your personal information is safe. But if you are running v2.0.2 of the iPhone operating system, you might as well not bother. A simple hack will get anybody past your PIN code with free access to all your mail, contacts and bookmarks. Ouch!

Acting on a tip from the Mac Rumors forums, Gizmodo's Jesus Diaz whipped up a video of the exploit in action, a ridiculously easy two-step process:

1. Tap emergency call.

2. Double tap the home button.

This drops you into the iPhone's "favorites" section. From here you can make calls or send e-mail, and with a few steps you can browse to the Address Book and then on to Mail, Safari or the SMS application.

I'm not saying this is the best designed feature I've ever seen. Obviously, if you have a PIN lock on your phone you'd prefer to not have it be easily bypassed. That said, it's important to be realistic about what a PIN-based lock like this can do, even in principle, remembering that this person has your phone in hand. There are two things you might want to secure:

  • People making phone calls with your account.
  • Your data.

As far as people making phone calls, your account information is embedded in the SIM card, which an attacker can just pop out and cram into their phone. You can block this by installing a SIM PIN (the iPhone supports this) which needs to be entered every time you power on your phone, but it's not built into the keyboard/screen lock.

With regard to your personal data, remember that the iPhone stores it on the flash memory somewhere. In principle, it could be encrypted (though I don't think it is), but unless there is a hardware security module, the only source of keying material entropy is the PIN, and if someone takes an image of the flash memory, they can mount a dictionary attack on the PIN. Based on the iPhone teardowns I've seen, there doesn't seem to be an HSM anyway. Interestingly, you don't seem to be able to extract the data from the iPhone by syncing to it, at least not the trivial way: iTunes prompts you for a PIN before syncing. Of course, I don't know if that's enforced on the phone or just in iTunes. If the latter, then you should be able to write your own program that sucks the data off without entering a PIN.
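
To see why a 4-digit PIN buys you essentially nothing against an offline attack, here's a minimal sketch in Python. The key derivation scheme is hypothetical (this is not Apple's actual design); the point is just that 10^4 candidates is a trivially searchable space.

import hashlib

# Hypothetical scheme: the data encryption key is derived solely from the
# 4-digit PIN. NOT Apple's actual design; it just illustrates the search space.
def derive_key(pin):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), b"per-device-salt", 1000)

def crack(target_key):
    # Exhaustive search over all 10,000 possible PINs.
    for i in range(10000):
        pin = "%04d" % i
        if derive_key(pin) == target_key:
            return pin

print(crack(derive_key("4521")))  # recovers "4521" in seconds on a laptop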

The bottom line here is that the iPhone isn't some sort of vault for your data. If you want it protected, use strong encryption or keep it on a device you don't plan to lose.

 

August 26, 2008

In Slate, Darshak Sanghavi argues against lowering the drinking age from 21 to 18. Sanghavi makes a reasonably convincing argument that raising the drinking age has suppressed teen drinking. That's not that surprising, seeing as it's a lot harder for a 17-year-old to impersonate a 21-year-old than an 18-year-old. And then he closes with:
Of course, in the end a lot of teens will binge-drink, no matter what the law says. But that's not an argument against making the legal age 21 years old to buy and consume it. (After all, a third of high-schoolers have smoked marijuana, and few people want to legalize it for them.) Rather, the current law is best viewed as a palliative medical treatment for an incurable condition. Chemotherapy can't cure terminal cancer, but it can make patients hurt a little less and perhaps survive a little longer. Similarly, the current drinking age undeniably reduces teen binge-drinking and death a little bit, without any bad side-effects. When there's no complete cure, though, desperate people are vulnerable to the dubious marketing hype of snake-oil peddlers--which is all the Amethyst Initiative is offering up now.

I don't really have a strong opinion on the right drinking age—though I seem to remember that when I was 16 I thought 18 sounded pretty good, and I wouldn't want to sign an affidavit that I never drank before I was 21. That said, it's not true that there are no "bad side-effects", unless, that is, you ignore the hedonic benefits to the teens in question from having a few drinks. But of course once you ignore hedonic benefits, why not jack the drinking age to 31 or 41 instead of 21?

 

August 25, 2008

As everyone now knows, the BPA leaching out of polycarbonate Nalgene bottles will cause instant death, so many people have gone looking for alternatives. Although Nalgene has a new plastic offering using a copolyester called Tritan (which looks functionally, though not chemically, pretty similar to polycarbonate), a lot of people have been going for metal bottles. The two leading contenders are the venerable aluminum Sigg containers and the stainless steel Klean Kanteen. I recently picked up one of each at REI and have been testing them enough to have some initial impressions.

Klean Kanteen
The big selling proposition of the Klean Kanteen is that it's stainless steel. We understand the properties of stainless steel pretty well, so you can have a fair amount of confidence it's not going to leach stuff into your water. Also, stainless is pretty tough, so you don't have to worry about damaging the bottles, which is a concern with the aluminum Siggs (the old polycarb Nalgenes are nigh indestructible, btw. I use one in the field to hammer in tent stakes). That said, I don't much like the Klean Kanteen. I've got four major complaints.

  • I don't like that it's totally uninsulated. This is a problem with any metal container, but it means it's uncomfortable to hold when you put in cold or hot beverages (polycarb insulates pretty well) and you also have to worry about condensation in your bag.
  • I don't like the screw interface for the top. The Klean Kanteen comes with either a stainless or a plastic top; I have the plastic one, and it's really hard to get off. As long as it hasn't been cranked down too hard, I can unscrew a Nalgene with three fingers around the bottle and my forefinger and thumb around the lid. I can't do this with the Klean Kanteen at all, partly because the interface seems tighter and partly because the lid is untextured so you can't get a good grip. Finally, it seems easy to crossthread the top so it won't go on at all. I hear the steel tops don't do this, but from the people I know who own them, they sound really loud, which could be a pain in public spaces like libraries.
  • The mouth is just a hair too big. This is supposed to be a feature so you can get ice in, but it has the same spilling problem a wide mouth Nalgene has.
  • The bottle feels hard. Whenever I'm drinking from it, I worry that I'm going to bang my teeth against it and chip them. It's very unnerving for me and doesn't happen with plastic.

I plan to return the Klean Kanteen (go REI!).

Sigg
Sigg bottles (yes, the ones that look like fuel bottles) are a backpacking standard and have had a resurgence since people went off Nalgene. They're aluminum, not stainless, with a plastic cap. Because of concerns over aluminum leaching into your drink, they're coated with some unspecified (but they swear it's safe!) proprietary enamel-type coating. I like the Sigg a lot better than the Klean Kanteen, but it's not perfect.

  • It's still uninsulated.
  • The screw interface is better. You can remove the top with two fingers, but it requires about 4 complete turns (compared to about 2 for the Klean Kanteen and 1.5 for a Nalgene), and it's not that easy to get reseated without crossthreading it.
  • The mouth is much better for drinking out of, much better than a wide-mouth Nalgene (about the same as a narrow-mouth Nalgene) but of course there's a tradeoff here, since you can't get ice cubes in. The mouth of the bottle is very nicely rounded and comfortable to drink out of, though occasionally you can feel the threads. It doesn't feel at all like I'm going to chip my teeth.
  • I have two concerns about durability: Internet reviews suggest that the aluminum dents very easily and I also wonder how well the coating will last.

I'm keeping the Sigg, but it's not a perfect replacement for Nalgenes. It's nicer in some ways, especially aesthetically (yeah, yeah, I know it's a bottle). When I brought it home, Mrs. Guesswork announced that she wanted to steal it, which she certainly doesn't say about Nalgenes. It's also about 17% lighter than a Nalgene (149g versus 180g). However, I don't like the lack of insulation or the fact that the cap isn't permanently attached, and I doubt it will replace my Nalgene in my pack, though I suspect I will favor it for everyday drinking, if only for comfort reasons (and less spilling down the front of my shirt).

One more thing: I was sort of surprised by how much I felt like I was going to chip a tooth with the Klean Kanteen but not at all with the Sigg or a Nalgene. I'm not sure what material effects create this impression, but I wonder if simple hardness could be an explanation. It looks like tooth enamel is quite a bit harder than aluminum, but that stainless steel is of comparable, if not greater, hardness than enamel. [*]. It's not like I'm biting on the bottles, but I wonder if somehow the hardness translates into something you can feel when the material taps against your teeth (these particular measurements were from the Knoop test, where you look for indentation at a given pressure).

 
Three stories about the TSA's name-based security scheme this week.
  • A Muslim airline pilot (an American Gulf War I veteran who converted) has lost his flight privileges because he is on "some TSA list" and is suing.
  • James Robinson, an airline pilot and retired National Guard Brigadier General, says he gets hassled whenever he tries to fly:
    But there's one problem: James Robinson, the pilot, has difficulty even getting to his plane because his name is on the government's terrorist "watch list."

    That means he can't use an airport kiosk to check in; he can't do it online; he can't do it curbside. Instead, like thousands of Americans whose names match a name or alias used by a suspected terrorist on the list, he must go to the ticket counter and have an agent verify that he is James Robinson, the pilot, and not James Robinson, the terrorist.

    "Shocking's a good word; frustrating," Robinson -- the pilot -- said. "I'm carrying a weapon, flying a multimillion-dollar jet with passengers, but I'm still screened as, you know, on the terrorist watch list."

    ...

    But although the list is clearly bloated with misidentifications by every official's account, CNN has learned that it may also be ineffective. Numerous people, including all three Robinsons, have figured out that there are ways not to get flagged by the watch list.

    Denise Robinson says she tells the skycaps her son is on the list, tips heavily and is given boarding passes. And booking her son as "J. Pierce Robinson" also has let the family bypass the watch list hassle.

    Capt. James Robinson said he has learned that "Jim Robinson" and "J.K. Robinson" are not on the list.

  • The 9th Circuit has ruled that people have a right to sue to get off the no-fly list.

Maybe I'm not cynical enough, but I find the TSA's behavior vis-a-vis the watch list to be somewhat confusing. Here you've got a system that's clearly very inconvenient for a large number of apparently innocent people (even the low-range estimates of the size of the watch list are 400,000 people), is trivial to bypass, and has no real evidence that it's useful at all. And rather than somehow quietly roll it back, TSA's response has been to dig in and make life extremely difficult for people on the list. Moreover, they threaten the airlines even for telling people they are on the list. Ordinarily, one can explain the TSA's behavior by recourse to Schneier's "security theater" model, and maybe it's just the circles I travel in, but I don't get the sense that the general public somehow believes this works. And even if they do, would they really be annoyed to hear that Capt. Robinson is slipping through the cracks? Actually, now that I've said that, there is a beyond-cynical rationale here for why TSA is so intransigent about removing people: they like it when it comes out that some 10-year-old kid is on the watch list. Sure, people realize it's nuts, but that's the evidence that TSA is doing everything it can; they care so much about your security that they'll even stop grandma from flying.

 

August 23, 2008

One problem I ran into lately was how to decorate LaTeX text. In particular, I had some source code and wanted to draw arrows between elements. This isn't something that LaTeX supports out of the box, but once again comp.text.tex to the rescue. The two major solutions appear to be:
  • pstricks (only works if you're generating PostScript)
  • tikz/pgf

Because I generate PDF directly with pdflatex, pstricks was out. Luckily, tikz + PGF works handily. The relevant code looks something like this.

\documentclass[letterpaper]{article}

\usepackage{listings}
\usepackage{tikz}

\lstset{escapechar=\%}

\tikzstyle{every picture}+=[remember picture]
\tikzstyle{na} = [baseline=-.5ex]

\begin{document}
\begin{lstlisting}
void example(FILE *fp)
{
  int c;

  while((c = fgetc(fp)) != EOF){
    if(c=='X')
      %\tikz[na] \coordinate(source);%goto done;
    fputc(c,stdout);
  }

%\tikz[na] \coordinate(target);%done:
  exit(0);
}
\end{lstlisting}

\begin{tikzpicture}[overlay]
\path[->, red, thick] (source) edge [bend right] (target);
\end{tikzpicture}
\end{document}

The output looks like this. In order to make tikz work you need to:

  • Set tikz coordinates at source and destination
  • Create a tikz picture that draws an arrow between the coordinates. This must be set as an overlay.
  • Run pdflatex twice. This is necessary because the document needs to be built in order to generate the coordinates that tikz will then want to draw on.

I've also demonstrated this using the listings package, which ignores any LaTeX commands in the source it's formatting. In order to bypass this you need to set up an escape character (see the lstset directive), which lets you escape into LaTeX from listings. Hence the bracketing of the tikz commands with %.
 

August 22, 2008

Turns out that the AntiVirus software on Premier's EMS wasn't the cause of miscounting after all:
The error occurs when multiple memory cards are being uploaded at the same time, and it is more likely to occur in jurisdictions that have several voters and use touch-screen voting systems, said Premier spokesman Chris Riggall.

Allen, Texas-based Premier, a unit of North Canton-based Diebold Inc., supplies touch-screen voting systems as well as scanners for paper ballots. The problem is more likely to occur in touch-screen systems because they use more memory cards, one for every touch screen.

Premier said in its product advisory that the problem can be corrected as long as officials monitor whether the memory cards are being uploaded, and if they are not, reload them until they are.

Joe Hall has the details. The Premier reports aren't that clear. Here's the "technical background".

The GEMS poster works by receiving concurrent uploads from the memory cards and then saving that data in temporary files for posting to the election database in a serialized manner, i.e. one at a time. This design is used to optimize the database access performance as well as the upload data performance.

The issue identified is a logic error that allows the poster to attempt to post a file that is still being received when two or more files are received in sequence, and the first file takes longer to save than the second file. If a sharing violation occurs, the posting of the first file is the one affected. Note that files typically take very few milliseconds to save, whereas large files, with a large number of votes, can take up to 100 milliseconds.

This kind of race condition isn't exactly uncommon in concurrent systems. On the other hand, if there's one race here, perhaps there are others. It's worth asking if there are ways where the file would be marked successfully uploaded but the votes get lost.
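
To illustrate the pattern (a toy model, obviously not Premier's actual code), here's a sketch of how "post a file that is still being received" loses votes:

import os, tempfile, threading, time

# Toy model of the advisory's race: a poster thread that fires while the
# upload is still being written sees (and would post) a truncated file.
path = os.path.join(tempfile.mkdtemp(), "precinct42.dat")

def uploader():
    with open(path, "w") as f:
        for vote in range(1000):
            f.write("vote %d\n" % vote)
            f.flush()
            time.sleep(0.0001)    # simulate a slow incoming upload

def poster():
    time.sleep(0.02)              # triggered before the upload completes
    with open(path) as f:
        print("posted %d of 1000 votes" % len(f.readlines()))

t1 = threading.Thread(target=uploader)
t2 = threading.Thread(target=poster)
t1.start(); t2.start(); t1.join(); t2.join()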

 

August 21, 2008

Hovav Shacham, Stefan Savage, Terence Spies and I have been working on some exciting, exciting technology in the field of paper cryptography and we were pleased to present it at the CRYPTO Rump Session. Slides here.
 

August 20, 2008

Some colleagues (Hovav Shacham, Brendan Enright, Scott Yilek, Stefan Savage) and I have been tracking the aftermath of the Debian OpenSSL PRNG bug (see Hovav's Work-In-Progress presentation at USENIX WIP here). One of the questions that comes up is what you can do with this. Here's what's obvious (I'm talking about SSL only below):
  • If the server key was generated with the weak PRNG, you can guess the server key and:
    • Impersonate the server.
    • Passively decode traffic encrypted with static RSA (which a lot of traffic is). This doesn't help with ephemeral Diffie-Hellman (DHE).
  • If the server key is strong but the server has a weak PRNG:
    • If the server has a DSA private key, you can recover it. This isn't much of an issue for SSL but SSH does use DSA reasonably often.
    • This doesn't directly allow you to recover traffic in static RSA mode. The reason for this is that in static RSA mode, the client generates the only secret data (the PreMaster Secret).
  • If the client stack is insecure, then you could in principle guess the client's random values. However, none of the major browsers use OpenSSL, so this is probably limited to non-browser clients.

But this raises the interesting question: can you passively attack DHE mode? In this mode, the server generates a fresh DH key for each transaction. Knowing the server's long-term private key doesn't help here—that just lets you impersonate the server. So, the implementation used to generate the long-term key doesn't matter. However, unlike RSA, DHE requires the server to generate secret random values, so if the server is running a broken version, this may give us a way in.

We're not the only ones to think along these lines: Luciano Bello describes a partial attack and has posted a patch to Wireshark to attack DHE connections:

If an eavesdropper can explore the complete private key space (all the possible numbers for Xc or Xs), he/she will be able to get access to the shared secret. With it all the communication can be deciphered. That's what this patch can do.

A Wireshark with this patch and a list of possible private keys will try to brute force the shared secret. If one of the parties is using the vulnerable OpenSSL package the communication is totally insecure and will be decrypted.

Bello demonstrates attacking a connection between a broken client and a decent server. However, the attack as described doesn't work with secure clients (which, as I said, is pretty much any browser) and broken non-toy Web servers (the situation is different for non-Web servers, e.g., IMAP and POP servers which run out of inetd): even if the server's PRNG is broken, there isn't a fixed-size list of keys it generates.

To understand why, you need to understand the vulnerability better. Effectively, the vulnerability stopped any invocations of RAND_seed() from mixing data into the PRNG. The only time new seed data gets mixed in is when you get new randomness values via RAND_bytes(). Each time you call RAND_bytes() the current process ID gets mixed into the PRNG. So, for a given PID and a given sequence of invocations of RAND_bytes(), you always get the same string of random values. These values are (statistically) unique, but predictable: you can say "the nth value will always be one of the following 2^15 values depending on the PID". However, it should be clear that even for a given PID, you can generate an arbitrary (well, almost) number of distinct values. So, if you had a process which generated a million DH keys in sequence, they'd all be different. Unfortunately for Bello's attack, this is exactly how many real Web servers work. For instance, Apache w/ Mod_SSL forks off a bunch of long-lived server processes which each handle many requests. Bello's attack would potentially work on the first connection, but the second connection would not be on the key list. You need another 2^15 values to handle the second connection. We've confirmed this by setting up a server, connecting to it, and pulling out more than 2^15 distinct public keys. So, you need to do something more complicated.
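
Here's a toy model of the broken PRNG in Python (an assumed structure for illustration, not OpenSSL's actual algorithm) showing both properties: two processes with the same PID produce identical streams, but a single process still produces an effectively unbounded number of distinct values.

import hashlib

class ToyBrokenPRNG:
    # Toy model of the post-bug behavior, NOT OpenSSL's real algorithm:
    # seeding is a no-op and only the PID gets mixed in at extraction time.
    def __init__(self, pid):
        self.pid = pid
        self.state = b"constant initial state"  # RAND_seed() mixes in nothing

    def rand_bytes(self, n):
        # The current PID is folded in on every call, as in the broken code.
        self.state = hashlib.sha256(self.state + str(self.pid).encode()).digest()
        out = b""
        while len(out) < n:
            self.state = hashlib.sha256(self.state).digest()
            out += self.state
        return out[:n]

a, b = ToyBrokenPRNG(1234), ToyBrokenPRNG(1234)
assert a.rand_bytes(32) == b.rand_bytes(32)   # same PID => same "random" stream

c = ToyBrokenPRNG(1234)
keys = set(c.rand_bytes(16) for _ in range(10000))
assert len(keys) == 10000                     # yet every value is distinct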

What follows is our initial analysis of Apache with Mod_SSL, which we're currently working on confirming. The details may not be quite right, but I suspect the general contours are.

With Apache and Mod_SSL it turns out that RAND_bytes() gets called in the parent process before it forks off the subprocesses, so each subprocess has both the parent process and the subprocess PIDs mixed in. So, you have 2^30 distinct PID combinations, and therefore random value streams, to deal with. In general, however, since the parent process forks off an initial set of children immediately and children aren't killed or started that often, the entropy is probably a lot less than 2^30, and even 2^30 is still searchable with modest computing power.

So, if you get to observe the server from the time of startup, you're in fine shape. As soon as you observe a connection, you check your table of known keys (basically a bigger version of Bello's table that takes into account both parent and child PIDs). [Actually, you can save some compute time by building a table of ServerRandom values, which saves you doing the modular exponentiation to compute the public key for a given private key.] That tells you what the PID pair of the server process you're observing is, and of course its current state. You've got the private key so you can decrypt the connection. To handle the next connection to that server process, you roll the PRNG forward to compute the next expected key. When the next connection comes in, you repeat this process, so at any given time you know the next value for each active PID pair.
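
The bookkeeping might look something like the following sketch (reusing the ToyBrokenPRNG model above, with toy PID ranges; in practice you'd index on something directly observable, like the ServerRandom):

# Map each candidate (parent PID, child PID) pair's next expected value back
# to its generator, and roll matched generators forward on each observation.
generators = {}
next_value = {}
for ppid in range(300, 400):             # toy ranges; really up to 2^15 each
    for cpid in range(ppid + 1, ppid + 10):
        g = ToyBrokenPRNG("%d/%d" % (ppid, cpid))
        generators[(ppid, cpid)] = g
        next_value[g.rand_bytes(16)] = (ppid, cpid)

def observe(value):
    pids = next_value.pop(value, None)
    if pids is not None:
        # Matched a known server process: advance its PRNG so we already
        # know the value it will use for its next connection.
        next_value[generators[pids].rand_bytes(16)] = pids
    return pids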

If you're not lucky enough to see the server from the time of startup, then life gets more complicated, since you don't know where in its random number stream each server process is. So, you would need to try candidate numbers of connections. Unfortunately, there's another complicating factor: TLS handshakes with Diffie-Hellman and RSA key exchanges involve different patterns of random values: the DH exchange involves an extra 128-byte random value for Xs (the DH private key). No problem, you say: we'll just compute reasonably sized sections of the random value stream and look for matches within the probable zone. Unfortunately, this doesn't look like it's going to work. As I said earlier, each time you invoke RAND_bytes() the PID gets mixed into the PRNG. In other words: RAND_bytes(128); RAND_bytes(32); does not produce the same 160 bytes as RAND_bytes(32); RAND_bytes(128);. This means that every connection introduces one bit of entropy: whether DHE or RSA was used. If you're not observing these connections, this entropy quickly adds up and it becomes impractical to search the space. It's possible that there's some analytic attack on the PRNG that would let you reduce this search space, but nothing popped out at us on casual inspection. This suggests that if you have a server which only does DHE, you can attack individual connections, but if it does both DHE and RSA, you need to observe all the connections from the server to make sure you know the DHE/RSA pattern.

I should mention one more issue: OpenSSL uses a random blinding value to stop remote timing attacks on static RSA. If you can predict the blinding value, then it may be possible to mount a timing attack on the static RSA key, even if it was generated with a strong PRNG. We're still looking into this as well.

As I said at the beginning all this is preliminary and only partly confirmed. We're hoping to have definitive results sometime in the next few weeks and be publishing soon after that.

UPDATE: Fixed Luciano's name to be "Luciano" from "Lucian". Also, I should add that the Wireshark work was done by Paolo Abeni and Maximiliano Bertacchini as well as Luciano Bello.

 

August 18, 2008

Previously, I had mentioned that DEET appeared to work by blocking mosquitos' ability to detect food. A new study reported in PNAS (Abstract. Full text behind paywall.) claims that mosquitos are actually repelled by DEET:
The insect repellent DEET is effective against a variety of medically important pests, but its mode of action still draws considerable debate. The widely accepted hypothesis that DEET interferes with the detection of lactic acid has been challenged by demonstrated DEET-induced repellency in the absence of lactic acid. The most recent hypothesis suggests that DEET masks or jams the olfactory system by attenuating electrophysiological responses to 1-octen-3-ol. Our research shows that mosquitoes smell DEET directly and avoid it. We performed single-unit recordings from all functional ORNs on the antenna and maxillary palps of Culex quinquefasciatus and found an ORN in a short trichoid sensillum responding to DEET in a dose-dependent manner. The same ORN responded with higher sensitivity to terpenoid compounds. SPME and GC analysis showed that odorants were trapped in conventional stimulus cartridges upon addition of a DEET-impregnated filter paper strip thus leading to the observed reduced electrophysiological responses, as reported elsewhere. With a new stimulus delivery method releasing equal amounts of 1-octen-3-ol alone or in combination with DEET we found no difference in neuronal responses. When applied to human skin, DEET altered the chemical profile of emanations by a "fixative" effect that may also contribute to repellency. However, the main mode of action is the direct detection of DEET as indicated by the evidence that mosquitoes are endowed with DEET-detecting ORNs and corroborated by behavioral bioassays. In a sugar-feeding assay, both female and male mosquitoes avoided DEET. In addition, mosquitoes responding only to physical stimuli avoided DEET.

In the Times article, the original researchers seem unconvinced:

Leslie B. Vosshall, a researcher at Rockefeller University who was involved in the earlier study, said that her team stood by its work, and that its findings were based on a variety of experiments. So for now, at least, there still appear to be some mysteries surrounding DEET.

As I understood the original work, DEET does have a repellent effect at high concentrations, and looking at their (schematic) diagrams, it's not clear to me that the DEET filter paper would have indeed blocked the food, especially as they did controlled trials with solvent instead of DEET. This is relevant to my interests, but luckily I don't need to understand how DEET works in order to slather myself with smelly insect repellent, plastic-dissolving goodness.

 

August 17, 2008

I caught the last 15 minutes of the men's Olympic 10K final today. I wanted to watch the whole thing, but I had to go to a friend's house and they didn't give us any warning, so I missed the first ten minutes. [*]. Anyway, the race was fairly slow (yeah, yeah, I know, it's a bit odd to call sub-4:30 miles slow), but it looked to me like the athletes were holding back for a big kick in the last few laps. At the bell lap, WR holder Kenenisa Bekele (ETH) and Sileshi Sihine (ETH) made an incredible surge, leaving the rest of the field behind, and eventually Bekele put a full second on the rest of the field. Haile Gebrselassie, probably the greatest long distance runner of the past 25 years, if not of all time, had kept with the pack up to this point but just couldn't seem to stick with the break. [Gebrselassie used to hold the 10K WR, and has since moved up to the marathon but didn't compete there in Beijing because of concerns over pollution.] The winning time was 27:01.17, which is fairly far off the WR pace.

BTW, it's not clear to me that MS has done themselves any favors by having NBC serve video exclusively via Silverlight. I'm not saying it's MS's fault, but watching video at 2 frames per second isn't exactly a great advertisement for their technology.

 
Paul Hoffman just alerted me to the fact that comments were getting caught in the spam filter. I've flushed the filter, dialed it down a bit, and will check regularly. That said, if a comment doesn't show up in 24 hours, please complain to me via email.
 

August 16, 2008

Several people have pointed me to this XKCD comic:

Without addressing the design of the Diebold system, I'm not sure I agree with the implicit argument here.

First, it's important to distinguish where the AV software is running. It's not on the voting machine proper but on the election management system (EMS). These machines run Windows (the Diebold touchscreen machines run Windows CE and the optical scan machines seem to run some monolithic embedded app [*]). The way the Diebold system works is that the results are fed into the EMS and then the EMS tabulates them and declares the winner.

I can imagine a number of arguments for why you shouldn't have an EMS that has an antivirus system on it. Let's take them in turn:

You shouldn't be using a computer to tabulate the votes.
This isn't inherently crazy, but given the complexity of elections in the United States, it seems fairly unrealistic. In the Nov 2004 election, my local ballot had 27 separate contests on it. Goggin et al. report that single-person manual counting of optical scan paper ballots takes about 5 seconds per race per ballot (with a 1.5% error), so a 27-contest ballot would take about 2.25 minutes to tally. If you use 4-person teams as used in the California TTBR, then Palo Alto, which had about 5,000 votes in that election, would have required about 750 man-hours to do a complete count. Even with only a single person per ballot, you're looking at nearly 200 man-hours, so that's about 24 people to get results within 8 hours. This isn't impractical, but it's a big change from the existing systems, so we have computers in the game somehow.
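
For concreteness, here's the arithmetic (just redoing the numbers above):

seconds_per_race = 5        # Goggin et al., single-person count
contests = 27               # my Nov 2004 ballot
ballots = 5000              # approximate Palo Alto turnout
team_size = 4               # as in the California TTBR

minutes_per_ballot = seconds_per_race * contests / 60.0   # 2.25 minutes
single_hours = ballots * minutes_per_ballot / 60.0        # ~187.5 man-hours
team_hours = single_hours * team_size                     # ~750 man-hours
print(minutes_per_ballot, single_hours, team_hours, single_hours / 8)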

If you're going to use computer counting, then you need to somehow program the computers, and that means some sort of EMS. Now, you could use precinct count opscan and/or DREs but tabulate manually, for instance from the results tapes out of the precinct machines. Then you wouldn't have to worry about accuracy of the EMS tabulation, which would remove this particular alleged problem (though of course you would still have to worry about compromise of the EMS during the ballot definition phase). On the other hand, the tabulation part of the equation is the most auditable and transparent part of the process. The State publishes precinct totals, so you can add them up yourself and verify that the totals are computed correctly. I'm not saying there's no risk here, of course, but if you're going to remove computers from the system, this isn't where I would start.

Your EMS shouldn't be running on a general purpose system.
A lot of the polling place devices run on non-standard embedded systems. That doesn't necessarily make them less susceptible to malware. It's true that the generic malware that you see on Windows systems isn't going to run on these platforms, so the attackers would have to get their hands on the target software, but that hardly seems like an insuperable obstacle. Of course, your generic AV software won't run on these platforms, but it's not clear that that's somehow an advantage.

You shouldn't need AV on the EMS because your machines should never be exposed to any kind of malware.
I certainly agree that it's critically important to isolate the trusted county central machines from any source of infection (see here for our EVT paper on techniques for this.) But no isolation system is perfect, so it hardly seems like a bad idea to have some sort of security software (AV, etc.) as a backup in case you accidentally contaminate your EMS. I'm not saying that you should rely on AV; if you discover that your EMS is potentially compromised, but your AV system doesn't say anything, it's probably not safe to assume everything is OK. On the other hand, if your AV does signal some kind of infection, you should definitely be paying attention.

The AV is useless.
One of the big failings of AV software is that it's not that great at detecting new kinds of malware. Modern AV systems are OK at detecting known kinds of malware (e.g., generic viruses), but they're not that great at detecting new kinds of malware—which is why they need regular updates—and even worse when that malware has been specifically designed to evade that AV software. And since any kind of malware that specifically targeted election results would need to be specially designed, your average AV package is not particularly likely to detect it. That said, AV packages typically do contain some anomaly detection, and you might get lucky and have the malware fall afoul of that. However, given that the EMS probably needs less user-level flexibility than your average system, you might be able to design a much more aggressive anomaly detection system (e.g., tripwire + something else) that would have a better chance of catching stuff.
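
A tripwire-style integrity checker, for instance, is little more than a hash manifest. A minimal sketch of the idea (illustrative only, not any particular product):

import hashlib, json, os

# Record a baseline of file hashes at install time, then flag anything
# modified or added before each election. Minimal tripwire-style idea.
def snapshot(root):
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                manifest[path] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def compare(baseline, current):
    changed = [p for p in baseline if current.get(p) != baseline[p]]
    added = [p for p in current if p not in baseline]
    return changed, added

# Usage: save snapshot("/opt/ems") to read-only media with json.dump() at
# install time, then re-run snapshot() and compare() before tabulating.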

The AV is dangerous.
Given that Diebold is blaming counting errors on the AV, this seems like an argument with some force. I have no opinion on whether the AV is to blame, but given that the EMS is running on a system with a huge amount of software (i.e., the OS), it's not clear why you should be especially concerned about the AV as opposed to that other stuff.

 

August 15, 2008

At long last, TLS 1.2 has been published:

1.2.  Major Differences from TLS 1.1

   This document is a revision of the TLS 1.1 [TLS1.1] protocol which
   contains improved flexibility, particularly for negotiation of
   cryptographic algorithms.  The major changes are:

   -  The MD5/SHA-1 combination in the pseudorandom function (PRF) has
      been replaced with cipher-suite-specified PRFs.  All cipher suites
      in this document use P_SHA256.

   -  The MD5/SHA-1 combination in the digitally-signed element has been
      replaced with a single hash.  Signed elements now include a field
      that explicitly specifies the hash algorithm used.

   -  Substantial cleanup to the client's and server's ability to
      specify which hash and signature algorithms they will accept.
      Note that this also relaxes some of the constraints on signature
      and hash algorithms from previous versions of TLS.

   -  Addition of support for authenticated encryption with additional
      data modes.

   -  TLS Extensions definition and AES Cipher Suites were merged in
      from external [TLSEXT] and [TLSAES].

   -  Tighter checking of EncryptedPreMasterSecret version numbers.

   -  Tightened up a number of requirements.

   -  Verify_data length now depends on the cipher suite (default is
      still 12).

   -  Cleaned up description of Bleichenbacher/Klima attack defenses.

   -  Alerts MUST now be sent in many cases.

   -  After a certificate_request, if no certificates are available,
      clients now MUST send an empty certificate list.

   -  TLS_RSA_WITH_AES_128_CBC_SHA is now the mandatory to implement
      cipher suite.

   -  Added HMAC-SHA256 cipher suites.

   -  Removed IDEA and DES cipher suites.  They are now deprecated and
      will be documented in a separate document.

   -  Support for the SSLv2 backward-compatible hello is now a MAY, not
      a SHOULD, with sending it a SHOULD NOT.  Support will probably
      become a SHOULD NOT in the future.

   -  Added limited "fall-through" to the presentation language to allow
      multiple case arms to have the same encoding.

   -  Added an Implementation Pitfalls sections

   -  The usual clarifications and editorial work.

V-T day!
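
One note on the PRF change: P_SHA256 is just HMAC-SHA256 iterated in the standard P_hash construction, so the whole thing fits in a few lines. A sketch straight from the definition in the RFC (Python, with dummy inputs):

import hmac, hashlib

def p_sha256(secret, seed, length):
    # P_SHA256: A(0) = seed; A(i) = HMAC(secret, A(i-1));
    # output = HMAC(secret, A(1) + seed) + HMAC(secret, A(2) + seed) + ...
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret, label, seed, length):
    # TLS 1.2: PRF(secret, label, seed) = P_SHA256(secret, label + seed)
    return p_sha256(secret, label + seed, length)

pms = b"\x03\x03" + b"\x00" * 46   # dummy premaster secret
master = prf(pms, b"master secret", b"client_random || server_random", 48)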

 

August 14, 2008

Declan McCullagh reports on the MBTA's claim that the MIT researchers have no First Amendment right to publish their research:
First Amendment protection does not extend to speech that advocates a violation of law, where the advocacy "is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." The Individual Defendants' conduct falls squarely within this well established zone of no protection.

First, unless restrained, the Individual Defendants would have given their Presentation, and related materials (which have not yet been made available) to one of the world's largest hacker conferences. Advocacy in favor of illegal behavior, in this context, is likely to incite or produce illegal behavior. Second, the Presentation, and likely the related code and materials, unequivocally constitute advocacy in favor of a violation of law.... the Individual Defendants are vigorously and energetically advocating illegal activity, and this advocacy, in the context of the DEFCON Conference, is both directed to inciting or producing imminent lawless action, and likely to produce such action. Therefore, the Individual Defendants enjoy no protections under the First Amendment.

I've reviewed the MIT group's slides, and while they do involve a certain level of hype, the general tone isn't that out of place in the security community. It didn't strike me as "advocacy in favor of illegal behavior". Rather, it simply described a set of vulnerabilities, some description of how they could be exploited, and the impact of exploitation. Obviously, this sort of disclosure could result in some illegal behavior, but that's a potential result of any paper describing vulnerabilities. Unless I'm missing something, the rule the MBTA is proposing would effectively allow the banning of publication of any security vulnerabilities. Incidentally, the bit about the "context of the DEFCON conference" is odd. Perhaps the MBTA would be so good as to provide a list of venues at which it's ok to publish your results. Full Disclosure? W00T? USENIX Security? The New York Times?

The Individual Defendants' DEFCON presentation constitutes commercial speech. Commercial speech is any "speech that proposes a commercial transaction." Here, the Presentation is full of marketing, and self-promotional statements. It is not a research paper. As commercial speech advertising illegal activity, it receives no First Amendment protection.

What a bizarre statement. Leaving aside the question of whether self-promotion is sufficient to make something commercial speech (I'm not a lawyer, but my understanding is that it isn't), when was the last time you saw a research paper that wasn't full of marketing and self-promotional statements?

 

August 13, 2008

In today's Threat Level, Ryan Singel quotes a bunch of DNS types complaining about how USG isn't signing the DNS root (sorry for the long blockquote, but the context is important):
Those extensions cryptographically sign DNS records, ensuring their authenticity like a wax seal on a letter. The push for DNSSEC has been ramping up over the last few years, with four regions -- including Sweden (.SE) and Puerto Rico (.PR) -- already securing their own domains with DNSSEC. Four of the largest top-level domains -- .org, .gov, .uk and .mil, are not far behind.

But because DNS servers work in a giant hierarchy, deploying DNSSEC successfully also requires having someone trustworthy sign the so-called "root file" with a public-private key. Otherwise, an attacker can undermine the entire system at the root level, like cutting down a tree at the trunk. That's where the politics comes in. The DNS root is controlled by the Commerce Department's NTIA, which thus far has refused to implement DNSSEC.

The NTIA brokers the contracts that divide the governance and top-level operations of the internet between the nonprofit ICANN and the for-profit VeriSign, which also runs the .com domain.

"They're the only department of the government that isn't on board with securing the Domain Name System, and unfortunately, they're also the ones who Commerce deputized to oversee ICANN," Woodcock said.

"The biggest difference is that once the root is signed and the public key is out, it will be put in every operating system and will be on all CDs from Apple, Microsoft, SUSE, Freebsd, etc," says Russ Mundy, principal networking scientist at Sparta, Inc, which has been developing open-source DNSSEC tools for years with government funding, He says the top-level key is "the only one you have to have, to go down the tree."

...

"We would want to bring the editing, creation and signing of the root zone file here," to IANA, Lamb said, noting that VeriSign would likely still control distribution of the file to the root servers, and there would be a public consultation process that the change was right for the net.

But changing that system could be perceived as reducing U.S. control over the net -- a touchy geopolitical issue. ICANN is often considered by Washington politicians to be akin to the United Nations, and its push to control the root-zone file could push the U.S. to give more control to VeriSign, experts say.

...

Woodcock isn't buying the assurances of NTIA that it is simply moving deliberatively.

"If the root isn't signed, then no amount of work that responsible individuals and companies do to protect their domains will be effective," Woodcock said. "You have to follow the chain of signatures down from the root to the top-level domain to the user's domain. If all three pieces aren't there, the user isn't protected."

Without getting into how important/useful DNSSEC is (see Tom Ptacek's quite negative assessment for a contrary opinion), I'm having some trouble understanding the arguments on offer here.

It's true that the system is arranged in a hierarchy and that if the root were signed that would be nice, but as far as I know that's not at all necessary for DNSSEC to be useful. As long as the TLD operators (.com, .org, .net, etc.) sign their zones, then it's quite possible to have a secure system. All that needs to happen is that someone publishes a list of the public keys for each TLD zone and that list gets integrated into the resolvers. There's plenty of precedent for this: when certificate systems were first designed, the concept was that there would be a single root CA and that all other CAs would be certified by the root CA. This isn't how things actually worked out, though. Instead there were a bunch of different CAs and their public keys got built into browsers, operating systems, etc. There are on the order of 100 such CAs in your average browser, so it's not as if 300 signing keys would be infeasible to distribute with resolvers.

In some respects, the situation with DNS is superior because each signing key would be scoped. So, for instance, the key for .com couldn't be used for .org. This is different than current PKI systems where any of those 100 or so CAs can sign any hostname, so a compromise of any of those CAs is quite serious. By contrast, if you're a .com customer, you don't need to worry so much about the security of .zw unless you happen to do a lot of business with sites in Zimbabwe.
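
To make the scoping point concrete, here's a toy resolver-side trust anchor table (a sketch with invented keys and function names, nothing like a real DNSSEC validator):

# Each TLD's key can only validate names inside its own zone, unlike a CA
# list where any CA can vouch for any name. Toy model, invented keys.
TRUST_ANCHORS = {"com": "key_com", "org": "key_org", "se": "key_se"}

def anchor_for(name):
    tld = name.rstrip(".").rsplit(".", 1)[-1]
    return TRUST_ANCHORS.get(tld)

def validate(name, signing_key):
    anchor = anchor_for(name)
    if anchor is None:
        return False                 # unsigned TLD: no protection
    return signing_key == anchor     # out-of-zone keys never match

assert validate("example.com", "key_com")
assert not validate("example.org", "key_com")  # .com key can't sign .org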

With that in mind, if the root were to be signed tomorrow, it's not like it would be tremendously useful unless the individual TLD operators start signing their zones. It sounds like some of them are starting to, but at least from my perspective, this seems rather more important than the root. Now, it's true that having the root signed might have some sort of PR impact in terms of incentivizing the TLD operators, but technically, I don't see that it makes that much of a difference. The primary advantage is that it wouldn't be necessary to distribute a file full of individual zone keys, but that doesn't really seem to be the primary obstacle to DNSSEC deployment/usefulness, and given how fraught the political issues around signing the root are, it seems worth thinking about how we can make progress without it.

UPDATE: I should mention that another way in which TLD keys are simpler than CA keys is that, because of this problem where any CA can sign any name, the browser vendors have to have a fairly complicated vetting process to decide whether a given CA follows appropriate practices and can be trusted. But because the TLD registries are already responsible for the zones they would be signing and can't sign names outside their zones, it's relatively clear who should and shouldn't be included on the list.

 

August 12, 2008

SF Gate has this article about the mysterious semi-disappearance of a laptop belonging to Clear (the Verified Identity Pass airport security-line bypass people) at SFO:
The Clear service speeds registered travelers through airport security lines. Verified Identity Pass operates the program at about 20 airports nationwide.

New enrollments in the program were suspended after a laptop with names, addresses and birthdates for people applying to the program disappeared from a locked Verified Identity Pass office at the airport. The files on the laptop were not encrypted, but were protected by two passwords, a company official said.

A preliminary investigation showed that the information was not compromised, said Steven Brill, CEO of Clear, but the TSA is still reviewing the results of its forensic examination of the computer.

In case you didn't know this already, multiple passwords don't add a lot of value when the attacker has physical possession of the computer. Passwords only protect access when the operating system is running. However, typically computers can be booted from some media other than the hard drive, e.g., CDROM or a USB stick. In that case, you can boot any operating system you want and read the laptop hard drive directly, regardless of what passwords there are. On many computers, you can configure the BIOS so that the machine can only be booted from the hard drive, and then some password is needed to reconfigure the BIOS. I can't tell whether this machine was configured this way. If it were, you could try guessing the password, or you could just open the case and read the hard drive directly in another machine. This, of course, is why you want to encrypt the hard drive.

I'd also be interested in hearing what forensics were performed. Neither of these procedures would leave much in the way of electronic evidence, especially if the computer was already off—both these attacks would require rebooting the computer, though of course the attacker could just let the battery run down, which would help cover up an intentional reboot. If you removed the hard drive, that might leave tool marks on the case, screws, etc., but then you'd have to know what tool marks were there before from assembly, repair, etc. In any case, it's not clear to me that this sort of attack would be readily detectable.

 

August 11, 2008

In an interview in today's WSJ, Steve Jobs confirms that the iPhone has a remote "kill switch":
Apple raised hackles in computer-privacy and security circles when an independent engineer discovered code inside the iPhone that suggested iPhones routinely check an Apple Web site that could, in theory, trigger the removal of the undesirable software from the devices.

Mr. Jobs confirmed such a capability exists, but argued that Apple needs it in case it inadvertently allows a malicious program -- one that stole users' personal data, for example -- to be distributed to iPhones through the App Store. "Hopefully we never have to pull that lever, but we would be irresponsible not to have a lever like that to pull," he says.

I don't find this rationale very convincing. As far as I know, neither Windows nor OS X has any sort of remote software deactivation feature, and we know that there are malicious programs out there that steal users' personal data. In fact, the situation is quite a bit better with the iPhone than with either of those two systems because, unlike the iPhone, they allow the user to install arbitrary software. The only ways that a user could get malicious software on their iPhone are if Apple distributes it through the App Store or the user jailbreaks their phone—and it's hard to see why Apple needs to protect you if you've deliberately done something unauthorized. So, this seems less necessary for an iPhone than for a commodity PC.

While a switch like this might not be useful for routine malware, one could argue that because you're on a closed network (AT&T in the US), the network operator needs to be able to deactivate software that is a serious threat to the network (e.g., a rapidly spreading worm). However, unless you expect to be constantly plagued with such worms, you don't really need this fine-grained a kill switch—you just want to pull the phone off the network entirely. This is especially true since it seems unlikely that this feature will work in the face of truly malicious code. All it takes is one iPhone privilege escalation vulnerability and the malware will simply be able to prevent the remote check from happening at all, thus protecting itself. There's no reason to believe that the iPhone's security is much better than that of your average software system, so such vulnerabilities are likely to exist.
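
Based only on the public description ("routinely check an Apple Web site"), the mechanism is presumably a periodic client-side blacklist poll, something like this hypothetical sketch (the URL, file format, and uninstall hook are all invented), which also makes clear why privileged malware can simply opt out:

import json
from urllib.request import urlopen

BLACKLIST_URL = "https://example.invalid/unauthorized-apps.json"  # invented

def uninstall(app_id):            # stand-in for a platform removal call
    print("removing", app_id)

def check_blacklist(installed_apps):
    # Fetch the list of revoked app IDs and remove any that are installed.
    with urlopen(BLACKLIST_URL) as resp:
        revoked = set(json.load(resp))
    for app_id in installed_apps & revoked:
        uninstall(app_id)

# Code that has escalated privileges just never runs this check (or blocks
# the hostname), so the lever is useless against genuinely malicious apps.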

What a switch like this really is good for, however, is letting Apple retroactively decide that a given app is something they don't want you running—even if you do want to run it—and take it away from you. That explanation seems a lot more consistent with Apple's general policy of deciding yes or no on every app people might want to run.

 

August 9, 2008

Some MIT students broke the fare card system used by the Massachusetts Bay Transit Authority (slides here) and were scheduled to present at DEFCON. MBTA responded by seeking (and obtaining) a temporary restraining order forbidding them from disclosing "for ten days any information that could be used by others to get free subway rides." [*]. Unfortunately for the MBTA, the presentations had already been distributed on CDROM to DEFCON attendees, so this didn't have quite the impact one might have wanted. Plus the MIT Tech published a copy of the slides, so the information is pretty much out there now. Some thoughts:

  • Attempts to suppress this sort of information rarely work well. That's especially true in this case because the best attack in terms of cost/benefit ratio is also the obvious attack: making duplicates of the card magstripes to mount a replay attack. As soon as you know this is possible—and something that simple is hard to hide—then the game is over.
  • According to this Wired article the researchers didn't notify the MBTA and refused to give them details of the vulnerabilities they found:
    On August 5th, the court documents reveal, a detective with the transit police and an FBI agent met with the MIT students, Rivest, and an MIT lawyer to discuss their concerns and inquire about what the students would disclose in their talk. But the students would not provide the MBTA with a copy of the materials they planned to present in their talk or information about the security flaws they found in the transit system.

    I'd be interested in hearing more about their reasons for choosing not to reveal the information. Is it just that they didn't trust the MBTA?

  • There's sort of a collective action problem here. If organizations respond to notifications of security vulnerabilities by trying to suppress them, researchers are likely to respond by simply publishing all the details unilaterally so there's no possibility of suppression. So, while it may be MBTA's best strategy to try to suppress this information (and I'm not saying it is, see above, but they clearly think it is), it is likely to lead to a regime in which organizations don't get any warning before disclosure, which doesn't seem optimal.

Of course, this is a ritual that's happened at DefCon and Black Hat before, so it wasn't exactly unexpected. Still, you'd think that organizations would get smarter about not trying to suppress stuff once it's already too late.

 

August 7, 2008

For some unknown reason, Netflix has Law and Order, Law and Order: SVU, and Law and Order: Criminal Intent on DVD, but they only have SVU and CI available for Instant Play. Now, I readily admit that original L&O is better, but is that really the reason why, if I want to watch it, I've got to get it on physical media?
 
MT seems to have decided to rename the RSS feed to rss.xml. I think I've fixed this, but this is a test post to see if things improve. Outstanding!

UPDATE: checking to see if the RSS 1.0 feed is fixed.

 

August 6, 2008

Jayson Ahern from CBP has posted a defense of their laptop border search policy:
First, it's important to note that for more than 200 years, the federal government has been granted the authority to prevent dangerous people and things from entering the United States. Our security measures at the border are rooted in this fundamental fact, and our ability to achieve our border mission would be hampered if we did not apply the same search authorities to electronic media that we have long-applied to physical objects--including documents, photographs, film and other graphic material. Indeed, there are numerous laws that apply to such material at the border including laws regarding intellectual property rights, technical data that can be imported or exported only under state department license and child pornography.

In the 21st century, terrorists and criminals increasingly use laptops and other electronic media to transport illicit materials that were traditionally concealed in bags, containers, notebooks and paper documents. Making full use of our search authorities with respect to items like notebooks and backpacks, while failing to do so with respect to laptops and other devices, would ensure that terrorists and criminals receive less scrutiny at our borders just as their use of technology is becoming more sophisticated.

This result would be ironic given that this same technology actually enables terrorists and criminals to move large amounts of information across the border via laptops and other electronic devices. At the end of the day, we have a responsibility to search items -- electronic or otherwise -- that are being transported across our borders and that could potentially be used to harm our nation's citizens or that are otherwise contrary to law.

It seems to me that this fails to recognize a number of important respects in which your laptop is different from physical objects like documents, photographs, etc.

First, unlike drugs or currency, you don't need to actually carry information across the border in order to bring it into the country. For starters, you can just put it on some Web site (GMail, any file sharing site, etc.) and download it once you've entered the country. Standard encryption tools easily suffice to hide the data from interception by the authorities; you don't even need special software, since you can use SSL to contact the site. If you're using GMail, Google will even serve you ads relevant to your interests: "Get your discount surface-to-air missiles here." Of course, if you don't want this, you can PGP encrypt your data with some static key you memorize. Even if for some reason you can't figure out how to operate GMail, you can just copy the data onto a CDROM and ship it to yourself. Even if customs can search the mail—and I interpret this policy as saying they can't search USMail—as a practical matter it's trivial to hide your data in digital music or digital video files, so even if they do search your mail it's unlikely you'll get caught.
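
Just to make the "static key" option concrete, here's a minimal sketch in Python of encrypting under a memorized passphrase. It assumes the third-party cryptography package, and the passphrase, salt, and messages are placeholders of mine:

    import base64
    import hashlib
    from cryptography.fernet import Fernet  # third-party "cryptography" package

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        # Stretch a memorized passphrase into the 32-byte, base64-encoded
        # key that Fernet expects.
        raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        return base64.urlsafe_b64encode(raw)

    f = Fernet(key_from_passphrase("some static memorized passphrase", b"fixed-salt"))
    token = f.encrypt(b"data you don't want read in transit")
    data = f.decrypt(token)  # run on the far side of the border

There's nothing exotic here, which is rather the point: anyone can do this with off-the-shelf tools.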

Second, even if you do have to bring the data across with you, digital data is trivial to hide. For instance, a 2G flash memory chip is about 10x10x2 mm. I can think of lots of ways to hide a chip like that in your gear: for instance, in a chip-style cash card. Even if you can't contrive to hide it somewhere in your gear, remember that customs needs a much higher level of suspicion to do a body cavity search, so you can simply swallow the chip to bring it across the border. Basically, you can't stop a dedicated attacker from smuggling even large quantities of digital data across the border.

Ahern talks about preventing "dangerous people and things from entering the United States", but this conflates two different issues. For the reasons above, it's not really possible to stop "dangerous" digital data from entering the US. Now, you might be able to stop dangerous people from entering the US if they were stupid enough to forget to erase incriminating data from their laptops and you caught them during your search, but now that it's public knowledge that CBP is searching laptops, we should expect competent terrorists or child pornographers to take note, so you should mostly expect to catch the incompetent and, more likely, ordinary people who happen to be carrying contraband.

The third way in which laptops are different is that taking your laptop away is extremely invasive. Even if we ignore the arguments (which have already been aired extensively) about how much it compromises your privacy to have all the stuff on your laptop exposed, having your laptop taken away from you is incredibly inconvenient, as anyone who's ever had a hard drive crash can tell you. As I understand the policy, CBP claims that they can just take your equipment indefinitely. Without arguing about whether they're legally allowed to, it should be noted that they could just image the hard drive. This isn't quite as good since they don't get to do a complete search—you could be hiding your flash chips on the motherboard somewhere—but given the ease with which you can hide your media (see above), this seems like it's good enough to catch the stupid people.

 
Sorry about the hideous look and feel. I upgraded to MT 4.12 and I haven't imported my templates yet.

UPDATE: I've got the templates partly tuned, so at least they're less hideous. I'll be tuning them more over the next few days. If you see any functional problems (i.e., other than that it looks ugly), please let me know in the comments.

 
Are there any other religions besides the Mormons that offer live Web chat?
 

August 4, 2008

Peter Saint-Andre recently suggested that we add a "Short Authentication String" (SAS) mode to TLS. SAS is only one solution to a not-uncommon problem: preventing man-in-the-middle attacks on public key protocols without the use of a third-party authentication system such as certificates or Kerberos. The general assumption is that you have some low-bandwidth, non-machine-readable, trusted side channel (e.g., the telephone) [1] that isn't good enough to do a real key exchange but that you want to use to bootstrap your way to a secure channel. You only need to do this once: after the first time, your software can memorize the peer's public key or some other keying material and use that to provide continuity.
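
The "memorize the peer's public key" step is just SSH-style trust-on-first-use. A minimal sketch in Python (the pin-store location and JSON format are arbitrary choices of mine):

    import hashlib
    import json
    import os

    STORE = os.path.expanduser("~/.peer_keys.json")  # hypothetical pin store

    def remember_or_check(peer: str, key_der: bytes) -> bool:
        # First contact: pin the key (after verifying it over the trusted
        # channel). On later contacts, insist the key hasn't changed.
        fp = hashlib.sha256(key_der).hexdigest()
        pins = {}
        if os.path.exists(STORE):
            with open(STORE) as fh:
                pins = json.load(fh)
        if peer not in pins:
            pins[peer] = fp
            with open(STORE, "w") as fh:
                json.dump(pins, fh)
            return True
        return pins[peer] == fp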

I'm aware of three major techniques here: fingerprints, password authenticated key agreement/exchange, and short authentication strings.

Fingerprints are probably the best known technique; they're what's used by SSH. You compute a message digest of your public key (or of your self-signed certificate, as in DTLS-SRTP) and communicate it to the other party over the trusted channel. Then, when you do the key exchange over the untrusted channel, each side computes the fingerprint of the key the peer actually presented and compares it to the fingerprint it received over the trusted channel. If they match, you're golden. If not, you may have been subject to a man-in-the-middle attack (or something else has gone wrong). The advantage of this technique is that you can compute a single static fingerprint and use it all the time, and the fingerprint can be public (this is what makes certificate systems work, after all). Another advantage is that it's already compatible with TLS without any changes to the protocol. The disadvantage is that the fingerprint has to be comparatively long in order to prevent exhaustive search attacks, where the attacker generates candidate key pairs until it finds one with the right fingerprint. The complexity of this attack is dictated by the size of the fingerprint, so if you want 64 bits of security (probably the practical minimum), you need a 64-bit fingerprint, which means you're reading 16 hex digits over the phone, which starts to get into the inconvenient range.
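
In code, the fingerprint check is about as simple as these mechanisms get. A sketch, with SHA-256 and the 64-bit truncation as my illustrative choices:

    import hashlib

    def fingerprint(public_key_der: bytes, bits: int = 64) -> str:
        # Hash the encoded public key and truncate: 64 bits = 16 hex digits.
        return hashlib.sha256(public_key_der).digest()[: bits // 8].hex()

    # You read fingerprint(my_key_der) to the peer over the phone. Later, when
    # the untrusted key exchange happens, each side checks the key actually
    # presented against the fingerprint it heard:
    def check(presented_key_der: bytes, spoken: str) -> bool:
        return fingerprint(presented_key_der) == spoken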

Another approach is a password-authenticated key exchange (PAKE) system like EKE or SRP. These systems let you securely bootstrap a low-entropy secret password up to a secure channel [I'm not going to explain the crypto rocket science here] in a way that isn't subject to offline dictionary attacks; the attacker has to form a new connection to one of the sides for each password guess it wants to verify. The big advantage of a scheme like this is that the password can be short—32 bits is probably plenty. The disadvantage is that you can't use a single password; you need a different one for each person you are trying to authenticate with, since otherwise one counterparty can impersonate the other. Again, this is compatible with TLS as-is, since TLS provides an SRP cipher suite.
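
To give the flavor, here's a toy SRP-6a exchange run in a single process. The real parameters live in RFC 5054; the tiny modulus and password below are strictly illustrative and provide no security at all:

    import hashlib
    import secrets

    N = 2**127 - 1  # toy modulus; real SRP uses the groups from RFC 5054
    g = 3

    def H(*parts: bytes) -> int:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return int.from_bytes(h.digest(), "big")

    def b2(n: int) -> bytes:
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

    password = b"elephant"  # the short shared secret
    salt = secrets.token_bytes(16)

    # Enrollment: the server stores salt and verifier v, never the password.
    x = H(salt, password) % N
    v = pow(g, x, N)

    # One run of the exchange; in real life A and B cross the wire.
    a = secrets.randbelow(N)
    A = pow(g, a, N)
    b = secrets.randbelow(N)
    k = H(b2(N), b2(g)) % N
    B = (k * v + pow(g, b, N)) % N
    u = H(b2(A), b2(B)) % N

    # Both sides arrive at the same secret. A passive wiretapper can't mount
    # an offline dictionary attack; an active attacker gets one password
    # guess per connection.
    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow(A * pow(v, u, N) % N, b, N)
    assert S_client == S_server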

Finally, there are SAS schemes such as the one described by Vaudenay. The idea here is that the key agreement protocol lets the parties jointly compute some value which they then read to each other over the trusted channel. You need to take some care doing this because the protocol needs to stop an attacker from forcing the SAS to specific values, but there are well-known techniques for that (again, see the Vaudenay paper). One problem with schemes like this is that you can't exchange the SAS over the trusted channel until after you've done the key exchange, whereas with the other two schemes you can exchange the authenticator in advance—though you don't have to with the fingerprint scheme, and even with SRP there are ways to do it afterwards [technical note: do your standard key exchange with self-signed certs, then rehandshake with SRP over the same connection when you want to verify it, and then pass a fingerprint over the SRP-established channel].
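
And a rough sketch of the commitment trick used in Vaudenay-style SAS schemes, simplified to the point of caricature (random strings stand in for the real key-exchange messages):

    import hashlib
    import secrets

    def sas(transcript: bytes, digits: int = 4) -> str:
        # Truncate a hash of the exchanged values to something short enough
        # to read aloud.
        n = int.from_bytes(hashlib.sha256(transcript).digest(), "big")
        return str(n % 10**digits).zfill(digits)

    alice_msg = secrets.token_bytes(32)  # stand-in for her key-exchange message
    commitment = hashlib.sha256(alice_msg).digest()
    # 1. Alice -> Bob: commitment (she is now locked in, sight unseen)
    bob_msg = secrets.token_bytes(32)    # stand-in for his key-exchange message
    # 2. Bob -> Alice: bob_msg
    # 3. Alice -> Bob: alice_msg; Bob checks it against the commitment
    assert hashlib.sha256(alice_msg).digest() == commitment

    # Both sides then read this short string over the trusted channel. Because
    # Alice committed before seeing Bob's message, an attacker in the middle
    # can't keep grinding values until its two tampered sessions happen to
    # produce matching strings.
    print(sas(alice_msg + bob_msg))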

None of these schemes is perfect. Optimally, you'd be able to have a short, public, static authenticator, but I'm not aware of any such system, so you need to figure out which compromise is best for your environment.

[1] ZRTP carries the SAS in the established audio channel, trusting voice recognition to provide a secure channel. There are some reasons why I don't think this will work well, but they're orthogonal to what I'm talking about here.

 

August 2, 2008

Science has an interesting article about the effect of cooling vests on athletic performance. It's clear that overheating has a major negative effect on performance, so the logic here is that if you cool off before competition you'll take longer to overheat:
Since the 1970s, numerous studies have shown that precooling can dramatically affect some measures of athletic output. A 1995 study of 14 male runners found that if they were first chilled for 30 minutes in a chamber at 5°C, they could run on a treadmill at a certain level of exertion for an average of 26.4 minutes, a whopping 3.8 minutes longer than they averaged otherwise.

Olympic events are typically races over fixed distances, however, and the few studies of race times show much smaller improvements. In 2005, BYU's Hunter and colleagues studied 18 female cross-country runners, who had ingested encapsulated thermometers, as they participated in 4- and 5-kilometer races. Some wore ice vests for an hour before their race, and, on average, their core body temperatures were half a degree lower than those who did not, even at the ends of the races. But the researchers found only an insignificant difference of a few seconds in the two groups' average times.

Similarly, Kirk Cureton and colleagues at the University of Georgia, Athens, put nine male and eight female runners through simulated 5-kilometer races on treadmills. When the runners wore ice vests during a 38-minute warm-up of jogging and stretching, they finished the time trial 13 seconds faster on average than when they warmed up without them. That was a 57-meter lead over their warmer selves, and "even if it was 10 meters it would be important," Cureton says.

But Cureton and colleagues found that temperature differences vanished by race's end, suggesting that precooling is less valuable for long races like the marathon. It likely helps for races lasting between a minute and an hour, Cureton says. It definitely hurts in sprint events.

13 seconds is huge: the difference between the top three athletes in the 5K at the 2004 Olympics was less than a second. On the other hand, it's not going to take an ordinary athlete and make them an Olympian. Not much use for me either, since I do mostly longer distance events.
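
As a sanity check on that 57-meter figure (my arithmetic, assuming the 13-second gain is spread evenly over the race): 57 m ÷ 13 s ≈ 4.4 m/s, which corresponds to a 5K of about 1140 seconds, i.e., roughly 19 minutes. That's consistent with trained but sub-elite runners, which is presumably who was in the study.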