April 2007 Archives


April 28, 2007

Climbing rope is expensive, but for some reason prices vary quite dramatically. So, while your basic 60m rope in 10.2-10.5 mm usually runs about $150 for standard or $170-180 for dry treated (mostly useful if you're ice climbing or your rope would otherwise be likely to get significantly wet), 20% discounts (to $120 for non-dry) are pretty common. But right now Backcountry.com is running a special on the Beal Edlinger 10.2mm 60 meter: $97 for dry treated. I've heard pretty good things about this rope and bought the non-dry version (now out of stock) for $83 yesterday.

Acknowledgement: Thanks to Eu-Jin Goh for pointing this deal out to me.


April 27, 2007

The recent vogue for carbon offsets has inevitably created a backlash. The basic claim is that the offsets don't really lead to reduced emissions. Here's a prototypical such comment from Jonathan Adler at Volokh:
An investigation by the Financial Times suggests that many carbon offsets are illusory, and that there is little assurance that purchasing carbon offsets does much of anything to reduce carbon dioxide emissions. Specifically, the report found:
- Widespread instances of people and organisations buying worthless credits that do not yield any reductions in carbon emissions.

- Industrial companies profiting from doing very little - or from gaining carbon credits on the basis of efficiency gains from which they have already benefited substantially.

- Brokers providing services of questionable or no value.

- A shortage of verification, making it difficult for buyers to assess the true value of carbon credits.

- Companies and individuals being charged over the odds for the private purchase of European Union carbon permits that have plummeted in value because they do not result in emissions cuts.


The bottom line is that if Al Gore and Leo DiCaprio truly want to be sure they are reducing their carbon footprint, they are going to have to reduce their own energy consumption, rather than paying others to do it for them.

First, let me say that I have no idea whether carbon offsets actually reflect real reductions by others or not.1 However, it seems to me the standard of "truly want[ing] to be sure" is an unreasonably high bar. An enormous number of the things that you do have carbon footprints that are hard to verify. One obvious way of reducing your carbon emissions is to buy a fuel efficient car, like a hybrid. But what's the additional energy cost of manufacturing a hybrid? I don't know and you probably don't either. Maybe it's zero and maybe it's huge (remember the Dust to Dust flap back in 2006). It seems to me that the best one can reasonably expect a consumer to do is act according to the best knowledge they currently have.

Second, any carbon reduction measure that people follow will almost inevitably involve a lot of paying others to reduce their footprint for you, unless you expect that someone is going to give you all that energy efficient tech for free. People (mostly conservatives) often bring up nuclear power as an example of a non carbon-emitting energy technology but surely everyone expects that if Gore is in favor of nuclear power he's going to lobby for his local utility to build a nuclear plant rather than setting up a pebble bed reactor in his back yard. That sure sounds like paying someone else to reduce your footprint for you.

Even if we assume that carbon offsets totally don't work, e.g., that the people selling them take your money and use it to gas up their Gulfstream Vs, that doesn't necessarily make them a bad idea. Think of them rather as a tax on carbon consumption (an idea that Adler appears to favor). Remember that the purpose of a Pigouvian tax is to align people's incentives with the externality costs of their behavior. In order to serve that purpose it doesn't much matter where the money goes as long as it's collected (and in fact distributing the proceeds of a real carbon tax would turn out to be a somewhat tricky issue). From that perspective, the key point is that those who buy offsets are demonstrating that they have internalized the externality costs (or at least are trying to) and if it happens that the money actually somehow decreases emissions by others that's a nice bonus.

This of course raises the question of whether the price of the offsets actually is right to be a Pigouvian tax. The answer turns out to be sort-of. Wikipedia claims that the social cost of CO2 emissions is around $12 per ton of CO2 (a ton of CO2 contains about .3 tons of carbon). The cost of credits varies widely. TerraPass's credits (which is what the Oscars used) sell for about $8/ton. Carbonfund's sell for $5.50/ton, which seems a bit low. But remember that that price is based on the externality cost alone. If you factor in that there's some probability that your money actually is going to reduce carbon emissions somewhere, then these numbers don't seem that far off.
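
As a back-of-envelope check, the comparison above is just a couple of ratios. The sketch below uses only the post's own figures ($12/ton social cost, the two offset prices) plus the standard atomic-weight conversion between CO2 and carbon; nothing else is from an outside source.

```python
# Back-of-envelope check of offset prices against the social cost of
# carbon. The CO2-to-carbon conversion is just the ratio of molecular
# to atomic weight: a ton of CO2 contains 12/44 of a ton of carbon.

SOCIAL_COST = 12.00          # $/ton of CO2, the Wikipedia figure cited above

def carbon_tons(co2_tons):
    """Tons of elemental carbon in a given mass of CO2."""
    return co2_tons * 12 / 44        # ~0.27, the ".3 tons" above

def share_of_externality(price_per_ton_co2):
    """Fraction of the estimated social cost an offset price covers."""
    return price_per_ton_co2 / SOCIAL_COST

terrapass = share_of_externality(8.00)     # about two thirds of the social cost
carbonfund = share_of_externality(5.50)    # a bit under half
```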

1. I'm not unaware of the rhetorical context in which Adler and others make this argument, namely that Gore, etc. are supposed to be hypocrites for wanting others to reduce their emissions while not reducing their own. I'm simply ignoring it for the purposes of analysis.


April 26, 2007

I've written before about Cynthia Beall's work on oxygen adaptation. This week's Science has a short article with some more information about the biochemistry behind it:
But exactly how do these women manage to carry extra oxygen in their blood? They do not produce more hemoglobin the way Andeans living at high altitude do. One possibility is that the women with high oxygen have an adaptation that Beall is exploring independently in these same Tibetan villagers. She found that some villagers exhale extra nitric oxide in their breath, a sign of additional amounts of the gas in their blood. In those Tibetans, nitric oxide dilates the blood vessels so they can pump more blood and oxygen to organs and tissues, as measured by images of heart and lung blood vessels. The Tibetans can boost their blood volume--and so pump more oxygen to their tissues--without producing more hemoglobin or raising the blood pressure in their lungs. That's the reverse of what happens when mountaineers suffer from oxygen deficiency: The blood pressure in their lungs rises, the blood vessels constrict, and fluid builds up, suffocating the lungs.

The next step, says Beall, is to try to see whether these two lines of research meet. She wants to find the underlying gene behind the women's high-oxygen blood--and see whether it is related to genes that regulate levels of nitric oxide in the blood. She notes, however, that it's quite possible that the Tibetans have evolved more than one way to boost blood oxygen, and that these are independent adaptations. Gladwin suggests that Beall's team also measure nitric oxide and blood pressure in the lungs in pregnant women, who are under the most physiological stress at altitude and presumably would benefit most from this adaptation. "Study the pregnant women," he says, "because that's where you'll see evolution in action."

I wish I knew more about oxygen metabolism at high altitude, but a brief lit search seems to support the nitric oxide connection, in particular that there's some evidence that low nitric oxide levels make you susceptible to high altitude pulmonary edema (HAPE), as well as that you can use nitric oxide to treat HAPE. Given this, it's not too surprising that Viagra, which also operates via nitric oxide, appears to improve high altitude exercise performance for some people. Interestingly, in both treatment studies, one group of people responded and one did not, reinforcing the theory that there is genetic variation in nitric oxide response.

One of the notable (though not surprising) aspects of high-altitude mountaineering is the semi-controversy over the use of supplemental oxygen, which many consider prudent but some old-school climbers regard as weak. (This mirrors a general attitude split in climbing circles about whether risk is something that should be minimized to the greatest extent possible or is what makes climbing fun.) I'd be interested to see how attitudes towards Viagra develop, especially if it becomes clearer that there's a specific physiological basis for nitric oxide treatment, rather than it just being a matter of some people being tougher than others.


April 24, 2007

Picked up The Andromeda Strain at the library. Not as good as the book, though reasonably well done. [The book, btw, is extremely well executed, totally inconsistent with Crichton's subsequent descent into hackery. Crichton has (or had) a good sense for how science works as well as a good ear for how to generate convincing faux technical detail.]

One thing struck me, though. As with Asimov 25+ years earlier [*], the extrapolations of what computers can and can't do are strangely off. The movie includes the following gadgets:

  • An automatic medical analyzer with voice recognition (in the book)
  • A computer capable of fully simulating the growth of the Andromeda organism based purely on X-ray crystallography and electron microscopy (not in the book)

However, when they're studying the conditions under which Andromeda can grow, a scientist has to watch a CRT display the results of each growth plate one after another, looking for anomalies. Better yet, the anomalous results display the legend "NO GROWTH". Of course, this sort of pattern matching is child's play for software. Why the computer can't detect these is not explained. Incidentally, this scene does not seem to be in the book. On the contrary, Crichton explicitly has the computer flag the relevant growth conditions, which is exactly what you'd want.
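
For illustration, here's roughly what that trivial pattern match looks like in a few lines of Python. The plate-record format is hypothetical; the point is just that flagging "NO GROWTH" results is a one-liner, not a job for a scientist at a CRT.

```python
# Flagging anomalous growth-plate results automatically -- the trivial
# pattern match the film has a scientist do by eye. The (condition,
# outcome) record format here is made up for illustration.

def flag_anomalies(results):
    """Return the growth conditions whose result reads 'NO GROWTH'."""
    return [condition for condition, outcome in results
            if outcome == "NO GROWTH"]

plates = [
    ("pH 7.0, 36C", "GROWTH"),
    ("pH 7.4, 39C", "NO GROWTH"),   # the anomaly the plot hinges on
    ("pH 6.8, 37C", "GROWTH"),
]
anomalies = flag_anomalies(plates)
```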

This blind spot about how to search for stuff on a computer would be echoed in Jurassic Park, where the girl hacker navigates some 3D VR-style file manager ("It's a UNIX system! I know this!") that no hacker would be caught dead using. (Though as I recall, said file manager was a real demo on SGI systems of the time.) Of course, both movies were made in periods when not that many people had first-hand experience with computers and your average disk drive couldn't really hold enough files to make searching necessary. Now that nearly everyone has direct experience with search engines, I'd be surprised to see this particular mistake made in contemporary fiction. More likely screenwriters will assume that you can just type any random thing into Google and the information will just pop out. (I actually remember someone mentioning this about a recent movie but can't recall which one it was.)


April 22, 2007

One of the nice features of OS/X is that since it's BSD, if you forget your password you can reboot single user and just change it. You just hold Apple-S while booting. Or so I thought. I tried it today on the Mac Mini I use as an ad hoc DVD player, only to have it come up normally. Natural thought: maybe it's the peripherals.

Step 1: replace the TV with a monitor. No joy.
Step 2: replace the wireless keyboard and mouse with a regular keyboard (no mouse). Machine comes up in single user but I can't seem to type anything. Or rather, I can type but nothing happens.
Step 3: scrounge an old USB mouse and plug it in. Hooray, I can type. Mission accomplished!

Now I just have to pull all that stuff off and wire it back up the way it was. I guess that's some kind of incentive to remember my password.


April 21, 2007

Apparently the City of Boston's free wireless service has some kind of censorware on it, since it's blocking Boing Boing:

Jake tried to access Boing Boing from Boston's free WiFi network and got this notice -- topped by the seal of the Mayor of Boston no less! Banned in Boston -- first they came for the Mooninites, then they came for the Boingers.

Want to defeat censorware? Let freedom ring!

Update: Seth sez, "The phrase 'Banned combination phrase found' is a characteristic message of the censorware DansGuardian. It seems some combination of words has triggered the 'isItNaughty' flag (that's what they call it). It would be an interesting legal case to see if you had the right to file a Freedom Of Information Act for the settings and block logs to find out the exact reason you got censorware'd."

This seems like a not particularly attractive development. Ubiquitous Internet (free or otherwise) is a really cool thing and is going to enable all sorts of applications we've just begun to experience (and of which today's low-speed cellular Internet access is just a pale shadow). But it's going to be pretty lame if that network can be arbitrarily censored by random bureaucrats. I know, I know, this is just the free network: you can always pay for some sort of commercial wireless. That's not really true, though; a taxpayer subsidized free WiFi network is going to make it pretty unattractive for commercial providers to enter the market, so you'll just be stuck with the censored version unless you're willing to plug in somewhere.

Moreover, I'm not a constitutional lawyer (or indeed any kind of lawyer), but it's not entirely clear to me that this is constitutional. The First Amendment requires that government fora have viewpoint neutrality (as a somewhat strained analogy, consider the situation with advertisements in subways, where the government has been forced in at least some cases to allow pro-marijuana reform advertisements once it allowed any advertisements). Given that unlike subway advertisements Web access doesn't require subjecting others to your speech, it would seem that there would be a stronger case that censorship was impermissible here. Stepping back from the constitutional question, it's pretty hard to understand what the rationale for banning Boing Boing is (though I've heard it suggested that it was for making fun of the Great Comedy Central Boston Bomb Scare of 2007, which I suppose is possible, but isn't something you'd want to get out if you were the guy who made this decision).


April 19, 2007

Watching Flightplan (sort of the Jodie Foster version of Nightmare at 20,000 Feet), which takes place on the "new E474", apparently an A380. Like all newfangled megasuperhyperjumbo aircraft it appears to be configured with a lounge complete with a wet bar. No doubt you've seen artists' conceptions of the A380 configured the same way. It's of course true that the A380 has room for such amenities, but then so did the 747. In reality, of course, by the time you're actually allowed on the A380, they'll probably have removed the seats entirely so you can travel freeze-dried and packed in a cardboard box.

Matthew Yglesias links to Tim Lee's post about wireless networks:
As I argued in an op-ed last year, this is silly. Accessing someone else's wireless network, especially for casual activities like checking your email, is the very definition of a victimless crime. I've done the same thing on numerous occasions, and I deliberately leave my wireless network open in the hopes that it will prove useful to my neighbors.

The only concrete harm opponents of "piggy-backing" can come up with is that the piggy-backer might commit a crime, such as downloading pirated content or child pornography, with your connection. But remember that there are now thousands of coffee shops, hotels, and other commercial locations that offer free WiFi access, and most of them don't make any effort to verify identities or monitor usage. So someone who wants to get untraceable Internet access can go to any one of those establishments just as well as they can park outside your house.

Which isn't to say that there are no reasons people might not want to share their network connections with the world. If sharing your Internet access creeps you out, by all means set a password. And there's almost certainly work to be done educating users so that people are fully informed of the risks and know how to close their network if they want to do so.

So, I certainly agree that piggy-backing isn't much to worry about [*], but that doesn't mean that it's a great idea to run your wireless network completely open. Most home access points do some kind of NAT, which provides a substantial amount of security against attacks from the Internet, at least primitive port-scanning type attacks. If your machines are properly secured, this isn't necessary, but if they're not—as is reasonably common—then it provides a useful backup.

On the other hand, if someone is on your wireless network, then they will get a private address on the same network block as you and be able to talk directly to your machines, which is a substantially inferior security situation. So, as a belt and suspenders move, it's certainly understandable why one would want to keep people off one's wireless network. This becomes even more true as people start moving hardware that would usually be physically wired onto wireless networks as an alternative to running Cat5 through the entire house.


April 16, 2007

So, California had this great idea to incentivize hybrid vehicles: they would let you drive them in the HOV lane. It was a big pain since you had to get a toll pass transponder and fill out some forms, but eventually they gave you some stickers you could put on your car so you wouldn't get pulled over. But in January the state stopped issuing the stickers, suddenly rendering them a lot more valuable. This has had two big side effects (aside from the fact that, since I never got around to getting stickers for Mrs. Guesswork's Prius, I'm stuck in the slow lane with the rest of the proles):
  1. Cars with HOV stickers now command a significant price premium on resale. This site claims it's $4K.
  2. People are stealing the stickers off cars.



April 15, 2007

Saw 300 last night. I was just fine with magic and monsters but one thing tweaked me. In trying to convince the Spartan council to send reinforcements, the queen says that they shouldn't let "a king and his men have been wasted to the pages of history." But of course, the Greeks used scrolls, since codices weren't available yet.
A year ago, I wrote about the case of Tatyana McFadden, a wheelchair-bound athlete who wanted to race with/against runners in track:
In some sense, McFadden considers her most recent lawsuit a victory in itself: She finally has reached the last impediment, she said. She wants the Maryland Public Secondary Schools Athletic Association to count her wheelchair racing results in region and state meets toward the overall team competition. The MPSSAA contends that it already has exceeded its obligations by adding eight nonscoring wheelchair events to this year's track championships.


Instead, it made Bowler a hero to most able-bodied runners. At Mdrunning.net, a popular Internet site that features a chat forum and message boards, Bowler's letter -- especially when combined with McFadden's decision to file another lawsuit -- created a frenzy. The board's proprietor, local distance-running guru Brad Jaeger, argued that awarding McFadden points at the state meet would "absolutely ruin the whole sport." Teams usually win the state championship by scoring about 70 points in the state meet, Jaeger reasoned. So if Maryland awarded McFadden the usual 10 points for first place -- right now, she's only asking for one point -- that would drastically alter the meet. And should McFadden compete in the maximum four events? Atholton virtually would be ensured a state title.

Ultimately, there are three somewhat orthogonal issues here:

  1. Whether McFadden races alone or at the same time as non-wheelchair athletes.
  2. Whether her performance is compared to (for purposes of placing) non-wheelchair athletes.
  3. Whether her performance counts against the team score.
Last year McFadden claimed she just wanted to race alongside others (the first issue) but now wants to score points.
The McFaddens had simply hoped the judge would allow Tatyana to compete at the same time as runners. In most of her previous high school races, McFadden competed -- often alone -- in events designated for wheelchair athletes. She would score one team point for each event.

"The judge said many, many times the scoring system was not part of the case," Tatyana said. "I don't care about points."

First, recall that wheelchair performances are dramatically superior to non-wheelchair performances. The gap between wheelchair and non-wheelchair performances significantly exceeds the ordinary male/female gap. I'm not sure whether this is actually unsafe in the sense that it poses a threat to non-wheelchair athletes, but I'm fairly confident it could be made reasonably safe by segregating McFadden into her own lane until the point where she would be far ahead. Actually, it's the fact that she's so much faster that makes this possible, since she will quickly be far away from the other runners.

However, this gap also means that having her compete against ordinary runners, either individually or in aggregate (counting towards team scoring), is incredibly distorting. Either her team will always dominate (remember, she will win 3-4 events) or every other team will have to field wheelchair athletes (presumably by co-opting non-disabled athletes, as I suggested previously).

So, what's the rationale for allowing this? Fundamentally, it's being suggested that it's unfair that the disabled not be allowed to compete on the same team as others. Certainly we've come to think of fairness as a basic social norm and so this argument is superficially compelling—if at all possible McFadden should be allowed to compete. But that doesn't actually give you a complete answer because it doesn't explain why she should compete in track. If you look at the wheelchair racer that McFadden is using, it's basically a hand-powered tricycle. The assumption that people seem to have is that this should be viewed as an unusual form of running, but it's actually just as reasonable, if not more so, to view it as an unusual form of cycling. Of course, if wheelchairs were treated as bicycles, McFadden would be at a significant performance disadvantage. Treating McFadden that way would be no more fair than treating her as just another runner. The problem here is that wheelchair racing is a fundamentally distinct sport from both cycling and running and that unfortunately for McFadden, it doesn't have much of a constituency.

As for team scoring, at some level, the set of events which is included in Track and Field is arbitrary (what does the shot put have to do with the two mile relay?). But it's not clear to me at least that any basic fairness norm implies that a certain sport (especially one which is highly unpopular) should be included in that set.


April 10, 2007

I thought Open Source beer was absurd, but now someone claims to be building an Open Source car. As with the beer, the difficult part of building a car isn't that you're missing a design. It's that manufacturing it has very large economies of scale. Even something as simple and relatively forgiving as a bumper or tire requires a fairly substantial manufacturing operation. Now take a look at a carburetor:

Now, I've done a bit of metalwork, and given enough time, the specs, and tens of thousands of dollars worth of machine tools I could probably manage to manufacture a semi-working carburetor, but it would take me weeks, and here's an object that costs about $120 if you buy it retail. A carb has a lot of moving parts, but it doesn't take a lot of stress or have particularly fine tolerances, unlike, say, a piston or the frame. Of course you could say you'll buy anything really complicated as parts, but it quickly turns out that that's more or less the whole car. I guess you could still open source the pattern for those fuzzy dice...


April 9, 2007

As I mentioned earlier, all symmetric authentication mechanisms have some level of key/password equivalence. However, this can be removed with asymmetric (aka "public key") techniques.

The simplest such technique is very much like a challenge/response technique except that the response sent by the AP is a public key digital signature. The way that this works is that the VP knows the AP's public key but not the AP's private key. The VP provides the AP with a random challenge and the AP returns Sign(Kpriv, Challenge).1 This technique is what's used in SSL client authentication and in the SSH RSA and DSA modes (sometimes used with certificates). A related technique is to have the AP have an encryption/decryption key rather than a signature key and have the VP encrypt a message under that key (this is how SSL server authentication often works).
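
As a sketch of that exchange, here's a toy implementation using textbook RSA with a deliberately tiny fixed key (p=61, q=53). Real systems use a vetted library, proper padding, and keys thousands of bits long, so treat this purely as an illustration of the Sign(Kpriv, Challenge) flow, not as something to deploy.

```python
import hashlib
import secrets

# Toy textbook-RSA sketch of signature-based challenge/response.
# The tiny hard-coded key and unpadded signing are for illustration only.

N, E, D = 3233, 17, 2753   # n = 61*53; (E, N) is public, D is private

def sign(private_d, challenge):
    """AP side: sign the challenge with the private exponent."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(h, private_d, N)

def verify(public_e, challenge, signature):
    """VP side: check the signature using only the public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, public_e, N) == h

# VP issues a random challenge; AP responds with a signature over it.
challenge = secrets.token_bytes(16)
response = sign(D, challenge)
assert verify(E, challenge, response)
```

Note that the VP never sees the private exponent D, which is exactly what removes the key equivalence: stealing the VP's copy of (E, N) doesn't let you impersonate the AP.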

So, if these techniques are so great, why aren't they used all the time? There are a number of reasons:

  1. Public keys are hard to work with. They require the AP to have a key pair stored on disk somewhere (impairing portability) and require some way to carry a fairly heavyweight data object (~100 bytes of binary data) to the server. People put up with this for SSH but they don't like it.
  2. It doesn't provide mutual authentication (and inherently can't because the server only has the public key). Obviously, you can have the server have a public key pair as well, but that makes the key management problem even more annoying.

The first problem is soluble, at least in part, if you're willing to trade away some security. The idea here is that you generate your key pair by hashing your password. Then the AP doesn't need to store a key pair on disk (again, assuming that you have some other method of authenticating in the other direction). The security tradeoff here is that an attacker can now mount a dictionary attack on your communications with the server unless they're encrypted. He just captures a transcript and then keeps generating trial passwords until he finds one that generates the right private key and hence signature. Of course, this problem already existed with challenge/response-based password mechanisms, so the problem hasn't been made any worse.
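
Here's a sketch of both the password-derived key pair and the dictionary attack on a captured transcript, using a Schnorr-style signature over a deliberately tiny group so the numbers fit on screen. The group parameters, password, and guess list are all made up for illustration; a real system would use a group of cryptographic size.

```python
import hashlib
import secrets

# Password-derived key pair plus the transcript dictionary attack.
# Tiny toy group: P = 2*Q + 1 with Q prime; G = 4 has order Q.
P, Q, G = 10007, 5003, 4

def h_int(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keypair_from_password(password):
    x = h_int(password) % Q          # private key = hash of the password
    return x, pow(G, x, P)           # (private, public)

def sign(x, message):                # Schnorr-style signature
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = h_int(r, message) % Q
    return e, (k + x * e) % Q

def verify(y, message, sig):
    e, s = sig
    r = (pow(G, s, P) * pow(y, -e, P)) % P   # g^s * y^-e == g^k
    return h_int(r, message) % Q == e

# The attacker captures one signed transcript...
x, y = keypair_from_password("hunter2")
transcript = ("login-challenge", sign(x, "login-challenge"))

# ...then derives trial key pairs from guessed passwords, offline,
# until one of them verifies the captured signature.
for guess in ["123456", "password", "hunter2"]:
    _, y_guess = keypair_from_password(guess)
    if verify(y_guess, *transcript):
        recovered = guess
        break
```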

We can also shrink the key that gets copied to the VP by storing a hash of the key rather than the full key. The AP then provides the public key at authentication time, and the VP compares it to the stored hash. This brings the size of the data stored by the VP down to about 128-160 bits. An alternative is to simply give the VP the password on initial registration and let him compute the private/public key pair and then "forget" the password. This obviously needs to be used with some technique to ensure that the private keys are different for multiple VPs even if you use the same password all the time. None of these techniques solves the mutual authentication problem, which needs to be attacked by other means.
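
A minimal sketch of the hash-the-public-key variant; the serialized key bytes here are placeholders. The VP keeps only a 32-byte digest, and checks any key the AP presents against it before using that key to verify a signature.

```python
import hashlib

# The VP stores only a fingerprint (hash) of the AP's public key; the
# AP supplies the full key at authentication time.

def fingerprint(public_key_bytes):
    """256-bit digest of the serialized public key."""
    return hashlib.sha256(public_key_bytes).digest()

# Registration: the VP keeps just the fingerprint.
ap_public_key = b"placeholder serialized public key"   # hypothetical bytes
stored = fingerprint(ap_public_key)

# Authentication: the AP presents its key; the VP confirms it matches
# the stored hash, then uses it to check the AP's signature (not shown).
def key_is_genuine(presented_key, stored_fingerprint):
    return fingerprint(presented_key) == stored_fingerprint

assert key_is_genuine(ap_public_key, stored)
assert not key_is_genuine(b"attacker key", stored)
```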

This brings us to the topic of zero-knowledge password proofs (also known as password-authenticated key agreement). These protocols use public key techniques to allow two parties who jointly share a password to establish a shared key which is not accessible to any attacker. These protocols can also be constructed so that the VP does not have a password/key equivalent. This differs from the easier-to-understand public key techniques in two important ways. First, they support mutual authentication natively. Second, they don't allow the attacker to mount a dictionary attack on a single connection. For each guess he wants to check he needs to do an online communication with one side. The major remaining attack is that the VP can mount a dictionary attack on his stored value in an attempt to recover the AP's password (though of course the AP/VP terminology is less useful in a mutually authenticated environment). Short of using high-entropy passwords/keys, this attack doesn't seem removable.

1. Actually standard practice is for the AP to provide some randomness of its own, but the attacks where that's relevant aren't that important for understanding the concept.


April 7, 2007

In our previous episode, we talked about key equivalence in physical locks and password systems. As you'll recall, conventional password systems have the problem that the authenticating party (i.e., the user, hereafter called the AP for generality) needs to provide their password to the verifying party (VP, i.e., the server). This has (at least) two bad properties:
  1. An attacker who can intercept your communication with the verifying party or who temporarily controls the verifying party can capture your authenticator (password) when you use it to log in and use it to impersonate you to that verifying party.
  2. An attacker who can intercept your communication with the verifying party or who temporarily controls the verifying party can capture your authenticator (password) and use it to impersonate you to other verifying parties with which you used the same password (and you know you do).

The way to solve the first problem is to have a protocol that allows the AP to prove that they know the password without actually revealing it to the VP. The standard solution to this is what's called a challenge-response protocol. The VP provides the AP with a randomly chosen challenge (technically the challenge just has to be one the VP hasn't used before, but this is almost always chosen randomly) and the AP computes some one-way function of the password/key as the response. The VP stores a copy of the password/key and can thus independently recompute the response. If they match, then the VP knows the AP is who he says he is (or at least knows the password/key).
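
A minimal sketch of that exchange, with HMAC-SHA256 standing in for the unspecified one-way function (the post doesn't name one, so this choice is mine):

```python
import hashlib
import hmac
import secrets

# Challenge/response: the VP sends a random challenge, the AP returns a
# one-way function of the shared password keyed over it, and the VP
# independently recomputes the response and compares.

def response(password, challenge):
    return hmac.new(password.encode(), challenge, hashlib.sha256).digest()

shared_password = "hunter2"              # known to both AP and VP

# VP side: issue a fresh random challenge.
challenge = secrets.token_bytes(16)

# AP side: answer it; VP side: recompute and compare.
ap_answer = response(shared_password, challenge)
ok = hmac.compare_digest(ap_answer, response(shared_password, challenge))
```

The password itself never crosses the wire; an eavesdropper sees only the challenge and the HMAC output, and a captured response is useless against a future (different) challenge.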

But wait, last time I said that it was bad for the VP to have the password:

This has a big problem. If someone breaks into the server and gets a copy of the password list they get a copy of everyone's password and can impersonate users. This is what's called a password equivalent or a key equivalent for reasons that will become clear a little later. This lets them leverage a disclosure exploit (i.e., one that lets them read files on a system) into an intrusion exploit (i.e., one that lets them break in or pose as another user). It also means that the password file has to be stored with very strict permissions.

Previously, we solved this problem by storing the hash of the password, but that worked because the AP gave the VP the password to hash. In a challenge-response system the VP needs to independently compute the response. Now, you can of course compute the response based on the password hash rather than the password, i.e., response = F(challenge, H(password)) but that doesn't solve the problem because the VP's password file contains H(password). So, while you don't actually have the password you have a value which is equivalent to it, hence the term password equivalent. Anyone who compromises the password file can impersonate the AP to the VP. So, we've solved the problem of someone intercepting1 the authentication exchange being able to impersonate the AP but we've actually made the problem of password file theft worse.
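
To make the password-equivalence point concrete, here's a small sketch (HMAC again standing in for F): when the response is computed from H(password), a thief holding only the VP's stored hash can answer challenges exactly as well as someone who knows the password.

```python
import hashlib
import hmac
import secrets

# Why H(password) is a "password equivalent": if response =
# F(challenge, H(password)), the stored hash is all you need.

def response(password_hash, challenge):
    return hmac.new(password_hash, challenge, hashlib.sha256).digest()

stored_hash = hashlib.sha256(b"hunter2").digest()   # what the VP keeps

challenge = secrets.token_bytes(16)

# Legitimate AP: hashes the password it knows, then computes the response.
legitimate = response(hashlib.sha256(b"hunter2").digest(), challenge)

# Thief: never learns "hunter2", but the stolen hash works just as well.
stolen = response(stored_hash, challenge)
impersonation_works = hmac.compare_digest(legitimate, stolen)
```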

We can improve the problem somewhat by making sure that each VP has a different password. Then at least you can't compromise one VP and use it to attack another. Of course, it's not practical to believe that people will actually use a different password for each of the 30 web sites they have logins for, but you can solve this problem by hashing the name of the VP into the stored value. I.e., the VP stores H(VP-name, password)2 and the response is computed using that value as the input. So, if you get at a VP's password file you can impersonate APs to that VP, but not to any other VP. This is an improvement (call it weak password equivalence), but it's not perfect. However, it's the best we can do with symmetric cryptography. In our next installment, we'll see how to improve the situation still further.
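
A sketch of the per-VP stored value (the site names and password are made up, and a plain hash stands in for HMAC per footnote 2): the same password yields different stored values, and hence different responses, at different VPs.

```python
import hashlib

# Weak password equivalence: hashing the VP's name into the stored
# value makes a value stolen from one site useless at another.

def stored_value(vp_name, password):
    """H(VP-name, password) -- what this VP keeps on file."""
    data = vp_name.encode() + b"|" + password.encode()
    return hashlib.sha256(data).digest()

bank = stored_value("bank.example", "hunter2")
shop = stored_value("shop.example", "hunter2")

# Same password, but the stored values (and hence any challenge
# responses derived from them) differ between sites.
cross_site_reuse_fails = bank != shop
```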

1. Well, mostly. An attacker can still mount a man-in-the-middle attack on a single authentication, and then pose as the AP for the duration of that session, but he can't reuse the captured authenticator later. Moreover, this attack can be fixed by binding the challenge-response to a cryptographically protected channel between client and server. One example of this is TLS pre-shared key mode (RFC 4279).

2. Yeah, I'm sure you'd rather use HMAC, but a hash is close enough to get the idea across and is mostly secure in most settings.


April 6, 2007

A British jail is changing all its locks because the keys were shown on TV:
An ITN team mistakenly filmed keys on a visit to Feltham Young Offenders' Institution, West London — sparking fears they could be copied.

It meant the nick's 11,000 locks and 3,200 keys all had to be replaced.

First, I'm fairly skeptical that you can reverse-engineer the keys for a lock based on just seeing the key on TV (and unless the lock is incredibly badly engineered, I don't see how you can do it with the lock) unless it's some extreme close-up shot, in which case it should be easy for the jail to figure out what keys were compromised and just rekey them, rather than the whole jail. Second, keys are just part of the jail defense-in-depth system, so hopefully compromise of keys isn't a disaster. After all, it's not that hard to pick most locks, so you can't count on only the lock anyway.

In general, it's not an attractive property of a security system that merely seeing one of the elements allows the attacker to break it. This is sort of inherent in the construction of ordinary physical locks, but even there you could improve the situation a bit by (for instance) putting the beveled sections of the key on the inside rather than the outside, so just looking at the key doesn't reveal much information. It's of course harder to cut keys that way with conventional cutting machines, but arguably that's a feature, since it means that you need specialized equipment to duplicate the keys1, which presents a modest barrier. The bottom line is still that with physical lock systems, if you can examine the key (even briefly) or the lock (sometimes quite extensively), you can typically figure out enough about what the key looks like to get in.

In digital security systems, by contrast, we can do quite a bit better. Let's start by talking about a simple password system like you would use to log in to your bank (and like people used to use to log in to their computers back when they were multiuser). The way this works is that you type your password into your Web browser and it's sent over the Intertubes (hopefully encrypted with SSL) to the server on the other side, which needs to check it. The easiest way to do this is to have the server just store a copy of the password locally and do a memory comparison.

This has a big problem3. If someone breaks into the server and gets a copy of the password list, they get a copy of everyone's password and can impersonate users. This is what's called a password equivalent or a key equivalent, for reasons that will become clear a little later. This lets them leverage a disclosure exploit (i.e., one that lets them read files on a system) into an intrusion exploit (i.e., one that lets them break in or pose as another user). It also means that the password file has to be stored with very strict permissions. The fix for this problem is well known. You don't store the password itself but rather you store a one-way function (originally computed with DES but now typically with a hash function) of the password. Call this H(password). When the user provides their password you compute H(password) and compare it to the stored value. If they match, the user is in. This scheme has the advantage that compromise of the password file is much less dangerous. In fact, on old Unix systems password files used to be publicly readable, until it became clear that you could simply try a bunch of candidate passwords until you got a hash that matched (this is called a dictionary search), at which point we went back to hiding the passwords. Even so, a dictionary search is a lot harder than just reading the passwords off the disk.
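Here's a minimal sketch of the hashed-password scheme and the dictionary search against it. The usernames, SHA-256, and the candidate list are all illustrative; real systems also salt and iterate the hash to slow the search down:

```python
import hashlib

def H(password: str) -> str:
    # One-way function of the password (a hash function today).
    return hashlib.sha256(password.encode()).hexdigest()

# The server stores only H(password), never the password itself.
password_file = {"ekr": H("letmein")}

def check(user: str, candidate: str) -> bool:
    return password_file.get(user) == H(candidate)

assert check("ekr", "letmein")
assert not check("ekr", "wrong")

# A dictionary search: hash candidate passwords until one matches the
# stored value. This is why world-readable password files went away.
recovered = next(g for g in ["123456", "password", "letmein"]
                 if H(g) == password_file["ekr"])
assert recovered == "letmein"
```

Stealing the file no longer hands over the passwords directly, but a weak password still falls to the candidate loop in short order.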

Even with this fix, simple passwords have the big problem that if you can convince the user to authenticate to you just once then you know their password (it doesn't help here that users tend to use the same password on multiple sites). This is the basis of both (pre-SSL) password sniffing attacks and of phishing. So, the state we have now is that we can make examining the lock basically useless (as long as people choose really strong passwords), but since authenticating requires presenting a copy of the key, if you can examine the key (e.g., by impersonating the lock) you can impersonate the user as much as you want. This is the state of nearly all Web-based login systems today, but it can be improved upon quite a bit by some cryptography. I'll get to that next.

1. By contrast, the major security feature on "do not duplicate" keys is often the stamp that says "DO NOT DUPLICATE" (the capital letters are what make it mandatory2). Sometimes, but not always, the blanks are restricted, but obviously the stamp has nothing to do with that.

2. In this document, the keywords "MUST", "MUST NOT", "REQUIRED", "SHOULD", "SHOULD NOT", and "MAY" are to be interpreted as described in RFC 2119.

3. Note to advanced readers, don't bother me about timing analysis. I'll try to write that up later.

UPDATE: In the comments, Chris Byrd reminds me that someone actually has copied a Diebold key from a picture on a web site. I haven't seen the relevant picture, but I suspect it's a lot better than your average TV shot, which tends to be taken from a funny angle and be fairly low resolution.


April 4, 2007

Hillary Clinton is pushing some sort of rural broadband service plan:

The Rural Broadband Initiatives Act. This legislation will extend and improve access to broadband services in small towns across America. It creates a policy and action framework to ensure that the federal government employs an effective and comprehensive strategy to deploy broadband service and access in the rural areas of the United States. The bill will also establish a Rural Broadband Innovation fund to explore and develop cutting edge broadband delivery technologies to reach underserved rural areas. The Rural Broadband Initiatives Act has been endorsed by the Communications Workers of America.

Speaking as someone who suffered with ISDN for years and just cut over to (my only real option) Comcast "business" service at $120/month (a significant savings) in order to get decent speed and some static IP addresses, I've got just one question: "Palo Alto is rural, right?"


April 3, 2007

This is interesting. Aircell is going to offer airborne WiFi:
AirCell paid $31.3 million at an FCC auction last year to take over radio frequency once used for expensive air-phone service and reallocate it to Internet and cellphone service. The Internet service already has the approval of both the FCC and the Federal Aviation Administration. Mr. Blumenstein says AirCell, a closely held Colorado company that provides communications for private jets, is building out its network of 80 to 100 ground towers and talking to multiple airlines. No customers have been named yet.

"It can't happen soon enough," said Henry Harteveldt, a travel technology analyst at Forrester Research Inc.

AirCell will install equipment on airliners that will act as a WiFi hotspot in the cabin and connect to laptop computers and devices like BlackBerrys that have WiFi chips. In all, it will cost about $100,000 to outfit a plane with less than 100 pounds of equipment, and the work can be done overnight by airline maintenance workers, AirCell says.

What makes the service particularly attractive to airlines is that they will share revenue with AirCell. The service will cost about the same as existing WiFi offerings. Mr. Blumenstein says it will charge no more than $10 a day to passengers. It will also offer discounted options for customers and tie into existing service programs like T-Mobile, iPass and Boingo. Speeds will be equivalent to WiFi service on the ground.

At some level this is super-convenient and $10/day is pretty good. I've certainly had plenty of times when I was on the plane and realized I'd forgotten some file and wished I had Internet access. Even lousy Internet access would be pretty convenient in such cases. On the other hand, one of the nice things about being on a plane is that it forces you to actually work on whatever it is you're supposed to be working on rather than surfing the net. AirCell says they're planning to block VoIP service (it would be sort of interesting to hear exactly how they're going to do that...) but of course there is plenty of interest in cell service, especially in Europe:

OnAir and AeroMobile both install "pico cell" receivers on planes that connect to cellular phones, allowing them to operate at low power to minimize technical problems. The pico cell then routes calls to cellular networks through a satellite link.

Only about 14 calls or fewer can be successfully made at a time per flight, and airline crews can turn the system off during takeoff and landing. If you make the 15th call, you'll get some kind of indication of "no service."

Apparently they're going with a circuit-switched model with admission control, which is pretty old-school PSTN. If people were using Skype, they'd get a quite different experience: as the network got more loaded, people would just get worse call quality (dropouts, etc.), but nobody would ever be told "no". This is of course the way that traditional telephony networks work, but as far as I know there's no technical reason you couldn't offer a packet-switched service that interfaced to the airplane picocell and then bridged back to the GSM network (though you'd need to do something more sophisticated than just passing the media packets back and forth: silence suppression, etc.). That's probably primarily a cultural issue: the providers seem fairly closely tied to the cell phone providers, who are big on this kind of reserved-bandwidth system (typically under the name of quality of service).
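The contrast between the two models can be caricatured in a few lines. The 14-call cap comes from the article; the bandwidth figure is purely illustrative, not anything AirCell has published:

```python
MAX_CALLS = 14  # per-flight cap quoted in the article

def circuit_switched(callers: int) -> list:
    # Admission control: each caller is either connected at full
    # quality or refused outright ("no service" for caller 15+).
    return ["connected" if i < MAX_CALLS else "no service"
            for i in range(callers)]

def packet_switched(callers: int, bandwidth_kbps: float = 280.0) -> list:
    # No admission control: everyone gets on, and each call's share
    # of the bandwidth (hence its quality) shrinks as load grows.
    return [bandwidth_kbps / callers] * callers

assert circuit_switched(15)[-1] == "no service"
assert packet_switched(15)[0] < packet_switched(14)[0]
```

Same total capacity either way; the difference is whether the system degrades by rejecting calls or by degrading all of them.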


April 2, 2007

Colorado's 9News reports on the TSA's Red Team tests of the screeners at DIA. They're not doing the most impressive job:
The Transportation Security Administration (TSA) screeners failed most of the covert tests because of human error, sources told 9NEWS. Alarms went off on the machines, but sources said screeners violated TSA standard operating procedures and did not hand-search suspicious luggage, wand, or pat down the undercover agents.

The Red Team uses very expensive chemical simulates in the test devices that look, smell and taste like real explosives, except they do not explode. To the CTX bomb detection machines at DIA, they are real explosives, according to a former Red Team leader.

Sources told 9NEWS the Red Team was able to sneak about 90 percent of simulated weapons past checkpoint screeners in Denver. In the baggage area, screeners caught one explosive device that was packed in a suitcase. However later, screeners in the baggage area missed a book bomb, according to sources.

Of course the TSA says this test is unrepresentative, but that this kind of result should be kept secret:

Morris says other agents, not with the Red Team, test and train screeners every day at the nation's 450 airports and says screeners pass most of those tests. In those kinds of tests, he said Denver has done well in the past.

However, tests done by the Department of Homeland Security's Office of Inspector General and the U.S. Government Accountability Office in 2006 found widespread failures. According to the GAO, screeners at 15 airports missed 90 percent of the explosives and guns agents tried to sneak past checkpoints.


Most test results, including results from the Red Team, are secret, classified as SSI or sensitive security information. Morris says they do not make them public because they could point out holes in the system.

So, there are two types of information here: the first is that the security screening has an incredibly high false negative rate. The second is the specific things you could do to get past security. It's certainly true that knowing specific ways to exploit security would be useful to terrorists, but it's not clear that the Red Team did anything particularly surprising or sophisticated here. In one of the tests, the agent appears to merely have outbluffed the screeners.

Now, one could argue that the mere fact that screening is so inaccurate is in and of itself useful to the terrorists, since this makes airports a more attractive target. But then this is hardly secret information. First, GAO tests showing very similar results have already been published. Second, all you have to do is know how the screening technology works (and that's no secret) and watch how screening is performed to know that it's not going to work that well. On the other hand, it's perfectly clear why TSA would wish to keep such embarrassing information secret.


April 1, 2007

Apparently Google Earth has replaced the area photography of post-Katrina New Orleans with pre-Katrina images.
According to the GEC and my sources at Google, the imagery for New Orleans was actually changed last September. The previous imagery was directly after the storm struck, and was of inferior quality. Although the imagery of New Orleans is from pre-Katrina now, it is of better quality. If you have the Plus or Pro version of Google Earth you have the option to load two sets of post-Katrina imagery by logging out of the primary database. I think Google should consider getting more recent high quality imagery for New Orleans so it at least represents the present condition.

Apparently, Google selected a new set of high resolution photos for New Orleans. The only problem is that the new images are pre-Hurricane Katrina. So, all the damage that was caused by Katrina has now been erased in the Google Earth/Maps imagery database. CBS News says this move has sparked outrage and conspiracy theories in New Orleans. Ironically, the people in New Orleans have been some of the biggest fans of Google Earth as it helped save lives during and after the disaster. And, up until the recent update, residents used the pictures to illustrate damage to insurance adjusters, and to plan reconstruction efforts. Some of the conspiracies are that the local government itself requested the change to try and encourage tourism to come back to New Orleans.

Obviously, until the day that real-time satellite imagery is ubiquitous (probably not as far away as you'd think!) there's going to be some tension between image quality and timeliness: is a timely but fuzzy image better or worse than a crisp but out-of-date image? While the answer does seem kind of obvious in this case and in other cases where the changes are dramatic and well-known, what about when the freeway on-ramp from my house is blocked this morning but the best images are from last week? It's not entirely clear to me that the modern fuzzy imagery is the right answer.

Current mapping and nav systems deal with this by treating maps as static and then overlaying meta-information (e.g., traffic, your directions), on top of the map. But if you had accurate remote imaging it might be more appropriate to simply display that—or maybe not. I certainly find it a lot easier to read traffic by seeing car density (and speed of motion) than the green and red lines on the Yahoo map displays, but there might be a display technique that would be easier yet. After all, maps are typically easier to get directions off than aerial imagery.