Security: Airport: December 2007 Archives


December 28, 2007

Schneier notes the TSA's new rules about lithium ion batteries. Here's their overall policy:
The following quantity limits apply to both your spare and installed batteries. The limits are expressed in grams of “equivalent lithium content.” 8 grams of equivalent lithium content is approximately 100 watt-hours. 25 grams is approximately 300 watt-hours:
  • Under the new rules, you can bring batteries with up to 8-gram equivalent lithium content. All lithium ion batteries in cell phones are below 8 gram equivalent lithium content. Nearly all laptop computers also are below this quantity threshold.
  • You can also bring up to two spare batteries with an aggregate equivalent lithium content of up to 25 grams, in addition to any batteries that fall below the 8-gram threshold. Examples of two types of lithium ion batteries with equivalent lithium content over 8 grams but below 25 are shown below.
  • For a lithium metal battery, whether installed in a device or carried as a spare, the limit on lithium content is 2 grams of lithium metal per battery. Almost all consumer-type lithium metal batteries are below 2 grams of lithium metal. But if you are unsure, contact the manufacturer!

This seems like it will be a lot of fun. I'm really looking forward to watching TSA reps try to figure out whether a given device has over 8 grams of equivalent lithium in it, let alone trying to add up the watt-hours in various devices to decide if they are over 300 (note that 8 grams is claimed to be about 100 watt-hours, so what about 302 watt-hours, which is over 300 but probably corresponds to less than 25 grams?). This "contact the manufacturer" thing is pretty nuts. TSA needs to have a list to decide what they want to accept. Why don't they just publish it?
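To see the ambiguity concretely, here's a quick sanity check using only the TSA's own stated conversion factors (the per-gram figures below are just the 100 Wh ≈ 8 g and 300 Wh ≈ 25 g ratios from the policy, nothing more authoritative):

```python
# TSA's stated conversions: 8 g equivalent lithium ~= 100 Wh,
# and 25 g ~= 300 Wh. Note they imply slightly different ratios.
WH_PER_GRAM_8G = 100 / 8     # 12.5 Wh per gram
WH_PER_GRAM_25G = 300 / 25   # 12.0 Wh per gram

def grams_equivalent(watt_hours, wh_per_gram=WH_PER_GRAM_8G):
    """Back out equivalent lithium content from a watt-hour rating."""
    return watt_hours / wh_per_gram

# A 302 Wh battery is over the "approximately 300 watt-hours" figure,
# but under the actual 25 g limit by the 8 g conversion:
print(grams_equivalent(302))  # ~24.16 g
```

So depending on which of the TSA's own approximations a screener applies, the same battery is either over or under the line.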

Another thing that's weird is that you can't have spare batteries in your checked luggage, but you are allowed to have such batteries installed in your devices. I'm sure my laptop will contain any fires or explosions. Outstanding!


December 27, 2007

Linos, Linos, and Colditz's BMJ paper on airport screening is getting a lot of attention. They write:
We systematically reviewed the literature on airport security screening tools. A systematic search of PubMed, Embase, ISI Web of Science, Lexis, Nexis, JSTOR, and Academic Search Premier (EBSCOhost) found no comprehensive studies that evaluated the effectiveness of x ray screening of passengers or hand luggage, screening with metal detectors, or screening to detect explosives. When research teams requested such information from the US Transportation Security Administration they were told that evaluating new screening programmes might be useful, but it was overshadowed by "time pressures to implement needed security measures quickly."16 In addition, we noticed that new airport screening protocols were implemented immediately after news reports of terror threats (fig 1).

It's unsurprising that there are no real studies on this topic, but it's not at all clear that even if we wanted to do such studies, it would be practical, or even possible. The authors suggest a controlled trial of screening effectiveness at detecting specific types of attacks:

After informing the airport managers, gaining approval from research ethics committees and police, and registering our trial with one of the acceptable International Committee of Medical Journal Editors trial registries, we would select passengers at random at the check-in desks and give each traveller a small wrapped package to put in their carry-on bags. (We would do this after they have answered the question about anyone interfering with their luggage.) A total of 600 passengers would be randomised to receive a package, containing a 200 ml bottle of a non-explosive liquid, a knife, or a bag of sand of similar weight (control package) in a 1:1:1 ratio. Investigators and passengers would be blinded to the contents of the package. Our undercover investigators would measure how long it takes to get through security queues and record how many of the tagged customers are stopped and how many get through. A passenger who is stopped and asked to open the wrapped box would be classed as a positive test result, and any unopened boxes would be considered a negative test result.
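The 1:1:1 assignment they describe is straightforward to sketch. Here's one way it might be done (the paper doesn't specify an assignment mechanism; the block-randomization approach and labels below are my own illustration):

```python
import random

PACKAGES = ["liquid", "knife", "sand (control)"]

def randomize(n_passengers=600, seed=None):
    """Assign wrapped packages to passengers in a fixed 1:1:1 ratio.

    Builds an equal-sized pool for each arm, then shuffles, so the
    final counts are exactly balanced rather than merely expected
    to be balanced.
    """
    rng = random.Random(seed)
    per_arm = n_passengers // len(PACKAGES)
    assignments = PACKAGES * per_arm
    rng.shuffle(assignments)
    return assignments

arms = randomize(600, seed=1)
print({p: arms.count(p) for p in PACKAGES})  # 200 per arm
```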

This study design seems problematic as a measure for screening effectiveness. Security screening is fundamentally different from screening for diseases because disease screening isn't adversarial.

To take the simplest case, consider genetic diseases. When you screen for Tay-Sachs, the Tay-Sachs gene isn't trying to figure out how to evade your screen. Even in cases like cystic fibrosis where there are genotypes which produce pathology but aren't detectable with standard screening methods (the basic CF screen only detects 80% of mutations) there's not selective pressure for the undetectable genotype, just pressure against the detectable ones. The undetectable genotypes don't increase in the population.

To take a slightly more complicated case, consider non-genetic diseases, which do evolve. HIV, for instance, regularly evolves resistance to the antiretrovirals we use to treat it. [Warning, I'm working from general principles here. If there are cases of evolved resistance to screening, I'd love to hear about them.] Screening is a different case, though, for at least two reasons. First, HIV drug resistance arises to a great extent from selective pressure among the genotypes present in a given patient: treating that patient with antiretrovirals selects against the susceptible genotypes, so you end up with a much higher fraction of resistant genotypes within the patient. But when you're doing screening, any nontrivial fraction of detectable organisms leads to a positive result and (presumably) treatment, so you don't get as much selective pressure between the detectable and undetectable variants. Second, viruses and bacteria aren't intelligently trying to evade your screening, so even if some stealth did evolve, you would likely have plenty of time to adapt and test your screening technology.

By contrast, in the case of airline screening, you have an intelligent attacker with a very short reaction cycle, so as soon as they know what kind of screening you are using they can move to evade it. Also, you don't need each attacker to independently evolve defenses—as soon as someone figures out a defense technique, they can tell a lot of other attackers about it. (This is also why signature-based virus detection is such a hard problem with relatively high false negative rates). This makes the problem of evaluating whether a given set of screening techniques work as the authors propose very problematic. By the time you've done your effectiveness study, it's already obsolete.

More importantly, this study design sort of confuses a technique (stopping people from bringing weapons through the security checkpoint) with the goal (stopping people from blowing up airplanes). But of course these aren't the same thing. For instance, you could jump the fence and smuggle explosives into the sterile area. So, the question you really want to ask is whether airport security decreases the chance of planes being bombed. In order to do this, you need a different study design: one which compares various security regimes in terms of the number of terrorist attacks that occur under them. This is a much harder study to do, for a number of reasons.

First, you have the "outrun the bear" problem. Say that you have both good and bad security and terrorists preferentially attack airports with bad security. This doesn't necessarily tell you that if everyone adopted good security you would see fewer attacks. The terrorists might just be lazy enough to choose the softer targets, but would mount attacks on harder targets anyway—this is a variant of the adaptiveness problem. We just don't understand the supply model that well.

Second, ignoring this problem, it's not clear we have enough data to do a meaningful study, because the number of terrorist attacks is so low. Remember that there have been no successful US airline hijackings or bombings since September 11th 2001, so if you'd run a study of this type starting in 2002, you would not be able to reject the null hypothesis that good airline security (assuming, as seems likely, that there's existing variation in screening quality) was useless. We just don't know whether the reason we haven't had any attacks in over five years is because of good security or because people aren't trying, and you'd need a lot more data to get a significant result.
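To put a rough number on how little zero observed attacks tells you, here's a back-of-the-envelope calculation under a Poisson attack model (the model and the example rate are my assumptions, not anything from the paper):

```python
import math

def prob_zero_events(rate_per_year, years):
    """P(observing zero attacks) under a Poisson model with the given rate."""
    return math.exp(-rate_per_year * years)

# Even if the true rate were one attempted attack every two years,
# five attack-free years would not be terribly surprising:
print(round(prob_zero_events(0.5, 5), 3))  # ~0.082
```

Since that probability is above the conventional 0.05 threshold, five clean years can't even rule out a one-attack-every-two-years world at the usual significance level, let alone distinguish good screening from bad.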

Given these issues, it's pretty hard to imagine what kind of study would settle these questions. That's not to say that I think the current flavor of airport security is useful, just that the lack of studies showing that it works isn't that meaningful a criticism.