December 2009 Archives

 

December 31, 2009

This decade retrospective post is in conformance with Section 123(a)(1)(j)(ii)(c) of the American Recovery and Reinvestment Act of 2009.

During this decade, I had the opportunity to use many great fasteners, but in my opinion the best of these was the 10-24 rack mount screw—Allen head, of course, superior to the #2 Phillips (too finicky), and the Robertson (too Canadian). Other excellent choices include the zip tie, 5 minute epoxy, and duct tape.

 
I'm probably late to the party here but I wanted to make note of the NYT's recent article on water safety. (þ Melanie Schoenberg). While there's certainly some stuff here one might be distressed about, the article is written in such a way that it's pretty hard to evaluate how serious the issue actually is.

The article seems to make three major factual claims:

  • The Safe Drinking Water Act only regulates a small fraction of the potentially hazardous chemicals found in drinking water.
  • Many municipal water systems contain chemicals at levels which, while legal, may be unsafe (e.g., are above EPA safe levels).
  • People are getting sick from this.
I don't doubt that the first of these is true: according to the article, 60,000-plus chemicals are used within the US (I'm actually surprised it's this low, since the PDR has over 4000 drugs and MSDS.COM claims to have 3.5 million data sheets), and it's not clear how you would plausibly analyze all of these, let alone determine permissible levels for each. I'm not saying this is desirable, but it's not necessarily a disaster either. Ultimately, you can have either a "default accept" or a "default deny" policy here; given how sensitive modern analytic techniques are, if your policy is "default deny" you're going to spend a lot of time removing trace concentrations of harmless chemicals from your water supply. On the other hand, if it's "default accept" you're going to end up with a lot of chemicals in your water that you don't really know are safe.

Given the first point, the second isn't surprising either. With that said, I'm not sure that the Times is really representing the situation that accurately. For instance, here's the report for Palo Alto, where I live. The Times reports "1 contaminant below legal limits, but above health guidelines", with the contaminant being alpha particle activity at a mean rate of 4.56 pCi/L. Let's see if we can put this in perspective. Assume humans are made entirely of water and rescale into kg, so we have about 4.6e-12 Ci/kg of human body mass. A Ci is 3.7e+10 disintegrations/s, so multiplying out we have .17 disintegrations/kg/s. If we assume that all the alpha particles are from U-238, and the alpha particles are being emitted at 4.270 MeV (~ 7e-13 J), then we get about 1.2e-13 J/kg/s. If we assume that all of these are absorbed (not crazy since alpha particles have a very short path in the body) then we're getting 1.2e-13 Grays/s, or 2.3e-12 Sv/s (multiply by the Q factor of 20 for alpha particles), or about .07 mSv/year. For comparison, the background level of radiation is 2.4 mSv/year. Obviously this isn't something you should be that thrilled about, but it's not clear to me that a ~3% increase in your radiation dose is that bad either.
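For anyone who wants to check the arithmetic, here's the same back-of-the-envelope calculation in a few lines of Python, under the same generous assumptions: a body made entirely of water, all activity from U-238 alphas, and every alpha absorbed. With unrounded constants it lands around 0.07 mSv/year, i.e., roughly 3% of the 2.4 mSv/year background.

```python
# Back-of-the-envelope dose estimate for 4.56 pCi/L alpha activity,
# assuming (as in the text) a body made entirely of water, all alphas
# from U-238, and every alpha absorbed in the body.

CI_TO_BQ = 3.7e10            # disintegrations per second per curie
EV_TO_J = 1.602e-19          # joules per electron-volt
SECONDS_PER_YEAR = 3.156e7

activity_ci_per_kg = 4.56e-12            # 4.56 pCi/L; 1 L of water ~ 1 kg
alpha_energy_j = 4.270e6 * EV_TO_J       # U-238 alpha at 4.270 MeV

decays_per_kg_s = activity_ci_per_kg * CI_TO_BQ    # ~0.17 /kg/s
dose_gy_per_s = decays_per_kg_s * alpha_energy_j   # ~1.2e-13 Gy/s
dose_sv_per_s = dose_gy_per_s * 20                 # Q factor for alphas
dose_msv_per_year = dose_sv_per_s * SECONDS_PER_YEAR * 1000

print(f"{dose_msv_per_year:.3f} mSv/year")
print(f"{100 * dose_msv_per_year / 2.4:.1f}% of background (2.4 mSv/year)")
```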

Given that, why does the NYT list this as above the health level? The answer seems to be that their safe value for alpha particles is zero (the legal limit is 15 pCi/L): the maximum level of alpha particle activity in neighboring Mountain View is 2.56 pCi/L, but it's still listed as having 5 "above health" samples (Chicago had one reading of .88 pCi/L and is also listed as a positive). This all makes me wonder if something is wrong here and the NYT is showing false positives. Of course, when you're processing a lot of data it's easy to make mistakes—assuming this is a mistake. It could be that I'm confused or that it's just the alpha particle threshold that's too low. I e-mailed the Times to ask them for a copy of the raw data, but I haven't heard anything yet.

This brings us to the final point: the Times writes:

All told, more than 62 million Americans have been exposed since 2004 to drinking water that did not meet at least one commonly used government health guideline intended to help protect people from cancer or serious disease, according to an analysis by The Times of more than 19 million drinking-water test results from the District of Columbia and the 45 states that made data available.

...

And independent studies in such journals as Reviews of Environmental Contamination and Toxicology; Environmental Health Perspectives; American Journal of Public Health; and Archives of Environmental and Occupational Health, as well as reports published by the National Academy of Sciences, suggest that millions of Americans become sick each year from drinking contaminated water, with maladies from upset stomachs to cancer and birth defects.

This seems to conflate a bunch of issues. There seems to be a lot of variance in the data, with some tests showing positive results and some negative results (or low levels) for the same toxin even in the same area. It's very different to drink water with a toxin in it once than it is to drink it every day for 10 years. I spent a couple days in Boston in 2007, but I'm not overly concerned about the fact that I might have been exposed to twice the legal limit of haloacetic acids in the two to four liters of water I drank while I was there. More generally, while one positive test may qualify as an exposure, it's not clear what that means as far as the real level of risk people are incurring. And of course there's a difference between cumulative toxins (e.g., arsenic) and acute toxins (e.g., E. coli). Speaking of E. coli, "maladies from upset stomachs to cancer and birth defects" covers a lot of territory; it's one thing if a sewer system occasionally fails to remove all the bacteria from the water supply (not that that's good) and another if it delivers hot and cold running cyanide from the tap.

Obviously, when you read this article you're supposed to be scared, but the way the article is written (and the opaque data presentation) doesn't make me feel like I have enough data to know if I should be or not.

P.S. San Francisco really does have great water. Almost good enough to make up for destroying Hetch Hetchy.

 

December 30, 2009

I flew back from Soviet Canuckistan last night and got to experience the new security measures firsthand. The high order bit is that nearly all carry-on baggage is banned. They make exceptions for a few things like women's purses, medicine, baby stuff, cameras, and laptops (allegedly no chargers but we saw exceptions) but even then you can't carry them in a significant bag: the security lines were full of people carrying their naked laptops. Luckily, Mrs. Guesswork was carrying some stuffable cloth bags which we were able to use for our laptops, paperwork, a book, etc. My co-worker Derek wasn't as lucky, but the airline customer service rep did provide him with a substitute:

After you've checked all your valuable stuff, you get to go through security. The magnetometer and the bag x-ray are the same, but once you get through that, they hand-search all your stuff as well as giving you an extremely thorough pat-down, said pat-down extending to going through your wallet, presumably in order to verify that your money won't explode. All this was still quite a bit slower than the ordinary security screening, however. As reported previously, the FAs required you to stay in your seat for the last hour of the flight, but didn't try to stop you from having what remained of your stuff in your lap during that time.

As usual, TSA is being pretty uncommunicative about the rationale for the new restrictions. My impression based on Transport Canada's statement is that TSA required a whole bunch of new security restrictions including the hand searches and pat downs and that this created really long wait times at Canadian airports. So while restricting carry-on doesn't serve any real security purpose it does reduce the amount of searching that has to be done and therefore somewhat ameliorates the waiting time problem.

Obviously, keeping you in your seat for the last hour of the flight is pretty pointless. Even if terrorists can't blow themselves up from their seats, nothing stops them from detonating a bomb 61 minutes before landing. This just seems like fighting the last war.

On the other hand, doing really extensive searches of people probably does add some security value. This isn't to say that there's no way for someone to smuggle explosives onto the plane with the current level of screening, but this presumably does increase the required level of sophistication. On the other hand, it's a huge hassle for travelers—I never travel with checked luggage if I can avoid it, but the new restrictions more or less require you to check bags. As I said earlier, the cost/benefit analysis hasn't really changed since before the attempted attack. If it wasn't worth doing this level of searching a month ago, it isn't worth doing it now just because we're freaked out that someone finally tried the attack we knew would eventually come. And if it is worth doing now, then it was worth doing before so why weren't we doing it?

I can't see any reason to have different levels of screening for domestic and international flights. It's not like it's that much easier to lay your hands on explosives in Canada or Europe than in the US, so what stops a terrorist from flying to the US without any weapons or anything, getting explosives and then boarding a plane in the US? The added security is particularly silly on flights which originate in Vancouver and Toronto; ordinarily you clear customs and immigration in the US, so at least in theory terrorists might board the plane in say Frankfurt and not be apprehended until they arrive in San Francisco, at which point it's too late (of course, if the no-fly list actually worked, this would be less of an issue, but since it's actually pretty lame...). However, in many Canadian airports, including YVR and YYZ, you clear immigration and customs in Canada (and this is done by US border officers, so there's no concern about not trusting foreigners) and when you land you just get off the plane. For flights from those airports, there's no meaningful distinction between domestic and international flights even if there would have been otherwise.

Ideally, in a week or two the panic response will die down, TSA will relax their restrictions and we'll go back to when we thought just having to take your shoes off was annoying. Reading the tea leaves, though (see, for instance, William Saletan's post here), I suspect that instead this will accelerate the deployment of whole body scanners as an alternative to the pat-downs. Ironically, Wikipedia reports that the first airport deployment of whole body scanners was in Schiphol, the airport where Umar Abdulmutallab (thanks to Wikipedia for the name) boarded; it would be interesting to know if he went through the scanners. Of course whole-body scanners don't let you scan carry-on luggage any faster, so it's hard to see how anything other than a lower level of paranoia will improve that.

 

December 27, 2009

Since some clown from Nigeria decided to try to blow up a 777, apparently the TSA has decided to give us some new security procedures. They're sooper secret, but apparently pretty cool:

TSA has a layered approach to security that allows us to surge resources as needed on a daily basis. We have the ability to quickly implement additional screening measures including explosive detection canine teams, law enforcement officers, gate screening, behavior detection and other measures both seen and unseen. Passengers should not expect to see the same thing at every airport.

Anyway, the new rules appear to apply to international flights into the US and include secondary screening for everyone, requiring passengers to stay in their seats for the final hour of the flight without any carry-on items in their laps, including laptops, pillows, and blankets. The other major restriction limits you to one carry-on bag. (There are rumors of a no-electronics policy, but that seems to be only sporadically enforced.) I just saw a report on Canadian TV about how much this is slowing things down in Canadian airports and I'm looking forward to experiencing it myself on Tuesday.

At least for me, it's pretty hard to see any rational connection between these restrictions and security (see here for the thread on the TSA blog where commenters express frustration and TSA doesn't even confirm that these restrictions are policy, let alone defend them). Certainly, if you were carrying a bomb you could set it off at any point during the flight. In fact, it's not clear to me that there is anything special about the last hour, except that I guess it's more likely to be over the US, for whatever that's worth. As for limiting you to one carry-on, I suppose that's designed to minimize the number of bags they have to screen.

More to the point, it's not clear that any new security measures are required. Eventually someone was bound to try to blow up a bomb on a plane and someone eventually did. It's not like we didn't know that you could carry plastic explosive on your body through the magnetometer, so what exactly has changed that merits reassessing the method of screening, let alone the screening effectiveness/inconvenience tradeoff? I suppose one could argue that maybe this attack is potentially part of a coordinated effort and thus tightened security efforts are temporarily appropriate while we investigate if he had any collaborators, but if that's true at some point TSA should revert to their previous policies. I don't see any reason to keep them at this level indefinitely.

 

December 20, 2009

This is seriously not good. It turns out that both military aircraft and drones transmit unencrypted video feeds of their activities:

How'd the militants manage to get access to such secret data? Basically by pointing satellite dishes up, and waiting for the drone feeds to pour in. According to the Journal, militants have exploited a weakness: The data links between the drone and the ground control station were never encrypted. Which meant that pretty much anyone could tap into the overhead surveillance that many commanders feel is America's most important advantage in its two wars. Pretty much anyone could intercept the feeds of the drones that are the focal point for the secret U.S. war in Pakistan.

Using cheap, downloadable programs like SkyGrabber, militants were apparently able to watch and record the video feed - and potentially be tipped off when U.S. and coalition forces are stalking them. The $26 software was originally designed to let users download movies and songs off of the internet. Turns out, the program lets you nab Predator drone feeds just as easily as pirated copies of The Hangover.

And here's the real scandal: Military officials have known about this potential vulnerability since the Bosnia campaign. That was over 10 years ago. And, as Declan McCullagh observes, there have been a series of government reports warning of the problem since then. But the Pentagon assumed that their adversaries in the Middle East and Central Asia wouldn't have the smarts to tap into the communications link. That's despite presentations like this 1996 doozy from Air Combat Command, which noted that "the Predator UAV is designed to operate with unencrypted data links."

...

Meanwhile, military officials are scrambling to plug the hole. "The difficulty, officials said, is that adding encryption to a network that is more than a decade old involves more than placing a new piece of equipment on individual drones," the Journal notes. "Instead, many components of the network linking the drones to their operators in the U.S., Afghanistan or Pakistan have to be upgraded to handle the changes."

So, obviously this isn't the best design anyone has ever heard of. It would be interesting to ask whether the control channels used to send commands to the drones are similarly unprotected.

In any case, there are two major technical obstacles to adding encryption to a system like this. The first is key management, as mentioned in the first linked article; you need to somehow get keys to the relevant people. The second is the problem of sending encrypted data around, as mentioned in the last graf.

I'm not overly worried about the need to upgrade individual network elements in between the drones and the operators: unless those elements actually process the data instead of passing it along, they should be relatively indifferent to whether the video is encrypted or not. I can imagine a couple of types of processing that would cause problems. For instance, if the intermediate elements compress the data with some lossy compression algorithm, this will interact badly with encrypted data, which is not only incompressible but also extremely sensitive to any damage. But if they're just relaying the data (which seems likely given that this seems to be all built with commodity protocols), that seems unlikely to cause any problems. It's not like all the routers on the Internet need to be upgraded whenever you get a new version of your Web browser.
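To make the compression point concrete, here's a toy demonstration (nothing to do with any actual military system): well-encrypted data is statistically indistinguishable from random bytes, so a compressor downstream of encryption gains nothing, whereas redundant plaintext shrinks dramatically. I'm using `os.urandom` as a stand-in for ciphertext.

```python
# Why compressing after encryption fails: ciphertext looks like random
# bytes, and random bytes don't compress. os.urandom stands in for
# well-encrypted data here.
import os
import zlib

plaintext = b"telemetry frame " * 1000   # highly redundant, like raw video
ciphertext = os.urandom(len(plaintext))  # stand-in for encrypted data

print(len(zlib.compress(plaintext)))     # tiny: redundancy compresses away
print(len(zlib.compress(ciphertext)))    # roughly as big as the input
```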

As usual, the key management problem is more serious, as suggested by this paragraph:

"Can these feeds be encrypted with 99.5 percent chance of no compromise? Absolutely! Can you guarantee that all the encryption keys make it down to the lowest levels in the Army or USMC [United States Marine Corps]? No way," adds a second Air Force officer, familiar with the ROVER issue. "Do they trust their soldiers/Marines with these encryption keys? Don't know that."

As there are no encryption keys at all in the current environment, it's hard to see how the situation could get any worse by giving them to every marine in the field, but it's understandable that one would want to do a little better. In this case, we actually have two kinds of capabilities to deal with: those required to view the video feed and (in the case of drones) those required to remotely control them. These aren't necessarily going to be issued to the same people: you may want soldiers in the field to be able to view the video feed from the drones, but fun as it might be, there's no real reason to let them pilot the thing. Since only authorized pilots are likely to be allowed to operate the drone, key management here seems pretty simple: just have a key manually shared between the operator and the drone.
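As a sketch of what a manually shared key buys you on the control channel, here's a toy example that authenticates each command with an HMAC under a pre-shared key; the sequence number gives crude replay protection. This is purely illustrative, under my own assumptions: a real system would also need encryption and a proper protocol, and none of this reflects how actual drone links work.

```python
# Toy sketch: authenticating a command channel with a pre-shared key.
# Each command carries an HMAC over (sequence number, command), so a
# drone can reject commands from anyone without the key, and the
# sequence number prevents naive replay. Illustration only.
import hashlib
import hmac
import os

shared_key = os.urandom(32)  # installed on both ends before the mission

def tag_command(key: bytes, seq: int, command: bytes) -> bytes:
    msg = seq.to_bytes(8, "big") + command
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_command(key: bytes, seq: int, command: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_command(key, seq, command), tag)

tag = tag_command(shared_key, 1, b"ORBIT WAYPOINT 7")
assert verify_command(shared_key, 1, b"ORBIT WAYPOINT 7", tag)
assert not verify_command(shared_key, 2, b"ORBIT WAYPOINT 7", tag)  # replay
```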

One-way video feeds to soldiers in the field require a slightly more sophisticated system, but it's not inherently complicated, as we can use the same schemes used for broadcast encryption: we have a key of the day (or the hour or whatever) and we use that to encrypt the video. Each device has its own key and we periodically broadcast the key of the day encrypted under each device key. If a device gets lost or stolen, we just stop encrypting under that key. This hasn't worked that well for commercial pay-TV encryption because it's easy for attackers to buy a decryption box and extract its key. Presumably soldiers in the field do a better job of keeping their viewing units in their possession and don't deliberately give them to the Taliban, and we can periodically verify that they still have them. And as noted above, it's not like any encryption system we deploy is going to make the system less secure, so it's not like it has to be perfect.
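Here's a toy sketch of that key-of-the-day scheme, with revocation by omission. The key "wrapping" is a throwaway XOR-with-HMAC-pad construction that exists only to show the key-management logic; it is not real cryptography and certainly not any fielded system.

```python
# Toy broadcast-encryption key management: video is encrypted under a
# "key of the day," which is periodically broadcast wrapped under each
# non-revoked device's key. Revoking a lost device = omitting it from
# the broadcast. The wrap() construction is illustrative only.
import hashlib
import hmac
import os

def wrap(device_key: bytes, day_key: bytes) -> bytes:
    # XOR the day key with an HMAC-derived pad (toy construction).
    pad = hmac.new(device_key, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(day_key, pad))

unwrap = wrap  # XOR with the same pad is its own inverse

device_keys = {f"unit-{i}": os.urandom(32) for i in range(4)}
revoked = {"unit-2"}  # this viewing unit was lost in the field

day_key = os.urandom(32)
broadcast = {uid: wrap(k, day_key)
             for uid, k in device_keys.items() if uid not in revoked}

# Surviving units recover the day key; the revoked unit gets nothing.
assert unwrap(device_keys["unit-0"], broadcast["unit-0"]) == day_key
assert "unit-2" not in broadcast
```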

Acknowledgement: Perry Metzger pointed this story out to me.

 

December 12, 2009

Terence Spies recently pointed me to the results of a survey on a variety of controversial philosophical issues. It's actually surprising how little consensus there is on some pretty straightforward questions:

Newcomb's problem: one box or two boxes? [* -- EKR]

Other 441 / 931 (47.3%)
Accept or lean toward: two boxes 292 / 931 (31.3%)
Accept or lean toward: one box 198 / 931 (21.2%)

... Teletransporter (new matter): survival or death?

Accept or lean toward: survival 337 / 931 (36.1%)
Other 304 / 931 (32.6%)
Accept or lean toward: death 290 / 931 (31.1%)

...

Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible? [* -- EKR]

Accept or lean toward: conceivable but not metaphysically possible 331 / 931 (35.5%)
Other 234 / 931 (25.1%)
Accept or lean toward: metaphysically possible 217 / 931 (23.3%)
Accept or lean toward: inconceivable 149 / 931 (16%)

One thing that surprises me is that quite a few more people (56.4%) accept physicalism of the mind than survival in the teletransporter scenario (36.1%). I'm not saying that there is a straight-line reduction from physicalism to survival, but you'd think they'd be pretty connected. In other news, 72.8% of philosophers accept or lean towards atheism.

Another odd feature of this survey is that the questions are deliberately sketchy unless you're familiar with the jargon (hence my links above). The survey authors explain this as follows:

The questions are phrased in a minimal way, in part because further clarification would usually be tendentious and would call for still further clarification in turn. Of course any philosopher can find ambiguity or other problems in such a question, so a number of "other" options are available. Nevertheless, we strongly encourage you to adopt the most natural interpretation of each question and to report an acceptance or a leaning toward one side or the other wherever possible.

For the record: my positions are two boxes, survival, and no idea.

 

December 7, 2009

Cryptome has posted the lawful intercept compliance policies of a bunch of ISPs and telephone providers. I haven't done more than skim these, but so far what's shocking seems to be how un-shocking they are. It's certainly no secret that network providers need to comply with law enforcement requests for lawful intercept, and the purpose of these guides seems to be to streamline the process by documenting what the LEA needs to provide in order to get an intercept. There seems to be some complaining on /. about the fact that these policies contain the amounts providers expect to be reimbursed for various kinds of activities, but given that (1) the providers do in fact have to comply with subpoenas and (2) CALEA provides for reimbursement, it's not like it's unreasonable for them to get reimbursed. At less than $100 per request, it's not like it's going to be a big revenue source for Yahoo.

A little more distressing is that none of the policies I looked at (remember I didn't study them that carefully) seem to explicitly say that they won't provide intercept services except when legally required to (this is not the same as when legally permitted to). That's something I think I would like my service provider to adopt as a policy, but I can't say I'm surprised that they haven't done so.

 

December 6, 2009

Check out this fascinating NYT article on the use of fake badges by New York City police officers (þ Emergent Chaos). The executive summary is that unlike other jurisdictions, the NYPD treats badges like they are made of gold:

In many other cities officers are allowed to have more than one badge, or do not get penalized for losing their badge if promptly reported.

"I remember asking in Miami, 'What happens if you lose a shield?' " said John F. Timoney, the departing chief of police there, who was a first deputy commissioner in New York. "They said, 'You get another one.' It's no big deal."

Mr. Timoney said that he never had a dupe, but that plenty of friends did. "They were so paranoid, they would get a dupe, then they would hide the original in a safe until they retired," he said.

...

Fake badges cause so much concern that when officers are promoted or retire and are required to turn in their shields, they must place them in a special mold at Police Headquarters to ensure that they fit. That's because most duplicates are purposely made slightly smaller to distinguish them from the original.

Metal badges, while an important symbol of authority, are a lousy method of actually establishing legitimate authority. I have no idea whatsoever what a legitimate NYPD badge looks like and I doubt you do either. Moreover, as this article establishes, it's relatively straightforward to make a fake that is mostly indistinguishable from the real thing (you noticed that the fake badges are deliberately different, right?). An identification card is a much better choice: they're probably not any harder to forge (though potentially you could use holograms and the like as anti-forgery measures), but they have the advantage of being biometrically tied to the holder, so if you do lose your card it can only be immediately used by someone who looks a lot like you, which is a lot better than use by anyone who picks it up.

Given that, other than tradition, fetishization of the badge, etc., it's not clear what the virtue of keeping a really tight rein on legitimate badges is. Indeed, if officers are so terrified of losing their real badges that they respond by getting fake badges, then the result may be that they take less care with them than if they were merely told to be careful, with minimal penalties. Moreover (as has often been observed about fake IDs), you've just created a real infrastructure for the production of legitimate-appearing badges. So, whereas ordinarily if someone wanted to impersonate a police officer they might need to buy a rare lost badge or find someone to do a custom job, now there are plenty of people set up to make high-quality duplicates.

Oh, I should mention that this passage reflects a rather odd theory of authenticity:

Called "dupes," these phony badges are often just a trifle smaller than real ones but otherwise completely authentic.

Marcel Duchamp, call your office.