Misc: October 2010 Archives


October 24, 2010

As you no doubt know, Wikileaks just dumped a whole pile of documents about the war in Iraq [the Guardian has good coverage here]. The big news story seems to be that the US military more-or-less ignored torture of detainees by the Iraqi military. This data dump has been answered by the usual denunciations of Wikileaks as having damaged national security. For instance, Chairman of the Joint Chiefs Mike Mullen tweets (yes, tweets!):
Another irresponsible posting of stolen classified documents by Wikileaks puts lives at risk and gives adversaries valuable information.

And of course, here is Geoff Morrell, the Pentagon Press Secretary:

"There are thousands of Iraqi names in these documents that have been compromised. 300 of whom we believe are particularly in danger and we have shared that information with our forces in Iraq for them to take prophylactic measures to protect them," Pentagon Press Secretary Geoff Morrell said Friday.

Assange's defense of the leaks is similarly predictable:

At a packed press conference held in a hotel in Central London Saturday, WikiLeaks founder Julian Assange declared, "This disclosure is about the truth. We hope to correct some of that attack on the truth that occurred before the war, during the war, and which has continued on since the war officially concluded." Added the tall, wan, Australian-accented Assange: "There are approximately 15,000 civilians killed by violence in Iraq. That tremendous scale should not make us blind to the small human scale in this material. It is the deaths of one and two people per event that killed the overwhelming number of people in Iraq."

I'm still trying to work out my opinion on this topic, but I do have some incomplete observations:

As far as I know (and I don't think anyone has claimed otherwise), Wikileaks didn't steal this information; they didn't break into the Pentagon and photocopy the data. Rather, someone else made a copy and handed it over to Wikileaks, which is simply disseminating it (hence the obligatory references to the Pentagon Papers). So their ethical position is much more like that of the NYT in 1971 than that of the people who leaked the information.

Additionally, the time period during which a site like Wikileaks is necessary to disseminate this kind of information is coming to a close. In 1971, Daniel Ellsberg had to go to a huge operation—the New York Times—in order to get wide dissemination of the Pentagon Papers. Today a handful of people with a bunch of servers can do the same thing as the Times and get the attention of basically every major newspaper worldwide. As technology gets better, distributing this kind of information gets easier and easier. There have been several designs for worldwide anonymous, resilient distribution systems (e.g., Publius), and it's already possible to do worldwide data distribution with peer-to-peer systems like BitTorrent. It's likely that with a bit of technical savvy you could already distribute this kind of data beyond the ability of anyone to shut it down, at which point you won't need a middleman like Wikileaks.
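The resilience claim rests on content addressing: if a document is named by a hash of its bytes, anyone can verify a copy fetched from any untrusted mirror, so there's no authoritative host to shut down. A minimal sketch of the idea (this is an illustration, not BitTorrent's actual protocol, which chunks files and hashes the pieces separately):

```python
# Sketch of content addressing, the mechanism behind BitTorrent-style
# resilient distribution: the document's name IS the hash of its
# contents, so any copy from any peer can be checked for integrity.
import hashlib

def content_id(data: bytes) -> str:
    """Return the SHA-256 digest that serves as the document's identifier."""
    return hashlib.sha256(data).hexdigest()

document = b"the leaked documents ..."
doc_id = content_id(document)  # published widely; this short string is the "name"

# Later, a copy retrieved from some random, untrusted peer:
copy = b"the leaked documents ..."
assert content_id(copy) == doc_id           # intact copy verifies
assert content_id(b"tampered") != doc_id    # any alteration is detected
```

Because verification requires only the short identifier, censoring the data means suppressing every copy everywhere, while tampering with any one copy is immediately detectable.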

While of course there have been claims that Wikileaks is being irresponsible, it appears they did make some attempt to filter the documents to remove the most obviously dangerous information:

But Assange said that Wikileaks and the four newspapers that it shared the documents with back in June, including the New York Times, decided to redact all Iraqi names from the war logs.

In an environment where something like Wikileaks doesn't exist and people just self-publish over an uncontrolled service, even this minimal level of redaction is less likely to happen.

This brings us to the question of whether this sort of leak is in fact a threat to national security. Now, obviously, one could claim that the mere disclosure of bad behavior by the US and/or Iraqi militaries is itself a threat to national security, but I'm not really prepared to sign on to that expansive (and instrumental) a definition of national security. At that point you might as well argue that people who publish information about the now-cancelled Koran burning are in an ethically problematic position. I'm not sure where to draw the line here, but I think many if not most people believe that the mere fact that information is embarrassing (and potentially will make people think worse of the US) is an insufficient reason for it to be secret.

On the other hand, it seems clear that the publication of operational information (e.g., the names of US agents, informants, etc.) has a weaker claim to legitimacy. First, it bears less on the general public interest in knowing what the government is doing, and second, it presents a more direct harm to national security. As I said above, it's unclear whether the particular documents in question actually reveal this information; Morrell claims they do and Assange claims otherwise, so the question remains open. Regardless, since Wikileaks says they do some kind of redaction, it seems like they're in a pretty different ethical position from an organization which just passes through any information they get without any filtering.

With that said, the US government has something of a history of claiming national security for information that's more embarrassing than anything else. And since it seems clear that the government has at best not been entirely forthcoming, this rather weakens whatever arguments they want to offer about the need for secrecy:

More than 15,000 civilians died in previously unknown incidents. US and UK officials have insisted that no official record of civilian casualties exists but the logs record 66,081 non-combatant deaths out of a total of 109,000 fatalities.

This seems like the kind of information that the public has the right to know, but obviously the government didn't think so. I don't know to what extent organizations like Wikileaks are a reaction to a lack of government transparency/openness, but I'm not so sure that Wikileaks is solely responsible for whatever collateral damage results from the publication of this kind of material.


October 15, 2010

In a previous post, I trashed the stick-on badges that companies like to issue visitors. This doesn't mean I'm any more fond of the plastic RFID badges that get issued to employees. For those of you who haven't had a chance to see these, your typical employee ID is a plastic card with your picture, your name, and an embedded RFID device. For instance, this. In many (most?) companies, the door locks don't use keys but rather are RFID receivers activated by your badge.

I don't mean to give you the impression that I'm inherently against proximity-card-activated locks. On the contrary, if you've ever tried to lean a 20-pound box against the door while you figured out which of the four near-identical Schlage-style keys on your key ring matches your office door, you can easily appreciate the virtues of remote door lock activation (side note: this was one of the coolest features of the Prius when it first came out). However, the actual implementation leaves something to be desired.

Let's start with the combination of the proximity key (a good idea) with the photo badge (a less good idea). As with visitor badges, the security offered by a plastic card with your name and photo on it is relatively minimal. First, my experience is that employees don't do a very good job of checking badges, if they check them at all. As I said before, I routinely float around other people's companies without any badge at all and nobody ever stops me. Even if employees did check badges, at most this would be a cursory visual inspection, and it's trivial to make a plastic badge that looks like that of any company you choose, as long as you know what the real thing looks like. Sure enough, a little image searching quickly turned up images of badges for Google, Cisco, and Apple. So badges are next to useless for verifying people inside the security perimeter. (One exception: if you see someone doing something suspicious, you might ask for their badge and they might have been lame enough not to have forged one.)

Badges are potentially of some use at the security perimeter, where they can be processed by machines rather than fallible humans. Potentially, that is, except for two problems. First, RFID proximity cards are laughably easy to clone. As I understand it, you can even do this remotely, so you can just hang out somewhere employees pass by and make as many cloned badges as you want. Second, it's trivial to enter the building without badging in: despite corporate policies prohibiting it, at nearly every company I've ever visited, people with legitimate badges (or at least ones that the reader accepted!) have let me follow them into the building, even though I wasn't displaying any ID at all. Think how easy it would be if I were wearing a plausible-looking but nonfunctional piece of plastic.
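The cloning problem boils down to the fact that these cards simply broadcast a fixed identifier, with no challenge-response. A toy sketch (hypothetical, not modeled on any real badge product) of why a reader can't tell a genuine card from a replayed copy:

```python
# Toy model of a static-ID proximity card system. The card's only
# "secret" is a fixed UID broadcast to any reader in range, so a
# passive eavesdropper can capture it once and replay it forever.

AUTHORIZED_UIDS = {"04:A3:1F:2B"}  # the reader's access-control list

def reader_grants_access(broadcast_uid: str) -> bool:
    # The reader sees only the UID; it has no way to distinguish
    # a genuine card from a clone emitting the same bits.
    return broadcast_uid in AUTHORIZED_UIDS

# A legitimate employee badges in:
employee_uid = "04:A3:1F:2B"
assert reader_grants_access(employee_uid)

# An attacker sniffs the UID as the employee walks past, writes it
# to a blank card, and replays it. To the reader, it's identical:
sniffed_uid = employee_uid
assert reader_grants_access(sniffed_uid)
```

A card that instead proved possession of a key via challenge-response (the reader sends a fresh nonce, the card returns a keyed MAC over it) would not be defeated by passive sniffing, but that's not how typical proximity badges of this era work.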

This isn't to say you couldn't make a badge system work: you'd need a system where the badges really couldn't be copied and where there was strong enforcement against any kind of tailgating. That's not impossible, but it's very different from the current environment in many if not most organizations.