EKR: October 2010 Archives

 

October 25, 2010

While I love my Kindle, it does have some annoying features. The most annoying feature, as I've mentioned before, is the lack of a touch screen. However, nearly as annoying is that while you can read books on any Kindle you own, they're completely tied to your account. Seeing as the price of Kindle books is often nearly as high as the price of the corresponding paper book, this seems like a pretty significant drawback. Now, Amazon is apparently relaxing this restriction, but only fractionally:
Second, later this year, we will be introducing lending for Kindle, a new feature that lets you loan your Kindle books to other Kindle device or Kindle app users. Each book can be lent once for a loan period of 14-days and the lender cannot read the book during the loan period. Additionally, not all e-books will be lendable - this is solely up to the publisher or rights holder, who determines which titles are enabled for lending.

Now, I'm not saying I wasn't fully aware that Kindle books weren't transferable when I bought my Kindle (willing buyer/willing seller and all that). However, with that said, I will observe that this is a pretty small concession (apparently it matches the behavior of the Nook). Kindle books would be a lot more like real books if I could lend them out permanently or at least semi-permanently.1 The analogous restriction would be that only one person could have the book attached to their Kindle account at once. That's still a pretty big pile of DRM, but at least in my case it would make me a lot more willing to shell out for Kindle books.

1. Digression: The early programming language implementation Turbo Pascal was distributed on a "just like a book" basis, under which you were allowed to use it on multiple computers as long as there was no chance of it being used in two places at once.

 

October 24, 2010

As you no doubt know, Wikileaks just dumped a whole pile of documents about the war in Iraq [the Guardian has good coverage here]. The big news story seems to be that the US military more-or-less ignored torture of detainees by the Iraqi military. This data dump has been answered by the usual denunciations of Wikileaks as having damaged national security. For instance, Chairman of the Joint Chiefs Mike Mullen tweets (yes, tweets!):
Another irresponsible posting of stolen classified documents by Wikileaks puts lives at risk and gives adversaries valuable information.

And of course, here is Geoff Morrell, the Pentagon Press Secretary:

"There are thousands of Iraqi names in these documents that have been compromised. 300 of whom we believe are particularly in danger and we have shared that information with our forces in Iraq for them to take prophylactic measures to protect them," Pentagon Press Secretary Geoff Morrell said Friday.

Assange's defense of the leaks is similarly predictable:

At a packed press conference held in hotel in Central London Saturday, WikiLeaks founder Julian Assange declared, "This disclosure is about the truth. We hope to correct some of that attack on the truth that occurred before the war, during the war, and which has continued on since the war officially concluded." Added the tall, wan, Australian-accented Assange: "There are approximately 15,000 civilians killed by violence in Iraq. That tremendous scale should not make us blind to the small human scale in this material. It is the deaths of one and two people per event that killed the overwhelming number of people in Iraq."

I'm still trying to work out my opinion on this topic, but I do have some incomplete observations:

As far as I know (and I don't think anyone has claimed otherwise) Wikileaks didn't steal this information; they didn't break into the Pentagon and photocopy the data. Rather, someone else made a copy and handed it over to Wikileaks. Wikileaks is simply disseminating it (hence the obligatory references to the Pentagon Papers). So their ethical position is much more like that of the NYT in 1971 than that of the people who leaked the information.

Additionally, the time period during which a site like Wikileaks is necessary to disseminate this kind of information is coming to a close. In 1971, Daniel Ellsberg had to go to a huge operation—the New York Times—in order to get wide dissemination of the Pentagon Papers. Today a handful of people with a bunch of servers can do the same thing as the Times and get the attention of basically every major newspaper worldwide. As technology gets better, distributing this kind of information gets easier and easier. There have been several designs for worldwide anonymous, resilient distribution systems (e.g., Publius), and it's already possible to do worldwide data distribution with peer-to-peer systems like BitTorrent, so it's likely that with a bit of technical savvy you could distribute this kind of data beyond the ability of anyone to shut it down, at which point you won't need a middleman like Wikileaks.

While of course there have been claims that Wikileaks is being irresponsible, it appears they did make some attempt to filter the material to remove the most obviously dangerous information:

But Assange said that Wikileaks and the four newspapers that it shared the documents with back in June, including the New York Times, decided to redact all Iraqi names from the war logs.

In an environment where something like Wikileaks doesn't exist and people just self-publish over an uncontrolled service, even this minimal level of redaction is less likely to happen.

This brings us to the question of whether this sort of leak is in fact a threat to national security. Now, obviously, one could claim that the mere disclosure of bad behavior by the US and/or Iraqi militaries is itself a threat to national security, but I'm not really prepared to sign on to that expansive (and instrumental) a definition of national security. At that point you might as well argue that people who publish information about the now-cancelled Koran Burning are in an ethically problematic position. I'm not sure where to draw the line here, but I think many if not most people believe that the mere fact that information is embarrassing (and potentially will make people think worse of the US) is insufficient reason for it to be secret.

On the other hand, it seems clear that the publication of operational information (e.g., the names of US agents, informants, etc.) has a weaker claim to legitimacy. First, it bears less on the general public interest in knowing what the government is doing and second it presents a more direct harm to national security. As I said above, it's unclear whether the particular documents in question actually reveal this information; the Pentagon claims they do and Assange claims otherwise, so the question seems to remain open. Regardless, since Wikileaks says they do some kind of redaction, it seems like they're in a pretty different ethical position from an organization which just passes through any information they get without any filtering.

With that said, the US government has something of a history of claiming national security for information that's more embarrassing than anything else. And since it seems clear that the government has at best not been entirely forthcoming, this rather weakens whatever arguments they want to offer about the need for secrecy:

More than 15,000 civilians died in previously unknown incidents. US and UK officials have insisted that no official record of civilian casualties exists but the logs record 66,081 non-combatant deaths out of a total of 109,000 fatalities.

This seems like the kind of information that the public has the right to know, but obviously the government didn't think so. I don't know to what extent organizations like Wikileaks are a reaction to a lack of government transparency/openness, but I'm not so sure that Wikileaks is solely responsible for whatever collateral damage results from the publication of this kind of material.

 

October 23, 2010

As I mentioned earlier, it's hardly surprising that when Google was cruising your neighborhood collecting WiFi signals, they would collect some personal information. It seems Canada's privacy commissioner, Jennifer Stoddart, had her office check things out, and they found the expected:
The personal information collected included complete e-mails, e-mail addresses, usernames and passwords, names and residential telephone numbers and addresses. Some of the captured information was very sensitive, such as a list that provided the names of people suffering from certain medical conditions, along with their telephone numbers and addresses.

It is likely that thousands of Canadians were affected by the incident.

Technical experts from the Office of the Privacy Commissioner travelled to the company's offices in Mountain View, Calif. in order to perform an on-site examination of the data that was collected. They conducted an automated search for data that appeared to constitute personal information.

To protect privacy, the experts manually examined only a small sample of data flagged by the automated search. Therefore, it's not possible to say how much personal information was collected from unencrypted wireless networks.

It's not clear why an investigation was needed here. Of course Google collected personal information; that's inevitable whenever you go around sniffing people's networks. The relevant questions are: (1) what to do with that information and (2) what sort of procedures would stop it happening again. Stoddart's recommendations on this point seem pretty plausible:

In light of her investigation, the Privacy Commissioner recommended that Google ensure it has a governance model in place to comply with privacy laws. The model should include controls to ensure that necessary procedures to protect privacy are duly followed before products are launched.

The Commissioner has also recommended that Google enhance privacy training to foster compliance amongst all employees. As well, she called on Google to designate an individual or individuals responsible for privacy issues and for complying with the organization's privacy obligations - a requirement under Canadian privacy law.

She also recommended that Google delete the Canadian payload data it collected, to the extent that the company does not have any outstanding obligations under Canadian and American laws preventing it from doing so, such as preserving evidence related to legal proceedings. If the Canadian payload data cannot immediately be deleted, it needs to be secured and access to it must be restricted.

But you didn't need an investigation to tell you that.

One thing still puzzles me, though: "If the Canadian payload data cannot immediately be deleted, it needs to be secured and access to it must be restricted." Does this imply that access hasn't already been restricted? If not, why not? I certainly understand why Google might need to keep it around as fodder for more pro forma investigations, but other than that, why can't it be destroyed or at least completely locked down?

 

October 21, 2010

Ingo Boltz attempts to resurrect the Caltech/MIT "FROG" ballot approach. His idea is that you divide the job of building an e-voting system into two parts:
  • A "vote generator" module, which has a DRE-style UI, but instead of recording votes on an electronic memory, outputs a human-readable but also machine-readable paper ballot.
  • A "vote casting" module, which processes the output ballots from the vote generator and tabulates the results.

This is a familiar design (the technical term is an Electronic Ballot Marker (EBM)). What's new is that Boltz suggests that the vote generator (i.e., the EBM) be built by the usual voting machine vendors, while the vote casting module (i.e., the tabulation device) be built by some open source group in cooperation with academic security experts. It's not clear to me that this really changes the situation.
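To make the division of labor concrete, here's a minimal Python sketch of the two-module design (the class and field names are mine, purely for illustration): the only thing that crosses the boundary between the modules is the paper ballot, which carries both a human-readable rendering and a machine-readable encoding of the same selections.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PaperBallot:
    """The sole interface between the two modules: a printed ballot that is
    both human-readable (text the voter can verify) and machine-readable
    (an encoded payload, e.g. a barcode, that the tabulator scans)."""
    human_readable: str
    machine_readable: Dict[str, str]   # contest -> selected choice

class VoteGenerator:
    """The EBM half: DRE-style UI in front, paper ballot out the back."""
    def __init__(self, contests: Dict[str, List[str]]):
        self.contests = contests

    def mark_ballot(self, selections: Dict[str, str]) -> PaperBallot:
        for contest, choice in selections.items():
            if choice not in self.contests.get(contest, []):
                raise ValueError(f"invalid selection for {contest}: {choice}")
        text = "\n".join(f"{c}: {ch}" for c, ch in selections.items())
        return PaperBallot(human_readable=text, machine_readable=dict(selections))

class VoteCaster:
    """The tabulator half: scans cast paper ballots and tallies them."""
    def __init__(self):
        self.tallies: Dict[str, Dict[str, int]] = {}

    def cast(self, ballot: PaperBallot) -> None:
        for contest, choice in ballot.machine_readable.items():
            contest_tally = self.tallies.setdefault(contest, {})
            contest_tally[choice] = contest_tally.get(choice, 0) + 1

# Usage: the voter is expected to read ballot.human_readable before casting.
generator = VoteGenerator({"Mayor": ["Alice", "Bob"]})
ballot = generator.mark_ballot({"Mayor": "Alice"})
caster = VoteCaster()
caster.cast(ballot)
print(caster.tallies)   # {'Mayor': {'Alice': 1}}
```

Nothing in this split keeps the machine-readable payload honest; that job falls to the voter checking the printout, which is exactly the weak link.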

The reason why people like EBM designs is that they appear not to require trust in the EBM itself. The idea here is that because the EBM generates a human-readable paper ballot, even if it's compromised the voter will notice that the paper is wrong before it's cast. So, you have the convenience of a DRE combined with the security of an optical scan system. Unfortunately, the available human factors evidence suggests that humans do a very poor job of checking the output of this kind of device. I'm not aware of research done specifically on EBMs, but Everett has studied the question of how often users noticed that malicious DREs changed their votes and found that less than 40% of voters actually checked. This implies that a malicious EBM could actually do quite a bit of damage and thus remains a security critical component.
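To get a feel for what a roughly 40% check rate implies, here's a back-of-envelope calculation; the check rate is the only number taken from that research, and the precinct size and flip rate are made up for illustration:

```python
# Back-of-envelope: how far does a malicious EBM get if most voters never
# verify the printout? Only the ~40% check rate comes from the DRE studies;
# the other numbers are invented for illustration.
ballots = 1000        # ballots cast in a hypothetical precinct
flip_rate = 0.05      # fraction of ballots the EBM silently alters (assumed)
check_rate = 0.40     # fraction of voters who actually inspect the printout

flipped = ballots * flip_rate
caught = flipped * check_rate     # generously assume every checker notices
undetected = flipped - caught

print(f"altered: {flipped:.0f}, caught: {caught:.0f}, undetected: {undetected:.0f}")
# altered: 50, caught: 20, undetected: 30
```

Even on these generous assumptions (every voter who checks catches the error), 30 of the 50 altered ballots sail through, and the 20 complaints that do surface are easy to write off as voter error.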

In some respects, the tabulator is actually a less security critical component. While the tabulation operation is of course security critical, we have a number of techniques for verifying correct tabulation even in the face of a not-totally-trustworthy tabulation device (manual recounts, audits, re-scanning, etc.). So if we employ those techniques—which aren't in wide use now—then we actually don't need to worry so much about the tabulator itself being designed by a team of geniuses.
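Here's a cartoon of the simplest such technique, a ballot-comparison style audit: draw a random sample of cast paper ballots and check each one against the tabulator's record of that same ballot. This is only a sketch with invented data; a real risk-limiting audit chooses the sample size and the escalation rule statistically.

```python
import random

def audit_sample(paper_ballots, machine_records, sample_size, seed=None):
    """Cartoon ballot-comparison audit: compare a random sample of paper
    ballots (the ground truth) against the tabulator's interpretation of
    the same ballots. Any mismatch is evidence the tabulator can't be
    trusted and should trigger a larger audit or a full hand count."""
    rng = random.Random(seed)
    indices = rng.sample(range(len(paper_ballots)), sample_size)
    return [i for i in indices if paper_ballots[i] != machine_records[i]]

# Usage: one hypothetically mistabulated ballot out of 1000.
paper = ["Alice", "Bob", "Alice", "Alice", "Bob"] * 200
machine = list(paper)
machine[3] = "Bob"
mismatches = audit_sample(paper, machine, sample_size=100, seed=1)
print(mismatches)   # indices where paper and machine disagree (may be empty)
```

As long as the paper exists and gets sampled, the tabulator's software doesn't have to be trusted, only checked.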

This is good because the whole idea that an open source collaboration involving academic security experts will deliver a really secure system seems to me to be fatally flawed. The reason that researchers have been so effective at attacking electronic voting systems isn't because they are so smart and the voting vendors are so dumb—though of course many of the researchers are very smart and in many cases the design of the systems has left much to be desired—but rather because building secure software systems is incredibly difficult. Obviously I can't speak for all researchers, but while I feel pretty comfortable in my ability to attack voting machines, I wouldn't want to accept a contract to build a machine which couldn't be attacked by others. This is in large part why so many security researchers want to design software independent systems that don't require trusting the software at all.

 

October 15, 2010

In a previous post, I trashed the stick-on badges that companies like to issue visitors. This doesn't mean I'm any more fond of the plastic RFID badges that get issued to employees. For those of you who haven't had a chance to see these, your typical employee ID is a plastic card with your picture, your name, and an embedded RFID device. In many (most?) companies, the door locks don't use keys but rather are RFID receivers activated by your badge.

I don't mean to give you the impression that I'm inherently against proximity-card activated locks. On the contrary, if you've ever tried to lean a 20 pound box against the door while you figured out which of the four near-identical Schlage-style keys on your key ring matches your office door, you can easily appreciate the virtues of remote door lock activation (side note: one of the coolest features about the Prius when it first came out). However, the actual implementation leaves something to be desired.

Let's start with the combination of the proximity key (a good idea) with the photo badge (a less good idea). As with visitor badges, the security offered by a plastic card with your name and photo on it is relatively minimal. First, my experience is that employees hardly ever do a good job of checking badges, if they check them at all. As I said before, I routinely float around other people's companies without any badge at all and nobody ever stops me. Even if employees did check badges, at most this would be a cursory visual inspection, and it's trivial to make a plastic badge that looks like that of any random company you choose, as long as you know what it looks like. Sure enough, a little image searching quickly turned up images of badges for Google, Cisco, and Apple. So, badges are next to useless for verifying people inside the security perimeter. (One exception: if you see someone doing something suspicious, you might ask for their badge and they might have been lame enough not to have forged one.)

Badges are potentially of some use at the security perimeter, where they can be processed by machines rather than fallible humans. Potentially, that is, except for two problems. First, RFID proximity cards are laughably easy to clone. As I understand it, you can even do this remotely, so you can just hang out somewhere that employees pass by and make as many cloned badges as you want. Second, it's trivial to enter the building without being badged in: despite corporate policies prohibiting it, at nearly every company I've ever visited, people with legitimate badges (or at least ones that the reader accepted!) have let me follow them into the building, even though I wasn't displaying any ID at all. Think how easy it would be if I were wearing a plausible looking but nonfunctional piece of plastic.
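The cloning problem boils down to the fact that a basic proximity card just emits a static identifier. Here's a toy model of the access check (not any real badge protocol; the IDs and the allow-list are invented) showing why replaying a sniffed ID is indistinguishable from presenting the real badge:

```python
# Toy model of a static-ID proximity badge system (illustrative only; real
# deployments vary, and some use cryptographic challenge-response precisely
# to avoid this problem).

AUTHORIZED_IDS = {"0xDEADBEEF", "0xCAFED00D"}   # IDs provisioned to employees

def door_controller(presented_id: str) -> bool:
    # The decision depends only on the ID itself, not on how it was produced,
    # so a cloned card that replays a sniffed ID passes the same check.
    return presented_id in AUTHORIZED_IDS

legit_badge_id = "0xDEADBEEF"
cloned_badge_id = legit_badge_id        # captured once near the employee entrance

print(door_controller(legit_badge_id))   # True
print(door_controller(cloned_badge_id))  # True: the clone opens the door too
```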

This isn't to say you couldn't make a badge system work: you'd need a system where the badges really couldn't be copied and where there was strong enforcement against any kind of tailgating. That's not impossible, but it's very different from the current environment in many if not most organizations.