SYSSEC: December 2008 Archives


December 20, 2008

According to recent news coverage [*] [*] [*] Estonia is going to start allowing voters to use mobile phones to authenticate themselves for e-voting. It's a little hard to decipher the coverage, but this article suggests that voters aren't going to use the phone for the entire process but instead are going to use Internet-capable computer terminals for voting and the phones purely for authentication:
Estonia has been at the forefront of electronic voting for a number of years. In 2005 it started using a national ID card for authenticating voters and giving the go-ahead for using mobile phones is a continuation of that, according to Silver Meikar, a member of the Estonian Parliament and a longtime proponent of e-voting.

Voters will be authenticated using a digital certificate stored on SIM (Subscriber Identity Module) cards, which are already available to Estonians.

"You still need a computer and the Internet, but now you will have a choice of using your ID card plus card reader or a mobile ID to authenticate yourself," said Meikar.

Next on the agenda for the parliament following last Thursday's decision to allow mobile-phone authentication is to adapt the Internet voting system, which currently only supports the use of ID cards. "We are now starting to program the system, so at the moment we don't have the technical readiness," said Vinkel. Adding support for mobile authentication will take about six months, he added.

In general, I think it's fair to say that computer security researchers have a pretty negative view of Internet-based voting systems of this type, regardless of the authentication mechanism. This is a fairly complicated topic, but I wanted to try to explain some of the concerns.

First, it's important to be clear what sort of system we're talking about. There are a lot of ways to use the Internet for voting (results transmission, ballot distribution, registration, etc.) and I guess you could call any of them "Internet Voting". For the purposes of this post, however, I'm talking about a system where users vote on their own computers or mobile phones which then transmit the results over the Internet back to a central consolidation point. One example of such a system is Everyone Counts though I don't plan to talk about this system specifically.

There are a number of concerns with any system of this type. A nonexhaustive list would look something like this:

  • How are voters authenticated?
  • How do you prevent remote compromise of the tabulation system/EMS?
  • How do you verify that your vote was correctly tabulated?
  • How do you prevent remote compromise of the voter's terminal?

Voter Authentication
The voter authentication problem is probably the easiest to solve from a technical perspective. First, we understand how to do remote user authentication pretty well (though user interface and user compliance remain serious problems). It's certainly a lot easier if you can force all users to take some sort of authentication token, which seems to be the situation in Estonia. Moreover, the standards for voter authentication seem to be pretty low in any case. When I worked the polls in Santa Clara County, for instance, we were told we couldn't ask for identification unless the voter roll specifically told us to, which was more or less for first-time voters. Given this, it seems like you could use SSL with client certificates based on the smartcard. It's a little hard to tell how the Estonian system works, but it's probably something vaguely like this; given that it's based on cell phones, it might be AKA or some other 3G-type authentication system.
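To make the SSL-with-client-certificates idea concrete, here is a minimal sketch of the server-side TLS configuration such a system might use. This is a hypothetical illustration, not the actual Estonian setup; the `ca_file` path and function name are my own inventions.

```python
import ssl

# Hypothetical sketch: a server-side TLS context that demands a client
# certificate (e.g., one issued to the voter's national ID smartcard).
# The CA bundle path is an assumption, not a real deployment detail.
def make_voting_server_context(ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # no valid client cert, no connection
    if ca_file:
        # Trust only the (hypothetical) national CA that issued voter certs.
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

ctx = make_voting_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The point is just that the mechanics here are routine: the hard parts of voter authentication are credential issuance and user compliance, not the wire protocol.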

Remote Compromise of the EMS
Remote compromise of the EMS/tabulation system seems a lot more problematic. Pretty much by definition, there needs to be some Internet accessible server to receive your votes—otherwise it's not Internet voting. This means you need to worry about compromise of that server. How serious such compromise is depends on the way you've constructed your voting system. The naive way to build the system is as a sort of virtual DRE: users send their votes to the server which records them in memory, increments counters, etc. At the end of the election, you just spit out the votes and/or counter values. In such a system, compromise of the central server is extremely serious: an attacker can simply have the system output any election results of his choice. However, there are a variety of cryptographic mechanisms for building systems that are much more resistant to such attack, and in the limit don't require trusting the central server to deliver correct results at all. I'll talk about this very briefly under the next hed.
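To see why compromise of a naive "virtual DRE" is so serious, consider a toy sketch of such a server (class and candidate names are made up for illustration):

```python
from collections import Counter

# Toy model of a naive "virtual DRE" server: votes arrive over the
# network and the server just increments in-memory counters.
class NaiveTabulator:
    def __init__(self):
        self.tally = Counter()

    def record_vote(self, candidate):
        self.tally[candidate] += 1

    def results(self):
        return dict(self.tally)

t = NaiveTabulator()
for v in ["Jefferson", "Burr", "Jefferson"]:
    t.record_vote(v)
print(t.results())  # {'Jefferson': 2, 'Burr': 1}

# Nothing binds the output to the votes actually cast: an attacker who
# controls the server can simply rewrite the counters.
t.tally["Burr"] = 10**6
```

There is no record anywhere that contradicts the forged counters, which is exactly the property the cryptographic designs are trying to eliminate.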

However, cryptographic voting systems don't provide a complete solution to the problem of server compromise. In particular, while they guarantee correct tabulation (for some value of guarantee), they don't guarantee availability. Consider what happens if the central server goes down on election night and nobody can record their vote. More creatively, an attacker could selectively block voting from specific individuals based on (for instance) their voter registration. Even if an anonymous authentication mechanism were used [technical note: for instance, certificates signed with blind signatures], an attacker could use IP identification and geolocation technology to get a pretty good idea of who voters were or at least where they were and thus selectively disenfranchise certain voters. Sure, in principle the voters could protest and maybe somehow get their votes to count (though this is much more complicated than it looks, since you have to worry about people who didn't vote on election day deciding retrospectively that they should have and then claiming they were denied service), but in practice how many would do so? So, denial of service is a real concern here.

Verifying Correct Tabulation
As I said above, it's possible to produce cryptographic systems which allow the demonstration of correct tabulation without requiring you to trust the tabulator. The details are complicated, but it's easy to see how to do it if you don't mind people's votes being published. You simply submit a digitally signed copy of your vote to the server. The server publishes all the signed votes. Once the election is over, you can verify that your vote was posted and that all the votes add up. Note that this mechanism is deeply flawed: for starters, it's generally not considered OK to post every vote. However, building a system with appropriate privacy guarantees is much harder and requires a fair bit more crypto.
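The publish-everything scheme above can be sketched in a few lines. Real systems would use public-key digital signatures; per-voter HMAC keys stand in for them here purely to keep the sketch self-contained, and all the names and keys are made up:

```python
import hmac, hashlib

# Stand-in for a digital signature: HMAC with a per-voter key.
# (A real system would use public-key signatures so anyone can verify.)
def sign(key, vote):
    return hmac.new(key, vote.encode(), hashlib.sha256).hexdigest()

keys = {"alice": b"key1", "bob": b"key2"}   # hypothetical voter keys

# The server publishes every signed vote on a public bulletin board.
board = [("alice", "Jefferson", sign(keys["alice"], "Jefferson")),
         ("bob",   "Burr",      sign(keys["bob"],   "Burr"))]

# Any voter can check that their own entry appears, unmodified...
assert ("alice", "Jefferson", sign(keys["alice"], "Jefferson")) in board

# ...and anyone can re-tally the published votes for themselves.
tally = {}
for _voter, vote, _sig in board:
    tally[vote] = tally.get(vote, 0) + 1
print(tally)  # {'Jefferson': 1, 'Burr': 1}
```

This makes the flaw obvious too: the bulletin board ties every voter's name to their vote, which is precisely what the fancier cryptographic constructions exist to avoid.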

I'm not an expert in cryptographic voting, but as far as I can tell, all the known systems have two major drawbacks. First, they require at least some fraction of voters to check that their votes are correctly recorded. It's not clear that voters will do this in practice. Note that the system I described above doesn't have that problem, but only because we've obliterated all the privacy guarantees. The second, more serious, problem is that they're complicated and convincing the average voter that they really prove what they are supposed to prove is extremely difficult. There's a fair amount of skepticism outside the crypto community about the degree to which the public at large is willing to trust systems that they don't really understand. [Note that one could argue that that's true of current computerized systems, but they are more familiar in operation and of course there is widespread distrust of such systems.]

Compromise of the Voter Terminal
Finally, we have to consider remote compromise of the voter's computer. Again, more or less by definition it's on the Internet, and personal computers are notoriously poorly maintained and vulnerable to attack (hence botnets). This threat is the hardest to secure against. A compromised terminal can present any information to the user it pleases. For instance, it could claim you're voting for Jefferson when actually you're voting for Burr. Even if afterwards you check your vote on some other computer and discover the fraud, there's no way for the electoral system to distinguish this from user error or buyer's remorse. As long as consumer operating systems remain as insecure as they currently are, it's pretty hard to see how to deal with this problem adequately.


December 14, 2008

The thing I love about the Mac is how it just works. Take today (well, really the whole weekend) for example.

For a variety of reasons, I decided it was time to use an encrypted filesystem on my laptop. The natural choice here is FileVault, which a little net research suggests is imperfect, but is, after all, what Apple provides, thus avoiding contaminating a perfect Apple artifact with any un-Jobslike software. That said, I'm not completely crazy, so on the advice of counsel I decided to proceed deliberately:

Step 1: Take a backup
Since encrypted filesystems tend to have less attractive failure modes than ordinary filesystems, it seemed like a good idea to take a backup. Originally, my plan here was to use Time Machine (Apple product, remember), but when I actually went to run it, performance was rather less than great. I suspect the problem here is that it's working file by file because it needs to be able to build a data structure that allows reversion to arbitrary time checkpoints. In any case, I got impatient and aborted it, figuring I'd move back to regular UNIX tools. Unfortunately, dump doesn't work with HFS/HFS+, so this left me with tar. Tar is generally quite a bit slower than dump because it works on a file-by-file basis, which is an especially serious issue with a drive with bad seek time like the 4200 RPM drive in the Air. [Evidence for this theory: dd if=/dev/zero to the USB backup drive did 20 MB/s, so it's probably not a limitation of the USB bus or the external drive.] It's not clear to me that it's actually any faster than Time Machine, but it has the advantage of being predictable and behaving in a way I understand.
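For the curious, the tar-plus-throughput-check routine above looks roughly like this (paths here are temporary stand-ins; substitute your home directory and backup volume):

```shell
# Sketch of the backup described above, using throwaway paths.
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "data" > "$SRC/file.txt"

# File-by-file backup with tar, preserving permissions (-p):
tar -cpf "$DEST/home-backup.tar" -C "$SRC" .

# Raw sequential write to the destination, to check whether the bus or
# drive (rather than per-file overhead) is the bottleneck:
dd if=/dev/zero of="$DEST/ddtest" bs=1024 count=1024 2>/dev/null

# Sanity-check the archive contents:
tar -tf "$DEST/home-backup.tar" | grep -q 'file.txt' && echo "backup ok"
```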

Step 2: Turn on FileVault
At this point, I've got a backup and things should be easy, so I clicked the button to turn on FileVault. The machine thought for a while and then announced I needed more free space (as much as the size of my home directory) to turn on FileVault.

Step 3: Clean Up
OK, no problem. I'll just move some of my data off the machine and onto the backup drive [you don't trust the original backup do you?], turn on FileVault and then copy it back. This takes a few hours, but finally I managed to clear out 18 G or so and I had enough room to turn on FileVault.

Step 4: Turn on FileVault (II)
OK, at this point we really should be ready. I started up FileVault and this time it cheerfully announced it was encrypting my home directory and things would be ready in 12 hours or so. OK, so that's not so bad, it'll be done when I wake up. No such luck. About an hour in it complained that it had an error copying a file and it had aborted. At this point, I was starting to rethink my plan; maybe encrypting my massive operational home directory isn't such a good idea. But I'm still committed to FileVault—more committed since I've put so much time into it!—so this brings us to...

Step 5: The Big Purge
At this point I decided to get serious and delete almost everything off my home directory, turn on FV, and restore from backup. Luckily, I checked my backup only to realize I'd fumble-fingered and deleted the backup file (Doh!). Two hours to pull another backup, and then I need to delete files. At this point, we're talking real data, not just Music and stuff like that, so I need a secure delete. A little reading suggests srm is the tool for the job and I set it to run overnight. Unfortunately, the next morning it's only deleted about 2G, so this is going to take forever [Technical note: I was only using 7-pass mode, not 35-pass mode. I'm paranoid, not insane]. Luckily, there's also rm -P which does a 3-pass delete but seems to be much more than 2x faster than srm. I run that and fairly quickly have my home directory trimmed down to a svelte 2GB, leaving us ready for Step 6.
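Conceptually, a 3-pass overwrite of the kind rm -P performs boils down to something like the following toy sketch (a didactic illustration only, not a substitute for a real secure-delete tool, and it says nothing about filesystem journals or spare blocks):

```python
import os

# Toy sketch of a multi-pass overwrite-then-unlink, roughly what a
# 3-pass secure delete does: overwrite with 0xff, then 0x00, then 0xff.
def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    patterns = (b"\xff", b"\x00", b"\xff")[:passes]
    with open(path, "r+b") as f:
        for pattern in patterns:
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # push each pass out to the disk
    os.remove(path)

# Demo on a throwaway file:
with open("secret.tmp", "wb") as f:
    f.write(b"precious precious data")
overwrite_and_delete("secret.tmp")
print(os.path.exists("secret.tmp"))  # False
```

Each pass rewrites the whole file, which is why a 7-pass srm over gigabytes of data on a slow laptop drive takes all night.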

Step 6: Turn on FileVault (III)
This time when I turn on FV, things look pretty good. It encrypts everything in about an hour and then announces that it's going to delete my old Home directory—I'd checked the secure delete checkbox, whatever that does. Unfortunately, whatever it does is bad since 4 hours later it's still securely deleting away. A little research suggests it's safe to abort this, so I give it a hard power reset (did I mention there's no cancel button, or rather that there is one but it's grayed out at this point? Also, no real progress bar, just the old spinning blue candy cane.). Anyway, the machine reboots just fine and I now have an allegedly encrypted home directory and a directory that's named /Users/ekr-<random-numbers>. I figure that's the old home directory and hit it with the old rm -P and it vanishes.

Step 7: Nuke the site from orbit. It's the only way to be sure
At this point, I've been doing a lot of deleting, and it's pretty hard to be sure that I haven't typoed or that the filesystem hasn't screwed me somehow and copied some of my precious precious data to some unused partition, so I decide it would be a good idea to run "Erase Free Space" with 7 passes, just to make sure. I set it for 7 pass and started it up about 5 hours ago. I'll let you know when it finishes. The current promise is 12 hours.

UPDATE (5:55 AM): More progress on the progress bar, but still promising 12 hours.