November 2008 Archives

 

November 30, 2008

Groklaw has an interesting article on the implications of the guilty verdict in the Drew case. (See also the amicus brief submitted by EFF et al.) The basic point here is that the legal theory under which Drew was prosecuted was that she had violated the Computer Fraud and Abuse Act by accessing MySpace's site in violation of their terms of service. As the amici observe, terms of service are often extremely vague (YouTube's Community Guidelines prohibit "bad stuff"), users generally don't read them, and sites probably often expect you to violate them (did you know that Google's TOS prohibit use if "you are not of legal age to form a binding contract with Google"?). They conclude that it's a really bad idea to give them the force of criminal law—should it really be possible to put 17-year-olds who use Google in jail?

All this is of course true, but it seems like a pretty strong argument against terms of service in general. If they're ridiculously vague and nobody reads them anyway, then how does it make sense for them to be treated as some kind of enforceable contract? OK, so you can't do time, but check out this clause from Facebook's TOS:

You agree to indemnify and hold the Company, its subsidiaries and affiliates, and each of their directors, officers, agents, contractors, partners and employees, harmless from and against any loss, liability, claim, demand, damages, costs and expenses, including reasonable attorney's fees, arising out of or in connection with any User Content, any Third Party Applications, Software or Content you post or share on or through the Site (including through the Share Service), your use of the Service or the Site, your conduct in connection with the Service or the Site or with other users of the Service or the Site, or any violation of this Agreement or of any law or the rights of any third party.

As I read this, if I as a Facebook user do something that causes Facebook some liability—even if I'm otherwise complying with the TOS—I've just agreed to indemnify them from any loss. That seems like a pretty substantial obligation to take on; I wonder if the average user has thought about it.

 

November 29, 2008

This NYT article on preemptive demolition by property owners is extremely odd. The basic story is the (totally unsurprising) fact that property owners in NYC who expect their property to be declared historic landmarks (after which changes will be highly restricted) are preemptively demolishing historic features in order to avoid the designation. What's striking about this article and its predecessor is the essentially complete absence of the property owners. The only players who get coverage are the Landmarks Commission and the preservationists who claim that the Commission is dragging its heels.

But once the building's distinctive features had been erased, the battle was lost. The commission went ahead with its hearing, but ultimately decided not to designate the structure because it had been irreparably changed. Today a 16-story luxury condominium designed by Robert A. M. Stern is rising on the site: the Related Companies is asking from $765,000 for a studio to $7 million or more for a five-bedroom unit in the building.

The strategy has become wearyingly familiar to preservationists. A property owner -- in this case Sylgar Properties, which was under contract to sell the site to Related -- is notified by the landmarks commission that its building or the neighborhood is being considered for landmark status. The owner then rushes to obtain a demolition or stripping permit from the city's Department of Buildings so that notable qualities can be removed, rendering the structure unworthy of protection.

"In the middle of the night I'm out there at 2 in the morning, and they're taking the cornices off," said Gale Brewer, a city councilwoman who represents that part of the Upper West Side. "We're calling the Buildings Department, we're calling Landmarks. You get so beaten down by all of this. The developers know they can get away with that."

I'm sure that preservationists do find this to be a wearyingly familiar strategy, but it's not exactly unexpected: put yourself in the position of the property owner. At the moment, you have control of a building and can mostly do what you want with it (subject of course to the existing zoning regulations), and suddenly you hear that you're going to be subject to a whole bunch of new, annoying restrictions, which you can evade by doing some minor surgery on your building (if you own a lot of property, you might find this to be a wearyingly familiar story). What would you do? My sympathies here are mostly with the property owners, largely because the preservationists don't seem to have any sense that they're using the power of the state to inflict costs on others. That said, I'm not exactly a fan of having hog slaughtering operations or meth labs set up next door to my house, so it's not like I have no sympathy for zoning. I'd just like to see a more balanced presentation.

 
OK, so opinions differ about whether or not it's a good idea to encourage the use of self-signed certificates for SSL servers. As I read the situation, the basic arguments go like this:

For:
Active attacks are relatively uncommon but passive sniffing is a big problem, so the world would be better off if people used SSL, even if there is no real authentication of the server. Moreover, if you use SSH-style "leap-of-faith" authentication techniques where you memorize the server's certificate and get worried if it changes, you are fairly resistant to active attack (sketched in code below).

Against:
Active attacks are a real threat, and people are already way too willing to ignore warnings from the browsers about invalid certificates. If we encourage self-signed certs, people will only get more lax.
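To make the leap-of-faith idea concrete, here's a minimal Python sketch of SSH-style certificate memorization. The pin-store location and result strings are invented for illustration; the point is the trust-on-first-use logic, not how any browser actually implements it.

    import hashlib
    import json
    import os
    import socket
    import ssl

    PIN_FILE = os.path.expanduser("~/.cert_pins.json")  # hypothetical pin store

    def cert_fingerprint(host, port=443):
        """Fetch the server's certificate without CA validation and hash it."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # no CA or name check: the whole point
        ctx.verify_mode = ssl.CERT_NONE  # of leap-of-faith is there's no CA
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def leap_of_faith(host, port=443):
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        key = f"{host}:{port}"
        fp = cert_fingerprint(host, port)
        if key not in pins:
            pins[key] = fp  # first contact: memorize the cert (the "leap")
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "trusted on first use"
        if pins[key] == fp:
            return "certificate unchanged"
        return "CERTIFICATE CHANGED: possible active attack"

The security you get is exactly what the first argument claims: nothing on first contact, but fairly good resistance to active attack on every contact after that.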

This is a contentious issue in the security community, but few of us are in a position to do much other than rant. On the other hand, if you work for a major browser vendor, you do get to do something. It was big news (at least in the narrow security community) a while back when Firefox 3 took a much more aggressive line on invalid certificates. I was initially sort of sanguine about this turn of events, since many security types have long been worried about users ignoring error messages (see above), but (at minimum) the execution seems to be a little lacking.

Here's how things shake out when you use Firefox to connect to some site with an "invalid" cert. First, you get the following error screen:

So, first, this looks like a hard error to any sane person. In the past few weeks I've seen several people not really know what to do here, and even I've done a double take at least once before I realized it was just a certificate error I could override (note that the dialog doesn't in any way suggest that this could be intentional and/or safe in certain circumstances). Anyway, once you figure out what's going on, you click "Or you can add an exception..." which takes you to the following screen:

I'm not sure I entirely agree with Firefox's opinion about when you should add an exception. I get https: URLs fairly often in contexts where I'm not overworried about security. If you would have been willing to retrieve the page with plain HTTP, you should also be willing to retrieve it with HTTP over TLS. Maybe it's bad policy by the server but it's not unsafe as far as I can tell.

If you click "Add Exception..." you then get:

Note that you can't actually add the exception at this point. Every button is grayed out besides "Get Certificate" and "Cancel". When you click "Get Certificate", the browser fills in the information, giving us the following dialog:

Now you can confirm the exception and after four separate dialogs, you can finally get to the original page you were looking for.

Whatever one's position on self-signed certs, this all seems unnecessarily clumsy. I'm particularly struck by dialog 3, where they force you to download the certificate, despite the fact that Firefox absolutely has a copy, having obtained it when it first contacted the server. Why doesn't it just fill in the dialog instead of forcing you to click through? It's one thing to give you an alarming warning, but the rest of this feels a lot like editorializing via UI. You know what I'm talking about here: we don't think you should be doing this, so despite the fact that you're insisting on it, we'll make it as inconvenient and irritating as possible. I don't know whether that was what was in the programmers' heads or not, but it seems to me that one could produce a rather better UI even if your underlying objective was to discourage self-signed certs.

 

November 26, 2008

As you may have heard, President-Elect Obama may need to give up his Blackberry for "security reasons":

But before he arrives at the White House, he will probably be forced to sign off. In addition to concerns about e-mail security, he faces the Presidential Records Act, which puts his correspondence in the official record and ultimately up for public review, and the threat of subpoenas. A decision has not been made on whether he could become the first e-mailing president, but aides said that seemed doubtful.

...

Diana Owen, who leads the American Studies program at Georgetown University, said presidents were not advised to use e-mail because of security risks and fear that messages could be intercepted.

"They could come up with some bulletproof way of protecting his e-mail and digital correspondence, but anything can be hacked," said Ms. Owen, who has studied how presidents communicate in the Internet era. "The nature of the president's job is that others can use e-mail for him."

These seem like separate issues. I don't know what the Presidential Records Act says, outside of the Wikipedia article, but presumably this is an argument against the President using email at all, not just a Blackberry. What's required here, rather, is discretion in what gets sent over the Blackberry.

The security ("hacking") problem seems more serious. There are a number of issues here, including:

  • Confidentiality of the data going to and from the Blackberry.
  • Remote compromise of the Blackberry.
  • Tracking of the President via his Blackberry.
The confidentiality problem is comparatively easy to address. Cellular networks generally have relatively weak encryption, and even if that weren't true, you can't trust the cellular provider anyway. That said, there's plenty of technology for setting up encrypted channels from the Blackberry back to some server in the White House, where the data gets handled like email sent from White House computers (e.g., a VPN). I'm not familiar with the Blackberry VPN offerings, but this isn't something that would be that hard to develop.

Remote compromise is much more difficult to solve. You've got a device that's connected to the Internet, and of course it contains software with what you'd expect to be the usual complement of security vulnerabilities. You could perhaps try to tunnel all IP-level communications back through the White House, but you'd still have to worry about everything at the cellular/radio level, which has to come directly through the ordinary cell network. Accordingly, you should expect that a dedicated attacker with access to the device's phone number, transmitter serial number, etc. would be able to remotely compromise it. Such a device could send copies of data to whoever controlled it, record any ambient audio (or video if it had a camera), etc. Protecting against remote compromise isn't like adding a VPN client; you have to worry about the entire surface area of the software, and it's not like you're going to rewrite the entire Blackberry firmware stack. Cutting against this concern is the fact that the president isn't going to be the only person with access to sensitive material. Are we going to deny everyone on the direct presidential staff access to any sort of modern communications device?

Similar considerations apply to tracking. All you need is to know the phone's radio parameters and have an appropriate receiver, and the phone will helpfully transmit regular beacons. Again, though, it's not usually hard to figure out where the president is, surrounded as he is by a bunch of staffers and secret service agents. Additionally, many of those people will have radio transmitters, so it's not clear that denying the president his device will add much value. If it's imperative that the president not be tracked at any particular time, you can simply shut down his device then.

 

November 24, 2008

If you fly much, you've probably heard of Clear, those kiosks near airport security which let you zip through security faster. The way that Clear works is that you sign up, give them some biographical data and biometrics, and of course pay them a bunch of money. They do some kind of background check (unclear how much they actually do) and then issue you a "Clear card", a smart card with your biometrics on it. Then when you go to the airport you present your card, they verify your biometrics, and if everything checks out you get to bypass the security line and go right through the x-ray and magnetometer. As far as I can tell, then, you're just paying $199/year to go to the front of the security line.

The natural question is: if you're just paying to cut in line but you go through the same security screening, what's the purpose of the background check and the biometrics? One could argue, I suppose, that once you know that people were OK, you could give them lighter security screening, but as far as I know that's not what happens. TSA has only two security modes, normal and aggressive (SSSS), and it's fairly easy to avoid aggressive mode with a boarding pass printer, so you don't need a system this heavyweight to securely exempt people from random selection. The cynical might argue that the purpose is to protect Clear's ability to extract money from you by preventing you from giving your card to someone else. On the other hand, you don't really need a thumbprint, let alone an irisprint, to stop that. A photo would be plenty. And of course the background check is totally unnecessary.

I suspect that the real reason here is that Clear was originally conceived as a bypass system where you would be able to get lighter (or perhaps no) screening, and in that context the background check made sense. That didn't work out, but the initial security theatre stuck around. After all, how would you explain that it was somehow no longer needed?

 

November 23, 2008

The recount in the Coleman-Franken Minnesota Senate race is in full swing and so again, as in Florida 2000, we get to observe the spectacle of voting officials trying to figure out just what the heck their fellow citizens were thinking when they marked their ballots. This election was run on optically scanned paper and Minnesota Public Radio has posted a whole set of challenged ballots and a quiz where you can make your own judgement about whether they should be accepted or not. As I've mentioned before, one of the nominal advantages of DRE systems (see, for instance, this post by Ed Felten) is that when you do a recount you don't have to do this, or rather one should say you can't do this, because the DRE just records whatever choice it thinks you made. It may be wrong, but it's (almost) never ambiguous. Unfortunately, DREs and opscan ballots are incomparable in a number of other ways, so it's sort of hard to decide whether this particular feature is decisive. Instead, let's try a thought experiment. Consider the following four voting systems:

A. Ordinary Precinct Count Optical Scan Ballots
You mark your paper ballots and they're scanned in the precinct and then dropped in a box. The scanner detects under and overvotes and spits out your ballot if it thinks it's invalid, but you can't tell if it's misread your vote. The paper ballots are available for subsequent recounts as usual.

B. PCOS + Confirmation
This is just like system A, but before the system accepts your ballot it shows you a confirmation screen indicating its interpretation of how you voted. If you think it's wrong, you can start over again with a new ballot. This isn't a security feature, really: the machine can always lie; it just detects incorrect reads (assuming voters check the confirmation screen).

C. Disposable PCOS + Confirmation
This system is like system B, except where the ballot box would ordinarily be (underneath the scanner) there's a big crosscut shredder which destroys your ballot as soon as it's been recorded. Thus, the only possible recount is re-exporting the vote data from the scanner and re-tabulating at election central.

D. Disposable PCOS + VVPAT
Finally, consider what happens if we take system C but fit it with a VVPAT printer, which records the system's interpretation of your vote, which you can then accept or reject as usual with DREs.

System A is roughly how PCOS elections are run now. As far as I know, System B doesn't exist anywhere. You could imagine retrofitting any PCOS scanner with a big enough screen, but even the biggest screens, like those found on the Hart eScan, are pretty small. Systems C and D correspond roughly to DREs without and with VVPATs. The two main differences are that the UI is lousy and that whereas with a DRE it's not really possible to have an independent record of the intent of the voter,1 with systems C and D we had an independent record, but we systematically destroyed it in order to avoid the ambiguity of being able to go back and second-guess the machine later.

If you buy the argument that it's bad/embarrassing/awkward to have people go back and try to revise the machine count, then you ought to think that systems C and D are better than systems A or B. The difficulty with this position is that we know that the scanners do make mistakes and this basically removes our ability to correct a large class of them. Now, you could argue that any scanner errors will be caught by the users in the confirmation phase, but we know that's not true [*], so we're left tolerating the machine error rate with no real way to correct it. The counterargument here is that the recount has its own error rate, both in terms of ballot interpretation and in terms of ballot handling—it would be one thing if we all agreed on the set of ballots to be audited, but in actual practice the chain of custody of paper ballots can be fairly problematic, so it's not just a matter of deciding the contents of each ballot, but also of making sure you have all the ballots.

Note that superficially system D seems a lot like system B: in both cases we have an electronic record plus a paper trail. But upon deeper inspection they're really quite different: in system D, what we have is a voter verifiable paper audit trail. I.e., the voter could in principle have checked the paper (though Everett et al.'s research suggests this is unlikely), but the paper just reflects the machine's opinion of the voter's intent. By contrast, in B we have a voter created paper audit trail (I don't think VCPAT is a standard term, but it should be), in which we can independently assess the voter's intent from the paper record. This issue becomes increasingly important the higher the probability that the machine will misrecord people's votes, whether through malice or malfunction.

1. I should qualify this a little bit. Obviously, we could just videotape the voter voting, but that would utterly destroy ballot secrecy, which is generally considered to be an invariant of such systems. Cordero and Wagner [*] have described a system for privacy preserving audits of DREs, where they record the UI inputs but engage in scrubbing to try to remove sensitive marks. It's not clear how well this works.

 

November 22, 2008

Flew out of MSP today and saw that the bathrooms are fitted with the new Dyson Airblade hypervelocity hand dryers. I tried it but I can't say I'm very impressed. The first problem is that you have to stick your hands up to the wrists into this gizmo, which looks like it might be some sort of hand guillotine that Saudi Arabia would use to punish you for dryness theft. That would be offputting enough in any case, but coupled with the name Airblade I must admit I felt a bit squeamish.

Anyway, I screwed up my courage and shoved my hands in, which brings us to problem number two: the opening is really small, so you tend to bang your hands on the sides on the way in or the way out. This sort of defeats one of the major advantages of an air hand drier, which is to say that you don't have to touch surfaces which might have been contaminated by the grubby fingers of those with less obsessive-compulsive washing habits than your own. Unfortunately, the small orifice makes the exercise of drying your hands kind of like playing an adult-sized version of Operation. The first time I tried it, I touched the sides on the way in. No buzzer went off, but I decided it was best to rewash my hands and give it another shot anyway.

I will say that the Airblade lives up to its name in one respect: it produces a quite forceful blast of (extremely cold) air. Unfortunately, it doesn't seem to get your hands very dry, and after I pulled them out the tips of my fingers were still dripping water. Part of the problem here seems to be that they want you to pull your hands out slowly, which is difficult to do smoothly without touching the sides (see above), and in my eagerness to preserve my fingers I may have gone too quickly, resulting in suboptimal moisture removal. Even so, I still managed to bang the back of my hands on the way out, producing the need to wash my hands yet again. This time I just wiped them on my pants.

 

November 20, 2008

Sorry for the blogging slowdown. Some combination of IETF prep, IETF, and what seems to be a severe case of anthrax has cut into my productivity.

My slides for this IETF can be found below:

The first two of these are self-explanatory. The third is a little less so. There's a lot of usage of cryptographic hash functions outside of security contexts, just because they're good fingerprint functions/checksums, etc. Most people don't really know exactly what the properties of CRCs are, but (at least they think) they understand MD5 or SHA-1, so they get anchored on those. This probably isn't a great choice. First, you can design much faster hash functions than cryptographic hashes. Second, it's confusing to run into a cryptographic hash in these contexts, especially when the hash is a weak one like MD5. People start asking: what happens now that MD5 is broken? The answer, of course, is nothing, but that takes analysis.

What I think we need is a fast, non-cryptographic hash that is explicitly weak (by the standards of message digests). I'm not saying we need to develop a new hash; I'm no expert but my read of the literature is that plenty of such functions already exist. The community just needs to pick one or two and standardize them so that everyone knows what to use.
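As one example of the kind of function I mean, here's FNV-1a, a well-known fast non-cryptographic hash. The constants are the standard 64-bit FNV parameters; this is an illustration of the category, not a proposal for which function the community should pick.

    # FNV-1a, 64-bit: simple and fast, fine as a fingerprint/checksum,
    # and trivially NOT collision-resistant against an adversary,
    # which is exactly the property being disclaimed here.
    FNV_OFFSET = 0xcbf29ce484222325  # standard 64-bit offset basis
    FNV_PRIME = 0x100000001b3        # standard 64-bit FNV prime

    def fnv1a_64(data: bytes) -> int:
        h = FNV_OFFSET
        for byte in data:
            h ^= byte
            h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # keep it to 64 bits
        return h

    print(hex(fnv1a_64(b"hello world")))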

 

November 9, 2008

Sorry about the delay in completing this part of the series. Things got a bit crazy after the election. Anyway, when we left off I had just covered malicious failure modes in the polling place. Today we'll be talking about failures in the back office, aka election central. There's plenty of stuff to go wrong in the election preparation phase (ballot definition, device programming, etc.), but here I'm mostly interested in vote tabulation, which is done via the Election Management System (EMS).

Depending on the election system being used, tabulation can be performed in a number of ways:

  • In central count opscan systems, the ballots get shipped back to election central, so we have to actually scan them and then tabulate the results.
  • In DRE and precinct count opscan systems, pre-counted results come back from the precinct and simply need to be aggregated and the winners declared.

It's best to take each of these separately.

Central Count Optical Scan
Most plausible CCOS failures are non-malicious: it's pretty hard for an end-user to mount any kind of attack on the scanning system proper other than denial of service. Obviously, the attacker could tamper with their ballot (treat it with acid, glue, or somesuch) to damage the scanner, but it's not clear what this would buy you other than delaying the count. [This isn't to say that there isn't plenty of room for manipulating paper ballots, just that you would probably find it more profitable to do so outside of election central, which is presumably subject to fairly restricted access.]

On the other hand, plenty of stuff can still go wrong. First, ballots don't always scan correctly. If you're lucky, the scanner will just reject the ballot and then it will need to be manually counted. Often the voter's intent is clear, but if it's not, there's no real opportunity for the voter to correct it, and their vote just gets lost. Other than that, the sheet feeder in the scanner can mangle the ballot in various ways, causing inconvenience, manual counting, etc.

That said, if an attacker does manage to take control of the CCOS scanner, the consequences are fairly serious. As with any other piece of computerized election equipment, the attacker can cause it to return any result that he wants. On the other hand, the scanner very rarely needs to be connected to any other piece of computer equipment, so the risk can be minimized with proper controls.

PCOS and DRE
With PCOS and DRE, results get communicated back from the field one of two ways: either on some sort of memory card or on summary results tapes. The big concern with memory cards is that they can serve as a vector for viral spread from compromised precinct machines. For instance, the TTBR Diebold report describes such an attack. As usual, if the EMS is compromised, the attacker can cause it to report any results it chooses. This includes, of course, misreporting any results fed into it from the central count optical scanner. An even more serious concern is that if the same EMS is used for ballot preparation and machine initialization then it can serve as a viral spread vector: the attacker infects a machine in the field, the virus spreads to the EMS, which then compromises every polling place machine. ([HRSW08] has a lot more discussion of this form of attack, as well as countermeasures.)

The data doesn't have to be sent back on memory cards, of course. DREs and opscans typically print out results/summary tapes with the vote totals. These can be manually keyed into the EMS. This mostly controls the viral threat, but now you have to worry about a whole array of errors on the paper tape. As this post by Ed Felten indicates, the quality of the results tapes is pretty low and when coupled with the usual human errors, there's a lot of possibility for the wrong data to end up in the EMS. (This isn't to say that there can't be errors on the memory cards as well, especially with the Premier system which uses some super-old tech; Sequoia and Hart use PCMCIA flash drives, which are just old tech.) In principle, this might get detected by comparison of the precinct-level results tapes, which (at least in Santa Clara County) get posted publicly elsewhere, but I don't know if anyone actually double checks that stuff in practice.
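The double-check itself would be mechanically trivial; the hard part is collecting the tapes. Here's a hypothetical sketch (the nested-dict data format is invented for illustration, not any vendor's actual export):

    # Hypothetical cross-check of precinct results tapes against EMS totals.
    # Both inputs map precinct -> candidate -> votes.
    def cross_check(tape_totals, ems_totals):
        problems = []
        for precinct, tape in tape_totals.items():
            ems = ems_totals.get(precinct)
            if ems is None:
                problems.append(f"{precinct}: missing from EMS")
                continue
            for candidate, votes in tape.items():
                if ems.get(candidate) != votes:
                    problems.append(f"{precinct}/{candidate}: "
                                    f"tape={votes} ems={ems.get(candidate)}")
        return problems

    tapes = {"pct-101": {"Hamilton": 412, "Burr": 390}}
    ems = {"pct-101": {"Hamilton": 412, "Burr": 395}}  # keying error
    print(cross_check(tapes, ems))  # ['pct-101/Burr: tape=390 ems=395']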

Visibility
Of course, almost none of these issues are obvious to voters: you just vote, but you have no real way of knowing if your vote was counted or not (this is deliberate, for vote privacy reasons). And of course it's even harder to verify that any issues have been handled correctly.

Next: attack vectors.

 

November 7, 2008

I was down at UCSD teaching a guest lecture on communications security, and on the way back from lunch we saw this, uh, preacher:

I knew Jesus didn't like unbelievers, but it's news to me about mouthy women and the world is pretty much full of clueless fools.

 

November 4, 2008

Yesterday I wrote about non-malicious e-voting failure modes. In today's installment, we discuss malicious failure modes in polling place devices (tomorrow we talk about the back office).

The most powerful attack is if the attacker can gain actual control of the voting machine. There has been a lot of work on subverting polling place devices, but the bottom line is that it looks to me like an attacker with limited physical access can take control of pretty much any of the standard devices (I'll cover attack vectors later). Obviously, an attacker who controls a voting machine can make it do pretty much anything it's physically capable of, including simulating any non-malicious attack. However, there are also more subtle attacks that an attacker can mount. The TTBR and EVEREST reports provide extensive catalogs of the possible attacks, but I'll just cover some of the highlights here, focusing on attacks designed to alter the vote count.

OPSCAN
Because the optical scan interface is so limited, it's extremely hard to distinguish malicious from non-malicious errors. However, an attacker who controls an optical scanner can cause selective failures of the optical scanner in several interesting ways. First, the scanner can explicitly reject ballots cast for particular candidates; for instance, it could claim that some fraction of ballots with Burr selected were undervoted or overvoted. It's not clear how powerful such an attack is, since presumably voters would keep trying and eventually either the ballots would be submitted for exceptional manual processing or the machine would be taken out of service. On the other hand, this could serve as a vote suppression attack for particular districts. A more sophisticated version of this attack would be to have the scanner count votes and if it detects that a lot of voters are voting Burr rather than Hamilton, it starts failing more frequently, suppressing votes in Burr precincts.

There are also invisible attacks: the scanner could simply record votes for Burr instead of Hamilton. As I noted previously, this would only get caught in an audit with a separate scanner or hand counting, since there's no display to the user of the scanned ballot. Even if there were, the scanner could display "Burr" but record "Hamilton", so there's no real way for the user to detect attack. Not all jurisdictions do audits and it's not clear that even where they are done, they're done in a powerful enough way to detect and correct this kind of tampering (more on auditing in a later post as well).

Even an attacker who doesn't control the machine can still perform a DoS attack: optical scanners and the sheet feed mechanisms inside are relatively easy to jam. If you're not worried about getting caught, you could clearly cover your ballot with glue and then shove it in the scanner. There are probably substances you could use that would dirty the scan heads enough to make votes hard to read. Could you do this selectively? Maybe. Opscan ballots in Santa Clara are two column, but each race is a single column, so you can't prefer one candidate to another. But what you could potentially do, depending on the scanner, is dirty the section of the head over a given race, thus suppressing votes for just that race, which would let you have a semi-selective effect. This would be an invisible attack unless the scanner is configured to report undervotes.

DREs
There's an enormous amount of room for attack with DREs. You could clearly mount a simple vote-flipping attack, simulating a flakey touchscreen and making the machine visibly shift votes from Hamilton to Burr. However, you can do far better than that. The attack that's most obvious—and has generated the most concern—is simply to have the machine record an entirely different set of votes than the voters are voting. [In the research community it's generally considered déclassé to just stuff the ballot results with fake votes, because at least in principle jurisdictions record the total number of votes and can compare them; instead you change users' votes.] This isn't even noticeable to the voter, since the UI all looks right. It's just that a different vote is recorded. Without an independent record of your vote, there's no straightforward way to detect or correct this kind of attack.

If a VVPAT is in use (see the previous post for a description of a VVPAT), the attacker's job becomes a little harder. If he just creates totally phony records, then the electronic results won't match the results on the VVPAT. The obvious attack here is what's called a "presentation attack". The machine accepts the user's input but somewhere along the line it changes the vote. Maybe it does it on initial input but more likely it does it before the summary screen or just on the VVPAT. Studies show that users aren't very good at checking these and so mostly this will work. Even if the user catches it, the machine just lets them correct "their" mistake and then perhaps waits a while before attacking again. A really sophisticated machine might be able to monitor user behavior to try to pick out users who seemed uncertain about how to use the machine and attack them preferentially. The advantage of this kind of attack is that it makes the VVPAT and the electronic records line up, making audits much harder.
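A rough way to quantify the presentation attack's odds: if each flipped vote is independently noticed with probability equal to the verification rate c, the chance that any of k flips gets noticed is 1 - (1 - c)^k. A back-of-the-envelope sketch (the rates are made-up illustrations, not Everett et al.'s measured numbers):

    # Chance that at least one of `flips` altered votes is noticed, assuming
    # each voter independently checks with probability `check_rate`.
    # Both parameter values below are illustrative assumptions.
    def p_noticed(check_rate, flips):
        return 1 - (1 - check_rate) ** flips

    for c in (0.05, 0.25, 0.50):
        print(f"check rate {c:.0%}: P(noticed, 10 flips) = {p_noticed(c, 10):.2f}")

Even modest verification rates would surface some flips; what saves the attacker is that, as described above, a surfaced flip just looks like a voter mistake that gets "corrected."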

Other attacks are possible too: the attacker controls the printer, so perhaps he can print the VVPAT as normal and then when the voter casts their vote, he waits and then prints "VOID" or "REJECTED" (depending on how the machine would ordinarily display rejected ballots) and then prints his own votes of choice. This just looks like a bunch of extra printer activity and since voters don't have a lot of idea how the VVPAT is supposed to behave, it's not at all clear they would notice.

As with OPSCAN, there are also a broad variety of DoS and selective DoS attacks. The machines can be programmed to slow down or crash when you vote for specific candidates. They can fail to display specific races. For instance, if the attacker wants to influence a Senate race, the machine could detect that you voted for party A for president and simply not show you the Senate race, thus pushing the race toward party B. Again, you might not notice this in the VVPAT/summary screen. Even without full control of the machine, it's probably possible to crash it in various ways without getting blamed.

Malice, incompetence, etc.
I said at the beginning of this post that an attacker can simulate more or less any non-malicious failure—and has a real incentive to do so. However, as anyone who has worked with computers can tell you, they are perfectly capable of behaving in lots of surprising ways without any malicious activity at all. Any report of failure needs to be evaluated using Hanlon's Razor. We have plenty of evidence that voting machines in particular can do things that look like attacks even when it's pretty clear they are not (see this video, for instance), so while certainly we have to be wary of attack, it's probably a mistake to jump to the conclusion that there's been an attack just because you see something funny.

Next: Malicious failure modes at election central

UPDATE: Fixed cut and paste error.

 

November 2, 2008

Voting machines (DREs and optical scanners) are computers and like any piece of computer equipment, they can fail in a more or less unlimited number of exciting ways. In this post, we focus on non-malicious failures, i.e., those where no attacker is trying to induce failures. It's useful to divide failures into two categories: voter visible and voter invisible. DRE "Vote flipping" is a classic voter visible error. You plan to vote for Jefferson, but when you press the touchscreen, the checkbox appears next to Burr. But remember that your vote is just being recorded on some memory card and just because the screen says "Jefferson" (and the VVPAT says Jefferson, if there is one) doesn't mean the memory card doesn't say "Burr". That's an invisible error.

We can divide visible failures into two additional categories: vote recording errors and system failures. Both of the errors above are recording errors, but the system can also just fail in some more obvious and crude way. For instance, it might not turn on, or might not respond to input or might crash as soon as you try to cast your vote. This shouldn't surprise you; they're computers after all, and crashing is one of the things that computers do best. [The Diebold AV-TSX actually runs Windows CE, so I suppose there is some chance you could get a Blue Screen of Death.]

OPSCAN
Most precinct count optical scanners are fairly opaque devices: you insert your ballot and either it accepts it or rejects it. If it detects an error (overvotes or undervotes, depending on programming), it will generally spit the ballot out with some sort of error message. Even within this range, a variety of visible errors are possible. The machine could accept invalid ballots (though this probably won't be noticed unless the voter deliberately does something wrong); it could reject valid ballots. It could also refuse to accept any ballots at all (i.e., not feed them into the scanner) or jam while processing a ballot. Finally, it could just flat out crash or stop working in some other gross way.

By contrast, vote recording errors with opscan systems tend to be silent, as long as they don't turn into an error that the machine is programmed to check. In principle, the optical scanner could display who you voted for and give you an opportunity to accept or reject, but as far as I can tell, none of the popular scanners actually do this. Indeed, the Sequoia Insight display looks to me to be too small to plausibly display your votes, so it's probably not just a matter of software to add a feature like this.

OurVoteLive's database of voting problems shows another, non-computerized, failure mode of opscan systems: bleeding pens.

DRE
DREs have a much fancier UI than opscan systems and so can freak out in a whole bunch of fun ways. The one that gets a lot of attention is "vote flipping", but there are actually a number of different ways in which the votes recorded can be inconsistent with the user's intent. A DRE has (at least) the following values:

  • The user's intended selection.
  • The UI gesture the user makes.
  • The option that the UI displays as selected.
  • The option selected in the summary screen.
  • The option selected in the VVPAT (if any).
  • The option recorded on the memory card.
There are plausible paths that could lead to effectively any combination of values here. The only one of these that voters are really likely to notice is that the option that the UI displays as selected doesn't match their intended vote [Everett reports that most users don't check the summary screen]. And of course, if the recording on the memory card is wrong, that's a silent failure. It's worth noting that the "vote flipping" people complain about seems much more likely to be a bug than an attack, since a plausible attacker would do something more subtle.

Of course, there are lots of kinds of system failures. Pretty much anything you've ever seen on your desktop computer can show up on a DRE. The computer can lock up, skip races on the ballot, crash, etc. One of the big failure modes seems to be the VVPAT printers: when I took Santa Clara County pollworker training they told us we would have three printers for our one lonely DRE, just in case some broke down.

It's worth mentioning that DRE breakdowns tend to be more serious in terms of impact on voting: if a precinct count scanner breaks down in an obvious way, you just move to paper ballots in a box and scan them later (though some failures, like preferential rejection of votes for certain candidates, may have more serious effects). By contrast, if the DREs break down, people can't vote and if you don't have enough emergency paper ballots on hand, it can actually stop people from voting, or at least create incredibly long lines if just some of them are broken.

Back Office Failures
There are also plenty of failures that can happen after the polls close, either at the precincts or in the back office. For instance, the memory cards used to carry the results can be corrupted. Most of these are recoverable at some level, since the machines print out paper receipts and of course you can recount the paper ballots and the VVPAT (if any). The tabulation software at election central can screw up as well, but that's independently checkable from the precinct tallies.

In addition, the central count scanners can fail in the same ways as the precinct count scanners can fail. Visible failures, like rejected ballots, can be dealt with by manual counting, but there can be invisible failures which can only be caught via audits and recounts, which generally only sample a small fraction of ballots.
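As a rough illustration of why small samples are a problem: if b of N precincts contain miscounts and you audit n precincts chosen uniformly at random, the chance of examining at least one bad precinct is 1 - C(N-b, n)/C(N, n). A sketch with made-up numbers (real audit design is considerably more involved):

    from math import comb

    # P(an audit of n of N precincts hits at least one of b bad ones).
    def p_catch(N, b, n):
        return 1 - comb(N - b, n) / comb(N, n)

    # Invented example: 500 precincts, 20 with miscounts,
    # auditing 1% vs. 10% of precincts.
    print(f"{p_catch(500, 20, 5):.2f}")   # ~0.19
    print(f"{p_catch(500, 20, 50):.2f}")  # ~0.88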

Non-Technical Failures
The above partial catalog of technical failures isn't to say that there aren't a huge number of potential non-technical failures. A huge amount of our pollworker training was devoted to the various anomalies we might have to deal with, ranging from voter's address is wrong to voter is a convicted felon. I may get around to talking about those at some later point.

Next: Malicious failure modes.

 

November 1, 2008

Most elections in the US use what's called first past the post (FPTP) voting. This just means that whoever gets the most votes wins, regardless of whether they get a majority of the votes. Some other countries use what's called Instant Runoff Voting (IRV). The idea behind IRV is that voters rank order their choices. Then, if no candidate has a majority of first place votes, the candidate with the fewest first place votes is eliminated and each of that candidate's ballots is transferred to its highest-ranked remaining candidate; this repeats until someone has a majority. Gigabytes of material have been written about the virtues of IRV versus first past the post versus approval voting, and I don't propose to rehash that here. Instead, I want to talk about the impact of IRV on a voting system.

The issue is this: in an FPTP system, the machines can simply record the number of votes for any candidate, since this is sufficient to resolve the contest.1 In an IRV system you need to record the rankings. Here's an example of why:

Let's say we have an election with three candidates, which we'll call Alice, Bob, and Charlie, and denote A, B, C. We have five voters, numbered 1-5. Now, consider the following table of preferences:

Voter:        1  2  3  4  5
1st choice:   A  A  B  B  C    --  A:2, B:2, C:1
2nd choice:   C  C  C  A  B    --  A:1, B:1, C:3
3rd choice:   B  B  A  C  A    --  A:2, B:2, C:1

In the first round, A and B are tied, so we cross out C. This has no impact on voters 1-4, but voter 5's vote now transfers to B, so now it's 3-2 for B and B wins.

Now consider the following table of preferences:

Voter:        1  2  3  4  5
1st choice:   A  A  B  B  C    --  A:2, B:2, C:1
2nd choice:   C  B  C  C  A    --  A:1, B:1, C:3
3rd choice:   B  C  A  A  B    --  A:2, B:2, C:1

This has the same totals as before, but this time, when we cross off C, voter 5's vote transfers to A and now it's 3-2 for A and A wins. I.e., totals aren't enough to determine the winner of an IRV contest. (At this point I can hear the approval voting enthusiasts screaming that approval doesn't have this problem. That's true but irrelevant. Take it outside.)
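To make the tabulation procedure concrete, here's a minimal IRV tabulator in Python, run on the two elections above. It's a sketch: ties among losers are broken arbitrarily, which a real implementation couldn't get away with.

    from collections import Counter

    def irv_winner(ballots):
        """Each ballot is a tuple of candidates, best first."""
        remaining = set(c for b in ballots for c in b)
        while True:
            # Count each ballot for its highest-ranked remaining candidate.
            counts = Counter(next(c for c in b if c in remaining)
                             for b in ballots)
            leader, votes = counts.most_common(1)[0]
            if votes * 2 > len(ballots):
                return leader
            # Eliminate the candidate with the fewest votes (ties arbitrary).
            remaining.remove(min(remaining, key=lambda c: counts[c]))

    election1 = [("A","C","B"), ("A","C","B"), ("B","C","A"),
                 ("B","A","C"), ("C","B","A")]
    election2 = [("A","C","B"), ("A","B","C"), ("B","C","A"),
                 ("B","C","A"), ("C","A","B")]
    print(irv_winner(election1))  # B
    print(irv_winner(election2))  # A

Identical per-rank totals, different winners: the tabulator needs the ballots themselves, not just the counts.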

In order to make IRV work, you need to report each voter's preferences. The difficulty here is that this represents a potential threat to voter privacy because it means that election central needs to have access to the voter's ballot information (this is true with central count optical scan, but not with precinct count or DREs). If there is some way to tie the ballot to a voter, you could know how a voter voted, which could be used for vote buying or coercion.

One standard technique for this sort of tying is what's called "pattern voting". The voter is instructed to vote for a specific set of candidates with one race being the one the vote buyer cares about and the other ones being used to encode the voter's identity. Then the attacker looks for that ballot in the reported ballot lists. One natural defense is to disaggregate the contests, so that while you keep the preferences for a given race, they are disconnected from other races. However, when IRV is used, you can encode your information in the ordering of the individual contests, typically without too much impact on the election. This works especially well in races with a large number of candidates, where it doesn't matter too much how big your effect is on the 29th and 30th candidate.
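The arithmetic behind pattern voting under IRV is stark: k candidates can be ordered in k! ways, so the low-preference tail of even one long race carries far more than enough distinct patterns to identify an individual ballot. A quick illustration (the precinct size is an assumed round number, not a real statistic):

    from math import factorial

    # Orderings available in the "don't care" tail of a single race.
    for k in (5, 10, 15):
        print(f"{k} candidates: {factorial(k):,} orderings")

    precinct_size = 1000  # assumed round number
    print(factorial(10) > precinct_size)  # True, by a factor of ~3,600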

There has been some work on cryptographic techniques for IRV (e.g., [*]), but the uptake of cryptographic voting in general has been minimal.

1. Though it turns out that many DREs actually record the individual vote results. See, for instance, Issue 5.2.19 of the Premier Elections TTBR report.

 
Over the next week or so you're going to hear a lot of complaints about voting machine failures. Unfortunately, the signal to noise ratio tends to be fairly low, so it's hard to figure out what's going on. Over the next few days I'm going to post a bit about voting technology and what kind of things can go wrong. In this post, we provide an overview of the major kinds of voting system in use in the US.

The majority of voting systems in the US fall into one of two broad types:

Optically Scanned Paper Ballots (opscan)
These are pretty much what they sound like. You're issued a paper ballot (usually card stock, actually) and you fill in a bubble or arrow corresponding to the candidate you want to select. Here's an example. These ballots are then optically scanned and the votes are recorded.

Opscan ballots can be run in either a precinct count optical scan (PCOS) or a central count optical scan (CCOS) mode. In PCOS, the scanner is at the precinct; you mark up your ballot and then you feed it into the scanner. At the end of the election, the scanner outputs the results on a piece of paper or a memory card. The results then get carried back to election central where they can be aggregated to determine the final winner. The ballots also get sent back in case a recount is needed, but they're not used as part of the main count.

In CCOS, there is no scanner at the precinct. Voters just drop ballots into boxes and then they are counted on one big scanner (or maybe many) back at election central, where the votes are aggregated and the winner is determined. Some jurisdictions run hybrid systems where ballots cast at the polling place are PCOS counted but absentee and vote-by-mail ballots are counted centrally.

A PCOS system has two major advantages. First, because votes are scanned while the voter is still present, errors can be caught and the voter can correct his ballot onsite. Thus, the rate of uncorrected errors is quite different ([*], citation due to Joe Hall). Second, it creates a set of independent records that might be useful for detecting some ballot stuffing attacks. The big disadvantage of a PCOS system is that the scanner is out in the field and is potentially subject to attack by voters or pollworkers. An attacker who takes over the software of the PCOS system can make it return any results he wants, which won't be detected unless an audit or recount is run. By contrast, the central count scanner can be kept in a secured room and thus is harder for outsiders to attack.

The advantage of both types of systems is that there is a paper record, so in the worst case you can recount every single ballot with a new scanner or by hand. More on this later.

Direct Recording Electronic (DRE)
The other major type of voting system is what's technically called a Direct Recording Electronic (DRE). These are commonly called touchscreens, but not all are. A DRE is just a computer where you enter your vote. The computer outputs the votes (or the vote totals), just like the PCOS scanner. They then get carried back to election central for aggregation and contest resolution. Most of these machines are in fact touchscreens, but older ones often used an array of buttons, and the Hart system uses a clickwheel. One big advertised advantage of DREs is that they can be fitted with a variety of accessibility devices (audio, sip/puff, etc.)

Many states require independent paper records and so most DREs can be fitted with what's known as a voter verifiable paper audit trail (VVPAT) printer. A typical VVPAT is a reel-to-reel printer with the paper under glass. Here's a not so great picture of a Hart voting machine with a VVPAT fitted (on the left). The way the VVPAT works is that once you've entered your vote, the DRE prints out a summary on the tape. You can then either approve or reject it. If you approve it, the vote is cast. If you reject it, you can vote again. The idea here is that the paper trail represents an independent check on the machine, since it can't just return any votes it wants; the results need to match the paper (at least if you run an audit). More on this later.

Electronic Ballot Markers (EBM)
There's one less commonly used system that's starting to get some traction, at least in terms of mindshare: electronic ballot markers. An EBM is basically a DRE which instead of recording the ballot results, prints out a paper ballot which can be fed into an optical scanner. The idea here is that there is built in error checking, since the computer can prevent invalid choices, but that the ballots can then be checked using central counting, so there is less of a security dependence on the machine. Also, like DREs, EBMs are more disabled-friendly.

Next: Non-malicious failure modes