Voting: October 2011 Archives

 

October 18, 2011

Following up on their demonstration attack on Diebold voting machines (writeup, my comments), the Argonne Vulnerability Assessment Team has developed a set of Suggestions for Better Election Security. My review comments are below:

I've had a chance to go over this document, and while some of its suggestions are valuable, many seem naive, impractical, or actively harmful. More generally, I don't see that it derives from any systematic threat model or cost/benefit analysis about which threats to address; merely following the procedures here would--at great expense--foreclose some security threats while leaving open others that are arguably more serious, both in terms of severity and ease of attack. Finally, many of the recommendations here are quite inconsistent with the current state of election practice. That's not necessarily fatal, since that practice is in some cases flawed, but there doesn't seem to be any acknowledgement that these seemingly minor changes would actually require radically reworking election equipment and procedures.

If this document is to be useful rather than harmful, it needs to start with a description of the threat model--and in particular the assumed attacker capabilities--and then proceed to a systematic analysis of which threats it is economical to defend against, rather than just being a grab bag of isolated security recommendations apparently designed to defend against very different levels of threat.

Pre- And Post-Election Inspections
The authors recommend:

... at least 1% of the voting machines actually used in the election-randomly chosen-should be tested, then disassembled, inspected, and the hardware examined for tampering and alien electronics. The software/firmware should also be examined, including for malware. It is not sufficient to merely test the machines in a mock election, or to focus only on cyber security issues!

This document does not specify how the hardware must be "examined", but a thorough examination, sufficient to discover attack by a sophisticated attacker, is likely to be extremely time consuming and expensive. A voting machine, like most embedded computers, consists of a number of chips mounted on one or more printed circuit boards as well as peripherals (e.g., the touchscreen) connected with cabling. This document seems to expect that "alien electronics" will be a separate discrete component added to the device, but this need not be so. A moderately sophisticated attacker could modify or replace any of these components (for instance, by replacing the chips with lookalike chips). As most of these components are sealed in opaque plastic packaging, assessing whether they have been tampered with is no easy matter. For instance, in the case of a chip, one would need to either remove the chip packaging (destroying it in the process) or x-ray it and then compare to a reference example of the chip in order to verify that no substitution had occurred. These are specialized and highly sophisticated techniques that few people are qualified to carry out, and yet this document proposes that they be performed on multiple machines in every jurisdiction in the United States, of which there are on the order of 10,000.
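To give a sense of scale, here's a quick back-of-the-envelope calculation of what that teardown requirement implies nationally. Only the jurisdiction count comes from the text above; the per-jurisdiction inventory, teardown time, and labor rate are figures I'm simply making up for illustration.

```python
# Rough back-of-the-envelope for the "1% teardown" requirement at national
# scale. Only the jurisdiction count comes from the text; every other figure
# is an assumption invented for illustration.

JURISDICTIONS = 10_000            # order-of-magnitude figure from the text
MACHINES_PER_JURISDICTION = 150   # assumption: a mid-sized county's inventory
SAMPLE_RATE = 0.01                # the document's 1% minimum
HOURS_PER_TEARDOWN = 16           # assumption: disassembly, x-ray/decap, comparison
LOADED_HOURLY_RATE = 150          # assumption: specialist labor, USD/hour

machines_inspected = JURISDICTIONS * MACHINES_PER_JURISDICTION * SAMPLE_RATE
labor_hours = machines_inspected * HOURS_PER_TEARDOWN
labor_cost = labor_hours * LOADED_HOURLY_RATE

print(f"machines torn down per election: {machines_inspected:,.0f}")
print(f"specialist labor hours:          {labor_hours:,.0f}")
print(f"rough labor cost:                ${labor_cost:,.0f}")
```

Even with these deliberately modest assumptions, you end up needing tens of thousands of specialist hours every election cycle, performed by people qualified to do chip-level forensics.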

Moreover, this level of hardware analysis is useless against a broad spectrum of informational threats. An attacker who can rewrite the device's firmware--trivial with physical access to the internals, though the California TTBR discovered a number of vectors which did not require such access--can program his malware to erase itself after the election is over, thus evading inspection. In addition, to the extent that the microprocessors in the device contain firmware and/or microcode, it may not be possible to determine whether they have been tampered with, since that would require interfaces for reading the firmware which do not themselves depend on the firmware; these do not always exist. Absent some well-defined threat model, it is unclear why this document ignores these threats in favor of less effective physical attacks.
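To make the evasion pattern concrete, here's a minimal hypothetical sketch (Python standing in for firmware boot logic; the date, path, and function names are all made up) of malware that behaves normally during pre-election testing, misbehaves only on election day, and then restores a clean image so that any later forensic inspection matches the reference.

```python
import datetime

# Hypothetical pseudocode for illustration only; real firmware would be
# C on an embedded platform, not Python.

ELECTION_DAY = datetime.date(2011, 11, 8)     # made-up date
CLEAN_IMAGE = "/backup/stock_firmware.bin"    # made-up path

def reflash(image_path):
    """Stub: on real hardware this would rewrite the boot flash in place."""
    print(f"reflashing from {image_path}")

def run_stock_ballot_logic():
    print("normal ballot handling")

def run_tampered_ballot_logic():
    print("tampered ballot handling (shifts some fraction of votes)")

def boot():
    today = datetime.date.today()
    if today < ELECTION_DAY:
        run_stock_ballot_logic()       # behaves normally during pre-election testing
    elif today == ELECTION_DAY:
        run_tampered_ballot_logic()    # misbehaves only when it counts
    else:
        reflash(CLEAN_IMAGE)           # self-erase: later forensic images match the reference
        run_stock_ballot_logic()

boot()
```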

Finally, doing any of this inspection requires extremely detailed knowledge of the expected internals of the voting machine (it is insufficient to simply do an exact comparison against a single reference unit because there is generally some manufacturing variation due to inter-run engineering fixes and the like). This information would either need to be discovered through expensive reverse engineering or obtained by having the vendor release it, which vendors have historically been very reluctant to do, especially as releasing it to every county in the US would be much like publishing it.

Official and Pollworker Verification
This document recommends that voting officials and pollworkers be subject to a number of verification requirements. In particular:

  • Background checks, including interviews with co-workers
  • Citizenship verification
  • Positive physical identification of poll workers prior to handling sensitive materials
  • Test bribery
These recommendations are highly discordant with existing practice. In real jurisdictions, it is extremely difficult to find poll workers (hence the high number of retirees) and they are paid relatively nominal sums (~$10/hr). In my experience, poll workers feel they are performing a public service and are unlikely to be pleased to be treated as criminals, so I suspect many would balk at a background check. It's also unclear whether poll workers even count for the purposes of the background check requirement. The authors write:

Minimum: All election officials, technicians, contractors, or volunteers who prepare, maintain, repair, test, inspect, or transport voting machines, or compile "substantial" amounts of election results should have background checks, repeated every 3-5 years, that include a criminal background history, credit check, and (when practical) interviews with co-workers.

Volunteers certainly set machines up in the polling place; I'm not sure whether this counts as "preparing". It wouldn't surprise me if volunteers transported machines as well. The bottom line is that this requirement is problematic either way: if poll workers do have to get background checks, it's really invasive; if they don't, you're ignoring a category of threat from people who have very high levels of machine access (assuming you think that background checks do anything useful in the first place, which seems rather dubious in this case).

The requirement for positive physical identification seems extremely impractical. As noted above, typical polling places are operated by semi-volunteer poll workers. Given the ease of acquiring false identification, it seems highly unlikely that they will be able to validate the identity of either the poll workers under their supervision or the (alleged) election officials to whom they are supposed to deliver election materials. Similarly, it's not clear to me that verifying US citizenship does anything useful. Is there some evidence that non-citizens are particularly likely to want to tamper with elections, or that it's especially difficult for foreign countries which want to tamper with elections to find US citizens to do it for them?

This document recommends attempting to bribe a subset of poll workers. I'd be interested to learn whether any systematic study of this has been done on the likely subject population--i.e., does this sort of intervention actually reduce the effective level of bribery?

Seal Practice
This document contains a number of detailed recommendations about seal practice (required level of training, surface preparation, inspection protocols). I don't think there's any doubt that seals are a weak security measure and much of the research showing that comes from the Argonne group. However, it's also not clear to me that the measures described here will improve the situation. Extensive human factors research in the Web context shows that users typically ignore even quite obvious indications of security failures, especially in contexts where they get in the way of completion of some task.

Is there research that shows that (for instance) 10 minutes of training has any material impact on the detection rate of fake seals, especially when that detection is performed in the field?

The authors also write:

Minimize the use of (pressure sensitive) adhesive label seals

I don't really understand how this recommendation is operationalizable: Existing voting equipment is designed with numerous points of entry which are not obviously securable in any way, and for which adhesive seals appear to be the most practical option. What is the recommendation for such equipment?

Excessive Expert Manpower Requirements
The authors write:

Minimum: Election officials will arrange for a local committee (pro bono if necessary) to serve as the Election Security Board. The Board should be made up primarily of security professionals, security experts, university professors, students, and registered voters not employees of the election process. The Board should meet regularly to analyze election security, observe elections, and make suggestions for improved election security and the storage and transport of voting machines and ballots. The Board needs considerable autonomy, being able to call press conferences or otherwise publicly discuss its findings and suggestions as appropriate. Employees of companies that sell or manufacture seals, other security products often used in elections, or voting machines are not eligible to serve on the Board.

The United States has something like 10,000 separate election jurisdictions. If each of these convenes a board of 3-5 people, then approximately 30,000-50,000 security experts will be required. Given that all existing voting system reviews have been short-term affairs which drew from the entire country to gather ~30 experts--in many cases compensated--it's hard to see where we are going to find 1,000 times that many people for a largely thankless, long-term engagement.
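Spelling out the arithmetic (the jurisdiction count is the same order-of-magnitude figure used throughout; the ~30-expert figure is the rough size of past statewide review teams mentioned above):

```python
# Head count implied by one 3-5 person Election Security Board per jurisdiction.
jurisdictions = 10_000            # order-of-magnitude figure from the text
board_low, board_high = 3, 5
past_review_experts = 30          # rough total drawn nationally for past reviews

needed_low = jurisdictions * board_low
needed_high = jurisdictions * board_high
print(f"experts needed: {needed_low:,} to {needed_high:,}")
print(f"vs. available:  ~{past_review_experts}, a shortfall factor of ~{needed_low // past_review_experts:,}")
```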

Miscellaneous
The authors recommend that:

The voting machines for the above inspection (or trial bribery discussed below) should be randomly chosen based on pseudo-random numbers generated by computer, or by hardware means such as pulling numbers or names from a hat.

Verifiably generating random values is a significantly harder problem than this makes it sound like. In particular, pulling names and numbers from a hat is trivial to game.
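By way of contrast, here's a minimal sketch of what a more auditable selection procedure might look like: derive the sample deterministically from a seed created in public (say, dice rolled in front of observers), so that anyone can recompute the selection afterwards. The machine inventory and dice rolls below are made-up examples, not a specific protocol recommendation.

```python
# Deterministic, publicly recomputable machine selection from a public seed.
# The inventory labels and dice rolls are hypothetical examples.

import hashlib

def select_machines(machine_ids, public_seed, sample_size):
    """Pick sample_size machines from machine_ids using a publicly
    verifiable seed (e.g., concatenated dice rolls made in public view)."""
    scored = []
    for mid in machine_ids:
        digest = hashlib.sha256(f"{public_seed}:{mid}".encode()).hexdigest()
        scored.append((digest, mid))
    scored.sort()                          # ordering is fixed entirely by the seed
    return [mid for _, mid in scored[:sample_size]]

machines = [f"TSX-{n:04d}" for n in range(1, 201)]      # hypothetical inventory
seed = "2011-11-08/public-dice:4-1-6-6-2-3-5-1-2-6"     # rolled in front of observers
print(select_machines(machines, seed, sample_size=2))
```

The point is that the randomness source is both unpredictable in advance and verifiable after the fact, which pulling names out of a hat is not.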

Recommended: Each individual in the chain of custody must know the secret password of the day or the election before being allowed to take control of the assets.

Any secret that is distributed this widely is hardly likely to remain a secret for long.

Recommended: Before each election, discuss with poll workers, election judges, and election officials the importance of ballot secrecy, and the importance of watching for miniature wireless video cameras in the polling place, especially mounted to the ceiling or high up on walls to observe voters' choices. The polling place should be checked for surreptitious digital or video cameras at least once on election day.

Elections are typically conducted in spaces which are otherwise reserved for other purposes and therefore are not empty. In my experience with such spaces, it would be very difficult to practically inspect for a surreptitious camera placed in the ceiling and concealed with any level of skill. This is particularly difficult in spaces with drop ceilings, ventilation ducts, etc.

 

October 7, 2011

Argonne Labs's demonstration attack on a Diebold voting machine is getting a lot of press. The article above has the details, but briefly, what the Argonne team did was to insert some malicious "alien" electronics between the CPU and the touch screen. Unsurprisingly, that device can modify input from the touch screen and/or output to the touch screen, allowing the attacker to tamper with the election. To read the press coverage and the quotes given by the authors, you might get the impression that this was something new. For instance:

"This is a fundamentally very powerful attack and we believe that voting officials should become aware of this and stop focusing strictly on cyber [attacks]," says Vulnerability Assessment Team member John Warner. "There's a very large physical protection component of the voting machine that needs to be addressed."

These comments aside, there's not really any new information here; rather, it was completely obvious that this sort of thing was possible to anyone who knew how the devices were constructed. It's well known that the only defenses against this are physical security of the machines themselves (tamper seals, locks, custody, etc.), and that those defenses are extremely weak. Alex Halderman and his team demonstrated some not-dissimilar attacks a while back on the Indian Electronic Voting Machines. The EVEREST report described a man-in-the-middle attack on the iVotronic interface to the VVPAT vote printer. Indeed, the same team from Argonne demonstrated a similar attack on a Sequoia system in 2009.

There are a number of reasons why voting researchers have historically focused on informational attacks (as I've said before, "cyber" isn't the word that computer scientists would typically use). First, they're easier to do wholesale. While it's moderately expensive—though not that expensive—to reverse engineer the software and develop an exploit and/or replacement software, once you've done that you can make as many copies as you want. Moreover, if you have a good exploit (like many of the ones described in the TTBR), you may be able to easily install it with very brief physical access, without opening the case, and perhaps without even violating any security seals. For obvious reasons, attacks which can be mounted by voters seem a lot more interesting than attacks which involve semi long-term access to the machine. It's not exactly likely that your average voter is going to be allowed to open the machine in the middle of the election.

Moreover, in some cases, informational attacks (i.e., viruses) have been demonstrated that only require contact with a small number of voting machines. The idea here is that you have temporary access to a given machine, infect it with the virus, and then this somehow spreads to every machine in the county. By contrast, a physical attack like this requires tampering with every voting machine.

Related to this issue, informational attacks can be easier to conceal. If you need to install some sort of attack hardware and have it present during the election, you're either going to need to get access after the election or (a) lose the device and (b) have a high risk of having it discovered in any subsequent inspection. By contrast, software/informational attacks can be designed so that the standard (i.e., authorized) machine inspection mechanisms won't discover them at all, and in many cases can be programmed to self-destruct after the election. It's not clear that there's any plausible non-destructive mechanism that can be used to post-facto detect the tampering (see the TTBR reports again).

Moreover, as I've said, the possibility of physical attacks is totally obvious once you know you can get into the case (with or without violating the tamper seals), and there's a certain level of "difficulty bias" here. Since everyone already knew that physical attacks were possible, once it was demonstrated that you could get into the machine it wasn't that important to demonstrate the obvious end-to-end attack. However, since software-based attacks were (a) harder and (b) more useful, it was natural for researchers to spend more time working to demonstrate those. That certainly doesn't mean that researchers were somehow unaware that physical attacks were possible.

 