Shocked, shocked to find "non-cyber" attacks on voting systems

Argonne National Laboratory's demonstration attack on a Diebold voting machine is getting a lot of press. The article above has the details, but briefly, the Argonne team inserted some malicious "alien" electronics between the CPU and the touch screen. Unsurprisingly, that device can modify input from the touch screen and/or output to the touch screen, allowing the attacker to tamper with the election. Reading the press coverage and the quotes from the authors, you might get the impression that this was something new. For instance:

"This is a fundamentally very powerful attack and we believe that voting officials should become aware of this and stop focusing strictly on cyber [attacks]," says Vulnerability Assessment Team member John Warner. "There's a very large physical protection component of the voting machine that needs to be addressed."

These comments aside, there's not really any new information here; rather, it was completely obvious to anyone who knew how the devices were constructed that this sort of thing was possible. It's well known that the only defenses against it were physical security of the machines themselves (tamper seals, locks, custody, etc.), and that those defenses were extremely weak. Alex Halderman and his team demonstrated some not-dissimilar attacks a while back on the Indian Electronic Voting Machines. The EVEREST report described a man-in-the-middle attack on the iVotronic interface to the VVPAT vote printer. Indeed, the same team from Argonne demonstrated a similar attack on a Sequoia system in 2009.
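To make the attack class concrete, here's a minimal sketch of what an interposer between the touch screen and the CPU does. This is not Diebold's (or Argonne's) actual design; the event format, function name, and candidate labels are all invented for illustration:

```python
# Hypothetical sketch of a touch-screen man-in-the-middle. The malicious
# hardware sits on the wire between the input device and the CPU and can
# rewrite touch events before the voting application ever sees them.

def interpose(touch_event, remap):
    """Rewrite a touch event according to an attacker-chosen mapping.

    touch_event: dict describing the screen region the voter touched,
                 e.g. {"region": "candidate_A"}
    remap:       attacker-chosen remapping of regions, e.g.
                 {"candidate_A": "candidate_B"}
    """
    tampered = dict(touch_event)
    region = touch_event["region"]
    # Regions the attacker doesn't target pass through unchanged, so the
    # machine behaves normally for every contest the attacker ignores.
    tampered["region"] = remap.get(region, region)
    return tampered

# A voter touches candidate A; the application silently records candidate B.
remap = {"candidate_A": "candidate_B"}
assert interpose({"region": "candidate_A"}, remap)["region"] == "candidate_B"
# An untargeted touch is passed through untouched.
assert interpose({"region": "candidate_C"}, remap)["region"] == "candidate_C"
```

The same device can of course rewrite traffic in the other direction (CPU to display), so the voter's review screen can be made to show whatever the voter expects to see.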

There are a number of reasons why voting researchers have historically focused on informational attacks (as I've said before, "cyber" isn't the word that computer scientists would typically use). First, they're easier to do wholesale. While it's moderately expensive (though not that expensive) to reverse engineer the software and develop an exploit and/or replacement software, once you've done that you can make as many copies as you want. Moreover, if you have a good exploit (like many of those described in the TTBR), you may be able to install it with very brief physical access, without opening the case, and perhaps without even violating any security seals. For obvious reasons, attacks which can be mounted by voters seem a lot more interesting than attacks which require semi-long-term access to the machine. It's not exactly likely that your average voter is going to be allowed to open the machine in the middle of the election.

Moreover, in some cases, informational attacks (i.e., viruses) have been demonstrated that only require contact with a small number of voting machines. The idea here is that you have temporary access to a given machine, infect it with the virus, and then this somehow spreads to every machine in the county. By contrast, a physical attack like this requires tampering with every voting machine.
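The wholesale-versus-retail distinction can be illustrated with a toy propagation model. The assumption here (borrowed from the demonstrated voting-machine viruses alluded to above, not from any specific system) is that malware spreads via a removable memory card that circulates among machines:

```python
# Toy model: a single memory card visits machines in some order. An infected
# machine writes malware onto the card; a dirty card infects each clean
# machine it subsequently visits.

def spread(card_route, initially_infected):
    """Return the set of infected machines after the card's tour."""
    infected = set(initially_infected)
    card_dirty = False
    for machine in card_route:
        if machine in infected:
            card_dirty = True          # machine contaminates the card
        elif card_dirty:
            infected.add(machine)      # card contaminates the machine
    return infected

# One infected machine plus a card that visits it first is enough to
# compromise every machine on the route.
print(sorted(spread([2, 0, 1, 3, 4], {2})))  # → [0, 1, 2, 3, 4]
```

The point is that the attacker only needed retail access to machine 2; the election infrastructure itself did the wholesale distribution. The physical attack offers no analogous amplification.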

Related to this issue, informational attacks can be easier to conceal. If you need to install attack hardware and have it present during the election, you either need to retrieve it after the election, or you (a) lose the device and (b) run a high risk of it being discovered in any subsequent inspection. By contrast, software/informational attacks can be designed so that the standard (i.e., authorized) machine inspection mechanisms won't discover them at all, and in many cases can be programmed to self-destruct after the election. It's not clear that there's any plausible non-destructive mechanism that can be used to detect the tampering after the fact (see the TTBR reports again).

Moreover, as I've said, the possibility of physical attacks is totally obvious once you know you can get into the case (with or without violating the tamper seals), and there's a certain level of "difficulty bias" here. Since everyone already knew that physical attacks were possible, once it was demonstrated that you could get into the machine, it wasn't that important to demonstrate the obvious end-to-end attack. However, since software-based attacks were (a) harder and (b) more useful, it was natural for researchers to spend more time working to demonstrate those. That certainly doesn't mean that researchers were somehow unaware that physical attacks were possible.


The rule is: "what I tell you three times is true." Not two.
