Recently in Voting Category

 

October 18, 2011

Following up on their demonstration attack on Diebold voting machines (writeup, my comments), the Argonne Vulnerability Assessment Team has developed a set of Suggestions for Better Election Security. My review comments are below:

I've had a chance to go over this document and while there are some suggestions that are valuable, many seem naive, impractical, or actively harmful. More generally, I don't see that it derives from any systematic threat model or cost/benefit analysis about which threats to address; merely following the procedures here would--at great expense--foreclose some security threats while leaving open other threats that are arguably more serious both in terms of severity and ease of attack. Finally, many of the recommendations here seem quite inconsistent with the current state of election practice. That's not necessarily fatal, since that practice is in some cases flawed, but there doesn't seem to be any acknowledgement that these seemingly minor changes actually would require radically reworking election equipment and procedures.

If this document is to be useful rather than harmful, it needs to start with a description of the threat model--and in particular the assumed attacker capabilities--and then proceed to a systematic analysis of which threats it is economical to defend against, rather than just being a grab bag of isolated security recommendations apparently designed to defend against very different levels of threat.

Pre- And Post-Election Inspections
The authors recommend:

... at least 1% of the voting machines actually used in the election-randomly chosen-should be tested, then disassembled, inspected, and the hardware examined for tampering and alien electronics. The software/firmware should also be examined, including for malware. It is not sufficient to merely test the machines in a mock election, or to focus only on cyber security issues!

This document does not specify how the hardware must be "examined", but a thorough examination, sufficient to discover attack by a sophisticated attacker, is likely to be extremely time consuming and expensive. A voting machine, like most embedded computers, consists of a number of chips mounted on one or more printed circuit boards as well as peripherals (e.g., the touchscreen) connected with cabling. This document seems to expect that "alien electronics" will be a separate discrete component added to the device, but this need not be so. A moderately sophisticated attacker could modify or replace any of these components (for instance, by replacing the chips with lookalike chips). As most of these components are sealed in opaque plastic packaging, assessing whether they have been tampered with is no easy matter. For instance, in the case of a chip, one would need to either remove the chip packaging (destroying it in the process) or x-ray it and then compare to a reference example of the chip in order to verify that no substitution had occurred. These are specialized and highly sophisticated techniques that few people are qualified to carry out, and yet this document proposes that they be performed on multiple machines in every jurisdiction in the United States, of which there are on the order of 10,000.

Moreover, this level of hardware analysis is useless against a broad spectrum of informational threats. An attacker who can rewrite the device's firmware--trivial with physical access to the internals, but the California TTBR discovered a number of vectors which did not require such access--can program his malware to erase itself after the election is over, thus evading inspection. In addition, to the extent to which the microprocessors in the device contain firmware and/or microcode, it may not be possible to determine whether they have been tampered with, since that would require interfaces directly to the firmware which do not depend on the firmware itself; these do not always exist. Absent some well-defined threat model, it is unclear why this document ignores these threats in favor of less effective physical attacks.

Finally, doing any of this inspection requires extremely detailed knowledge of the expected internals of the voting machine (it is insufficient to simply do an exact comparison against a single reference unit because there is generally some manufacturing variation due to inter-run engineering fixes and the like). This information would either need to be discovered through expensive reverse engineering or released by the vendor, and vendors have historically been very reluctant to do so, especially as releasing the information to every county in the US would be much like publishing it.

Official and Pollworker Verification
This document recommends that voting officials and pollworkers be subject to a number of verification requirements. In particular:

  • Background checks, including interviews with co-workers
  • Citizenship verification
  • Positive physical identification of poll workers prior to handling sensitive materials
  • Test bribery
These recommendations are highly discordant with existing practice. In real jurisdictions, it is extremely difficult to find poll workers (hence the high number of retirees) and they are paid relatively nominal sums (~$10/hr). In my experience, poll workers feel they are performing a public service and are unlikely to be pleased to be treated as criminals. It's also unclear whether poll workers count for the purposes of background checks. The authors write:

Minimum: All election officials, technicians, contractors, or volunteers who prepare, maintain, repair, test, inspect, or transport voting machines, or compile "substantial" amounts of election results should have background checks, repeated every 3-5 years, that include a criminal background history, credit check, and (when practical) interviews with co-workers.

Volunteers certainly set machines up in the polling place. I'm not sure if this counts as "preparing". It wouldn't surprise me if volunteers transported machines. The bottom line here is that this requirement is problematic either way: if you think poll workers have to get background checks, it's really invasive. If you don't, you're ignoring a category of threat from people who have very high levels of machine access (assuming you think that background checks do anything useful, which seems rather dubious in this case).

The requirement for positive physical identification seems extremely impractical. As noted above, typical polling places are operated by semi-volunteer poll workers. Given the ease of acquiring false identification, it seems highly unlikely that they will be able to validate the identity of either the poll workers under their supervision or of the (alleged) election officials to whom they are supposed to deliver election materials. Similarly, it's not clear to me that verifying US citizenship does anything useful. Is there some evidence that non-citizens are particularly likely to want to tamper with elections or that it's especially difficult for foreign countries which want to tamper with elections to find US citizens to do it for them?

This document recommends attempting to bribe a subset of poll workers. I'd be interested to learn whether any systematic study of this has been done on the likely subject population. I.e., does this sort of intervention actually reduce the effective level of bribery?

Seal Practice
This document contains a number of detailed recommendations about seal practice (required level of training, surface preparation, inspection protocols). I don't think there's any doubt that seals are a weak security measure and much of the research showing that comes from the Argonne group. However, it's also not clear to me that the measures described here will improve the situation. Extensive human factors research in the Web context shows that users typically ignore even quite obvious indications of security failures, especially in contexts where they get in the way of completion of some task.

Is there research that shows that (for instance) 10 minutes of training has any material impact on the detection rate of fake seals, especially when that detection is performed in the field?

The authors also write:

Minimize the use of (pressure sensitive) adhesive label seals

I don't really understand how this recommendation is operationalizable: Existing voting equipment is designed with numerous points of entry which are not obviously securable in any way, and for which adhesive seals appear to be the most practical option. What is the recommendation for such equipment?

Excessive Expert Manpower Requirements
The authors write:

Minimum: Election officials will arrange for a local committee (pro bono if necessary) to serve as the Election Security Board. The Board should be made up primarily of security professionals, security experts, university professors, students, and registered voters not employees of the election process. The Board should meet regularly to analyze election security, observe elections, and make suggestions for improved election security and the storage and transport of voting machines and ballots. The Board needs considerable autonomy, being able to call press conferences or otherwise publicly discuss its findings and suggestions as appropriate. Employees of companies that sell or manufacture seals, other security products often used in elections, or voting machines are not eligible to serve on the Board.

The United States has something like 10,000 separate election jurisdictions. If each of these convenes a board of 3-5 people, then approximately 30,000-50,000 security experts will be required. Given that all existing voting system reviews have been short-term affairs and in many cases the experts were compensated, and yet have drawn from the entire country to gather ~30 experts, it's hard to see where we are going to gather 1000 times more people for a largely thankless long-term engagement.

Miscellaneous
The authors recommend that:

The voting machines for the above inspection (or trial bribery discussed below) should be randomly chosen based on pseudo-random numbers generated by computer, or by hardware means such as pulling numbers or names from a hat.

Verifiably generating random values is a significantly harder problem than this makes it sound like. In particular, pulling names and numbers from a hat is trivial to game.
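For comparison, one common approach (not suggested in the document) is to derive the selection deterministically from a seed that is fixed publicly, e.g., by dice rolls at a ceremony held after the machine list is published; any observer can then re-run the drawing and check the result. A minimal Python sketch, with hypothetical machine IDs:

```python
import hashlib

def select_machines(machine_ids, sample_size, public_seed):
    """Deterministically select machines to audit from a public seed,
    so that any observer can re-run the drawing and verify it."""
    # Score each machine by hashing the seed together with its ID and
    # take the lowest scores. Because the seed is fixed only after the
    # machine list is published, no party can bias the selection.
    scored = sorted(
        machine_ids,
        key=lambda mid: hashlib.sha256(f"{public_seed}:{mid}".encode()).hexdigest(),
    )
    return scored[:sample_size]

# Example: pick 10 of 1000 machines from a (hypothetical) dice-roll seed.
machines = [f"machine-{i:04d}" for i in range(1000)]
chosen = select_machines(machines, 10, "2011-10-18 dice: 4 6 1 3 5 2")
```

Contrast this with names in a hat: here the drawing is reproducible by anyone after the fact, so gaming it requires controlling the public seed itself.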

Recommended: Each individual in the chain of custody must know the secret password of the day or the election before being allowed to take control of the assets.

Any secret that is distributed this widely is hardly likely to remain a secret for long.

Recommended: Before each election, discuss with poll workers, election judges, and election officials the importance of ballot secrecy, and the importance of watching for miniature wireless video cameras in the polling place, especially mounted to the ceiling or high up on walls to observe voters' choices. The polling place should be checked for surreptitious digital or video cameras at least once on election day.

Elections are typically conducted in spaces which are otherwise reserved for other purposes and therefore are not empty. In my experience with such spaces, it would be very difficult to practically inspect for a surreptitious camera placed in the ceiling and concealed with any level of skill. This is particularly difficult in spaces with drop ceilings, ventilation ducts, etc.

 

October 7, 2011

Argonne Labs's demonstration attack on a Diebold voting machine is getting a lot of press. The article above has the details, but briefly, what the Argonne team did was to insert some malicious "alien" electronics between the CPU and the touch screen. Unsurprisingly, that device can modify input from the touch screen and/or output to the touch screen, allowing the attacker to tamper with the election. To read the press coverage and the quotes given by the authors, you might get the impression that this was something new. For instance:

"This is a fundamentally very powerful attack and we believe that voting officials should become aware of this and stop focusing strictly on cyber [attacks]," says Vulnerability Assessment Team member John Warner. "There's a very large physical protection component of the voting machine that needs to be addressed."

These comments aside, there's not really any new information here; rather, it was completely obvious that this sort of thing was possible to anyone who knew how the devices were constructed. It's well-known that the only defenses against this were physical security of the machines themselves (tamper seals, locks, custody, etc.) and that they were extremely weak. Indeed, Alex Halderman and his team demonstrated some not-dissimilar attacks a while back on the Indian Electronic Voting Machines. The EVEREST report described a man-in-the-middle attack on the iVotronic interface to the VVPAT vote printer. Indeed, the same team from Argonne demonstrated a similar attack on a Sequoia system in 2009.

There are a number of reasons why voting researchers have historically focused on informational attacks (as I've said before, "cyber" isn't the word that computer scientists would typically use). First, they're easier to do wholesale. While it's moderately expensive—though not that expensive—to reverse engineer the software and develop an exploit and/or replacement software, once you've done that you can make as many copies as you want. Moreover, if you have a good exploit (like many of the ones described in the TTBR), you may be able to easily install it with very brief physical access, without opening the case, and perhaps without even violating any security seals. For obvious reasons, attacks which can be mounted by voters seem a lot more interesting than attacks which involve semi long-term access to the machine. It's not exactly likely that your average voter is going to be allowed to open the machine in the middle of the election.

Moreover, in some cases, informational attacks (i.e., viruses) have been demonstrated that only require contact with a small number of voting machines. The idea here is that you have temporary access to a given machine, infect it with the virus, and then this somehow spreads to every machine in the county. By contrast, a physical attack like this requires tampering with every voting machine.

Related to this issue, informational attacks can be easier to conceal. If you need to install some sort of attack hardware and have it present during the election, you're either going to need to get access after the election or (a) lose the device and (b) have a high risk of having it discovered in any subsequent inspection. By contrast, software/informational attacks can be designed so that the standard (i.e., authorized) machine inspection mechanisms won't discover them at all, and in many cases can be programmed to self-destruct after the election. It's not clear that there's any plausible non-destructive mechanism that can be used to post-facto detect the tampering (see the TTBR reports again).

Moreover, as I've said, the possibility of physical attacks is totally obvious once you know you can get into the case (with or without violating the tamper seals) and there's a certain level of "difficulty bias" here. Since everyone already knew that physical attacks were possible, as soon as it was demonstrated that you could get into the machine, it wasn't that important to demonstrate the obvious end-to-end attack. However, since software-based attacks were (a) harder and (b) more useful, it was natural for researchers to spend more time working to demonstrate those. That certainly doesn't mean that researchers were somehow unaware that physical attacks were possible.

 

August 9, 2011

My EVT 2011 rump session talk, on the future of Internet Voting, is now available here. And in response to the people who ask about my cat's political leanings? She's in favor of Proposition C legalizing medical catnip.

UPDATE: Temporary Subversion glitch makes the file unavailable. Will have it back online soon.

UPDATE: Fixed.

 

November 4, 2010

2010 General:
  • How do I know how to vote without a Granick Slate Card?
  • For some reason, Santa Clara County keeps moving my polling place around and I somehow lost my voter pamphlet telling me where to go, so I cruised over to the polling place on Middlefield to look at their map. It wasn't my polling place, but I still could have gotten vaccinated.

  • Provisional ballot handling seemed a little clunky at this polling place. The way you vote a central count optical scan provisional ballot in Santa Clara is to fill out the ballot and then stuff it in an envelope with your information. You seal the envelope and then if election central determines that you're entitled to vote, they open the envelope and scan the ballot. (Santa Clara doesn't use a double envelope system.) But you are supposed to seal the envelope yourself, not let the poll workers do it, since otherwise they see how you're going to vote. Anyway, the one provisional voter I saw tried to pass the whole mess to the pollworker, who looked about to put it all in the envelope but eventually let the voter do it.
  • Santa Clara does have Sequoia DREs, but after the TTBR California restricted these to one per polling place, and so there was one lonely Sequoia AVC Edge, but the poll workers by default give you a paper ballot. When I showed up around 11 AM the poll workers told me that nobody had used it yet. It's kind of a pain to shut the machine down, so the poll workers generally prefer to have everyone vote opscan.
 

October 21, 2010

Ingo Boltz attempts to resurrect the Caltech/MIT "FROG" ballot approach. His idea is that you divide the job of building an e-voting system into two parts:
  • A "vote generator" module, which has a DRE-style UI, but instead of recording votes on an electronic memory, outputs a human-readable but also machine-readable paper ballot.
  • A "vote casting" module, which processes the output ballots from the vote generator and tabulates the results.

This is a familiar design (the technical term is an Electronic Ballot Marker (EBM)). What's new is that Boltz suggests that the vote generator (i.e., the EBM) be built by the usual voting machine vendors, while the vote casting module (i.e., the tabulation device) be built by some open source group in cooperation with academic security experts. It's not clear to me that this really changes the situation.

The reason why people like EBM designs is that they appear not to require trust in the EBM itself. The idea here is that because the EBM generates a human-readable paper ballot, even if it's compromised the user will notice that the paper is wrong before it's cast. So, you have the convenience of a DRE combined with the security of an optical scan system. Unfortunately, the available human factors evidence suggests that humans do a very poor job of checking the output of this kind of device. I'm not aware of research done specifically on EBMs, but Everett has studied the question of how often users noticed that malicious DREs changed their votes and found that less than 40% of voters actually check. This implies that a malicious EBM could actually do quite a bit of damage and thus remains a security critical component.

In some respects, the tabulator is actually a less security critical component. While the tabulation operation is of course security critical, we have a number of techniques for verifying correct tabulation even in the face of a not-totally-trustworthy tabulation device (manual recounts, audits, re-scanning, etc.). So if we employ those techniques—which aren't in wide use now—then we actually don't need to worry so much about the tabulator itself being designed by a team of geniuses.
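As an illustration of the simplest such technique (this sketch is mine, not a specific proposal from any existing system): hand-read a random sample of the paper ballots and compare against the tabulator's recorded interpretation, escalating to a fuller recount on any mismatch.

```python
import random

def spot_check_tabulator(hand_reading, machine_reading, sample_size, rng=None):
    """Compare a random sample of hand-read ballots against the
    tabulator's interpretation. Both arguments map ballot ID to the
    recorded choice. A non-empty result means escalate the audit."""
    rng = rng or random.Random()
    sample = rng.sample(sorted(hand_reading), sample_size)
    return [b for b in sample if hand_reading[b] != machine_reading[b]]

# A tabulator that misread ballot b2 is caught once b2 is sampled.
hand = {"b1": "A", "b2": "B", "b3": "A"}
machine = {"b1": "A", "b2": "A", "b3": "A"}
disputed = spot_check_tabulator(hand, machine, sample_size=3)
```

Real audit designs are more careful about choosing the sample size to bound the chance of missing an outcome-changing error, but the basic check is this simple.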

This is good because the whole idea that an open source collaboration involving academic security experts will deliver a really secure system seems to me to be fatally flawed. The reason that researchers have been so effective at attacking electronic voting systems isn't because they are so smart and the voting vendors are so dumb—though of course many of the researchers are very smart and in many cases the design of the systems has left much to be desired—but rather because building secure software systems is incredibly difficult. Obviously I can't speak for all researchers but while I feel pretty comfortable in my ability to attack voting machines, I wouldn't want to accept a contract to build a machine which couldn't be attacked by others. This is in large part why so many security researchers want to design software independent systems that don't require trusting the software at all.

 

August 29, 2010

The second major attack described by Prasad et al. is a clip-on memory manipulator. The device in question is a small microcontroller attached to a clip-on connector. You open the control unit, clip the device onto the memory storage chip, rewrite the votes, and disconnect it. There are a number of countermeasures we could consider here.

Physical Countermeasures
We've already discussed seals, but one obvious countermeasure is to encase the entire memory chip in plastic/epoxy. This would make it dramatically harder to access the memory chip. One concern I have about this is heat: were these chips designed to operate without any cooling? That seems like a question that could be experimentally answered. I think you'd want to use transparent epoxy here, to prevent an attacker from drilling in, accessing the memory chip, and covering the hole over, maybe with a small piece of plastic to permit future access. I also had an anonymous correspondent suggest encasing the entire unit in epoxy, but at most this would be the circuit board, since the device has buttons and the like; this would of course make the heat problem worse.

Cryptographic Countermeasures
Another possibility would be to extend the cryptographic checksum technique I suggested to deal with the dishonest display. At the end of the election when the totals are recorded the CPU writes a MAC into the memory over all the votes (however recorded) as well as writing a MAC over the totals. It then erases the per-election key from memory (by overwriting it with zeros). This makes post-election rewriting attacks much harder—the attacker would need to also know the per-election key (which requires either insider information or access to the machine between setup and the election) and the per-machine key, which requires extensive physical access. I think it's plausible to argue that the machine can be secured at least during the election and potentially before it. Note that this system could be made much more secure by having a dedicated memory built into the CPU for storage of the per-unit key, but that would involve a lot more reengineering than I'm suggesting here.
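A sketch of that poll-close step (the function and key names here are my own illustration, and real key zeroization would happen in device memory rather than in a Python local):

```python
import hashlib
import hmac

def close_election(vote_records, totals, per_machine_key, per_election_key):
    """MAC the recorded votes and the totals under both keys at poll
    close, then erase the per-election key. A post-election attacker
    who rewrites the memory chip cannot forge matching MACs without
    having obtained that key before it was destroyed."""
    key = per_machine_key + per_election_key
    records_mac = hmac.new(key, b"records|" + b"|".join(vote_records),
                           hashlib.sha256).hexdigest()
    totals_mac = hmac.new(key, repr(sorted(totals.items())).encode(),
                          hashlib.sha256).hexdigest()
    # Stand-in for overwriting the per-election key in device memory.
    per_election_key = bytes(len(per_election_key))
    return records_mac, totals_mac
```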

 

August 27, 2010

In their paper on Indian EVMs, Prasad et al. demonstrate that you can easily pry off the LED segment display module and replace it with a malicious display. At a high level, it's important to realize that no computer system can be made secure if the attacker is able to replace arbitrary components, since in the limit he can just swap everything out with lookalike components.

The two basic defenses here are to use anti-counterfeiting techniques and to use cryptography with hardware security modules. Most of the proposals for fixing this problem (and the memory overwriting problem) are of the anti-counterfeiting variety; you seal everything up with tamper-evident seals and make it very hard to get/make the seals. Then any attacker who wants to swap components needs to break the seals, which is in theory obvious. Unfortunately, it's very hard to make seals that resist a dedicated attacker. In addition, sealing requires good seal procedures for placing and checking the seals; with this many machines in the field it's going to be quite hard to actually do that in a substantially better way than we are doing now.

The other main defense is to use cryptography. The idea is that you embed all your sensitive stuff in a hardware security module (one per device). That module has an embedded cryptographic key and is designed so that if anyone tampers with the module it erases the key. When you want to make sure that a device is legitimate, you challenge the module to prove it knows the key. That way, even if an attacker creates a lookalike module, it can't generate the appropriate proof and so the substitution doesn't work. Obviously, this means that anything you need to trust needs to be cryptographically verified (i.e., signed) as well. Periodically one does see the suggestion of rearchitecting DRE-style voting machines to be HSM-based, but this seems like a pretty big change for India, both in terms of hardware and in terms of procedures for managing the keying material, verifying the signatures, etc.
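The challenge-response check can be sketched in a few lines (a toy software simulation of the module, purely illustrative; a real HSM would also erase its key on tamper):

```python
import hashlib
import hmac
import os

class ModuleSim:
    """Toy stand-in for a hardware security module holding a device key."""
    def __init__(self, key):
        self._key = key

    def respond(self, challenge):
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def verify_device(module, expected_key):
    """Send a fresh random challenge; only a module holding the real
    key can produce the matching response, so a lookalike substitute
    fails the check."""
    challenge = os.urandom(16)
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(module.respond(challenge), expected)
```

Because the challenge is fresh each time, an attacker cannot simply replay a response recorded from the genuine module.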

However, there is an intermediate approach which would make a Prasad-style attack substantially harder without anywhere near as much effort. The idea is that each machine would be programmed by the Election Commission of India with a unique cryptographic key. This could be done at the same time as it was programmed for the election to minimize logistical hassle. Then at the same time that the vote totals are read out, the machine also reads out a MAC (checksum) of the results computed using that key. That MAC is reported along with the totals and if it doesn't verify, that machine is investigated. Even though the malicious display can show anything the attacker wants, the attacker cannot compute the MAC and therefore can't generate a valid report of vote totals. The MAC can be quite short; even 4 decimal digits reduce the chance of a successful attack on a machine to 1/10000.

This approach is significantly less secure than a real HSM, since an attacker who recovers the key for a given machine can program a display for that machine. But it means that the window of opportunity for that attack is much shorter; if the key is reprogrammed for each election then you need to remount the attack between programming time and election time, instead of attacking the machine once and leaving the display in place indefinitely. It's also worth asking if we could make it harder to recover the key; if it's just in the machine memory, then it's not going to be hard to read out using the same technique that Prasad et al. demonstrate for rewriting vote totals. However, you could make the key harder to read by, for instance, having two keys, one of which is burned into the machine at manufacture time in the unreadable (hard to read) firmware which is already a part of each machine and another which is reprogrammed at each election. The MAC would be computed using both keys. This would require the attacker to attack both the firmware on the machine (once) and the main memory (before each election).
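A sketch of this two-key, truncated MAC (names are mine; in the real design this would run in the machine's firmware):

```python
import hashlib
import hmac

def totals_mac(totals, firmware_key, election_key, digits=4):
    """Short MAC over the reported totals, keyed with both the
    burned-in firmware key and the per-election key, truncated to a
    few decimal digits so it can be reported next to the totals."""
    message = repr(sorted(totals.items())).encode()
    tag = hmac.new(firmware_key + election_key, message, hashlib.sha256).digest()
    return int.from_bytes(tag[:8], "big") % (10 ** digits)
```

A malicious display that reports altered totals must then also guess the corresponding MAC, which (without both keys) succeeds with probability about 1 in 10^digits per machine.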

Clearly, this isn't an ideal solution but as I said at the beginning of this series, the idea is to improve things without doing too much violence to the existing design. Other approaches welcome.

 
As I mentioned earlier, Prasad et al.'s research clearly demonstrates that there are attacks on the election machines used in India. On the other hand, during the panel at EVT/WOTE the Indian election officials argued that there were serious fraud problems (especially ballot box stuffing) with the paper ballot-based system they used before the introduction of the EVMs, so there's going to be a huge amount of resistance to just going back to paper ballots. Without taking a position on paper versus machines, it's worth asking whether it's possible to build a better version of the existing EVMs (bearing in mind that there are something like a million of these machines out there, so change isn't going to be easy.)

Prasad et al. have three major complaints about the EVMs:

  • It's possible to replace the display, causing it to show any vote totals the attacker chooses.
  • It's possible to rewrite the memory chip that stores the vote totals.
  • The firmware on the devices is secret and the devices are designed so that the firmware cannot be read off. This makes it difficult to determine whether the devices have malware (either installed at manufacture time or later.)

These are obviously real problems, though how serious they are of course depends on whatever procedural controls are used with the machines. Obviously, it would be better to have a machine without those problems. In DC I asked the panel to assume that they were stuck with something like the existing DREs (this isn't hard for Indiresan and Shukla, of course) and consider how they would improve them. I didn't get much of an answer, but I still think it's worth considering.

Over the next few days, I'll be talking a bit about how to address some of these issues.

 

August 24, 2010

I've held off on writing much about EVT/WOTE because I've been waiting for the A/V recordings to be posted. Most of them are up now, including an unfortunately partial recording of the most dramatic part of the conference, the panel on Indian EVMs. (There's some other good stuff like the rump session that's not up or only partially up.)

As background, Indian elections are conducted on a relatively simple hardware-based DRE machine, i.e., a small handset with buttons for each candidate; votes are recorded in memory and then totals read out on a control module. Hari Prasad, Alex Halderman, Rop Gonggrijp, Scott Wolchok, Eric Wustrow, Arun Kankipati, Sal Sakhamuri, and Vasavya Yagati got ahold of one of the machines and managed to demonstrate some attacks on it (see their analysis here). This naturally provoked a lot of controversy, and we decided this made a good topic for a panel. The panelists were:

  • P.V. Indiresan, Former Director, IIT-Madras
  • G.V.L Narasimha Rao, Citizens for Verifiability, Transparency, and Accountability in Elections, VeTA
  • Alok Shukla, Election Commission of India
  • J. Alex Halderman, University of Michigan

Unsurprisingly, the panel was extremely contentious, with Joseph Lorenzo Hall doing a great job of keeping the various strong personalities to the agreed upon format. It's definitely worth watching for yourself: we have complete audio and video for the last hour or so.

You may have heard that Hari Prasad has been arrested. This has obviously raised some very strong feelings, but I don't think it really bears one way or another on the arguments about whether EVMs are a good choice or not. The issues here aren't really that technical; the attacks reported by Prasad et al. are straightforward, as are the attacks that the representatives of the Election Commission of India report were common on paper ballot systems before the introduction of EVMs. It's definitely worth watching/listening to this panel and making your own assessment.