What's wrong with Boltz's FROG proposal

Ingo Boltz attempts to resurrect the Caltech/MIT "FROG" ballot approach. His idea is that you divide the job of building an e-voting system into two parts:
  • A "vote generator" module, which has a DRE-style UI, but instead of recording votes on an electronic memory, outputs a human-readable but also machine-readable paper ballot.
  • A "vote casting" module, which processes the output ballots from the vote generator and tabulates the results.

This is a familiar design (the technical term is Electronic Ballot Marker, or EBM). What's new is that Boltz suggests that the vote generator (i.e., the EBM) be built by the usual voting machine vendors, while the vote casting module (i.e., the tabulation device) be built by some open source group in cooperation with academic security experts. It's not clear to me that this really changes the situation.

The reason people like EBM designs is that they appear not to require trust in the EBM itself. The idea here is that because the EBM generates a human-readable paper ballot, even if it's compromised the voter will notice that the paper is wrong before it's cast. So, you have the convenience of a DRE combined with the security of an optical scan system. Unfortunately, the available human factors evidence suggests that humans do a very poor job of checking the output of this kind of device. I'm not aware of research done specifically on EBMs, but Everett has studied the question of how often users notice that malicious DREs changed their votes and found that fewer than 40% of voters actually check. This implies that a malicious EBM could actually do quite a bit of damage and thus remains a security critical component.

In some respects, the tabulator is actually a less security critical component. While the tabulation operation is of course security critical, we have a number of techniques for verifying correct tabulation even in the face of a not-totally-trustworthy tabulation device (manual recounts, audits, re-scanning, etc.). So if we employ those techniques—which aren't in wide use now—then we actually don't need to worry so much about the tabulator itself being designed by a team of geniuses.
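To make the audit claim concrete, here is a minimal sketch (mine, not from the post; the function name is invented) of why sampling lets you verify tabulation without trusting the tabulator. It assumes a uniform random manual audit of ballots, sampled without replacement, and computes the chance that the sample contains at least one mis-tabulated ballot:

```python
def audit_detection_probability(total, altered, sample):
    """Chance a uniform random sample of `sample` ballots contains at
    least one of the `altered` mis-tabulated ballots (no replacement)."""
    p_miss = 1.0
    for i in range(sample):
        p_miss *= (total - altered - i) / (total - i)
    return 1.0 - p_miss

# With a million ballots and 1% of them altered, hand-auditing even 1%
# of the ballots catches the problem essentially every time:
print(audit_detection_probability(1_000_000, 10_000, 10_000))  # ≈ 1.0
```

This is only the detection step; a real audit procedure (e.g., a risk-limiting audit) also specifies what to do when a discrepancy is found, such as escalating to a full manual recount.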

This is good, because the whole idea that an open source collaboration involving academic security experts will deliver a really secure system seems to me to be fatally flawed. The reason researchers have been so effective at attacking electronic voting systems isn't that they are so smart and the voting vendors are so dumb (though of course many of the researchers are very smart, and in many cases the design of the systems has left much to be desired) but rather that building secure software systems is incredibly difficult. Obviously I can't speak for all researchers, but while I feel pretty comfortable in my ability to attack voting machines, I wouldn't want to accept a contract to build a machine that couldn't be attacked by others. This is in large part why so many security researchers want to design software-independent systems that don't require trusting the software at all.


found that less than 40% of voters actually check.

That sounds like more than enough. Even if only 10% of voters check, any significant fraud or error will be noticed quickly. (If you have a million voters and someone alters 1% of the ballots, that's 10,000 changed ballots; at a 10% check rate, about 1,000 people will notice problems.)
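The commenter's arithmetic, spelled out (the figures are the comment's own example, not mine):

```python
voters = 1_000_000
altered_fraction = 0.01   # someone alters 1% of the ballots
check_rate = 0.10         # only 10% of voters bother to check

altered = voters * altered_fraction      # 10,000 changed ballots
expected_noticed = altered * check_rate  # voters who see a wrong ballot
print(int(expected_noticed))  # 1000
```

Note this is an expected count of complaints, which connects directly to the next comment's point: whether 1,000 scattered complaints are recognized as fraud, rather than dismissed as glitches, is a separate question.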

I agree 100% that we shouldn't need to trust the software. If you are relying on the software (and the compiler it was compiled with, and the firmware, and the OS, and the drivers) to be secure, you have already lost. Hence even the lamest optical-scan-counting software is acceptable: its output is trivial to verify and (this is important) to correct after the fact.

Proponents of "open-source voting" seem to have a solution, and are looking for a problem to which to apply it.

Yes, they will notice them, but experience suggests that this just gets interpreted as user error or a glitch, because there is a natural base rate of misreads in any of these systems.

One area you're not focusing on is usability. I was an election judge for the City of Minneapolis in the state of Minnesota, USA. Being a security guy, I was interested in the procedures that would allow one to violate integrity or confidentiality. Sadly, that's moot when the voter gives up and doesn't care about the ballot.

I had a voter who got frustrated with an EBM because of its design. It would not let him skip contests without suffering through alerts. He eventually gave up and left; we had to cast his empty ballot.

So, we might be able to design awesome security devices but if people are turned off by them, have we done our job?
