Contrarianism on Sequoia's Disclosed Source Voting System

Sequoia Voting Systems recently announced that it will be publishing the source code to its Frontier optical-scan (opscan) voting system. Reaction in the security community seems generally positive. Here's Ed Felten:
The trend toward publishing election system source code has been building over the last few years. Security experts have long argued that public scrutiny tends to increase security, and is one of the best ways to justify public trust in a system. Independent studies of major voting vendors' source code have found code quality to be disappointing at best, and vendors' all-out resistance to any disclosure has eroded confidence further. Add to this an increasing number of independent open-source voting systems, and secret voting technologies start to look less and less viable, as the public starts insisting that longstanding principles of election transparency be extended to election technology. In short, the time had come for this step.

I'm less sanguine. I'm not saying this is necessarily a bad thing, but I'm not sure it's a good thing either. As always, it's important to ask what threats we're trying to defend against. In particular, there are two kinds of vulnerabilities that might be present in the code:

  • Backdoors intentionally introduced by Sequoia or its engineers.
  • Design and/or implementation errors accidentally introduced by Sequoia's engineers.

A lot of the advocacy for open voting systems has focused on the first kind of threat (corporations are stealing your votes, etc.). I think there's a credible argument that having to publish the source code makes this form of attack somewhat harder. If people are looking at your code, you probably can't put a naked backdoor ("if someone types 1111, give them operator control") into it, because that might get caught in a review. On the other hand, it would be a pretty risky move to put that kind of backdoor into a piece of software anyway, since even closed voting source code does get reviewed, both as part of the system certification process and in private reviews like those conducted by California and Ohio. More likely, you'd want to hide your backdoor so it looked like an accidentally introduced vulnerability, both to make it harder to find and to give you plausible deniability.
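To make that concrete, here's a minimal sketch (purely hypothetical, not drawn from any real voting system) of what a naked backdoor looks like in code:

    #include <string.h>

    /* Hypothetical stub standing in for the real credential check. */
    static int pin_is_valid(const char *pin) { (void)pin; return 0; }

    /* A "naked" backdoor: a magic PIN grants operator control.
       Any reviewer who reads this function will spot it immediately,
       which is why no sane insider would ship it in this form. */
    int operator_login(const char *pin) {
        if (strcmp(pin, "1111") == 0)
            return 1;               /* the backdoor */
        return pin_is_valid(pin);   /* the legitimate path */
    }

This pattern is trivially greppable, which is exactly why a competent insider would aim for something that reads as an honest mistake instead.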

This brings us to the second form of vulnerability: errors and misfeatures introduced in Sequoia's development process. These aren't necessarily a sign of incompetence; as Steve Bellovin says, "all software has bugs and security software has security relevant bugs." Having access to the source code makes it easier to find those vulnerabilities (though, as Checkoway et al. have shown, it's quite possible to find exploitable vulnerabilities in voting systems without access to the source code). This of course cuts both ways: it helps attackers and defenders alike. There's an active debate about whether, on balance, this makes open source software more or less secure. I'm not aware of any data which settles the question definitively, but I don't think anyone in the security community believes that a previously insecure piece of software suddenly becomes substantially more secure just because the source is disclosed; there are too many vulnerabilities for the sort of low-level, uncoordinated review you get in practice to stamp them all out. On the other hand, disclosure does provide a pretty clear near-term benefit to attackers, who, after all, only need to find one vulnerability.
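To see what such a bug (or a backdoor masquerading as one) might look like, here's a hypothetical sketch of the classic pattern, a fixed-size buffer with a missing bounds check:

    #include <string.h>

    /* Hypothetical stub standing in for the real credential check. */
    static int pin_is_valid(const char *pin) { (void)pin; return 0; }

    /* An "accidental"-looking vulnerability: an untrusted PIN is copied
       into a fixed-size buffer with no length check. On typical struct
       layouts, a long input overruns buf and clobbers the adjacent
       authorized flag, granting access without a valid PIN. */
    int operator_login(const char *pin) {
        struct {
            char buf[8];
            int  authorized;
        } s = { "", 0 };
        strcpy(s.buf, pin);   /* the missing bounds check is the whole bug */
        if (pin_is_valid(s.buf))
            s.authorized = 1;
        return s.authorized;
    }

A reviewer with source access can find this by auditing every strcpy(); whether it was an honest error or deliberate sabotage is essentially undecidable from the code alone, which is the deniability point above.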

Now, retrofitting disclosure onto an existing system is not what Sequoia is doing. According to their press release, Frontier is an entirely new system, which they say has been "developed from the ground up with the full intention of releasing all of the source code to any member of the public who wishes to download it - from computer scientists and election officials to students, security experts and the voting public". This syncs up better with another argument for openness, one about incentives: if vendors know their code will be open to scrutiny, they will be motivated to be more careful with their designs and implementations. Reviews of Sequoia's previous systems have been pretty negative; it will be interesting to see whether the new one is any better. On the other hand, there's a confounding factor: modern standards for what it means for a piece of software to be secure are a lot higher than those which applied when the original SVS system was written, so it will be hard to tell whether it's really openness that provided the value, or just that they started from scratch.

One more thing: suppose that the source code is published and the code is full of problems. What then?

4 Comments

If there is a problem, hopefully someone with actual clout (or a tight budget) will get angry that the state shelled out for a piece of crap and demand fixes or refunds. Multiple teams have already cracked older machines; it's time to find out whether the new security requirements have had an effect on Sequoia, or whether states will look elsewhere for voting machines.

Openness creates better incentives after release, too. A common story-line with closed e-voting systems has a reviewer claiming a system is flawed, and the vendor replying that there isn't a problem, because the reviewer misunderstood the code, or the reviewer didn't look at the whole context, or whatever. If the code is open, the vendor's counterclaims are checkable, so they'll have a stronger disincentive to make bogus counterclaims.

I've been working on a post that will dovetail a few blog postings (Felten's and yours, notably) on this subject. Will post to FTT sometime soon.

One big problem is that there is no way for the user of a voting system to know whether an issue found by examining the open source has actually been addressed.


One advantage of open source is that you can actually check whether a vulnerability you know about has been eliminated, sometimes by compiling and installing code you personally know to be free of the issue. That's not going to happen with a voting booth, obviously, as we can't let each voter load a custom image.
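For a desktop system, that check can be as simple as diffing the image you built against the image that's installed. A minimal sketch (file names entirely hypothetical, and assuming a reproducible build, so that your compile and the vendor's are byte-identical):

    #include <stdio.h>

    /* Compare two binary images byte for byte; returns 1 if identical. */
    static int images_match(const char *trusted_path, const char *installed_path) {
        FILE *a = fopen(trusted_path, "rb");
        FILE *b = fopen(installed_path, "rb");
        int match = 0;
        if (a && b) {
            int ca, cb;
            do {
                ca = fgetc(a);
                cb = fgetc(b);
            } while (ca == cb && ca != EOF);
            match = (ca == cb);  /* identical only if both hit EOF together */
        }
        if (a) fclose(a);
        if (b) fclose(b);
        return match;
    }

    int main(void) {
        /* Getting a trustworthy copy of what's actually running on the
           machine is the hard part, not this comparison. */
        if (images_match("my-build/frontier.img", "machine-dump/frontier.img"))
            puts("installed image matches the build I trust");
        else
            puts("mismatch: running code differs from trusted build");
        return 0;
    }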


But it is not clear whether there is any strong way to ensure that a particular version running on a specific machine has actually been patched. Letting individual districts confirm it by installing their own patches opens up all sorts of possibilities for backdoor installation. But if you go much higher up than that, it boils down to the voter having to trust that the people who couldn't pick a good product in the first place can pick a good patch.


In other words, the open sourcing of the code is only really useful if you trust that the folks writing the code want it to be accurate and free of vulnerabilities. And that's part of what we shouldn't have to trust in the system.
