SYSSEC: November 2008 Archives

 

November 26, 2008

As you may have heard, President-Elect Obama may need to give up his Blackberry for "security reasons":

But before he arrives at the White House, he will probably be forced to sign off. In addition to concerns about e-mail security, he faces the Presidential Records Act, which puts his correspondence in the official record and ultimately up for public review, and the threat of subpoenas. A decision has not been made on whether he could become the first e-mailing president, but aides said that seemed doubtful.

...

Diana Owen, who leads the American Studies program at Georgetown University, said presidents were not advised to use e-mail because of security risks and fear that messages could be intercepted.

"They could come up with some bulletproof way of protecting his e-mail and digital correspondence, but anything can be hacked," said Ms. Owen, who has studied how presidents communicate in the Internet era. "The nature of the president's job is that others can use e-mail for him."

These seem like separate issues. I don't know what the Presidential Records Act says beyond what's in the Wikipedia article, but presumably it's an argument against the President using email at all, not just a Blackberry. What seems to be required here is discretion about what gets sent over the device.

The security ("hacking") problem seems more serious. There are a number of issues here, including:

  • Confidentiality of the data going to and from the Blackberry.
  • Remote compromise of the Blackberry.
  • Tracking of the President via his Blackberry.
The confidentiality problem is comparatively easy to address. Cellular networks generally have relatively weak encryption, and even if that weren't true, you can't trust the cellular provider anyway. That said, there's plenty of technology for running encrypted channels from the Blackberry back to some server in the White House, where the data gets handled like email sent from White House computers (e.g., a VPN). I'm not familiar with the Blackberry VPN offerings, but this isn't something that would be that hard to develop.
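
To make the confidentiality point a bit more concrete, here is a toy sketch of the idea: encrypt everything on the handheld under a key shared only with a server inside the White House, so that the carrier (and anyone sniffing the radio link) only ever sees ciphertext. This is my own illustration, using the third-party Python cryptography package's Fernet construction as a stand-in; it is not RIM's actual VPN stack, and the key provisioning problem is waved away entirely.

    # Illustration only: encrypt mail payloads on the device so the cellular
    # carrier never sees plaintext, and decrypt on a trusted server inside
    # the White House. A real deployment would use a full VPN/IPsec stack.
    from cryptography.fernet import Fernet

    # Key shared between the handheld and the White House mail gateway,
    # assumed (hypothetically) to be provisioned out of band.
    shared_key = Fernet.generate_key()

    def encrypt_on_device(message: bytes, key: bytes) -> bytes:
        """Runs on the handheld before anything touches the cell network."""
        return Fernet(key).encrypt(message)

    def decrypt_at_gateway(ciphertext: bytes, key: bytes) -> bytes:
        """Runs on the trusted server; from here the mail is handled like
        any other White House email."""
        return Fernet(key).decrypt(ciphertext)

    wire_bytes = encrypt_on_device(b"Schedule change for Tuesday", shared_key)
    # wire_bytes is all an eavesdropper on the cellular side gets to see.
    print(decrypt_at_gateway(wire_bytes, shared_key))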

Remote compromise is much more difficult to solve. You've got a device that's connected to the Internet, and of course it contains software with what you'd expect to be the usual complement of security vulnerabilities. You could perhaps try to tunnel all IP-level communications back through the White House, but you'd still have to worry about everything at the cellular/radio level, which has to go directly through the ordinary cell network. Accordingly, you should expect that a dedicated attacker with access to the device's phone number, transmitter serial number, etc. would be able to remotely compromise it. Such a device could send copies of data to whoever controlled it, record any ambient audio (or video, if it had a camera), etc. Protecting against remote compromise isn't like adding a VPN client; you have to worry about the entire surface area of the software, and it's not like you're going to rewrite the entire Blackberry firmware stack. Cutting against this concern is the fact that the president isn't going to be the only person with access to sensitive material. Are we going to deny everyone on the direct presidential staff access to any sort of modern communications device?

Similar considerations apply to tracking. All you need is to know the phone's radio parameters and have an appropriate receiver, and the phone will helpfully transmit regular beacons. Again, though, it's not usually hard to figure out where the president is, surrounded as he is by a bunch of staffers and secret service agents. Additionally, many of those people will have radio transmitters, so it's not clear that denying the president his device will add much value. If it's imperative that the president not be tracked at any particular time, you can simply shut down his device then.

 

November 9, 2008

Sorry about the delay in completing this part of the series. Things got a bit crazy after the election. Anyway, when we left off I had just covered malicious failure modes in the polling place. Today we'll be talking about failures in the back office, aka election central. There's plenty of stuff to go wrong in the election preparation phase (ballot definition, device programming, etc.), but here I'm mostly interested in vote tabulation, which is done via the Election Management System (EMS).

Depending on the election system being used, tabulation can be performed in a number of ways:

  • In central count opscan systems, the ballots get shipped back to election central, so we have to actually scan them and then tabulate the results.
  • In DRE and precinct count opscan systems, pre-counted results come back from the precinct and simply need to be aggregated and the winners declared.

It's best to take each of these separately.
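
Before taking them in turn, it's worth being concrete about what "aggregation" means in the second case: each precinct reports per-candidate totals and election central just sums them and declares a winner. Here's a toy sketch in Python (my own illustration, nothing like a vendor's actual EMS, which also has to deal with ballot definitions, provisional ballots, reporting formats, and so on):

    # Toy tabulation for DRE / precinct-count opscan: sum the per-precinct
    # totals that come back from the field and pick the winner.
    from collections import Counter

    def aggregate(precinct_results):
        """precinct_results: iterable of {candidate: votes} dicts, one per precinct."""
        totals = Counter()
        for result in precinct_results:
            totals.update(result)
        return totals

    reported = [
        {"Hamilton": 412, "Burr": 390},   # precinct 1
        {"Hamilton": 189, "Burr": 240},   # precinct 2
        {"Hamilton": 305, "Burr": 288},   # precinct 3
    ]
    totals = aggregate(reported)
    print(totals, "winner:", max(totals, key=totals.get))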

Central Count Optical Scan
Most plausible CCOS failures are non-malicious: it's pretty hard for an end-user to mount any kind of attack on the scanning system proper other than denial of service. Obviously, the attacker could tamper with their ballot (treat it with acid, glue, or somesuch) to damage the scanner, but it's not clear what this would buy you other than delaying the count. [This isn't to say that there isn't plenty of room for manipulating paper ballots, just that you would probably find it more profitable to do so outside of election central, which is presumably subject to fairly restricted access.]

On the other hand, plenty of stuff can still go wrong. First, ballots don't always scan correctly. If you're lucky, the scanner will just reject the ballot and then it will need to be manually counted. Often the voter's intent is clear, but if it's not, there's no real opportunity for the voter to correct it, and their vote just gets lost. Other than that, the sheet feeder in the scanner can mangle the ballot in various ways, causing inconvenience, manual counting, etc.

That said, if an attacker does manage to take control of the CCOS scanner, the consequences are fairly serious. As with any other piece of computerized election equipment, the attacker can cause it to return any result that he wants. On the other hand, the scanner very rarely needs to be connected to any other piece of computer equipment, so the risk can be minimized with proper controls.

PCOS and DRE
With PCOS and DRE, results get communicated back from the field one of two ways: either on some sort of memory card or on summary results tapes. The big concern with memory cards is that they can serve as a vector for viral spread from compromised precinct machines. For instance, the TTBR Diebold report describes such an attack. As usual, if the EMS is compromised, the attacker can cause it to report any results it chooses. This includes, of course, misreporting any results fed into it from the central count optical scanner. An even more serious concern is that if the same EMS is used for ballot preparation and machine initialization then it can serve as a viral spread vector: the attacker infects a machine in the field, the virus spreads to the EMS, which then compromises every polling place machine. ([HRSW08] has a lot more discussion of this form of attack, as well as countermeasures.)

The data doesn't have to be sent back on memory cards, of course. DREs and opscans typically print out results/summary tapes with the vote totals. These can be manually keyed into the EMS. This mostly controls the viral threat, but now you have to worry about a whole array of errors on the paper tape. As this post by Ed Felten indicates, the quality of the results tapes is pretty low and when coupled with the usual human errors, there's a lot of possibility for the wrong data to end up in the EMS. (This isn't to say that there can't be errors on the memory cards as well, especially with the Premier system which uses some super-old tech; Sequoia and Hart use PCMCIA flash drives, which are just old tech.) In principle, this might get detected by comparison of the precinct-level results tapes, which (at least in Santa Clara County) get posted publicly elsewhere, but I don't know if anyone actually double checks that stuff in practice.
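
For what it's worth, the double checking itself would be mechanical. Here is a rough sketch of the comparison, under my own assumption that both the posted tapes and the EMS report can be boiled down to per-precinct, per-candidate totals:

    # Sketch of the cross-check: compare totals transcribed from the publicly
    # posted precinct results tapes against what the EMS reports for the same
    # precincts, and flag any disagreement (keying error, tape damage, or worse).

    def find_discrepancies(posted_tapes, ems_report):
        mismatches = []
        for precinct, tape_totals in posted_tapes.items():
            ems_totals = ems_report.get(precinct)
            if ems_totals != tape_totals:
                mismatches.append((precinct, tape_totals, ems_totals))
        return mismatches

    posted_tapes = {"P-101": {"Hamilton": 412, "Burr": 390}}
    ems_report = {"P-101": {"Hamilton": 412, "Burr": 395}}
    for precinct, tape, ems in find_discrepancies(posted_tapes, ems_report):
        print(f"{precinct}: tape says {tape}, EMS says {ems}")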

Visibility
Of course, almost none of these issues are obvious to voters: you just vote, but you have no real way of knowing if your vote was counted or not (this is deliberate, for vote privacy reasons). And of course it's even harder to verify that any issues have been handled correctly.

Next: attack vectors.

 

November 4, 2008

Yesterday I wrote about non-malicious e-voting failure modes. In today's installment, we discuss malicious failure modes in polling place devices (tomorrow we talk about the back office).

The most powerful attack is if the attacker can gain actual control of the voting machine. There has been a lot of work on subverting polling place devices, but the bottom line is that it looks to me like an attacker with limited physical access can take control of pretty much any of the standard devices (I'll cover attack vectors later). Obviously, an attacker who controls a voting machine can make it do pretty much anything it's physically capable of, including simulating any non-malicious failure. However, there are also more subtle attacks that an attacker can mount. The TTBR and EVEREST reports provide extensive catalogs of the possible attacks, but I'll just cover some of the highlights here, focusing on attacks designed to alter the vote count.

OPSCAN
Because the optical scan interface is so limited, it's extremely hard to distinguish malicious from non-malicious errors. However, an attacker who controls an optical scanner can cause selective failures of the optical scanner in several interesting ways. First, the scanner can explicitly reject ballots cast for particular candidates; for instance, it could claim that some fraction of ballots with Burr selected were undervoted or overvoted. It's not clear how powerful such an attack is, since presumably voters would keep trying and eventually either the ballots would be submitted for exceptional manual processing or the machine would be taken out of service. On the other hand, this could serve as a vote suppression attack for particular districts. A more sophisticated version of this attack would be to have the scanner count votes and if it detects that a lot of voters are voting Burr rather than Hamilton, it starts failing more frequently, suppressing votes in Burr precincts.
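
To get a feel for how much leverage selective failure gives the attacker, here is a toy simulation with entirely made-up parameters: the compromised scanner spuriously rejects some fraction of Burr ballots, and some fraction of those voters give up or their ballots fall out of the normal count. None of this is drawn from any real study; it just shows the shape of the effect.

    # Toy model of the selective-rejection attack: reject a fraction of
    # ballots for the targeted candidate and assume a fraction of those
    # are never successfully recast. All parameters are invented.
    import random

    def simulate(n_voters=10_000, burr_share=0.52, reject_rate=0.10,
                 give_up_rate=0.30, seed=1):
        random.seed(seed)
        counted = {"Hamilton": 0, "Burr": 0}
        for _ in range(n_voters):
            choice = "Burr" if random.random() < burr_share else "Hamilton"
            if (choice == "Burr" and random.random() < reject_rate
                    and random.random() < give_up_rate):
                continue  # ballot spuriously rejected and never counted
            counted[choice] += 1
        return counted

    # Burr's counted margin shrinks relative to his actual support.
    print(simulate())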

There are also invisible attacks: the scanner could simply record votes for Burr instead of Hamilton. As I noted previously, this would only get caught in an audit with a separate scanner or hand counting, since there's no display to the user of the scanned ballot. Even if there were, the scanner could display "Burr" but record "Hamilton", so there's no real way for the user to detect attack. Not all jurisdictions do audits and it's not clear that even where they are done, they're done in a powerful enough way to detect and correct this kind of tampering (more on auditing in a later post as well).

Even an attacker who doesn't control the machine can still perform a DoS attack: optical scanners and the sheet feed mechanisms inside are relatively easy to jam. If you're not worried about getting caught, you could clearly cover your ballot with glue and then shove it in the scanner. There are probably substances you could use that would dirty the scan heads enough to make votes hard to read. Could you do this selectively? Maybe. Opscan ballots in Santa Clara are two-column, but each race sits in a single column, so you can't prefer one candidate to another within a race. But what you could potentially do, depending on the scanner, is dirty the section of the head over a given race, thus suppressing votes for just that race, which would let you have a semi-selective effect. This would be an invisible attack unless the scanner is configured to report undervotes.

DREs
There's an enormous amount of room for attack with DREs. You could clearly mount a simple vote-flipping attack, simulating a flakey touchscreen and making the machine visibly shift votes from Hamilton to Burr. However, you can do far better than that. The attack that's most obvious—and has generated the most concern—is simply to have the machine record an entirely different set of votes than the voters are casting. [In the research community it's generally considered déclassé to just stuff the ballot results with fake votes, because at least in principle jurisdictions record the total number of voters and can compare it to the number of ballots; instead you change users' votes.] This isn't even noticeable to the voter, since the UI all looks right. It's just that a different vote is recorded. Without an independent record of your vote, there's no straightforward way to detect or correct this kind of attack.
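
To make the bracketed point concrete: the one cross-check that is essentially always available is the poll book, since the number of ballots a machine reports should match the number of voters who signed in. A toy version of that check (my sketch, not any jurisdiction's actual canvass procedure) is below; note that vote flipping passes it, which is exactly why flipping is the more serious threat.

    # Why crude ballot stuffing is risky: reported ballots should equal
    # poll book sign-ins. Flipping votes leaves the totals intact.

    def counts_consistent(ballots_recorded: int, pollbook_signins: int) -> bool:
        return ballots_recorded == pollbook_signins

    print(counts_consistent(862, 850))  # stuffing: caught by the count
    print(counts_consistent(850, 850))  # flipping: sails right through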

If a VVPAT is in use (see the previous post for a description of a VVPAT), the attacker's job becomes a little harder. If he just creates totally phony records, then the electronic results won't match the results on the VVPAT. The obvious attack here is what's called a "presentation attack". The machine accepts the user's input but somewhere along the line it changes the vote. Maybe it does it on initial input but more likely it does it before the summary screen or just on the VVPAT. Studies show that users aren't very good at checking these and so mostly this will work. Even if the user catches it, the machine just lets them correct "their" mistake and then perhaps waits a while before attacking again. A really sophisticated machine might be able to monitor user behavior to try to pick out users who seemed uncertain about how to use the machine and attack them preferentially. The advantage of this kind of attack is that it makes the VVPAT and the electronic records line up, making audits much harder.

Other attacks are possible too: the attacker controls the printer, so perhaps he can print the VVPAT as normal and then when the voter casts their vote, he waits and then prints "VOID" or "REJECTED" (depending on how the machine would ordinarily display rejected ballots) and then prints his own votes of choice. This just looks like a bunch of extra printer activity and since voters don't have a lot of idea how the VVPAT is supposed to behave, it's not at all clear they would notice.

As with OPSCAN, there are also a broad variety of DoS and selective DoS attacks. The machines can be programmed to slow down or crash when you vote for specific candidates. They can fail to display specific races. For instance, if the attacker wants to influence a Senate race and the machine detects that you voted for party A for president, it could simply decline to show you the Senate race, thus pushing that race toward party B. Again, you might not notice this in the VVPAT/summary screen. Even without full control of the machine, it's probably possible to crash it in various ways without getting blamed.

Malice, incompetence, etc.
I said at the beginning of this post that an attacker can simulate more or less any non-malicious failure--and has a real incentive to do so. However, as anyone who has worked with computers can tell you, they are perfectly capable of behaving in lots of surprising ways without any malicious activity at all. Any report of failure needs to be evaluated using Hanlon's Razor. We have plenty of evidence that voting machines in particular can do things that look like attacks even when it's pretty clear they are not (see this video, for instance), so while certainly we have to be wary of attack, it's probably a mistake to jump to the conclusion that there's been an attack just because you see something funny.

Next: Malicious failure modes at election central

UPDATE: Fixed cut and paste error.