EKR: October 2011 Archives

 

October 25, 2011

Threat Level writes about the release of a denial of service tool for SSL/TLS web servers.
The tool, released by a group called The Hackers Choice, exploits a known flaw in the Secure Socket Layer (SSL) protocol by overwhelming the system with secure connection requests, which quickly consume server resources. SSL is what's used by banks, online e-mail providers and others to secure communications between the website and the user.

The flaw exists in the process called SSL renegotiation, which is used in part to verify a user's browser to a remote server. Sites can still use HTTPS without that renegotiation process turned on, but the researchers say many sites have it on by default.

"We are hoping that the fishy security in SSL does not go unnoticed. The industry should step in to fix the problem so that citizens are safe and secure again. SSL is using an aging method of protecting private data which is complex, unnecessary and not fit for the 21st century," said the researchers in a blog post.

The attack still works on servers that don't have SSL renegotiation enabled, the researchers said, though it takes some modifications and some additional attack machines to bring down the system.

Background
In order to understand what's going on, you need to have some background about SSL/TLS. An SSL/TLS connection has two phases:

  • A handshake phase in which the keys are exchanged
  • A data transfer phase in which the actual data is passed back and forth.

For technical cryptographic reasons which aren't relevant here, the handshake phase is generally much more expensive than the data transfer phase (though not as expensive as people generally think). Moreover, the vast majority of the cost is to the server. Thus, if I'm an attacker and you're a server and I can initiate a lot of handshakes to you, I can force you to do a lot of computations. In large enough quantity, then, this is a computational denial of service attack. This is all very well known. What the attack would look like is that I would set up a client or set of clients which would repeatedly connect to your server, do enough of a handshake to force you to incur computational cost, and disconnect.
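To make that concrete, here's a minimal Python sketch (an illustration, not the attack tool; the hostname is a placeholder for a server you control) that simply opens and discards TLS connections in a loop. Each completed handshake forces the server to do its comparatively expensive private-key operation:

    import socket
    import ssl
    import time

    HOST, PORT = "tls-test.example.net", 443   # placeholder: a server you control
    N = 100                                    # number of handshakes to time

    ctx = ssl.create_default_context()
    ctx.check_hostname = False                 # test setup only
    ctx.verify_mode = ssl.CERT_NONE

    start = time.perf_counter()
    for _ in range(N):
        with socket.create_connection((HOST, PORT)) as raw:
            # wrap_socket() runs the full handshake before returning
            with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                pass                           # no data transfer; just tear down
    elapsed = time.perf_counter() - start
    print(f"{N} handshakes in {elapsed:.2f}s ({N / elapsed:.1f} handshakes/sec)")

Real attack tools don't even bother with the client-side crypto this sketch does; they replay canned messages, as described below.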

What's slightly less well-known is that SSL/TLS includes a feature called "renegotiation", in which either side can ask to do a new handshake on an existing connection. Unsurprisingly, the cost of a new handshake is roughly the same as that of an initial one. [Technical note: not if you're doing resumption, but in this case the client wouldn't offer resumption, since he wants to maximize server cost.] So, what this attack would look like is that instead of opening multiple connections, I'd open a single connection and just renegotiate over and over. As I said, this is slightly less well-known, but the possibility has certainly been known for some time; most of the analyses I have seen, though, suggested that it wasn't a major improvement from the attacker's perspective.

The Impact of This Attack
What you should be asking at this point is whether a computational DoS attack based on renegotiation is any better for the attacker than a computational DoS attack based on multiple connections. The way we measure this is by the ratio of the work the attacker has to do to the work that the server has to do. I've never seen any actual measurements here (and the THC guys don't present any), but some back of the envelope calculations suggest that the difference is small.

If I want to mount the old, multiple connection attack, I need to incur the following costs:

  1. Do the TCP handshake (3 packets)
  2. Send the SSL/TLS ClientHello (1 packet). This can be a canned message.
  3. Send the SSL/TLS ClientKeyExchange, ChangeCipherSpec, Finished messages (1 packet). These can also be canned.

Note that I don't need to parse any SSL/TLS messages from the server, and I don't need to do any cryptography. I'm just going to send the server junk anyway, so I can (for instance) send the same bogus ClientKeyExchange and Finished every time. The server can't find out that they are bogus until it's done the expensive part [Technical note: the RSA decryption is the expensive operation.] So, roughly speaking, this attack consists of sending a bunch of canned packets in order to force the server to do one RSA decryption.

Now let's look at the "new" single connection attack based on renegotiation. I need to incur the following costs.

  1. Do the TCP handshake (3 packets) [once per connection.]
  2. Send the SSL/TLS ClientHello (1 packet). This can be a canned message.
  3. Receive the server's messages and parse the server's ServerHello to get the ServerRandom (1-3 packets).
  4. Send the SSL/TLS ClientKeyExchange and ChangeCipherSpec messages (1 packet).
  5. Compute the SSL/TLS PRF to generate the traffic keys.
  6. Send a valid Finished message.
  7. Repeat steps 2-6 as necessary.

The advantage of this variant is that I get to amortize the TCP handshake (which is very cheap). The disadvantage is that I can't just use canned packets. I need to do actual cryptographic computations in order to force the server to do an RSA private key decryption. This is just a bunch of hashes, but it's still not free.
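For the curious, here's what step 5 amounts to: a sketch of the TLS 1.2 PRF from RFC 5246 (TLS 1.0/1.1 use an MD5/SHA-1 construction with the same shape). The values below are placeholders; the point is that each renegotiation costs the client a handful of HMAC invocations that can't be precomputed, because they depend on the fresh ServerRandom:

    import hashlib
    import hmac

    def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
        """P_SHA256 from RFC 5246: expand (secret, seed) into `length` bytes."""
        out = b""
        a = seed                                              # A(0) = seed
        while len(out) < length:
            a = hmac.new(secret, a, hashlib.sha256).digest()  # A(i) = HMAC(secret, A(i-1))
            out += hmac.new(secret, a + seed, hashlib.sha256).digest()
        return out[:length]

    def tls12_prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
        return p_sha256(secret, label + seed, length)

    # Placeholder inputs; in a real handshake these come from the wire.
    pre_master = b"\x03\x03" + b"\x00" * 46                   # 48-byte premaster secret
    client_random = b"\x11" * 32
    server_random = b"\x22" * 32

    master = tls12_prf(pre_master, b"master secret", client_random + server_random, 48)
    # 104 bytes is the key block size for, e.g., TLS_RSA_WITH_AES_128_CBC_SHA
    key_block = tls12_prf(master, b"key expansion", server_random + client_random, 104)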

Briefly then, we've taken an attack which was previously limited by network bandwidth and slightly reduced the bandwidth (by a factor of about 2 in packets/sec and less than 10% in number of bytes) at the cost of significantly higher computational effort on the attacker's client machines. Depending on the exact characteristics of your attack machines, this might be better or worse, but it's not exactly a huge improvement in any case.

Another factor to consider is the control discipline on the server. Remember that the point of the exercise is to deny service to legitimate users. It's not uncommon for servers to service each SSL/TLS connection in a single thread. If you're attacking a server that does this, and you use a single connection with renegotiation, then you're putting a lot of load on that one thread; a sane thread scheduler will try to give each thread equivalent amounts of CPU, which means that you don't have a lot of impact on other legitimate users; your thread just falls way behind. By contrast, if you use a lot of connections then you get much better crowding out of legitimate users. On the other hand, if you have some anti-DoS device in front of your server, it might be designed to prevent a lot of connections from the same client, in which case the single connection approach would be more effective. Of course, if single-connection attacks become popular, it's trivial to enhance anti-DoS devices to stop them. [Technical note: SSL/TLS content types are in the clear so renegotiation is easily visible.]

Is this a flaw in SSL/TLS?
Zetter and the THC guys characterize this as a flaw in SSL/TLS. Without offering a general defense of SSL/TLS, this seems overstated at best. First, this isn't really a threat that endangers citizens' ability to be "safe and secure". Rather, it's a mechanism for bringing down the Web sites they visit. This isn't to say that there aren't problems in SSL/TLS that would lead to compromise of users' data, but this sort of DoS attack doesn't fall into that category.

Second, computational DoS attacks of this type have been known about for a very long time and in general security protocol designers have made a deliberate choice not to attempt to defend against them. Defenses against computational DoS typically fall into two categories:

  • Force users to demonstrate that they are reachable at their claimed IP address. This prevents "blind" attacks where the attacker can send forged packets and thus makes it easier to track down attackers.
  • Try to impose costs on users so that the ratio of attacker work to defender work is more favorable. (There are a variety of schemes of this type but the general term is "client puzzles").

Because SSL/TLS runs over TCP, it gets the first type of defense automatically. [Technical note: Datagram TLS runs over UDP and so Nagendra Modadugu and I explicitly added a reachability proof mechanism to protect against blind attack.] However, SSL/TLS, like most other Internet security protocols, doesn't do anything to push work onto the client. The general reasoning here is that DoS attackers generally use botnets (i.e., other people's compromised computers) to mount their attacks and therefore have a very large amount of CPU available to them. This makes it very hard to construct a puzzle which poses enough of a challenge to attackers to reduce the attack threat without severely impacting people with low computational resources, such as those on mobile devices. Obviously, there is a tradeoff here, but my impression of the history of DoS attacks has been that this sort of CPU-based attack isn't that common, and so this has been a generally reasonable design decision.
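To make the second category concrete, here's a minimal hashcash-style client puzzle sketch (the generic idea only; SSL/TLS does not actually do this). The server issues a random challenge; the client must grind out a nonce whose hash has a prescribed number of leading zero bits; the server verifies with a single hash. The difficulty parameter is exactly the tradeoff described above: high enough to slow down a botnet, low enough not to lock out a phone.

    import hashlib
    import secrets

    DIFFICULTY_BITS = 20   # example difficulty; expected client work ~2**20 hashes

    def _leading_zero_bits_ok(digest: bytes, bits: int) -> bool:
        return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits) == 0

    def solve(challenge: bytes, bits: int = DIFFICULTY_BITS) -> int:
        """Client side: brute-force a nonce so SHA-256(challenge || nonce) starts
        with `bits` zero bits."""
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if _leading_zero_bits_ok(digest, bits):
                return nonce
            nonce += 1

    def verify(challenge: bytes, nonce: int, bits: int = DIFFICULTY_BITS) -> bool:
        """Server side: one hash to check, versus ~2**bits of work for the client."""
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return _leading_zero_bits_ok(digest, bits)

    challenge = secrets.token_bytes(16)   # server-chosen, bound to this connection
    nonce = solve(challenge)
    assert verify(challenge, nonce)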

More generally, defending against computational DoS attacks is a really hard problem; you need to be able to serve large numbers of people you don't really have a relationship with, but it's easy for attackers who control large botnets to pretend to be a lot of legitimate users. All the known defenses are about trying to make it easier to distinguish legitimate users from attackers before you've invested a lot of resources in them, but this turns out to be inherently difficult and we don't have any really good solutions.

UPDATE: Fixed a writing error. Thanks to Joe Hall for pointing this out.

 

October 18, 2011

Following up on their demonstration attack on Diebold voting machines (writeup, my comments), the Argonne Vulnerability Assessment Team has developed a set of Suggestions for Better Election Security. My review comments are below:

I've had a chance to go over this document and while there are some suggestions that are valuable, many seem naive, impractical, or actively harmful. More generally, I don't see that it derives from any systematic threat model or cost/benefit analysis about which threats to address; merely following the procedures here would--at great expense--foreclose some security threats while leaving open other threats that are arguably more serious both in terms of severity and ease of attack. Finally, many of the recommendations here seem quite inconsistent with the current state of election practice. That's not necessarily fatal, since that practice is in some cases flawed, but there doesn't seem to be any acknowledgement that these seemingly minor changes actually would require radically reworking election equipment and procedures.

If this document is to be useful rather than harmful, it needs to start with a description of the threat model--and in particular the assumed attacker capabilities--and then proceed to a systematic analysis of which threats it is economical to defend against, rather than just being a grab bag of isolated security recommendations apparently designed to defend against very different levels of threat.

Pre- And Post-Election Inspections
The authors recommend:

... at least 1% of the voting machines actually used in the election-randomly chosen-should be tested, then disassembled, inspected, and the hardware examined for tampering and alien electronics. The software/firmware should also be examined, including for malware. It is not sufficient to merely test the machines in a mock election, or to focus only on cyber security issues!

This document does not specify how the hardware must be "examined", but a thorough examination, sufficient to discover attack by a sophisticated attacker, is likely to be extremely time consuming and expensive. A voting machine, like most embedded computers, consists of a number of chips mounted on one or more printed circuit boards as well as peripherals (e.g., the touchscreen) connected with cabling. This document seems to expect that "alien electronics" will be a separate discrete component added to the device, but this need not be so. A moderately sophisticated attacker could modify or replace any of these components (for instance, by replacing the chips with lookalike chips). As most of these components are sealed in opaque plastic packaging, assessing whether they have been tampered with is no easy matter. For instance, in the case of a chip, one would need to either remove the chip packaging (destroying it in the process) or x-ray it and then compare to a reference example of the chip in order to verify that no substitution had occurred. These are specialized and highly sophisticated techniques that few people are qualified to carry out, and yet this document proposes that they be performed on multiple machines in every jurisdiction in the United States, of which there are on the order of 10,000.

Moreover, this level of hardware analysis is useless against a broad spectrum of informational threats. An attacker who can rewrite the device's firmware--trivial with physical access to the internals, but the California TTBR discovered a number of vectors which did not require such access--can program his malware to erase itself after the election is over, thus evading inspection. Further, to the extent to which the microprocessors in the device contain firmware and/or microcode, it may not be possible to determine whether they have been tampered with, since that would require interfaces directly to the firmware which do not depend on the firmware itself; these do not always exist. Absent some well-defined threat model, it is unclear why this document ignores these threats in favor of less effective physical attacks.

Finally, doing any of this inspection requires extremely detailed knowledge of the expected internals of the voting machine (it is insufficient to simply do an exact comparison against a single reference unit because there is generally some manufacturing variation due to inter-run engineering fixes and the like). This information would either need to be discovered through expensive reverse engineering or obtained by having the vendor release it, which vendors have historically been very reluctant to do, especially as releasing it to every county in the US would be much like publishing it.

Official and Pollworker Verification
This document recommends that voting officials and pollworkers be subject to a number of verification requirements. In particular:

  • Background checks, including interviews with co-workers
  • Citizenship verification
  • Positive physical identification of poll workers prior to handling sensitive materials
  • Test bribery

These recommendations are highly discordant with existing practice. In real jurisdictions, it is extremely difficult to find poll workers (hence the high number of retirees) and they are paid relatively nominal sums (~$10/hr). In my experience, poll workers feel they are performing a public service and are unlikely to be pleased to be treated as criminals, so I suspect many would balk at being required to undergo a background check. Of course, it's unclear whether poll workers even count for the purposes of background checks. The authors write:

Minimum: All election officials, technicians, contractors, or volunteers who prepare, maintain, repair, test, inspect, or transport voting machines, or compile "substantial" amounts of election results should have background checks, repeated every 3-5 years, that include a criminal background history, credit check, and (when practical) interviews with co-workers.

Volunteers certainly set machines up in the polling place. I'm not sure if this counts as "preparing". It wouldn't surprise me if volunteers transported machines. The bottom line here is that this requirement is problematic either way: if you think poll workers have to get background checks, it's really invasive. If you don't, you're ignoring a category of threat from people who have very high levels of machine access (assuming you think that background checks do anything useful, which seems rather dubious in this case.)

The requirement for positive physical identification seems extremely impractical. As noted above, typical polling places are operated by semi-volunteer poll workers. Given the ease of acquiring false identification, it seems highly unlikely that they will be able to validate the identity of either the poll workers under their supervision or of the (alleged) election officials to whom they are supposed to deliver election materials. Similarly, it's not clear to me that verifying US Citizenship does anything useful. Is there some evidence that non-citizens are particularly likely to want to tamper with elections or that it's especially difficult for foreign countries which want to tamper with elections to find US citizens to do it for them?

This document recommends attempting to bribe a subset of poll workers as a test. I'd be interested to learn whether any systematic study of this has been done on the likely subject population. I.e., does this sort of intervention actually reduce the effective level of bribery?

Seal Practice
This document contains a number of detailed recommendations about seal practice (required level of training, surface preparation, inspection protocols). I don't think there's any doubt that seals are a weak security measure and much of the research showing that comes from the Argonne group. However, it's also not clear to me that the measures described here will improve the situation. Extensive human factors research in the Web context shows that users typically ignore even quite obvious indications of security failures, especially in contexts where they get in the way of completion of some task.

Is there research that shows that (for instance) 10 minutes of training has any material impact on the detection rate of fake seals, especially when that detection is performed in the field?

The authors also write:

Minimize the use of (pressure sensitive) adhesive label seals

I don't really understand how this recommendation is operationalizable: Existing voting equipment is designed with numerous points of entry which are not obviously securable in any way, and for which adhesive seals appear to be the most practical option. What is the recommendation for such equipment?

Excessive Expert Manpower Requirements
The authors write:

Minimum: Election officials will arrange for a local committee (pro bono if necessary) to serve as the Election Security Board. The Board should be made up primarily of security professionals, security experts, university professors, students, and registered voters not employees of the election process. The Board should meet regularly to analyze election security, observe elections, and make suggestions for improved election security and the storage and transport of voting machines and ballots. The Board needs considerable autonomy, being able to call press conferences or otherwise publicly discuss its findings and suggestions as appropriate. Employees of companies that sell or manufacture seals, other security products often used in elections, or voting machines are not eligible to serve on the Board.

The United States has something like 10,000 separate election jurisdictions. If each of these convenes a board of 3-5 people, then approximately 30,000-50,000 security experts will be required. Given that all existing voting system reviews have been short-term affairs, in many cases with compensated experts, and have still had to draw from the entire country to gather ~30 experts, it's hard to see where we are going to find 1,000 times that many people for a largely thankless long-term engagement.

Miscellaneous
The authors recommend that:

The voting machines for the above inspection (or trial bribery discussed below) should be randomly chosen based on pseudo-random numbers generated by computer, or by hardware means such as pulling numbers or names from a hat.

Verifiably generating random values is a significantly harder problem than this makes it sound like. In particular, pulling names and numbers from a hat is trivial to game.
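For comparison, the usual way to make such a selection auditable (a sketch of the general technique, not something this document proposes) is to fix the list of machines first, then generate a seed in public, for example by dice rolls at a public ceremony, and derive the selection deterministically from that seed so anyone can re-run it:

    import hashlib

    def select_machines(seed: str, machine_ids: list[str], k: int) -> list[str]:
        """Deterministic, publicly checkable selection: sort machines by the hash
        of (seed, machine id) and take the first k. The seed must be generated in
        public *after* the machine list is fixed, or the selection can be gamed."""
        return sorted(
            machine_ids,
            key=lambda m: hashlib.sha256(f"{seed}|{m}".encode()).hexdigest(),
        )[:k]

    machines = [f"machine-{i:04d}" for i in range(1, 501)]   # hypothetical inventory
    seed = "41152 63313 22741"                               # e.g. public dice rolls
    print(select_machines(seed, machines, 5))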

Recommended: Each individual in the chain of custody must know the secret password of the day or the election before being allowed to take control of the assets.

Any secret that is distributed this widely is hardly likely to remain a secret for long.

Recommended: Before each election, discuss with poll workers, election judges, and election officials the importance of ballot secrecy, and the importance of watching for miniature wireless video cameras in the polling place, especially mounted to the ceiling or high up on walls to observe voters' choices. The polling place should be checked for surreptitious digital or video cameras at least once on election day.

Elections are typically conducted in spaces which are otherwise reserved for other purposes and therefore are not empty. In my experience with such spaces, it would be very difficult to practically inspect for a surreptitious camera placed in the ceiling and concealed with any level of skill. This is particularly difficult in spaces with drop ceilings, ventilation ducts, etc.

 

October 11, 2011

I've now completed two different flavors of ultradistance event, Ironman-distance triathlon and a 50 mile trail race. In terms of time these events are fairly comparable—my Ironman personal record (at Ironman Canada) was 10:13 and my time at the Firetrails 50 was 10:10—so I feel like I have some basis for comparison.

Intensity
For my money, the biggest difference is that—at least for age groupers—Ironman seems to be raced at a much higher level of intensity than trail running. I think this can be attributed to a number of factors, some inherent and some cultural.

First, running 26 miles on road is really different than running 50 miles on dirt trails. Just covering that much distance on your feet, and having to constantly adapt to changing footing is really hard on your legs. Remember that even though the time in the Ironman is longer, a lot of that is on the bike, which isn't anywhere near as hard on your legs. Then there's the elevation change: the Ironman Hawaii run course has 400 or so feet of elevation gain; the Firetrails 50 run course has 7800 feet. When you put these two factors together, my experience is that you end up feeling a lot more beat up at the same intensity level; I was about as sore the day after Firetrails 50 as I've been after any Ironman, even though I went a lot easier. I don't think there's any way I could have raced a 50 mile trail event at the same level of perceived effort (e.g., heart rate) as I would an Ironman.

Conversely, you don't need to race a trail event the same way. Every halfway decent long course age group triathlete's goal is to qualify for Ironman Hawaii. The comparable objective in trail running is the Western States 100. (There are other big ultradistance running races, but they tend to be invitation-only rather than having clear qualifying criteria.) However, the qualifying procedures are radically different: Hawaii qualification is by place; each race gets some number of slots for each age group and the top N finishers who want to go to Hawaii get those slots. This means that if the person in front of you is in your age group, you have a very direct incentive to finish ahead of them, even if you're nowhere near actually winning your age group. By contrast, Western States qualification is by time: every athlete who meets the time cutoff can apply and a lottery is used to decide who actually gets to race. This means you have no direct incentive—other than pride—to beat anyone in particular, since you can't stop them from qualifying (unless you trip them or something). Moreover, the Western States qualifying times are comparatively soft; out of 193 finishers at Firetrails 50, 140 qualified. Hawaii qualification rates are more like 10%. This means that you don't need to kill yourself; you just need to have an OK day. That's why you see people walking the aid stations in trail runs; good age groupers don't walk triathlon aid stations unless they're basically melting down.

Social Structure
Compared to trail running, triathlon is intensely competitive. Obviously, trail running is competitive at the upper levels, but even mid-pack triathletes can be super-aggressive. I've been kicked, shoved, and swum over plenty of times, even in local races where I'm not in contention for anything. The bike and run tend to be less bad because people are more spread out, but there's still plenty of jockeying for position. This happens in road racing, too; people generally don't push too much, but I've definitely had to fight my way through a crowd plenty of times and there's a lot of bumping at the start. By contrast, trail running just seems a lot more mellow, even in situations that are inherently just as crowded.

I don't have a complete explanation for this. I'm sure it's partly just cultural, but I suspect it's also the setting you're in. Even nice road races and triathlons aren't typically in places that are that interesting. If you want to run on the Boston marathon course, nobody's stopping you, and to be honest, I'd describe the Ironman Hawaii course as more grim (20+ miles of asphalt and lava fields) than scenic. The only real reason to do the race, then, is to compete. By contrast, trail runs tend to be in nice places, often ones where it would be inconvenient to do a long run because you couldn't resupply yourself easily. This means you get a different, less competitive, class of people. Moreover, because the terrain is challenging and you're in the middle of nowhere, I think that people feel more like they're in it together.

Price
One more thing: trail racing is super-cheap. Even a cheap Ironman, like Vineman, costs $350 in advance and $450 on short notice. Ironman Canada is $675. Plus, if it's out of town you end up paying $200-300 to get your bike there. I paid $120 at the last minute for Firetrails 50, and you can put your shoes in your carry-on. I'm not blaming the people who run triathlons: it's an expensive sport to put together. But that doesn't mean it's not nice to race on the cheap.

 

October 10, 2011

Back in 2008 I was forced to DNF at mile 18 of the Dick Collins Firetrails 50 due to an ITB issue. 2011 has been a pretty good year in terms of training, so I thought maybe it was a good time to give it another shot. I knew going in that I wasn't really ready: I've been increasing my training load, but prior to September 2, my longest training day of the year had been 17 miles with about 2000 ft of climbing, which I'd normally consider barely adequate for a 50K, let alone a 50M. But after the PCTR Santa Cruz 50K was cancelled, I went looking for another event, and there was Firetrails 50 on the schedule.

Still, I was pretty iffy, and my original plan was to see how I felt on my September 2 long run (19 miles, ~2500 ft of climb), but when I went to see about registering on Saturday, I noticed that the race was already full. I emailed the RD to see about a cancellation, and (unsurprisingly) the scarcity effect kicked in and I went from being ambivalent to actually wanting to race, so when I heard there was now a slot, I signed up. However, after Sunday, my hamstrings, which had been gradually tightening up, got super tight and no amount of stretching seemed to help. Luckily, I was able to get a last minute appointment with Joy at SMI; she didn't fix me completely, but did manage to get my legs loose enough that I figured I had a reasonable shot, especially if I kept up with yoga and stretching.

The race itself went fairly smoothly. I went out very conservatively, at around 10 minutes a mile. [This pace is a little misleading; you run the flats and downhills and walk the uphills, so you're actually not running 10 minute miles; you run like 8:30 or so, but with the walking it averages out.] Of course, over an event this long there are always a few snags:

  • There was a beehive somewhere around mile 10, and pretty much everyone, including me, got stung.
  • Around mile 15 the tape on my nipples started to come off and I ended up with quite a bit of chafing. Luckily, I was able to score some band-aids and duct tape at an aid station (the duct tape because practically nothing sticks to wet skin) and this mostly solved the problem though I felt some discomfort the rest of the way.
  • Around mile 27 or so, something went wrong with my left heel and I spent the next mile or so wincing every time I landed wrong. Eventually it resolved itself, though, and I ran the rest of the way without incident.

Around mile 40 or so, I started to get pretty confident I would finish, but I stuck with a conservative game plan until mile 45, at which point I started to press the pace a bit. Obviously, I was pretty tired, but with only 5 miles to go and feeling like I was at maybe mile 10 of an ordinary day, I figured I could afford to push it. I blew through the mile 45.5 aid station without stopping and decided I'd just run the rest of it. I didn't have a GPS and there aren't really mile markers, but I suspect I was running about 8:30 pace continuously, and I passed maybe 10 people over the next 5 miles, and did the last mile or so pretty hard (maybe 8:00 or 7:30 pace). My eventual finishing time was 10:10 and change. (I don't know exactly because the results are screwed up and inaccurately have me at 10:34, which is definitely wrong.) This is easily good enough to qualify me for Western States (the cutoff is 11 hours for a 50), so I'm pretty satisfied with this time. I'm not sure if I really feel like doing Western States, but it's nice to know I could sign up if I wanted to (there's a lottery to determine who actually gets in).

 

October 7, 2011

Argonne Labs's demonstration attack on a Diebold voting machine is getting a lot of press. The article above has the details, but briefly, what the Argonne team did was to insert some malicious "alien" electronics between the CPU and the touch screen. Unsurprisingly, that device can modify input from the touch screen and/or output to the touch screen, allowing the attacker to tamper with the election. To read the press coverage and the quotes given by the authors, you might get the impression that this was something new. For instance:

"This is a fundamentally very powerful attack and we believe that voting officials should become aware of this and stop focusing strictly on cyber [attacks]," says Vulnerability Assessment Team member John Warner. "There's a very large physical protection component of the voting machine that needs to be addressed."

These comments aside, there's not really any new information here; rather, it was completely obvious that this sort of thing was possible to anyone who knew how the devices were constructed. It's well-known that the only defenses against this were physical security of the machines themselves (tamper seals, locks, custody, etc.) and that those defenses were extremely weak. Alex Halderman and his team demonstrated some not-dissimilar attacks a while back on the Indian Electronic Voting Machines. The EVEREST report described a man-in-the-middle attack on the iVotronic interface to the VVPAT vote printer. Indeed, the same team from Argonne demonstrated a similar attack on a Sequoia system in 2009.

There are a number of reasons why voting researchers have historically focused on informational attacks (as I've said before, "cyber" isn't the word that computer scientists would typically use). First, they're easier to do wholesale. While it's moderately expensive—though not that expensive—to reverse engineer the software and develop an exploit and/or replacement software, once you've done that you can make as many copies as you want. Moreover, if you have a good exploit (like many of the ones described in the TTBR), you may be able to easily install it with very brief physical access, without opening the case, and perhaps without even violating any security seals. For obvious reasons, attacks which can be mounted by voters seem a lot more interesting than attacks which involve semi long-term access to the machine. It's not exactly likely that your average voter is going to be allowed to open the machine in the middle of the election.

Moreover, in some cases, informational attacks (i.e., viruses) have been demonstrated that only require contact with a small number of voting machines. The idea here is that you have temporary access to a given machine, infect it with the virus, and then this somehow spreads to every machine in the county. By contrast, a physical attack like this requires tampering with every voting machine.

Related to this issue, informational attacks can be easier to conceal. If you need to install some sort of attack hardware and have it present during the election, you're either going to need to get access after the election or (a) lose the device and (b) have a high risk of having it discovered in any subsequent inspection. By contrast, software/informational attacks can be designed so that the standard (i.e., authorized) machine inspection mechanisms won't discover them at all, and in many cases can be programmed to self-destruct after the election. It's not clear that there's any plausible non-destructive mechanism that can be used to post-facto detect the tampering (see the TTBR reports again).

Moreover, as I've said, the possibility of physical attacks is totally obvious once you know you can get into the case (with or without violating the tamper seals), and there's a certain level of "difficulty bias" here. Since everyone already knew that physical attacks were possible, as soon as it was demonstrated that you could get into the machine, it wasn't that important to demonstrate the obvious end-to-end attack. However, since software-based attacks were (a) harder and (b) more useful, it was natural for researchers to spend more time working to demonstrate those. That certainly doesn't mean that researchers were somehow unaware that physical attacks were possible.

 

October 2, 2011

As veteran Star Trek viewers know, Starfleet cadets take the Kobayashi Maru test. The details of the test don't matter; what matters is that it's a simulation exercise designed so that there is no way to win, and either the cadet in command lets a bunch of innocent people die or their ship gets destroyed. As Spock puts it in the 2009 Star Trek reboot: "The purpose is to experience fear. Fear in the face of certain death. To accept that fear, and maintain control of oneself and one's crew. This is a quality expected in every Starfleet captain."

I don't really see how it does this, though, because everyone knows that the test is unwinnable and that everyone dies. This makes it pretty hard to obtain the requisite level of immersivity; as games designers know well, getting people to immerse themselves in your simulation game isn't just a matter of having realistic graphics, but also of having there be just the right balance of control and uncertainty. Games which are too easy (e.g., playing in god mode) or too hard (where you clearly can't survive at all), are very hard for people to take seriously, which would seem to be a critical element if you want people to "experience fear".