COMSEC: August 2007 Archives

 

August 22, 2007

I'm not at CRYPTO but my sources tell me that there may have been some more progress on SHA-1 and that the latest estimates are on the order of 2^60.x. Anyone with more details, please post them in the comments.
 

August 6, 2007

In my previous post about SWORDS robots, I referred to "fail-safe" and "fail-unsafe" strategies. Now, clearly, if you're a civilian in the line of fire of a killer robot, you'd consider a strategy in which the robot shut itself down when it couldn't communicate with base to be "safe". But you might feel a little differently if you were a soldier who had to go out into enemy fire because a minor communication glitch caused your robot to shut down.

As another example, take a system like Wireless Access in Vehicular Environments (WAVE), which provides for communications between vehicles and between vehicles and road-side units. WAVE can be used for safety messages, such as the Curve Speed Warning message, which allows a station at the side of the road to broadcast the maximum safe speed for a given curve. Obviously, you'd like there to be some message integrity here to prevent an attacker from broadcasting a fake speed. Now, what happens when the integrity check fails? Do you ignore the message?

A decent argument could be made that either ignoring or trusting such messages was "fail-safe". Obviously, ignoring them appears safe in the sense that your vehicle reverts to what it was without the WAVE functionality, so you haven't been damaged. On the other hand, the curve speed warning is designed to help safety (that's why it's being broadcast) so ignoring it is arguably failing unsafe! I don't really have a position on what's right or wrong here, but it should be clear that the terminology is confusing.
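
To make the two policies concrete, here's a minimal sketch in Python. The message format, the shared key, and the bare-HMAC integrity check are all my own simplifications for illustration; the actual WAVE security design uses certificates and digital signatures:

    import hmac
    import hashlib

    # Hypothetical shared key. Real WAVE security uses certificates and
    # digital signatures rather than a pre-shared MAC key.
    KEY = b"example-roadside-key"

    def verify(message: bytes, tag: bytes) -> bool:
        """Check the integrity tag on a received broadcast."""
        expected = hmac.new(KEY, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def advisory_speed(message: bytes, tag: bytes, default_limit: float) -> float:
        """Decide what curve speed to act on when a warning arrives."""
        if verify(message, tag):
            return float(message.decode())  # trust the roadside unit
        # Integrity check failed. Which branch is "fail-safe"?
        # Policy A: ignore the message and drive as if WAVE weren't there.
        return default_limit
        # Policy B would be to act on it anyway, on the theory that it's a
        # safety message; but then the integrity check bought you nothing.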

I've heard people substitute the terms "fail-open" or "fail-closed", but those are even worse. If you're an electrical engineer, a closed circuit means current flows and an open circuit means current doesn't. On the other hand, an open firewall means that data flows but a closed one means it doesn't.

I don't know of any really good terms, unfortunately.

 

August 5, 2007

Wired reports that the DoD has taken delivery of three "special weapons observation remote reconnaissance direct action system" (SWORDS) robots. (Pretty tricky with those acronyms, guys!) Anyway, these are remote-controlled robots armed with M-249 machine guns.

Apparently these robots were, uh, a bit flakey, but the manufacturers say they've got all the bugs worked out now:

The SWORDS -- modified versions of bomb-disposal robots used throughout Iraq -- were first declared ready for duty back in 2004. But concerns about safety kept the robots from being sent over to the battlefield. The machines had a tendency to spin out of control from time to time. That was an annoyance during ordnance-handling missions; no one wanted to contemplate the consequences during a firefight.

So the radio-controlled robots were retooled, for greater safety. In the past, weak signals would keep the robots from getting orders for as much as eight seconds -- a significant lag during combat. Now, the SWORDS won't act on a command, unless it's received right away. A three-part arming process -- with both physical and electronic safeties -- is required before firing. Most importantly, the machines now come with kill switches, in case there's any odd behavior. "So now we can kill the unit if it goes crazy," Zecca says.

OK, so ignoring the wisdom of starting from a platform which used to "spin out of control", I'm sort of interested in how the "kill switch" works. As far as I know, there are two basic ways to build a system like this:

  • Fail-unsafe. The kill command is just a separate command that tells the unit to shut down.
  • Fail-safe. The control unit regularly (or continuously) sends a signal. If the robot stops getting the signal, it shuts down. (Both designs are sketched in code after this list.)
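
Here's a rough sketch of the two designs in Python. This is entirely my own invention for illustration; I have no idea what the real control software looks like, and the radio and robot objects, the command names, and the timeout value are all made up:

    import time

    HEARTBEAT_TIMEOUT = 2.0  # seconds; an arbitrary made-up value

    def control_loop_fail_unsafe(radio, robot):
        """The robot runs until it receives an explicit kill command."""
        while True:
            command = radio.receive()   # returns None if the link is down
            if command is None:
                continue                # no signal: carry on as before
            if command == "KILL":
                robot.shutdown()
                return
            robot.execute(command)

    def control_loop_fail_safe(radio, robot):
        """The robot shuts down whenever the control signal goes quiet."""
        last_heard = time.monotonic()
        while True:
            command = radio.receive()
            if command is not None:
                last_heard = time.monotonic()
                robot.execute(command)
            if time.monotonic() - last_heard > HEARTBEAT_TIMEOUT:
                robot.shutdown()        # lost contact: stop, don't keep shooting
                return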

It should be pretty clear that if you think there's a high likelihood that the robot's going to go nuts and you want to minimize the chance that it kills your own people, random civilians, their pets, etc., you probably want something that fails safe. This is especially true in view of the implication in this article that signal strength isn't always what you might like. You really don't want to have a situation where the robot is busy slaughtering innocent bystanders and you can't shut it down because your control unit is showing zero bars.

On the other hand, a fail-safe system is also much easier to DoS, and availability probably matters more when the system being DoSed is shooting your enemies than when it's serving up copies of Girls Gone Wild. All the attacker has to do is somehow jam your signal (and remember that since you probably want a cryptographically secured control channel, they only need to introduce enough errors to make the integrity checks fail). This makes the problem of designing the control channel a lot more difficult. I'd definitely be interested in hearing more about the design of the protocol for these gizmos.
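
As a toy illustration of why the attacker's job is so easy (again my own sketch with a made-up key; a real control channel would presumably be more elaborate): flipping a single bit of an authenticated command in transit is enough to make the receiver reject it, and in a fail-safe design a rejected command stream looks just like pressing the kill switch:

    import hmac
    import hashlib

    KEY = b"hypothetical-control-channel-key"

    command = b"ADVANCE"
    tag = hmac.new(KEY, command, hashlib.sha256).digest()

    # The jammer doesn't need to forge anything: one flipped bit in transit
    # is enough to make the integrity check fail.
    garbled = bytes([command[0] ^ 0x01]) + command[1:]

    def accepted(msg: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(hmac.new(KEY, msg, hashlib.sha256).digest(), tag)

    print(accepted(command, tag))  # True: command executes
    print(accepted(garbled, tag))  # False: command rejected, and a fail-safe
                                   # robot treats the resulting silence as a kill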

 

August 3, 2007

For the past couple months I've been spending most of my time working on California's Top-to-Bottom Review of electronic voting systems certified for use in California.

The overall project was performed under the auspices of UC and led by Matt Bishop (UC Davis) and David Wagner (UC Berkeley), who did a great job of negotiating a wide variety of organizational obstacles to get the project going and keep it on track.

This project reviewed the systems of three manufacturers:

  • Diebold Election Systems Inc. (DESI)
  • Hart InterCivic
  • Sequoia Voting Systems

Each company makes both an optical scanner for paper ballots and a computerized direct recording electronic (DRE) voting machine (these are often called touchscreens, but the Hart system actually uses a clickwheel), as well as a back-end election management system.

Each system was assigned to three teams:

  • A documentation team which reviewed only the documentation.
  • A "red team" which conducted penetration testing.
  • A source code team which reviewed the source code.

There was also an accessibility team for all the systems.

I led the Hart source code team, consisting of me, Srinivas Inguva, Hovav Shacham, and Dan Wallach, sited at an undisclosed location which can now be disclosed as SRI International in Menlo Park. Our report was published yesterday, just ahead of the statutory deadline for the State to decide on whether these systems will continue to be certified (more detail here). You can get it here and all the reports here.

I wasn't planning on saying much about this on EG. Most of what I have to say is already said better in our report. I did want to say a word about my team, who put in extraordinary amounts of effort under an extremely tight timeline: just over a month from the time we got the Hart source to the delivery of the final report. Thanks, guys, and I look forward to working with you again, hopefully next time in a room with 24x7 air conditioning.