COMSEC: November 2008 Archives

 

November 29, 2008

OK, so opinions differ about whether or not it's a good idea to encourage the use of self-signed certificates for SSL servers. As I read the situation, the basic arguments go like this:

For:
Active attacks are relatively uncommon, but passive sniffing is a big problem, so the world would be better off if people used SSL even if there is no real authentication of the server. Moreover, if you use SSH-style "leap-of-faith" authentication techniques, where you memorize the server's certificate and get worried if it changes, you are fairly resistant to active attack (there's a sketch of this below).

Against:
Active attacks are a real threat, and people are already way too willing to ignore warnings from the browsers about invalid certificates. If we encourage self-signed certs, people will only get more lax.
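
To make the leap-of-faith approach concrete, here's a minimal sketch (in Python, with a made-up pin-file location and placeholder hostnames) of what SSH-style certificate memorization might look like on the client side: remember the server's certificate fingerprint the first time you see it, and complain if it ever changes.

```python
import hashlib
import json
import os
import socket
import ssl

PIN_FILE = os.path.expanduser("~/.leap_of_faith_pins.json")  # made-up location for the pin store


def get_fingerprint(host, port=443):
    """Connect without CA validation and return the server cert's SHA-256 fingerprint."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # no CA/name checks here
    ctx.verify_mode = ssl.CERT_NONE  # accept self-signed certificates
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()


def leap_of_faith(host, port=443):
    """Trust the certificate on first use; warn loudly if it ever changes."""
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    key = "%s:%d" % (host, port)
    seen = get_fingerprint(host, port)
    if key not in pins:
        # First contact: remember the fingerprint (the "leap").
        pins[key] = seen
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return True
    if pins[key] != seen:
        # Certificate changed: could be a legitimate re-key, could be an active attack.
        print("WARNING: certificate for %s has changed" % key)
        return False
    return True
```

A real client would need to be more careful about protecting the pin store and about what the user is told when the check fails, but the basic mechanism really is about this simple.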

This is a contentious issue in the security community, but few of us are in a position to do much other than rant. On the other hand, if you work for a major browser vendor, you do get to do something. It was big news (at least in the narrow security community) a while back when Firefox 3 took a much more aggressive line on invalid certificates. I was initially sort of sanguine about this turn of events, since many security types have long been worried about users ignoring error messages (see above), but (at minimum) the execution seems to be a little lacking.

Here's how things shake out when you use Firefox to connect to some site with an "invalid" cert. First, you get the following error screen:

So, first, this looks like a hard error to any sane person. In the past few weeks I've seen several people not really know what to do here, and even I've done a double take at least once before I realized it was just a certificate error I could override (note that the dialog doesn't in any way suggest that this could be intentional and/or safe in certain circumstances). Anyway, once you figure out what's going on, you click "Or you can add an exception..." which takes you to the following screen:

I'm not sure I entirely agree with Firefox's opinion about when you should add an exception. I get https: URLs fairly often in contexts where I'm not overly worried about security. If you would have been willing to retrieve the page with HTTP, you should also be willing to retrieve it with HTTP over TLS. Maybe it's bad policy on the server's part, but it's not unsafe as far as I can tell.
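
For what it's worth, here's roughly what "treat it like HTTP" amounts to in code: fetch the https: URL but skip certificate validation entirely (the URL below is just a placeholder). You get protection against passive sniffing and nothing against an active attacker, which is the plain-HTTP trust model plus encryption.

```python
# Fetch an https: URL the way you'd fetch plain HTTP: encrypted, but with no
# check on who is on the other end. Placeholder URL; don't do this anywhere
# the server's identity actually matters.
import ssl
import urllib.request

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # accept self-signed (or any) certificate

with urllib.request.urlopen("https://self-signed.example.org/", context=ctx) as resp:
    page = resp.read()
```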

If you click "Add Exception..." you then get:

Note that you can't actually add the exception at this point. Every button is grayed out besides "Get Certificate" and "Cancel". When you click "Get Certificate", the browser fills in the information, giving us the following dialog:

Now you can confirm the exception, and after four separate dialogs you can finally get to the original page you were looking for.

Whatever one's position on self-signed certs, this all seems unnecessarily clumsy. I'm particularly struck by dialog 3, where they force you to download the certificate, despite the fact that Firefox absolutely has a copy, having obtained it when it first contacted the server. Why doesn't it just fill in the dialog instead of forcing you to click through? It's one thing to give you an alarming warning, but the rest of this feels a lot like editorializing via UI. You know what I'm talking about here: we don't think you should be doing this, so despite the fact that you're insisting on it, we'll make it as inconvenient and irritating as possible. I don't know whether that's what was in the programmers' heads or not, but it seems to me that one could produce a rather better UI even if the underlying objective were to discourage self-signed certs.
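
Just to underline the point about dialog 3: the server's certificate arrives as part of the TLS handshake, so one connection is all it takes to have it in hand. Here's a near one-liner using Python's standard library (placeholder hostname):

```python
# The certificate is delivered during the handshake itself; no separate
# "download" step is needed to inspect it.
import ssl

pem = ssl.get_server_certificate(("self-signed.example.org", 443))
print(pem)  # PEM-encoded certificate, as received from the server
```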

 

November 26, 2008

As you may have heard, President-Elect Obama may need to give up his Blackberry for "security reasons":

But before he arrives at the White House, he will probably be forced to sign off. In addition to concerns about e-mail security, he faces the Presidential Records Act, which puts his correspondence in the official record and ultimately up for public review, and the threat of subpoenas. A decision has not been made on whether he could become the first e-mailing president, but aides said that seemed doubtful.

...

Diana Owen, who leads the American Studies program at Georgetown University, said presidents were not advised to use e-mail because of security risks and fear that messages could be intercepted.

"They could come up with some bulletproof way of protecting his e-mail and digital correspondence, but anything can be hacked," said Ms. Owen, who has studied how presidents communicate in the Internet era. "The nature of the president's job is that others can use e-mail for him."

These seem like separate issues. I don't know what the Presidential Records Act says beyond the Wikipedia article, but presumably it's an argument against the President using email at all, not just a Blackberry. What's required here, presumably, is discretion about what gets sent over the Blackberry.

The security ("hacking") problem seems more serious. There are a number of issues here, including:

  • Confidentiality of the data going to and from the Blackberry.
  • Remote compromise of the Blackberry.
  • Tracking of the President via his Blackberry.

The confidentiality problem is comparatively easy to address. Cellular networks generally have relatively weak encryption, and even if that weren't true, you can't trust the cellular provider anyway. That said, there's plenty of technology for running encrypted channels from the Blackberry back to some server in the White House, where the data gets handled like email sent from White House computers (e.g., a VPN). I'm not familiar with the Blackberry VPN offerings, but this isn't something that would be that hard to develop.
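
To give a flavor of the kind of thing I mean (this is an illustrative sketch, not any actual Blackberry product), here's a client that will only talk to a designated back-end relay, verifies that relay against an internal CA, and presents its own device certificate for mutual authentication. All the hostnames and file names are invented.

```python
# Sketch of an encrypted channel from the device back to a server you control:
# the client trusts only an internal CA and authenticates itself with a device
# certificate, so the carrier network only ever sees ciphertext.
import socket
import ssl

RELAY_HOST = "mail-relay.internal.example"  # hypothetical White House-side relay
RELAY_PORT = 8443

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("internal-ca.pem")               # trust only the internal CA
ctx.load_cert_chain("device-cert.pem", "device-key.pem")   # device-side credential

with socket.create_connection((RELAY_HOST, RELAY_PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=RELAY_HOST) as tls:
        tls.sendall(b"EXAMPLE-MESSAGE\r\n")  # stand-in for the real mail protocol
        reply = tls.recv(4096)
```

The point is just that confidentiality then doesn't depend on the carrier at all; it comes down to the endpoints, which is exactly where the remote-compromise problem below takes over.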

Remote compromise is much more difficult to solve. You've got a device that's connected to the Internet, and of course it contains software with what you'd expect to be the usual complement of security vulnerabilities. You could perhaps try to tunnel all IP-level communications back through the White House, but you'd still have to worry about everything at the cellular/radio level, which has to go directly over the ordinary cell network. Accordingly, you should expect that a dedicated attacker with access to the device's phone number, transmitter serial number, etc. would be able to remotely compromise it. Such a device could send copies of data to whoever controlled it, record any ambient audio (or video, if it had a camera), etc. Protecting against remote compromise isn't like adding a VPN client; you have to worry about the entire surface area of the software, and it's not like you're going to rewrite the entire Blackberry firmware stack. Cutting against this concern is the fact that the president isn't going to be the only person with access to sensitive material. Are we going to deny everyone on the direct presidential staff access to any sort of modern communications device?

Similar considerations apply to tracking. All an attacker needs is the phone's radio parameters and an appropriate receiver; the phone will helpfully transmit regular beacons. Again, though, it's not usually hard to figure out where the president is, surrounded as he is by a bunch of staffers and Secret Service agents. Additionally, many of those people will have radio transmitters of their own, so it's not clear that denying the president his device adds much. If it's imperative that the president not be tracked at some particular time, you can simply shut his device down for that period.