COMSEC: December 2007 Archives

 

December 23, 2007

In my original post on Lauren Weinstein's suggested adoption of universal HTTPS, I said that MITM attacks were an issue I would address in a separate post. This is that post. As cryptographers and COMSEC engineers never tire of pointing out, if your channel isn't authenticated then you're very vulnerable to active attackers. The classic attack is what's called a man-in-the-middle (MITM) attack,1 but in general the problem is that you can end up talking to the attacker when you think you're talking to the right person. There are a lot of proposed solutions for this, but the only one that really works when you're trying to talk to someone you don't know is to have someone you do know (or at least trust) vouch for them. In TLS this is done with certificates, and the third party you trust is the certificate authority.

Whenever this topic comes up, you hear a lot of complaining about the difficulty and expense of obtaining certificates. For instance, here's Weinstein:

Certificates are required to enable TLS encryption in these environments, of course. And while the marketplace for commercial certs is far more competitive now than it was just a few years ago, the cost and hassle factors associated with their purchase and renewal are very relevant, especially for larger sites with many operational server names and systems.

It's certainly true that certs are an obstacle, though not as big an obstacle as people think. You can get a certificate for as little as $9/year. It's a little inconvenient, but it wouldn't be hard for Web hosting providers (who typically charge rather more than that) to simply issue you a certificate (or work with a CA to do so) as part of your Web site setup. But still, this is obviously more inconvenient than not doing anything. So, do you need a certificate at all? Here's Weinstein again:

However, in a vast number of applications where absolute identity confirmation is not required (particularly when commerce is not involved), self-signed certificates are quite adequate. Yes, as I alluded to in my previous blog posting, there are man-in-the-middle attack issues associated with this approach, but in the context of many routine communications I don't feel that this is as high a priority concern as is getting some level of crypto going as soon as possible.

Given their significant capabilities, why then are self-signed certs primarily employed within organizations, but comparatively rarely for servers used by the public at large, even where identity confirmation is not a major issue?

A primary reason is that most Web browsers will present a rather alarming and somewhat confusing (for the typical user) alert as part of a self-signed certificate acceptance query dialogue. This tends to scare off many people unnecessarily, and makes self-signed certificate use in public contexts significantly problematic.

Security purists may bristle at what I'm going to say next, but so be it. I believe that we should strongly consider something of a paradigm shift in the manner of browsers' handling of self-signed certificates, at the user's option.

When a browser user reaches a site with a self-signed certificate, they would be presented with a dialogue similar to that now displayed, but with additional, clear, explanatory text regarding self-signed certificates and their capabilities/limitations. The user would also be offered the opportunity to not only accept this particular cert, but also to optionally accept future self-signed certs without additional dialogues (this option could also be enabled or disabled via browser preference settings).

This topic has been debated endlessly on mailing lists, so I have no intention of detailing all the arguments. Instead, here's the bullet point version.

For

  • Active attacks aren't a major issue anyway, but passive attacks are, and you can use SSH-style leap of faith (remembering the server's cert) to block most active attacks.
  • What's important is to get people to use any crypto at all, and this whole certificate thing is an impediment.
  • Self-signed certs can be made to install automatically with the server.
  • In the real world, certificates are often screwed up in some way and people already ignore that.
  • Certificates are practically worthless since the CAs barely check who you are (the standard procedure is to force you to respond to an email sent to an admin address at your domain).
  • Many of the important attacks (e.g., phishing) aren't even detectable by certificate checks because the certs are right.
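The SSH-style leap-of-faith approach in the first bullet can be sketched in a few lines: remember a fingerprint of each server's certificate on first contact and reject any later change. This is a toy model, not real TLS; the in-memory `pin_store` and the raw certificate bytes are illustrative assumptions (a real client would persist the store to disk, like SSH's known_hosts file).

```python
import hashlib

# Toy trust-on-first-use (TOFU) store: hostname -> certificate fingerprint.
pin_store = {}

def fingerprint(cert_der: bytes) -> str:
    """Hash of the server's DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def check_leap_of_faith(host: str, cert_der: bytes) -> str:
    fp = fingerprint(cert_der)
    if host not in pin_store:
        pin_store[host] = fp          # first contact: accept and remember
        return "accepted-first-use"
    if pin_store[host] == fp:
        return "accepted-known"       # same cert as last time
    return "rejected"                 # cert changed: possible MITM

# A passive attacker never triggers this check; an active attacker who
# substitutes a different cert after first contact is caught.
print(check_leap_of_faith("example.com", b"cert-A"))  # accepted-first-use
print(check_leap_of_faith("example.com", b"cert-A"))  # accepted-known
print(check_leap_of_faith("example.com", b"cert-B"))  # rejected
```

Note that this blocks active attacks only after the first, unattacked connection; an attacker present from the start gets pinned instead of the real server.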

Against

  • What do you mean active attacks aren't important? This is an active attack we're looking at right here.
  • It's true that the certificate thing is an impediment, but that's really a social and engineering issue. Given how cheap certs are and how little checking the CAs do, cert issuance could easily be practically automated.
  • Encouraging people to accept self-signed certs undermines security for all the sites which want real certs—all the reasons why people don't check certs go double for why they won't check that the cert is not self-signed if they see both cases a lot and there's no way to tell what kind of site you should expect to talk to.
  • If you want to encourage the client authors to do something, encourage a free CA (like OpenCA used to be) that does simple email checking. At least that has the same security model as self-signed certs.

This doesn't really cut in either direction, but another possibility is to reserve the https: URL scheme for real certs but to have clients auto-negotiate SSL/TLS silently where possible (like RFC 2817 but done right). This at least gives you channel confidentiality, and if you cache the fact that you negotiated SSL/TLS, then some active attack resistance.
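Caching the fact that you negotiated SSL/TLS might work roughly like this toy sketch: once a host has been seen speaking TLS, a later plaintext-only response is treated as a suspected downgrade rather than silently accepted. All the names here are illustrative assumptions; a real client would persist this state across sessions.

```python
# Toy model of remembering "this host speaks TLS" to resist downgrade.
tls_seen = set()

def fetch(host: str, server_offers_tls: bool) -> str:
    if server_offers_tls:
        tls_seen.add(host)            # remember the successful upgrade
        return "https"
    if host in tls_seen:
        # Host spoke TLS before but now only offers plaintext:
        # plausibly an active attacker stripping the negotiation.
        return "warn-downgrade"
    return "http"                     # never seen TLS here; nothing to compare

print(fetch("example.com", True))     # https
print(fetch("example.com", False))    # warn-downgrade
print(fetch("legacy.example", False)) # http
```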

Note that an active attacker can of course downgrade you to straight HTTP (who knows how people respond to whatever warning accompanies "hey, I just negotiated HTTP even though before I was doing HTTPS?") but, then, they could just as well MITM self-signed certs, and Weinstein's argument that they won't:

Any ISP that was caught playing MITM certificate substitution games on encrypted data streams without explicit authorization would certainly be thoroughly pilloried and, to use the vernacular, utterly screwed in the court of public opinion -- and quite possibly be guilty of a criminal offense as well. I doubt that even the potentially lucrative revenue streams that could be generated by imposing themselves into users' Web communications would be enough to entice even the most aggressive ISPs into taking such a risk. But if they did anyway, the negative impacts on their businesses, and perhaps on their officials personally as well, would be, as Darth Vader would say, "Impressive. Most impressive."

seems kinda shaky to me.

1. Amusingly, the Wikipedia entry on Interlock, a protocol designed to stop MITM, reads in part:

Most cryptographic protocols rely on the prior establishment of secret or public keys or passwords. However, the Diffie-Hellman key exchange protocol introduced the concept of two parties establishing a secure channel (that is, with at least some desirable security properties) without any such prior agreement. Unauthenticated Diffie-Hellman, as an anonymous key agreement protocol, has long been known to be subject to man in the middle attack. However, the dream of a "zipless" mutually authenticated secure channel remained.

As far as I know, this use of the term "zipless" comes from Erica Jong's novel Fear of Flying, where it refers to a somewhat different type of interaction.

 

December 19, 2007

Ryan Singel reports that despite the rather lax standards required for wiretaps, some FBI agents seem to have decided that they could skip procedure:
The revelation is the second this year showing that FBI employees bypassed court order requirements for phone records. In July, the FBI and the Justice Department Inspector General revealed the existence of a joint investigation into an FBI counter-terrorism office, after an audit found that the Communications Analysis Unit sent more than 700 fake emergency letters to phone companies seeking call records. An Inspector General spokeswoman declined to provide the status of that investigation, citing agency policy.

The June 2006 e-mail (.pdf) was buried in more than 600 pages of FBI documents obtained by the Electronic Frontier Foundation, in a Freedom of Information Act lawsuit.

The message was sent to an employee in the FBI's Operational Technology Division by a technical surveillance specialist at the FBI's Minneapolis field office -- both names were redacted from the documents. The e-mail describes widespread attempts to bypass court order requirements for cellphone data in the Minneapolis office.

Remarkably, when the technical agent began refusing to cooperate, other agents began calling telephone carriers directly, posing as the technical agent to get customer cellphone records.

Federal law prohibits phone companies from revealing customer information unless given a court order, or in the case of an emergency involving physical danger.

The actual document is here.

 

December 14, 2007

Orin Kerr points to the decision in In re Boucher, where a magistrate ruled that forcing someone to disclose their PGP password violates the Fifth Amendment. This question has been the topic of an unbelievable amount of amateur lawyering on cypherpunks and associated mailing lists, and a lot of that gets repeated in the Volokh Conspiracy comments. The key question seems to be whether disclosing the password is a testimonial or non-testimonial act. I'm no expert on this topic, but as I recall, in past discussions, people have suggested having a password which is inherently self-incriminating (e.g., "I murdered John Doe") in an attempt to create a Fifth Amendment situation, which always seemed to me to be too clever by half.
 

December 12, 2007

Lauren Weinstein points out that the assclowns at Rogers are prototyping a system for splicing their own messages into other people's Web pages, like this:

Lauren argues that it's time to abandon unprotected web surfing:

That first, key action is to begin phasing out, as rapidly as possible and in as many application contexts as practicable, the use of unencrypted http: Web communications, and move rapidly to the routine use of TLS/https: whenever possible.

This is of course but an initial step in a rather long path toward pervasive Internet encryption, but it would be an immensely important one.

TLS is not a total panacea by any means. In the absence of prearranged user security certificates, TLS is still vulnerable to man-in-the-middle attacks, but any entity attempting to exploit that approach would likely find themselves in significant legal difficulty in short order.

Also, while TLS/https: would normally deprive ISPs -- or other intermediaries along the communications path -- of the ability to observe or modify data traffic contents, various transactional information, such as which Web sites subscribers were visiting (or at least which IP addresses), would still be available to ISPs (in the absence of encrypted proxy systems).

Another potential issue is the additional computational cost associated with setting up and maintaining TLS communication paths, which could become significant for busy server sites. However, thanks to system speed improvements and a choice of encryption algorithms, the additional overhead, while not trivial, is likely to at least be manageable.

Weinstein raises a number of issues here, namely:

  • Vulnerability to MITM attacks.
  • The effect of TLS on deep packet inspection engines.
  • The computational cost of TLS.
In this post, I want to address the second and third issues. MITM attacks deserve their own post. First, we need to be clear on what we're trying to do. The property the communicating parties (the client and server) want to ensure isn't that third parties can't read the traffic going by (the technical term here is confidentiality) but rather that they can't modify it (the technical terms here are data origin authentication (knowing who sent the message) and message integrity (knowing that it hasn't been modified)). Obviously, there's no way to stop your ISP from sending you any data of its choice, but you can arrange to detect that and reject the data.

The general way that this is done is to have the server compute what's called a message integrity check (MIC) value over the data. The server sends the MIC, along with the data to the client. The client checks the MIC (I'm being deliberately vague about how this works) and if it isn't correct it knows that the data has been tampered with and the client discards the data. The way this works in TLS is that the client and the server do an initial handshake to exchange a symmetric key. This key is then used to key a message authentication code (MAC)1 function which is used to protect individual data records (up to 16K each).
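Using Python's standard hmac module, the per-record protection step might look like the following toy sketch. The record format and key handling are simplified assumptions; real TLS also feeds sequence numbers and record headers into the MAC, and the key comes out of the handshake's key derivation rather than a random call.

```python
import hashlib
import hmac
import os

# Symmetric key established during the (omitted) handshake.
key = os.urandom(32)

def protect(record: bytes) -> bytes:
    """Append a MAC tag to a record, as the sender would."""
    tag = hmac.new(key, record, hashlib.sha256).digest()
    return record + tag

def verify(wire: bytes) -> bytes:
    """Check and strip the tag; raise if the record was tampered with."""
    record, tag = wire[:-32], wire[-32:]
    expected = hmac.new(key, record, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("record tampered with")
    return record

wire = protect(b"GET / HTTP/1.1")
assert verify(wire) == b"GET / HTTP/1.1"

# An in-path attacker who flips even one bit is detected:
tampered = bytes([wire[0] ^ 1]) + wire[1:]
try:
    verify(tampered)
except ValueError:
    print("tampering detected")
```

Note that without the shared key an attacker can't compute a valid tag, which is exactly the property that lets the client reject injected data.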

So, going back to Issue 2, TLS actually provides confidentiality and message integrity/data origin authentication separately. In particular, there are modes which provide integrity but not confidentiality (confidentiality without integrity is only safe in some special cases so these modes aren't provided)—the so-called NULL modes. So, it's quite possible to arrange matters in such a way that intermediaries can inspect the traffic but not modify it. Of course, whether this is desirable is a separate issue, but I think it's pretty clear that many enterprises, at least, want to run various kinds of DPI engines on the traffic going by. Indeed, they want to so much that they deploy solutions to intercept encrypted traffic, so presumably they would be pretty unhappy if they couldn't see any Web traffic.

There are at least two major difficulties with providing a widely used integrity-only version of HTTPS. The first is that clients don't generally offer to negotiate it, at least in part because it's easier to just have users expect that HTTPS = the lock icon = security than to try to explain the whole thing about integrity vs. confidentiality. This brings us to the second issue, which is how we provide a UI which gives users the right understanding of what's going on. More on the UI issue in a subsequent post, but it should be clear that from a protocol perspective this can be made to work.

Moving on to the performance issue: HTTP over TLS is a lot more expensive than raw HTTP [CPD02]. So, TLS-izing everything involves taking a pretty serious performance hit. The basic issue is that each connection between the client and the server requires establishing a new cryptographic key to use with the MAC. This setup is expensive, but it's a more or less fundamental requirement of using a MAC because the same key is used to verify the MAC as to create it. So, in order to stop Alice from forging traffic to Bob from the server, Alice and Bob need to share different keys with the server. The situation can be improved to some extent by aggressive session reuse, thus amortizing the cost of the really expensive public key operations. Client-side session caching/TLS tickets can help here to some extent as well, but the bottom line is that (1) there's some per-connection cost and (2) it breaks proxy caches, which obviously puts even more load on the server.
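The amortization argument above can be put into a back-of-envelope model: if a fraction of connections resume a cached session (skipping the public key operations), the average per-connection setup cost drops accordingly. The cost figures below are invented purely for illustration; real numbers depend on hardware, key size, and cipher suite.

```python
# Back-of-envelope model of handshake amortization via session reuse.
# Both cost constants are assumptions for illustration only.
FULL_HANDSHAKE_MS = 10.0    # assumed cost of the public-key exchange
RESUMED_HANDSHAKE_MS = 0.5  # assumed cost of symmetric-only resumption

def avg_handshake_cost(resumption_rate: float) -> float:
    """Average per-connection setup cost given a session cache hit rate."""
    return (1 - resumption_rate) * FULL_HANDSHAKE_MS \
         + resumption_rate * RESUMED_HANDSHAKE_MS

for rate in (0.0, 0.5, 0.9):
    print(f"{rate:.0%} resumed: {avg_handshake_cost(rate):.2f} ms/conn")
```

Even with aggressive reuse the per-connection cost never reaches zero, which is point (1) above; and none of this helps with point (2), the loss of proxy caching.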

One approach that doesn't have this performance drawback is to have the server authenticate with a digital signature. Because different keys are used to sign and verify, a single signed message can be replayed to multiple recipients. This reduces the load on the server, as well as (if the protocols are constructed correctly) allowing it to work with proxy caches. Obviously, this only works well when the pages the server is serving are exactly identical. If each page you're generating is different, this technique doesn't buy you much (though note that even dynamic pages tend to incorporate static components such as inline images). Static signatures of this type were present in some of the early Web security protocols (e.g., S-HTTP) but SSL/TLS is a totally different kind of design and this sort of functionality would be complicated to retrofit into it at this point.


1. Yes, this whole MIC/MAC thing is incredibly confusing. It's even better when you're doing layer 2 communication security and MAC means the Ethernet MAC.

 

December 3, 2007

OpenSSL has a FIPS-140 validated module. One of the requirements is self-testing of the PRNG. Unfortunately, it somehow doesn't quite work:
A significant flaw in the PRNG implementation for the OpenSSL FIPS Object Module v1.1.1 (http://openssl.org/source/openssl-fips-1.1.1.tar.gz, FIPS 140-2 validation certificate #733, http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm#733) has been reported by Geoff Lowe of Secure Computing Corporation. Due to a coding error in the FIPS self-test the auto-seeding never takes place. That means that the PRNG key and seed used correspond to the last self-test. The FIPS PRNG gets additional seed data only from date-time information, so the generated random data is far more predictable than it should be, especially for the first few calls.

This vulnerability is tracked as CVE-2007-5502.

There's no real deep lesson here. This is the kind of mistake anyone can accidentally make. It's true that the more options you have in a piece of code, the higher the chance that there will be some code path that doesn't work right, and in this case it's particularly striking because (1) there's no need to self-test a software PRNG 1 and (2) it's the addition of the self-test that broke it, but it could easily have been something else.
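A toy hash-based PRNG makes the failure mode concrete: if the reseed step is accidentally skipped, every instance starts from the same fixed state and produces an identical "random" stream. This is purely illustrative (it's not the actual FIPS PRNG design, and it ignores the date-time mixing the real code still did, which is why the real outputs were merely highly predictable rather than identical).

```python
import hashlib

class ToyPRNG:
    """Toy hash-chain PRNG, for illustration only (not the FIPS design)."""
    def __init__(self, reseed_works: bool):
        self.state = b"self-test-constant"   # state left over from self-test
        if reseed_works:
            # The step the buggy code never reached: mix in fresh entropy.
            self.state = hashlib.sha256(self.state + b"fresh-entropy").digest()

    def next_bytes(self) -> bytes:
        self.state = hashlib.sha256(self.state).digest()
        return self.state

# Two "independent" instances with the broken reseed path...
a = ToyPRNG(reseed_works=False)
b = ToyPRNG(reseed_works=False)
print(a.next_bytes() == b.next_bytes())   # True: outputs are identical

# ...versus one where reseeding actually ran.
c = ToyPRNG(reseed_works=True)
print(a.next_bytes() == c.next_bytes())   # False
```

As the footnote below notes, the output of the broken version still looks random under statistical tests, because the hashing step works fine; it's the seeding that silently failed.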

1. In general, self-testing any cryptographic PRNG is difficult. The standard way to build a CSPRNG is to take whatever your entropy source is and run it through a bunch of hash functions. The result of this is that the output looks random under standard entropy tests. This is true even if the seed is very low entropy. All a self-test really means is that the hashing part of the PRNG is working correctly, but usually it's the seeding part that goes wrong (as seen here).