COMSEC: October 2008 Archives


October 21, 2008

One of the things I noticed in my review of OAuth was a pretty confusing section about entropy depletion:

The OAuth protocol has a number of features which may make resource exhaustion attacks against Service Providers possible. For example, if a Service Provider includes a nontrivial amount of entropy in Token Secrets as recommended above, then an attacker may be able to exhaust the Service Provider's entropy pool very quickly by repeatedly obtaining Request Tokens from the Service Provider.

...

Resource Exhaustion attacks are by no means specific to OAuth. However, OAuth implementors should be careful to consider the additional avenues of attack that OAuth exposes, and design their implementations accordingly. For example, entropy starvation typically results in either a complete denial of service while the system waits for new entropy or else in weak (easily guessable) secrets. When implementing OAuth, Service Providers should consider which of these presents a more serious risk for their application and design accordingly.

One hears concerns of entropy exhaustion occasionally in discussions of PRNGs, but it's sort of conceptually confused. Let's say we have some relatively slow source of true randomness, e.g., radioactive decay or Johnson noise. One way to use something like this is to take the input and shove it in a buffer (typically a FIFO). Then whenever a process asks for a random value, you read out of the buffer. If the rate of entropy consumption exceeds the rate of entropy input, elementary queueing theory tells you that eventually the buffer will empty and you'll have to block and wait for more entropy. That's approximately the problem that this document is talking about, though the randomness pools that systems actually use aren't this simple.
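
To make the failure mode concrete, here's a toy sketch in Python (harvest_entropy_bit() is a stand-in I made up for the slow physical source; a real one would be sampling decay timings or amplifier noise):

    import os
    import queue
    import threading
    import time

    pool = queue.Queue(maxsize=4096)      # the FIFO buffer of raw bits

    def harvest_entropy_bit():
        # Stand-in for the slow physical source; the sleep models its
        # limited rate, and the OS supplies a bit so the sketch runs.
        time.sleep(0.01)
        return os.urandom(1)[0] & 1

    def collector():
        while True:
            pool.put(harvest_entropy_bit())

    def read_random_bits(n):
        # Consumers drain the FIFO directly. If demand outpaces the
        # collector, get() blocks until more entropy trickles in:
        # exactly the denial of service described above.
        return [pool.get() for _ in range(n)]

    threading.Thread(target=collector, daemon=True).start()
    print(read_random_bits(16))           # fine at ~100 bits/second
    # read_random_bits(10**6)             # would stall for hours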

For this reason, among others1, you don't want to use this method for generating random numbers for protocols. As I mentioned previously, standard procedure isn't to use this data directly, but rather to feed it to a PRNG as seed input. This way you can extract an arbitrary number of bits without worrying about emptying the buffer. Obviously, this is only safe as long as there is a reasonable amount of seed data, but once the PRNG is seeded, you don't need to match the input to the output. It should be clear, of course, that this is information-theoretically imperfect: if you have 100 bits of input entropy and you extract 200 bits of output, clearly an attacker with unlimited computational resources can determine the seed values and thus predict future outputs. However, if your PRNG is well designed, this should require 2^n effort, where n is the number of bits of seed input, which quickly becomes impractical. Note that most PRNGs have a fixed internal state size (a few hundred to a thousand bits), so feeding in entropy beyond this doesn't buy you as much as you might think.
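
In code, the seeded-PRNG approach looks something like this toy hash-counter generator (my own sketch, not a production design; real systems use something like NIST's HMAC-DRBG, and unlike this toy they also reseed periodically):

    import hashlib
    import os

    class HashPRNG:
        # Toy generator: seeded once, then produces as many output
        # bytes as you like. Security rests on the seed being
        # unpredictable, not on matching output volume to input.
        def __init__(self, seed):
            self.seed = seed              # fixed-size internal state
            self.counter = 0

        def read(self, n):
            out = b""
            while len(out) < n:
                block = hashlib.sha256(
                    self.seed + self.counter.to_bytes(8, "big")).digest()
                out += block
                self.counter += 1
            return out[:n]

    prng = HashPRNG(os.urandom(16))       # 128-bit seed, gathered once
    key1 = prng.read(32)                  # extract well over 128 bits...
    key2 = prng.read(32)                  # ...without draining anything

An attacker who wants to predict key2 from key1 has to invert SHA-256 or search the 2^128 seed space, which is the 2^n figure above.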

So, a well-written application uses a PRNG seeded from some source of high-quality randomness. However, some applications likely don't do this, and it's not helped by the fact that the random number devices on common UNIXen aren't consistent. E.g., on FreeBSD, /dev/random is a PRNG fed by kernel randomness; but on Linux, /dev/random tries to track the amount of entropy input and will block when you read too many bytes, whereas /dev/urandom is the PRNG which is seeded off the "true" randomness pool and doesn't block no matter how many bytes are read.
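
The practical consequence, at least on Linux: let the kernel do the seeding and read your bulk randomness from the non-blocking device. In Python that's just os.urandom():

    import os

    # os.urandom() reads the kernel PRNG (/dev/urandom on Linux; on
    # FreeBSD /dev/random itself is already the PRNG) and doesn't
    # block no matter how much you ask for.
    session_key = os.urandom(16)          # 128 bits of key material

    # By contrast, reading Linux's /dev/random directly can stall a
    # busy server once the kernel's entropy estimate runs out:
    # with open("/dev/random", "rb") as f:
    #     session_key = f.read(16)        # may block indefinitely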

1. Another reason is that it's quite hard to get randomness sources where every bit of the measurement is random. Consider, for instance, a coin flip. There's a lot of randomness there, but most coins aren't completely balanced. However, a good PRNG can be fed only semi-random data and still make use of the entropy.
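
To put a number on the coin example: a coin that comes up heads 60% of the time still delivers nearly a full bit of Shannon entropy per flip, so a PRNG that hashes the flips together doesn't need many extra ones. Back of the envelope:

    import math

    p = 0.6                               # P(heads) for a biased coin
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(round(h, 3))                    # ~0.971 bits per flip
    print(math.ceil(128 / h))             # ~132 flips for a 128-bit seed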

Acknowledgement: Thanks to Hovav Shacham for discussion about this post.


October 8, 2008

American Airlines has announced that it's going to filter pornography on its in-flight Internet service:

FORT WORTH (AP) -- American Airlines says it will filter an in-flight Internet service to block pornography sites, reversing course after complaints from flight attendants and passengers. American said Tuesday it was working with technology provider Aircell to allow filtering of its nascent Internet service.

American, the nation's largest carrier, said it hasn't gotten reports of passengers viewing "inappropriate content" on the Gogo in-flight service but said filtering was "an appropriate measure to take."

The airline launched Internet service in August on some of its Boeing 767 jets that fly between New York and San Francisco, Los Angeles and Miami in a six-month trial.

This seems like a bit of an overreaction. First, downloading pornography while in the air is hardly the only way in which passengers could gain access to it: they could buy (or download) it prior to boarding. Some airport stores even sell skin magazines (which has always seemed odd to me). So, if you're really concerned about passengers viewing porn in flight, didn't you already have this problem? Is there any evidence that Internet access will make it dramatically worse? Moreover, now you have all the usual problems associated with filtering, especially deciding which sites count as porn and dealing with false positives. Do you really want to be the flight attendant who explains to the customer who just paid $12.95 to access the Internet that American has decided to block his access to Mother Jones? And of course, it's relatively easy to circumvent this kind of blocking anyway.


October 5, 2008

Recently there was an extensive discussion on the Cryptography mailing list about how to build cheap hardware-based random number generators. For those of you who aren't crypto or comsec specialists, the background goes something like this: most cryptographic protocols require a reliable source of numbers which are unpredictable to your adversary1. For instance, in SSL/TLS the client (your browser) generates a random cryptographic key and sends it to the server encrypted under the server's public key. If the attacker can predict that random key, then he can decrypt the connection, as was famously shown by Wagner and Goldberg. So, having good randomness is important.
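
The attack is easy to caricature: if the "random" key is derived from guessable quantities, the attacker just enumerates them. Netscape's actual seed mixed the time of day with process IDs; this sketch (mine, compressed to a one-second-resolution timestamp) shows the shape of it:

    import hashlib
    import time

    def weak_premaster(ts):
        # Toy version of deriving a "random" key from guessable state.
        return hashlib.sha256(str(ts).encode()).digest()[:16]

    t0 = int(time.time())                 # client keys the connection
    key = weak_premaster(t0)

    # The attacker knows roughly when the connection happened, so the
    # search space is tiny: try every plausible timestamp.
    for guess in range(t0 - 60, t0 + 1):
        if weak_premaster(guess) == key:
            print("recovered session key:", key.hex())
            break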

As I mentioned previously, standard procedure in comsec applications is to use a cryptographically secure PRNG which is seeded with data which is unpredictable to the attacker. But this still leaves us with the question of how we're going to seed the PRNG. The most common thing to do is to seed it with system events (network packets, drive timings, etc.). In principle, of course, this isn't entirely unpredictable: if you were on the same network as the target machine you could observe packets; if you knew the layout of the drive, you might be able to predict how long things would take; etc. (and older PRNGs that weren't wired into the OS used fewer sources of randomness). In any case, these sources aren't physically random the way that, say, radioactive decay is. There's a nontrivial contingent of the security community which is extremely interested (some would say obsessed) in physical/hardware sources of randomness. [This goes back at least to von Neumann's famous statement about those who consider arithmetical methods of producing random numbers being in a state of sin.] The thread I'm pointing to above has a variety of suggestions for cheap external hardware randomness sources.
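
A minimal sketch of what event-based seeding looks like (the mix_event() calls are hypothetical stand-ins for the kernel's interrupt hooks; real pools also track entropy estimates and keep mixing continuously):

    import hashlib
    import time

    state = hashlib.sha256()              # a running hash as the mixer

    def mix_event(kind, payload=b""):
        # Fold each event plus a high-resolution timestamp into the
        # pool. Individually these are partly predictable; together,
        # an outsider can't reproduce the exact interleaving.
        state.update(kind.encode())
        state.update(time.perf_counter_ns().to_bytes(8, "big"))
        state.update(payload)

    mix_event("net-packet", b"\x45\x00")  # e.g., from the NIC driver
    mix_event("disk-io")                  # drive timing
    mix_event("keypress", b"\x1c")

    seed = state.digest()                 # 256 bits for the CSPRNG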

My general intuition is that while this sort of thing doesn't do any harm, it's not really necessary either. The amount of entropy you actually need to have a secure system is surprisingly small: 80-128 bits is enough in most situations, and most systems gather that kind of entropy in the normal course of business. For instance, in a system with a GUI, just collecting the low-order (odd/even) bit of each mouse click coordinate gets you two bits per click, so 128 bits in 64 clicks, which isn't much. Even on unattended systems, there's probably enough timing randomness in hard drive behavior to get you to 128 bits relatively quickly. Even in cases where there aren't any unobservable behaviors and the PRNG is basically broken, the effort to reverse engineer the state of the PRNG becomes nearly prohibitive very quickly, as is the case with the Debian PRNG and DHE vs. RSA [*]. I'm not aware of any evidence that well-designed OS-based PRNGs like /dev/random are vulnerable because of insufficient entropy gathering.
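
Concretely, the mouse-click arithmetic (two low-order bits per click, hashed down to a seed; seed_from_clicks() and the sample coordinates are my own illustration):

    import hashlib

    def seed_from_clicks(clicks):
        # clicks: list of (x, y) pixel coordinates. The low (odd/even)
        # bit of each coordinate is hard to observe or predict, so
        # each click contributes ~2 bits: 64 clicks ~ 128 bits.
        bits = 0
        for x, y in clicks:
            bits = (bits << 2) | ((x & 1) << 1) | (y & 1)
        nbytes = (2 * len(clicks) + 7) // 8
        return hashlib.sha256(bits.to_bytes(nbytes, "big")).digest()

    seed = seed_from_clicks([(412, 318), (97, 555)] * 32)  # 64 clicks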

1. Technical note: not as many as you'd think, though. Many protocols use random numbers in contexts where only unique numbers are required. For instance, in SSL/TLS and IKE it's standard practice to use cryptographically random nonces (called "Random" in TLS). However, the security guarantees of the protocol apply as long as the numbers are unique—you could probably just use a counter. The advantage of using long random numbers is that you don't need to store counter state to ensure uniqueness.