On entropy depletion

One of the things I noticed in my review of OAuth was a pretty confusing section about entropy depletion:

The OAuth protocol has a number of features which may make resource exhaustion attacks against Service Providers possible. For example, if a Service Provider includes a nontrivial amount of entropy in Token Secrets as recommended above, then an attacker may be able to exhaust the Service Provider's entropy pool very quickly by repeatedly obtaining Request Tokens from the Service Provider.


Resource Exhaustion attacks are by no means specific to OAuth. However, OAuth implementors should be careful to consider the additional avenues of attack that OAuth exposes, and design their implementations accordingly. For example, entropy starvation typically results in either a complete denial of service while the system waits for new entropy or else in weak (easily guessable) secrets. When implementing OAuth, Service Providers should consider which of these presents a more serious risk for their application and design accordingly.

One occasionally hears concerns about entropy exhaustion in discussions of PRNGs, but the worry is somewhat conceptually confused. Let's say we have some relatively slow source of true randomness, e.g., radioactive decay or Johnson noise. One way to use a source like this is to shove its output into a buffer (typically a FIFO). Then, whenever a process asks for a random value, you read out of the buffer. If the rate of entropy consumption exceeds the rate of entropy input, elementary queueing theory tells you that eventually the buffer will empty and you'll have to block and wait for more entropy. That's approximately the problem this document is talking about, though the randomness pools that systems actually use aren't this simple.
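The blocking behavior described above can be sketched with a toy FIFO pool (the class and method names here are my own invention for illustration; real kernel pools are considerably more sophisticated):

```python
import collections

class EntropyPool:
    """Toy FIFO entropy pool: raises (where a kernel would block) when
    entropy is consumed faster than it is added."""
    def __init__(self):
        self.buf = collections.deque()

    def add_entropy(self, data: bytes):
        # In a real system this would be fed by hardware noise measurements.
        self.buf.extend(data)

    def read(self, n: int) -> bytes:
        if n > len(self.buf):
            # A real kernel pool would block here until more entropy arrives.
            raise BlockingIOError("entropy pool exhausted; would block")
        return bytes(self.buf.popleft() for _ in range(n))

pool = EntropyPool()
pool.add_entropy(b"\x01\x02\x03\x04")   # 4 bytes of "true" randomness arrive
first = pool.read(2)                     # consumes 2 of the 4 buffered bytes
try:
    pool.read(10)                        # asks for more than remains
except BlockingIOError:
    print("pool drained; a reader would now wait for more entropy")
```

This is exactly the structure that makes a denial-of-service plausible: any client that can trigger reads can empty the buffer.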

For this reason, among others [1], you don't want to use this method for generating random numbers for protocols. As I mentioned previously, standard procedure isn't to use this data directly, but rather to feed it to a PRNG as seed input. This way you can extract an arbitrary number of bits without worrying about emptying the buffer. Obviously, this is only safe as long as there is a reasonable amount of seed data, but once the PRNG is seeded, you don't need to match the input rate to the output rate. It should be clear, of course, that this is information-theoretically imperfect: if you have 100 bits of input entropy and you extract 200 bits of output, an attacker with unlimited computational resources can determine the seed values and thus future values. However, if your PRNG is well designed, this should require 2^n effort, where n is the size of the seed input, which quickly becomes impractical. Note that most PRNGs have a fixed internal state size (a few hundred to a thousand bits), so feeding in entropy beyond this doesn't buy you as much as you might think.
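The seed-once-extract-many pattern can be sketched with a toy hash-counter construction (the function name and design are mine, purely for illustration; real code should use the OS randomness interface or a vetted DRBG, not this):

```python
import hashlib

def prng_stream(seed: bytes, nbytes: int) -> bytes:
    """Toy counter-mode PRNG: output block i is SHA-256(seed || i).
    Seeded once, it can emit arbitrarily many bytes; predicting the
    output means guessing the seed (~2^n work for an n-bit seed)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# 100 bits (~13 bytes) of seed, 200 bits (25 bytes) of output: fine in
# practice, though information-theoretically imperfect as noted above.
stream = prng_stream(b"\x00" * 13, 25)
```

Note that extraction never touches the entropy buffer again, which is why a well-seeded PRNG can't be "drained" by an attacker requesting lots of tokens.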

So, a well-written application uses a PRNG seeded from some source of high-quality randomness. However, some applications likely don't do this, and it doesn't help that the random number devices on common UNIXen aren't consistent. On FreeBSD, for instance, /dev/random is a PRNG fed by kernel randomness. On Linux, by contrast, /dev/random tries to track the amount of entropy input and will block when you read too many bytes, whereas /dev/urandom is the PRNG seeded off the "true" randomness pool and doesn't block no matter how many bytes are read.
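In practice, application code can sidestep the device-naming inconsistency entirely by asking the platform for kernel-PRNG output directly. For example, in Python, os.urandom reads from the kernel's seeded PRNG and never blocks, regardless of which OS convention is underneath:

```python
import os

# os.urandom returns bytes from the kernel CSPRNG (backed by /dev/urandom
# on Linux), which is seeded from the entropy pool but does not block,
# no matter how many bytes are requested.
token_secret = os.urandom(32)  # 256 bits, plenty for a token secret
print(token_secret.hex())
```

A Service Provider generating Token Secrets this way isn't exposed to the entropy-exhaustion attack the OAuth text worries about.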

[1] Another reason is that it's quite hard to get randomness sources where every bit of the measurement is random. Consider, for instance, a coin flip. There's a lot of randomness there, but most coins aren't completely balanced, so each flip yields somewhat less than one full bit of entropy. However, a good PRNG can be fed only semi-random data and still make use of the entropy it contains.
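One classic way to see that biased flips still carry usable randomness is von Neumann's debiasing trick (a standalone extractor, not what PRNGs do internally; they typically condition input by hashing, but the trick makes the point):

```python
def von_neumann(bits):
    """Von Neumann debiasing: look at flips in pairs, map 01 -> 0 and
    10 -> 1, and discard 00/11. If the flips are independent with a
    fixed bias, the surviving output bits are exactly unbiased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased sequence still yields (fewer) unbiased bits:
print(von_neumann([0, 1, 1, 0, 1, 1, 0, 0]))  # -> [0, 1]
```

The cost is throughput: the more biased the source, the more pairs get discarded, which is another way of saying the source delivers fewer bits of entropy per measurement.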

Acknowledgement: Thanks to Hovav Shacham for discussion about this post.
