Provable Security (II)

Koblitz and Menezes are at it again. Back in 2004, they published Another Look at "Provable Security" arguing that the reduction proofs that are de rigueur for new cryptosystems don't add much security value. (See here for a summary.) Last week, K&M returned to the topic with Another Look at "Provable Security" II, which is about the difficulty of interpreting the reduction results. They take on the proofs for a number of well-known systems and argue that they don't show what you would like.

One of the themes that shows up repeatedly is what's called "non-tightness of reduction". Recall that the idea behind a reduction proof is to show that if you could break the cipher (recover the key, plaintext, whatever you've modelled it as) then you could use that ability to get leverage on some believed-hard problem [1]. So, for instance, you might want to prove that breaking your algorithm was "equivalent" to letting you factor large products of primes. There's generally some additional effort required to solve the hard problem, so you get a result where being able to break the cryptosystem with effort E implies being able to solve the hard problem with effort cE. If c is small, then the reduction is called "tight". If c is large (and we're talking really large, like big powers of two) then it's called "non-tight"; the back-of-the-envelope sketch after the quoted list below shows how quickly a large c erodes the guarantee. K&M write:

Suppose that researchers have been able to obtain a highly non-tight reduction from a hard mathematical problem to breaking a protocol [2]. There are various common ways to respond to this situation:

  1. Even a non-tight reduction is better than nothing at all. One should regard the cup as half-full rather than half-empty, derive some reassurance from what one has, and try not to think too much about what one wishes one had.
  2. Even though the reduction is not tight, it is reasonable to expect that in the future a tighter reduction will be found.
  3. Perhaps a tight reduction cannot be found for the protocol in question, but a small modification of the protocol can be made in such a way as to permit the construction of a tight reduction -- and we should regard this reduction as a type of assurance about the original protocol.
  4. A tight reduction perhaps can be obtained by relaxing the underlying hard problem (for example, replacing the computational Diffie-Hellman problem by the decision Diffie-Hellman problem). [This is a very popular move -- EKR; a toy sketch below illustrates the CDH/DDH relationship.]
  5. Maybe the notion of security is too strict, and one should relax it a little so as to make possible a tight reduction.
  6. Perhaps the protocol is secure in practice, even though a tight reduction may simply not exist.
  7. Perhaps the protocol is in fact insecure, but an attack has not yet been discovered.

These seven points of view are not mutually exclusive. In fact, protocol developers usually adopt some combination of the first six interpretations -- but generally not the seventh.
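
To make the tightness arithmetic concrete, here's a back-of-the-envelope sketch. The numbers are illustrative assumptions of mine, not from the paper: suppose the best known attack on the underlying hard problem costs about 2^80 operations. A reduction with looseness factor c then only rules out attacks on the protocol cheaper than 2^80/c operations.

```python
import math

# Illustrative assumption (mine, not K&M's): the best known attack on the
# underlying hard problem costs about 2^80 operations.
HARD_PROBLEM_COST = 2 ** 80

# The reduction says: break the protocol with effort E and you can solve
# the hard problem with effort c*E. So the proof only rules out attacks on
# the protocol cheaper than HARD_PROBLEM_COST / c.
for log2_c in (1, 20, 40, 60, 80):
    guaranteed = HARD_PROBLEM_COST / (2 ** log2_c)
    print(f"c = 2^{log2_c:2d}: proof rules out attacks cheaper than "
          f"2^{math.log2(guaranteed):.0f} operations")
```

With c = 2^60 the proof guarantees only 2^20 operations of security, and at c = 2^80 it guarantees nothing at all, even though the underlying problem is just as hard as ever.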
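
And to illustrate move 4, here's a toy sketch with tiny, insecure parameters of my choosing (not anything from the paper): anyone who can solve the computational Diffie-Hellman problem (compute g^ab from g^a and g^b) immediately decides the decisional version (tell g^ab apart from a random group element), while the converse is not known. Assuming DDH is hard is therefore a stronger assumption, which is what makes tight reductions easier to come by.

```python
import random

# Toy, insecure parameters (mine): a tiny multiplicative group mod a prime.
p = 1019  # real systems use groups of cryptographic size
g = 2     # a generator mod 1019

def cdh_solve(ga, gb):
    """Stand-in CDH solver: brute-forces the discrete log of ga, which is
    only feasible because the toy group is tiny."""
    a = next(x for x in range(p) if pow(g, x, p) == ga)
    return pow(gb, a, p)  # (g^b)^a = g^(ab)

def ddh_decide(ga, gb, gz):
    """A CDH solver decides DDH for free: compute g^(ab) and compare."""
    return cdh_solve(ga, gb) == gz

a, b = random.randrange(1, p), random.randrange(1, p)
real = pow(g, a * b, p)                   # genuine Diffie-Hellman value
fake = pow(g, random.randrange(1, p), p)  # random group element
print(ddh_decide(pow(g, a, p), pow(g, b, p), real))  # True
print(ddh_decide(pow(g, a, p), pow(g, b, p), fake))  # almost surely False
```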

K&M go on to show a number of depressing examples of this and other problems. Probably the most depressing example they give of a proof that doesn't show what you'd like is the Blum Blum Shub pseudorandom number generator, which is based on factoring. There are proofs that provide a lower bound on the adversary's effort (and hence on the security of the algorithm). Unfortunately, when you plug in reasonable-sounding parameters (1024 bits) and the current best factoring algorithms, you discover that the formula guarantees the attacker must expend at least a stunning -2^199 operations, which isn't that useful. In order to get a reasonable security guarantee, you need to use a 5000-bit modulus and extract only one bit per crank of the generator, which is far too expensive for practical use.
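
For reference, the generator itself is tiny. Here's a minimal sketch in Python; the parameters are toy values of mine and wildly insecure, since a real instantiation would need the multi-thousand-bit modulus discussed above:

```python
import math
import random

def bbs_bits(p, q, seed, nbits):
    """Blum Blum Shub: repeatedly square modulo n = p*q and emit the least
    significant bit each time. p and q must be primes congruent to 3 mod 4."""
    n = p * q
    x = seed % n
    bits = []
    for _ in range(nbits):
        x = (x * x) % n      # one crank of the generator
        bits.append(x & 1)   # extract a single bit per squaring
    return bits

# Toy parameters (mine): 11 and 23 are both primes congruent to 3 mod 4.
p, q = 11, 23
n = p * q
seed = random.randrange(2, n)
while math.gcd(seed, n) != 1:  # the seed must be coprime to the modulus
    seed = random.randrange(2, n)

print(bbs_bits(p, q, seed, 16))
```

Extracting one bit per squaring, as this sketch does, is exactly the conservative setting the 5000-bit figure above assumes.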

Other topics covered include the security of RSA (including OAEP), ECDSA, and a number of short signature schemes.

[1] I'm simplifying here, but it's close enough for the purpose of explanation.
[2] This is cryptographer jargon. In networking, "protocol" means "the bits that go on the wire", but in cryptography it means something more like "the set of cryptographic messages that are sent between A and B". For most simple systems, you can read "protocol" as "algorithm".

UPDATE: Fixed the title of the new paper. Thanks to Perry Metzger for pointing out the error.

1 Comment

Seems like these days it's pretty tough to get a crypto paper accepted at the good conferences unless you have a proof of security. It doesn't seem to matter much how good the proof is, or how strong the assumptions you have to make are, but it had better be there.

The real question then is how to judge a scheme where the inventors can't come up with any kind of proof of security, vs. one where there is a proof. Are we well advised to prefer the one with the proof, all else being equal? I think most cryptographers would say yes. If a proposal appears without such a proof, in a domain where proofs are commonly provided, then either the scheme is highly questionable or the people involved are incompetent. The proof presents a certain barrier to entry that can weed out less well-developed proposals.
