What automatic key management is for

I just finished complaining in the KARP WG meeting about their rationale for automatic key management:
But why do we need a KMP?
  • To address brute force attacks [RFC3562] recommends:
    • frequent key rotation,
    • limited key sharing,
    • key length restrictions, etc.
  • Advances in computational power make that management burden untenable for MD5 implementations in today's routing
  • Keys must be of a size and composition that makes configuration and maintenance difficult or keys must be rotated with an unreasonable frequency.
  • KMPs help A LOT, IF you can make them operationally usable

I'm definitely in favor of automatic key management (indeed, I think there's some possibility that I was the person who prompted Steve Bellovin to push this point in RFC 4107), but I think this set of rationales is pretty much totally wrong (you can find my review here). Greg Lebovitz prompted me to explain what actually is good about AKM and I grudgingly agreed, and since I'm always on the lookout for blog content...

Anyway, a little background might be in order. In a simple (non-automatic) system, we just agree on a shared key, which gets transported somehow (e.g., by a guy with a briefcase chained to his wrist), and we use that key directly to protect (encrypt, authenticate, etc.) the data. This is generally referred to as "manual keying". Examples of protocols which use manual keying are the TCP MD5 option as well as some modes of IPsec AH and ESP.

Manual keying seems so easy that it's natural to ask why you would ever do anything else. Unfortunately there are two major problems with using pure manual keying: security and flexibility. The security problem is that while it's OK to use the same key for a long time, actually using it to directly encrypt data intended for multiple contexts can lead to a lot of problems. Probably the easiest to understand is what's called "two-time pads". In a number of encryption algorithms (technical note: stream ciphers) it's unsafe to encrypt two pieces of data with the same key. If you want to encrypt multiple chunks of data you can get around this by treating the chunks as if they were concatenated together and saving the encryption state in between chunks. Unfortunately, if you (for instance) reboot the computer, this state can get lost and now you're just encrypting twice with the same key, which, as I said, is bad. (I'm oversimplifying here, but it's close enough for our purposes). The standard fix is to create a new encryption key for each session, using the long-term encryption key to bootstrap it. This generally requires having some mechanism where the two sides agree to start a new session and establish a fresh key, in other words, automatic key management. There are a number of other benefits as well, including having explicit session boundaries, which often, though not always, implies positive identification of the party on the other end by forcing him to confirm that he has the right keys.
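To make the two-time pad point concrete, here's a toy Python sketch (the "cipher" is just a hash-based keystream, not anything you should actually use, and the messages are made up): reusing the same keystream lets an eavesdropper recover the XOR of the two plaintexts, while deriving a fresh per-session key from the long-term key and a pair of nonces avoids the reuse.

```python
import hmac, hashlib, os

def keystream(key, length):
    # Toy keystream generator (NOT a real stream cipher): expand the key
    # by repeated hashing. Good enough to illustrate keystream reuse.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

long_term_key = b"shared secret configured on both routers"

# Two-time pad: encrypt two messages directly under the same key...
p1 = b"withdraw route 10.0.0.0/8 "
p2 = b"announce route 10.0.0.0/8 "
c1 = xor(p1, keystream(long_term_key, len(p1)))
c2 = xor(p2, keystream(long_term_key, len(p2)))
# ...and an eavesdropper can compute p1 XOR p2 without knowing the key:
assert xor(c1, c2) == xor(p1, p2)

# The fix: derive a fresh traffic key for each session from the long-term
# key plus something that changes every session (here, nonces from each side).
nonce_a, nonce_b = os.urandom(16), os.urandom(16)
session_key = hmac.new(long_term_key, b"session" + nonce_a + nonce_b,
                       hashlib.sha256).digest()
c1 = xor(p1, keystream(session_key, len(p1)))
```

Agreeing on those nonces is exactly the kind of thing the automatic key management handshake does for you.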

The flexibility problem is mostly about coordination. Let's say we build a system where everyone supports algorithm A and then we want to introduce algorithm B. When Alice wants to connect to Bob, they now need to configure not only the keys they want to use but also the algorithms. And if they get it wrong, nothing will work. As soon as you have more than a few options, the number of ways to get a configuration mismatch starts to get out of control. If we have a mechanism where we explicitly establish a session, then we can also use that to figure out what each side supports and pick a common mechanism. Similarly, if we have systems which operate with multiple security settings, an automatic handshake between them can help to determine the most appropriate mode.
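Here's roughly what that negotiation looks like in miniature; the algorithm names and preference lists are invented for illustration:

```python
def negotiate(my_prefs, peer_offers):
    """Pick the first algorithm in my preference order that the peer
    also supports; fail explicitly if there's no overlap."""
    for alg in my_prefs:
        if alg in peer_offers:
            return alg
    raise ValueError("no common algorithm")

# Hypothetical capability lists: an older peer that never learned the
# newest algorithm still interoperates with a newer one that prefers it.
alice = ["hmac-sha256", "hmac-sha1", "md5"]
bob   = ["hmac-sha1", "md5"]
print(negotiate(alice, bob))   # -> "hmac-sha1"
```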

In summary, then, my answer is something like this:

The use of automatic key management mechanisms offers a number of benefits over manual keying. First, it provides fresh traffic keying material for each session, thus helping to prevent a number of attacks such as inter-connection replay and two-time pads. Second, it allows for the negotiation of algorithms, modes, and parameters, thus providing interoperability between endpoints with disparate capabilities and configurations. This second point is especially important because it enables incremental deployment of endpoints with new capabilities while retaining backward compatibility.

I wanted to make one more point: because most of the automatic key management mechanisms we have are oriented towards public key cryptography (though they also typically have some sort of pre-shared key mode), there is a tendency to assume that automatic key management implies buying into the whole public key thing, including a public key infrastructure, which is scary for a lot of people. This isn't correct, however. It's quite possible to design an automatic key management system which only requires the same shared keys you would otherwise use (and as I said, popular protocols like TLS and IPsec/IKE do this already if you configure them right). In fact, if you do things right, you can make the whole thing mostly painless. For example, the recently-designed TCP-AO replaces the manual key management in TCP-MD5 with an (incredibly primitive) automatic key management system that only does new key establishment and key rollover, but in a way that's (hopefully) invisible to the end-user.
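To give a flavor of what a PSK-only scheme like that might look like (this is a loose sketch of the general idea, not TCP-AO's actual derivation or wire format), here's one way to derive per-connection traffic keys from configured master keys and roll them over via a key ID:

```python
import hmac, hashlib

# Master keys configured on both ends, indexed by a small key ID.
# Rolling over just means adding a new entry and eventually retiring the old one.
master_keys = {1: b"old shared secret", 2: b"new shared secret"}

def traffic_key(key_id, connection_id):
    # Derive a per-connection traffic key so the master key is never
    # used directly on the wire (the derivation label here is made up).
    return hmac.new(master_keys[key_id],
                    b"traffic-key" + connection_id,
                    hashlib.sha256).digest()

def protect(key_id, connection_id, segment):
    # Tag each segment with the key ID so the receiver knows which
    # master key to derive from; that's what makes rollover seamless.
    mac = hmac.new(traffic_key(key_id, connection_id), segment,
                   hashlib.sha256).digest()
    return key_id, mac

def verify(key_id, connection_id, segment, mac):
    expected = hmac.new(traffic_key(key_id, connection_id), segment,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)
```

The point is that the operator still only ever configures shared secrets; the per-connection derivation and the rollover happen automatically underneath.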
