Against rekeying

It's IETF time again and recently I've reviewed a bunch of drafts concerned with cryptographic rekeying. In my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments. Now seems like as good a time as any...

As background, there are two major kinds of cryptographic keys:

  • Long-term keys (e.g., your password or an SSL server's RSA key pair).
  • Traffic keys used to encrypt and/or authenticate data in transit.

For instance, in channel security protocols like SSL/TLS, SSH, or IPsec, you use your long-term keys to authenticate a cryptographic handshake that sets up the traffic keys, which are then used to encrypt/MAC the data between the communicating peers. In other protocols, such as DNSSEC or X.509, long-term keys are used to directly protect data; this is particularly common in applications where data is signed and then published. Sometimes these situations shade into each other: even though TCP-MD5 is a channel security protocol, you use the shared key directly to authenticate the traffic. Even with TCP-AO, where you generate a separate key for each connection, connections are very long-lived, so the traffic keys are very long-lived as well.

All of the following rationales were recently seen in IETF submissions:

Key "strengthening". If you change keys frequently, analytic or brute-force attackers need to do more work in order to maintain their access to valid keys. See, for instance: draft-birkos-p2psip-security-key-refresh-00:

A secondary goal is to limit the amount of time available to attackers that may be using cryptanalysis in order to reveal private keys.

I've already beaten up on this idea: unless you change keys incredibly frequently, you just don't get a significant security improvement. For instance, changing keys daily is only about 9 bits more secure than changing them yearly.
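
To put a rough number on that (the arithmetic here is mine, not from any of the drafts): shrinking the key lifetime by a factor of k buys you at most log2(k) bits of effective strength against an attacker who has to recover each key within its lifetime.

    import math

    # Rekeying k times more often buys at most log2(k) bits, under the
    # (generous) assumption that the attacker must break each key before
    # it is retired.
    daily_vs_yearly = math.log2(365)        # ~8.5 bits
    hourly_vs_yearly = math.log2(365 * 24)  # ~13.1 bits
    print(f"daily vs yearly:  ~{daily_vs_yearly:.1f} bits")
    print(f"hourly vs yearly: ~{hourly_vs_yearly:.1f} bits")

Even hourly rekeying only gets you about 13 bits of nominal improvement.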

This rationale gets applied especially often in situations where for some reason you can't use a key as long as you would like, e.g., for packet size reasons or because you're using a key that the user has to remember.

Key exhaustion. Minimizing the amount of traffic protected with the same key that is available to the attacker. For instance: draft-ietf-karp-design-guide-00:

Cryptographic keys must have a limited lifetime so that they are vulnerable against cryptanalysis attacks. Each time a key is employed, it generates a cipher text. In case of routing protocols the cipher text is the authentication data that is carried by the protocol packets. Using the same key repetitively allows an attacker to build up a store of cipher texts which can prove sufficient for a successful cryptanalysis of the key value.

This was true once upon a time, but now it's just cryptographic inertia (do you still type "sync; sync; sync"?). While it's not true that cryptographic algorithms can be used for an unlimited number of operations, the limits are extremely far out and mostly depend on the properties of the cryptographic modes rather than the algorithms themselves. For instance, in cipher block chaining mode, with a b-bit block, you can safely encrypt up to around 2^{b/2} blocks. With a modern algorithm like AES, this means 2^{68} bytes of data, which is a truly ridiculous number. Even in situations where there are limits (e.g., counter mode with a limited counter space), the threat is generally to the data rather than the key. When we're talking about asymmetric algorithms (public key encryption, digital signature, etc.) there's no realistic threat from processing an unlimited amount of data.
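
As a rough sanity check on those numbers (my arithmetic, not part of the original argument), here is the CBC birthday bound worked out for AES's 128-bit block:

    # CBC birthday bound: roughly 2^(b/2) blocks can be encrypted under one
    # key before block collisions become likely.
    block_bits = 128                        # AES block size
    block_bytes = block_bits // 8           # 16 bytes per block
    safe_blocks = 2 ** (block_bits // 2)    # ~2^64 blocks
    safe_bytes = safe_blocks * block_bytes  # 2^68 bytes, roughly 295 exabytes
    print(f"~2^{safe_bytes.bit_length() - 1} bytes per key")

For scale, at a sustained 10 Gb/s that's on the order of 7,000 years of traffic under a single key.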

Damage limitation. If a key is disclosed, but you change it in the meantime, then you're only vulnerable during the time period before it is changed. (This is especially relevant-sounding if the key is disclosed but you don't know about it.) For instance: draft-ietf-karp-design-guide-00:

Another reason for limiting the lifetime of a key is to minimize the damage from a compromised key. It is unlikely a user will discover an attacker has compromised his or her key if the attacker remains "passive." Relatively frequent key changes will limit any potential damage from compromised keys.

This isn't totally crazy: obviously if you know that your key has been compromised, you should change it, so it's at least superficially plausible that if you think your key might have been compromised you should change it. But we need to ask how that could actually happen. You've got a 128-bit (or whatever) encryption key stuffed into some file on your hard drive. There are a few ways that key could become available to an attacker:

  • You could accidentally send it to them (e.g., cut-and-paste it into an AIM window [thanks to Chris Morrow for this vivid example]).
  • They could compromise your endpoint and steal it.
  • They could gain temporary access, e.g., by bribing your staff.

So, in the second case, changing your keys doesn't help, since the attacker will just steal them. In the third case, it might help, if the relevant staffer has been fired, or they have to take a risk every time they steal a key, or something. On the other hand, if they have continuous access, then changing the key doesn't help. The first is a real risk, but despite my native paranoia, it seems like realistically you would know if this happened and could just change the key then, rather than worrying about whether it happened without your knowledge.

Having a key leak without knowing it is primarily (though not exclusively) a concern for keys used for confidentiality. In order to exploit a key used for authentication/integrity you need to inject traffic, so this is more likely to be noticed. By contrast, to exploit a key used for confidentiality you just need to watch passively. This can also be done retroactively, since disclosure of an encryption key at time T still compromises data encrypted before the key was disclosed. Note that we're talking about the direct use of a key here, not about intent: protocols which use a long-term RSA key to authenticate a DH exchange aren't vulnerable to passive attack from disclosure of the RSA key, even if the only purpose of the protocol is confidentiality.

Moreover, if we follow this logic to its obvious conclusion we should be changing keys every few minutes; we don't, of course, because changing keys has costs (I'm also concerned that the very process of changing keys increases the risk of leakage, since you need more direct contact with the key management process). Without a mathematical model for the cost/benefit computation (I don't have anything more than handwaving just yet), I don't think it makes much sense to provide guidance that keys should be changed frequently.

6 Comments

    > (do you still type "sync; sync; sync"?)

    I LOLed. I still type sync;sync;sync -- just in case.

    C

    ...where of course ";" is a carriage return, not an inline ; -- otherwise the whole point of the repetition is wasted.

    I think that, absent enough field reports from people who actually routinely attack cryptographic systems, we can't really know whether changing keys is useful or not. I'm guessing you're right, but it would be silly to claim anyone outside of the COMINT community really knows for sure. All we can do is guess, and possibly badly.

    Here's another (albeit dubious) reason to roll over keys.

    When the storm troopers land on your rooftop, force all of your guards away from their keyboards, and drop your computer into a giant vat of liquid nitrogen so that they can carry it back home with them, you would like to minimize the amount of instant messenger data you've been exchanging with a friend that they will be able to decrypt, given that they have recorded the data on the wire and now have access to all the keys currently in use. Periodically rolling over session keys is an aspect of perfect forward secrecy. They can only recover data sent or received since the last key rollover. In this scenario, the rollover period should probably be time based rather than based on the quantity of data.

    With very long-standing connections (say, months), variations of this concern are not entirely unrealistic.

    I fully agree with EKR w.r.t. re-keying as a mitigation for key aging: with modern cryptosystems there's really no need to do that.

    I wonder if the issue that Charlie brings up can't be dealt with by a form of 'lite' re-keying where each peer in a two-party protocol independently derives new session keys from the previous ones and a counter and sends the counter in its messages. There'd be a slight synchronization problem for non-ordered message streams (as in DTLS) for the other peer to decide when to forget older keys. This approach to re-keying strikes me as vastly simpler than SSHv2 re-keying and TLS re-negotiation. On the other hand, for something like IKE where there is no re-keying, just establishment of new SAs to replace old ones, there'd be no benefit.
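
    For concreteness, here's a minimal sketch of what such a counter-based derivation might look like; the HMAC construction, function name, and label are illustrative assumptions, not a mechanism taken from any of the protocols mentioned above.

        import hmac
        import hashlib

        def next_traffic_key(current_key: bytes, counter: int) -> bytes:
            # Hypothetical 'lite' rekey: derive the next traffic key from the
            # previous one plus a counter carried in the peer's messages, so
            # each side can ratchet forward independently.
            return hmac.new(current_key,
                            b"rekey" + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()

        k0 = bytes(32)                  # placeholder initial traffic key
        k1 = next_traffic_key(k0, 1)
        k2 = next_traffic_key(k1, 2)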

    1) First of all, you only really needed two syncs, not three. Guess the 3rd was for good luck. (At least that was true going back as far as Version 6 and Version 7 UNIX and, IIRC, all the way up through SVR4. And yes, I am a dinosaur. ;)
    2) Agree with most everything Eric wrote on rekeying, but even if the entire cryptographic community agreed with EKR, we are going to be stuck with it for a while, since the bureaucratic wonks writing private industry security policies such as the PCI Data Security Standards, NACHA, etc. all have it in their brains that regularly scheduled rekeying is a Good Thing(TM).

    True story: Back in the days of PCI DSS 1.0, the Data Security Standard only specified that keys should be changed regularly, but did not specify how often. So I posed the question to our PCI auditor. He came back and said that it was deliberately left vague because it depended on the situation. I said I could not test for compliance against such an ill-specified requirement. I asked him, "So does 'regular' mean once every use, once every minute, once every year, once every 10,000 years, or what?" I told him that if he didn't get me a specific answer, I was going to decide, and my choice was going to be once every 10,000 years or whenever we had reasonable suspicion that the key had been compromised, whichever came first. Shortly thereafter, he got back to me and said "at least once a year", and soon after that, the one-year minimum was added to the next DSS release.

    Of course, because of that rekeying requirement in PCI DSS, followed by NACHA, our corporate security folks mandated a one-year minimum for all symmetric keys. But other than when there's a possibility that a key has been leaked to an unauthorized person (e.g., a person with root access leaves the company), I agree it is of little benefit other than giving people the warm fuzzies. It is much harder to build a cryptosystem that can automatically detect and deal with key change operations than one that doesn't have to. Would love to see your formal cost/benefit analysis on this if you do one.
