COMSEC: March 2010 Archives

 

March 24, 2010

I just finished complaining in the KARP WG meeting about their rationale for automatic key management:
But why do we need a KMP?
  • To address brute force attacks [RFC3562] recommends:
    • frequent key rotation,
    • limited key sharing,
    • key length restrictions, etc.
  • Advances in computational power make that management burden untenable for MD5 implementations in today's routing
  • Keys must be of a size and composition that makes configuration and maintenance difficult or keys must be rotated with an unreasonable frequency.
  • KMPs help A LOT, IF you can make them operationally usable

I'm definitely in favor of automatic key management (indeed, I think there's some possibility that I was the person who prompted Steve Bellovin to push this point in RFC 4107), but I think this set of rationales is pretty much totally wrong (you can find my review here). Greg Lebovitz prompted me to explain what was good about AKM and I grudgingly agreed, and since I'm always on the lookout for blog content...

Anyway, a little background might be in order. In a simple (non-automatic) system, we just agree on a shared key (e.g., it's delivered by a guy with a briefcase chained to his wrist) and we use that key directly to protect (encrypt, authenticate, etc.) the data. This is generally referred to as "manual keying". Examples of protocols which use manual keying are the TCP MD5 option as well as some modes of IPsec AH and ESP.

Manual keying seems so easy that it's natural to ask why you would ever do anything else. Unfortunately, there are two major problems with pure manual keying: security and flexibility. The security problem is that while it's OK to use the same key for a long time, using it directly to encrypt data in multiple contexts can lead to a lot of problems. Probably the easiest to understand is what's called a "two-time pad". In a number of encryption algorithms (technical note: stream ciphers) it's unsafe to encrypt two pieces of data with the same key. If you want to encrypt multiple chunks of data, you can get around this by treating the chunks as if they were concatenated together and saving the encryption state in between chunks. Unfortunately, if you (for instance) reboot the computer, this state can get lost and now you're just encrypting twice with the same key, which, as I said, is bad. (I'm oversimplifying here, but it's close enough for our purposes.)

The standard fix is to create a new encryption key for each session, using the long-term key to bootstrap it. This generally requires having some mechanism where the two sides agree to start a new session and establish a fresh key, in other words, automatic key management. There are a number of other benefits as well, including having explicit session boundaries, which often, though not always, implies positive identification of the party on the other end by forcing him to confirm that he has the right keys.
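To make the two-time pad failure concrete, here's a minimal sketch with a toy HMAC-based stream cipher (the keystream construction and names are mine for illustration, not any real protocol's): reusing the same key for two messages lets an eavesdropper XOR the ciphertexts and recover the XOR of the plaintexts, while deriving a fresh session key from the long-term key and a per-session nonce avoids the problem.

    # Toy demonstration of the two-time pad problem and the standard fix.
    # The HMAC-based keystream below is my own illustration, not any real
    # protocol's cipher.
    import hmac, hashlib, os

    def keystream(key: bytes, n: int) -> bytes:
        # Expand key into n pseudorandom keystream bytes.
        out, counter = b"", 0
        while len(out) < n:
            out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            counter += 1
        return out[:n]

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        # Stream cipher: XOR the plaintext with the keystream.
        return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

    long_term_key = os.urandom(32)
    p1, p2 = b"attack at dawn!!", b"retreat at dusk!"

    # BAD: two messages under the same key means the same keystream, so
    # XORing the two ciphertexts cancels it out and leaks p1 XOR p2.
    c1, c2 = encrypt(long_term_key, p1), encrypt(long_term_key, p2)
    assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(p1, p2))

    # FIX (automatic key management in miniature): derive a fresh session
    # key from the long-term key and a per-session nonce, then encrypt.
    nonce = os.urandom(16)
    session_key = hmac.new(long_term_key, b"session" + nonce, hashlib.sha256).digest()
    c1 = encrypt(session_key, p1)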

The flexibility problem is mostly about coordination. Let's say we build a system where everyone supports algorithm A and then we want to introduce algorithm B. When Alice wants to connect to Bob, they now need to configure not only the keys they want to use but also the algorithms. And if they get it wrong, nothing will work. As soon as you have more than a few options, the number of ways to get a configuration mismatch starts to get out of control. If we have a mechanism where we explicitly establish a session, then we can also use that to figure out what each side supports and pick a common mechanism. Similarly, if we have systems which operate with multiple security settings, an automatic handshake between them can help to determine the most appropriate mode.
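As a toy illustration of how explicit negotiation removes the coordination problem (the function and algorithm names here are hypothetical, not any real protocol's wire format): each side advertises what it supports, and the responder picks the first mutually supported algorithm from the initiator's preference-ordered list, so an upgraded Alice can prefer algorithm B while still interoperating with an un-upgraded Bob.

    # Toy algorithm negotiation: pick the first algorithm in the
    # initiator's preference order that the responder also supports.
    def negotiate(initiator_prefs: list[str], responder_supported: set[str]) -> str:
        for alg in initiator_prefs:
            if alg in responder_supported:
                return alg
        raise ValueError("no algorithm in common")

    # Alice prefers the new algorithm B; Bob only speaks A. The handshake
    # quietly falls back to A instead of failing on a config mismatch.
    assert negotiate(["B", "A"], {"A"}) == "A"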

In summary, then, my answer is something like this:

The use of automatic key management mechanisms offers a number of benefits over manual keying. First, it provides fresh traffic keying material for each session, thus helping to prevent a number of attacks such as inter-connection replay and two-time pads. Second, it allows for the negotiation of algorithms, modes, and parameters, thus providing interoperability between endpoints with disparate capabilities and configurations. This second point is especially important because it enables incremental deployment of endpoints with new capabilities while retaining backward compatibility.

I wanted to make one more point: because most of the automatic key management mechanisms we have are oriented towards public key cryptography (though they also typically have some sort of pre-shared key mode), there is a tendency to assume that automatic key management implies buying into the whole public key thing, including a public key infrastructure, which is scary for a lot of people. This isn't correct, however. It's quite possible to design an automatic key management system which only requires the same shared keys you would otherwise use (and as I said, popular protocols like TLS and IPsec/IKE do this already if you configure them right). In fact, if you do things right, you can make the whole thing mostly painless. For example, the recently-designed TCP-AO replaces the manual key management in TCP-MD5 with an (incredibly primitive) automatic key management system that only does new key establishment and key rollover, but in a way that's (hopefully) invisible to the end-user.
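To give a flavor of what shared-key automatic key management can look like, here's a rough sketch loosely in the spirit of TCP-AO (RFC 5925); the KDF inputs and layout are invented for illustration, not RFC 5925's actual construction. The only thing the operator configures is the same shared master key they'd use for manual keying; every connection then gets its own traffic key automatically.

    # Per-connection traffic keys from a single pre-shared master key,
    # loosely TCP-AO-flavored; the context layout is illustrative only.
    import hmac, hashlib

    def traffic_key(master_key: bytes, src: str, dst: str,
                    src_isn: int, dst_isn: int) -> bytes:
        # Bind the derived key to the connection's endpoints and initial
        # sequence numbers so each connection keys differently.
        context = f"{src}|{dst}|{src_isn}|{dst_isn}".encode()
        return hmac.new(master_key, context, hashlib.sha256).digest()

    # A new connection (fresh ISNs) automatically yields a fresh key; no
    # public keys, no PKI, and nothing new for the operator to configure.
    k1 = traffic_key(b"shared-master-key", "10.0.0.1", "10.0.0.2", 12345, 67890)
    k2 = traffic_key(b"shared-master-key", "10.0.0.1", "10.0.0.2", 54321, 9876)
    assert k1 != k2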

 

March 22, 2010

It's IETF time again and recently I've reviewed a bunch of drafts concerned with cryptographic rekeying. In my opinion, rekeying is massively overrated, but apparently I've never bothered to comprehensively address the usual arguments. Now seems like as good a time as any...

As background, there are two major kinds of cryptographic keys:

  • Long-term keys (e.g., your password or an SSL server's RSA key pair).
  • Traffic keys used to encrypt and/or authenticate data in transit.

For instance, in channel security protocols like SSL/TLS, SSH, or IPsec, you use your long-term keys to authenticate a cryptographic handshake that sets up the traffic keys, which are then used to encrypt/MAC the data between the communicating peers. In other protocols, such as DNSSEC or X.509, long-term keys are used to directly protect data—this is particularly common in applications where data is signed and then published. Sometimes these situations shade into each other: even though TCP-MD5 is a channel security protocol, you use the shared key directly to authenticate the traffic. And even with TCP-AO, where you generate a separate key for each connection, connections are very long-lived, so the traffic keys are very long-lived as well.
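A minimal sketch of that division of labor (purely illustrative; this is not the actual TLS/SSH/IPsec construction): the long-term key's only job is to authenticate a handshake containing fresh randomness, and the bulk data is then protected under short-lived traffic keys derived from that handshake.

    # Long-term key vs. traffic keys in a channel security protocol.
    # Illustrative construction only, not a real protocol's key schedule.
    import hmac, hashlib, os

    long_term_key = os.urandom(32)          # provisioned once, lives for years

    # Per-session handshake: both sides contribute fresh randomness.
    client_random, server_random = os.urandom(16), os.urandom(16)
    transcript = client_random + server_random

    # The long-term key authenticates the handshake transcript...
    handshake_mac = hmac.new(long_term_key, transcript, hashlib.sha256).digest()

    # ...and short-lived traffic keys, derived per session, protect the
    # actual data and are thrown away when the session ends.
    traffic_key = hmac.new(long_term_key, b"traffic" + transcript,
                           hashlib.sha256).digest()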

All of the following rationales were recently seen in IETF submissions:

Key "strengthening": If you change keys frequently, analytic or brute-force attackers need to do more work in order to maintain their access to valid keys. See, for instance, draft-birkos-p2psip-security-key-refresh-00:

A secondary goal is to limit the amount of time available to attackers that may be using cryptanalysis in order to reveal private keys.

I've already beaten up on this idea: unless you change keys incredibly frequently, you just don't get a significant security improvement. For instance, changing keys daily is only about 9 bits more secure than changing them yearly.
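To spell out where that number comes from (a back-of-the-envelope model, assuming a brute-force attacker who tries keys at a constant rate): shortening a key's lifetime by a factor of n cuts the work an attacker can apply to any one key by the same factor, which is equivalent to adding $\log_2 n$ bits of key length. For daily versus yearly rotation:

    $ \log_2 365 \approx 8.5 \text{ bits} $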

This rationale gets applied especially often in situations where for some reason you can't use a key as long as you would like, e.g., for packet size reasons or because you're using a key that the user has to remember.

Key exhaustion: minimizing the amount of traffic protected with the same key that is available to the attacker. For instance, draft-ietf-karp-design-guide-00:

Cryptographic keys must have a limited lifetime so that they are vulnerable against cryptanalysis attacks. Each time a key is employed, it generates a cipher text. In case of routing protocols the cipher text is the authentication data that is carried by the protocol packets. Using the same key repetitively allows an attacker to build up a store of cipher texts which can prove sufficient for a successful cryptanalysis of the key value.

This was true once upon a time, but now it's just cryptographic inertia (do you still type "sync; sync; sync"?). While it's not true that cryptographic algorithms can be used for an unlimited number of operations, the limits are extremely far out and mostly depend on the properties of the cryptographic modes rather than the algorithms themselves. For instance, in cipher block chaining mode, with a b-bit block, you can safely encrypt up to around 2^{b/2} blocks. With a modern algorithm like AES, this means 2^{68} bytes of data, which is a truly ridiculous number. Even in situations where there are limits (e.g., counter mode with a limited counter space), the threat is generally to the data rather than the key. When we're talking about asymmetric algorithms (public key encryption, digital signature, etc.) there's no realistic threat from processing an unlimited amount of data.
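Working out the AES number above as a sanity check (CBC mode with block size $b = 128$ bits, i.e., 16 bytes per block):

    $ 2^{b/2} = 2^{64} \text{ blocks} \times 2^{4} \text{ bytes/block} = 2^{68} \text{ bytes} \approx 3 \times 10^{20} \text{ bytes} $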

Damage limitation: If a key is disclosed but you change it in the meantime, then you're only vulnerable during the time period before it is changed. (This sounds especially relevant if the key is disclosed but you don't know about it.) For instance, draft-ietf-karp-design-guide-00:

Another reason for limiting the lifetime of a key is to minimize the damage from a compromised key. It is unlikely a user will discover an attacker has compromised his or her key if the attacker remains "passive." Relatively frequent key changes will limit any potential damage from compromised keys.

This isn't totally crazy: obviously if you know that your key has been compromised, you should change it, so it's at least superficially plausible that if you think your key might have been compromised you should change it. But we need to ask how that could actually happen. You've got a 128-bit (or whatever) encryption key stuffed into some file on your hard drive. There are a few ways that key could become available to an attacker:

  • You could accidentally send it to them (e.g., cut-and-paste it into an AIM window [thanks to Chris Morrow for this vivid example]).
  • They could compromise your endpoint and steal it.
  • They could gain temporary access, e.g., by bribing your staff.

So, in the second case, changing your keys doesn't help, since the attacker will just steal the new ones. In the third case, it might help, if the relevant staffer has been fired, or if they have to take a risk every time they steal a key, or something; on the other hand, if they have continuous access, then changing the key doesn't help. The first is a real risk, but despite my native paranoia, it seems like realistically you would know if this happened and could just change the key then, rather than worrying about whether it happened without your knowledge.

Having a key leak without knowing it is primarily (though not exclusively) a concern for keys used for confidentiality. In order to exploit a key used for authentication/integrity you need to inject traffic, so this is more likely to be noticed. By contrast, to exploit a key used for confidentiality you just need to watch passively. This can also be done retroactively, since disclosure of an encryption key at time T still compromises data encrypted before the key was disclosed. Note that we're talking about the direct use of a key here, not about intent: protocols which use a long-term RSA key to authenticate a DH exchange aren't vulnerable to passive attack from disclosure of the RSA key, even if the only purpose of the protocol is confidentiality.

Moreover, if we follow this logic to its obvious conclusion we should be changing keys every few minutes; we don't, of course, because changing keys has costs. (I'm also concerned that the very process of changing keys increases the risk of leakage, since you need more direct contact with the key management process.) Without a mathematical model for the cost/benefit computation (I don't have anything more than handwaving just yet), I don't think it makes much sense to provide guidance that keys should be changed frequently.


March 11, 2010

This paper on a new fault-based attack on RSA has been making the rounds (Pellegrini, Bertacco, and Austin, "Fault-Based Attack of RSA Authentication"). The general idea here is that you have a system that is doing RSA signatures (e.g., an SSL/TLS Web server). You induce faults in the signature computation by reducing the power to the processor, which causes it to produce invalid signatures that can then be analyzed by the attacker to recover the private key. They demonstrate this attack on OpenSSL.

Theoretically, this is interesting, but I'm not sure how much practical impact it has. First, in order to mount this attack, you need direct physical access to the machine in order to control the input voltage supply. Unless you're working with a computer that is fairly heavily secured, physical access generally translates into being able to take control of the device and extract the private key anyway. Second, the attack as implemented was performed on an FPGA-based SPARC implementation, and the researchers seem to have directly controlled the input power to the processor. In most computers (though DC-based datacenters may be different) the power to the chip is pretty heavily regulated by the power supply, so it's at least an open question whether you would be able to get good control over the chip input voltage by manipulating the AC line voltage. So it's not like there are a huge number of environments in which this attack would be feasible.

Based on my reading of this paper, because the attack relies on invalid signatures, the simple countermeasure is just to check signatures before you emit them, which OpenSSL doesn't currently do. (The authors call OpenSSL's failure to do this a "serious vulnerability", but I'm not sure I agree with that characterization, since my understanding is that it's pretty standard practice not to do so.) Because RSA signature verification is about 20x faster than RSA signature generation, adding this additional check would not cause significant performance overhead. However, even without this countermeasure, this doesn't seem like a significant risk to most uses of RSA.
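For what it's worth, the check is only a few lines. Here's a minimal sketch of the verify-before-emit countermeasure using the pyca/cryptography package rather than OpenSSL's internals (the message contents are made up for illustration):

    # Verify-before-emit: check your own RSA signature against the public
    # key before releasing it; a fault during signing then produces an
    # error instead of a key-leaking invalid signature.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"handshake bytes to be signed"

    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verification is ~20x cheaper than signing, so this self-check adds
    # only a few percent overhead; it raises InvalidSignature on a fault.
    try:
        private_key.public_key().verify(signature, message,
                                        padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        raise RuntimeError("fault detected; refusing to emit signature")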