COMSEC: April 2007 Archives


April 9, 2007

As I mentioned earlier, all symmetric authentication mechanisms have some level of key/password equivalence. However, this can be removed with asymmetric (aka "public key") techniques.

The simplest such technique is very much like a challenge/response technique except that the response sent by the AP is a public key digital signature. The way that this works is that the VP knows the AP's public key but not the AP's private key. The VP provides the AP with a random challenge and the AP returns: Sign(Kpriv, Challenge)1. This technique is what's used in SSL client authentication and in the SSH RSA and DSA modes (sometimes used with certificates). A related technique is to have the AP have an encryption/decryption key rather than a signature key and have the VP encrypt a message under that key (this is how SSL server authentication often works).
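The exchange can be sketched with a toy textbook-RSA signature. The key here is deliberately tiny and unpadded, purely for illustration; a real system uses full-size RSA or ECDSA with proper padding.

```python
import hashlib
import os

# Toy RSA key pair for the AP (demo-sized; real keys are 2048+ bits).
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(priv_d: int, challenge: bytes) -> int:
    # AP signs a hash of the challenge with its *private* key.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, priv_d, n)

def verify(pub_e: int, challenge: bytes, sig: int) -> bool:
    # VP checks the response using only the AP's *public* key.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, pub_e, n) == h

challenge = os.urandom(16)          # VP's fresh random challenge
response = sign(d, challenge)       # AP's answer: Sign(Kpriv, Challenge)
assert verify(e, challenge, response)
```

Note that the VP never sees (or stores) anything that would let it forge the AP's responses, which is exactly the property the symmetric schemes lack.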

So, if these techniques are so great, why aren't they used all the time? There are a number of reasons:

  1. Public keys are hard to work with. They require the AP to have a public key pair stored on a disk somewhere (impairing portability) and require some way to carry a fairly heavyweight data object (~100 bytes of binary data) to the server. People put up with this for SSH, but they don't like it.
  2. It doesn't provide mutual authentication (and inherently can't because the server only has the public key). Obviously, you can have the server have a public key pair as well, but that makes the key management problem even more annoying.

The first problem is soluble, at least in part, if you're willing to trade away some security. The idea here is that you generate your key pair by hashing your password. Then the AP doesn't need to store a key pair on disk (again, assuming that you have some other method of authenticating in the other direction). The security tradeoff is that an attacker can now mount a dictionary attack on your communications with the server unless they're encrypted: he just captures a transcript and then keeps generating trial passwords until he finds one that generates the right private key and hence signature. Of course, this problem already existed with challenge/response-based password mechanisms, so the problem hasn't been made any worse.
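Here's a sketch of both halves of that idea: deriving a discrete-log key pair from the password, and then the offline dictionary attack against a captured public key. The group parameters (the Mersenne prime 2^127 - 1, generator 3) and the password list are arbitrary demo choices; a real design would use a standardized group and a deliberately slow password hash rather than raw SHA-256.

```python
import hashlib

p = 2**127 - 1   # a Mersenne prime; fine for illustration, not for deployment
g = 3

def keypair_from_password(password: str):
    # Private key is just a hash of the password; public key is g^x mod p.
    x = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return x, pow(g, x, p)

x, y = keypair_from_password("hunter2")   # AP derives its keys on the fly

# An attacker who captured y (say, from an unencrypted transcript) can
# grind candidate passwords offline until the derived public key matches:
recovered = next(
    guess for guess in ["letmein", "password", "hunter2"]
    if keypair_from_password(guess)[1] == y
)
```

The attack loop is exactly the dictionary attack described above: each trial costs one hash and one exponentiation, all done offline.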

We can also reduce the amount of key data that gets copied to the VP by storing a hash of the public key rather than the full key. The AP then provides the public key at authentication time, and the VP compares it to the stored hash. This brings the size of the data stored by the VP down to about 128-160 bits. An alternative is to simply give the VP the password on initial registration and let him compute the private/public key pair and then "forget" the password. This obviously needs to be used with some technique to ensure that the private keys are different for multiple VPs even if you use the same password all the time. None of these techniques solves the mutual authentication problem, which needs to be attacked by other means.
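The hash-of-the-key variant is simple enough to show directly. The key bytes below are a made-up placeholder standing in for the AP's real public key blob:

```python
import hashlib

ap_public_key = b"...placeholder for the AP's ~100-byte public key blob..."

# Registration: the VP keeps just 32 bytes instead of the whole key.
stored = hashlib.sha256(ap_public_key).digest()

# Authentication: the AP sends its full public key along with the signed
# challenge; the VP checks it against the stored hash before using it.
presented_key = ap_public_key
assert hashlib.sha256(presented_key).digest() == stored
# ...only now does the VP use presented_key to verify the signature...
```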

This brings us to the topic of zero-knowledge password proofs (also known as password-authenticated key agreement). These protocols use public key techniques to allow two parties who jointly share a password to establish a shared key which is not accessible to any attacker. These protocols can also be constructed so that the VP does not have a password/key equivalent. This differs from the easier-to-understand public key techniques in two important ways. First, they support mutual authentication natively. Second, they don't allow the attacker to mount a dictionary attack on a single connection: for each guess he wants to check, he needs to do an online exchange with one side. The major remaining attack is that the VP can mount a dictionary attack on his stored value in an attempt to recover the AP's password (though of course the AP/VP terminology is less useful in a mutually authenticated environment). Short of using high-entropy passwords/keys, this attack doesn't seem removable.
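To give the flavor, here is a stripped-down SPEKE-style sketch: both sides derive the Diffie-Hellman generator from the shared password, so a successful key agreement implicitly proves knowledge of the password on both ends, and an attacker gets at most one password guess per online exchange. The prime is a toy choice and all of the validation checks a real PAKE (SRP, SPEKE, etc.) requires are omitted.

```python
import hashlib
import secrets

p = 2**127 - 1   # demo prime; a real deployment uses a standardized group

def password_generator(password: str) -> int:
    # SPEKE-style: the DH generator itself is derived from the password.
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h, 2, p)   # square to land in the quadratic-residue subgroup

def pake_side(password: str):
    g = password_generator(password)
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)     # (secret exponent, public DH share)

# Both parties know the password, so they derive the same generator:
a_priv, a_pub = pake_side("correct horse")
b_priv, b_pub = pake_side("correct horse")
key_a = pow(b_pub, a_priv, p)
key_b = pow(a_pub, b_priv, p)
# key_a == key_b; a party using a wrong password would derive a different
# generator and end up with a mismatched key.
```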

1. Actually standard practice is for the AP to provide some randomness of its own, but the attacks where that's relevant aren't that important for understanding the concept.


April 7, 2007

In our previous episode, we talked about key equivalence in physical locks and password systems. As you'll recall, conventional password systems have the problem that the authenticating party (i.e., the user, hereafter called the AP for generality) needs to provide their password to the verifying party (VP, i.e., the server). This has (at least) two bad properties:
  1. An attacker who can intercept your communication with the verifying party or who temporarily controls the verifying party can capture your authenticator (password) when you use it to log in and use it to impersonate you to that verifying party.
  2. An attacker who can intercept your communication with the verifying party or who temporarily controls the verifying party can capture your authenticator (password) and use it to impersonate you to other verifying parties with which you used the same password (and you know you do).

The way to solve the first problem is to have a protocol that allows the AP to prove that they know the password without actually revealing it to the VP. The standard solution to this is what's called a challenge-response protocol. The VP provides the AP with a randomly chosen challenge (technically the challenge just has to be one the VP hasn't used before, but this is almost always chosen randomly) and the AP computes some one-way function of the password/key as the response. The VP stores a copy of the password/key and can thus independently recompute the response. If they match, then the VP knows the AP is who he says he is (or at least knows the password/key).
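The exchange above can be sketched in a few lines, using HMAC-SHA256 as the one-way function F:

```python
import hashlib
import hmac
import os

shared_key = b"the AP's password or key"   # known to both AP and VP

# VP -> AP: a fresh random challenge.
challenge = os.urandom(16)

# AP -> VP: response = F(key, challenge). The password never crosses
# the wire, only this one-way function of it.
response = hmac.new(shared_key, challenge, hashlib.sha256).digest()

# VP recomputes the response from its own copy of the key and compares.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```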

But wait, last time I said that it was bad for the VP to have the password:

This has a big problem. If someone breaks into the server and gets a copy of the password list they get a copy of everyone's password and can impersonate users. This is what's called a password equivalent or a key equivalent for reasons that will become clear a little later. This lets them leverage a disclosure exploit (i.e., one that lets them read files on a system) into an intrusion exploit (i.e., one that lets them break in or pose as another user). It also means that the password file has to be stored with very strict permissions.

Previously, we solved this problem by storing the hash of the password, but that worked because the AP gave the VP the password to hash. In a challenge-response system the VP needs to independently compute the response. Now, you can of course compute the response based on the password hash rather than the password, i.e., response = F(challenge, H(password)) but that doesn't solve the problem because the VP's password file contains H(password). So, while you don't actually have the password you have a value which is equivalent to it, hence the term password equivalent. Anyone who compromises the password file can impersonate the AP to the VP. So, we've solved the problem of someone intercepting1 the authentication exchange being able to impersonate the AP but we've actually made the problem of password file theft worse.
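The password-equivalence point is easy to demonstrate in code: if the response is computed from H(password), then the stolen hash is itself enough to impersonate the AP, and the attacker never needs the password at all. (HMAC stands in for F here; the password is a made-up example.)

```python
import hashlib
import hmac
import os

stored = hashlib.sha256(b"hunter2").digest()   # what the VP's file holds

challenge = os.urandom(16)

def respond(password_hash: bytes) -> bytes:
    # response = F(challenge, H(password))
    return hmac.new(password_hash, challenge, hashlib.sha256).digest()

# A legitimate AP hashes its password first...
legit = respond(hashlib.sha256(b"hunter2").digest())
# ...but an attacker holding the stolen hash computes the same thing:
attacker = respond(stored)
```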

We can improve the problem somewhat by making sure that each VP has a different password. Then at least you can't compromise one VP and use it to attack another. Of course, it's not practical to believe that people will actually use a different password for each of the 30 web sites they have logins for, but you can solve this problem by hashing the name of the VP into the stored password. I.e., the VP stores H(VP-name, password)2 and the response is computed using that value as the input. So, if you get at a VP's password file you can impersonate APs to that VP, but not to any other VP. This is an improvement (call it weak password equivalence), but it's not perfect. However, it's the best we can do with symmetric cryptography. In our next installment, we'll see how to improve the situation still further.
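A minimal sketch of that per-VP derivation, with made-up VP names (as footnote 2 says, a real design would use HMAC or a proper KDF rather than hashing a concatenation):

```python
import hashlib

def vp_key(vp_name: str, password: str) -> bytes:
    # Stored value = H(VP-name, password); the separator just keeps the
    # two fields from running together.
    return hashlib.sha256(vp_name.encode() + b"|" + password.encode()).digest()

same_password = "hunter2"
bank_key = vp_key("bank.example", same_password)
shop_key = vp_key("shop.example", same_password)
# Different VPs end up with unrelated stored values, so stealing one VP's
# password file doesn't help you at any other VP.
assert bank_key != shop_key
```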

1. Well, mostly. An attacker can still mount a man-in-the-middle attack on a single authentication, and then pose as the AP for the duration of that session, but he can't reuse the captured authenticator later. Moreover, this attack can be fixed by binding the challenge-response to a cryptographically protected channel between client and server. One example of this is TLS pre-shared key mode (RFC 4279).

2. Yeah, I'm sure you'd rather use HMAC, but a hash is close enough to get the idea across and is mostly secure in most settings.


April 6, 2007

A British jail is changing all its locks because the keys were shown on TV:
An ITN team mistakenly filmed keys on a visit to Feltham Young Offenders' Institution, West London — sparking fears they could be copied.

It meant the nick's 11,000 locks and 3,200 keys all had to be replaced.

First, I'm fairly skeptical that you can reverse-engineer the keys for a lock based on just seeing the key on TV (and unless the lock is incredibly badly engineered, I don't see how you can do it with the lock) unless it's some extreme close-up shot, in which case it should be easy for the jail to figure out what keys were compromised and just rekey them, rather than the whole jail. Second, keys are just part of the jail defense-in-depth system, so hopefully compromise of keys isn't a disaster. After all, it's not that hard to pick most locks, so you can't count on only the lock anyway.

In general, it's not an attractive property of a security system that just seeing one of the elements allows the attacker to break the system. This is sort of inherent in the construction of ordinary physical locks, but even there you could improve the situation a bit by (for instance) putting the beveled sections of the key on the inside rather than the outside, so that just looking at the key doesn't reveal much information. It's of course harder to cut keys that way with conventional cutting machines, but arguably that's a feature since it means that you need specialized equipment to duplicate the keys1, which presents a modest barrier. The bottom line is still that with physical lock systems, if you can examine the key (even briefly) or the lock (sometimes quite extensively), you can typically figure out what the key looks like well enough to get in.

In digital security systems, by contrast, we can do quite a bit better. Let's start by talking about a simple password system like you would use to log in to your bank (and like people used to use to log in to their computers back when they were multiuser). The way this works is that you type your password into your Web browser and it's sent over the Intertubes (hopefully encrypted with SSL) to the server on the other side, which needs to check it. The easiest way to do this is to have the server just store a copy of the password locally and do a memory comparison.

This has a big problem3. If someone breaks into the server and gets a copy of the password list they get a copy of everyone's password and can impersonate users. This is what's called a password equivalent or a key equivalent for reasons that will become clear a little later. This lets them leverage a disclosure exploit (i.e., one that lets them read files on a system) into an intrusion exploit (i.e., one that lets them break in or pose as another user). It also means that the password file has to be stored with very strict permissions.

The fix for this problem is well known. You don't store the password itself but rather you store a one-way function (originally computed with DES but now typically with a hash function) of the password. Call this H(Password). When the user provides their password you compute H(password) and compare it to the stored value. If they match, the user is in. This scheme has the advantage that compromise of the password file is much less dangerous. In fact, on old Unix systems password files used to be publicly readable until it became clear that you could simply try a bunch of candidate passwords until you got a hash that matched (this is called a dictionary search) at which point we went back to hiding the passwords. Even so, a dictionary search is a lot harder than just reading the passwords off the disk.
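Both the hashed storage scheme and the dictionary search against it fit in a few lines. (The username, password, and candidate list are made-up examples; real systems also add per-user salt and a deliberately slow hash, which plain SHA-256 is not.)

```python
import hashlib

def H(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

password_file = {"alice": H("letmein")}   # server stores H(password) only

# Login check: hash the submitted password and compare to the stored value.
def check(user: str, submitted: str) -> bool:
    return password_file.get(user) == H(submitted)

# Dictionary search: try candidate passwords until one hashes to the
# stored value (or the candidate list runs out).
cracked = next(
    (w for w in ["123456", "qwerty", "letmein"] if H(w) == password_file["alice"]),
    None,
)
```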

Even with this fix, simple passwords have the big problem that if you can convince the user to authenticate to you just once then you know their password (it doesn't help here that users tend to use the same password on multiple sites). This is the basis of both (pre-SSL) password sniffing attacks and of phishing. So, the state we have now is that we can make examining the lock basically useless (as long as people choose really strong passwords) but since authenticating requires presenting a copy of the key, if you can examine the key (e.g., by impersonating the lock) you can impersonate the user as much as you want. This is the state of nearly all Web-based login systems today, but it can be improved upon quite a bit by some cryptography. I'll get to that next.

1. By contrast, the major security feature on "do not duplicate" keys is often the stamp that says "DO NOT DUPLICATE" (the capital letters are what make it mandatory2). Sometimes, but not always, the blanks are restricted, but obviously the stamp has nothing to do with that.

2. In this document, the keywords "MUST", "MUST NOT", "REQUIRED", "SHOULD", "SHOULD NOT", and "MAY" are to be interpreted as described in RFC 2119.

3. Note to advanced readers, don't bother me about timing analysis. I'll try to write that up later.

UPDATE: In the comments, Chris Walsh reminds me that someone actually has copied a Diebold key from a picture on a web site. I haven't seen the relevant picture, but I suspect it's a lot better than your average TV shot, which tends to be taken from a funny angle and be fairly low resolution.