Understanding the TLS Renegotiation Attack

Marsh Ray has published a new attack on the TLS renegotiation logic. The high level impact of the attack is that an attacker can arrange to inject traffic into a legitimate client-server exchange such that the TLS server will accept it as if it came from the client. This may allow the attacker to execute operations on the server using the client's credentials (e.g., order a pizza as the client). However, the attacker does not (generally) get to see the response. Obviously this isn't good, but it's not the end of the world. More details below.

TLS Details
The attack exploits TLS's renegotiation feature, which allows a client and server who already have a TLS connection to negotiate new parameters, generate new keys, etc. Renegotiation is carried out in the existing TLS connection, with the new handshake packets being encrypted along with application packets. The difficulty is that they're not otherwise tied to the channel, which gives the attacker a window. The simplest form of the attack is shown below [cribbed from draft-rescorla-tls-renegotiate.txt]

Client                        Attacker                        Server
------                        -------                         ------
                                  <----------- Handshake ---------->
                                  <======= Initial Traffic ========>
<--------------------------  Handshake ============================>
<======================== Client Traffic ==========================>

So, in order to mount the attack, the attacker first connects to the TLS server. He can communicate with the server as much as he wants, including making an arbitrary number of requests/responses, etc. This traffic is all encrypted and shown as ==. Then when he's ready, he hijacks the client's connection to the server (in practice he might start by hijacking the connection and then connect to the server but it doesn't matter) and just proxies the client's traffic over the encrypted channel. The client negotiates with the server and from that point on the client and the server communicate directly. Note that the client is communicating with the attacker in the clear but the second handshake is encrypted and goes over the attacker's channel. Thus, the client does not know that he is renegotiating. However, the server thinks that the initial traffic with the attacker is also from the client. There are also other (probably less useful) variants where both sides see a renegotiation but of different connections.

Impact on Existing Applications
TLS itself is just a security protocol, so the impact of this attack depends on the application protocol running over TLS. The most important of these protocols is of course HTTP over TLS (HTTPS). Most Web applications do initial authentication via a username/password pair and then persist that authentication state with HTTP cookies (a secret token that is sent with any request). An attacker might exploit this issue by sending a partial HTTP request of his own that requested some resource. This then gets prefixed to the client's real request.

E.g., the attacker would send:

GET /pizza?toppings=pepperoni;address=attackersaddress HTTP/1.1 
X-Ignore-This:

leaving the final header line unterminated (no carriage return/line feed). Then, when the client makes his own request

GET /pizza?toppings=sausage;address=victimssaddress HTTP/1.1 
Cookie: victimscookie

the two requests get glued together into:

GET /pizza?toppings=pepperoni;address=attackersaddress HTTP/1.1 
X-Ignore-This: GET /pizza?toppings=sausage;address=victimssaddress HTTP/1.1 
Cookie: victimscookie

And the server uses the victim's account to send a pizza to the attacker.
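Mechanically, the splice is nothing more than byte concatenation. Here's a minimal Python sketch (the requests and names are the illustrative pizza example from above, not a real protocol trace) of how a naive server would parse the glued stream:

```python
# Sketch of how the attacker's partial request swallows the victim's
# request line. All names and requests are illustrative.

attacker_prefix = (
    "GET /pizza?toppings=pepperoni;address=attackersaddress HTTP/1.1\r\n"
    "X-Ignore-This: "  # header deliberately left unterminated: no CRLF
)

victim_request = (
    "GET /pizza?toppings=sausage;address=victimssaddress HTTP/1.1\r\n"
    "Cookie: victimscookie\r\n"
    "\r\n"
)

# After the renegotiation the server sees one contiguous byte stream:
spliced = attacker_prefix + victim_request

# Parse it the way a simple HTTP server would: first line is the request
# line, subsequent "Name: value" lines are headers.
lines = spliced.split("\r\n")
request_line = lines[0]
headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)

# The victim's request line has been absorbed into the ignored header,
# while the victim's cookie now authenticates the attacker's request line.
print(request_line)
print(headers["X-Ignore-This"])
print(headers["Cookie"])
```

The point of `X-Ignore-This:` is precisely that the victim's entire request line lands in a header the server throws away, while the `Cookie` header survives and is applied to the attacker's request.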

You can mount a similar attack if the server uses certificate-based client authentication: it's common (well, common in the small number of cases where client certs are used at all) for the server to let the client connect and request a resource, and then, if the resource is protected, renegotiate and ask for a certificate. The attacker can exploit this by doing the initial handshake and requesting the resource himself, then letting the client perform the renegotiation, at which point the server acts as if the initial request came from the client.

It's important to note that in both the cases we just described the attacker doesn't get to see any sensitive information directly: that's just sent back encrypted to the client. However, he can exploit side effects of the exchange, e.g., to get a pizza. It may also be possible to exploit HTTP features to directly access the data. For instance, he might be able to generate a combined request that would cause the server (or perhaps somehow mirrored through the client) to send the original client request to the attacker. If the request contains the client's cookie or password, this would cause their credentials to leak. It's not clear to me if this is possible, but I'm hoping some of the Web security specialists will weigh in.

The impact on other protocols (IMAP, SIP, etc.) would depend on those protocols and I haven't seen complete analyses of these yet.

Mitigations
Eventually there will be a TLS level protocol fix (see below). However, in the meantime options are limited.

For 99+% of applications, the mitigation is very simple: the server should disable all renegotiation, which stops the attack. (OpenSSL will helpfully renegotiate automatically, which facilitates the attack even if the application isn't otherwise set up to do renegotiation.) Unfortunately, there is no similar defense on the client side. In the example I showed above, the client is unaware that renegotiation happened. Moreover, the client can't tell whether the server will refuse renegotiation (it could explicitly probe, but the attacker could of course fake a failure). So the client can't really do anything useful to protect itself.
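For a concrete sense of what "disable all renegotiation" looks like, here is a sketch using Python's ssl module. Note this is a later API than the events described here: ssl.OP_NO_RENEGOTIATION requires Python 3.7+ built against OpenSSL 1.1.0h or newer; at the time, the equivalent was patching the TLS library itself.

```python
import ssl

# Sketch: refuse all renegotiation attempts at the TLS layer.
# ssl.OP_NO_RENEGOTIATION is a later addition (Python 3.7+, OpenSSL
# 1.1.0h+); in 2009 this required a library patch instead.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_NO_RENEGOTIATION  # server now rejects renegotiation

# A real server would also load its credentials here, e.g.:
# ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths
```

With this flag set, a peer's HelloRequest/ClientHello mid-connection is refused rather than silently honored, which is exactly the behavior the mitigation calls for.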

The second drawback is that there are a small number of cases (e.g., the certificate authentication one I described above) where renegotiation actually should happen. The most practical defense on the server side is to restructure the site so that requests which require client auth are redirected to a different address or port which always requests a certificate and itself refuses renegotiation. However, this obviously requires major work on many sites.

There are a few other legitimate reasons for renegotiation, but they're mostly things one doesn't need to do. For instance, people sometimes renegotiate to force the generation of fresh keying material; this is not necessary with modern cryptographic algorithms. Another example provided by Ray is sites that support cipher suites of differing strength. Again, this isn't really necessary. Unless you're doing certificate-based client authentication, it's very unlikely you need to renegotiate, and if you are, the workaround above is the simplest.

Long-Term Defense
Despite the existence of some defenses, it seems clear that TLS should really defend against this. There's a fairly obvious defense that at least three people have independently come up with: carry information about handshake n-1 in handshake n (if n==1 you just carry an empty field). This prevents attacks where the client and server have different views of the negotiation state. The predisclosure group convened after Ray's discovery developed a draft describing this technique, which can be found at draft-rescorla-tls-renegotiate.txt; we'll be submitting it to the TLS working group shortly. Of course, it will take a very long time for the fix to make it into enough clients and servers to make a real difference, so for now we're mostly stuck with the mitigations I've described.
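To make the mechanism concrete, here is a heavily simplified sketch of the check, my paraphrase of the idea in draft-rescorla-tls-renegotiate (later published as RFC 5746), not the actual wire format:

```python
# Simplified sketch of the renegotiation-binding check: each handshake
# carries the previous handshake's verify_data, and the receiver compares
# it against what it remembers from this connection.

def check_renegotiation_info(received: bytes, saved_verify_data: bytes) -> bool:
    """On an initial handshake the field is empty; on a renegotiation it
    must echo the verify_data from handshake n-1 on *this* connection."""
    return received == saved_verify_data

# Initial handshake: no prior handshake, empty field on both sides.
assert check_renegotiation_info(b"", b"")

# Legitimate renegotiation: the peer echoes the saved verify_data.
assert check_renegotiation_info(b"\x1a\x2b\x3c", b"\x1a\x2b\x3c")

# The attack: the victim client believes it is doing an *initial*
# handshake and sends an empty field, but the server is renegotiating
# and expects real verify_data -- so the splice is detected.
assert not check_renegotiation_info(b"", b"\x1a\x2b\x3c")
```

The last case is exactly the attack scenario above: the client and server disagree about whether a previous handshake exists, and the mismatch causes the handshake to fail instead of silently gluing the two data streams together.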

The Software Situation
People have also developed patches for existing implementations. These fall into two categories:

  • Just disable renegotiation (really important for OpenSSL because of the aforementioned automatic renegotiation issue).
  • Implement the long-term fix.

As far as I know none of these have been formally released yet. Ben Laurie's OpenSSL disabling patch can be found here. I've developed a patch that performs the long-term fix protocol pieces but it doesn't disable non-fixed renegotiation so it's usable primarily for interop testing right now. That can be found here. Expect both of these to eventually appear in official OpenSSL distributions/patches. I don't know the release status of other fixes.

Big Picture Perspective
Now that we've covered how this attack works, how bad is it? It's being billed as a man-in-the-middle attack, but it's really far more limited than your classic MITM. Rather, it's a plaintext injection attack. I'm not saying that's good, but we need to put it into perspective: over the past two years, we have seen two separate issues that allow a complete TLS server impersonation: the Debian PRNG bug (see our paper here for a description of the issue [which we did not discover] and its impact) and the TLS certificate "NULL" issue. Both of these issues had much more severe impact since they allowed the attacker access to all the data sent by the client. However, it's worth noting that they differed from this issue in that they were detectable/fixable on the client side so people could in principle protect themselves. As a practical matter, however, most clients never checked the Debian keys against a blacklist, so that protection was minimal at best.

Second, it's important to remember that even simple MITM attacks with fake certificates don't seem that hard to mount. Cranor et al. report that users very frequently ignore browser warnings of bad certificates, which more or less obviates any security protections TLS provides. If the user will accept the attacker's certificate, there's no need to do anything as limited as the attack described here. Obviously, that's not information that should give you a huge amount of confidence in any form of Internet communications security, but it does provide some perspective on the marginal impact of this attack.

One other comparison point is a remote vulnerability in the server. TLS implementations have historically had and continue to have remote vulnerabilities which would allow an attacker to subvert the server. Such an attacker could read all your traffic to that server. As with this issue, you can't really detect that from the client side and need to rely on the server being upgraded. So, this issue is somewhat less serious than a remote vulnerability in a really popular server stack like OpenSSL. It's primarily unusual because it's an issue in the protocol not the software and those are quite rare.

Bottom Line
As I said at the beginning, this obviously isn't good. We'd like SSL/TLS to provide its nominal security guarantees, and it clearly doesn't. There are likely to be exploitable attack paths using this issue. It's far less clear whether they will actually see exploitation, given that these attacks aren't necessarily as powerful as other already known paths. In the not too distant future you should expect most important servers to have been fixed, at which point most of your Web server interactions will be safe (even though you can't verify it easily). In the long term, most clients and servers will adopt the protocol fix, at which point clients will be able to be sure that the servers they are interacting with are safe.

5 Comments

Actually, I'd argue that this isn't a protocol bug at all. Rather, the problem lies in the security model presented to applications in (I assume, all) TLS implementations. It seems that implementations (and hence the applications that use them) are built to assume that the client principal never changes within a single request. (And I'd bet that more than a few implementations and applications assume an even more coarse-grained client-principal-to-client-traffic association than that.)

This is a specific instance of a TOC/TOU (Time-of-check/time-of-use) vulnerability, and hence has potential implications beyond the third-party attacker case outlined above. Consider, for example, a server that relies on an initial TLS-layer authentication/authorization to narrow down the set of users permitted access, and an application-layer check of the client identity to further narrow down access to specific resources.

Now, the server administrator might assume that the resulting configuration restricts access to any particular server resource to the intersection of two sets of identities: the set that passes the TLS-layer check, and the set that passes the application-layer check. But if the TLS layer simply returns the current client identity to the application on request, and the application simply assumes that identity to have held across the entire request--even though the TLS implementation supports renegotiation--then someone who can pass the first TLS authorization can collude with someone who can pass the application-layer one, to gain access to a resource that neither should have access to. All they have to do is authenticate first as the user who can pass the first check, then renegotiate as the other user midway through the request.

I would therefore argue that the proposed fix to TLS, while useful in protecting legacy applications from a particular attack, is insufficient by itself. Ultimately, applications need to be more "renegotiation-aware"--that is, either capable of recognizing and dealing correctly with the "true principal" represented by a renegotiated session, or else capable of spotting and rejecting renegotiated sessions in cases where there is a risk of TOC/TOU vulnerability.

For GnuTLS the long-term solution appears to have been implemented, see:

http://permalink.gmane.org/gmane.comp.encryption.gpg.gnutls.devel/3944

/Simon

It seems to me, at least from an engineering viewpoint, that this attack is relatively easy to detect on the server side, as long as the application has a way to ask the SSL layer at which point in the character stream a Renegotiate happened. It's probably extremely unlikely for a renegotiate to happen halfway through a header line, and even less likely for the rest of that header line to parse like a valid HTTP request.

As an engineer, I'd probably argue that this combination is unlikely enough that it warrants a "400 please try that again" error. The number of false positives would probably be close to zero.
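[A minimal sketch of that heuristic in Python; the byte-offset API is hypothetical, since the SSL library would have to expose where in the plaintext stream the renegotiation occurred:]

```python
# Sketch of the mid-line detection heuristic. The renegotiation offset is
# assumed to come from a (hypothetical) hook in the TLS library.

def renegotiation_looks_suspicious(request: bytes, reneg_offset: int) -> bool:
    """True if the renegotiation happened mid-line, i.e. not at a CRLF
    boundary -- the pattern a spliced-prefix attack would produce."""
    if reneg_offset == 0 or reneg_offset >= len(request):
        return False  # before or after the whole request: nothing spliced
    return not request[:reneg_offset].endswith(b"\r\n")

benign = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
# Renegotiation exactly at a line boundary: plausible, allow it.
assert not renegotiation_looks_suspicious(benign, benign.index(b"Host"))

# Renegotiation in the middle of the ignored header: return a 400.
spliced = b"GET /a HTTP/1.1\r\nX-Ignore-This: GET /b HTTP/1.1\r\n\r\n"
assert renegotiation_looks_suspicious(spliced, spliced.index(b"GET /b"))
```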

An ugly client-side workaround might be to never send "authenticating" information on the first request, and make that first request innocent (like "HEAD /"), but of random length. Using connection keep-alive, the next request (after verifying the response to the first request is correct!) in the same channel would be the "real" request, including cookies, authentication headers, SSL renegotiation for client certs, etc.

The random length of the HEAD request should protect against a prepended POST with a fixed length. However, that's also the weak part of this workaround: it's raceable and doesn't offer a lot of security. You wouldn't want more than a rand(2^12) amount of randomness probably, and depending on the application, racing that amount of randomness might be worth it for an attacker, given enough visitors.

However, this client-side "fix" would increase the load on pretty much every webserver.

Don't get me wrong: this would be a stop-gap solution. But at least the server-side fix looks plausible enough to me to be workable at least until a "real" (protocol level) fix has been rolled out. If you can't completely disable renegotiations, that is.

The noted pizza attack and the "get someone to submit a request, and authenticate it with their client cert" attack only work if the targeted webserver is vulnerable to XSRF attacks anyway, in which case the attacker could mount the same attack by getting the client to render a webpage with an IFRAME sourced from the "buy-a-pizza" URL.

That said, there are certainly other attacks this SSL issue allows; they're just more subtle and perhaps more website-specific than the examples given.

Dan Simon: "Actually, I'd argue that this isn't a protocol bug at all. Rather, the problem lies in the security model presented to applications in (I assume, all) TLS implementations."

That is certainly one way to view it: the TLS spec is, for the most part, internally consistent. However, another way to view it is that some huge proportion of application code that was (relatively) secure under SSLv2 silently developed this vulnerability when SSLv3 became enabled.

Tim Dierks: "attack[s] only work if the targeted webserver is vulnerable to XSRF attacks anyway"

I'm not so sure about that. For example, the attacker can change a browser's "GET /logo.png" to an arbitrary POST of his own and keep the cookies. Right there, that invalidates a lot of assumptions made in the XSRF realm.

But mainly I wouldn't count on it because Steve and I weren't really focusing on the art of HTTP abuse. Others will be though, just imagine what MITM might do with the CONNECT and TRACE verbs.

We demonstrated a few attacks on HTTPS because it makes a good example. We stopped before going much farther with it because we feel the correct fix lies at the TLS protocol level, and the HTTP mitigations are an attractive dead-end.
