Transitioning to universal HTTPS

Lauren Weinstein points out that the assclowns at Rogers are prototyping a system for splicing their own messages into other people's Web pages, like this:

Lauren argues that it's time to abandon unprotected web surfing:

That first, key action is to begin phasing out, as rapidly as possible and in as many application contexts as practicable, the use of unencrypted http: Web communications, and move rapidly to the routine use of TLS/https: whenever possible.

This is of course but an initial step in a rather long path toward pervasive Internet encryption, but it would be an immensely important one.

TLS is not a total panacea by any means. In the absence of prearranged user security certificates, TLS is still vulnerable to man-in-the-middle attacks, but any entity attempting to exploit that approach would likely find themselves in significant legal difficulty in short order.

Also, while TLS/https: would normally deprive ISPs -- or other intermediaries along the communications path -- of the ability to observe or modify data traffic contents, various transactional information, such as which Web sites subscribers were visiting (or at least which IP addresses), would still be available to ISPs (in the absence of encrypted proxy systems).

Another potential issue is the additional computational cost associated with setting up and maintaining TLS communication paths, which could become significant for busy server sites. However, thanks to system speed improvements and a choice of encryption algorithms, the additional overhead, while not trivial, is likely to at least be manageable.

Weinstein raises a number of issues here, namely:

  • Vulnerability to MITM attacks.
  • The effect of TLS on deep packet inspection engines.
  • The computational cost of TLS.

In this post, I want to address the second and third issues; MITM attacks deserve their own post.

First, we need to be clear on what we're trying to do. The property the communicating parties (the client and server) want to ensure isn't that third parties can't read the traffic going by (the technical term here is confidentiality) but rather that they can't modify it (the technical terms here are data origin authentication (knowing who sent the message) and message integrity (knowing that it hasn't been modified)). Obviously, there's no way to stop your ISP from sending you any data of its choice, but you can arrange to detect that and reject the data.

The general way this is done is to have the server compute what's called a message integrity check (MIC) value over the data. The server sends the MIC, along with the data, to the client. The client checks the MIC (I'm being deliberately vague about how this works), and if it isn't correct, the client knows that the data has been tampered with and discards it. The way this works in TLS is that the client and the server do an initial handshake to exchange a symmetric key. This key is then used to key a message authentication code (MAC)1 function which is used to protect individual data records (up to 16K each).
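The compute-and-check pattern can be sketched in a few lines. This is a toy illustration using Python's standard-library HMAC, not TLS's actual record protection; the key and page contents are made up:

```python
import hashlib
import hmac

# Shared symmetric key, established out of band (in TLS, via the handshake).
key = b"negotiated-session-key"

def protect(data: bytes) -> tuple[bytes, bytes]:
    """Server side: compute a MAC tag over the record and send both."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data, tag

def verify(data: bytes, tag: bytes) -> bool:
    """Client side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

page, tag = protect(b"<html>the real page</html>")
assert verify(page, tag)                                  # untouched record accepted
assert not verify(b"<html>page + injected ad</html>", tag)  # modification detected
```

An intermediary who doesn't know the key can't produce a tag that will verify, so any spliced-in content gets rejected.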

So, going back to Issue 2, TLS actually provides confidentiality and message integrity/data origin authentication separately. In particular, there are modes which provide integrity but not confidentiality (confidentiality without integrity is only safe in some special cases, so those modes aren't provided)—the so-called NULL modes. So, it's quite possible to arrange matters in such a way that intermediaries can inspect the traffic but not modify it. Of course, whether this is desirable is a separate issue, but I think it's pretty clear that many enterprises, at least, want to run various kinds of DPI engines on the traffic going by. Indeed, they want this so much that they deploy solutions to intercept encrypted traffic, so presumably they would be pretty unhappy if they couldn't see any Web traffic.

There are at least two major difficulties with providing a widely used integrity-only version of HTTPS. The first is that clients don't generally offer to negotiate it, at least in part because it's easier to just have users expect that HTTPS = the lock icon = security than to try to explain the whole thing about integrity vs. confidentiality. This brings us to the second issue, which is how we provide a UI which gives users the right understanding of what's going on. More on the UI issue in a subsequent post, but it should be clear that from a protocol perspective this can be made to work.

Moving on to the performance issue: HTTP over TLS is a lot more expensive than raw HTTP [CPD02]. So, TLS-izing everything involves taking a pretty serious performance hit. The basic issue is that each connection between the client and the server requires establishing a new cryptographic key to use with the MAC. This setup is expensive, but it's a more or less fundamental requirement of using a MAC because the same key is used to verify the MAC as to create it. So, in order to stop Alice from forging traffic to Bob from the server, Alice and Bob need to share different keys with the server. The situation can be improved to some extent by aggressive session reuse, thus amortizing the cost of the really expensive public key operations. Client-side session caching/TLS tickets can help here to some extent as well, but the bottom line is that (1) there's some per-connection cost and (2) it breaks proxy caches, which obviously puts even more load on the server.
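The reason each client needs its own key falls directly out of the MAC's symmetry. A toy sketch (stdlib HMAC again; the keys and message are illustrative): if Alice and Bob shared a single MAC key with the server, Alice could mint a tag that Bob would accept as the server's.

```python
import hashlib
import hmac

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Hypothetical broken design: one key shared by the server and all clients.
shared_key = b"one-key-for-everyone"

# Alice, who knows the shared key, forges a "server" message aimed at Bob...
forged = b"server says: wire money to Alice"
forged_tag = mac(shared_key, forged)

# ...and Bob's check passes, because the verification key IS the creation key.
assert hmac.compare_digest(mac(shared_key, forged), forged_tag)

# With a distinct per-client key, Alice's forged tag fails Bob's check.
bob_key = b"key-known-only-to-bob-and-server"
assert not hmac.compare_digest(mac(bob_key, forged), forged_tag)
```

Hence the per-connection key setup, and the per-connection cost that comes with it.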

One approach that doesn't have this performance drawback is to have the server authenticate with a digital signature. Because different keys are used to sign and verify, a single signed message can be replayed to multiple recipients. This reduces the load on the server, as well as (if the protocols are constructed correctly) letting proxy caches keep working. Obviously, this only works well when the pages the server is serving are exactly identical. If each page you're generating is different, this technique doesn't buy you much (though note that even dynamic pages tend to incorporate static components such as inline images). Static signatures of this type were present in some of the early Web security protocols (e.g., S-HTTP), but SSL/TLS is a totally different kind of design and this sort of functionality would be complicated to retrofit into it at this point.

1. Yes, this whole MIC/MAC thing is incredibly confusing. It's even better when you're doing layer 2 communication security and MAC means the Ethernet MAC.


It occurs to me that you could maintain a registry of all the IP address blocks of ISPs that use these products. Web sites like Google could subscribe to this registry and redirect users from matching addresses to an HTTPS URL.

That way, you only incur the cost of the encryption when you need it. Moreover, if this countermeasure convinces a guy like Rogers to stop messing with people's traffic, then you can take him off the registry and stop encrypting traffic to his customers.
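The registry lookup itself would be cheap. A minimal sketch of the server-side check, using Python's standard-library ipaddress module (the registry contents, names, and address blocks here are all made up; the real blocks would come from the subscribed registry feed):

```python
import ipaddress

# Hypothetical registry of address blocks belonging to traffic-tampering ISPs.
TAMPERING_ISP_BLOCKS = [
    ipaddress.ip_network("192.0.2.0/24"),      # documentation range, standing in for an ISP
    ipaddress.ip_network("198.51.100.0/24"),
]

def needs_https(client_ip: str) -> bool:
    """Redirect this client to an https:// URL only if its ISP is on the registry."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TAMPERING_ISP_BLOCKS)

assert needs_https("192.0.2.77")       # subscriber of a listed ISP: redirect to HTTPS
assert not needs_https("203.0.113.5")  # everyone else: plain HTTP is fine
```

Delisting an ISP that stops tampering is then just removing its blocks from the registry.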

A: If they don't disable Javascript, you can detect such tampering if the server uses a tripwire (it's not robust, but it's an arms race vastly in favor of detection of modification).

See (we have a better site name, but I always just remember happyblimp. :) )

B: The big problem with HTTPS is not the computational expense (you can always stick an RSA accelerator card in the web servers; they're cheap enough if you're serving enough HTTPS), but the latency expense.

With several handshakes for connection initiation (and I think you still have a few even with a cached credential), the latency for connection startup is a killer. This is why Gmail doesn't use https by default: it's the network time, not the compute time. (At least that's according to Steve Bellovin, and the explanation makes sense.)

This also says that signatures can work even on dynamic content, IF the web server is willing to use an accelerator card.
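The latency point is easy to put in rough numbers. A back-of-the-envelope model, using the classic round-trip counts (1 RTT for the TCP handshake, 2 more for a full TLS handshake, 1 for a resumed session) and an illustrative 50 ms round-trip time:

```python
RTT_MS = 50  # illustrative cross-country round-trip time

def startup_ms(tls_round_trips: int) -> int:
    """Time until the first HTTP response arrives, in milliseconds."""
    tcp_handshake = 1   # TCP three-way handshake
    http_exchange = 1   # the HTTP request/response itself
    return (tcp_handshake + tls_round_trips + http_exchange) * RTT_MS

plain_http  = startup_ms(0)  # no TLS
full_tls    = startup_ms(2)  # full handshake
resumed_tls = startup_ms(1)  # abbreviated (resumed) handshake

assert full_tls == 2 * plain_http   # full handshake doubles startup latency here
assert plain_http < resumed_tls < full_tls
```

No amount of server-side compute power recovers those round trips; only fewer handshake flights (or session resumption) does.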

C: SSL/TLS doesn't give integrity at the TCP layer, so an "attacker" can still do packet injection. That's not necessarily horrible for HTTP(S), but it's a big problem for BitTorrent, as even with encryption, traffic analysis can detect it and the ISP can disrupt it.

Confidentiality can be useful if something along the path is employing selective filtering. For example, I'm here in Austin for an EAC meeting and the hotel seems to have a filter that blocks web connections to certain destinations (that seem random to me). If I use the VPN, they don't know what I'm doing and the block is effectively defeated (and since this is a business hotel, they can't just stop allowing these types of connections).

So, confidentiality could be a useful strategy if an arms race ensues that encompasses filtering (DPI or not).

PS: I wore my CA TTBR shirt under a blazer today and a number of people wanted one... I even got compliments from vendors and election officials.

I agree that moving everything to https would be a good first step. It's tricky, though. For example, site owners need to remember to keep their certificates up to date:

And, to avoid user confusion, the use of commonly accepted root authorities is pretty much necessary:

Of course, those cost money -- so most people just use self-signed certs:

And, to top it off, there's the persistent issue of making sure that the whole setup is configured properly, like getting host names to match the names in the actual certs being used:

I was unclear in the introduction to my previous response; let me revise: "I agree with Weinstein that moving everything to https..."
