COMSEC: October 2009 Archives

 

October 17, 2009

Eugene Kaspersky argues that one should need to have a "passport" to use the Internet (via /.):
That's it? What's wrong with the design of the Internet?

There's anonymity. Everyone should and must have an identification, or Internet passport. The Internet was designed not for public use, but for American scientists and the U.S. military. That was just a limited group of people--hundreds, or maybe thousands. Then it was introduced to the public and it was wrong to introduce it in the same way.

I'd like to change the design of the Internet by introducing regulation--Internet passports, Internet police and international agreement--about following Internet standards. And if some countries don't agree with or don't pay attention to the agreement, just cut them off.

Isn't it enough to have everyone register with ISPs (Internet service providers) and have IP addresses made known?

You're not sure who exactly has the connection. I can have a Wi-Fi connection and connect using a password, or give away the password for someone else to use that connection. Or the connection could be hacked. Even if the IP address is traced to an Internet café, they will not know who the customer or person is behind the attacks. Think about cars--you have plates on the cars, but you also have driver licenses.

Unfortunately, Kaspersky didn't elaborate on how this would actually work, which is too bad because it's not really that clear to me how one would develop such a system. Let's stipulate for the moment that we had some mechanism for giving everyone who was allowed to access the Internet some sort of credential (the natural thing here would be an X.509 certificate, but of course you could imagine any number of other options). All you would need to accomplish this would be to somehow positively identify every person on the planet, get them to generate an asymmetric key pair, issue them a certificate, and give them some way to move it around between all their Internet-connected devices (and it's not at all unusual to have both a PC and a smartphone), as well as find some way for them to use it in Internet cafes, libraries, etc. And of course, having the credential is the easy part: we still need to find some way to actually verify it.
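Just to make the mechanics concrete, here's a rough sketch of what issuing one of these credentials would involve, written in Python against a recent version of the pyca/cryptography library (the authority name, user name, key sizes, and one-year validity period are all made up for illustration, not a real proposal):

    # Hypothetical sketch of an "Internet passport" as an X.509 certificate.
    # All names and lifetimes below are illustrative.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The passport authority's signing key (in practice this would live in an HSM).
    authority_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    authority_name = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, u"Example Passport Authority")])

    # The user generates an asymmetric key pair...
    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    user_name = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, u"Jane Q. Public")])

    # ...and the authority issues a certificate binding the public key to the identity.
    now = datetime.datetime.now(datetime.timezone.utc)
    passport = (
        x509.CertificateBuilder()
        .subject_name(user_name)
        .issuer_name(authority_name)
        .public_key(user_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(authority_key, hashes.SHA256()))

And that's the trivial part: the user still has to carry user_key around to every device and network they ever use, and everyone else has to be able to check the result.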

At a high level, there are three places we could imagine verifying someone's credentials: (1) at the access point, (2) in the network core, or (3) at the other endpoint. None of these is particularly satisfactory:

Access Point
The naive place to verify people's identity is at the point where they connect to the Internet. Of course, in the vast majority of cases (home service, mobile, etc.), no "passport" is required, because the user has a subscriber relationship with the service provider, so as long as the service provider keeps adequate records it's relatively straightforward to track down who was using any given IP address at a given time. This leaves us with a variety of "open access" type situations where someone has a network that anyone can use, such as libraries, conferences, and people's open home networks. One could imagine requiring that those people program their network access elements to authenticate anyone who wanted to use them, but since this would require reprogramming an untold number of Linksys access points which are currently running whatever firmware they were loaded with when they were manufactured in 2006, this doesn't sound like a very practical proposition. Even if one did somehow manage to arrange for a mass upgrade, people who run open APs don't have a huge amount of incentive to keep them secure, so it wouldn't be long before there was a large population of APs which couldn't be trusted to properly report who had used them, and we'd be back to where we started.

Network Core
Moving outward from the access point, one could imagine doing authentication somewhere in the network core (which is sort of what Kaspersky's comments imply). Unfortunately, this would involve some pretty major changes to the Internet architecture. Remember that as far as the core is concerned, there are just a bunch of packets flowing from node to node and being switched as fast as possible by the core routers, which don't have any real relationship with the endpoints. Unless we're going to change that (pretty much out of the question no matter how ambitious you are), about all that's left is having the endpoints digitally sign their packets with their credentials. Those signatures would then have to be verified at something approaching wire speed (if you don't verify them in real time, then people will just send bogus signatures; if you only verify a fraction, then you need some sort of punishment scheme because otherwise you just reduce bogus traffic by that fraction). And of course, the signatures would create massive packet bloat (see the back-of-the-envelope numbers below). So, this doesn't sound like a very practical retrofit to the existing Internet.
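To put a rough number on the bloat point (purely back-of-the-envelope; the packet sizes are just representative examples): an RSA-2048 signature is 256 bytes, which is noticeable on a full-sized packet and completely dominates a small one.

    # Back-of-the-envelope cost of a per-packet signature (sizes are illustrative).
    SIG_BYTES = 256  # RSA-2048 signature

    for payload in (40, 576, 1500):  # bare TCP ACK, classic minimum MTU, Ethernet MTU
        print(f"{payload:5d}-byte packet: +{SIG_BYTES} bytes "
              f"({SIG_BYTES / payload:.0%} overhead)")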

Other Endpoint
This leaves us with verifying the user's identity at the other endpoint, which is probably the most practical option, given that we already have technology for this in the form of IPsec, SSL/TLS, etc. Again, we have the retrofit problem, and also a huge incentive issue; most sites are primarily interested in having a lot of visitors and don't much care who they are, so they're not really incentivized to verify user identities, especially during the (extended) transition period when requiring authentication would mean rejecting traffic from legitimate visitors. Still, it's at least technically possible, though it's not clear to me why one would want to require this form of authentication through some regulatory process: the major entity which is hurt by being unable to verify whoever is sending it traffic is, after all, the other endpoint, so if it doesn't care to authenticate its peers, why would we want to require it?
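For what it's worth, the machinery for this mostly exists already. Here's roughly what "verify at the other endpoint" looks like with stock TLS client authentication, sketched in Python (the port and file names are placeholders; this only illustrates the mechanism): the server simply refuses to talk to clients that can't present a certificate chaining back to a trusted issuer.

    # Sketch: a server that demands TLS client authentication.
    # Port and file names are placeholders.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
    ctx.load_verify_locations(cafile="passport-authority.pem")  # trusted issuer(s)
    ctx.verify_mode = ssl.CERT_REQUIRED  # no client certificate, no connection

    with socket.create_server(("", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()  # handshake fails if the client can't authenticate
            print("authenticated client:", conn.getpeercert().get("subject"))
            conn.close()

The hard part isn't the code; it's getting every site on the Internet to turn this on and every user to have a certificate to present.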

 

Unfortunately, even the above issues (which aren't very promising) aren't the real obstacle. Remember that we're going to require everyone who wants to access the Internet to have one of these credentials. That includes your grandmother, who hasn't ever run Windows Update and has over half of her hard drive taken up with assorted varieties of malware. It's not going to be at all difficult for attackers to get their hands on an arbitrary number of "Internet passports" belonging to other people (remember that attackers don't have any trouble getting credit card numbers, which people actually do have some interest in protecting).

The bottom line, then, is that unless I'm missing something, it's not clear to me that a system which fits Kaspersky's description is likely to be particularly useful.

 

October 6, 2009

Richard Barnes pointed me to the joint ICANN/VeriSign presentation from RIPE 59 (today in Lisbon) on their plans for rolling out signing of the root zone. For those who aren't up on DNSSEC, each TLD (.com, .net, .us, etc.) will sign the domains under it, but the design also calls for the information about each of those TLDs to be signed at the root of the tree. There's some question about how important this really is from a technical perspective, but the DNSSEC community seems convinced (wrongly, in my opinion) that it's essential, so it's socially important even if not technically important.

Anyway, Richard pointed out something interesting to me: they plan to roll over the root Zone Signing Key (ZSK) four times a year (see slide 19), which doesn't really make sense to me. Actually, the whole key rollover scheme doesn't make much sense to me.

It might be helpful to start with a little background. The way things are going to work is this: ICANN is going to have a long-lived (2-5 years) Key Signing Key (KSK). The public half of this key will be built into people's resolvers. But the KSK will not be used to directly sign any user data. Rather, it will be used to sign a short-lived (3 months) ZSK [held by VeriSign] which will be used to sign the data. Because the relying party (i.e., your computer) knows the KSK, it can verify any new ZSK without having to get it directly.
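To make the structure concrete, here's a toy model of the split, sketched with the pyca/cryptography library (the record and the formats are invented and bear no resemblance to actual DNSSEC wire formats; only the trust relationships are the point):

    # Toy model of the KSK/ZSK split: the KSK vouches for the ZSK,
    # and the ZSK vouches for the zone data. Formats here are made up.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def sign(priv, data):
        return priv.sign(data, padding.PKCS1v15(), hashes.SHA256())

    def verify(pub, sig, data):
        try:
            pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
            return True
        except Exception:
            return False

    # Long-lived KSK: its public half is what gets baked into resolvers.
    ksk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # Short-lived ZSK: the key that actually signs zone data.
    zsk = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    zsk_pub = zsk.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

    # The KSK signs the current ZSK; the ZSK signs the data.
    zsk_sig = sign(ksk, zsk_pub)
    record = b"example. IN A 192.0.2.1"
    record_sig = sign(zsk, record)

    # A resolver that only knows the KSK can still validate the record,
    # even after a ZSK rollover, without fetching anything out of band.
    assert verify(ksk.public_key(), zsk_sig, zsk_pub)
    assert verify(zsk.public_key(), record_sig, record)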

Why are they doing this? As far as I can tell the rationale is as follows:

  • The security of RSA key pairs is directly connected to key length, which is also the length of the signature that the key pair produces.
  • Space in DNS packets is limited.

The combination of these two factors means that if you want to use longer (higher security) key pairs to sign zone data, you start running into size limitations in the packet. That's perfectly understandable, but why does having two keys help? The idea here is that you have a big (2048-bit) KSK and a short (1024-bit) ZSK. But because the ZSK is changed frequently, you don't need as strong a key and can still get good security. I wasn't able to find a good description of this in the DNSSEC documents, but Wikipedia came through:

Keys in DNSKEY records can be used for two different things and typically different DNSKEY records are used for each. First, there are Key Signing Keys (KSK) which are used to sign other DNSKEY records and the DS records. Second, there are Zone Signing Keys (ZSK) which are used to sign RRSIG and NSEC/NSEC3 records. Since the ZSKs are under complete control and use by one particular DNS zone, they can be switched more easily and more often. As a result, ZSKs can be much shorter than KSKs and still offer the same level of protection, but reducing the size of the RRSIG/NSEC/NSEC3 records.

The only problem with this reasoning is that it's almost completely wrong, as can be seen by doing some simple calculations. Let's say we have a key with a lifespan of one year that requires C computations to break. An attacker buys enough hardware to do C computations in two months and is then able to use the key to forge signatures for the next 10 months (I'll try to write about keys used for confidentiality at some later point.) If we think about a series of keys, they will be vulnerable 10/12 of the time. Now, let's say that we halve the lifespan of the key to 6 months, which shortens the window of vulnerability to 4 months per key, or 2/3 of the time. But if the attacker just buys 2C of compute power, he can break each key in 1 month, at which point we're back to having the keys vulnerable 10/12 of the time. If we generalize this computation, we can see that if we increase the frequency of key changes by a factor of X, we only increase the attacker's required workload by a factor of X.
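Here's the same arithmetic in a few lines of Python, in case the generalization isn't obvious (the months are just the numbers from the example above):

    # Fraction of a key's lifetime during which the attacker can forge signatures,
    # given how long the break takes with the hardware they bought.
    def vulnerable_fraction(lifetime_months, break_months):
        return max(lifetime_months - break_months, 0) / lifetime_months

    print(vulnerable_fraction(12, 2))  # yearly keys, C of compute    -> 10/12
    print(vulnerable_fraction(6, 2))   # 6-month keys, same compute   -> 2/3
    print(vulnerable_fraction(6, 1))   # 6-month keys, 2C of compute  -> 10/12 again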

More concretely, if we originally intended to change keys every 4 years and instead we change them every quarter, this is a factor of 16 (4 bits) improvement in security. Opinions vary about the strength of asymmetric keys, but if we assume that 1024-bit RSA keys have a strength of about 72 bits [*], then this increases the effective strength to around 76 bits, which is somewhere in the neighborhood of 1100-bit RSA keys: a pretty negligible security advantage and nowhere near the strength of a 2048-bit RSA key (> 100 bits of security). It's certainly not correct that this offers the "same level of protection".
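Or, in code, the entire benefit of the quarterly rollover under those assumptions (the 72-bit figure is the one assumed above, not a precise estimate):

    import math

    BASE_BITS = 72           # assumed strength of a 1024-bit RSA key [*]
    rollover_speedup = 16    # quarterly rollover instead of every 4 years
    print(BASE_BITS + math.log2(rollover_speedup))  # 76.0 bits, vs. >100 for 2048-bit RSA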

The more general lesson here is that changing keys rapidly is nearly useless as a method of preventing analytic attacks. It's almost never practical to change keys frequently enough to have a significant impact on the attacker's required level of effort. If you're that close to the edge of a successful attack, what you need is a stronger key, not to change your weak keys more frequently. In the specific case of DNSSEC, just expanding the size of the packet by 10 bytes or so would have as much if not more security impact at a far lower system complexity cost.