Recently in DNS Category

 

January 21, 2012

You've of course heard by now that much of the Internet community thinks that SOPA and PIPA are bad, which is why on January 18, Wikipedia shut itself down, Google had a black bar over their logo, etc. This opinion is shared by much of the Internet technical community, and in particular much has been made of the argument made by Crocker et al. that DNSSEC and PIPA are incompatible. A number of the authors of the statement linked above are friends of mine, and I agree with much of what they write in it, but I don't find this particular line of argument that convincing.

Background
As background, DNS has two kinds of resolvers:

  • Authoritative resolvers which host the records for a given domain.
  • Recursive resolvers which are used by end-users for name mapping. Typically they also serve as a cache.

A typical configuration is for end-user machines to use DHCP to get their network configuration data, including IP address and the DNS recursive resolvers to use. Whenever your machine joins a new network, it gets whatever resolver that network is configured for, which is frequently whatever resolver is provided by your ISP. One of the requirements of some iterations of PIPA and SOPA has been that recursive resolvers would have to block resolution of domains designated as bad. Here's the relevant text from PIPA:

(i) IN GENERAL- An operator of a nonauthoritative domain name system server shall take the least burdensome technically feasible and reasonable measures designed to prevent the domain name described in the order from resolving to that domain name's Internet protocol address, except that--
(I) such operator shall not be required--
(aa) other than as directed under this subparagraph, to modify its network, software, systems, or facilities;
(bb) to take any measures with respect to domain name lookups not performed by its own domain name server or domain name system servers located outside the United States; or
(cc) to continue to prevent access to a domain name to which access has been effectively disabled by other means; and ...
(ii) TEXT OF NOTICE.-The Attorney General shall prescribe the text of the notice displayed to users or customers of an operator taking an action pursuant to this subparagraph. Such text shall specify that the action is being taken pursuant to a court order obtained by the Attorney General.

This text has been widely interpreted as requiring operators of recursive resolvers to do one of two things:

  • Simply cause the name resolution operation to fail.
  • Redirect the name resolution to the notice specified in (ii).

The question then becomes how one might implement these.

Technical Implementation Mechanisms
Obviously if you can redirect the name, you can cause the resolution to fail by returning a bogus address, so let's look at the redirection case first. Crocker et al. argue that DNSSEC is designed to secure DNS data end-to-end to the user's computer. Thus, any element in the middle which modifies the DNS records to redirect traffic to a specific location will break the signature. Technically, this is absolutely correct. However, it is mitigated by two considerations.

First, the vast majority of client software doesn't do DNSSEC resolution. Instead, if you're resolving some DNSSEC-signed name and the signature is being validated at all, it's most likely being validated by some DNSSEC-aware recursive resolver, like the ones Comcast has recently deployed. Such a resolver can easily modify whatever results it is returning and that change will be undetectable to the vast majority of client software (i.e., to any non-DNSSEC software).1 So, at present, a rewriting requirement looks pretty plausible.

Crocker et al. would no doubt tell you that this is a transitional stage and that eventually we'll have end-to-end DNSSEC, so it's a mistake to legislate new requirements that are incompatible with that. If a lot of endpoints start doing DNSSEC validation, then ISPs can't rewrite undetectably. They can still make names fail to resolve, though, via a variety of mechanisms. About this, Crocker et al. write:

Even DNS filtering that did not contemplate redirection would pose security challenges. The only possible DNSSEC-compliant response to a query for a domain that has been ordered to be filtered is for the lookup to fail. It cannot provide a false response pointing to another resource or indicate that the domain does not exist. From an operational standpoint, a resolution failure from a nameserver subject to a court order and from a hacked nameserver would be indistinguishable. Users running secure applications have a need to distinguish between policy-based failures and failures caused, for example, by the presence of an attack or a hostile network, or else downgrade attacks would likely be prolific.[12]

...

12. If two or more levels of security exist in a system, an attacker will have the ability to force a "downgrade" move from a more secure system function or capability to a less secure function by making it appear as though some party in the transaction doesn't support the higher level of security. Forcing failure of DNSSEC requests is one way to effect this exploit, if the attacked system will then accept forged insecure DNS responses. To prevent downgrade attempts, systems must be able to distinguish between legitimate failure and malicious failure.

I sort of agree with the first part of this, but I don't really agree with the footnote. Much of the problem is that it's generally easy for network-based attackers to generate situations that simulate legitimate errors and/or misconfiguration. Cryptographic authentication actually makes this worse, since there are so many ways to screw up cryptographic protocols. Consider the case where the attacker overwrites the response with a random signature. Naturally the signature is unverifiable, in which case the resolver's only response is to reject the records, as prescribed by the DNSSEC standards. At this point you have effectively blocked resolution of the name. It's true that the resolver knows that something is wrong (though it can't distinguish between attack and misconfiguration), but so what? DNSSEC isn't designed to allow name resolution in the face of DoS attack by in-band active attackers. Recursive resolvers aren't precisely in-band, of course, but the ISP as a whole is in-band, which is one reason people have talked about ISP-level DNS filtering for all traffic, not just filtering at recursive resolvers.

Note that I'm not trying to say here that I think that SOPA and PIPA are good ideas, or that there aren't plenty of techniques for people to use to evade them. I just don't think that it's really the case that you can't simultaneously have DNSSEC and network-based DNS filtering.

 

1. Technical note: As I understand it, a client resolver that wants to validate signatures itself needs to send the DO flag (to get the recursive resolver to return the DNSSEC records) and the CD flag (to suppress validation by the recursive resolver). This means that the recursive resolver can tell when it's safe to rewrite the response without being detected. If DO isn't set, then the client won't be checking signatures. If CD isn't set, then the recursive resolver can claim that the name was unvalidatable and generate whatever error it would have generated in that case (Comcast's deployment seems to generate SERVFAIL for at least some types of misconfiguration).
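
For concreteness, here's a minimal sketch of how a validating stub might set those flags. This is my illustration, not something from the post: it uses the dnspython library, and the resolver address is just a placeholder for whatever your network hands you.

import dns.flags
import dns.message
import dns.query

# Ask for DNSSEC records (sets the EDNS DO bit) and tell the recursive
# resolver not to validate on our behalf (the CD, "checking disabled", bit).
query = dns.message.make_query("www.educatedguesswork.org", "A",
                               want_dnssec=True)   # DO bit
query.flags |= dns.flags.CD                        # CD bit

# Placeholder resolver address; substitute the one DHCP gave you.
response = dns.query.udp(query, "8.8.8.8", timeout=5)
print(dns.flags.to_text(response.flags))
for rrset in response.answer:
    print(rrset)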

 

December 18, 2011

The first step in most Internet communications is name resolution: mapping a text-based hostname (e.g., www.educatedguesswork.org) to a numeric IP address (e.g., 69.163.249.211). This mapping is generally done via the Domain Name System (DNS), a global distributed database. The thing you need to know about the security of the DNS is that it doesn't have much: records are transmitted without any cryptographic protection, either for confidentiality or integrity. The official IETF security mechanism, DNSSEC, is based on digital signatures and so offers integrity, but not confidentiality, and in any case has seen extremely limited deployment. Recently, OpenDNS rolled out DNSCrypt, which provides both encrypted and authenticated communications between your machine and a DNSCrypt-enabled resolver such as the one operated by OpenDNS. DNSCrypt is based on DJB's DNSCurve and I've talked about comparisons between DNSSEC and DNSCurve before, but what's interesting here is that OpenDNS is really pushing the confidentiality angle:

In the same way the SSL turns HTTP web traffic into HTTPS encrypted Web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks. It doesn't require any changes to domain names or how they work, it simply provides a method for securely encrypting communication between our customers and our DNS servers in our data centers. We know that claims alone don't work in the security world, however, so we've opened up the source to our DNSCrypt code base and it's available on GitHub.

DNSCrypt has the potential to be the most impactful advancement in Internet security since SSL, significantly improving every single Internet user's online security and privacy.

Unfortunately, I don't think this argument really holds up under examination. Remember that DNS is mostly used to map names to IP addresses. Once you have the IP address, you need to actually do something with it, and generally that something is to connect to the IP address in question, which tends to leak a lot of the information you encrypted.

Consider the (target) case where we have DNSCrypt between your local stub resolver and some recursive resolver somewhere on the Internet. The class of attackers this protects against is those which have access to traffic on the wire between you and the resolver. Now, if I type http://www.educatedguesswork.org/ into my browser, what happens is that the browser tries to resolve www.educatedguesswork.org, and what the encryption principally conceals from the attacker is (1) the hostname I am querying for and (2) the IP address(es) that were returned. The next thing that happens, however, is that my browser forms a TCP connection to the target host and sends something like this:

GET / HTTP/1.1
Host: www.educatedguesswork.org
Connection: keep-alive
Cache-Control: max-age=0
...

Obviously, each IP packet contains the IP address of the target and the Host header contains the target host name, so any attacker on the wire learns both. And as this information is generally sent over the same access network as the DNS request, the attacker learns all the information they would have had if they had been able to observe my DNS query. [Technical note: when Tor is configured properly, DNS requests are routed over Tor, rather than over the local network. If that's not true, you have some rather more serious problems to worry about than DNS confidentiality.]

"You idiot," I can hear you saying, "if you wanted confidentiality you should have used SSL/TLS." That's true, of course, but SSL/TLS barely improves the situation. Modern browsers provide the target host name of the server in question in the clear in the TLS handshake using the Server Name Indication (SNI) extension. (You can see if your browser does it here), so the attacker learns exactly the same information whether you are using SSL/TLS or not. Even if your browser doesn't provide SNI, the hostname of the server is generally in the server's certificate. Pretty much the only time that a useful (to the attacker) hostname isn't in the certificate is when there are a lot of hosts hidden behind the same wildcard certificate, such as when your domain is hosted using Heroku's "piggyback SSL". But this kind of certificate sharing only works well if your domain is subordinated behind some master domain (e.g, example-domain.heroku.com), which isn't really what you want if you're going to offer a serious service.

This isn't to say that one couldn't design a version of SSL/TLS that didn't leak the target host information quite so aggressively—though it's somewhat harder than it looks—but even if you were to do so, it turns out to be possible to learn a lot about which sites you are visiting via traffic analysis (see, for instance here and here). You could counter this kind of attack as well, of course, but that requires yet more changes to SSL/TLS. This isn't surprising: concealing the target site simply wasn't a design goal for SSL/TLS; everyone just assumed that it would be clear what site you were visiting from the IP address alone (remember that when SSL/TLS was designed, it didn't even support name-based virtual hosting via SNI). I haven't seen much interest in changing this, but unless and until we do, it's hard to see how providing confidentiality for DNS traffic adds much in the way of security.

 

April 7, 2011

As I said earlier, DANE faces serious deployment hurdles, even in certificate lock mode: it works best when the records are signed with DNSSEC, and even if you deploy it without DNSSEC, which is tricky, it only provides protection to clients which have been upgraded, which is going to take a while. Phillip Hallam-Baker, Rob Stradling, and Ben Laurie have proposed a mechanism called Certificate Authority Authorization (CAA), which, while more of a hack, looks rather easier to deploy.

With CAA, as with DANE, a domain can publish a record in the DNS that says "only the following CAs should issue certificates for me." However, unlike DANE, the record isn't addressed to clients but rather to CAs. The idea here is that before issuing a certificate for a domain, a CA checks for the CAA record; if there's a record and the issuing CA isn't on it, it doesn't issue the certificate. This mechanism has the advantage that there are only a relatively small number of root CAs and they have a relationship with the browser vendors, so the browser vendors could at least in theory insist that the CAs honor it, thus providing some protection to everyone almost immediately. Moreover, it's easier to recover from screwups: if you misconfigure your CAA record, all that happens is a CA can't issue you a new cert till you reconfigure your DNS, as opposed to your site suddenly becoming unreachable.
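
As a rough sketch of what that check might look like on the CA's side (my illustration, not part of the CAA proposal; it uses the dnspython library, and "ca.example" is a hypothetical issuer name):

import dns.resolver

def ca_may_issue(domain: str, ca_identifier: str = "ca.example") -> bool:
    # A real implementation also climbs toward the parent domain if the
    # exact name has no CAA record; this sketch only checks one name.
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return True  # no CAA record: any CA may issue
    issuers = [rr.value.decode() for rr in answers if rr.tag == b"issue"]
    # If any "issue" tags are present, only the CAs they name may issue.
    return (not issuers) or (ca_identifier in issuers)

print(ca_may_issue("example.com"))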

Obviously, CAA is inferior to DANE from an end-to-end security perspective, as it relies on all the CAs behaving correctly. However, the primary risk here is probably not intentional CA malfeasance but rather the comparatively weak authentication procedures many CAs use, and if a CA is able to determine that under no circumstances should it be issuing a certificate for a domain, then the weakness of those authentication measures becomes somewhat less important. It's not perfect but it seems like a potentially worthwhile risk containment strategy, especially as it allows you to select a certificate authority whose security measures you trust, rather than having to rely on the security of the weakest CA on the browser root list.

 

March 30, 2011

Over the past few years, there has been a lot of interest in putting TLS keys (or references to TLS keys) in the DNS (see here for background). Recently, IETF spun up the DNS-based Authentication of Named Entities (DANE) WG which is designed to address this issue. The way that DANE for TLS works is that you can publish either:
  • A list of certificates (or hashes) which correspond to the TLS server itself.
  • A list of certificates (or hashes) which correspond to CAs in the TLS certificate chain.

In either case, the semantics are that the server certificate/chain must match one of the provided certificates. This match is both necessary and sufficient, which is to say that the client is supposed to accept an otherwise invalid certificate (i.e., one that it wouldn't ordinarily verify) if it is approved via DANE. Obviously, in order for this to work, you need to have the relevant DNS records protected with DNSSEC, since otherwise any attacker who can inject DNS records can simply generate a valid-appearing TLS connection, which obviates the whole point of TLS.
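
To illustrate those semantics, here's a minimal sketch (mine, not the WG's) of checking a server certificate against the TLSA record type that DANE eventually standardized, for the common case of a full-certificate SHA-256 match. It uses the dnspython library, and a real client would also insist that the TLSA lookup itself be DNSSEC-validated:

import hashlib
import socket
import ssl

import dns.resolver

def tlsa_matches(hostname: str, port: int = 443) -> bool:
    # Look up the TLSA records published for this service.
    tlsa_records = dns.resolver.resolve(f"_{port}._tcp.{hostname}", "TLSA")

    # Grab the server's certificate; CA validation is deliberately skipped
    # here because the point is to check the DANE binding, not the chain.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert_der = tls.getpeercert(binary_form=True)

    digest = hashlib.sha256(cert_der).digest()
    # usage 3 / selector 0 / matching type 1: end-entity certificate,
    # full certificate, SHA-256 digest.
    return any(rr.usage == 3 and rr.selector == 0 and rr.mtype == 1
               and rr.cert == digest
               for rr in tlsa_records)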

What's the point?
I've been trying to work out what I think about DANE for a while. My initial reaction was that it was a total non-starter; anything that relies on DNSSEC has a huge deployment hurdle to jump. A little before today's WG, I started to rethink a bit. At a high level, it seems that DANE-type technology serves two purposes:

  • Cert lock: protecting the server from mis-issued certificates, e.g., a certificate issued by one of the 100+ CAs embedded in your average browser other than the one that issued the legitimate server certificate.
  • Alternate certification: a lot of people complain that it's too inconvenient to get a CA-issued certificate (I don't really believe this) and putting keys in the DNS presents an alternative, allegedly easier avenue.

These objectives have fairly different value curves as a function of deployment. Cert lock is a mechanism for controlling the attack surface, so once you have deployed the DNS records, the value is roughly linear in the number of clients who are able to verify those records. If 50% of users deploy the ability to verify DNS-based certificates then that's 50% of users who aren't vulnerable to CA malfeasance.

By contrast, alternate certification's value is that you don't need to deal with a CA, so it saves you the cost of the certificate ($20-$100) plus whatever hassle it is dealing with the CA. However, if you don't get a CA certificate, then if X% of the users in the universe are DNS-certificate capable, 100-X% of users can't talk to your server at all without getting some horrible error/warning. So as a practical matter, you can't discard the CA-issued certificate until a very high proportion of clients are DNS-certificate capable. Since you obviously want clients to be able to talk to you, this means you don't get any benefit at all until nearly all the clients in the world have upgraded.

So, I think that cert lock is a fairly compelling application of certificates in the DNS, whereas alternate certification isn't really that compelling, at least during the (very long) period when a large fraction of the machines in the world haven't upgraded.

Does cert lock need DNSSEC?
As of this morning, I thought that cert lock didn't really need DNSSEC. The argument runs like this: let's say that the attacker can forge DNS records. The worst he can do is suppress or replace the certificate indication in the DNS to point to his own certificate, but since that certificate must be valid and the ability to attack DNS implies the ability to redirect traffic (via modifying A records), he could have done this in any case, so DANE doesn't make this any worse. On the other hand, there may be some attackers who can't attack DNS, or clients who have cached copies of the cert lock records in DNS, in which case clients are protected from attack. So while DNSSEC is desirable, it's not actually required.

On second thought (and after some comments by Martin Rex), I'm a little less sure. An attacker who can intercept your connection is most likely also able to inject DNS records, so a naive client doesn't get protected by a cert lock style mechanism unless for some reason the attacker is unable to attack DNS. It's possible that a client might have a cached DNS record that contained a cert lock record, in which case it would be protected, but the duration of the cache time to live also limits the responsiveness of the client to unexpected certificate changes, so there are tradeoffs here.

Overall, then, I'm not sure how valuable I think a DNS-based cert lock mechanism is in the absence of DNSSEC. As they say, my views are evolving.

 

November 20, 2010

Perhaps you've heard that the Combating Online Infringement and Counterfeits Act (COICA) has made it out of committee in the Senate. This bill seems to be the latest attempt by the government to develop a strategy for preventing Internet access to sites they consider objectionable, in this case, copyright- or trademark-infringing material. In short, what it would provide for is that the Attorney General could get a court order to declare a domain name/site as bad (my term, not theirs).

Once a site is declared bad, the following blocks can be put in place:

  1. The registrar/registry is required to suspend and lock the domain name.
  2. ISPs are required to attempt to block resolution of the domain name.
  3. Advertising networks are forbidden to serve ads on the site.

It's not entirely clear to me from the bill how much of this happens for each site, but for now I'm just going to assume that all of these actions happen more or less automatically (i.e., the notifications/orders go out) as soon as a domain is declared "bad". And of course, all this stuff is obviously contingent on the relevant entities being located in the US, or at least doing enough business in the US that USG has leverage over them. For the rest of this analysis I'm going to assume that only the US is doing this sort of thing. If the rest of the world follows suit, the problem gets much harder, though still not impossible.

When I read stuff like this—or almost anything, for that matter—my thoughts immediately turn to how to attack it, or in this case how to circumvent the blocking. We need to consider two threat models from the blocker's perspective:

  • Static users, who won't adapt their behavior at all.
  • Adaptive users, who will attempt to actively circumvent blocking.

The history of file sharing suggests that many users in fact fall into the second category, as they have shifted from Napster to Limewire to BitTorrent, etc., but we should still consider both cases.

Static Users
Even if we only consider static users, a site can gain a fair amount of traction by moving as many of its dependencies outside the US as possible. In particular, it can register a domain name with a registrar/registry which is located outside the US. This is harder than it sounds since many of the allegedly foreign registries are actually run by US companies, but as far as I know it's not impossible. That solves the first type of blocking, leaving us with blocking by ISPs and ad networks. Obviously, if you don't serve ads you don't care about ad networks, so this may or may not be an issue, and I don't know to what extent there are ad networks without substantial US operations you can use.

Getting around ISP blocking is more tricky. Many if not most people use their ISP's DNS server (they get it via DHCP) so if your customers are in the US then it's going to be trivial for the ISP to block requests to resolve your site. Basically, if your users aren't willing to do anything then you've pretty much lost your US audience.

Adaptive Users
If your users are willing to be a little adaptive then there are a bunch of progressively more aggressive measures they can take to evade this kind of DNS blocking. The easiest is that they can reconfigure their machines to use an external unfiltered DNS service. This doesn't help if ISPs are required to actively filter all DNS queries using some sort of deep packet inspection technology. It's not difficult to build a box which will capture DNS queries and rewrite them in flight, or alternatively, to block DNS queries to any resolvers other than the ISP's own resolvers (note that many ISPs already block TCP port 25 for spam blocking, so it's not like this is particularly hard). It's unclear to me that this particular bill would require ISPs to do this kind of filtering, since there is a specific safe harbor for the ISP to show that they do not "have the technical means to comply with this section", but obviously this is something that the government could require.

One natural response is to use Tor, which has the advantage of being available right now. The disadvantage is that Tor wants to tunnel all your traffic which means that performance isn't that great, and it's kind of antisocial (as well as slow) to be using Tor to transfer gigabytes of movies from place to place when all you want to do is get unfiltered name resolution.

What's really needed is a name resolution mechanism that resists filtering. One option would be to have an encrypted connection to your DNS resolver (Dan Bernstein, call your office) or some non-DNS service that acts as a DNS proxy, e.g., DNS over TLS. This requires pretty substantial work by users and client authors to deploy and the load on those resolvers would be significant. Note that you don't need to modify the operating system to do this; there are plenty of user-land DNS resolution libraries available that could be embedded into your client. Still, the amount of work here isn't totally insignificant.

Another option comes to mind, however. There's nothing wrong with the ordinary ISP-provided DNS for most resolutions. There aren't going to be that many domains on this block list and the government helpfully publishes a list of them. Someone could easily gather a list of the blocked domains and the IPs they had when blocked, or even maintain an emergency parallel update system to let the blocked domains update their records. All that's required is a way to retrieve that data, which could easily fit into a single file. Moreover, the resulting file could be formatted as a /etc/hosts file which people could just install on their machines, at which point the standard operating system mechanisms will cause it to bypass DNS resolution. The result would be that you got ordinary DNS resolution most of the time but for blocked hosts you got the result from /etc/hosts. All that's required is some way to periodically update the bypass list, but that could be done manually or with a tiny program.
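
Here's a minimal sketch of that updater. Everything specific in it is made up for illustration: the URL, and the assumption that the published list is already formatted as /etc/hosts-style "IP hostname" lines; you'd also need enough privilege to write /etc/hosts.

import urllib.request

BYPASS_LIST_URL = "https://example.org/bypass-list.txt"  # hypothetical source
MARKER = "# --- DNS-block bypass entries ---"

def update_hosts(hosts_path: str = "/etc/hosts") -> None:
    # Fetch the published list of blocked names and their last-known IPs.
    entries = urllib.request.urlopen(BYPASS_LIST_URL).read().decode()

    with open(hosts_path) as f:
        current = f.read()
    # Drop any previously installed bypass section, then append the new one.
    base = current.split(MARKER)[0].rstrip()

    with open(hosts_path, "w") as f:
        f.write(base + "\n\n" + MARKER + "\n" + entries + "\n")

if __name__ == "__main__":
    update_hosts()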

Of course, there are still plenty of blocking mechanisms available to the government: they could require IP-level blocking or attempt to block distribution of the bypass list, though the list is probably small and easy enough to distribute that blocking it would be impractical. However, I think this makes clear that just blocking DNS is not likely to be as effective as one would like if users are willing to put in a modest amount of effort.

Acknowledgement: This post benefited from substantial discussions with Cullen Jennings.

 

March 14, 2010

ICANN is deferring yet again approval of the (useless) .xxx domain:
A global Internet oversight agency on Friday deferred a decision until June on whether to create a ".xxx" Internet suffix as an online red-light district.

The board of the Internet Corporation for Assigned Names and Numbers, or ICANN, initiated a 70-day process of consultations on a domain that could help parents block access to porn sites. Use of the ".xxx" suffix would be voluntary, though, and would not keep such content entirely away from minors.

...

ICM Registry LLC first proposed the ".xxx" domain in 2000, and ICANN has rejected it three times already since then. But an outside panel last month questioned the board's latest rejection in 2007, prompting the board to reopen the bid.

"There's a lot of complex issues," ICANN CEO Rod Beckstrom said, without elaborating.

I wish he would elaborate, since I have no idea what the complex issues are. As far as I know, here's the situation: There are a large number of adult-oriented sites that currently live in .com, .net, etc. Those sites have a high profile and aren't going to suddenly abandon their existing domain names. As long as there are a large number of non-.xxx adult domains, then it's not a very useful criterion for filtering software to use, so I doubt it will be of any use in protecting children from pornography. Since I don't see any indication that adult domains will really be restricted to .xxx, the public good of the proposed new domain seems relatively limited. On the other hand, the harm seems fairly limited as well, especially as the general takeup of new GTLDs seems to be limited at best. (Note that this applies more or less equally to any other proposed new GTLD.) I realize that there are people who think that there are important forces pushing in one direction (ICM's desire to sell new domain names) or another (silly worries that .xxx will somehow legitimize pornography), but as far as I can tell it's just a no-op.

 

February 25, 2010

OpenDNS (a free DNS service) has decided to adopt DNSCurve to secure its traffic. Some background might be helpful here: DNS, of course, is susceptible to a variety of forgery attacks, which is why the IETF has spent approximately 10 trillion man hours developing DNSSEC. There's a fair amount of dissatisfaction with DNSSEC (I'm not that thrilled with it either) and Dan Bernstein has developed an alternative called DNSCurve.

At a very high level, the approaches stack up like this:

  • DNSSEC is an object security system. Each DNS zone has an asymmetric (public/private) key pair which is used to sign the records in the zone.
  • DNSCurve is a channel security system. Each DNS server has a Diffie-Hellman (actually ECDH) key pair. When a client connects to the server, it does an ECDH key exchange and the shared key is used to authenticate traffic between the client and server.

The primary argument for DNSCurve seems to be performance: DNSCurve has better performance than DNSSEC in two respects: the packets are smaller and under certain conditions the load on the server is lower. These properties are related but not identical. The packets are smaller partly because DNSCurve replaces the digital signature with a symmetric integrity check. These checks can be a lot smaller than digital signatures using RSA (and a fair bit smaller than even the smallest digital signatures). Second, because DNSCurve uses elliptic curve cryptography, the keys that need to be carried in packets are smaller. (This is mostly relevant in packets which carry the key for a zone rather than packets which carry other data.)

The packet size argument is straightforward. The load argument is more complicated. As I noted above, DNSCurve uses elliptic curve cryptography, which is faster and has smaller keys than RSA (the basic algorithm used by DNSSEC). This means that it's inherently faster to set up a DNSCurve association than it is to sign a DNSSEC record using RSA. In addition, DNSCurve has a mechanism for the client and server to cache the DH shared secret. The upshot of this is that it's substantially cheaper to authenticate a new DNS record with DNSCurve than it is with DNSSEC. In the worst case, it involves an ECDH operation. In the best case, it just involves a symmetric crypto operation. So, in this respect the performance of DNSCurve is superior.

However, this isn't really a heads-up comparison: in the DNSSEC model, you sign the zone once whenever it changes (possibly using a key that's kept offline) and then just send copies of it to any requester, so no cryptographic operations are needed after the initial signature. By contrast, because DNSCurve uses a symmetric, rather than asymmetric, integrity mechanism, the DNS server needs to compute that integrity check for each new request. This means that while DNSCurve is faster if you change your DNS data frequently and don't serve that many requests, it's slower if you don't change it often but serve a lot of requests. In addition, it means that while DNSSEC signing can be done offline with a key that is never on an Internet-accessible machine, DNSCurve requires that the private key to the zone be kept on the DNS server, thus potentially exposing it to theft if the server is compromised. By contrast, if DNSSEC is used in an offline signing mode, then compromise of the server does not enable forgery attacks. Where DNSCurve has a major advantage is if your server provides dynamic responses (e.g., for DNS load balancing), in which case offline signing isn't that useful and the performance advantage of DNSCurve is most significant.
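
To see where the crossover lies, here's a toy cost model of that trade-off. The relative costs are made-up illustrative numbers, not measurements; the only point is that DNSSEC's signing cost scales with zone changes while DNSCurve's per-query cost scales with traffic.

# Hypothetical relative costs, for illustration only.
SIGN_COST = 100   # one RSA signature (DNSSEC, done once per record per change)
ECDH_COST = 20    # one ECDH agreement (DNSCurve, client without a cached key)
MAC_COST = 1      # one symmetric integrity check (DNSCurve, cached key)

def dnssec_cost(zone_changes: int, records_per_zone: int) -> int:
    # Sign every record each time the zone changes; queries are then free.
    return zone_changes * records_per_zone * SIGN_COST

def dnscurve_cost(queries: int, new_client_fraction: float) -> float:
    # Every query costs something; new clients cost an ECDH, repeat ones a MAC.
    return queries * (new_client_fraction * ECDH_COST
                      + (1 - new_client_fraction) * MAC_COST)

# A rarely-changing zone under heavy query load: DNSSEC comes out far ahead.
print(dnssec_cost(zone_changes=12, records_per_zone=50))
print(dnscurve_cost(queries=10_000_000, new_client_fraction=0.1))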

It's also worth noting that the use of faster algorithms isn't an inherent advantage of DNSCurve. DNSSEC (like all relatively modern COMSEC protocols) supports multiple algorithms and there have been proposals to add ECC (see, for instance, draft-hoffman-dnssec-ecdsa). An ECDSA-enabled DNSSEC would still be slower than DNSCurve if run in a dynamic environment, but performance would be a lot closer and how much slower would depend on how many repeat requests you got from the same client; if each client only makes one request, then performance would be more or less equivalent. The packet size of ECC DNSSEC would be a little worse, but probably not enough worse to make a big difference.

This isn't to say that there aren't other factors to consider (see the DNSCurve site for the other arguments in favor of DNSCurve). However, the performance argument doesn't seem to me to be dispositive, since which solution is faster depends on your deployment model and assumptions about the client environment.

 

October 6, 2009

Richard Barnes pointed me to the joint ICANN/VeriSign presentation from RIPE 59 (today in Lisbon) on their plans for rolling out signing of the root zone. For those who aren't up on DNSSEC, each TLD (.com, .net, .us, etc.) will sign the domains under it, but the design calls for the information for each of those domains to be signed as well at the root of the tree. There's some question about how important this really is from a technical perspective but the DNSSEC community seems convinced (wrongly in my opinion) that it's essential, so it's socially important even if not technically important.

Anyway, Richard pointed out something interesting to me: they plan to roll over the root Zone Signing Key (ZSK) four times a year (see slide 19) which doesn't really make sense to me. Actually, the whole key rollover scheme doesn't make much sense to me.

It might be helpful to start with a little background. The way things are going to work is this: ICANN is going to have a long-lived (2-5 years) Key Signing Key (KSK). The public half of this key will be built into people's resolvers. But the KSK will not be used to directly sign any user data. Rather, it will be used to sign a short-lived (3 months) ZSK [held by VeriSign] which will be used to sign the data. Because the relying party (i.e., your computer) knows the KSK, it can verify any new ZSK without having to get it directly.

Why are they doing this? As far as I can tell the rationale is as follows:

  • The security of RSA key pairs is directly connected to key length, which is also the length of the signature that the key pair produces.
  • Space in DNS packets is limited.

The combination of these two factors means that if you want to use longer (higher security) key pairs to sign zone data, you start running into size limitations in the packet. That's perfectly understandable, but why does having two keys help? The idea here is that you have a big (2048-bit) KSK and a short (1024-bit) ZSK. But because the ZSK is changed frequently, you don't need as strong a key and can still get good security. I wasn't able to find a good description of this in the DNSSEC documents, but Wikipedia came through:

Keys in DNSKEY records can be used for two different things and typically different DNSKEY records are used for each. First, there are Key Signing Keys (KSK) which are used to sign other DNSKEY records and the DS records. Second, there are Zone Signing Keys (ZSK) which are used to sign RRSIG and NSEC/NSEC3 records. Since the ZSKs are under complete control and use by one particular DNS zone, they can be switched more easily and more often. As a result, ZSKs can be much shorter than KSKs and still offer the same level of protection, but reducing the size of the RRSIG/NSEC/NSEC3 records.

The only problem with this reasoning is that it's almost completely wrong, as can be seen by doing some simple calculations. Let's say we have a key with a lifespan of one year that requires C computations to break. An attacker buys enough hardware to do C computations in two months and then is able to use the key to forge signatures for the next 10 months (I'll try to write about keys used for confidentiality at some later point.) If we think about a series of keys, they will be vulnerable 10/12 of the time. Now, let's say that we halve the lifespan of the key to 6 months, which shortens the window of vulnerability to 4 months per key, or 2/3 of the time. But if the attacker just buys 2C compute power, he can break the key in 1 month, at which point we're back to having the keys vulnerable 10/12 of the time. If we generalize this computation, we can see that if we increase the frequency of key changes by a factor of X, we also increase the attacker's workload by a factor of X.

More concretely, if we originally intended to change keys every 4 years and instead we change them every quarter, this is a factor of 16 (4 bits) improvement in security. Opinions vary about the strength of asymmetric keys, but if we assume that 1024-bit RSA keys have a strength of about 72 bits [*] then this increases the effective strength to around 76 bits, which is somewhere in the neighborhood of 1100 bit RSA keys, a pretty negligible security advantage and nowhere near the strength of a 2048 bit RSA key (> 100 bits of security). It's certainly not correct that this offers the "same level of protection".
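
Here's that arithmetic spelled out; the 72-bit figure for 1024-bit RSA is the rough estimate used in the text.

import math

old_lifetime_years = 4      # the original rollover interval
new_lifetime_years = 0.25   # quarterly rollover
base_strength_bits = 72     # rough strength estimate for 1024-bit RSA

factor = old_lifetime_years / new_lifetime_years  # 16x more frequent changes
gained_bits = math.log2(factor)                   # = 4 bits
print(f"effective strength: ~{base_strength_bits + gained_bits:.0f} bits")
# Roughly 76 bits, i.e. around an 1100-bit RSA key, versus the >100 bits
# usually ascribed to 2048-bit RSA.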

The more general lesson here is that changing keys rapidly is nearly useless as a method of preventing analytic attacks. It's almost never practical to change keys frequently enough to have a significant impact on the attacker's required level of effort. If you're that close to the edge of a successful attack, what you need is a stronger key, not to change your weak keys more frequently. In the specific case of DNSSEC, just expanding the size of the packet by 10 bytes or so would have as much if not more security impact at a far lower system complexity cost.

 

January 21, 2009

I've written before about Kentucky's attempt to seize a bunch of domain names from gambling sites. They prevailed at the trial level and were able to take control of the names, but just lost at the appellate level. From the Register article:
The lower-court ruling rested on Franklin County Circuit Judge Thomas Wingate's highly specious finding that internet casino domain names constitute "gambling devices" that are subject to the state's anti-gambling statutes. Tuesday's decision disabused Wingate of that notion in no uncertain terms.

"Suffice it to say that given the exhaustive argument both in brief and oral form as to the nature of an internet domain name, it stretches credulity to conclude that a series of numbers, or internet address, can be said to constitute a machine or any mechanical or other device ... designed and manufactured primarily for use in connection with gambling," they stated. "We are thus convinced that the trial court clearly erred in concluding that the domain names can be construed to be gambling devices subject to forfeiture under" Kentucky law.

(Decision here. BTW, can you believe that in 2008 they're still distributing documents as scans turned into PDFs? Clearly this was sourced on a computer, so what's the problem?)

While I agree that it doesn't make a lot of sense to view domain names as a gambling "device", I'm not sure that this is quite as broad a ruling as I would have liked. As far as I can tell, this is just a ruling that this particular Kentucky law is inapplicable, but it's not clear what would stop Kentucky from passing a law explicitly giving them the right to seize domain names used in gambling, which would put us right back where we started. The problem here isn't so much the overreach of this particular Kentucky law, but rather the potential for a situation where every political unit has joint universal jurisdiction over DNS entries just because the owners of the domain names exchange traffic with people in that political unit. It's understandable that the court didn't want to address that when it could find on narrower grounds, but presumably we'll eventually run into a case where the applicability of the local laws is clearer and we'll have to take the major jurisdictional issue head-on.

Thanks to Hovav Shacham and Danny McPherson for pointing me to this ruling.

 

January 18, 2009

Sorry it took me so long to get back to this topic. In previous posts I started talking about the possibility of replacing DNSSEC with certificates. Obviously, this can be done technically, but is it a good idea? The basic argument here (advanced by Paul Vixie but also others) is that putting keys in the DNS would be better than the existing X.509 infrastructure.

From a cryptographic perspective, this is probably not true: DNSSEC generally uses the same cryptographic techniques as X.509, including supporting MD5 (though it's marked as non-recommended). It may actually be weaker: one potential defense against collision attacks with X.509 is to randomize the serial number, but it's not clear to me that there's something as simple with DNSSEC RRSIG RRs, though presumably you could insert a random-looking record if you really wanted to. Cryptographic security isn't the main argument, though. Rather, the idea is that DNSSEC would be more secure for non-cryptographic reasons.

I can see two basic arguments for why DNSSEC would be more secure, one quasi-technical and one administrative. The quasi-technical argument is that there is a smaller surface area for attack. Because a Web client (this is more or less generally true, but Web is the most important case) will accept a certificate from any CA on the trust anchor list, the attacker just needs to find the CA with the weakest procedures and get a certificate from that CA. This works even if the target site already has a certificate from some other CA, because the client has no way of knowing that. By contrast, since at least in theory only one registrar is allowed to authorize changes to your domain, the attacker needs to target that registrar. That said, domain name thefts have occurred in the past, and so it's not clear how good a job the registrars actually do. Moreover, it wouldn't be that hard to import this sort of semantics into X.509; the CAs would need to coordinate with the registrars to verify the user's identity, but that's not impossible. Another possibility would be to have a first-come-first-served type system where CA X wouldn't issue certificates for a domain if CA Y had already done so without some sort of enhanced verification. Obviously, this is imperfect, but it would at least make hijacking of popular domains difficult.

The second argument is purely administrative, namely dissatisfaction with CA operational practices. Here's Vixie: "frankly i'm a little worried about nondeployability of X.509 now that i see what the CA's are doing operationally when they start to feel margin pressure and need to keep volume up + costs down." That said, it's not clear that registrar procedures will be any better (again, there have been DNS name thefts as well), and it's quite possible that they would be worse. Note that in some cases, such as GoDaddy, they're actually the same people. Moreover, there needs to be some coordination between the registrars and the registries, and that's a new opportunity for a point of failure. Anyway, this seems to me to be an open question.

The most difficult problem with this plan seems to be deployment. So far, DNSSEC deployment has been extremely minimal. Until it's more or less ubiquitous, users and clients need to continue trusting X.509 certificates, so adding DNSSEC-based certificates to the mix marginally increases the attack surface area, making the situation worse (though only slightly), not better. And as usual, this is a collective action problem: as long as clients trust X.509 certs there's not a lot of value to server operators to transition to DNSSEC-based certificates, which, of course, means that there's not a lot of incentive for clients (or more likely implementors) to start trusting them either. It's not really clear how to get past this hurdle in the near future.