Blaming the feds for lack of DNSSEC deployment

In today's Threat Level, Ryan Singel quotes a bunch of DNS types complaining about how USG isn't signing the DNS root (sorry for the long blockquote, but the context is important):
Those extensions cryptographically sign DNS records, ensuring their authenticity like a wax seal on a letter. The push for DNSSEC has been ramping up over the last few years, with four regions -- including Sweden (.SE) and Puerto Rico (.PR) -- already securing their own domains with DNSSEC. Four of the largest top-level domains -- .org, .gov, .uk and .mil -- are not far behind.

But because DNS servers work in a giant hierarchy, deploying DNSSEC successfully also requires having someone trustworthy sign the so-called "root file" with a public-private key. Otherwise, an attacker can undermine the entire system at the root level, like cutting down a tree at the trunk. That's where the politics comes in. The DNS root is controlled by the Commerce Department's NTIA, which thus far has refused to implement DNSSEC.

The NTIA brokers the contracts that divide the governance and top-level operations of the internet between the nonprofit ICANN and the for-profit VeriSign, which also runs the .com domain.

"They're the only department of the government that isn't on board with securing the Domain Name System, and unfortunately, they're also the ones who Commerce deputized to oversee ICANN," Woodcock said.

"The biggest difference is that once the root is signed and the public key is out, it will be put in every operating system and will be on all CDs from Apple, Microsoft, SUSE, Freebsd, etc," says Russ Mundy, principal networking scientist at Sparta, Inc., which has been developing open-source DNSSEC tools for years with government funding. He says the top-level key is "the only one you have to have, to go down the tree."


"We would want to bring the editing, creation and signing of the root zone file here," to IANA, Lamb said, noting that VeriSign would likely still control distribution of the file to the root servers, and there would be a public consultation process to ensure the change was right for the net.

But changing that system could be perceived as reducing U.S. control over the net -- a touchy geopolitical issue. ICANN is often considered by Washington politicians to be akin to the United Nations, and its push to control the root-zone file could push the U.S. to give more control to VeriSign, experts say.


Woodcock isn't buying the assurances of NTIA that it is simply moving deliberatively.

"If the root isn't signed, then no amount of work that responsible individuals and companies do to protect their domains will be effective," Woodcock said. "You have to follow the chain of signatures down from the root to the top-level domain to the user's domain. If all three pieces aren't there, the user isn't protected."

Without getting into how important/useful DNSSEC is (see Tom Ptacek's quite negative assessment for a contrary opinion), I'm having some trouble understanding the arguments on offer here.

It's true that the system is arranged in a hierarchy and that signing the root would be nice, but as far as I know that's not at all necessary for DNSSEC to be useful. As long as the TLD operators (.com, .org, .net, etc.) sign their zones, then it's quite possible to have a secure system. All that needs to happen is that someone publishes a list of the public keys for each TLD zone and that list gets integrated into the resolvers. There's plenty of precedent for this: when certificate systems were first designed, the concept was that there would be a single root CA and that all other CAs would be certified by it. That isn't how things actually worked out, though. Instead there were a bunch of different CAs whose public keys got built into browsers, operating systems, etc. There are on the order of 100 such CAs in your average browser, so distributing something like 300 TLD signing keys with resolvers is hardly infeasible.

In some respects, the situation with DNS is superior because each signing key would be scoped. So, for instance, the key for .com couldn't be used for .org. This is different than current PKI systems where any of those 100 or so CAs can sign any hostname, so a compromise of any of those CAs is quite serious. By contrast, if you're a .com customer, you don't need to worry so much about the security of .zw unless you happen to do a lot of business with sites in Zimbabwe.
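The scoping property above is easy to see in a sketch. This is a hypothetical illustration (the dict, the placeholder key strings, and `anchor_for` are all invented for the example), not how any real resolver stores trust anchors:

```python
# Hypothetical sketch: a resolver trust-anchor table keyed by TLD.
# Unlike a browser CA bundle, each key here can only validate names
# in its own zone, so a compromise is contained to that zone.

TLD_TRUST_ANCHORS = {
    "com": "base64-DNSKEY-for-com",   # placeholder values
    "org": "base64-DNSKEY-for-org",
    "se":  "base64-DNSKEY-for-se",
}

def anchor_for(name: str):
    """Return the trust anchor scoped to this name's TLD, or None.

    The key for .com can never be used to validate a name under .org.
    """
    tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
    return TLD_TRUST_ANCHORS.get(tld)

assert anchor_for("www.example.com.") == TLD_TRUST_ANCHORS["com"]
assert anchor_for("example.zw") is None   # no anchor for .zw; .com users unaffected
```

Contrast this with a CA bundle, where the lookup is not scoped: any entry in the table can vouch for any name.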

With that in mind, if the root were to be signed tomorrow, it's not like it would be tremendously useful unless the individual TLD operators start signing their zones. It sounds like some of them are starting to, but at least from my perspective, this seems rather more important than the root. Now, it's true that having the root signed might have some sort of PR impact in terms of incentivizing the TLD operators, but technically, I don't see that it makes that much of a difference. The primary advantage is that it wouldn't be necessary to distribute a file full of individual zone keys, but that doesn't really seem to be the primary obstacle to DNSSEC deployment/usefulness, and given how fraught the political issues around signing the root are, it seems worth thinking about how we can make progress without it.

UPDATE: I should mention another way in which TLD keys are simpler than CA keys: because any CA can sign any name, the browser vendors have to have a fairly complicated vetting process to decide whether a given CA follows appropriate practices and can be trusted. But because the TLD registries are already responsible for the zones they would be signing and can't sign names outside their zones, it's relatively clear who should and shouldn't be included on the list.


What I find MORE frustrating is that the DNS vendors (*cough* BIND *cough*) are actually NOT interested in securing the current system well, but ONLY pushing DNSSEC.

E.g., 0x20 is a friendlier form of randomization than source port randomization, mostly because it is not destroyed by NATs and doesn't create an awful select loop in the server. Combined with a detect & respond feedback loop that generates duplicate queries, the blind attack can be stopped to an arbitrary degree, even in the presence of NATs and firewalls derandomizing ports.
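The 0x20 trick can be sketched in a few lines (the helper names here are invented for illustration; a real resolver would apply this to the wire-format question section and compare the echoed question in the response):

```python
import random

def randomize_0x20(qname: str) -> str:
    """Randomly flip the case of each letter in a query name (the 0x20 trick).

    Authoritative servers echo the question back preserving case, so the
    random casing is extra entropy a blind spoofer must guess -- entropy
    that survives NATs rewriting source ports.
    """
    return "".join(c.upper() if random.random() < 0.5 else c.lower()
                   for c in qname)

def response_matches(sent_qname: str, echoed_qname: str) -> bool:
    # A spoofed response that guesses the wrong casing is discarded.
    return sent_qname == echoed_qname

sent = randomize_0x20("www.example.com")
assert sent.lower() == "www.example.com"   # same name, randomized case
assert response_matches(sent, sent)
assert not response_matches("wWw.eXample.COM", "www.example.com")
```

Each letter contributes roughly one bit of extra entropy, on top of the 16-bit transaction ID, and none of it depends on the source port surviving a NAT.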

And you could do all this with static amounts of state-holding, with an effect only on a few aberrant authoritative servers (which return additional garbage in a broken manner).

Likewise, if you ever get TWO different responses back for the same query with the same ID from the server, forward the second response to the client (indicating that something is wrong) AND void all cache entries associated with the response. If the client respects that something is wrong, good. If not, ah well, you warned 'em.
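The duplicate-response rule above can be sketched as follows (the class and method names are invented for illustration; real resolver state is keyed on more than name and transaction ID):

```python
# Hypothetical sketch of the duplicate-discard rule: if two DIFFERENT
# answers arrive for the same (query, ID), treat it as an attack signal,
# forward the second answer to the client, and void the cache entries.

class DetectingCache:
    def __init__(self):
        self.seen = {}    # (qname, txid) -> first answer observed
        self.cache = {}   # qname -> cached answer

    def on_response(self, qname, txid, answer):
        key = (qname, txid)
        first = self.seen.get(key)
        if first is None:
            self.seen[key] = answer
            self.cache[qname] = answer
            return "cached"
        if answer != first:
            # Conflict: someone is racing us. Trust neither answer.
            self.cache.pop(qname, None)
            return "conflict"   # second answer forwarded to client as a warning
        return "duplicate"

c = DetectingCache()
assert c.on_response("example.com", 1234, "192.0.2.1") == "cached"
assert c.on_response("example.com", 1234, "198.51.100.9") == "conflict"
assert "example.com" not in c.cache
```

Note that the state required is bounded by the number of outstanding queries, which is what makes the memory cost static as the comment claims.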

No effect on latency, and only a few b0rken DNS authoritative servers won't have their results cached. (Verified on >1 week of ICSI's DNS traffic.)

Voila, you can defang blind attacks completely (because even when they match, unless the authority was knocked down, it doesn't add state to the cache!) and even a snooping attack is seriously limited (because it is DETECTED and can only have an effect while active).

More important is how BIND et al. handle glue records, namely anything beyond the A-B mapping requested by the client.

The only reason Kaminsky's blind attack works is that there are all sorts of conditions where BIND et al. will cache these records.

But they really serve only two purposes: the first is to provide information needed to resolve the current query (such as the IPs and names of the authoritative nameservers); the second is to prefetch data which would be useful later (such as the authoritative nameservers and other data).

So (what I understand a couple commercial vendors do) a SIMPLE rule which defangs the Kaminsky attack completely:

Use all glue records within the context of the current recursive query.

If a glue record corresponds to an existing cache entry and is unchanged, do nothing.

If a glue record does NOT have a cache entry, or it is DIFFERENT than a current cached entry (eg, change of authoritative nameserver list), generate an ADDITIONAL internal query to populate that glue record.

The result is NO additional time for any lookup (normal lookups proceed normally, and a lookup which would refer to a cached glue entry still works fine), changed glue records CAN reflect immediate changes (eg, new authoritative DNS servers), and load on the authoritative servers is, AT MOST, doubled. But let's face it, DNS is a very lightweight protocol.
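The three-part rule above can be sketched as a single function (hypothetical names throughout; `requery` stands in for an independent recursive lookup):

```python
# Hypothetical sketch of the glue-handling rule: glue is used only within
# the current recursive query; anything new or changed is re-fetched with
# an independent query instead of being cached on faith.

def handle_glue(cache: dict, glue: dict, requery) -> None:
    """cache: name -> address; glue: unsolicited records in a response;
    requery(name): issue an independent lookup and return its answer."""
    for name, addr in glue.items():
        if cache.get(name) == addr:
            continue                 # matches the cache: do nothing
        # Missing or changed: never cache the glue itself -- verify it
        # with an additional internal query.
        cache[name] = requery(name)

cache = {"ns1.example.net": "192.0.2.53"}
glue = {"ns1.example.net": "192.0.2.53",    # unchanged: ignored
        "ns2.example.net": "203.0.113.53"}  # new: triggers a re-query
handle_glue(cache, glue, requery=lambda name: "203.0.113.53")
assert cache["ns2.example.net"] == "203.0.113.53"
```

An off-path attacker's forged glue never lands in the cache directly; at worst it costs one extra query to the real authoritative server, which is where the "at most doubled" load bound comes from.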

Yet when I asked Paul Vixie at the Usenix panel on BIND, "Why cache glue records at all?", he just hemmed and hawed about "we don't want to set unilateral policy as BIND". And he did NOT answer the question "IF you have to choose between performance and security, WHY choose performance?" (Especially since DNSSEC must cripple performance!)

So three relatively simple cleanups (detect & respond for the blind attack, duplicate-discard for both blind and snooping attacks, and proper treatment of glue) and voila.

Yeah, it's not DNSSEC, and DNSSEC can counter man-in-the-middle attacks. But if you are a man-in-the-middle on DNS you can probably just as easily be a man-in-the-middle on TCP, so DNSSEC does you no good because you STILL end up needing application-level integrity.

Thus I think the role of securing the DNS infrastructure should be "secure SHORT of a man-in-the-middle". And that you can do TODAY, with a better resolver, by looking at the capabilities and prerequisites of attackers, rather than the attacks themselves.

Your argument falls down as soon as there is a need (or, unfortunately, a desire) for one of the TLDs to roll over one of the trust anchors. In DNSSEC, if that is not done correctly by every recursive DNS resolver who has loaded the old key, whole zones go bad for all users who rely on those DNS resolvers. Unlike in web browsers, there is currently no automated way to get new trust anchors in a timely fashion. If the root were signed, this rollover is relatively trivial (unless it is the root's keys that need to be rolled over).
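The rollover hazard can be made concrete with a toy example (all names and keys here are invented; real validation walks a chain of DNSKEY/DS records, not a flat table):

```python
# Hypothetical sketch of the rollover failure mode: a resolver holding a
# stale trust anchor for a TLD rejects every name in that zone after the
# TLD rolls its key -- until someone manually updates the anchor.

anchors = {"se": "old-key"}           # resolver's configured anchor (stale)
zone_signing_key = {"se": "new-key"}  # what the TLD actually signs with now

def validates(name: str) -> bool:
    tld = name.rsplit(".", 1)[-1]
    anchor = anchors.get(tld)
    return anchor is not None and anchor == zone_signing_key.get(tld)

assert not validates("example.se")  # the whole zone "goes bad" for this resolver
anchors["se"] = "new-key"           # manual fix, per resolver, per rolled TLD
assert validates("example.se")
```

With a signed root there is exactly one anchor to keep current, and TLD rollovers become ordinary signed updates rather than a per-resolver configuration change.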

The operational part of DNSSEC for the people using it (as compared to the people signing their zones) has been mostly ignored so far. The operational parts will be fixed eventually, but it is far from ready today. Signing the root will make the operations for everyone much simpler and therefore make the whole system more stable.
