EKR: March 2011 Archives


March 31, 2011

My Web 3.0 friends tell me that Twitter is the new RSS. I don't plan to be doing any actual tweeting, but you can follow EG at @egtwitfeed.
Here at IETF 80. Doing some presentations:

Sorry about the certificate validation errors. Blame the reSIProcate SVN admins.


March 30, 2011

Over the past few years, there has been a lot of interest in putting TLS keys (or references to TLS keys) in the DNS (see here for background). Recently, IETF spun up the DNS-based Authentication of Named Entities (DANE) WG, which is designed to address this issue. The way that DANE for TLS works is that you can publish either:
  • A list of certificates (or hashes) which correspond to the TLS server itself.
  • A list of certificates (or hashes) which correspond to CAs in the TLS certificate chain.

In either case, the semantics are that the server certificate/chain must match one of the provided certificates. This match is both necessary and sufficient, which is to say that the client is supposed to accept an otherwise invalid certificate (i.e., one that it wouldn't ordinarily verify) if it is approved via DANE. Obviously, in order for this to work, you need to have the relevant DNS records protected with DNSSEC, since otherwise any attacker who can inject DNS records can simply generate a valid-appearing TLS connection, which obviates the whole point of TLS.
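The accept-if-any-match semantics described above can be sketched in a few lines. The record shapes and the use of SHA-256 here are illustrative assumptions (the actual TLSA record format was still being settled in the WG at the time), but the logic is the one described: the presented chain is accepted if the end-entity certificate matches a "server" record or any chain certificate matches a "ca" record.

```python
import hashlib

def dane_accepts(server_chain_der, dane_records):
    """Sketch of DANE-style matching, under assumed record shapes.

    server_chain_der: list of DER-encoded certificates, end-entity first.
    dane_records: list of (usage, sha256_hex) tuples, where usage is
    'server' (must match the end-entity cert) or 'ca' (must match some
    CA cert in the chain). Accept if ANY record matches.
    """
    def sha256_hex(der):
        return hashlib.sha256(der).hexdigest()

    for usage, digest in dane_records:
        if usage == "server" and sha256_hex(server_chain_der[0]) == digest:
            return True
        if usage == "ca" and any(
            sha256_hex(cert) == digest for cert in server_chain_der[1:]
        ):
            return True
    return False
```

Note that this check replaces, rather than supplements, ordinary path validation: per the semantics above, a chain that matches a DANE record is accepted even if the client wouldn't otherwise verify it, which is exactly why the records themselves need DNSSEC protection.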

What's the point?
I've been trying to work out what I think about DANE for a while. My initial reaction was that it was a total non-starter; anything that relies on DNSSEC has a huge deployment hurdle to jump. A little before today's WG meeting, I started to rethink that a bit. At a high level, it seems that DANE-type technology serves two purposes:

  • Cert lock: protecting the server from mis-issued certificates, e.g., one issued by any of the 100+ CAs embedded in your average browser other than the one that issued the legitimate server certificate.
  • Alternate certification: a lot of people complain that it's too inconvenient to get a CA-issued certificate (I don't really believe this) and putting keys in the DNS presents an alternative, allegedly easier avenue.

These objectives have fairly different value curves as a function of deployment. Cert lock is a mechanism for controlling the attack surface, so once you have deployed the DNS records, the value is roughly linear in the number of clients who are able to verify those records. If 50% of users deploy the ability to verify DNS-based certificates then that's 50% of users who aren't vulnerable to CA malfeasance.

By contrast, alternate certification's value is that you don't need to deal with a CA, so it saves you the cost of the certificate ($20-$100) plus whatever hassle it is dealing with the CA. However, if you don't get a CA certificate, then if X% of the users in the universe are DNS-certificate capable, 100-X% of users can't talk to your server at all without getting some horrible error/warning. So as a practical matter, you can't discard the CA-issued certificate until a very high proportion of clients are DNS-certificate capable. Since you obviously want clients to be able to talk to you, this means you don't get any benefit at all until nearly all the clients in the world have upgraded.
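A toy model makes the difference between the two value curves concrete. The specific numbers (ca_cost, breakage_cost) are arbitrary assumptions chosen only to show the shapes, not measurements of anything.

```python
def cert_lock_value(deploy_frac):
    # Cert lock: value is roughly linear in client deployment, since
    # each capable client is one fewer client exposed to a mis-issued
    # certificate. Normalized so full deployment has value 1.
    return deploy_frac

def alt_cert_value(deploy_frac, ca_cost=1.0, breakage_cost=10.0):
    # Alternate certification (dropping the CA cert entirely):
    # you save the (one-time) CA cost, but every client that can't
    # verify DNS certificates hits an error/warning, modeled here as
    # a cost proportional to the non-deployed fraction.
    return ca_cost - (1.0 - deploy_frac) * breakage_cost
```

With these (assumed) costs, cert lock is already worth 0.5 at 50% deployment, while alternate certification stays strongly negative until deployment exceeds 90%, which is the "no benefit until nearly everyone has upgraded" point made above.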

So, I think that cert lock is a fairly compelling application of certificates in the DNS, whereas alternate certification isn't really that compelling, at least during the (very long) period when a large fraction of the machines in the world haven't upgraded.

Does cert lock need DNSSEC?
As of this morning, I thought that cert lock didn't really need DNSSEC. The argument runs like this: let's say that the attacker can forge DNS records. The worst he can do is suppress or replace the certificate indication in the DNS to point to his own certificate, but since that certificate must be valid, and the ability to attack DNS implies the ability to redirect traffic (via modifying A records), he could have done this in any case, so DANE doesn't make this any worse. On the other hand, there may be some attackers who can't attack DNS, or clients who have cached copies of the cert lock records in DNS, in which case clients are protected from attack. So while DNSSEC is desirable, it's not actually required.

On second thought (and after some comments by Martin Rex), I'm a little less sure. An attacker who can intercept your connection is most likely also able to inject DNS records, so a naive client doesn't get protected by a cert lock style mechanism unless for some reason the attacker is unable to attack DNS. It's possible that a client might have a cached DNS record that contained a cert lock record, in which case it would be protected, but the duration of the cache time to live also limits the responsiveness of the client to unexpected certificate changes, so there are tradeoffs here.

Overall, then, I'm not sure how valuable I think a DNS-based cert lock mechanism is in the absence of DNSSEC. As they say, my views are evolving.


March 11, 2011

I've complained before about Farhad Manjoo's shallow analysis of the social implications of technical decisions, which seems to begin and end with what would be convenient for him. His latest post is an argument against anonymous comments on Internet forums/message boards/etc. Manjoo writes:

I can't speak for my bosses, who might feel differently than I do. But as a writer, my answer is no; I don't want anonymous commenters. Everyone who works online knows that there's a direct correlation between the hurdles a site puts up in front of potential commenters and the number and quality of the comments it receives. The harder a site makes it for someone to post a comment, the fewer comments it gets, and those comments are generally better.

I can appreciate how Manjoo might feel like that. No doubt as a writer it's annoying to get anonymous people telling you that you suck (and much as I find Manjoo's writing annoying, I'm forced to admit that even good writing gets that sort of reaction from time to time). However, this claim simply isn't true, or at least isn't supported by any evidence I know of. To the contrary, the Slate comments section (which Manjoo endorses later in his article) isn't really that great, and one of the most highly regarded blog comment sections, Obsidian Wings, is almost completely anonymous (though moderated), with the only barrier to posting being a CAPTCHA. Similarly, some of the most entertaining pure-comments sites, such as Fark, only require e-mail confirmation, which, as Manjoo admits, is virtually anonymous. I don't really know everything that makes a good comments section work, but it's a lot more complicated than just requiring people to use their real names.

I think Slate's commenting requirements (and those of many other sites) aren't stringent enough. Slate lets people log in with accounts from Google and Yahoo, which are essentially anonymous; if you want to be a jerk in Slate's comments, create a Google account and knock yourself out. If I ruled the Web, I'd change this. I'd make all commenters log in with Facebook or some equivalent third-party site, meaning they'd have to reveal their real names to say something in a public forum. Facebook has just revamped its third-party commenting "plug-in," making it easier for sites to outsource their commenting system to Facebook. Dozens of sites, including, most prominently, the blog TechCrunch, recently switched over to the Facebook system. Their results are encouraging: At TechCrunch, the movement to require real names has significantly reduced the number of trolls who tar the site with stupid comments.

This is an odd claim since Facebook actually makes no real attempt to verify your full name. Like most sites, they just verify that there is some e-mail address that you can respond at. It's not even clear how Facebook would go about verifying people's real names. Obviously, they could prune out people who claim to be Alan Smithee (though consider this), but the world is full of real John Smiths, so why shouldn't I be another one of them?

What's my beef with anonymity? For one thing, several social science studies have shown that when people know their identities are secret (whether offline or online), they behave much worse than they otherwise would have. Formally, this has been called the "online disinhibition effect," but in 2004, the Web comic Penny Arcade coined a much better name: The Greater Internet Fuckwad Theory. If you give a normal person anonymity and an audience, this theory posits, you turn him into a total fuckwad. Proof can be found in the comments section on YouTube, in multiplayer Xbox games, and under nearly every politics story on the Web. With so many fuckwads everywhere, sometimes it's hard to understand how anyone gets anything out of the Web.

I don't disagree that this is to some extent true, though I would observe that (a) the link Manjoo points to doesn't actually contain any studies as far as I can tell, just an article oriented towards the lay public and (b) it's not clear to what extent people's bad online behavior is a result of anonymity. Some of the most vicious behavior I've seen online has been on mailing lists where people's real-world identities (and employers!) are well-known and in some cases the participants actually know each other personally and are polite face-to-face.

As I said above, I don't think anyone really knows exactly what makes a good online community (though see here for some thoughts on it by others), but my intuition is that it's less an issue of anonymity than of getting the initial culture right, in a way that it resists trolling, flamewars, etc., or at least has a way to contain them. In comments sections that work, when someone shows up and starts trolling (even where this is easy and anonymous), the posters mostly ignore it and the moderators deal with it swiftly, so it never gets out of hand. Once the heat gets above some critical point on a regular basis, though, these social controls break down and it takes a really big hammer to get things back under control. It's not clear to me that knowing people's real names has much of an impact on any of that.


March 5, 2011

As I've mentioned before, a world with a lot of vampires is a world with a blood supply problem. I recently watched Daybreakers, which takes this seriously; nearly everyone in the world is a vampire and the vampires farm most of the remaining humans for blood while sending out undeath squads to round up the rest. Obviously, this isn't a scalable proposition and sure enough the vampires are frantically trying to develop some kind of substitute for human blood before supplies run out.

In a world where synthetic blood isn't possible, there's some maximum stable fraction of vampires, dictated by the maximum amount of blood that a non-vampire can produce divided by the amount of blood that a vampire needs to survive. According to Wikipedia, blood donations are typically around 500 ml and you can donate every two months or so. This works out to about 3 liters of blood per donor per year. Presumably, if you didn't mind doing some harm to the donors (e.g., if it's involuntary), you could get a bit more, but this still gives us a back-of-the-envelope estimate. I have no idea what vampires need, but if it's, say, a liter a day, then this tells you that any more than about 1% of the population being vampires is unstable. This is of course a classic externality problem, since being a vampire is cool, but not everyone can be a vampire. If vampires wish to avoid over-bleeding, they will need some sort of system to avoid creating new vampires.
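The back-of-the-envelope arithmetic above can be checked directly. All inputs are the post's own assumptions (500 ml per donation, a donation every two months, one liter per vampire per day), not real data.

```python
# Supply side: what one human donor can produce per year.
donation_liters = 0.5       # liters per donation (assumed)
donations_per_year = 6      # roughly every two months (assumed)
supply_per_human = donation_liters * donations_per_year  # 3 L/year

# Demand side: what one vampire consumes per year.
vampire_need = 1.0 * 365    # 1 liter/day (assumed)

# If v is the vampire fraction, supply balances demand when
#   (1 - v) * supply_per_human = v * vampire_need
# which solves to:
v_max = supply_per_human / (supply_per_human + vampire_need)
print(round(v_max * 100, 2))  # prints 0.82, i.e., a bit under 1%
```

So the stable ceiling is about 0.8% of the population, consistent with the "any more than about 1%" figure above.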

Luckily, this is a relatively well understood economics problem with a well-known solution: we simply set a hard limit on the number of vampires and then auction off the rights (cap-and-trade won't work well unless we have some way of turning vampires back into ordinary humans). I'd expect this to raise a lot of money, which we can then plow into synthetic blood research to hasten the day when everyone can be a vampire; either that, or research into better farming methods, the better to hasten the red revolution.