Once a site is declared bad, the following blocks can be put in place:
- The registrar/registry is required to suspend and lock the domain name.
- ISPs are required to attempt to block resolution of the domain name.
- Advertising networks are forbidden to serve ads on the site.
When I read stuff like this—or almost anything, for that matter—my thoughts immediately turn to how to attack it, or in this case how to circumvent the blocking. We need to consider two threat models from the blocker's perspective:
- Static users, who won't adapt their behavior at all.
- Adaptive users, who will attempt to actively circumvent blocking.
The history of file sharing suggests that many users in fact fall into the second category, as they have shifted from Napster to Limewire to BitTorrent, etc., but we should still consider both cases.
Static Users
Even if we only consider static users, a site can gain a fair amount of traction
by moving as many of its dependencies outside the US as possible. In particular,
it can register a domain name with a registrar/registry which is located
outside the US. This is harder than it sounds since many of the allegedly
foreign registries are actually run by US companies, but as far as I
know it's not impossible. That solves the first type of blocking, leaving
us with blocking by ISPs and ad networks. Obviously, if you don't serve
ads you don't care about ad networks, so this may or may not be an
issue and I don't know to what extent there are ad networks without
substantial US operations you can use.
Getting around ISP blocking is trickier. Many if not most people use their ISP's DNS server (they get it via DHCP) so if your customers are in the US then it's going to be trivial for the ISP to block requests to resolve your site. Basically, if your users aren't willing to do anything then you've pretty much lost your US audience.
Adaptive Users
If your users are willing to be a little adaptive then there are a bunch
of progressively more aggressive measures they can take to evade this
kind of DNS blocking. The easiest is that they can reconfigure their machines
to use an external unfiltered DNS service. This doesn't help if ISPs are required to
actively filter all DNS queries using some sort of deep packet inspection
technology. It's not difficult to build a box which will capture DNS queries
and rewrite them in flight, or alternatively, to block DNS queries to any
resolvers other than the ISP's own (note, many ISPs already
block TCP port 25 for spam blocking, so it's not like this is particularly
hard.) It's unclear to me that
this particular bill would require ISPs to do this kind of filtering,
since there is a specific safe harbor for the ISP to show that
they do not "have the technical means to comply with this section",
but obviously this is something that the government could require.
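To make the reconfiguration step concrete, here's a minimal Python sketch of what "using an external resolver" means on the wire: the client builds an ordinary RFC 1035 query and sends it over UDP port 53 to a resolver of its choosing rather than the DHCP-assigned one (the `8.8.8.8` address is just an illustrative public resolver, not anything from the original post). It also illustrates why the DPI countermeasure is so easy: the query travels in cleartext UDP, so a middlebox can trivially spot and rewrite it.

```python
import socket
import struct

def build_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: ID, flags (RD set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def resolve(name: str, resolver: str = "8.8.8.8") -> bytes:
    """Send the query straight to an external resolver, bypassing the
    ISP-assigned one. Returns the raw response message."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(build_query(name), (resolver, 53))
        return s.recvfrom(512)[0]
```

Because nothing here is encrypted or authenticated, a filtering box in the path sees exactly these bytes and can rewrite or drop them at will.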
One natural response is to use Tor, which has the advantage of being available right now. The disadvantage is that Tor wants to tunnel all your traffic which means that performance isn't that great, and it's kind of antisocial (as well as slow) to be using Tor to transfer gigabytes of movies from place to place when all you want to do is get unfiltered name resolution.
What's really needed is a name resolution mechanism that resists filtering. One option would be to have an encrypted connection to your DNS resolver (Dan Bernstein, call your office) or some non-DNS service that acts as a DNS proxy, e.g., DNS over TLS. This requires pretty substantial work by users and client authors to deploy and the load on those resolvers would be significant. Note that you don't need to modify the operating system to do this; there are plenty of user-land DNS resolution libraries available that could be embedded into your client. Still, the amount of work here isn't totally insignificant.
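As a sketch of what an encrypted channel to the resolver might look like, here is the framing later standardized as DNS over TLS (RFC 7858): each DNS message travels over a TLS connection on TCP port 853, prefixed with its 2-byte length. The `1.1.1.1` server address is purely illustrative (a modern public resolver that speaks this protocol), and the `query` argument is assumed to be a wire-format DNS message such as the one built in the previous sketch.

```python
import socket
import ssl
import struct

def frame(msg: bytes) -> bytes:
    """DNS-over-TLS framing: prefix each DNS message with its length
    as a 2-byte big-endian integer (RFC 7858)."""
    return struct.pack(">H", len(msg)) + msg

def dot_query(query: bytes, server: str = "1.1.1.1") -> bytes:
    """Send one wire-format DNS query over TLS on TCP port 853 and
    return the raw response. The server address is illustrative."""
    ctx = ssl.create_default_context()
    with socket.create_connection((server, 853), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=server) as tls:
            tls.sendall(frame(query))
            (rlen,) = struct.unpack(">H", tls.recv(2))  # response length
            resp = b""
            while len(resp) < rlen:
                chunk = tls.recv(rlen - len(resp))
                if not chunk:
                    break
                resp += chunk
            return resp
```

Since the query and response are inside the TLS tunnel, a DPI box can no longer read or rewrite them; its only remaining move is to block port 853 to known resolvers outright.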
Another option comes to mind, however. There's nothing wrong
with the ordinary ISP-provided DNS for most resolutions.
There aren't
going to be that many domains on this block list and the government
helpfully publishes a list of them. Someone could easily gather
a list of the blocked domains and the IPs they had when blocked,
or even maintain an emergency parallel update system to let
the blocked domains update their records. All that's required is
a way to retrieve that data, which could easily fit into a single
file. Moreover, the resulting file could be formatted as an /etc/hosts
file which people could just install on their machines, at which
point the standard operating system mechanisms would cause it
to bypass DNS resolution. The result would be ordinary
DNS resolution most of the time, with blocked hosts resolved
from /etc/hosts instead. All that's required
is some way to periodically update the bypass list, but that could
be done manually or with a tiny program.
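The "tiny program" really is tiny. Here's a hedged sketch of the idea: given a hypothetical published list of blocked domains and their last-known IPs, emit /etc/hosts-format lines that the OS resolver consults before falling back to DNS. The domain names and addresses below are made-up examples (the IPs are from the RFC 5737 documentation ranges).

```python
def to_hosts_file(entries):
    """Turn (domain, ip) pairs into /etc/hosts-format text:
    one 'IP<tab>hostname' line per entry."""
    lines = ["# bypass list -- generated, do not edit by hand"]
    for domain, ip in entries:
        lines.append(f"{ip}\t{domain}")
    return "\n".join(lines) + "\n"

# Hypothetical bypass list; real entries would come from the
# published block list plus the domains' last-known addresses.
blocked = [("example-blocked.org", "203.0.113.7"),
           ("another-blocked.net", "198.51.100.25")]
print(to_hosts_file(blocked), end="")
```

Appending the output to /etc/hosts (or its Windows equivalent) is all a user would need to do; updates are just re-fetching one small file.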
Of course, there are still plenty of blocking mechanisms available to the government: they could require IP-level blocking or attempt to block distribution of the bypass list, though the list is probably small enough to make that impractical. However, I think this makes clear that just blocking DNS is not likely to be as effective as one would like if users are willing to put in a modest amount of effort.
Acknowledgement: This post benefited from substantial discussions with Cullen Jennings.