Networking: February 2008 Archives


February 28, 2008

One of the main reasons to have a blog is to call a bad idea a bad idea. Here's one. Former FBI Agent Patrick J. Dempsey suggests:
It's obvious that the Internet requires some type of governance. But it is just as obvious that trying to establish this governance through the numerous legal systems might not be practical. The other possibility for governing the Internet, and, more specifically, the criminal activity that occurs on the Internet, would be to change the structure of the Internet. Although I don't support ideas like the "national firewalls" put in place by some countries, this type of solution does afford some level of control over Internet traffic flowing through said country.

However, knowing all the possibilities with disguising or "spoofing" one's information on the Web, I'm not sure that there is a way to truly "protect our borders" when it comes to the Internet. The solution might be to establish two Internets -- the current Internet and a new, more secure Internet where users would be required to register prior to gaining access. Once again, though, we're confronted with the issue of what would be the governing body that would manage the user registrations? Would it be an organization similar to the IANA (Internet Assigned Numbers Authority) or InterNIC that would manage user registrations on the "new" Internet, or do we need to establish an entirely new entity to manage a more secure Internet?

The problem with this idea is it's totally confused about the security problem with the Internet, which has a lot more to do with stupid users and insecure software than it does with failing to authenticate everyone with a modem.

Let's play this out: you set up your new secure Internet. There's already an Internet 2, so let's call it Internet 3 or I3. Anyway, we've got I3 up and running and before they'll give you a connection you have to give them your fingerprint, irisprint, a blood sample and the keys to your car. Of course, if you want I3 to be useful, you have to let pretty much anyone on, so just like the Internet, I3 is full of hackers. And since your software isn't any more secure than it was before, you're still just as likely to have your machine compromised. Now, it's true that having positive identification for each user might make forensics a tiny bit easier: once you've managed to track the user down to the account they initially logged in from, you know who to arrest. But of course, hackers use compromised machines as stepping stones, so tracking them down isn't easy, and of course it's not exactly difficult to steal people's account information and log in as them instead of yourself.

Even if we somehow were able to create an I3 without any hackers on it, it wouldn't stay that way for long. I3 is one big sterile area, so as soon as any significant number of compromises happen it's game over. Initially, I3 is going to be pretty lame, so people are going to use both the Internet and I3. And since the Internet is full of hackers and their machines are compromised and they're going to use the same machines for both the Internet and I3, it's not going to be long before plenty of I3 credentials are circulating in the hacker community. Creating isolated networks is really hard even when you're working in real high security environments. It's basically impossible when you're dealing with millions of people, many of whom are willing to run any random .exe file you send them.


February 20, 2008

Cayman bank Julius Baer Bank and Trust has convinced a federal judge to shut down DNS service for Wikileaks:
On Friday, Judge Jeffrey S. White of Federal District Court in San Francisco granted a permanent injunction ordering Dynadot, the site's domain name registrar, to disable the domain name. The order had the effect of locking the front door to the site -- a largely ineffectual action that kept back doors to the site, and several copies of it, available to sophisticated Web users who knew where to look.

Domain registrars like Dynadot and GoDaddy.com provide domain names -- the Web addresses users type into browsers -- to Web site operators for a monthly fee. Judge White ordered Dynadot to disable the address and "lock" it to prevent the organization from transferring the name to another registrar.

The feebleness of the action suggests that the bank, and the judge, did not understand how the domain system works, or how quickly Web communities will move to counter actions they see as hostile to free speech online.

The site itself could still be accessed at its Internet Protocol address -- the unique number that specifies a Web site's location on the Internet. Wikileaks also maintained "mirror sites," or copies usually produced to ensure against failures and this kind of legal action. Some sites were registered in Belgium, Germany and the Christmas Islands through domain registrars other than Dynadot, and so were not affected by the injunction.

There's also a mirror at Cryptome.

For those of you who don't know how this all works, there are registries, who actually run the top-level domain (.org in this case), and then there are registrars, who deal with the customers. Any given top level domain typically has multiple registrars that service it, all of whom populate the same database, operated by the registry. So, the locking thing stops Wikileaks from transferring their domain to another registrar who would then reactivate it.
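Here's a toy model of that arrangement. To be clear, the real protocol registrars speak to registries is EPP, which looks nothing like this; the class and field names below are purely illustrative.

```python
# Toy model of the registry/registrar split. Illustrative only: the real
# registrar-to-registry protocol is EPP, and these names are made up.

class Registry:
    """The registry (e.g., PIR for .org) holds the single authoritative
    database mapping each domain to its sponsoring registrar."""
    def __init__(self):
        self.db = {}  # domain -> {"registrar": ..., "active": ..., "locked": ...}

    def register(self, domain, registrar):
        self.db[domain] = {"registrar": registrar, "active": True, "locked": False}

    def disable(self, domain):
        self.db[domain]["active"] = False

    def lock(self, domain):
        self.db[domain]["locked"] = True

    def transfer(self, domain, new_registrar):
        # A locked domain can't move to a registrar who would reactivate it.
        if self.db[domain]["locked"]:
            raise PermissionError("domain is locked; transfer refused")
        self.db[domain]["registrar"] = new_registrar
        self.db[domain]["active"] = True

registry = Registry()                      # the .org registry
registry.register("wikileaks.org", "Dynadot")

# The injunction: Dynadot disables the name and locks it.
registry.disable("wikileaks.org")
registry.lock("wikileaks.org")

# A friendlier registrar can't just pick the name up and turn it back on:
try:
    registry.transfer("wikileaks.org", "SomeOtherRegistrar")
except PermissionError as e:
    print(e)   # domain is locked; transfer refused
```

The point of the lock is exactly that last step: without it, the transfer would succeed and the new registrar could reactivate the name.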

OK, so this order controls the registrar. But can Wikileaks just go to the registry and get them to move it to some other registrar, locking notwithstanding? In this case, Wikileaks is under .org, which is run by the Public Interest Registry. Operationally, the PIR is run by Afilias. Both of these are based in the US, so presumably the injunction could be expanded to include them as well. On the other hand, as the article notes, there are plenty of registries with no US connection, and the only way for a US judge to take down domains under them would be to go after ICANN, which, despite complaints about the US running the DNS, seems pretty unlikely.

As you may be gathering at this point, this is all pretty pointless. It's basically impossible to censor stuff like this once it gets out. We're seeing the first level of countermeasure here, but even if by some miracle the judge managed to shut down every domain name serving the contraband material (and the decision loop for spreading those domain names is a lot faster than your average judge's decision-making process), people can just move to IP addresses published by some other means (like other people's web sites). And there are about three levels of escalation up from there, all of which are progressively harder to censor.
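The first escalation is trivial to model: killing a name in DNS doesn't take the host off the network. Here's a toy sketch (the domain and address below are made up; nothing here reflects the actual Wikileaks servers):

```python
# Toy model: removing a name from DNS doesn't remove the host it pointed to.
# Name and address are invented for illustration.

dns = {"example-leaks.org": "203.0.113.7"}   # what the resolver knows
hosts_on_network = {"203.0.113.7": "the contraband documents"}

def fetch_by_name(name):
    ip = dns.get(name)
    if ip is None:
        return None            # NXDOMAIN: the browser can't find the site
    return hosts_on_network[ip]

def fetch_by_ip(ip):
    return hosts_on_network.get(ip)

# The injunction removes the name from DNS...
del dns["example-leaks.org"]
assert fetch_by_name("example-leaks.org") is None

# ...but anyone who learns the raw address from a mirror, a blog post, or a
# mailing list can still connect directly.
assert fetch_by_ip("203.0.113.7") == "the contraband documents"
```

And since the address circulates faster than any injunction, the censorship race is lost before it starts.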

It will be interesting to see if JBBT goes after the mirrors, though.


February 16, 2008

The EFF has obtained a document under FOIA describing an incident in which an email provider was served with an NSL for some email communications and accidentally sent far too much information to the FBI:
In late February 2006, a surge in data being collected by the FBI's Engineering Research Facility (ERF) was identified by ERF personnel. As a result, ERF investigated the issue and recognized that the collection tools used to collect email communication from the subject of the investigation were improperly set and appeared to be collecting data from the entire email domain. Due to an apparent miscommunication, the private internet provider accidentally collected mail from the entire domain and subsequently conveyed the email to ERF.
(NYT story here).

I'm sort of curious what kind of tools the ISPs are using here. You certainly can reconfigure your mailer to forward copies of mail addressed to certain accounts somewhere else, though outgoing mail is a little trickier. In any case, I'd be a little surprised if the FBI expected something quite so DIY. Maybe when they send you an NSL it comes with a pamphlet telling you how to reconfigure Outlook.
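We don't know what the collection tools actually lookked like, but the failure mode the FOIA document describes -- "improperly set" tools "collecting data from the entire email domain" -- is easy to reproduce in any address-matching filter: match on the domain part when you meant to match the whole address. A hypothetical sketch (the target address is invented):

```python
# Hypothetical sketch of the overcollection bug: the tool is supposed to
# copy mail for one target address, but the filter is set to match the
# target's whole domain instead. Addresses are made up.

TARGET = "suspect@example.com"

def should_collect_correct(envelope_to):
    # Match the full address the order actually named.
    return envelope_to.lower() == TARGET

def should_collect_broken(envelope_to):
    # Improperly set: match anything in the target's domain.
    return envelope_to.lower().endswith("@" + TARGET.split("@")[1])

mail = ["suspect@example.com", "innocent@example.com", "other@elsewhere.net"]

print([m for m in mail if should_collect_correct(m)])  # just the subject
print([m for m in mail if should_collect_broken(m)])   # the whole domain
```

One misplaced comparison and every customer at the provider is being "collected," which is more or less what the document says happened.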

Apparently, this happens reasonably often. The FBI calls it "overproduction":

A report in 2006 by the Justice Department inspector general found more than 100 violations of federal wiretap law in the two prior years by the Federal Bureau of Investigation, many of them considered technical and inadvertent.


In the warrantless wiretapping program approved by President Bush after the Sept. 11 terrorist attacks, technical errors led officials at the National Security Agency on some occasions to monitor communications entirely within the United States -- in apparent violation of the program's protocols -- because communications problems made it difficult to tell initially whether the targets were in the country or not.

Past violations by the government have also included continuing a wiretap for days or weeks beyond what was authorized by a court, or seeking records beyond what were authorized. The 2006 case appears to be a particularly egregious example of what intelligence officials refer to as "overproduction" -- in which a telecommunications provider gives the government more data than it was ordered to provide.

The problem of overproduction is particularly common, F.B.I. officials said. In testimony before Congress in March 2007 regarding abuses of national security letters, Valerie E. Caproni, the bureau's general counsel, said that in one small sample, 10 out of 20 violations were a result of "third-party error," in which a private company "provided the F.B.I. information we did not seek."

To quote Broken Arrow, "I don't know what's scarier, losing a nuclear weapon or that it happens so often there's actually a term for it." Outstanding!


February 1, 2008

Writing about Microsoft's acquisition offer for Yahoo (everyone has a clever proposed name for this, mine is MiHoo, pronounced my-hoo), Chris Wilson says:
As access to the Internet has become ubiquitous, computer users have increasingly gone to the Web for what were once offline tasks. In the near-future, it will become more efficient to run an application, like a word processing program, off of a central network of computers rather than an individual hard drive. The concept is known as "cloud computing": Documents will begin and end their lives on a server rather than a personal computer, and users will be able to access their personal documents and favorite programs wherever they are with any networked device.

Wilson is certainly right to observe that there is real interest in moving functionality from the machines on people's desks to servers, but of course we've been around this cycle a number of times before (remember mainframes? minis? XTerminals? the SunRay?) so it's not like conditions here are unique. What's semi-unique is that we're returning to the really early days of computing where (at least sometimes) clients from multiple organizations connect to a central server owned by some other organization. Back in the day, it was because computers were so expensive; now it's driven by networks and computers being cheap, but management being expensive. (As Tech Ennui observes, this is related to, but not the same as, Myers and Sutherland's wheel of reincarnation.)

It's also not really clear how much Web apps like the ones Google is building are really going to take off. People are really pretty committed to their desktop word processing apps, spreadsheets, etc. It remains to be seen if they're really going to be willing to outsource them to Google, Microsoft, whatever.

Interestingly, in order to move the nominal location of services into the cloud, we're moving processing onto the desktop (this is different from, say, XTerminals, where nearly all the processing was on the server (the X client)). Classic Web apps use the client as a dumb display engine, but Web 2.0 AJAX-style apps move processing onto the client using JavaScript. In fact, if you squint hard enough, AJAX looks a tiny bit like NeWS.