Networking: January 2011 Archives


January 22, 2011

Slate has published another Farhad Manjoo screed against unlimited Internet service.
And say hooray, too, because unlimited data plans deserve to die. Letting everyone use the Internet as often as they like for no extra charge is unfair to all but the data-hoggiest among us--and it's not even that great for those people, either. Why is it unfair? For one thing, unlimited plans are more expensive than pay-as-you-go plans for most people. That's because a carrier has to set the price of an unlimited plan high enough to make money from the few people who use the Internet like there's no tomorrow. But most of us aren't such heavy users. AT&T says that 65 percent of its smartphone customers consume less than 200 MB of broadband per month and 98 percent use less than 2 GB. This means that if AT&T offered only a $30 unlimited iPhone plan (as it once did, and as Verizon will soon do), the 65 percent of customers who can get by with a $15 plan--to say nothing of the 98 percent who'd be fine on the $25 plan--would be overpaying.
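For what it's worth, Manjoo's arithmetic is easy to make concrete. Here's a back-of-the-envelope sketch using the usage breakdown and plan prices he cites ($15/200 MB, $25/2 GB, $30 unlimited); the assumption that everyone pays $30 under an unlimited-only regime is his scenario, as is treating the 65%/98% figures as the fraction of users for whom each tier suffices:

```python
# Back-of-the-envelope version of Manjoo's overpayment arithmetic.
# Plan prices and usage shares are the figures quoted in the Slate piece.
PLANS = {"200MB": 15, "2GB": 25, "unlimited": 30}

# Fraction of smartphone users whose cheapest adequate plan is each tier
# (65% fit under 200 MB; another 33% fit under 2 GB; 2% are heavy users).
shares = {"200MB": 0.65, "2GB": 0.33, "unlimited": 0.02}

# Average price per user if everyone buys the cheapest plan that fits...
tiered = sum(shares[p] * PLANS[p] for p in PLANS)

# ...versus everyone paying $30 because only an unlimited plan exists.
unlimited_only = PLANS["unlimited"]

print(f"tiered average:  ${tiered:.2f}/mo")                    # $18.60
print(f"unlimited-only:  ${unlimited_only}/mo")
print(f"average 'overpayment': ${unlimited_only - tiered:.2f}/mo")
```

On these numbers the average user "overpays" by about $11/month, which is the whole of his unfairness argument.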

This seems extremely confused. First, it's generally true that whenever a business offers a limited number of products, each at a fixed price, some people overpay because they only want some cheaper offering that the company doesn't provide. For instance, when I bought my last car, Audi insisted on selling me the "winter sports package" (heated seats and a ski bag). I don't do a lot of skiing and I didn't want either, but that's the way the thing came. By Manjoo's logic, it was unfair that I had to pay more for a ski bag I would never use (the heated seats are great, by the way), but that's just the way the product comes. Sure, I'd rather the company offered exactly the package I wanted, but a limited number of offerings is just a standard feature of capitalism.

It's worth observing that there's nothing special about the "unlimited" plan in Manjoo's logic. (It's not really unlimited anyway: the network has some finite amount of bandwidth available, so that provides a hard upper limit on how much data you can transfer in a month; it's just that that limit is really high.) Say Verizon offered only a 2GB plan: would he be whining that he only used 200 MB of bandwidth and so was being made to overpay so Verizon could make money on the 2GB-using bandwidth hogs? This objection is pretty hard to take seriously.

Manjoo goes on:

But it's not just that unlimited plans raise prices. They also ruin service. Imagine what would happen to your town's power grid if everyone paid a flat rate for electricity: You and your neighbors would set your thermostats really high in the winter and low in the summer, you'd keep your pool heated year-round, you'd switch to plug-in electric cars, and you'd never consider replacing your ancient, energy-hogging appliances. As a result, you'd suffer frequent brownouts, you'd curse your power company, and you'd all wish for a better way. Economists call this a tragedy of the commons, and it can happen on data networks just as easily as the power grid--faced with no marginal cost, it's in everyone's interest to use as much of the service as they can. When that happens, the network goes down for everyone.

So, first, this is just wrong: it's actually reasonably common for utilities to be included in people's leases, and yet when that happens people don't automatically switch to plug-in cars or start up home aluminum smelters. That isn't to say that having to pay for each watt of power doesn't have some impact on your consumption, but there is only so much power that it's really convenient for people to use; it's not as if power being free causes consumption to spiral off to infinity. To take another example, it's absolutely standard for local voice telephony service to be sold flat rate, and yet practically nobody leaves their phone line tied up 24x7 just in case they want to say something to Mom and don't feel like taking the trouble to dial the phone. (Full disclosure: I actually have used dialup Internet as a replacement for a leased line this way, but that's a pretty rare use case.)

The second problem with this claim is that computer networks don't behave the way the electrical grid does in the face of contention. Like the electrical grid, computer networks are sized for a certain capacity, but unlike the grid, computers aren't built with the assumption that that capacity is effectively infinite. If the electrical grid in your area is operating at full capacity and you turn on your AC, this can cause a brownout because there is no way for the power company to tell everyone to use 1% less power, and even if there were, many of the devices in question are simply designed to draw constant power. By contrast, computer network protocols are designed to operate in conditions where they can't use as much bandwidth as they would like, because non-infinite bandwidth is a basic feature of the system. Even when there is no contention for the network, applications need to work over a variety of connection types, so people who build applications typically build them to adapt automatically to how much throughput they are actually getting. For instance, Netflix uses adaptive streaming, which means that it tries to detect how fast your network is, and if it's slow it compresses the media harder to reduce the amount of data it has to send. What this means is that, unlike the electrical grid, where your computer may just crash if it doesn't get enough power, if the network suddenly gets slower, performance degrades relatively smoothly.
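As a sketch of the kind of adaptation described above (a generic illustration with a made-up encoding ladder, not Netflix's actual algorithm), a client can simply pick the highest encoding its measured throughput can sustain, with a little headroom, and step down as the network slows:

```python
# Generic adaptive-streaming sketch; the bitrate ladder is hypothetical.
BITRATES_KBPS = [235, 560, 1050, 2350, 4300]

def select_bitrate(measured_kbps, headroom=0.8):
    """Return the highest bitrate at or below headroom * throughput,
    falling back to the lowest encoding on a very slow link."""
    budget = measured_kbps * headroom
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    return max(candidates) if candidates else min(BITRATES_KBPS)

# As the network slows, quality steps down instead of the stream failing.
for throughput in (6000, 2000, 700, 100):
    print(throughput, "kbps link ->", select_bitrate(throughput), "kbps video")
```

The point isn't the specific policy; it's that degradation is a designed-in response rather than a failure mode.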

The second thing you need to know is that in data networks congestion is (almost) the only thing that matters. If nobody else is trying to use the network right now then it's fairly harmless if you decide to consume all the available capacity. What's important is that when other people do want to use the network you back off to give them room. So, to the extent to which there is a scarce resource it's not total download capacity but rather use of the network at times when it's actually congested. To a great extent network protocols (especially TCP) already do attempt to back off in the face of congestion but there's also nothing stopping the provider from deliberately imposing balance on you (cf. fair queueing). In either case, this is a relatively orthogonal issue to the volume of data transferred; a cap on total transfer is an extremely crude proxy for the kind of externality Manjoo is talking about. Not only is it crude, it's inefficient: it discourages use of the network which would be cost-free for others and of value to the customer using the network.
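The backoff dynamic is easy to see in a toy model of TCP-style AIMD (additive increase, multiplicative decrease); this is a cartoon of the mechanism, not a faithful TCP implementation. Each sender probes upward while the link has room and halves its rate on congestion, which drives even wildly unequal flows toward an even split of the capacity:

```python
# Toy AIMD model of TCP-style congestion backoff (not real TCP).
CAPACITY = 100.0  # link capacity, arbitrary units

def aimd(rates, rounds=500, increase=1.0, decrease=0.5):
    rates = list(rates)
    for _ in range(rounds):
        if sum(rates) > CAPACITY:          # congestion: everyone backs off
            rates = [r * decrease for r in rates]
        else:                              # spare capacity: everyone probes up
            rates = [r + increase for r in rates]
    return rates

# Start wildly unequal: one "hog" at 90 units, one newcomer at 1.
final = aimd([90.0, 1.0])
print([round(r, 1) for r in final])  # the two rates end up nearly equal
```

Additive increase leaves the gap between flows unchanged while each multiplicative decrease halves it, so repeated congestion events shrink the hog's advantage geometrically toward a fair share.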

All this stuff has of course been hashed out endlessly in the networking economics literature, and the above is only the barest sketch. Suffice it to say that simply applying this sort of naive "tragedy of the commons" analysis doesn't really get you very far.