Networking: September 2009 Archives

 

September 14, 2009

David Coursey complains about how long it took IEEE to develop 802.11n:
802.11n is the poster child for a standards process gone wrong. Seven years after it began and at least two years after 802.11 "draft" devices arrived, the IEEE has finally adopted a final standard for faster, stronger, more secure wireless.

Ideally, standards arrive before the products that implement them. However, the IEEE process moved so slowly that vendors adopted a draft standard and started manufacturing hardware. After a few little glitches, the hardware became compatible and many of us have--for years--been running multivendor 802.11n networks despite the lack of an approved standard.

...

If standards bodies expect to be taken seriously, they need to do their work in reasonable periods. Releasing a "final" standard long after customer adoption has begun is not only anti-climactic but undercuts the value of the standards making process.

In this case, the process failed. The IEEE should either improve its process or get out of the way and let industry leaders just create de facto standards as they see fit. That is not preferable, but if the IEEE process is stuck, it will be what happens.

My experience with IEEE standards making is limited, but I have extensive experience with IETF's process, and I'm a little puzzled as to what Coursey thinks the problem is here. Developing standards is like developing any other technical artifact: you start out with an idea, do some initial prototypes, test those prototypes, modify the design in response to the testing, and iterate till you're satisfied. Now, in the case of a protocol standard, the artifact is the document that defines how implementations are supposed to behave, and the testing phase, at least in part, is implementors building systems that (nominally) conform to the spec and seeing how well they work, whether they interoperate, etc. With any complicated system, this process needs to include building systems which will be used by end-users and seeing how they function in the field. If you don't do this, you end up with systems which only work in the lab.

There's not too much you can do to avoid going through these steps; it's just really hard to build workable systems without a certain level of testing. Of course, that still leaves you with the question of when you call the document done. Roughly speaking, there are two strategies: you can stamp the document "standard" before it's seen any real deployment and then churn out a revision a few years later in response to your deployment experience. Alternatively, you can go through a series of drafts, refining them in response to experience, until eventually you just publish a finished standard, but it's based on what people have been using for years. An intermediate possibility is to have different maturity levels. For instance, IETF has "proposed standards", "draft standards", and then "standards". This doesn't work that well in practice: it takes so long to develop each revision that many important protocols never make it past "proposed standard." In all three cases, you go through mostly the same system development process, you just label the documents differently.

With that in mind, it's not clear to me that IEEE has done anything wrong here: if they decided to take the second approach and publish a really polished document, and 802.11n is indeed nice and polished and won't need a revision for 5+ years, then this seems like a fairly successful effort. I should hasten to add that I don't know that this is true: 802.11n could be totally broken. However, the facts that Coursey presents sound like pretty normal standards development.

 

September 13, 2009

One of the results of Joe Wilson (R-South Carolina) calling President Obama a liar on national TV was that money started pouring in, both to Wilson and to his likely opponent in 2010 (Rob Miller). Piryx, which hosts Wilson's site, claims that on Friday and Saturday they were subjected to a 10-hour DoS attack against their systems:
Yesterday (Friday) around 3:12pm CST we noticed the bandwidth spike on the downstream connections to Piryx.com server collocation facility. Our bandwidth and packet rate threshold monitors went off and we saw both traditional DOS bandwidth based attacks as well as very high packet rate, low bandwidth ICMP floods all destined for our IP address.

...At this point we have spent 40+ man hours, with 10 external techs fully monopolized in researching and mitigating this attack.

To give a sense of scale, the attacks were sending us 500+ Mbps of traffic, which would run about $147,500 per month in bandwidth overages.
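Those numbers pass a rough sanity check. If the overage is billed per gigabyte transferred (an assumption on my part; Piryx doesn't say how their bandwidth is metered), 500 Mbps sustained for a 30-day month works out to roughly 160 TB, which puts the implied overage rate around $0.90/GB, plausible for 2009-era hosting. Here's the back-of-envelope arithmetic:

```python
# Back-of-envelope check on the quoted figures, assuming per-gigabyte
# overage billing (an assumption; the billing model isn't stated).
SECONDS_PER_MONTH = 30 * 24 * 3600            # ~2.59 million seconds
attack_rate_bps = 500e6                       # 500 Mbps, per the quote

bytes_per_month = attack_rate_bps / 8 * SECONDS_PER_MONTH
gb_per_month = bytes_per_month / 1e9          # ~162,000 GB (~162 TB)

quoted_overage_usd = 147_500                  # per month, per the quote
implied_rate = quoted_overage_usd / gb_per_month   # ~$0.91 per GB

print(f"{gb_per_month:,.0f} GB/month -> implied rate ${implied_rate:.2f}/GB")
```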

I think most people would agree that technical attacks on candidates' Web sites, donation systems, etc. aren't good for democracy—just as it would be bad if candidates were regularly assassinated—and it would be good if they didn't happen. While there are technical countermeasures against DoS, they're expensive and only really work well if you have a site with a lot of capacity so that you can absorb the attack, which isn't necessarily something that every hosting service provider (HSP) has.

This may turn out to be a bad idea, but it occurred to me that one way to deal with this kind of attack might be for the federal government to simply run its own HSP, dedicated solely to hosting sites for candidates and to accepting payments on their behalf. Such a site could be large enough—though still comparatively small next to the big service providers—to resist most DoS attacks. Also, to the extent to which everyone ran their candidate sites there, it would remove the differential effect of DoS attacks: sure, you can DoS the site, but you're damaging your own preferred candidate as much as the opposition. Obviously, this doesn't help if the event that precipitates the surge of donations massively favors one side, but in this case, at least, both sides saw a surge. I don't know if this is universally true, though.

Of course, this would put the site operator (either the feds or whoever they outsourced it to) in a position to know who donated to which candidate, but in many cases this must be disclosed anyway, and presumably, if the operation were outsourced, one could put a firewall in place to keep the information not subject to disclosure away from the feds.