March 2008 Archives

 

March 30, 2008

Speedo has put an immense amount of effort into developing faster swimsuits, and swimmers wearing their newest suit, the LZR Racer, have broken 12 of the 13 world records set in the past 6 weeks. Understandably, FINA (the governing body for swimming) is somewhat concerned:
EINDHOVEN, Netherlands - The slick new swimsuit that has led to 12 world records already this year will be examined by swimming's governing body amid debate about the quest for speed in the pool.

"There are concerns about suits being like triathlon suits, which are thicker," FINA executive director Cornel Marculsecu told SwimNews.com on Monday. "There are buoyancy issues. We have to review this."

There have been 13 world records set since mid February, 12 in the LZR Racer, a full-length body swimsuit made by Speedo, a brand of Warnaco Group Inc.

There's a lot of science in the LZR. Principally, there are a bunch of features (bonded seams, water repellent fabric) to reduce drag, but the most interesting one is that it's deliberately stiff and supportive around the waist:

The internal core stabilizer supports and holds the swimmer in a corset-like grip and helps them to maintain the best body position in the water for longer.

Body position really is important in swimming—when I was doing triathlon it was one of the things I found hardest to learn. I can't say whether something like this would have helped me, and I don't know enough about the technology of swimsuits to say if any of this helps. I skimmed Speedo's site and they claim to have research showing that these suits improve oxygen efficiency, which basically maps to performance. That said, it's certainly the case that your clothes can change swim performance: baggy suits create more drag, and wetsuits dramatically increase performance due to buoyancy and (as far as I can tell) especially by keeping your lower body high, thus reducing drag even without the need to kick (cf. pull buoys).

The most interesting part of Speedo's site for me was the interviews with athletes who talked about how much they loved the suits and how nice they felt to swim in. The topic of the impact of technology on sports (and fair competition, and purity, etc.) has been discussed ad nauseam, but one thing that rarely comes out in such discussions is the user experience, namely that going fast is fun, and even small differences in your gear make a big difference in how responsive and fast you feel. One of the great things about racing is that you get to use your very best, fastest gear—often stuff you couldn't train with every day, either because it's too expensive or (as with racing flats) it's too hard on your body to use all the time. Of course, after a while you get used to the good stuff and then it doesn't feel as great, which is another reason most athletes reserve it for race day. (I've heard swimmers say the same about shaving down.)

 

March 29, 2008

This NYT article talks about the problem of assuring the quality of food and medical products sourced outside the US (though all the incidents it describes actually involve products from China).
When cold medicine containing a poison made in China killed nearly 120 Panamanians in 2006 and early 2007, Americans could take some comfort in the belief that a similar epidemic could never happen here, not with one of the best drug regulatory systems in the world.

Then last spring, hundreds if not thousands of pets died or were sickened in the United States by a Chinese pet food ingredient that contained lethal levels of melamine, an industrial product used to artificially boost protein levels. That was followed quickly by the discovery that Americans were brushing their teeth with Chinese toothpaste containing a poisonous chemical used in antifreeze.

Still, no Americans died from the chemical.

And then came heparin.

A hugely popular blood thinner used in surgery and dialysis, heparin turned out in some cases to contain a mystery substance that sophisticated lab tests earlier this month determined to be a chemically modified substance that mimics the real drug. The United States Food and Drug Administration has linked it to 19 deaths and hundreds of severe allergic reactions, though the agency is still investigating whether the contaminant was the actual cause.

...

Congressional Democrats are talking about authorizing more money so the F.D.A. can do more overseas inspections, particularly in China, where more and more drug ingredients are made. The agency is also completing a plan to permanently station employees in China for the first time.

The article also comes with an extremely scary photo of one of the small workshops in China where the pig intestines are processed. That doesn't appear to be the problem, though. Based on the article and the Wikipedia article on heparin, the contamination doesn't seem to have been a manufacturing problem but rather an intentional adulterant, as, it seems, were the toothpaste and pet food incidents. That's a different story entirely, and it seems a lot less likely that just jacking up the inspection rate or having onsite inspectors is going to be very effective, for several reasons:
  • You only inspect a small fraction of the product and processes (e.g., you come by one day a month), so inspection is a lot better at detecting systemic carelessness than cases where the inspected party is deliberately trying to defraud you, which seems to be what is happening here.
  • Because inspection regimes only catch a small percentage of the offenses, they need to be coupled with a pretty aggressive punishment regime (see the back-of-the-envelope sketch after this list). Without that, you just stop the particular incident, but what you want is to deter all incidents.
  • One of the people quoted in the article, Dr. Roger L. Williams from USP, suggests that we could have a better testing procedure. Again, this is something that works best when you're looking for sloppy processes, since those follow a somewhat predictable pattern. When you're dealing with an active attack, you should expect the attackers to simply adapt their methods.
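To make the first two bullets concrete, here's a back-of-the-envelope deterrence calculation (the numbers are mine and purely illustrative, not from the article): for a rational fraudster, the expected penalty has to exceed the expected gain, and a low detection rate multiplies the required penalty accordingly.

```python
# Back-of-the-envelope deterrence model (illustrative numbers only).
# A monthly inspection sees roughly one day's production, so any given
# fraudulent lot is examined with probability ~1/30; assume the test
# catches the adulterant 90% of the time when it does look.

p_inspected = 1 / 30          # fraction of lots actually examined
p_caught_if_inspected = 0.9   # test sensitivity against this adulterant
p_caught = p_inspected * p_caught_if_inspected

gain_per_lot = 10_000         # profit from substituting the cheap adulterant

# Deterrence condition: p_caught * penalty > gain_per_lot
min_penalty = gain_per_lot / p_caught
print(f"P(caught) = {p_caught:.3f}")                # => 0.030
print(f"Deterrent penalty > ${min_penalty:,.0f}")   # => ~$333,333
# With a 3% detection rate the fine has to be ~33x the per-lot gain,
# which is why inspection alone mostly catches carelessness, not fraud.
```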

Fundamentally, the entire food and drug system is based on trust-but-verify. If we're dealing with suppliers which can't be trusted at all, we need to either get a different attitude or deal with a different set of suppliers.

 

March 26, 2008

I'm not a WoW player but a bunch of my friends are, and they seem to put in a really enormous number of hours just acquiring experience and loot. I guess this is pretty boring even by WoW standards, so it's not at all surprising that people have developed automatic WoW players. Now Blizzard is suing MDY, the creator of one such bot, MMO Glider. Unsurprisingly, Blizzard doesn't like bots, since they provide a very substantial advantage to bot users over everyone else (again, I'm not a WoW player, but I think one has to concede they have a point here), and goes to a lot of effort to block them.

Unfortunately for Blizzard, determining what software is running on a remote computer controlled by your adversary is known to be an incredibly difficult problem—as far as I know there is no general solution that doesn't involve some sort of trusted computing base on the remote computer (cf. TCG),[1] which of course most people don't have. That hasn't stopped Blizzard from trying, of course. They install a program called Warden on your computer which tries to detect whether you're running cheat programs in parallel with WoW itself. Unsurprisingly, MDY has circumvention technology which evades Warden. So, from a technical perspective, this is a losing game for Blizzard. However, that doesn't mean that they can't win their lawsuit.

As I understand the situation, Glider isn't a WoW reimplementation, it's just a control program for WoW. So you start up WoW (or rather Glider does) and then Glider runs the various WoW operations for you. Blizzard argues that running WoW this way exceeds the EULA and so by building a tool designed to be used this way, MDY is engaged in contributory copyright infringement.

I'm not a lawyer, so I'm not going to offer an opinion on the merits of this argument, but suppose it holds up in court: does MDY have a technical recourse? That's a difficult problem. Glider depends on WoW, so if MDY is enjoined from distributing a tool that drives it, life gets a lot harder. They could obviously do a WoW client implementation from scratch, but aside from that being a lot of work, it is actually incredibly easy for Blizzard to detect: they can simply have the server ask the client for a randomly chosen (by the server) section of its code. In order to emulate a real client, Glider would need to have a copy of the WoW client floating around. Would sending the requested section to Blizzard then constitute copyright infringement as well?
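For concreteness, here's a minimal sketch of the code-challenge idea (my construction for illustration, not anything Blizzard is known to ship): the server asks for a keyed hash over a randomly chosen region of the client binary, so answering correctly requires having the real bytes on hand.

```python
import hmac, hashlib, secrets

CHUNK = 4096  # size of the code region the server asks about

# Server side: pick a random region of the known-good client binary.
def make_challenge(binary_len):
    offset = secrets.randbelow(binary_len - CHUNK)
    nonce = secrets.token_bytes(16)   # stops replaying recorded answers
    return offset, nonce

# Both sides compute a MAC over the requested region under a session key;
# a correct answer proves possession of the actual bytes at that offset.
def region_mac(binary, offset, nonce, session_key):
    region = binary[offset:offset + CHUNK]
    return hmac.new(session_key, nonce + region, hashlib.sha256).digest()

# Toy run with a stand-in "binary": the server checks the client's answer
# against its own reference copy.
good_binary = bytes(range(256)) * 64
session_key = secrets.token_bytes(32)
offset, nonce = make_challenge(len(good_binary))
answer = region_mac(good_binary, offset, nonce, session_key)
assert answer == region_mac(good_binary, offset, nonce, session_key)
```

The point is that a clean-room client can only pass this kind of check by keeping a complete copy of the genuine binary around to hash, which is exactly the copyright question raised in the last paragraph.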

[1] The two contexts in which this problem is most relevant are DRM (where the content provider wants to be able to determine that the playing application will enforce its content controls) and network access control/network endpoint assessment, where the network wants to determine that an endpoint is uninfected. In neither case are there adequate solutions against an adversarial endpoint.

 

March 23, 2008

In response to lawsuits over missing emails, the White House claims to have been following some rather unusual IT practices:
"When workstations are at the end of their lifecycle and retired ... the hard drives are generally sent offsite to another government entity for physical destruction," the White House said in a sworn declaration filed with U.S. Magistrate Judge John Facciola.

It has been the goal of a White House Office of Administration "refresh program" to replace one-third of its workstations every year in the Executive Office of the President, according to the declaration.

Some, but not necessarily all, of the data on old hard drives is moved to new computer hard drives, the declaration added.

In proposing an e-mail recovery plan Tuesday, Facciola expressed concern that a large volume of electronic messages may be missing from White House computer servers, as two private groups that are suing the White House allege.

Facciola proposed the drastic approach of going to individual workstations of White House computer users after the White House disclosed in January that it recycled its computer backup tapes before October 2003. Recycling -- taping over existing data -- raises the possibility that any missing e-mails may not be recoverable.

Some initial observations:

  • Every three years is a fairly fast retirement cycle. For comparison, the IRS depreciation schedule for computers is 5 years.
  • It's not clear to me that the hard drive destruction issue is that relevant. When you convert from one machine to another, it's by far easiest to simply move your entire mail archive over, rather than picking and choosing. If you do that, the primary difference between the old and new computers in terms of what data is available is going to be remanent data from explicitly deleted messages, which obviously is not on the new machine. First, most mail systems store data in large flat files (yes, yes, I know about MH, but I think we can assume Karl Rove does not use that) or databases, so it's reasonably likely that anything that old will already have been reclaimed and written over. Second, I would really hope that if the White House wants to securely delete something, they do better than just hitting the delete key and hoping (a minimal sketch of what 'better' would look like appears after this list).
  • I wonder what mail server logs are available. Even if the data has been deleted, many mail servers keep extensive logs. This could be used both for traffic analysis and as a guide to what should be found with enough effort. Of course, there's always the chance of remanent data on the server as well.
  • What you want is to have confidence that the data you want retained really is retained and that the data you want destroyed really is destroyed, not to rely on the relatively unpredictable properties of your media. It doesn't sound to me like this policy really achieves either. Of course, there is always the possibility that the White House is playing dumb and/or lying, but incompetence wouldn't exactly shock me either.
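On the 'better than just hitting the delete key' point, here's a minimal sketch of overwrite-before-unlink (illustrative only; as the comments note, media-level remanence is exactly why physical destruction policies exist at all):

```python
import os

def overwrite_and_unlink(path, passes=3):
    """Overwrite a file in place, then delete it.

    Caveats: on journaling or copy-on-write filesystems and on
    wear-leveled flash, the old blocks can survive the overwrite, so
    real secure deletion needs filesystem/media cooperation -- or the
    physical destruction the White House policy relies on.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass out to the device
    os.remove(path)
```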
 

March 21, 2008

One of the things I've always found difficult about navigating with standard topo maps (and a major motivation for being able to download maps into my GPS) is the pain in the ass of finding your position based on lat/long. Let's take your typical 1:24000 scale map of the Bay Area:

  • The scale works out to about 2.4" for each minute of longitude (at the Bay Area's latitude) and about 3" for each minute of latitude.
  • On the USGS quad I'm looking at, tick marks are every 2.5 minutes of longitude and every 3 minutes of latitude.

Using a map like this and starting with GPS readings, finding your position involves converting minutes and seconds to inches (with different scaling factors for lat and long) and then measuring to find the right point on the map.
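Here's where those scale factors come from, as a quick sketch (the 37.5° is just a representative Bay Area latitude; one minute of latitude is one nautical mile by definition):

```python
import math

SCALE = 24000                  # 1:24000 USGS quad
INCHES_PER_METER = 39.3701
METERS_PER_MIN_LAT = 1852.0    # one minute of latitude = one nautical mile

def inches_per_minute(lat_deg):
    """Map inches per minute of latitude/longitude at a given latitude."""
    lat_inches = METERS_PER_MIN_LAT * INCHES_PER_METER / SCALE
    lon_inches = lat_inches * math.cos(math.radians(lat_deg))
    return lon_inches, lat_inches

lon_in, lat_in = inches_per_minute(37.5)
print(f"1' longitude = {lon_in:.2f} in, 1' latitude = {lat_in:.2f} in")
# => about 2.41 in and 3.04 in. Plotting a GPS fix means multiplying the
# minutes-and-seconds offset from the nearest tick by a different factor
# on each axis, then measuring -- which is exactly the annoyance.
```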

It turns out—and I feel pretty stupid for not knowing about this already—that there's a much easier way: Universal Transverse Mercator. The basic idea is that you divide the earth into sectors which are small enough to be treated as rectangular, and then you can describe any position within the sector by measuring the distance (in meters) from the corner. Generally your GPS can emit UTM coordinates and good topo maps come labeled with UTM grid lines, so finding your position is a simple matter of locating the nearest grid reference and doing a little interpolation. You can even get nice little map tools that let you measure UTM distances on maps of common scales (especially the 1:24000 scale used on the most useful topos). This is dramatically easier; I've known about it for less than 8 hours and I'm already quite a bit better at finding my position with it than I ever was with lat/long.
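Compare the UTM workflow (a sketch; this assumes the 1-km UTM gridlines printed on USGS quads, and the coordinate is a made-up example): finding yourself is two modulo operations and one scale factor, with the same units on both axes.

```python
# A made-up fix in UTM zone 10S: 573427 m E, 4139852 m N.
SCALE = 24000
CM_PER_GRID_SQUARE = 100_000 / SCALE   # a 1 km grid square is ~4.17 cm on paper

easting, northing = 573_427, 4_139_852
de = easting % 1000    # meters east of the last easting gridline
dn = northing % 1000   # meters north of the last northing gridline

print(f"{de} m east of the gridline  = {de / 1000 * CM_PER_GRID_SQUARE:.2f} cm on the map")
print(f"{dn} m north of the gridline = {dn / 1000 * CM_PER_GRID_SQUARE:.2f} cm on the map")
# => 1.78 cm and 3.55 cm: measure those from the gridline intersection
# and you're done -- no per-axis minute conversions at all.
```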

 

March 20, 2008

After all my complaining about the xml2rfc bibliography system, I decided to do something about it. I thought for a while about hacking xml2rfc itself, but after spending a while reading the crufty tcl code in xml2rfc, I decided it might be easier to do a secondary bibliography management tool in the style of bibtex.

There are two tools:

  • bibxml2rfc: the bibliography manager. It runs through the XML file, finds everything that looks like a reference, and then builds a pair of bibliography files (one for normative and one for informative references) for inclusion into your XML file. It automatically fetches reference entries for RFCs and Internet-Drafts. You can use ordinary XML inclusion techniques, so you don't need to modify the XML file's reference section when the bibliography changes. (A sketch of the core idea appears after this list.)
  • bibxml2rfc-merge: a tool to make merged files for submission. The Internet Drafts submission tool won't accept a bunch of separate XML files, so bibxml2rfc-merge merges them into one for submission. You don't need this for your ordinary work.
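For the curious, the core loop of the bibliography manager is conceptually something like this (a simplified sketch of the idea, not the actual bibxml2rfc code; the URL pattern is the usual xml2rfc citation-library convention, and the filenames are made up):

```python
import re
import urllib.request

# Standard xml2rfc citation-library location for canned RFC references.
BIBXML_RFC = "http://xml.resource.org/public/rfc/bibxml/reference.RFC.{:04d}.xml"

def find_rfc_citations(draft_xml):
    """Collect anchors like <xref target="RFC2119"/> from the draft."""
    return sorted(set(re.findall(r'target="RFC(\d+)"', draft_xml)), key=int)

def build_bibliography(draft_xml, outfile):
    """Fetch a canned <reference> entry for each cited RFC and write them out."""
    with open(outfile, "w") as out:
        for num in find_rfc_citations(draft_xml):
            entry = urllib.request.urlopen(BIBXML_RFC.format(int(num))).read()
            out.write(entry.decode("utf-8") + "\n")

draft = open("draft-example.xml").read()       # hypothetical input draft
build_bibliography(draft, "bibliography.xml")  # include this from the draft
```

The real tool also has to split normative from informative references and handle Internet-Drafts, but the fetch-and-include structure is the interesting part.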

The source can be found at: https://svn.resiprocate.org/rep/ietf-drafts/ekr/bibxml2rfc. Documentation for bibxml2rfc can be found at https://svn.resiprocate.org/rep/ietf-drafts/ekr/bibxml2rfc/bibxml2rfc.txt

 

March 19, 2008

Ed Felten reports on inconsistencies in the vote totals reported by Sequoia Advantage voting machines in New Jersey. (Note: these machines are different from the touch-screen machines we looked at in the California TTBR, so I don't have any inside information.) Anyway, the anomaly is that the number of votes for Democratic and Republican candidates doesn't match the number of times the ballots were activated. If the number of votes were less than the number of ballots, you could explain that as an undervote, but in the results tape Ed shows, the Republican ballot was selected 60 times and there were 61 votes!

I haven't thought much about potential causes (Ed's commenters theorize) but my money is on simple bugs in the system rather than an attack. If you were an attacker and you had managed to take control of the machine, one of the first things you would want to do is make certain that the results were consistent. Moreover, since this is a primary and not a general election, an attacker wouldn't really benefit from moving votes from one party to another. Much easier (and harder to get caught) to move them from one candidate to another within a party.

Not that this should make you feel any better, since the most basic function of voting machines is to correctly count votes. It should also make you wonder about both Sequoia's testing and the testing done by the certification labs. We already know that testing is insufficient from a security perspective, but (assuming the problem is in the system) this seems like something that should have been caught by the testing/SQA process.

Sequoia's explanation can be found here. Felten says it's inadequate and that he'll explain why tomorrow. Stay tuned.

 

March 13, 2008

The topic of routing security has started to heat up quite a bit in IETF. Historically, there have been two general types of routing security measures:
  • Trying to defend against outsider attacks by securing adjacencies between routers.
  • Trying to defend against insider attacks by authenticating (and more importantly, authorizing) route advertisements to make sure that routers are only advertising permitted routes.

The second class of mechanisms (e.g., S-BGP) hasn't really seen any significant deployment, despite the fact that there is a real threat from incorrect advertisements. (See this post about the Pakistan/YouTube outage for an example.)

The first class of mechanisms has seen modest deployment, but the protocols are fairly primitive, with insecure (or at least pre-modern) MAC functions and minimal support for key management. Basically, you use a shared key between the communicating routers (a pair in the case of unicast protocols like BGP or LDP, or a group in the case of multicast/broadcast protocols like IS-IS or OSPF). All was well—or at least quiet—until 2005, when Bonica et al. published a draft intended to make key rollover easier for integrity-protected TCP and also to update the MAC algorithms. This, coupled with some concerns about the lack of automated keying mechanisms, caused an avalanche effect of interest in revising all the routing adjacency security mechanisms.
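For reference, the existing TCP mechanism (the RFC 2385 MD5 option) is conceptually just this (a heavily simplified sketch; the real digest also covers the TCP pseudo-header and header fields):

```python
import hashlib

def tcp_md5_style_mac(segment: bytes, shared_key: bytes) -> bytes:
    """Keyed digest in the RFC 2385 style, heavily simplified.

    The real option digests pseudo-header || TCP header || data || key.
    The structural points survive the simplification: a static shared
    key mixed straight into MD5, no negotiation, and no key identifier.
    """
    return hashlib.md5(segment + shared_key).digest()

# Why rollover is painful: with no key ID on the wire, both routers must
# switch keys at the same instant, or every segment fails verification
# and the BGP session drops.
old_key, new_key = b"old-shared-secret", b"new-shared-secret"
segment = b"...BGP UPDATE..."
assert tcp_md5_style_mac(segment, old_key) != tcp_md5_style_mac(segment, new_key)
```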

IETF 71 had two meetings addressing this topic:

  • TCPM, which covered standardizing TCP integrity protection.
  • KMART, which covered key management for routing protocols more generally.

For some reason that's not entirely clear to me, I got sucked into this stuff. My materials are below:

  • A draft discussing strategies for keying for TCP.
  • Extensive comments on the current TCP authentication option draft.
  • A presentation on the generic key management problem.
 

March 7, 2008

Benny Shanon from Hebrew University argues that Moses was taking psychedelics when he saw the burning bush, etc.:
Such mind-altering substances formed an integral part of the religious rites of Israelites in biblical times, Benny Shanon, a professor of cognitive psychology at the Hebrew University of Jerusalem wrote in the Time and Mind journal of philosophy.

"As far Moses on Mount Sinai is concerned, it was either a supernatural cosmic event, which I don't believe, or a legend, which I don't believe either, or finally, and this is very probable, an event that joined Moses and the people of Israel under the effect of narcotics," Shanon told Israeli public radio on Tuesday.

Moses was probably also on drugs when he saw the "burning bush," suggested Shanon, who said he himself has dabbled with such substances.

...

He said the psychedelic effects of ayahuasca were comparable to those produced by concoctions based on bark of the acacia tree, that is frequently mentioned in the Bible.

I'm pretty ignorant of the religious practices of the pre-covenant Israelites, and it's certainly undeniable that intoxicant/psychedelic use is a common feature of a number of religions. That said, I don't really see the point of looking for natural explanations for events in the Bible (for instance, this article arguing that the 10 plagues in Exodus were caused by a volcanic eruption).

What's weird about efforts like this is that they're simultaneously religious and anti-religious. Trying to provide a natural explanation for religious history fundamentally undercuts the religious claims, which rely on supernatural explanations. The Bible pretty clearly says that God spoke to Moses (Ex 3:4). If you believe Moses was just hallucinating, what does that say about God? On the other hand, once you deny the special status of the Bible, why bother trying to explain the stories at all? It's not like the Bible is a uniquely consistent book of history with just a few mythological pieces. On the contrary, even the history is to a large degree unverifiable stuff that people only believe because of their preexisting religious (or ethnopolitical) commitments. If you've abandoned those commitments, there's no more need to try to provide scientific explanations for biblical events than there is to provide a scientific explanation of how Sauron crafted the One Ring.

 

March 6, 2008

There's more than one way to censor information you don't like on the Internet. At the end of February, Pakistan's Telecommunication authority decided they didn't like a specific YouTube video and issued an order requiring ISPs to block access to YouTube. The ISPs responded by advertising BGP routes to blackhole YouTube's traffic. Unfortunately, they screwed up and the routes leaked, bringing down YouTube for everyone. Danny McPherson at Arbor Networks has the story.
Either way, the net-net is that you're announcing reachability to your upstream for 208.65.153.0/24, and your upstream provider, who is obviously not validating your prefix announcements based on Regional Internet Registry (RIR) allocations or even Internet Routing Registry (IRR) objects, is conveying to the rest of the world, via the Border Gateway Protocol (BGP), that you, AS 17557 (PKTELECOM-AS-AP Pakistan Telecom), provide reachability for the Internet address space (prefix) that actually belongs to YouTube, AS 36561.

To put icing on the cake, assume that YouTube, who owns 208.65.153.0/24, as well as 208.65.152.0/24 and 208.65.154.0/23, announces a single aggregated BGP route for the four /24 prefixes, announced as 208.65.152.0/22. Now recall that routing on the Internet always prefers the most specific route, and that global BGP routing currently knows this:

  • 208.65.152.0/22 via AS 36561 (YouTube)
  • 208.65.153.0/24 via AS 17557 (Pakistan Telecom)

And you want to go to one of the YouTube IPs within the 208.65.153.0/24. Well, bad news.. YouTube is currently unavailable because all the BGP speaking routers on the Internet believe Pakistan Telecom provides the best connectivity to YouTube. The result is that you've not only taken YouTube offline within your little piece of the Internet, you've single-handedly taken YouTube completely off the Internet.

The problem here is that BGP security is a complete mess. To a first order, anyone can advertise any route and they'll be believed. In other words, the Internet is horribly vulnerable to routing attacks. There's been some work on trying to prevent this sort of thing from happening (whether via accidental misconfiguration or, worse yet, maliciously), but none of the solutions (S-BGP, soBGP, etc.) has gone very far, in part because many of the proposed designs are really heavyweight and in part because (or so I'm told) the database of who actually owns which prefix is in such bad shape that you can't use it as a basis for cryptographic assertions about who can advertise what.
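To see the "most specific route wins" rule from the quote above in code, here's a minimal sketch of the selection a router makes between YouTube's aggregate and the hijacked /24:

```python
import ipaddress

# The two competing routes from the incident.
rib = {
    ipaddress.ip_network("208.65.152.0/22"): "AS 36561 (YouTube)",
    ipaddress.ip_network("208.65.153.0/24"): "AS 17557 (Pakistan Telecom)",
}

def best_route(dst):
    """Longest-prefix match: among routes covering dst, longest prefix wins."""
    addr = ipaddress.ip_address(dst)
    covering = [net for net in rib if addr in net]
    return rib[max(covering, key=lambda net: net.prefixlen)]

print(best_route("208.65.153.238"))  # an address inside the hijacked /24
# => AS 17557 (Pakistan Telecom): every router that accepted the /24 now
# forwards YouTube-bound traffic into the blackhole, with no need to
# attack YouTube's own announcement at all.
```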

 

March 5, 2008

I wrote before about a US court blocking access to Wikileaks. The judge dropped the order and then the bank dropped the lawsuit. Turns out that this wasn't the only such incident, though. Treasury blacklisted Steve Marshall's travel agency and forced his registrar to disable his domains for allegedly helping Americans travel to Cuba:

"I came to work in the morning, and we had no reservations at all," Mr. Marshall said on the phone from the Canary Islands. "We thought it was a technical problem."

It turned out, though, that Mr. Marshall's Web sites had been put on a Treasury Department blacklist and, as a consequence, his American domain name registrar, eNom Inc., had disabled them. Mr. Marshall said eNom told him it did so after a call from the Treasury Department; the company, based in Bellevue, Wash., says it learned that the sites were on the blacklist through a blog.

Either way, there is no dispute that eNom shut down Mr. Marshall's sites without notifying him and has refused to release the domain names to him. In effect, Mr. Marshall said, eNom has taken his property and interfered with his business. He has slowly rebuilt his Web business over the last several months, and now many of the same sites operate with the suffix .net rather than .com, through a European registrar. His servers, he said, have been in the Bahamas all along.

...

A Treasury spokesman, John Rankin, referred a caller to a press release issued in December 2004, almost three years before eNom acted. It said Mr. Marshall's company had helped Americans evade restrictions on travel to Cuba and was "a generator of resources that the Cuban regime uses to oppress its people." It added that American companies must not only stop doing business with the company but also freeze its assets, meaning that eNom did exactly what it was legally required to do.

The situation here is much like that with Wikileaks. Both eNom (his direct registrar) and VeriSign (the registry for .com) are under US jurisdiction, so the US government can clearly force them to do stuff (I'm not passing judgment on the law here, just talking about power). Even if eNom were outside the US, USG could just force VeriSign to redirect the domain somewhere else. In fact, all three of the big TLDs (.com, .net, and .org) are operated by organizations based in the US. That said, plenty of the ccTLDs are operated outside the US; .cu looks pretty safe. Obviously, in principle USG could lean on ICANN to reassign one of the ccTLDs, but it's hard to believe that ICANN's legitimacy could survive doing it.

Once we get past the question of what USG can or can't do, there's the question of what's smart. Obviously, the USG can make the lives of people they don't like miserable if they use .com, but that just gives anyone who thinks they might piss off the government an incentive to choose another TLD. So you'd expect to see a new equilibrium where people who worry about this move their domains outside the US, the same way that people do banking offshore, which is bad for any registrar or registry that operates in the US and not clearly any better for USG in the long term.

 

March 2, 2008

More on movie plot holes...

In Battlestar Galactica, the Cylons attack the "12 Colonies" from space and we get treated to the usual terrifying scenes of people running from strafing Cylon fighters. Here's the thing, though: they're trying to kill every human in the galaxy, so what's with the up-close-and-personal attack? Wouldn't it be simpler to just nuke the cities from orbit? For that matter, you could skip the nukes and just use kinetic weapons from space.

 
OK, so Mrs. Guesswork and I just finished watching Casino Royale and something was bugging me. (Spoilers after the jump.)