December 2005 Archives


December 30, 2005

Wow, I can't believe I missed this! The Rapture Ready web site maintains a Rapture Index that measures, well, let's let them explain it:
The Rapture Index is by no means meant to predict the rapture, however, the index is designed to measure the type of activity that could act as a precursor to the rapture.

You could say the Rapture index is a Dow Jones Industrial Average of end time activity, but I think it would be better if you viewed it as prophetic speedometer. The higher the number, the faster we're moving towards the occurrence of pre-tribulation rapture.

Rapture Index of 85 and Below: Slow prophetic activity
Rapture Index of 85 to 110: Moderate prophetic activity
Rapture Index of 110 to 145: Heavy prophetic activity
Rapture Index above 145: Fasten your seat belts

Unfortunately, unlike e-quote, they don't publish graphs, but someone has done one from mid-2004.

Today's Rapture Index is up one to 154. Outstanding!

This article in The Register complains about the redelegation of the ccTLDs .kz (Kazakhstan) and .iq (Iraq). What's special about these redelegations is that they weren't agreed to by the current registries for those domains but were done on the basis of giving control to the civil authorities in the country.

The Register's complaints seem to be as follows:

  • This isn't the way that things have traditionally been done.
  • This was done at the behest of the US government (and that the .kz redelegation was done to facilitate the .iq redelegation).
  • This makes it easier for governments to censor the Internet.

None of these seems like a very valid criticism. The whole reason that ccTLDs exist is to have national registries. Indeed, the existence of a national (or at least discrete geopolitical) entity and the corresponding ISO 3166 country code is a necessary condition for the creation of the corresponding ccTLD. Given that, it's perfectly reasonable for the civil authorities in that entity to control who gets to assign names inside what's effectively their ccTLD. Now, in the past, delegation of those ccTLDs has been fairly sloppy. As the Register article notes:

Control of Iraq's domain was far more complicated however. The .iq domain was registered instead to two brothers living in the US. The Elashi brothers and other members of their family at the time were also in US jail awaiting trial for funding terrorists - which in the end amounted to shipping computer parts to Libya and Syria and for which they all received hefty sentences.

Given that this sort of ad hoc delegation is widespread, it's hardly surprising that we would eventually run into the situation where the existing delegee would not want to give up control. Assuming you accept that countries should be able to control their own ccTLDs, then there had to come a time when one of those transfers would be involuntary. It's not clear what the problem with that is. Moreover, I'd observe that the Register itself doesn't seem to have any problem with the idea that ICANN would take control of .com away from VeriSign, even though VeriSign demonstrably doesn't want to give it up.

As for the second point, I'm sure it's true that the US government encouraged ICANN to make this redelegation, but that doesn't inherently make it illegitimate. .iq was redelegated to the National Communications and Media Commission of Iraq, with the endorsement of Prime Minister Allawi. Whatever one thinks about the legitimacy of the US invasion, if there's anyone in Iraq who's entitled to claim to be the current civilian authorities, presumably it's the Allawi administration. It's not like ICANN redelegated .iq to Halliburton.

The final point, however, that this somehow facilitates censorship, is the silliest. It's certainly true that having a permanent DNS name is an asset if you want to serve content, but there's no requirement that you use any particular name, as long as people know what your name is. The particular case the Register cites is that of Ali G:

Of course this would never happen. Except it has already. Within months of the government-run "Association of Kazakh IT Companies" getting control of Kazakhstan's internet domain, it shut down the website of British comic Sacha Baron Cohen (best known as Ali G). The site at featured another of Cohen's comic creations, Borat Sagdiyev, a Kazakh journalist. It was removed from the Internet.

Of course, it's not like this actually presents much of an obstacle to people finding Ali G content, since there's, you know, Google. So, all that's really happened is that you can't go to the .kz address, but there's nothing stopping Cohen from picking a non-Kazakhstan domain name:

The Register again:

Why? The president of the organisation said it was so the comic "can't bad-mouth Kazakhstan under the .kz domain name". If you want an example of government-owned and run censorship on the internet, you'll be hard pushed to find a clearer example.

Well, except for that whole China business and the thing with France, Yahoo, and the Nazi paraphernalia, and the fact that no one's actually stopping Cohen from distributing his content, yeah, I guess that's true.

There is one thing in this article that's sort of interesting if true:

When the US government took over Afghanistan in 2001, it was fortunate in that the current ccTLD owner was killed during bombing of Kabul. It simple forged the man's signature on a piece of paper handing over control to the US-created authority and the job was done.

I've never heard this story before, the Register doesn't present a citation, and a little bit of searching doesn't turn up another source for it.


December 29, 2005

It's that time of year again, when we get to change health care plans. There's nothing like spending a day trying to figure out whether a 90/70 PPO plan with a $500 deductible and 80% home health care is better than a 90/70 Managed POS plan with a $750 deductible and 90% home health care benefit (not to mention the Blue Shield and Aetna variants) to make you appreciate a one-size-fits-all single payer plan.

December 28, 2005

Michigan has just banned machines that allow people to inhale alcohol vapor (alcohol without liquid (AWOL)):
Machines that allow people to inhale alcohol are now illegal in Michigan.

Governor Granholm signed a bill prohibiting devices known as "alcohol without liquid" that vaporize hard liquor. The law makes it illegal to possess, sell or use an AWOL machine. Violators face up to 90 days in jail and a 500 dollar fine.

Supporters of the law say the machines cause a more rapid, intense buzz because the alcohol moves through the lungs instead of the digestive system.

I'm not sure why creating a more rapid, intense buzz is bad. On the contrary, I would expect this to be more readily controllable because there's less lag between consumption and effect.

The truth, I suspect, is that this is more of our schizophrenic attitude towards intoxicants. It's clear that much of the appeal of alcoholic drinks is that they're intoxicating, but the fact that there's also an aesthetic experience in terms of taste provides cover for that (which is also why wine and cocktails are considered classy but drinking pure hard liquor is less so). AWOL strips away that veneer and reveals the activity as naked intoxicant use, which is apparently something that many Americans are unwilling to face.


December 27, 2005

Hey boys and girls! Want to help your country defeat that mean old Osama? Then check out the National Security Agency's CryptoKids web site (via Hit and Run).
On this site, you can learn all about codes and ciphers, play lots of games and activities, and get to know each of us - Crypto Cat, Decipher Dog, Rosetta Stone, Slate, Joules, T.Top, and, of course, our leader CSS Sam.

You can also learn about the National Security Agency/Central Security Service - they're America's real codemakers and codebreakers. Our Nation's leaders and warfighters count on the technology and information they get from NSA/CSS to get their jobs done. Without NSA/CSS, they wouldn't be able to talk to one another without the bad guys listening and they wouldn't be able to figure out what the bad guys were planning.

We hope you have lots of fun learning about cryptology and NSA/CSS. You might be part of the next generation of America's codemakers and codebreakers.

The site comes complete with a bunch of material on making and breaking simple codes (cool), resources to teach kids about crypto (also cool), and detailed biographies of the CryptoKids characters (kind of creepy). Here's some of what CryptoCat does for fun:

I'm usually hanging out with my friends at the mall or catching the latest movie. I love helping people so I find different ways to help out around the community. Right now, I volunteer as a swim coach for children with special needs. It's a lot of fun AND I get to spend extra time with my sister who has Down's Syndrome.

The NSA Gifted and Talented program looks pretty cool, though.

Bruce Schneier writes:
Paying people rewards for finding security flaws is not the same as hiring your own analysts and testers. It's a reasonable addition to a software security program, but no substitute.

I've said this before, but Moshe Yudkowsky said it better:

Here's an outsourcing idea: get rid of your fleet of delivery trucks, toss your packages out into the street, and offer a reward to anyone who successfully delivers a package. Sound like a good idea, or a recipe for disaster?

Red Herring offers an article about the bounties that some software companies offer for bugs. That is, if you're an independent researcher and you find a bug in their software, some companies will offer you a cash bonus when you report the bug.

As the article notes, "in a free market everything has value," and therefore information that a bug exists should logically result in some sort of market. However, I think it's misleading to call this practice "outsourcing" of security, any more than calling the practice of tossing packages into the street a "delivery service." Paying someone to tell you about a bug may or may not be a good business practice, but that practice alone certainly does not constitute a complete security policy.

While I agree that bug bounties shouldn't be one's sole method of providing software security, I think this analogy kind of misses the point. As I see it, the objective of a bug bounty system isn't to find vulnerabilities: it's to control their distribution by incentivizing researchers to bring them to you first rather than just publishing them. Remember that a vendor's proximal security incentive isn't to minimize the number of security vulnerabilities in their software but rather to minimize the amount of exposure that customers experience due to unfixed vulnerabilities that are known to bad actors.

There are two basic strategies that a vendor can follow to effect this goal. First, they can invest effort in reducing the number of vulnerabilities in their software, thus making it harder for anyone to find vulnerabilities and presumably reducing the number found by bad actors. The second strategy is to try to keep vulnerabilities out of the hands of bad actors after they're discovered. This is where bug bounties come in. They provide an incentive for non-malicious researchers to come to the vendor first rather than publishing first (this is obviously important if it takes you a long time to develop a fix) and if they're high enough they might even persuade someone who was otherwise malicious to sell you their vulnerabilities rather than exploiting them.

I think what's underlying Bruce and Moshe's objections is that if all you do is pay bounties then your software still has a large number of vulnerabilities in it just waiting to be discovered by someone who doesn't want to take your bounty. That's certainly true, and it feels yucky to have all that stuff in there, but this objection sort of tacitly assumes that that's somehow not the case (or at least significantly less so) if you do hire your own analysts and testers. It's not clear that that's so at the levels of investment that organizations are typically willing to put in.


December 26, 2005

We may have a coffee shortage in '07-'08:
LONDON -- A world coffee shortage is looming two years from now as yields from Brazilian trees dwindle and a global surplus in 2006-07 will fail to replenish stockpiles in producer countries, predicts commodity analysts F.O. Licht.

"In 2007-08, stocks could be at a critically low level," F.O. Licht managing director Helmut Ahlfeld said Thursday.

Costs have risen for producers in Brazil, the world's biggest grower and exporter of coffee, while the country's strengthening currency has hurt earnings.

The Valley runs on caffeine. Better short QQQs.

I recently caught Sam "The End of Faith" Harris's lecture "The View From the End of the World" on NPR. Harris's basic theme is the convention of deference that our society pays towards matters of faith--he's against it. (MP3). I'm fairly sympathetic to this basic point: as things stand, claiming that your ethico-moral position is grounded in your religion is more or less a conversation-ender, no matter how indefensible that position would otherwise be. And many positions that people defend that way do strike me as otherwise indefensible. On the other hand, Harris's suggestion that this may be the worst problem the world faces strikes me as a bit of an exaggeration.

December 25, 2005

I just finished reading Alastair Reynolds's Pushing Ice (thank you, Amazon Prime). I really enjoyed Reynolds's Revelation Space books, but found Century Rain a bit of a disappointment.

Pushing Ice is basically a Big Dumb Object novel. Janus, one of the moons of Saturn, turns out to be an alien artifact which suddenly leaves orbit and takes off for Spica at high speed. The crew of the mining ship Rockhopper (owned by one of those heartless megacorporations that apparently run everything in the mid 21st century) is dispatched to rendezvous with Janus, investigate, and get out before it vanishes for good. It won't exactly come as a surprise that things don't exactly go as planned.

The scope here is a lot smaller than that of the Revelation Space books, mostly being confined to Rockhopper's crew, but Reynolds does a fair job with the politics and personalities and generally manages to keep things moving along. There's a sort of clunky bit about the relationship of the frame story to the main story, but generally it's worth reading.


December 24, 2005

First, we found out that the NSA has been performing wiretapping without the benefit of a warrant, and now it turns out that the FBI and DOE have been doing radiation monitoring of a bunch of private sites1 (without any positive results, it turns out):
In search of a terrorist nuclear bomb, the federal government since 9/11 has run a far-reaching, top secret program to monitor radiation levels at over a hundred Muslim sites in the Washington, D.C., area, including mosques, homes, businesses, and warehouses, plus similar sites in at least five other cities, U.S. News has learned. In numerous cases, the monitoring required investigators to go on to the property under surveillance, although no search warrants or court orders were ever obtained, according to those with knowledge of the program. Some participants were threatened with loss of their jobs when they questioned the legality of the operation, according to these accounts.

Federal officials familiar with the program maintain that warrants are unneeded for the kind of radiation sampling the operation entails, but some legal scholars disagree. News of the program comes in the wake of revelations last week that, after 9/11, the Bush White House approved electronic surveillance of U.S. targets by the National Security Agency without court orders. These and other developments suggest that the federal government's domestic spying programs since 9/11 have been far broader than previously thought.


Cole points to a 2001 Supreme Court decision, U.S. vs. Kyllo, which looked at police use -- without a search warrant -- of thermal imaging technology to search for marijuana-growing lamps in a home. The court, in a ruling written by Justice Antonin Scalia, ruled that authorities did in fact need a warrant -- that the heat sensors violated the Fourth Amendment's clause against unreasonable search and seizure. But officials familiar with the FBI/NEST program say the radiation sensors are different and are only sampling the surrounding air. "This kind of program only detects particles in the air, it's non directional," says one knowledgeable official. "It's not a whole lot different from smelling marijuana."

If this distinction seems pretty artificial to you, you're not the only one. Indeed, this rationale would imply that it was OK to monitor people's cell phone communications--even those purely between US citizens and inside the US--without a warrant as long as you were using an omnidirectional antenna. But we've clearly decided as a society that this isn't OK.

Whenever we run into cases like this, the question I get interested in is how to build some kind of system that will guide you in deciding hard cases like this, so you don't have to make them on a purely ad hoc basis. Operationally, what seems to happen is that we try on a bunch of such theories to see which ones produce some consistent set of results we can live with.

Clearly, any theory which produces the result that we're going to allow anything which you can monitor from public property isn't acceptable, and it's going to get even less acceptable as surveillance technology gets better. Already, we've been forced to restrict many such types of monitoring (cell phone communications, infrared surveillance), and attack is badly outrunning defense (think Van Eck phreaking).

The framework we seem to be heading toward in Kyllo is one of "reasonable expectation of privacy". That sounds good in theory, but it strikes me as rather subjective and problematic. Consider that people have been telling you for years that Internet communications are inherently non-private. So, how do you have a reasonable expectation of privacy for e-mail or VoIP calls? (For simplicity, consider the situation in which it's a pure IP call, because people do expect privacy over the PSTN, and if you call someone you might not know that they're on a VoIP phone.) And what of the situation of wireless networks, which are clearly highly insecure?

I don't really have a good answer here, unfortunately. Like many ethical questions, this seems like a case where people's intuitions about what's appropriate aren't easily systematized. That doesn't mean those intuitions are wrong, of course, but it probably does mean that we should be cautious about how confident we are of them. And, worse yet, this kind of uncertainty makes it very hard to predict in advance what will and will not be considered acceptable by the legislatures and the courts.

1. And isn't the timing amazingly coincidental? It seems to me that there are three major possibilities:

  1. The wiretapping story prompted someone to leak.
  2. US News has been sitting on this story and this prompted them to publish it.
  3. It's a total coincidence.

(3) seems the most unlikely of these. I can't decide whether (1) or (2) is the most interesting, but both imply that there may be more revelations coming sooner rather than later.


December 23, 2005

Yesterday KQED's Forum ran a panel about Intelligent Design with Casey Luskin from the Discovery Institute, Nick Matzke from the National Center for Science Education, and Vikram Amar from UC Hastings College of Law. For some reason, they've decided to run with a format that gives each panelist a long uninterrupted monologue, which allows Luskin to spin madly, repeating all the usual ID talking points.

About 3/4 of the way through, Matzke really nails Luskin, getting him to say that he believes chimpanzees and humans were independently created, and then Krasney reads him a bunch of objections from e-mail. Luskin works the ref a bit, claiming that he's not getting a fair hearing, and Krasney backs it off to a question about whether the IDers have published any peer-reviewed papers, which lets Luskin muddy the waters a bit--revealing an obvious weakness in the long monologue format.

The best part, though, is when Luskin claims that there's a "systematic misinformation campaign" against Intelligent Design.


December 22, 2005

Here is the newly released list of prohibited and allowed items, which takes effect Thursday.

Prohibited:

  • Ammunition
  • Baseball bats
  • Boxcutters
  • Cattle prods
  • Firearms
  • Golf clubs
  • Hammers
  • Ice axe/picks
  • Knives, excluding round-bladed, butter and plastic
  • Lighters
  • Meat cleavers
  • Pellet or BB guns
  • Pool cues
  • Razors
  • Scissors, metal with pointed tips and blades longer than four inches
  • Ski poles
  • Spray paint

Allowed:

  • Cigar cutters
  • Corkscrews
  • Cuticle cutters
  • Eyelash curlers
  • Knitting and crochet needles
  • Nail clippers or files
  • Disposable razors
  • Scissors, with a cutting edge of less than four inches
  • Tweezers
  • Tools, seven inches long or less, including screwdrivers, wrenches and pliers
  • Walking canes

The first thing you learn about designing security systems is to ask what your threat model is. So, what threat model does this correspond to? Because superficially it doesn't make that much sense. As far as I can tell, knitting needles1 are at least as dangerous as ice picks and much more dangerous than cigarette lighters (yes, yes, I know about Richard Reid, but it's not like it's really that hard to build a cigarette lighter that will get through a metal detector if you're really trying, and of course magnesium ribbon burns hot and can be easily lit with matches). Similarly, a slock is more dangerous than a ski pole, pool cue, a can of spray paint, or even a boxcutter.

The answer, it seems to me, is that these are the items that appear dangerous and that it's not too inconvenient to take away from people (which, I assume is why spray paint but not, say, spray deodorant, and pool cues and not canes, even though a cane probably makes a better weapon). But that's not what you do if you're trying to actually have security. It's what you do if you want to appear to be trying to have security (what Schneier calls security theatre).

So, every so often the TSA publishes these revisions to the allowed and prohibited lists and everyone complains about how stupid they are, but they never really explain the reasoning behind any particular list. Four years after September 11, I think it's about time we had a serious conversation about it, because I'm getting pretty tired of taking my shoes off.

1. Ordinary knitting needles are generally not that strong, but it would be easy to manufacture a stainless or titanium needle that was plenty strong, useful as a weapon, and indistinguishable from an ordinary knitting needle. Anyone with access to a lathe would find this a trivial job.


December 21, 2005

TechNewsWorld reports that Reps. Sensenbrenner (R-WI) and John Conyers (D-NY) have introduced the Digital Content Security Act (Public Knowledge Page here) which is full of the usual MPAA/RIAA "analog hole" insanity. I don't really have a sense of whether this particular iteration has any real support, though.

UPDATE: Fixed Sensenbrenner's state. Thanks to Chris Walsh for pointing this out.

One of the big unanswered questions in the whole wiretapping story is why the Bush administration didn't want to seek FISC approval for their wiretaps--not why they didn't feel they had to, but why they felt it was worth doing something that they must have realized would be controversial if it got out. The two main theories seem to be:

  • It took too long to get a FISA warrant.
  • They wanted to use an entirely new style of surveillance where they captured everything and then processed it looking for patterns (see Schneier on this point.)

This raises the question of whether there's some way to satisfy those concerns that isn't basically a blank check for the government to tap people's communications.

There are two aspects to the timeliness issue. The first is that the communication may be occurring now and that you can't wait days for the recording to start. The second is that you need the intelligence right away. There's no substitute for rapid review in the second case, but in the first case, there's no reason that the surveillance and analysis have to happen contemporaneously. Simply capture the data and store it in some secure location and then apply to the FISA court for a warrant to retrieve the specific communication you're interested in. If you want to get clever, you could implement cryptographic controls: encrypt the messages under a key that the NSA doesn't control but the FISA court does.
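The capture-now, decrypt-under-warrant scheme could be sketched roughly as follows. Everything here is hypothetical--the class names, the flow, and especially the deliberately tiny textbook RSA, which is used only to make the separation of duties concrete; a real system would use real cryptography:

```python
# Hypothetical sketch of "capture now, decrypt only under warrant": the court
# generates the keypair; the capture side sees only the public key, so stored
# intercepts stay sealed until a warrant issues. The toy RSA parameters here
# are purely illustrative and far too small for real use.

def egcd(a, b):
    # extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

class FISACourt:
    """Holds the private key; releases plaintext only when a warrant issues."""
    def __init__(self):
        p, q, self.e = 61, 53, 17          # toy primes and exponent
        self.n = p * q
        self._d = modinv(self.e, (p - 1) * (q - 1))

    def public_key(self):
        return (self.n, self.e)

    def decrypt_under_warrant(self, record, warrant_approved):
        if not warrant_approved:
            raise PermissionError("no warrant: record stays sealed")
        return bytes(pow(c, self._d, self.n) for c in record)

class CaptureStore:
    """Capture-side store: can encrypt and retain intercepts, not read them."""
    def __init__(self, public_key):
        self.n, self.e = public_key
        self.vault = []

    def capture(self, message):
        # encrypt byte-by-byte under the court's public key and store
        self.vault.append([pow(b, self.e, self.n) for b in message])
        return len(self.vault) - 1         # handle cited in the warrant request

court = FISACourt()
store = CaptureStore(court.public_key())
handle = store.capture(b"intercepted call")
plaintext = court.decrypt_under_warrant(store.vault[handle], warrant_approved=True)
```

The point of the structure is that the capture side never holds the decryption key, so an analyst can collect but not read without the court's cooperation.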

Obvious objections: even under FISA the NSA can start capturing and then seek a warrant inside of 72 hours. This scheme would extend the window and decrease worries about getting your hand slapped. Another advantage is that you could actually lower the threshold for initial data acquisition without sacrificing completeness. An obvious objection here is that the NSA can just set up a parallel infrastructure that captures the data without encryption, but any set of administrative controls has this property.

The "data mine everything" approach is harder to accommodate, but still not impossible. In the past few years, there's been a fair amount of work on privacy preserving data mining and encrypted search. One could imagine giving the NSA access to a transformed (blinded) version of the traffic which they could then run search algorithms on but requiring them to seek warrants for actually uncovering any given communication.
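To make the blinded-search idea concrete, here's a minimal, hypothetical sketch using deterministic HMAC tokenization: the blinding key sits with some authority other than the analyst, the analyst sees only blinded tokens, and obtaining a search trapdoor for a specific term is the warrant-gated step. Note that deterministic blinding like this leaks token frequencies, which is exactly the kind of leakage objection that applies to these schemes; real searchable-encryption constructions are more careful (and much slower):

```python
# Hypothetical sketch of keyword search over blinded traffic: each token is
# replaced by its HMAC under a key the analyst never sees, so matching is
# possible but reading is not. Illustrative only -- deterministic tokenization
# leaks frequency and co-occurrence information.
import hashlib
import hmac

class BlindingAuthority:
    """Holds the blinding key (e.g. the court or carrier, not the analyst)."""
    def __init__(self, key):
        self._key = key

    def blind_token(self, word):
        return hmac.new(self._key, word.lower().encode(), hashlib.sha256).hexdigest()

    def blind_message(self, text):
        return [self.blind_token(w) for w in text.split()]

    def trapdoor(self, term, warrant_approved):
        # Issuing a per-term trapdoor is the controlled, warrant-gated step.
        if not warrant_approved:
            raise PermissionError("no warrant: no trapdoor for this term")
        return self.blind_token(term)

class Analyst:
    """Sees only blinded traffic; can match trapdoors but not invert tokens."""
    def __init__(self):
        self.traffic = []

    def ingest(self, blinded_message):
        self.traffic.append(blinded_message)

    def search(self, trapdoor):
        # return indices of messages containing the blinded search term
        return [i for i, msg in enumerate(self.traffic) if trapdoor in msg]

authority = BlindingAuthority(b"key-held-by-the-court")
analyst = Analyst()
analyst.ingest(authority.blind_message("meeting at the warehouse tonight"))
analyst.ingest(authority.blind_message("happy birthday grandma"))
hits = analyst.search(authority.trapdoor("warehouse", warrant_approved=True))
```

The analyst can learn *which* stored messages match a warranted term without ever holding the key needed to read, or search for, anything else.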

Obvious objections: data mining even blinded information yields private information. This is particularly true if meta-information is still available to the analysis system. These algorithms are generally quite slow, as well as limited in the kinds of queries they can perform. So, this is not as good as a system where you have all the data to work with. But again, people might be willing to let you have access to a broader class of data if they knew it was protected.

Why have this discussion at all? I'm generally not that sympathetic to the claim that the government needs expansive surveillance powers, but it's clear that many in government feel differently, and that legal limitations do not reduce the level of surveillance to one that I'm comfortable with. (Nor am I confident, for that matter, that there aren't programs that I'd be even less happy with going on.) Given that, I think it's worth trying to see if there's some way to strike a balance between surveillance capabilities and privacy that leaves both sides happier than they are now.


December 20, 2005

FISA court judge James Robertson (appointed by Clinton and selected for FISA by Rehnquist) has resigned from FISA, apparently as a protest against the Administration's warrantless surveillance program.

Judge Jones's decision in Kitzmiller v. Dover Area School District is up, and as has been widely observed, it's a more or less complete victory for the plaintiffs, etc.

One thing that stands out reading the opinion is that the extensive paper trail both inside the school board and from the Discovery Institute massively undercut the claim that the school board's action had a secular purpose. E.g.,

He explained that this country was founded on Christianity. Buckingham concedes that he said "I challenge you (the audience) to trace your roots to the monkey you came from." He said that while growing up, his generation read from the Bible and prayed during school. He further said "liberals in black robes" were "taking away the rights of Christians" and he said words to the effect of "2,000 years ago someone died on a cross. Can't someone take a stand for him?"

Of course, this kind of paper trail isn't inevitable; the pro-IDers were just incompetent at covering their tracks. In the future, you should expect school boards to do a better job of avoiding saying incriminating stuff, but there will still be a big paper trail on the Discovery Institute. Jones's finding that ID isn't science will obviously be a big obstacle for them to overcome in the future, since it will make it hard for anyone to argue that they had a purely secular purpose.


December 19, 2005

Recall that the issue in the current fuss over the NSA's wiretapping program is that the conversations being intercepted were terminated on at least one side in the US, which is why the NSA needed any kind of explicit authorization to monitor them. Outside the United States, different rules apply. Inside the security community, this has always been regarded as a bit disingenuous. As Cullen Jennings reminded me last night, the rumor was and is that the US and the UK collaborated to circumvent this rule, with the US spying on people inside the UK and the UK spying on people inside the US, but sharing intelligence so that the effect was that each country got to spy on their own citizens. (Keyword: ECHELON).
Google appears to have a new feature that does music search. See for instance the first link under Henry Rollins.

December 18, 2005

As has now been widely reported, after 9/11, the Bush Administration authorized NSA to conduct warrantless wiretaps of people inside the US, including American citizens. It's also been widely observed that the mechanism of FISA warrants already gave the government extremely broad latitude to conduct wiretaps secretly and subject to only extremely compliant oversight.

This latter fact is usually mentioned as evidence that the Administration is out of control--if they can't even live with this minimal oversight, they must want to do something truly awful. I'm not sure I disagree with this, but on the other hand, many of these same people--myself included--have long complained about how lax the FISA process is and how it's basically a sham check on the power of law enforcement to conduct searches. If you feel that way, then it seems to me that it doesn't make much of a practical difference that the government decided to dispense with the fig leaf of oversight. Yes, it's true that they broke the law, but if the law was ineffective anyway, I'm not sure that I much care.

On the third hand, of course, this renders the Bush Administration's claim that publicizing this information somehow harms national security particularly silly. If it was already public information that they had near-unfettered discretion to conduct wiretaps, how does it help the terrorists to know that they were actually exercising double-plus-unfettered discretion?


December 14, 2005

Sorry about the posting slowdown. Things are really hectic at work. Hopefully I'll be back on the air over the weekend.

December 13, 2005

The GAO reports that the Whois database, which contains the registration information for people's domain names, has a lot of clearly bogus information:
Approximately 2.3 million domain names have been registered with obviously false information, such as (999) 999-999 for a telephone number or "XXXXX" for a postal zip code.

Another 1.6 million were registered with incomplete information, according to a report released yesterday by the US Government Accountability Office (GAO).

The GAO said individuals or organisations registering the names of their websites may have provided inaccurate information to domain name registrars to hide their identities or prevent the public from contacting them. The 3.9 million wrong or incomplete registrations represents 8.6 percent of the 44.9 million the agency was asked to check by Congress.

This isn't exactly a surprise: as long as you pay your money, the registrars don't exactly try that hard to ensure that your information is non-bogus. And remember, this is just the information that's trivially bogus. The fraction of information that's plausible but bogus or just plain out of date is likely to be quite high as well.

When confronted with the claim that ID is really just creationism (I was going to write "scientific creationism", but that was actually what the creationists rebranded creationism before they rebranded it Intelligent Design), the standard line is that ID just tells us whether some organism was likely to have been designed, not who the designer was. IDers have to say this, of course, because that's what makes it at least potentially a scientific theory rather than just an appeal to divine intervention. The idea, recall, is that there's some test that you can apply to an object that will tell you whether that object was designed. Call it T. The point being that T doesn't tell you anything about the designer, just the object. The designer could have been supernatural or extraterrestrials or... is there really a third possibility? Time travelling humans, I suppose. Anyway, call that designer D1.

Now, if D1 is supernatural (divine), then we're out of the realm of science. But since part of the value proposition of ID is that it's supposed to offer a potentially non-supernatural explanation for life, let's consider the other arm: that D1 is natural. But of course, if D1 is natural, then we can apply the same kind of analysis: D1 can have come about either through evolution or intelligent design (logically, of course, it could have come about through some other mechanism, but if we had another such mechanism, then we wouldn't need ID to explain terrestrial life either). So, we apply T to D1 and if it comes up that D1 wasn't designed, then no problem, D1 evolved and we're done. On the other hand, if it comes up that D1 was designed, then we need to investigate the question of its designer, D2. D2 can similarly be natural or supernatural, and if it's natural, we can repeat the same analysis. If we don't want this to be a turtles all the way down type of situation, at some point we either need to get the answer that D? wasn't designed (i.e., it evolved) or that it's supernatural.
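The regress above can be sketched as a toy procedure. To be clear, `T` and `designer_of` are hypothetical stand-ins for illustration; no such design-detection test actually exists:

```python
# Toy formalization of the designer regress. T is the hypothetical design
# test; designer_of names an organism's designer. Both are assumptions made
# purely for illustration.
def explain(organism, T, designer_of):
    """Follow the chain of designers until the regress bottoms out."""
    while True:
        if not T(organism):
            return (organism, "evolved")       # regress ends naturally
        designer = designer_of(organism)
        if designer == "supernatural":
            return (organism, "supernatural")  # outside the realm of science
        organism = designer                    # natural designer: apply T again

# Example chain: humans designed by aliens, aliens not themselves designed.
T = lambda x: x == "humans"
designer_of = lambda x: "aliens"
print(explain("humans", T, designer_of))  # -> ('aliens', 'evolved')
```

The point of the sketch is just that the loop has only two exits: some designer in the chain evolved, or some designer is supernatural.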

So, if ID is to actually be a scientific theory, then there must be some way for T to come up with the answer that a complex, intelligent organism wasn't intelligently designed. So, the question you need to be asking at this point is: what would the characteristics of such an organism be? And more importantly, considering all the evidence that human life did evolve and given the fact that the design of humans (and other organisms on Earth) is, frankly, a mess, under what conditions could you imagine T coming up "not designed" if not these?


December 11, 2005

One of the infections with the biggest drug resistance problem is malaria. Already, there are chloroquine-resistant strains of malaria in most of the areas where malaria is endemic, and there are areas where falciparum is resistant to all the major drugs other than the artemisinins. Now, Jambou et al. report in the Lancet that they have found artemisinin-resistant parasites.

This is particularly depressing because (1) artemisinin therapy outside of China is relatively new (about a decade) and (2) there had been hopes that resistance would be slow to develop (see, for instance, this Wikipedia article):

The drug is used these days in China and Vietnam without much regard to taking precautions against creating resistance of the malaria parasite to this drug as well, but nevertheless no resistance has been encountered in these parts of the world. Because of the method of action, it is unlikely that resistance to artemisinine and derivatives will become a problem in the near future.

This reinforces the importance of using artemisinin as part of a combination therapy rather than a monotherapy. That and wearing insect repellent.


December 9, 2005

You can download a recording of Gilmore v. Gonzales here (WMA format). Pretty fast service, huh?

Anyway, I listened to it today. I haven't heard that many of these, so my opinion isn't that informed, but FWIW, Gilmore's lawyer didn't seem to me to come off as well as the government's lawyer. He didn't seem to be able to get the judges to stop any particular line of questioning, which suggests to me that they weren't satisfied with his answers. That said, it seemed--again, to my uninformed ears--that the judges weren't so much trying to figure out whether to dismiss the case outright as whether they should remand it to the lower court or whether they had jurisdiction themselves.


December 8, 2005

EG's correspondent in Israel, Hovav Shacham reports on the experience of signing up for Internet service:
I just signed up for broadband in Israel. There appears to be a legally-enforced firewall between last-mile providers and ISPs. My cable bill includes charges for the connectivity; separately, I have to pay an ISP. There's basically two sources of connectivity -- HOT cable, or Bezeq DSL -- and a dozen or so ISPs who work with both. The cable support person, on the phone, said they're not allowed to recommend an ISP.

From a technical point of view, the setup is bizarre. My cable modem gives me an RFC-1918-unroutable address via DHCP, but I can only use it to connect to HOT's info about the ISPs. On the relevant web page, the ISP logos permute on each reload.

To get a useful IP, all the ISPs but one require PPTP or L2TP, with an ISP-supplied dialer for Windows, and in some cases also for Mac or Linux. (The other ISPs don't seem to have a provision for non-Windows machines at all.)

A single ISP advertises as a selling point that they don't require tunneling or dialers. Interestingly, their prices are about four times higher than the consensus price for service: 160 ILS/month, instead of 40 or so. [4.625 ILS to the dollar--ekr]

Our best guess is that this is simple market segmentation. A non-VPN ISP makes it a lot easier for an ordinary, non-wizardly, user to use multiple computers, which is the kind of thing that businesses (i.e., people willing to pay more money) want to do.

Well, I meant to go to Gilmore v. Gonzales, but three things conspired to keep me home:
  • Work
  • Bay Area traffic.
  • The 9th Circuit appears to put oral arguments on the Web.

The last of these in particular dramatically decreases the marginal value of attendance. Expect comments once I've had a chance to hear the audio file on Friday. (BTW, if anyone is broadcasting it live, please do let me know...)


December 7, 2005

If you're interested in the topic of US interrogation policy, then Marty Lederman's post on the topic is required reading. Nut graf:
Is this confusing? You bet it is. But the confusion is not inadvertent -- it's intentional. The whole object is to hide the ball and constantly shift and recalibrate the (literal) terms of the debate. As the New York Times reported today, "'[i]t's clear that the text of the [Rice] speech was drafted by lawyers with the intention of misleading an audience,'" Andrew Tyrie, a Conservative member of Parliament, said in an interview. . . . Parsing through the speech, Mr. Tyrie pointed out example after example where, he said, Ms. Rice was using surgically precise language to obfuscate and distract."

As long as the public discussion is focused on abstract labels and vague, general standards the meaning of which is known only to a small cabal within the Administration, it will be impossible to have a meaningful debate about what's permitted and what is not. In order to have any legitimate public debate on these questions, we will first need to see the Administration's legal analysis that explains how the Administration understands the application of the standards as a practical matter. What -- as a practical matter -- does "torture" mean? "Humane" treatment? "Cruel, inhuman and degrading treatment"? "Prolonged mental harm"? "A likelihood of being tortured"? "Policy"? "Subject to its jurisdiction"? "Shocks the conscience"? Etc. There are, of course, slews of court decisions, international adjudications, articles, blogs, etc., that have addressed these questions. But when waterboarding and cold cell are part of our repertoire of interrogation techniques, and yet it's our "policy" not to torture, and to treat all detainees humanely, and to refrain from CIDT, then obviously that's a sign that the law as it's being applied by our government is a far cry from any ordinary, intuitive, or lay understanding of what the law means. (Tom Toles's cartoon today nails it.) The idea of a secret body of law in a democratic society is very disconcerting -- but that's what it's come to here.


Took the Prius in for the 5000 mile service today. I decided to sit and wait and was pleased to discover that they had WiFi in the lobby. Definitely appreciated, folks.

December 6, 2005

The Raw Story is carrying a report that claims to be sourced by an insider at Diebold. Allegedly, Diebold installed an untested patch shortly before the Georgia elections:
The insider harbors suspicions that Diebold may be involved in tampering with elections through its army of employees and independent contractors. The 2002 gubernatorial election in Georgia raised serious red flags, the source said.

"Shortly before the election, ten days to two weeks, we were told that the date in the machine was malfunctioning," the source recalled. "So we were told 'Apply this patch in a big rush.'" Later, the Diebold insider learned that the patches were never certified by the state of Georgia, as required by law.

Also, the clock inside the system was not fixed, said the insider. "It's legendary how strange the outcome was; they ended up having the first Republican governor in who knows when and also strange outcomes in other races. I can say that the counties I worked in were heavily Democratic and elected a Republican."

In Georgia's 2002 Senate race, for example, nearly 60 percent of the state's electorate by county switched party allegiances between the primaries and the general election.

The insider's account corroborates a similar story told by Diebold contractor Rob Behler in an interview with Bev Harris of Black Box Voting.

Harris revealed that a program patch was left on an unsecured server and downloaded over the Internet by Diebold technicians before loading the unauthorized software onto Georgia voting machines. "They didn't even TEST the fixes before they told us to install them," Behler stated, adding that machines still malfunctioned after the patches were installed.

I'd never heard of the Raw Story before, so it's pretty hard for me to say anything about the accuracy of this story. We do know, however, that Diebold's software engineering practices weren't very good, so it wouldn't be totally surprising. I don't really know enough to have any useful opinion on whether there was any systematic fraud, as suggested here, but it's certainly problematic that Diebold's practices are so bad that you can even ask that question.


December 5, 2005

Slate writes:
The NYT fronts a compelling yarn about a troubled Kentucky couple that won a $34 million lottery jackpot--but still couldn't escape their demons. The husband ended up dying of complications from alcoholism in 2003. The wife, who is said to have turned her geodesic dome-shaped mansion into a drug den, died of a possible overdose shortly before Thanksgiving. Between them, they had squandered much of their fortune.

In the words of the late George Best, "I spent most of my money on booze, birds and fast cars, and the rest I just squandered."


December 4, 2005

In the House episode Maternity, there's an epidemic in the hospital, with two really sick babies and four others getting sick. As usual, the House team can't figure out what kind of infection it is. The two leading candidates are MRSA and pseudomonas, so they prescribe vancomycin for the MRSA and aztreonam for the pseudomonas. Of course, something goes wrong, in this case that both babies start to experience renal failure, but since both drugs can cause renal failure, they don't know which one to stop.

House's oh-so-clever response is to take each baby off one of the drugs and see what happens. Naturally, the hospital administrator and lawyer are freaked out and want him to get informed consent, which means telling both sets of parents that the other baby is getting a different treatment. House refuses because then they wouldn't consent, which defeats the point of the trial. Or does it? Let's work through the logic chain, using only baby A. If we take it off vancomycin and it gets better, bingo: aztreonam works. On the other hand, if it dies, then we know that aztreonam doesn't work. Now, we don't know that the other drug works, but we know which one doesn't, so the only choice is to use the other drug. Problem solved.

So, informationally it doesn't matter whether you take the patients off different drugs or the same drug. From a cost/benefit perspective it doesn't matter to the parents either. If you really have no idea which drug will work, then you shouldn't care which drug your baby is on--and it doesn't matter what the other one is on. There's a 50% chance of death either way. Superficially, you might be able to see why doctors would want to take the babies off different drugs: if you believe the problem is one of the two bacteria, then this lets you save one of them. But consider that it also makes it very likely that one of them will die. If you take them off the same drug, then there's a 50% chance that both will die and a 50% chance that both will live: expected value, one death, same as if you take them off different drugs. Of course, there are risk models where a 50% chance of 2X is worse than a certain X, but that mostly requires nonlinearity of preference, which doesn't seem appropriate here.
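The expected-value claim is easy to check with a back-of-the-envelope calculation, assuming (as the episode does) that exactly one of the two drugs works and each candidate is equally likely:

```python
# Expected deaths under each trial design, assuming exactly one of the two
# drugs is the cure and each is equally likely (p = 0.5). Illustrative only.
def ev_different_drugs():
    # Baby A off vancomycin, baby B off aztreonam: whichever drug is the
    # cure, exactly one baby remains on it, so exactly one baby dies.
    return 0.5 * 1 + 0.5 * 1

def ev_same_drug():
    # Both babies off the same drug: 50% chance both stay on the cure
    # (0 deaths), 50% chance neither does (2 deaths).
    return 0.5 * 0 + 0.5 * 2

assert ev_different_drugs() == ev_same_drug() == 1.0  # one expected death either way
```

The designs differ only in variance: same-drug is all-or-nothing, different-drug guarantees one death.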

Of course, at the end of the day it turns out that the problem is neither MRSA nor pseudomonas, which just goes to show that decision models are only as good as the data you feed into them.

Last time I went backpacking, I got a pretty serious case of blisters on both heels. I figured it would be a good idea to get a handle on blister treatment early, so when I saw Blist-o-ban in a magazine, I figured it was worth a try. Blist-o-ban looks like a pretty cool concept. Basically, it's a bandage with a double-layer central section. The two layers slide against each other, protecting you from blisters.

I've done some Blist-o-ban trials with a pair of running shoes I'm breaking in, over several runs ranging from 40-100 minutes. It's a little hard to assess how good a job it does of protecting me against blisters because the hot spots aren't actually that bad. Unfortunately, it really doesn't seem to stay on that well. I used two today (one on each arch), and the edge of one started to peel off as soon as I put my sock on. The other looked OK initially, but after the run when I took my socks off, both bandages seemed to be stuck to the inside of the socks. I've had similar results with putting the bandages on my heels.

It's possible, of course, that I'm doing something wrong putting them on (though I'm using the alcohol wipes provided) and it's obviously hard to get anything to stick to your feet when you're sweating, but since those are the conditions I'm going to be encountering in the field, that's an obvious problem. The Blist-o-ban people recommend using Mastisol for extra adhesion, but that doesn't sound super-convenient for hiking.

Truth be told, the best thing I've found so far is old-style waterproof first-aid tape. It sticks pretty well and since you can wrap it around your ankle or foot, sticking isn't quite as important. What I've been doing is sticking a piece of moleskin to the inside of the tape (so the furry side is on the blister). This stops the tape from sticking directly to your skin, which is good if you've already got a blister.


December 3, 2005

There's been a fair amount of buzz going around about peer-to-peer (P2P) naming as a replacement for the DNS. Here's one example, from a recent Slate article:
The best solution might simply be to allow any country that wants the job to host the DNS system. How? Peer-to-peer networks like BitTorrent.

Here's how it could work, according to computer security researcher Robert G. Ferrell, a former at-large member of ICANN. Countries that choose to house Torrent servers would receive a random piece of the DNS pie over a closed P2P network, with mirrors set up to correct data by consensus in the case of corruption or unauthorized modification. No one country would actually physically host the entire database. In essence, everybody would be in charge, but no one would be in control. Isn't that how the United Nations functions anyway?

I can't find a detailed writeup for this proposal, but in general it's hard to replace a distributed but hierarchical system like the DNS with a purely P2P system.

A name resolution system is basically a method of mapping one set of strings onto another set of strings. You start with a string A and get the string B that it maps to. Obviously this is easy if you have a single server that holds the whole mapping table, but this isn't really practical on the Internet, where you want a distributed system.

Any distributed name resolution system has to deal with two separate issues:

  • Data distribution.
  • Authorization.

Data distribution is a relatively well understood problem. It's the kind of thing that P2P networks are good at. Authorization isn't so simple. How do we know, for instance, who is entitled to a given name under ".org"? In the DNS, it's easy: the server which has the data for ".org" also has the authority to tell you who owns the names under it. Even when you don't get the data directly from the authoritative server (e.g., from a cache), authority still descends from that server (in DNSSEC, that server signs the data and it gets placed with the secondaries).
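The hierarchical scheme can be sketched with a toy resolver. The names, servers, and address below are made up for illustration; real DNS resolution involves referrals between servers, not a single dictionary:

```python
# Minimal sketch of hierarchical name resolution: authority over a name
# follows delegation downward from the root, so whoever serves ".org"
# decides who owns the names under it. All data here is illustrative.
ROOT = {
    "org": {"_server": "org-registry",
            "example": {"_server": "example-registry", "www": "192.0.2.1"}},
}

def resolve(name: str):
    """Walk the labels right-to-left, descending through delegations."""
    node = ROOT
    for label in reversed(name.split(".")):
        node = node[label]
    return node

assert resolve("www.example.org") == "192.0.2.1"
```

Each step of the walk is also a step down the chain of authority, which is exactly the property a flat P2P network gives up.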

P2P systems, however, break the binding between authority and the source of the data: when you get a record from some random site, you don't have any reason to trust it. Of course, you could use signed DNSSEC records and then store them in a P2P network (as described here), but that doesn't really solve the problem that people are complaining about, which is centralized control. Using DNSSEC with P2P just means that the data is distributed, but the authorization is still centralized, so this wouldn't make it any harder for the US to control things.
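A toy sketch of the point: a signed record can be fetched from any untrusted peer and still be verified, but verification always chains back to one centrally held key, so distribution is decentralized while authority is not. (Real DNSSEC uses public-key signatures and a chain of keys; the symmetric HMAC below is just to keep the illustration self-contained, and the key and record are invented for the example.)

```python
# Toy sketch (NOT real DNSSEC): tampering with a signed record is
# detectable no matter which peer served it, but the signing key -- the
# actual locus of authority -- is still a single centralized secret.
import hashlib
import hmac

AUTHORITY_KEY = b"held-by-one-registry"  # hypothetical central trust anchor

def sign(name: str, addr: str) -> bytes:
    return hmac.new(AUTHORITY_KEY, f"{name}={addr}".encode(), hashlib.sha256).digest()

def verify(name: str, addr: str, sig: bytes) -> bool:
    return hmac.compare_digest(sign(name, addr), sig)

# The authority publishes a signed record; any untrusted peer can serve it.
record = ("example.org", "192.0.2.1", sign("example.org", "192.0.2.1"))

name, addr, sig = record
assert verify(name, addr, sig)                    # genuine record checks out
assert not verify(name, "203.0.113.9", sig)       # forged address is detected
```

Signing protects the data in transit through the P2P network, but whoever holds `AUTHORITY_KEY` still decides what gets signed, which is precisely the centralized control being complained about.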

What people really want, of course, is decentralized control. Unfortunately, building a system like that is currently an unsolved problem.