EKR: October 2005 Archives


October 31, 2005

Marginal Revolution points to Thomas Schelling's suggestion that the US should share permissive action link technology with the Iranians:
It is important for the Iranians to understand — and have access to — technology like we have in the U.S. that disables bombs if they get into the wrong hands. U.S. weapons, for example, have "permissive action links"— a radio signal code that arms weapons but that will also automatically disarm them if launched at an unauthorized target.

This will be a big dilemma for the U.S. If the Iranians get weapons, will we be willing to share the technology to ensure the security of their use? That is where the debate is heading.

As Steve Bellovin points out in his presentation on PALs, we offered PAL technology to the Russians (they refused), so I would expect us to offer it to the Iranians at the point where they get nukes.

Passed through SFO yesterday. A few notes:
  • They're still asking you to take your shoes off, but at least when you tell them you don't want to, they tell you directly that they'll give you secondary screening if you don't. Well, actually, they first tell you that they're "screening for stuff other than metal" and then, if you still refuse to take your shoes off, they tell you that they'll have to screen you.
  • If you still refuse to take your shoes off, however, they don't give you the full secondary search---they just explosive screen your shoes, which goes pretty quick. This seems pretty sensible, since it's not like refusing to take your shoes off makes your carry-on any more dangerous. Sort of surprising, really, though, since I'd always assumed that part of the purpose of the secondary screening in this case was to incentivize you to take your shoes off.
  • SFO now has one of those GE entryscan machines. Kim Cameron hates them (link via Adam Shostack) but I've got to say that it's a pretty cool-looking piece of technology. I almost went through security again in order to get screened using it. Screw the indignity, it's full of science!

Update: Fixed the link in the above. Thanks to Bram Cohen for pointing this out.


October 29, 2005

Robert Sapolsky's new book, Monkeyluv, is a collection of essays that were published previously in the popular press. A lot of this (especially the stuff on stress) is ground he's covered before, but it's still worth reading. For my money, the two best are "The Genetic War Between Men And Women" and "Anatomy of a Bad Mood".

The Genetic War Between Men and Women (originally published in Discover in 1999) is about the different mating incentives that males and females have and the strategies they use to try to fulfill them:

The first battleground is the placenta, a tissue that can seem more than a little creepy. It's only partially related to the female, but it invades (a term used in obstetrics) her body, sending tentacles towards her blood vessels to divert nutrients for the benefit of this growing creature. The placenta is also the scene of a pitched battle, with paternally derived genes pushing it to invade more aggressively while maternally derived genes try to hold it back. How do we know this? In rare diseases, maternal or paternal genes related to placental growth are mutated and knocked out of action. Lose the paternal input, and the antigrowth maternal component is left unopposed--you get diseases where the placenta never invades the endometrium, so the fetus has no chance to grow. In contrast, remove the maternal input and let those paternal genes run wild unopposed, and you get placental invasiveness over the top--a stupendously aggressive cancer called choriocarcinoma. Thus, normal placental implantation represents an uneasy stalemate.

Anatomy of a Bad Mood (originally published in Men's Health in 2003) is basically a neurophysiological take on the way that fights happen:

Here's how it happens. You've done something piggish to your significant other, something stupid and selfish and insensitive. She's pissed. So you argue. And you make things worse initially by trying to defend yourself.

Somewhere amid the heated exchange, you actually think about what you've done, consider it from her perspective, and realize, Jeez, I was a total jerk. You apologize. You make it sound as if you sincerely mean it. You actually do sincerely mean it.

She accepts your apology, does a "But don't you ever do that again" parting shot with a flare of the nostrils. You start to feel pretty pleased with yourself; got off easy this time, you realize. That flare of her nostrils has even made you think about sex. You eye the bedroom. Phew, sure glad that's over with.

And then she suddenly dredges up some argument the two of you had about some other jerky thing you did years ago, the time you forgot to do X, or the time she caught you doing Y. It has nothing to do with the jerky thing you just did. You barely remember it. But she remembers every detail and is raring to go over it again in all its minutiae, just when the tension was dissipating.

What's up with this? (And why have you done the same on occasion?) It's not because she's unconsciously trying to torpedo the relationship, or because she gets some obscure pleasure from fighting. It is simply that her limbic and autonomic nervous systems operate at different speeds.

So, definitely interesting and worth a read. My major complaint about this book is pricing: at $24 and just over 200 pages, I found it a little steep. Even at $16 from Amazon1, it was just at the limit of what I'm willing to pay.

1. Amazon's ultra-steep discounting on new books is making it pretty difficult for me to patronize my local bookstore. I already have Amazon Prime so 2-day shipping is free. When the price difference is $6-8 a book, it's pretty easy to justify waiting 2 days.

This document reads much more like an analysis of DKIM than it
does like a threat analysis of the environment in which DKIM
will be deployed. There's a subtle but real difference here.
Consider, for instance, what Section 5.2.3 says:

   Reputation attacks of this sort are sometimes based on the
   retransmission (often referred to as a "replay") of a legitimately
   sent message.  DKIM provides little protection against such acts,
   except that the key used to sign the original instance of the message
   can be revoked.  Other reputation attacks, involving the fabrication
   and transmission of a fictitious message, are addressed by DKIM since
   the bad actor would not, without inside assistance, be able to obtain
   a valid signature for the fabricated message.

Now, it may be true that DKIM doesn't address this issue, but it's
certainly possible to address this issue (arguably well or badly)
in a message origin signature system. 
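To make the distinction concrete, here is a minimal sketch of what a message origin signature system of this general sort does. This is a toy illustration, not DKIM's actual mechanism: it uses HMAC as a stand-in for DKIM's public-key signatures (where the verification key would be published in DNS), and all names here are made up.

```python
import hmac, hashlib

def sign_message(domain_key, headers, body):
    """Toy domain-level signature: MAC over selected headers plus the body."""
    signed_fields = ["From", "To", "Subject"]
    data = "\n".join("%s: %s" % (h, headers[h]) for h in signed_fields)
    data += "\n\n" + body
    return hmac.new(domain_key, data.encode(), hashlib.sha256).hexdigest()

def verify_message(domain_key, headers, body, sig):
    return hmac.compare_digest(sign_message(domain_key, headers, body), sig)

key = b"example.com-signing-key"   # stand-in; a real system uses a private key
hdrs = {"From": "alice@example.com", "To": "bob@example.org", "Subject": "hi"}
sig = sign_message(key, hdrs, "lunch?")

assert verify_message(key, hdrs, "lunch?", sig)        # legitimate message verifies
forged = dict(hdrs, From="ceo@example.com")
assert not verify_message(key, forged, "lunch?", sig)  # fabricated From fails
```

Note that nothing in this sketch stops a verbatim retransmission of the signed message: the signature still verifies, which is exactly the replay issue.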

What I would have expected in a threat analysis of this type is that
one would start with a relatively broad view of the type of system
one was considering developing ("server-based message-based signatures
to prevent mail forgery") and then describe potential attacks on
such systems and the types of countermeasures that can be used to
protect against them. What I see here assumes that the system to 
be developed is essentially DKIM and asks how DKIM can be attacked.
That's only somewhat useful for determining whether DKIM should be
standardized and not at all useful for determining whether alternative
designs would be inferior or superior.

A related issue is the focus on forgery as the end-goal. The reason
that people are interested in stopping forgery is that they
believe that that can be used to block spam (see Section 2, where
this is explicitly stated). Indeed, if what you wanted to do
was stop message forgery as a general case, you would have to
consider the issue of forgery by other authorized users in
the same administrative domain, which generally leads to an 
S/MIME style solution. What motivates this particular design is
the spam application and I think that that requires a real argument
that this approach will be useful in stopping spam, not just
saying that stopping forgery is good and this stops forgery, so....

S. 1-3:
These sections do a pretty good job of laying out the basic
threat environment. 

S. 4.3:
   Bad actors may also exist with the administrative unit of the message

S 5:
   One of the most fundamental bad acts being attempted is the delivery
   of messages which are not authorized by the alleged originating
   domain.  As described above, these messages might merely be unwanted
   by the recipient, or might be part of a confidence scheme or a
   delivery vector for malware.

This seems to me to be too concrete. At a meta-level, the bad
act being attempted is the delivery of messages which the receiver
doesn't want to see (see Section 2 again). What's the necessary
connection between that and sending forged e-mail?

S 5.2.1:
Note that some such e-mail-propagated worms
really did come from the person they claim to come from,
because that person's computer had been taken over.

S 5.2.2:
While it's certainly true that much e-mail-based fraud involves
forging the sender address, it's not clear that this kind of
forgery protection will help. Consider how a typical phishing attack works:

(1) Attacker sends forged mail from @legitimate-site.com
    This message contains a link to
(2) Victim clicks on link. Note that this can be an SSL link
    with at least a semi-legit cert.

This attack works because the victim doesn't do a sufficiently good job
of checking that the URL that he's connecting to actually corresponds
to the site he thinks he's connecting to. 

Now, in an environment where DKIM is deployed, the situation would
differ in that the From address in the mail would contain

Now, in theory, of course, the user might notice that the From
address was wrong, but since the entire attack is premised on the
notion that he doesn't adequately check the URL in the browser
location bar, why should we expect him to do a better job of 
checking the From address?

This is acknowledged to some extent in 5.2:

   Similar types of attacks using internationalized domain names have
   been hypothesized where it could be very difficult to see character
   differences in popular typefaces.  Similarly, if example2.com was
   controlled by a bad actor, the bad actor could sign messages from
   bigbank.example2.com which might also mislead some recipients.  To
   the extent that these domains are controlled by bad actors, DKIM is
   not effective against these attacks, although it could support the
   ability of reputation and/or accreditation systems to aid the user in
   identifying them.

But doesn't this effectively say "DKIM (or any sender signing scheme)
doesn't work against attacks that involve impersonating
a specific source address"? What class of specific impersonation
attacks does this technology actually work against in practice?
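The bigbank.example2.com case can be made concrete with a toy illustration (the domains and the naive check below are hypothetical). A signature on the phishing mail verifies perfectly well under example2.com, so the only remaining defense is recognizing the name, and any brand-spotting heuristic a user or filter might apply treats both addresses the same:

```python
def signing_domain(from_addr):
    """Domain the signature would validate under (everything after the @)."""
    return from_addr.split("@", 1)[1]

def looks_like_brand(domain, brand):
    """Naive lookalike check a user or a filter might apply."""
    return brand in domain

legit = "alerts@bigbank.example"
phish = "alerts@bigbank.example2.com"   # bad actor controls example2.com

# Both pass the naive check; the phishing mail can carry a perfectly valid
# signature from example2.com, so signature validity alone settles nothing.
assert looks_like_brand(signing_domain(legit), "bigbank")
assert looks_like_brand(signing_domain(phish), "bigbank")
```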

S 5.2.3:
I don't see how you can reasonably deal with replay attacks
by revoking the key that was used to sign the message. This
enables a trivial DoS attack on any message sender--just
get some messages from that sender and generate some replays.

S 6.3:
Again, I realize that the DKIM designers have chosen not to
handle replay, but that doesn't mean there's no way to handle
it with a server-based signing scheme. This seems like exactly
the kind of issue that a threat analysis should cover.
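One obvious way a server-based signing scheme could bound replay (again, not something DKIM does) is to cover an expiration time with the signature, so a captured message stops verifying after a window closes. A minimal sketch, with made-up names and HMAC standing in for a real signature:

```python
import hmac, hashlib, time

def sign_with_expiry(key, body, lifetime_s, now=None):
    """MAC over the expiry plus the body; the attacker can't extend a
    signed expiry without breaking the MAC."""
    if now is None:
        now = time.time()
    expires = int(now + lifetime_s)
    mac = hmac.new(key, ("%d|%s" % (expires, body)).encode(),
                   hashlib.sha256).hexdigest()
    return mac, expires

def verify_with_expiry(key, body, mac, expires, now=None):
    if now is None:
        now = time.time()
    if now > expires:
        return False  # replay outside the window is rejected
    expected = hmac.new(key, ("%d|%s" % (expires, body)).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

mac, exp = sign_with_expiry(b"domain-key", "some message", lifetime_s=3600, now=0.0)
assert verify_with_expiry(b"domain-key", "some message", mac, exp, now=1800.0)
assert not verify_with_expiry(b"domain-key", "some message", mac, exp, now=7200.0)
```

This trades replay resistance for a delivery deadline (and doesn't stop replay inside the window), which is exactly the sort of tradeoff a threat analysis ought to discuss.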

October 28, 2005

Cogent and Level (3) have reached some sort of agreement, and so L3 won't be depeering Cogent, as they had threatened to do on November 9th.
BROOMFIELD, Colo. and WASHINGTON, Oct. 28 /PRNewswire-FirstCall/ -- Level 3 Communications (Nasdaq: LVLT - News) and Cogent Communications (Amex: COI - News) today announced that the companies have agreed on terms to continue to exchange Internet traffic under a modified version of their original peering agreement. The modified peering arrangement allows for the continued exchange of traffic between the two companies' networks, and includes commitments from each party with respect to the characteristics and volume of traffic to be exchanged. Under the terms of the agreement, the companies have agreed to the settlement-free exchange of traffic subject to specific payments if certain obligations are not met.

The modified arrangement is designed to mitigate any impact to customers' Internet connectivity as it sets forth an agreed process to protect customers upon the expiration of the peering relationship, or upon violations of the agreement that are not remedied in accordance with the revised agreement. Those protections include advance written notice to the customers of each party upon termination of the agreement, as well as terms assuring the continued exchange of traffic for a reasonable transition period.

The specific terms of the agreement are confidential.

There's been a bunch of speculation about what exactly the terms of the agreement are, but of course they're confidential. Even if Cogent is having to make fairly routine payments to Level(3) (notice that part about "subject to specific payments"), it's easy to believe that this is a desirable outcome for them. As I noted previously, it's very attractive to be able to claim that they're a Tier 1 provider, even if this means making the occasional semi-secret payment.


October 26, 2005

Oh, this is very clever:
10-26-05 - The newest form of identity theft is targeting one of America's least favorite obligations, jury duty. Scammers pretend to be court officials taking victims' private information over the phone.

Scammers call their victims at home claiming to be a jury coordinator. They say that you didn't show up for jury duty and a warrant has been issued for your arrest.

When you say you didn't get a summons they ask for your Social Security number and date of birth to verify their information, and that's where they get you.

Martha Rhynes, jury coordinator, says, "We never call and ask anyone for their Social Security number, date of birth, or other personal information."

Martha Rhynes is a real jury coordinator in Grayson County. She says the courts only communicate with potential jurors by mail, not by phone. That includes no-shows.

As has been observed before, phishing is a social attack. Such attacks work best when the victim has a real incentive to cooperate, and I'd say avoiding getting arrested is a pretty good incentive. Making the scam even more effective, people don't interact with the criminal justice system that often, so they don't have enough experience to realize the call is improper.


October 25, 2005

My iPod Nano arrived today (discounted, thanks to a friend who works for Apple). All I can say is that it's amazingly small--a lot smaller than it looks on those billboards. I mean, we're talking about fitting 4GB + an MP3 player + a battery in approximately the same volume as a 3 1/2" floppy. In fact, I'm concerned it's too small and I may lose it. Now to get one of those cases so it doesn't get scratched...

October 24, 2005

Chris Walsh points to an argument in favor of Paul Black's Software Facts labels. The analogy here (explicitly made by Black) is to the "Nutrition Facts" labels found on food. But compare Black's sample label to a real nutrition label:

Software Facts

Name InvadingAlienOS
Version 1996.7.04
Expected number of users 15

Modules 5 483   Modules from libraries 4 102

     % Vulnerability

Cross Site Scripting 22 65%
Reflected 12 55%
Stored 10 55%

SQL Injection 2 10%

Buffer overflow 5 95%

Total Security Mechanisms 284 100%
     Authentication 15 5%
Access control 3 1%
Input validation 230 81%
Encryption 3 1%
    AES 256 bits, Triple DES

Report security flaws to: ciwnmcyi@mothership.milkyway

Total Code 3.1415×10^9 function points 100%
     C 1.1×10^9 function points 35%
Ratfor 2.0415×10^9 function points 65%

Test Material 2.718×10^6 bytes 100%
     Data 2.69×10^6 bytes 99%
Executables 27.18×10^3 bytes 1%

Documentation 12 058 pages 100%
     Tutorial 3 971 pages 33%
Reference 6 233 pages 52%
Design & Specification 1 854 pages 15%

Libraries: Sun Java 1.5 runtime, Sun J2EE 1.2.2,
Jakarta log4j 1.5, Jakarta Commons 2.1,
Jakarta Struts 2.0, Harold XOM 1.1rc4, Hunter JDOMv1

Compiled with gcc (GCC) 3.3.1

Stripped of all symbols and relocation information.

The most obvious thing about the Nutrition Facts label is that it's designed to be interpreted in an almost completely context-free fashion. I don't need to know what a carbohydrate is or whether cholesterol is good or bad for me because the label tells me how much I'm supposed to eat and that this stuff, whatever it is, has about 10% of my total daily cholesterol intake. In fact, you can construct a mostly balanced diet (though certainly not a maximally healthy one) by just putting together a basket of foods that gets you to around 100% of each nutrient. The one big missing piece of information here is a list of all the vitamins you should be getting (because that list would show mostly zeros here). But it's easy to get that list and then you can still use the adding up procedure. And you still don't need to know why you need Vitamin B1--just make sure you get some.

Of course, people in general do have some opinions about what their nutrient intake should be (e.g., low-carb or high-carb diets, reduced sodium levels), and the label also provides minimal information that lets you adjust your diet in line with such macro nutritional goals. But even then, relatively minimal context is required to understand the label. I.e. if you've been told to eat no more than XX mg of sodium, it's a simple matter of addition to work out what you should be eating.
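The adding-up procedure really is just arithmetic. A toy example, with a made-up basket and made-up %DV numbers for two nutrients:

```python
# Hypothetical basket: per-item percent of daily value (%DV) for two nutrients.
basket = {
    "cereal": {"sodium": 12, "fiber": 25},
    "milk":   {"sodium": 5,  "fiber": 0},
    "beans":  {"sodium": 15, "fiber": 40},
    "bread":  {"sodium": 30, "fiber": 20},
}

def daily_total(basket, nutrient):
    """Context-free label arithmetic: just add up the %DV column."""
    return sum(item[nutrient] for item in basket.values())

assert daily_total(basket, "sodium") == 62   # under the 100% budget
assert daily_total(basket, "fiber") == 85    # a bit short of 100%
```

No nutritional knowledge is needed anywhere in this computation, which is the whole point of the label design.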

Now let's take a look at Black's label. The first block (Name, etc.) is just identifying so we can pretty much ignore that. The next block reads:

Modules 5 483   Modules from libraries 4 102

     % Vulnerability

Cross Site Scripting 22 65%
Reflected 12 55%
Stored 10 55%

SQL Injection 2 10%

Buffer overflow 5 95%

Total Security Mechanisms 284 100%
     Authentication 15 5%
Access control 3 1%
Input validation 230 81%
Encryption 3 1%
    AES 256 bits, Triple DES

Report security flaws to: ciwnmcyi@mothership.milkyway

The first line is obviously supposed to be an analogy to "Calories/Calories from fat", but you can see immediately that there's something wrong here. First, there's no reference anywhere in the label to tell you how many modules there should be or how many should be from libraries. This isn't a simple matter of missing instructions because there's no reasonable consensus on the answer to either of these questions. Indeed, there's no reasonable consensus on how to even count modules in a particular program. (Are SSLv2 and SSLv3 different modules in OpenSSL? How about SSLv3 and TLS? Is OpenSSL one big module?) So, this first line is basically meaningless.

The second chunk here, % Vulnerability, is simply baffling. I think the numbers after the attacks (e.g., Cross Site Scripting 22) are meant to be counts, and then we read that we're 65% vulnerable to Cross-Site Scripting and 95% vulnerable to buffer overflow. What the heck does this mean? In the Nutrition Facts label case, these percentages mean something very specific: the fraction of the RDA of this particular nutrient that this product contains, but that doesn't seem to be what it means here, unless the point is that the recommended number of buffer overflows is a little over 5. So, I have no idea what these numbers mean, and I doubt Black does either.

Even if we ignore the percentages, the raw counts are totally meaningless. There are all sorts of vulnerabilities which the vendor doesn't know about, so how are they supposed to report them? On the other hand, if we're just going to report the ones the vendor knows about, that's not really that useful, because those are presumably the ones they're fixing, and what we're really concerned with is the ones that will be discovered tomorrow.

Next we turn to the Total Security Mechanisms block. Again, this leaves us with the problem of defining a security mechanism: is SSL a single mechanism? Or is each algorithm its own mechanism? Each cipher suite? How about the PRF? Is the Finished message one or two? Each X.509 extension? The mind boggles. The percentages here are equally baffling. Should they add up to 100? They don't. And even if they did, how would we do the math? Does 256-bit AES count for twice as much as 128-bit AES?

Moving on, we come to:

Total Code 3.1415×10^9 function points 100%
     C 1.1×10^9 function points 35%
Ratfor 2.0415×10^9 function points 65%

Test Material 2.718×10^6 bytes 100%
     Data 2.69×10^6 bytes 99%
Executables 27.18×10^3 bytes 1%

Documentation 12 058 pages 100%
     Tutorial 3 971 pages 33%
Reference 6 233 pages 52%
Design & Specification 1 854 pages 15%

Once again, we get a bunch of descriptive information without any normative context. Is it good that this software has C in it? How about Ratfor? I've got my opinions, but this isn't really something that your average user can be expected to assess for themselves. The problem becomes even worse when we get to Test Material and Documentation. First, it's almost impossible to know what appropriate values are here. Second, it's incredibly trivial to game them even if we did have recommendations. Test material's too small? Here's a big file full of zeros. Documentation's too long? Shrink the font. You may be able to standardize this stuff, but I doubt that you will be able to do it in any way that's not easy to game.

Finally, we have a block containing some "ingredients":

Libraries: Sun Java 1.5 runtime, Sun J2EE 1.2.2,
Jakarta log4j 1.5, Jakarta Commons 2.1,
Jakarta Struts 2.0, Harold XOM 1.1rc4, Hunter JDOMv1

Compiled with gcc (GCC) 3.3.1

Stripped of all symbols and relocation information.

I'm trying to figure out why anyone would need to know this stuff at the level of label reading. I doubt that one person in 100,000 cares what compiler some piece of software was built with. And to the extent people do care, they surely want to know stuff like the compilation flags and the header files it was compiled against. Perfect stuff for some nerd-oriented appendix, but hardly of much use to the average user deciding whether to buy the software. Note, again, the big difference between the nutrition facts label, which is totally usable by a layman, and this, which is practically impenetrable, even to an expert.

Obviously, this is a strawman that's intended to be evocative and the particular information set being described here could change, so why am I focusing on the details like this? Because I don't think that it can be made much better given the current state of knowledge. The computer security community is almost completely unable to offer any objective, easy-to-understand tools for assessing the prospective security of a software product. And when you ask people to do so, you get the kind of data dump of mostly irrelevant descriptive information that this kind of effort represents. Would it be great to be able to succinctly tell users what kind of security they could expect? Sure. But it's not something we're even close to ready for and we won't be until we understand the problem domain much better than we do now.


October 22, 2005

The other controversial resource that ICANN currently manages is IP address allocation. The background here is that every packet transmitted on the Internet needs to go from a specific source IP address to a specific destination IP address. There are only 2^32 (about 4 billion) possible such addresses, so there's obviously some contention for them. At this point, nearly 2/3 of the address space is allocated or otherwise unavailable. Opinions vary about how much time we have left before the address space crunch gets really bad (see articles by Tony Hain and Geoff Huston), but we're already at the point where people can't get all the addresses they want. IPv6 was supposed to fix this but deployment has so far been glacial (more on that another time).

The way that addresses get allocated is that IANA (which is part of ICANN) allocates them to the Regional Internet Registries (RIRs), who allocate them to networks (generally ISPs, but sometimes to large enough end-user networks). Naturally, the RIRs are frequently in the position of denying requests for space, which doesn't please end-users.

To make matters worse, the addresses have been assigned extremely unevenly--a number of the early players in the Internet have address blocks far larger than they could ever plausibly use. Xerox, for instance, is holding a block of 2^24 (16 million) addresses. It's not clear how much of this address space is practically reclaimable, but it's a source of resentment for Third Worlders (and even Europeans and Japanese) who are having trouble getting addresses.

Unfair it may be, but it's not really clear what to do about it. Although many of these allocations are overlarge, many of them are still being used by entities who are not going to want to give them up. It's going to be hard enough to reclaim an address chunk from Halliburton (one /8) but even harder to reclaim it from the Defense Information Systems Agency (3 /8s). That's not something the US government is going to let go easily.
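The sizes involved are easy to work out from the prefix lengths; a quick sketch (the Xerox and DISA holdings are the ones mentioned above):

```python
def block_size(prefix_len):
    """Number of IPv4 addresses in a CIDR block of the given prefix length."""
    return 2 ** (32 - prefix_len)

total_v4 = 2 ** 32                      # about 4.3 billion addresses
one_slash8 = block_size(8)              # e.g., the Xerox block
assert one_slash8 == 16777216           # 16 million and change
assert 3 * block_size(8) == 50331648    # DISA's three /8s, roughly 50 million

# Each /8 is exactly 1/256 of the whole IPv4 space, a bit under 0.4%:
assert one_slash8 * 256 == total_v4
```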

So, even if the EU/UN/ITU took over IP address allocation, they might be able to reclaim some of the address space, but mostly they'd be able to affect new allocations. Even then, it's not clear the extent to which they would be able to make those allocations different from the current policies. IANA's policies are defined in RFC 2050, which appears to be pretty neutral, so if you wanted to balance the final allocation state you'd have to take explicit action to disfavor the incumbent address holding countries (principally the US) in future allocations.

This may seem superficially fair, but remember that those addresses are being used by private entities. Xerox may have a lot of addresses but they're not giving me any (indeed, the RIR rules make it extremely difficult to transfer addresses; it's explicitly forbidden to sell them, though there are some clumsy ways around this). So, it's not clear that an allocation policy which disfavored US entities would really be any fairer than the current policy.

One possibility, of course, would be to make it much easier to transfer addresses, indeed to create a market in them. This would have the effect of freeing up some of the unused address space and generally produce a more efficient allocation, but would of course be even more favorable to developed world entities who can afford to pay more than those in the developing world. That's probably not what the UN and ITU have in mind when they talk about taking control of the Internet.


October 21, 2005

I'm planning to have some t-shirts made but I've never used any Internet t-shirt printing service other than CafePress, which is too expensive for bulk usage. I've got the art all done so all I need is to send it to them and get it positioned on the shirt properly. Do any readers have any experience with t-shirt printing services that they would like to share? Comments on Bay Area local services also welcome...

October 20, 2005

Say for the sake of argument that the EU/ITU/UN manage to convince the US to give up "control of the Internet", or at least the part of DNS that ICANN controls. Furthermore, assume that they get the root servers to go with them, so that in principle they can do whatever they want. How should we expect things to be different?

As I indicated earlier, there are two major types of top level domains, the country code TLDs (ccTLDs) and the generic TLDs (gTLDs). Policies for the ccTLDs are pretty clear, so except for the occasional dispute about who should operate a given ccTLD, things shouldn't change much.

That leaves us with the gTLDs, which is where all the controversy is. As far as I can tell, there are three issues that are generally regarded as controversial:

  • Who should operate the existing gTLDs?
  • Under what terms should new gTLDs be created?
  • What policies should gTLD operators use to determine who gets what second level domains (e.g., who gets amazon.com)?

Who should operate the existing gTLDs?
The original seven gTLDs are .com, .edu, .gov, .int, .mil, .net, and .org. Of these, .edu, .gov, .int, and .mil are restricted (this is already a point of contention because .gov and .mil are reserved for US entities, which doesn't sit well with foreigners; it's a legacy from when the Internet was the ARPANET). Until 2001 or 2002, if you wanted to use a gTLD, you had to use .com, .org, or .net. In 2001 and 2002, seven more gTLDs were added: .aero, .biz, .coop, .info, .museum, .name, and .pro. The new seven were added by ICANN, so there isn't too much controversy over who should operate them. Rather, the controversy is over who should operate the big three legacy gTLDs: .com, .net, and .org, with the elephant in the room being .com.

The reason that it's attractive to operate a gTLD is simply that it allows the operator to extract monopoly rents for each registration. If you want a domain in .com or .net, you have to pay Verisign (or pay someone who does). If you want a domain in .org, you have to pay the Public Interest Registry (or, again, pay someone who does). As you'd expect, this produces quite a bit of rent seeking, and ICANN's recent decision to let VeriSign retain .net resulted in quite a bit of complaining from the other bidders.

So, there's certainly some possibility that an ICANN replacement would reassign the existing gTLDs to someone else. While this is of enormous interest to the existing gTLD operators--or those hoping to supplant them--it would have practically no effect on end-users except perhaps to make the registration fees change a bit.

Under what terms should new gTLDs be created?
Another controversial question is what (if any) new gTLDs should be created. Many complain that ICANN is creating too few new gTLDs. Others complain it's creating too many. Indeed, the US government stopping ICANN from creating .xxx (for pornography) was part of what touched off this latest round of angst.

Realistically, though, this isn't really of much consequence to users either, because users overwhelmingly want to be in .com, .net, or .org. The following table shows the number of hosts found by the ISC domain name survey in all the gTLDs:


All of the new gTLDs put together only report 80,000 or so host names1, which is about .1% of those found in .com and about .05% of those found in .net. For comparison, the Dominican Republic (.do) has more hosts (81598) than all the new gTLDs put together. The obvious conclusion to draw here is that new gTLDs aren't very important. As a sanity check, ask yourself the last time you dealt with a web site that used one of the new gTLDs (as opposed to one of the generic ccTLDs like .tv or .to).

What policies should gTLD operators use to determine who gets what second level domains?
In the past, a lot of the displeasure with ICANN has been directed at its procedures for determining who gets which domain name (the famous Uniform Domain-Name Dispute-Resolution Policy). Much of the criticism has focused on the claim that ICANN is too friendly towards trademark holders in these disputes. I haven't investigated this thoroughly enough to have an opinion, but it's not at all clear to me that a procedure operated by the UN, EU, or ITU would be any less friendly towards big corporations.

1. Note that this survey works by doing reverse name lookups so it disfavors names that don't correspond to unique IPs. These mostly correspond to organizations too small to operate their own Web server. Thus, the survey tends to somewhat favor bigger organizations. Still, even if it were off by an order of magnitude the results would be striking. Note that this methodology is probably why .net is so much more popular than .com in this survey: a lot of dynamically assigned IP addresses (for clients) don't correspond to real host names and just reverse resolve to something like


October 19, 2005

One of the minor annoyances of the DVD format is that the studios love to disable the menu/fast forward buttons to force you to sit through things they want you to watch. (This is called a User Operation Prohibited (UOP)). I'd just about gotten used to having to wait for the FBI warning when they raised the bar. The Hulk DVD disables it during the trailers as well.

I'm given to understand that there are DVD players which let you bypass these overrides, but so far I've been too lazy to research which ones they were. I think that's about to change.


October 18, 2005

Dahlia Lithwick complains about Harriet Miers's evasion on the topic of abortion:
Nor, she told Sen. Charles Schumer of New York in no uncertain terms, does anyone know "my views on Roe v. Wade." But today it seems that a whole lot of pro-lifers not only know her views on the subject but were assured of her support in reversing it. Affixed to the mostly innocuous responses to the Judiciary Committee questionnaire she turned in this morning was another questionnaire, from 1989, which she filled out at the behest of Texans United for Life while running for Dallas City Council. In this document, the eager-to-please candidate pledged her willingness to actively support ratification of a constitutional amendment to ban all abortion, unless it was necessary to save the life of the mother. She promised to oppose the use of city money to "promote, encourage or provide referrals for abortions." She pledged to vote against the appointment of "pro-abortion persons" to any city boards or committees that dealt with health issues. (Here she appended this lawyerly qualification: "to the extent pro-life views are relevant.") She also promised to use her influence as a pro-life official to "promote the pro-life cause."

While it does seem likely that Miers is dissembling about her views on Roe, saying that you're in favor of a constitutional amendment doesn't necessarily mean that you're going to vote to overturn Roe. Remember that at least in theory--although Dan Simon will no doubt argue not in practice--judges make decisions on the law, not on their personal preferences. It's perfectly possible to believe that (1) abortion is bad, (2) it should be illegal, and yet (3) the constitution protects it. If you believed all those things, then you could certainly be in favor of a constitutional amendment to ban abortion. After all, consider that many people hold the opposite position: abortion should be legal and yet the constitution doesn't guarantee a right to abortion (i.e., that Roe was wrongly decided).


October 17, 2005

One of the most active areas of current security research is performance-enhancing drugs, both on the detection and the evasion side. Testosterone is particularly difficult to detect because there's basically no chemical difference between exogenous and endogenous testosterone (EPO is a somewhat easier case because the glycosylation is different) and you can't rely on testosterone levels because there's a lot of natural variation. The standard test relies on the testosterone/epitestosterone ratio, but there's a lot of variation there too and athletes can evade it by taking both testosterone and epitestosterone.

One clever technique, suggested by Southan et al., relies on the isotope ratio of carbon 12 (C-12) and carbon 13 (C-13). It turns out that the isotope ratio is determined by diet and therefore somewhat different in each individual. So, by comparing the C-12/C-13 ratio in testosterone to that of other precursors (e.g., cholesterol), you can determine whether the extra testosterone is exogenous. In order to make this technique work you first separate out the various compounds using gas chromatography (GC). You then use mass spectrometry (MS) to determine the isotope ratios of the various fractions. Unfortunately, the reactive groups on the steroids tend to react with the GC column (used to separate the fractions), which gives you lousy results. It's tricky to protect the groups because techniques that involve adding extra carbon atoms change the isotope ratios.
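For concreteness, here's a toy sketch of the comparison logic. All the numbers below, including the cutoff, are invented placeholders, not real lab thresholds:

```python
# Toy carbon isotope-ratio test: delta-13C is the per-mil deviation of a
# compound's C-13/C-12 ratio from a reference standard. Endogenous
# testosterone should track the athlete's own precursor (cholesterol);
# synthetic testosterone, made from plant sterols, is typically more
# C-13-depleted.

def looks_exogenous(delta13c_testosterone: float,
                    delta13c_precursor: float,
                    cutoff: float = 3.0) -> bool:
    """Flag the sample if testosterone is depleted relative to the
    precursor by more than `cutoff` per mil (placeholder value)."""
    return (delta13c_precursor - delta13c_testosterone) > cutoff

print(looks_exogenous(-24.5, -24.0))  # False: tracks the precursor
print(looks_exogenous(-29.5, -24.0))  # True: suspiciously depleted
```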

The Nov 30 issue of Rapid Communications in Mass Spectrometry contains a paper by Sephton et al. describing a technique (hydropyrolysis) for protecting these reactive groups without changing the carbon ratios, potentially rendering this a viable detection technique.

For discussion: How would you counter this detection technique?

AES Counter Mode Cipher Suites for TLS and DTLS
N. Modadugu and E. Rescorla

This document describes the use of the Advanced Encryption Standard (AES) Counter Mode for use as a Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) confidentiality mechanism. [txt, html]


October 15, 2005

Allan Schiffman has a different theory about what home HIV tests are for:
All that misses the point. The unmet need that a fast "oral fluid" HIV antibody test satisfies is screening prospective sex partners. Here, handsome -- let me take your glass to the kitchen to freshen-up your drink.

Good idea, but I doubt you could do it surreptitiously, since it's not a saliva test:

No. The test uses oral fluid, which is slightly different from saliva. To perform the test, the person being tested for HIV gently swabs the device completely around the outer gums, both upper and lower, one time around and inserts it into a vial containing a developer solution. After 20 minutes, the test device will indicate if HIV antibodies are present in the solution by displaying two reddish-purple lines in a small window in the device.

Of course, you could still slip some roofies in your prospective partner's drink and then take a sample at your leisure....


October 14, 2005

It's of course well-known that Caller-ID (and its friend ANI) are untrustworthy with VoIP but I got a dramatic demonstration of that the other day. A friend called me with Skype and the number showed up as 0000123456. Nice work, Skype!

October 13, 2005

In the comments section, Colby Cosh makes a point which somehow totally escaped me when writing my post about HIV testing: home HIV tests have historically had a fairly high false positive rate, on the order of a percent. So, people could "discover" they have HIV when they actually don't.
That said, you have misrepresented the underlying issue just a teensy bit (and in a way that is surprising coming from you). The concern of the authorities is not with "people who discovered they had HIV", but with "people who got an AIDS diagnosis." These are not the same thing. As I understand it, the current generation of quick, cheap AIDS tests produce false positives about 1/200 of the time. The figure was presumably greater as long ago as 1987; if the false positive rate were as high as 1%, you can maybe see how a regulator would have misgivings about permitting OTC sale. The deliberate inculcation of panic about AIDS amongst heterosexuals increases the danger of psychological harm from the technology; even today people who had nothing like a 0.5% chance of having AIDS would be getting tested and freaking out when two little lines popped up instead of one. (Or just screwing up the damn directions on the box.)

The OraQuick Advance "oral fluid" test being discussed here has a false positive rate of about .2% (the blood version has better performance here). In 2003, there were about 4,000 HIV diagnoses in heterosexual, non-drug-using males and 7000 diagnoses in heterosexual females. Since there are 200-odd million people in this category, your risk is about .5% if your (heterosexual) behavior is about average. This means that if you get a positive result, there's only about a 70-80% chance that you're actually infected. Of course, if your behavior is low risk within this category, the chance that you're actually infected is even lower. Clearly, then, you need to take this into account before you take a home HIV test. However, it's not clear why the FDA is in a better position to make this decision for people than they are themselves.
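If you want to check the arithmetic, the 70-80% figure falls straight out of Bayes' theorem. Here's a quick sketch; the 99% sensitivity is an assumption (it barely matters for the conclusion):

```python
# Positive predictive value of a screening test via Bayes' theorem,
# using the numbers above: prevalence ~0.5%, false positive rate 0.2%.

def ppv(prevalence: float, sensitivity: float, false_pos_rate: float) -> float:
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_pos_rate
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.005, 0.99, 0.002), 2))  # 0.71: in the 70-80% range
```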


October 12, 2005

The FDA is considering whether to allow home HIV testing. This has been possible for years, but the FDA has blocked it because of concerns about how people who discovered they had HIV would react:
The test, called OraQuick Advance Rapid HIV Antibody Test, is presently sold only to doctors and clinics. It has already proven to be effective, safe and easy to use. So the remaining hurdles are decisions by the F.D.A. about whether approving such a device is a good idea and whether people can understand the product's label well enough to administer it to themselves.

A 1987 application for an at-home AIDS test kit led to years of controversy. At the time, AIDS advocates and public health officials predicted that such a test would cause widespread suicides, panic and a rush to public health clinics.

At hearings, AIDS advocates handed out copies of an obituary of a San Francisco man who jumped off the Golden Gate Bridge after discovering that he was infected with HIV. An official for the Centers for Disease Control and Prevention told the F.D.A. that such tests could lead to "a sudden increase in referrals to already overburdened health clinics," according to an F.D.A. document.

Federal regulators stalled the application for nine years, and at-home AIDS testing never caught on.

Currently there are three main options for getting HIV testing:

  • Getting tested by a doctor. Unfortunately, your doctor creates a permanent record, which isn't something you necessarily want if you're HIV+.
  • In-person confidential testing. This isn't necessarily available in all locations, and even if it is, it still requires going somewhere and having someone actually see who you are and that you're HIV+ (assuming you are). This probably isn't a big deal in a big city, but what if you live in a small town where there's no confidential testing clinic or where the people working in the clinic have a reasonable chance of knowing you?
  • "At-home" testing. Actually, this is mail-in testing, where you get your results by phone (where, presumably, you can be offered counseling). This is confidential (assuming you don't buy the kit with a credit card), but a serious pain in the ass.

Clearly, this test is substantially better. But for 18 years you haven't been allowed to have it because the FDA doesn't trust you to be able to handle the truth.


October 11, 2005

OpenSSL has announced a new vulnerability in their SSL implementation. Ordinarily, these things are simple (and boring) coding errors, but this one is kind of an interesting study in how things can go wrong with security protocol implementations.

One common problem in the design of protocols, especially security protocols, is version transitions. If you have multiple versions (or multiple algorithms) you generally want two implementations to use the strongest version (or algorithm) they have in common. But you also want switch-hitting implementations to roll back when they contact older implementations, in order to maximize compatibility. You also want to be able to detect if an active attacker is trying to downgrade you to a weaker set of parameters.

There are three commonly-used versions of SSL: SSLv2, SSLv3, and TLS. Despite the names, TLS and SSLv3 are very similar (and fairly strong) and SSLv2 is different (and somewhat broken). We naturally want to stop active attackers from forcing people's connections down from SSLv3/TLS to SSLv2. To make matters worse, while SSLv3/TLS has defenses against downgrading to weaker algorithms, SSLv2 does not. This makes it even more important to detect downgrade to SSLv2, because the attacker can roll you back to SSLv2 and then to a weaker algorithm inside SSLv2. Of course, it also makes it harder to detect downgrade to SSLv2 by active attackers.

In order to prevent this, SSLv3 and TLS use an interesting trick to detect rollback. When SSLv3 and TLS-capable client implementations communicate with an SSLv2 implementation, they use a special type of padding in the RSA encryption. SSLv3 and TLS server implementations automatically detect this padding and generate an error. What makes this work is that this looks like legitimate padding to ordinary SSLv2-only implementations.
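Here's a sketch of how that padding trick works. This is simplified from real PKCS#1 handling, and the modulus size and secret are placeholders:

```python
import os

# PKCS#1 v1.5 block for SSLv2 key exchange:
#   0x00 0x02 <non-zero random padding> 0x00 <secret>
# A v3-capable client sets the last 8 padding bytes to 0x03. An SSLv2-only
# server sees ordinary padding; a switch-hitting server that finds itself
# speaking SSLv2 spots the marker, concludes it's been rolled back, and aborts.
ROLLBACK_MARKER = bytes([0x03] * 8)

def pad_master_secret(secret: bytes, modulus_len: int, signal_v3: bool) -> bytes:
    pad_len = modulus_len - 3 - len(secret)
    padding = bytearray()
    while len(padding) < pad_len:          # padding bytes must be non-zero
        b = os.urandom(1)
        if b != b"\x00":
            padding += b
    if signal_v3:
        padding[-8:] = ROLLBACK_MARKER     # embed the rollback signal
    return b"\x00\x02" + bytes(padding) + b"\x00" + secret

def server_detects_rollback(padded: bytes, secret_len: int) -> bool:
    # The padding sits between the 0x00 0x02 header and the 0x00 separator.
    padding = padded[2:-(secret_len + 1)]
    return padding[-8:] == ROLLBACK_MARKER

# A server negotiating SSLv2 with a v3-capable client sees the marker:
msg = pad_master_secret(b"\x01" * 48, modulus_len=128, signal_v3=True)
print(server_detects_rollback(msg, 48))  # True -> abort the handshake
```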

So far so good. Here's the problem: old versions of Microsoft Internet Explorer always used this kind of padding even when SSLv3 had been turned off. This meant that there was no way to make those IE versions work in SSLv2 mode with a conforming switch-hitting SSLv2/SSLv3 implementation. In order to compensate for this, OpenSSL had a flag, SSL_OP_MSIE_SSLV2_RSA_PADDING, that turned off this check (thus making downgrade attacks possible, but also preserving compatibility). That's not a crazy design decision, but here's where things go wrong: there are a lot of such client bugs, all of which potentially need workarounds, so OpenSSL has a flag called SSL_OP_ALL, which turns them all on for maximal compatibility. This flag is set by default in some common OpenSSL-using programs, like mod_SSL.

Of course, if you're smart, you've probably already turned off SSLv2; very few clients are SSLv2 only these days and as I've said, it's not very secure under active attack. In that case, this problem doesn't affect you. But in most programs, having SSLv2 on is the default, and experience indicates that defaults are very powerful. The good news, of course, is that this is only useful for an attacker mounting an active attack on a connection and there's not a lot of evidence that those happen frequently. And of course, there are other potential active attacks (especially social ones) that we don't really control for, so it's not clear how much difference this particular vulnerability makes. Still, now would probably be a good time to turn off SSLv2, patch your copy of OpenSSL, or both.


October 10, 2005

NANOG is all abuzz with the news that Level(3) has depeered Cogent. What the heck does this mean and why do you care?

A Network of Networks
Most people's experience of getting Internet access is simple: you call up your ISP and order a line. You pay them some chunk of money per month and they carry traffic to and from your house. Now, any real ISP has a big network, so they have lots of customers just like you. The figure below shows the simplest such network, where every one of the ISP's customers is connected to the same central router (this is called a star configuration). When customer C1 wants to talk to customer C2, they send traffic to the ISP router, which forwards it to C2. Return traffic follows the same path in reverse.

This is of course a very simple network. A big ISP will have more than one router in multiple locations. These routers are somehow interconnected in a way we don't really care about here. For the purposes of this discussion we can just think of the ISP's network as one big opaque blob that knows how to route traffic from any customer to any other customer.

What I've just described works great when all you want to do is talk to other people on the same ISP but as you may have noticed, there's more than one ISP in the world. If a customer on ISP A wants to talk to a customer on ISP B, they must be connected somehow. The simplest such topology looks like this:

Clearly, you can extend this to three ISPs or more. If we ignore the interior structure of the ISPs' networks, it looks something like this:

In the figure above, each ISP is connected to the other two ISPs. Now, think about what happens when a customer of A wants to send a message to a customer on B. A has two links, one to B and one to C. In order for this to work properly, it has to know to send it down link 1 rather than link 2. Routing protocols (BGP in this case) are used to let the router(s) at A know which hosts (or networks) are on which ISP and hence which link to send packets down. The details of how this works are complicated, but roughly speaking each ISP advertises the network addresses that it knows how to reach. (For technical reasons, these are known as prefixes.1) This lets each other ISP build up a table of routes for the traffic to follow.
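To get a feel for what that table of routes does, here's a toy longest-prefix-match lookup using Python's stdlib ipaddress module. The prefixes and link names are invented for illustration; real BGP is vastly more complicated:

```python
import ipaddress

# Each ISP advertises the prefixes it can reach; the router picks the most
# specific matching prefix for each destination address.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):   "link 2 (via C)",
    ipaddress.ip_network("10.20.0.0/16"): "link 1 (via B)",  # more specific
}

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    if not matches:
        raise ValueError(f"no route to {dest}")
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("10.20.1.1"))  # link 1 (via B)
print(next_hop("10.99.1.1"))  # link 2 (via C)
```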

Peering and Transit
What I've just described works great if every ISP is connected to every other (the technical term for this is a full mesh), but there are zillions of ISPs, so that's not very convenient. What actually happens is that there are a relatively small number of big ISPs and that little ISPs, rather than being connected to each other, connect to some big ISP, who carries some or all of their traffic to the rest of the network. So, if we imagine adding two such small ISPs to the network we drew before, we get something like this:

In this network, ISP D has connected to ISP A. Any traffic to any other part of the Internet not operated by D or by A must go through A. The technical term for this is that ISP D is buying transit from ISP A. E's situation is similar except that they're buying transit from C.

At this point, it's worth noting that ISP D's position with respect to ISP A is very much the same as your position with respect to your ISP. In both cases, you're paying someone else to carry your traffic to the rest of the Internet. In fact, it's very common for end user customers to connect not a single host to their ISP but rather an entire network consisting of a number of computers, sometimes distributed over a variety of locations. You wouldn't be wrong to think of an end-user like this as a degenerate sort of ISP--one that doesn't have any customers but their own users.

Now, it's easy to get your Internet service this way, but there are disadvantages. The first disadvantage is that you're paying someone else to give you service. And much like the situation with your own ISP, the more bandwidth that ISP D consumes on the link to ISP A, the more ISP A charges him. The second disadvantage is that you're totally dependent on one ISP. If something goes wrong on that ISP, then you're totally cut off from the rest of the Internet. Finally, imagine that you're served by ISP D and you want to communicate to someone who's served by ISP E. Traffic needs to go from D to A to C to E. As the supply lines get longer, it introduces latency and brittleness.2 A partial solution to the second and third problems is to establish a connection to a second ISP. This gives you both redundancy and a shorter path to that ISP's customer. The technical term for this is multi-homing (if you only have one connection, you're single-homed).

Consider the case of ISP D and ISP E. They both have equivalently good connections to the Internet, through a big ISP (A and C respectively) that's connected to all the other big ISPs. However, as noted before, traffic between them goes through a fairly inefficient route (D,A,C,E). They can improve this situation by connecting up directly, through link 6, as shown below.

So, you'll remember that I said that D pays A for transit and E pays C. So, you might ask does D pay E or the other way around? The answer is, it depends. If D is much bigger than E, E may pay D (because getting to its customers is more valuable to E) and if E is much bigger than D, it may go the other way around. However, if they're roughly equivalent sizes, they may choose to just connect and exchange traffic for free (well, technically without paying a fee. There are still all the equipment costs associated with getting lines attached to the same location, etc. This can sometimes be more expensive than buying transit through an existing connection!). This is called peering. Most of the big ISPs do some peering and the very biggest ones (called Tier 1s) never pay anyone for transit. They either peer or sell transit. Generally, it's considered a point of prestige for carriers to peer rather than buy transit--nobody wants to feel like they're not one of the big boys.

In order to understand the situation with Cogent and Level(3), you need to understand one more thing. When you peer with someone else, you often don't carry their traffic to other parts of the Internet. I.e., traffic from D to C goes D,A,C, not D,E,C. The way that this works technically is that A advertises D's prefixes to the rest of the Internet but E does not. D, of course, advertises its prefixes to both A and E, but E filters those prefixes when it advertises its own routes to C. This means that the link between D and E provides redundancy only for D-E communication. If D's link to A goes down, he won't be able to talk to anyone but E.
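That filtering rule can be sketched as a simple export policy: routes learned from a customer go to everyone, while routes learned from a peer (or a transit provider) go only to your own customers. A toy version:

```python
# Toy route-export policy: a route learned from a customer is advertised
# to everyone (you're paid to carry it), but a route learned from a peer
# or transit provider is advertised only to your own customers. This is
# why E doesn't re-advertise D's prefixes to C.

def should_export(learned_from: str, advertise_to: str) -> bool:
    if learned_from == "customer":
        return True                      # customer routes go everywhere
    return advertise_to == "customer"    # peer/transit routes: customers only

# E heard D's prefixes over the peering link (link 6)...
print(should_export("peer", "transit"))   # False: E won't pass them up to C
# ...but E's own customers still learn how to reach D:
print(should_export("peer", "customer"))  # True
```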

Level(3) and Cogent
With this background, we're now equipped to understand what's going on between Level(3) and Cogent. Level(3) is a Tier 1 provider; they don't pay for transit. Cogent is an almost-Tier 1; generally they peer, but occasionally they pay for transit to a few select networks. Until very recently, Level(3) and Cogent peered, but Level(3) was obviously unhappy with that relationship and wanted Cogent to pay them for transit. Cogent didn't want to, probably partly for financial reasons and partly for prestige reasons. When negotiation didn't work out, Level(3) terminated the peering relationship. Cogent responded by offering free transit for a year to Level(3) customers, an obvious attempt to take business away from Level(3). Level(3) has temporarily reconnected Cogent until November 9th.

Because Cogent isn't paying for transit to Level(3) (and Level(3) certainly isn't paying for transit to Cogent), packets can't pass between the two networks. This only affects you if you (or your ISP) is single-homed to either Level(3) or Cogent (which is a lot of people). If you are, you won't be able to talk to anyone else who is single-homed with the other ISP. If you aren't, then you won't have a problem.

Basically, what's going on here is a game of chicken. It's valuable to both Level(3) and Cogent to have their customers be able to talk to the other. They're both suffering when they're not connected, but they both figure that the other will give in first. Level(3) can give up by turning back on the connection with Cogent. Cogent can give up by agreeing to pay Level(3) for transit or someone else for transit to Level(3). In the past, Cogent has pursued this strategy at least twice, once successfully (Teleglobe) and once unsuccessfully (OpenTransit). It will be interesting to see what the result is this time.

This post relies heavily on the discussion of this event on NANOG (in particular this post by Richard A. Steenbergen), and on discussions with Dave Meyer. All errors are, as usual, my own.

1 The way that Internet routing works is that a route advertisement is for a contiguous block of IP addresses. For instance, the route 192.168/16 means "any IP address whose first two (most significant) bytes are 192.168". Because the addresses are written most significant to least significant, this means that any address in the block (e.g., 192.168.0.1) must have the block's prefix.

2 The key parameter here is the number of ISPs (actually Autonomous Systems (ASes)) that the traffic has to pass through. BGP uses the AS_PATH parameter to carry this information.

Fajid Mazil observes that the real action (angst) in domain names is in the generic Top Level Domains (gTLDs) rather than the country code TLDs (ccTLDs). Basically, there are two types of top level domain (TLD): generic ones like .com, .org, etc., and country-specific ones like .us, .uk, .ca, etc.

The way that ccTLDs are allocated is described in RFC 1591 and is mostly non-contentious, because the IANA has extremely limited discretion:

The IANA is not in the business of deciding what is and what is not a country. The selection of the ISO 3166 list as a basis for country code top-level domain names was made with the knowledge that ISO has a procedure for determining which entities should be and should not be on that list.

Naturally there's a fair bit of whining about this: Should Taiwan have a ccTLD? (it does). How about Palestine? Yep. Tibet? No. But since the IANA doesn't have any real discretion in this matter, it's not like the US is somehow imposing its preferences on the world through ICANN/IANA.

Of course, just knowing which domain names should exist doesn't necessarily tell you who should get to control them. For instance, there was for a while some question about who should be running Iraq's domain (.iq). In general, though, if you're a national government in clear charge of your territory, you're going to get control of the ccTLD.

Incidentally, once the ccTLD is delegated, the country code manager has quite wide discretion about what they do with it. Many countries (e.g., the UK, or Canada) do the expected thing and use it to allocate domains for in-country users. Others, such as Tonga just sell the domains as if they were .com. Tuvalu is particularly lucky in this regard, having been assigned the country code .tv. Unsurprisingly, Tuvalu seems to be treating this as mostly a money-making opportunity. Their registry is run by VeriSign.


October 9, 2005

The EU's desire to "control the Internet" is getting a lot of press. Unfortunately, the press is doing a lousy job of explaining what this means, probably because it's not a well-formed concept. Nobody really "controls" the Internet, any more than anyone controls the market. The Internet is basically what you get when a bunch of people agree to connect to each other using more or less the same protocols. That isn't to say that nobody controls segments of the Internet: ordinary legal mechanisms can be used to compel individual actors to do specific things, as 2257, the Yahoo/France incident, and the Great Firewall of China demonstrate quite clearly. But none of these authorities can really be said to control the Internet as a whole, and it's not what people mean when they talk about control of the Internet. What they mean is ICANN.

Remember that I said that the Internet requires that people agree to do more or less the same things? Well, there are two important things that ICANN controls: the names that are used to map hosts on the Internet (for the present purposes, this means things like Web sites, mail servers, etc.) to IP addresses and the IP addresses themselves. (They also control protocol code points, but those are typically pretty uncontroversial).

Domain Names
The Domain Name System (DNS), used to map host names to IP addresses, is a hierarchical, distributed system, with each level of the name controlled by the one above it. Take a name like educatedguesswork.org. The Public Interest Registry operates .org. They decide which server gets to control educatedguesswork.org. When I wanted that domain name, I registered it with them1, for which I pay a fairly nominal fee.

ICANN controls which top level domains (.org, .com, etc.) exist and who gets to operate them, and through that to some extent what their policies are. This tends to be a beauty contest, and some of their decisions on all of these fronts have been controversial, most recently .net and .xxx.

To understand how ICANN controls this stuff, you need to have some idea how DNS name resolution works. Take, for instance the name www.educatedguesswork.org. The way that this is resolved looks something like this:

  1. Contact one of the root servers, e.g., a.root-servers.net and find the server for .org. This gives us six servers, TLD1.ULTRADNS.NET, TLD2.ULTRADNS.NET, etc.
  2. Contact TLD1.ULTRADNS.NET, and ask for educatedguesswork.org. This gives us 3 servers, ns1.dreamhost.com, etc. 2
  3. Contact ns1.dreamhost.com to get www.educatedguesswork.org. This gives us the IP address.
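Just to make the delegation chain concrete, here's a toy resolver that walks the three steps above. The server names match the example; the final IP is a placeholder from the TEST-NET range, not the site's real address:

```python
# Each server either delegates a suffix to a more specific server or
# answers with an address.
ZONES = {
    "a.root-servers.net": {"org": ("delegate", "TLD1.ULTRADNS.NET")},
    "TLD1.ULTRADNS.NET":  {"educatedguesswork.org": ("delegate", "ns1.dreamhost.com")},
    "ns1.dreamhost.com":  {"www.educatedguesswork.org": ("answer", "192.0.2.10")},
}

def resolve(name: str, server: str = "a.root-servers.net") -> str:
    for suffix, (kind, value) in ZONES[server].items():
        if name == suffix or name.endswith("." + suffix):
            if kind == "answer":
                return value             # step 3: the actual A record
            return resolve(name, value)  # steps 1-2: follow the delegation
    raise LookupError(f"{server} has no delegation for {name}")

print(resolve("www.educatedguesswork.org"))  # 192.0.2.10
```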

Now, what ICANN controls is the first step of the operation, where you look up .org. They do this by pushing out the root zone to the various root name servers, imaginatively named A-M. But here's the interesting thing: what makes those the root servers is that people's name servers are configured to point to them. If people reconfigured their name servers to point to some other set of root servers, the control of the root zone would change just like that. On the other hand, as long as people's resolvers don't change, then it doesn't much matter what the governments of various countries try to do.

If the governments in question are serious, then, what they'll probably do is require ISPs to reconfigure their servers. Since most people just use their local ISP's server, this would get most of the job done. Of course, this assumes that they can all agree on what the new roots should be and that the US ISPs (and software vendors) go along. If not, the result will be a partition in the namespace--not a good outcome.

IP Addresses
Less well known, but probably more important than domain names, are IP addresses. The Internet Assigned Numbers Authority (IANA) allocates IP address blocks to the Regional Internet Registries (RIRs). The RIRs allocate them to ISPs and end users.

Why this matters is that IP addresses (well, at least IPv4 addresses, which is what everyone uses) are a somewhat scarce commodity. Back in the old days, they used to be allocated a lot more freely, so they're distributed fairly unevenly: the US and Europe got the bulk of them, and things are a lot more scarce outside those areas. I suspect there's some angst about this as well, but because it's less overtly political, there's less public fulminating about how unfair it is.

The situation with control of IP addresses is similar to that with domain names, but even more anarchic, since there's no real trustworthy master list of all the IP address assignees (though there are bogon lists of addresses which have not been assigned to anyone). The way that one ISP learns about addresses assigned to other ISPs is by BGP advertisements from those ISPs, who are mostly trusted not to generate advertisements for addresses they haven't been assigned. Given this, it's not clear how ISPs would treat advertisements for addresses that they know weren't allocated by ICANN/IANA. If a substantial fraction choose not to accept them, you end up with a partition again--not exactly a desirable outcome for the recipient of that new space.

1 Technically, I arranged for Dreamhost to have it registered, but this particular detail doesn't matter here.

2 Note that if you don't have the IP address for ns1.dreamhost.com, you'll need to follow a similar procedure to get it. I'm omitting that for the sake of clarity.


October 7, 2005

Brian Palmer points out that while the Bush administration opposes the anti-torture bill, they also disclaim responsibility for reported prisoner abuses:
Going along with your argument, of course, is that the White House claims that they're -not- torturing people, and only a few bad apples are mistreating prisoners. So the restriction would, if Bush were being honest, only be theoretical and not affect day-to-day operations at all.

This isn't necessarily inconsistent, of course. The McCain amendment forbids interrogation techniques "not authorized by and listed in the United States Army Field Manual on Intelligence Interrogation" and "cruel, inhuman, or degrading treatment or punishment." The charitable interpretation here is that the Administration wants to be able to use techniques that aren't in the Army Field Manual but fall short of the kind of abuse that happened at Abu Ghraib. It would be interesting to hear them explain what techniques those are. Perhaps they could be added to the Army Field Manual. If they're not prepared to do that, doesn't that say something?


October 6, 2005

Check out the party line on why McCain's anti-torture bill is so bad that it should be the first bill that Bush vetoes:
The White House has threatened to veto the $440 billion military spending bill to which the measure was attached, and Vice President Dick Cheney has lobbied to defeat the detainee measure. White House spokesman Scott McClellan objected that the measure would "limit the president's ability as commander-in-chief to effectively carry out the war on terrorism."

Well, duh.

Look, this is a totally vacuous argument. Yes, this limits the president's freedom of motion, but there are all sorts of restrictions on his ability as commander-in-chief. For instance, he can't have soldiers summarily executed for incompetence, even though this would arguably help him effectively carry out the war on terrorism. That's how things work in a democracy.

Unless McClellan's argument is that the president should have unfettered discretion to pursue the war in any way he sees fit, then the mere argument that this would limit his ability to do so doesn't get the job done. What's needed here is an argument for why the president specifically needs the ability to torture detainees. I'm not saying that there isn't such an argument, but that's not what we're getting from the Administration here.


October 5, 2005

I remember when your average standards meeting didn't have any Internet connectivity at all. This week I'm at the IEEE 1609.1 meeting in Albany, NY. They have wireless but are people happy? No.... They're too busy complaining that a bunch of ports (Jabber/SSL [5223], POP3S [1995], ...) are blocked.

October 4, 2005

Probably as a reaction to the great aircraft lavatory weapon-hiding incident, airlines have started placing security stickers on potential lavatory storage locations. I haven't tried prying one off, but I assume they're the usual tamper-evident seals [*] which can't be unstuck and restuck without some obvious visual sign and that the aircraft is periodically inspected for seal integrity. I've seen two kinds of sticker, one silver and one blue. I believe the blue ones are newer since I've seen them more recently and in at least one case a blue sticker was stuck over a silver one. Each sticker has some kind of number on it. The silver numbers seem to repeat but the blue numbers seem to be semi-unique. (A little Web searching didn't turn up any details).

The big question is how hard it is to fake up some stickers that will fool casual inspection. The visual part is no problem, especially for the silver stickers. The obvious problem is the serial numbers. If the serial number of each sticker is recorded (and checked!) then it becomes impractical to just make up a bunch of stickers at home and replace whatever sticker you destroy. I wonder how the security people inspect them and how often.

One thing I noticed is that the blue stickers are basically unreadable without a flashlight, which is probably considered a feature (it's hard to replicate) but means that inspection is slower and easier to bungle. In any case, even if serial numbers are checked you might be able to use a label printer to print new serial numbers onto premade stickers once you know the serial number of the sticker you want to replace. Of course, if you're targeting a specific plane you can figure out what the serial numbers are on one flight and then replace on the next. This sounds like a lot of work, though, especially since there are other places to hide stuff on an aircraft (or in the airport).
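The serial-number check described above can be sketched as follows. All the names and serial formats here are hypothetical; the point is just that the seals only add security if someone actually compares the recorded serials against what's on the aircraft, and escalates on a mismatch or a missing seal.

```python
from typing import Optional

# Hypothetical record of which seal serial belongs at each location.
recorded_seals = {
    "lav1-mirror": "B10293",
    "lav1-panel": "B10294",
}

def inspect(location: str, observed_serial: Optional[str]) -> str:
    """Compare the seal observed at inspection time against the record.

    A missing or mismatched seal is what should trigger the (possibly
    expensive) follow-up inspection discussed below.
    """
    expected = recorded_seals.get(location)
    if observed_serial is None:
        return "seal missing: escalate"
    if observed_serial != expected:
        return "serial mismatch: escalate"
    return "ok"
```

Note that if the inspector only checks that *a* seal is present and intact, rather than running this comparison, then a home-printed replacement sticker defeats the whole scheme.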

Second, what happens if one of the seals is broken? For the system to be of use, this must trigger some kind of inspection. If the inspection is cheap, that's no big deal. But if it's onerous, then tampering with a few seals would force the airlines and TSA to incur a pretty substantial expense (and potentially delay).

Finally, it's not clear what threat is being defended against. Remember that people are generally not searched entering the plane, so it's easy to just hide your weapon (or whatever) in the airport somewhere (inside security) and then carry it on the plane in your bag. Is there some obvious advantage to hiding contraband on the plane proper that I'm overlooking?


October 2, 2005

Adam Shostack worries that he might need to eat in a disaster situation:
Since Katrina, I've been trying to spend about $25 a week on disaster preparedness. Fortunately, I already own some basic camping gear, so I'm starting out by storing more food and water. My pantry tends to be thin on food that can be eaten without preparations. I have powerbars and snack bars so I've been adding canned foods, trail mixes, and I'm going to get a couple of army "meals-ready-to eat." Each of those tastes about as good as a brick, but is far more nutritious: Each has about 2,000 calories, which is a day's eating.


One of the things I learned from Eric's posts is to think about water not only as hydration, but also sanitization, and so bought a few 8 oz jugs of hand sanitizer. Another thing I learned, as I was storing the trail mix: Check the 'best by' date on it. It turns out that one jar I got has a 'best by' date in January 06. And it looked so dehydrated and unappealing.

The final food question is caffeine. I don't want to be stressed out, and have withdrawal symptoms at the same time. Nor do I want to be munching coffee beans raw. I did get some ground coffee, which can be made to work if I have heat. I could assume that my (gas) stove will work, and get a French press. I could get a camp stove, or a camp coffee maker. I could get chocolate-covered espresso beans. None of these seem really satisfactory.

MREs are a pretty popular choice because they're nutritionally complete, have a long shelf life, and aren't totally disgusting (though there's apparently a lot of variation in how good the flavors are). They can also be bought with a heating pack so you don't need a stove. On the downside, they're fairly heavy. Each MRE contains about 1000-1200 kcal and they weigh about 20 oz each, so it's about 50 kcal/oz. This is no big deal if they're in your basement or car, but it matters if you have to carry them around. They're also fairly expensive, typically around $6 each, so about 160 kcal/dollar.

If you're not a picky eater, you can do better with survival rations. The industry standard here is . At about 150 kcal/oz, they have about three times the caloric density by weight of MREs. Typical prices are about $8/bar, so that's about 450 kcal/dollar. They're rated for a five year shelf life. I broke one open a while ago and they taste OK, kind of like lemony shortbread. I'm not saying I'd want to eat them for dessert every night, but they're far from intolerable. Kevin Dick tells me he's also fond of ration tablets (I think these.) I've never tried them so I can't give an opinion on how they taste. I've heard claims of shelf lives up to 10 years, but I don't have any independent data.
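The back-of-the-envelope comparison in the last two paragraphs works out as follows. The numbers are the rough figures from the text (a 24 oz ration bar weight is assumed so that the stated 150 kcal/oz density and ~3600 kcal total are consistent); none of this is current pricing data.

```python
# MRE figures from the text: ~1000 kcal, ~20 oz, ~$6 each.
mre_kcal, mre_oz, mre_price = 1000, 20, 6.0

# Ration bar figures: 150 kcal/oz and $8/bar from the text;
# a 3600 kcal / 24 oz brick is assumed for consistency.
bar_kcal, bar_oz, bar_price = 3600, 24, 8.0

mre_density = mre_kcal / mre_oz        # 50 kcal/oz
bar_density = bar_kcal / bar_oz        # 150 kcal/oz, 3x the MRE
mre_per_dollar = mre_kcal / mre_price  # ~167 kcal/dollar ("about 160")
bar_per_dollar = bar_kcal / bar_price  # 450 kcal/dollar
```

So by weight and by dollar the ration bars win by roughly 3x, which is the trade the text describes: you give up variety and palatability for density and price.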

An alternative strategy (used by Kevin, I believe) is just to keep a large stock of energy bars on hand and rotate them frequently. This obviously works a lot better if you eat a lot of energy bars anyway and you have the discipline to rotate. In my experience, the chocolate-covered ones get kind of messy if they get hot, so you may want to stick with uncoated ones. My experience on camping trips is that Clif bars produce less palate fatigue than the more synthetic-tasting PowerBars, but it's obviously an issue of individual taste.

As far as caffeine goes, if you want to make coffee I advise getting a camp stove. For this situation, I advise the MSR International, which will burn white gas, kerosene, and unleaded. This affords you the most flexibility in situations where it's hard to get fuel. (Note, for ordinary camping use I recommend the JetBoil, but it relies on pressurized gas, which may be hard to come by when being chased by an army of zombies).

If you need the caffeine but don't care about coffee, you could probably get by with caffeine tablets. Also, some energy gels have caffeine in them—though significantly less than in coffee—but then you're back to the shelf life issue.


October 1, 2005

The FBI has admitted that occasionally they tap the wrong phone:
The 38,514 untranslated hours included an undetermined number from what the FBI called "collections of materials from the wrong sources due to technical problems."

Spokesman Ed Cogswell said that language describes instances in which the tap was placed on a telephone number other than the one authorized by a court.

"That's mainly an instance in which the telephone company hooked us up to the wrong number or a clerical error here gives us the wrong number," Cogswell said.


"What do you mean you are intercepting the wrong subject? How often does it occur? How long does it go on for?" said James Dempsey (search), executive director of the Center for Democracy and Technology.

David Sobel (search), general counsel of the Electronic Privacy Information Center, said technological advances have made it harder, not easier, to "conduct wiretapping in a surgical way" because digital communications often carry many conversations. "It's not like the old days when there was one dedicated line between me and you," Sobel said.

The FBI has acknowledged errors in the past. An FBI memo from 2000, made public two years later, described similar problems in the use of warrants issued by a court that operates in secret under the Foreign Intelligence Surveillance Act. In 2002, an FBI official said the bureau averaged 10 mistakes a year in such cases.


The FBI is not supposed to use material it collects either by mistake or from people who happen to use phones that are tapped legitimately, but that requirement doesn't satisfy some lawmakers.

"They have recorded the information, but they're saying, 'Trust us, we won't listen to what we recorded,"' said Rep. Bobby Scott (search), D-Va. "People ought to be concerned."

The only thing that's surprising here is that anyone would be surprised that the FBI occasionally intercepts the wrong phone call. Think about how often you misdial a phone number that's right in front of you. Why would you expect the FBI not to make mistakes when they key in the number to intercept?

As for Rep. Scott's point that we have to trust the FBI, that's certainly true, but then we have to trust them not to forge warrants they give to the phone company, not to use scanners to listen to cell calls, not to break into your house and plant bugs without a warrant, etc. Now, you may not think that the FBI is trustworthy enough not to do these things (I'm not sure I do) but why should you be particularly concerned about them not listening to some surveillance of some number which is most likely owned by some entirely different person than the one they're trying to target? I'm much more concerned about the FBI intentionally targeting people they vaguely suspect but don't have enough evidence to get a warrant for than I am about them accidentally tapping people who happen to have similar phone numbers to people they do have warrants for.