EKR: February 2006 Archives

 

February 28, 2006

Eszter Hargittai writes:
On occasion, I get emails in which people address me as Mrs. Hargittai. I'm not suggesting that people need know my personal history or preferences. However, if you are going to contact someone in a professional context and they have a Ph.D. and they teach at a university (both of which are very clear on their homepage where you probably got their email address in the first place), wouldn't you opt for Dr. or Professor?

Most of the time when someone contacts me and says "Dear Dr. Hargittai" or "Dear Professor Hargittai" the first line of my response is: "Dear X, please call me Eszter." So the status marker that comes with these is not what's of interest to me. Rather, I'm intrigued by how gender ties into all this and would love to hear how male junior faculty get addressed in such situations.

...

When in doubt and you don't have the necessary information, how about just writing/mentioning both first and last names and skipping the rest?

Two thoughts here. I don't have a Ph.D. but I do enough academic-type work that I get a modest amount of correspondence. Not infrequently, people refer to me as "Dr. Rescorla" and I always end up correcting them, which feels pedantic, but I figure that anyone who slaved away in the academic salt mines long enough to get their doctorate deserves to get to keep it exclusive.

A related question is how to address academics I don't know at all. Ordinarily, when it's someone you don't know you can use "Mr." or "Ms.", but somehow calling someone "Dr." or "Professor" feels too heavy, even in an e-mail. I suspect that "full name" is right, but somehow "Dear John Smith" doesn't flow right. Usually I just dispense with the greeting altogether or use something generic like "Hi".

Finally, there's the question of how you refer to third parties. I don't know Eszter Hargittai, so should I use her full name, first name, last name, or what?

 
One of the big questions in evolutionary biology is why sex evolved. There are obvious disadvantages to being a gene in an organism which reproduces sexually: there is a 50% chance that you won't end up in any given offspring. So, why do so many organisms reproduce sexually?

One idea, originally proposed by John Maynard Smith, is that selection is a lot more efficient with sexual reproduction. When you reproduce asexually, selective pressure can only act on whatever complexes of mutations happen to form in given individuals. So, if I have beneficial mutation A and you have B (at different positions), there's no direct way for some future organism to get A and B--though they could start with A and independently mutate B. By contrast, with sexual reproduction, an organism with A can mate with one with B and produce an organism with both A and B. So, this confers an advantage in terms of faster selection.
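
To make the argument concrete, here's a toy simulation of the two regimes (my own sketch of the Maynard Smith argument; the population size, mutation rate, and fitness scheme are all made up for illustration):

```python
import random

def generations_to_combine(sexual, pop_size=200, mu=0.01, seed=7):
    """Generations until some genome carries both beneficial mutations
    A and B, with (sexual) or without (asexual) recombination."""
    rng = random.Random(seed)
    pop = [[False, False] for _ in range(pop_size)]
    gen = 0
    while not any(a and b for a, b in pop):
        gen += 1
        # Fitness-proportional reproduction: mutation carriers leave
        # more offspring on average.
        weights = [1 + a + b for a, b in pop]
        def parent():
            return rng.choices(pop, weights=weights, k=1)[0]
        next_pop = []
        for _ in range(pop_size):
            if sexual:
                # Free recombination: locus A from one parent, locus B
                # from another, so A and B lineages can merge.
                child = [parent()[0], parent()[1]]
            else:
                # Clonal copy: a single lineage has to acquire both
                # mutations itself.
                child = list(parent())
            # Beneficial mutations arise independently at each locus.
            if rng.random() < mu:
                child[0] = True
            if rng.random() < mu:
                child[1] = True
            next_pop.append(child)
        pop = next_pop
    return gen
```

With recombination turned on, an A-carrier and a B-carrier can produce an AB offspring in one mating; clonally, some lineage has to hit both mutations on its own.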

In the Feb 17 Science, Paland and Lynch provide confirming evidence for this theory (a more readable review of this work, on which this blog post is partly based, is here). The water flea (Daphnia pulex) can convert from sexual to asexual reproduction (but not back). Paland and Lynch were able to measure selection rates (by measuring the stability of the amino acid sequences at various loci) in both lineages, and the rate of selection was substantially higher in the sexual lineages, providing some confirmation for the faster-selection hypothesis.

 

February 26, 2006

A bunch of potentially relevant -00 drafts for Dallas IETF:

T. Dierks, E. Rescorla, TLS 1.2 (TXT)

N. Modadugu, E. Rescorla, AES Counter Mode Cipher Suites for TLS and DTLS (TXT, HTML)

E. Rescorla Transport Layer Security (TLS) Partial Encryption Mode (TXT, HTML)

N. Modadugu, E. Rescorla, Extensions for Datagram Transport Layer Security (TLS) in Low Bandwidth Environments (TXT, HTML)

J. Fischl, H. Tschofenig, E. Rescorla, Session Initiation Protocol (SIP) for Media Over Datagram Transport Layer Security (DTLS) (TXT, HTML)

J. Fischl, H. Tschofenig, Session Description Protocol (SDP) Indicators for Datagram Transport Layer Security (DTLS) (TXT, HTML)

H. Tschofenig, E. Rescorla, Real-Time Transport Protocol (RTP) over Datagram Transport Layer Security (DTLS) (TXT, HTML)

 

February 25, 2006

For reasons which don't bear going into, I had the opportunity to eat an MRE today. We had Menu 22: Jambalaya.

  • Skittles: Quite good. We ate these first.
  • Jambalaya: Fairly intolerable. Mushy, salty, over-peppered and under-spiced.
  • Wheat bread: Ghastly. Spongy and tasteless and yet somehow vile.
  • Cheese spread: Incredibly scary looking. I wasn't brave enough to try it but Fluffy said it was appalling.
  • Oatmeal cookie: Basically OK. A little dry and not quite as chewy as you would like, but what do you expect for something that lasts 130 months.
  • Lemon-lime electrolyte beverage: We didn't try this, but I assume it's basically gatorade.
  • Flameless heater: Incredibly cool. You just pour water into the pouch and it heats up immediately, warming your food in the process. The best part of this is the diagram on the side, instructing you to pour water into the pouch and then lean the pouch somewhere. Check out these instructions, complete with a picture of the pouch leaning on "a rock or something."
  • Spoon: We had our own spoons, but it appeared serviceable.

Verdict: not totally inedible, but I'm definitely not excited about trying the "meatloaf with gravy."

 

February 24, 2006

Interesting NYT article about the question of why ice is slippery. It appears that the standard answer to this question (the pressure lowers the melting point) is wrong: the change in pressure only lowers the MP by about 0.03 F, and ice is slippery at well below 32 F. The two best competing answers seem to be:
  1. Friction from your skate blade or shoe causes the ice to melt which makes it slippery. This seems to be true but incomplete. The problem here is that ice is slippery even if you're standing still.
  2. Ice has an intrinsic liquid layer on the outside. It's not clear to me exactly how the physics of this works (but then what do you expect from something you read in the papers), and atomic force microscopy testing doesn't seem to indicate that the surface of ice is slippery at the microscale, which suggests that the liquid layer isn't a complete explanation.

Sort of amazing that such an apparently simple phenomenon still isn't understood.

UPDATE: Corrected the temperature measurements to be F instead of C. Thanks to Paul Hoffman for pointing this out.

 
One of the less good features of old-style HTTP is the way it handled TCP connections. In original HTTP 1.0, the way things worked was that you'd open up a TCP connection, send a single request, get a response, and close the connection. As long as you don't know anything about TCP, this sounds simple and elegant, but it's actually a terrible idea.

The problem is that setting up a TCP connection takes time. First, there's the "3-way handshake" to set it up. So, you've consumed a round trip before you even get to send your request. Then to make matters worse, TCP congestion control uses an algorithm called slow start. The idea with slow start is that you don't start sending data at full rate right away. Instead, you start out with a slow sending rate and only increase it as you realize that data is being received at the current rate.1
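
To get a feel for the cost, here's a back-of-the-envelope model of slow start (a sketch only: real stacks differ in initial window size and growth details):

```python
def slow_start_rtts(segments, initial_cwnd=1):
    """Round trips needed to deliver `segments` TCP segments if the
    congestion window starts at `initial_cwnd` segments and doubles
    each RTT, as in idealized slow start."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < segments:
        sent += cwnd    # one window's worth of data per round trip
        cwnd *= 2       # window doubles while in slow start
        rtts += 1
    return rtts

# A ~20 KB page at ~1460-byte segments is roughly 14 segments;
# on a fresh connection that's several RTTs of ramp-up.
rtts_cold = slow_start_rtts(14)
```

And that's on top of the round trip already spent on the 3-way handshake, which is why paying this price once per image hurts.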

Again, this doesn't sound so bad until you realize that your average Web page isn't the result of a single HTTP fetch but rather an HTML page with a bunch of inline images. Each of these images requires its own fetch. So, in the worst case scenario, you fetch each image in sequence, which provides suboptimal performance and looks lousy in the UI. The problem is even worse with SSL/TLS because the connection setup is rather more expensive. The initial TLS connection setup requires two round trips and costs the server an expensive private key operation, typically RSA (your average server can do a few hundred RSA operations per second).

Modern Web implementations have several features designed to alleviate these problems. The first is persistent connections. Instead of setting up a new connection for each fetch, you leave the connection up and then issue multiple fetches. This avoids the 3-way handshake and slow start, so you can get the full bandwidth of the link.2
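
As a sketch of what persistent connections buy you, here's a self-contained example using Python's stdlib (the throwaway local server and image paths are mine, purely for illustration); all three fetches ride on a single TCP connection:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 => persistent connections

    def do_GET(self):
        body = ("hello from %s" % self.path).encode()
        self.send_response(200)
        # Content-Length lets the client find the message boundary
        # without the server closing the connection.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, three requests: one 3-way handshake, one
# slow-start ramp, then full bandwidth for the remaining fetches.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for path in ("/", "/images/a.gif", "/images/b.gif"):
    conn.request("GET", path)
    bodies.append(conn.getresponse().read().decode())
conn.close()
server.shutdown()
```

Doing the same three fetches HTTP 1.0-style would cost three handshakes and three slow starts.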

The second feature is parallel connections. Instead of just opening one connection to the server, the client opens several. It can then fetch multiple images (or anything else) in parallel. This has a number of advantages. The first is that it looks snappier since you can load more than one image on the page at once, so people don't feel like they're waiting as long for something to happen. The second advantage is that it works even if the server doesn't support persistent connections. Finally, if you're sharing the network connection with others, you get a bigger share of the bandwidth. (This isn't fair, but there you have it.) This feature was originally introduced by Netscape but everyone does it now.

SSL implementations also include a feature called "session resumption". Instead of doing a complete new handshake with every TCP connection it initiates, the client and server can reuse in connection N+1 the keying material established in connection N. This lets you avoid the RSA computation on the server and saves a round trip in the handshake.

So, any given HTTPS session tends to involve some combination of these features. By way of illustration, here's the sequence of events from my local client talking to my local server with Firefox, Apache, and a hacked-up version of the Apache default page with some extra images. The way to read this is that X.Y is "Connection X, Request Y".

 
Connection 1: New handshake 
1.1  GET / HTTP/1.1 
1.2  GET /manual/images/feather.jpg HTTP/1.1 
1.3  GET /manual/images/apache_pb.gif HTTP/1.1 
1.5  GET /manual/images/openssl_ics.gif HTTP/1.1 
1.6  GET /manual/images/apache_header.gif HTTP/1.1 
Connection 2: Resumed handshake 
2.1  GET /manual/images/mod_ssl_sb.gif HTTP/1.1 
1.7  GET /manual/images/index.gif HTTP/1.1 
2.2  GET /manual/images/home.gif HTTP/1.1 
1.8  GET /favicon.ico HTTP/1.1 

So, as you can see, there are 10 requests, two connections, and one RSA handshake.

Typically, the lifetime of HTTP persistent connections is very short. My Firefox offers 300 seconds, but it looks like Apache gets bored after 15 seconds or so. Session cache lifetimes are a lot longer: clients generally keep sessions for quite a while, but servers tend to keep them fairly short. 5 minutes is the default with mod_ssl, but you can dial it up arbitrarily high, since it's a classic memory/CPU tradeoff. In general, all the connections associated with a given page will at least be in the same session.
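
For reference, these knobs correspond to Apache directives something like the following (values illustrative, not a recommendation):

```apache
# HTTP persistent connections: how long httpd keeps an idle
# connection open waiting for another request.
KeepAlive On
KeepAliveTimeout 15

# TLS session cache: a shared-memory cache plus the lifetime of
# cached sessions in seconds; raising the timeout trades memory
# for fewer full RSA handshakes.
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 300
```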

1. The standard treatment of this problem is The Case for persistent-connection HTTP.
2. There's also pipelining, which I won't talk about here.

 

February 23, 2006

When a clinical trial for a new drug fails to show a useful result, that's bad news. If the drug worked, then you'd have something useful (or at least maybe, since many drugs that work turn out to be unsafe, too expensive to manufacture, or otherwise unsatisfactory). But what about a drug that's been on the market for a while, like, say, glucosamine/chondroitin? If you aren't already taking the drug, then this is bad news too: you might someday have whatever condition you previously thought the drug treated, and since it doesn't work, you need to downgrade your estimate of how likely it is that you'll recover.

The one group of people this isn't bad news for is people who are already using the drug. Say you've already got whatever the condition is and you've been taking the drug for a while: this means you've already got a baseline for how well it works for you. If a study comes out that shows that the drug doesn't work any better than placebo, that's good news for you; you can stop taking the drug, get the same results, and save some money. So, you're up whatever the price of the drug was.

Obviously, in either case, the situation hasn't really changed. All that's changed is the information you have. And so whether it's good news or bad news is a matter of what information you had before.

 
Glucosamine and chondroitin are popularly used as treatments for osteoarthritis and by extension all manner of cartilaginous joint injuries. Unfortunately, the effectiveness data is pretty thin. The results of the Glucosamine/chondroitin Arthritis Intervention Trial are now out and they don't show much of a useful result. Basically, neither glucosamine nor chondroitin alone outperforms placebo, and the glucosamine/chondroitin combination outperforms placebo only for patients with moderate-to-severe pain.

It's not clear what to make of this, for the following reasons:

  • The response rate for placebo was incredibly high: 60% of patients showed response. This makes it hard for any treatment to look good.
  • Celecoxib (Celebrex) was significantly better than placebo at the 95% confidence level for the whole group but not for either the mild or moderate-to-severe pain subgroups (this is a study power issue), so clearly we need a larger study group (n=1583 for this study).
  • Chondroitin does show a significant effect on joint swelling (one of the secondary outcomes).
  • The moderate-to-severe pain combined glucosamine/chondroitin group may just have been underpowered.
  • The study used glucosamine hydrochloride rather than glucosamine sulfate (see here for one comment on this). It's not clear if it makes a difference here, but the cation can make a difference (don't mistake NaCl for NaCN!).

The bottom line is that it's hard to draw any firm conclusions here.

Another thing to consider is that many people (especially athletes) who take glucosamine/chondroitin have an injury rather than osteoarthritis. As far as I know there's no data (other than anecdotal) that glucosamine/chondroitin works on this kind of injury (which is even less susceptible to study because it tends to heal on its own).

 

February 22, 2006

The IETF Nominating Committee (Nomcom) has finished its work and chosen a bunch of new IAB and IESG members:

IAB

Leslie Daigle
Elwyn Davies
Kevin Fall
Olaf Kolkman
David Oran
Eric Rescorla

IESG

Lisa Dusseault [*] Applications Area Director
Jari Arkko Internet Area Director
Dan Romascanu Operations and Management Area Director
Cullen Jennings Real-time App. and Infra. Area Director (2 year term)
Jon Peterson Real-time App. and Infra. Area Director (1 year term)
Ross Callon Routing Area Director
Sam Hartman Security Area Director
Magnus Westerlund Transport Area Director

[*] Mrs. Guesswork.

 

February 20, 2006

Apple proves that anyone can make a dumb, easy to exploit mistake:
We received notice from Juergen Schmidt, editor-in-chief at heise.de, that a serious vulnerability has been found in Apple Safari on OS X. "In its default configuration shell commands are execute[d] simply by visting a web site - no user interaction required." This could be really bad. Attackers can run shell scripts on your computer remotely just by visiting a malicious website.

...

The problem is due to a feature that is activated by default: Open Safe Files after downloading. A zip file is considered safe and so they will be opened automatically. Subsequently, a shell script with no #! at the beginning of the script will be executed automatically. No user interaction!

Full description here. Via SANS.

I'm not ragging on Apple here. This is just the kind of error you get when you have a big software package written by actual humans. Still, it's a good reminder that just because it's not written by Microsoft doesn't mean it's safe.

 
 

February 19, 2006

Mrs. Guesswork pointed me to the IKEA or Ibsen quiz. Despite the fact that almost all of our furniture comes from IKEA, we only got 15/30.
 
In the comments, Hovav Shacham writes:
This may be tangential, but what I find weird about RSA is how much higher the entry price is than comparable conferences (including those conferences where you would send your poly-time factorization papers). I suppose the presence of John's type-b people [people willing to pay $2K to get in--EKR] above might justify it ..

As John Kelsey points out, a lot of the value in attending a conference like RSA is getting to network with the other people attending it. And at some intuitive level, it seems likely that people willing to pay $2K to attend are the kind of people you would want to meet, market yourself to, etc. The higher the price of the conference, the better the prospects the other attendees will be, and the more you might be willing to pay to attend yourself. So, the optimal price is very subject to network effects, which may be why it's so high even though the actual quality of the talks is so low.

What's really weird about RSA, though, is that as John points out, an expo pass is only $50 ($75 on the day of the conference), and you get to meet all the same people in the hallway. And it's so common for people to get expo passes that it doesn't decrease your credibility to have one. So, at the end of the day the high price remains a bit of a mystery.

 

February 18, 2006

WaPo has a pretty good article on botnets. Nothing super new but worth reading if you're not already familiar with the phenomenon:
But 0x80 and one of his friends -- who goes by the screen name Majy -- say they've easily disguised their installation methods. Their biggest complaint about the whole enterprise: being routinely shortchanged by the adware distribution companies, which often "shave," or undercount, the number of programs installed by their affiliates.

"It sucks, too, because the companies will shaft you, and there isn't a lot you can do about it," says Majy, 19, who claims to have had as many as 30,000 computers in his botnet.

The section about apparently sleazy adware company 180 Solutions is also quite interesting:

By 180's own count, its adware is installed on 20 million computers. The people who use those computers receive pop-up ads based on what they are searching for online. If the user searches for the term "travel," 180's software will look through its database of clients in the travel business and present an ad from the company that bid the most on that search term. The next time that user searches using the same term, 180 will serve the ad of the next-highest bidder for that word, and so on. 180 then gets paid from 1.5 to 2.5 cents for each ad it delivers to the user. The more computers with 180's adware, the more revenue each ad generates.

Consumer groups gathered mountains of evidence that 180 Search Assistant was being installed on thousands of computers without user consent. Once again, 180 tried to quiet its critics. Toward the end of last year, the company announced it was phasing out 180 Search Assistant in favor of the Seekmo Search Assistant. Company spokesman Sean Sundwall says Seekmo will be more fraud resistant than 180 Search Assistant, and that it will not be distributed or bundled with other software programs without 180's permission. The company says this will give it far more control over how Seekmo is installed and by whom.

...

Weeks after 180solutions said it was discontinuing its 180 Search Assistant software, a computer worm began spreading rapidly across AOL's instant message network, downloading and installing viruses and a host of other programs -- including 180 Search Assistant -- on victims' computers. While 180 denied it had anything to do with the worm, for the CDT, that was the last straw: On January 23, the nonprofit filed a detailed complaint with the Federal Trade Commission urging the agency to sue 180solutions for violating consumer protection laws.

In a statement, 180solutions denied that it was ignoring the problem, arguing that it had made "great progress in the fight against spyware" and insisting that it shared the CDT's vision of "protecting the rights and privacy of consumers on the Internet . . . We have made voluntary improvements to address every reasonable concern that the CDT has made us aware of."

An interesting illustration of how difficult it is to follow the money.

 

February 17, 2006

Sometime around 1993, I buy my first external hard drive, a 1.2 GB hard drive for around $1/MB.

This week, I buy McAfee Internet Security Suite for $79 with a $40 rebate. It comes with a 128 MB flash MP3 player.

 

February 16, 2006

Bruce Schneier writes about the RSA Conference's new method of dealing with badge fraud:
Last year, the RSA Conference tried to further limit these types of fraud by putting people's photographs on their badges. Clever idea, but difficult to implement.

For this to work, though, guards need to match photographs with faces. This means that either 1) you need a lot more guards at entrance points, or 2) the lines will move a lot slower. Actually, far more likely is 3) no one will check the photographs.

And it was an expensive solution for the RSA Conference. They needed the equipment to put the photos on the badges. Registration was much slower. And pro-privacy people objected to the conference keeping their photographs on file.

This year, the RSA Conference solved the problem through economics:

If you lose your badge and/or badge holder, you will be required to purchase a new one for a fee of $1,895.00.

Look how clever this is. Instead of trying to solve this particular badge fraud problem through security, they simply moved the problem from the conference to the attendee. The badges still have that $1,895 value, but now if it's stolen and used by someone else, it's the attendee who's out the money. As far as the RSA Conference is concerned, the security risk is an externality.

Bruce's point about incentive alignment is a good one, but it neglects the equilibrium analysis. Even at $1895, people will still lose their badges, so knowing that you might have to buy a replacement at full price diminishes the value of the initial badge by roughly the expected loss. If the loss rate is high enough, then RSA won't be able to charge as much for the initial registration.
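
The expected-loss point is just a one-line calculation (the 2% loss rate here is a made-up number, purely to show the shape of the effect):

```python
def effective_badge_value(face_value, loss_prob, replacement_cost):
    """What a badge is worth to an attendee up front, once the
    expected cost of losing it and buying a replacement is priced in."""
    return face_value - loss_prob * replacement_cost

# Hypothetical numbers: a $1,895 badge, a 2% chance of losing it,
# and a replacement at full price.
v = effective_badge_value(1895, 0.02, 1895)
```

The higher the loss rate, the less attendees should rationally be willing to pay at registration.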

How much does this effect matter in practice? It depends on the loss rate. But consider that most of the stuff being provided at RSA (access to talks in particular) is partially nonrivalrous. A lot of classrooms are partly empty (mine was), so a few extra people coming in wouldn't have cost RSA anything. Obviously, they have to charge and have some security because otherwise nobody would pay, but the important question when deciding on this kind of investment is how much security you need to reach the point where (1) excess resource consumption is low and (2) the only people cheating are people you couldn't have extracted money from anyway. So, you have to balance money lost to fraud against money lost to unwillingness to pay.

It's also worth noting that the new scheme makes badge swapping between friends much easier. When there were pictures on the badges, this was harder, but now it's no problem. Badge-swapping is actually so common that most people who engage in it don't consider it fraud. After all, you're on the show floor but the person whose badge it is isn't (there are of course counterarguments, which I won't get into here; I'm just talking about people's attitudes). Walk around the expo floor and you'll see plenty of people wearing the wrong badge.

Another form of fraud that's enabled by picture-less badges is "in-and-outs". Two people walk into a session (or more likely the party tonight). One stays in and the other takes both their badges out for reuse by a third person outside. This is partially mitigated by RSA scanning each badge, so they could in principle notice large numbers of entries on a single badge, but they probably can't stop low levels of fraud.

 

February 13, 2006

I'm looking for a collective term for all the varieties of junk messaging (spam, phishing, SPIT, SPIM, etc.). The generic term for the e-mail variety seems to be Unsolicited Bulk Email (UBE). By analogy, I propose Unsolicited Bulk Messaging (UBM).
 
Say we've got some condition with a base rate of X% and a really unpleasant treatment that nobody wants. Say you've got a test for the condition with a sensitivity of 100% (the false negative rate is zero). Unfortunately, it's also got a specificity of only (100-2X)%, so about 2/3 of the positives are actually false positives. I.e., even with a positive test, you're more likely not to have the condition than to have it. Now, if you have some kind of better followup test, you can use this kind of test as a quick and dirty screen. But if this test is the last word, then you generally can't use it to start treatment.
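
Here's the Bayes'-rule arithmetic behind that 2/3, worked with an illustrative base rate of X = 5% (my own example, just to show where the number comes from):

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """Fraction of positive tests that are true positives (Bayes' rule).
    All arguments are probabilities in [0, 1]."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# X = 5%: sensitivity 100%, specificity (100 - 2*5)% = 90%.
ppv = positive_predictive_value(0.05, 1.0, 0.90)
# Only about 1/3 of positives are real; roughly 2/3 are false positives.
```

The false positives swamp the true ones because the healthy population is so much larger than the affected one.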

The case of drug testing in sports potentially represents an exception to this rule, in a brutal utilitarian kind of way. In the medical case, people have some disease they presumably want to have fixed, so even though the treatment is unpleasant, they want to get it if they need it. By contrast, athletes who are doping don't want their "disease" cured. They want to keep cheating without being detected. This means that, unlike with, say, lung cancer, people have no particular incentive to avoid behaviors that lead to the condition (like buying EPO and injecting it into your ass, say) unless there's some actual deterrent. The testing and the punishment are the deterrent.

In order for the deterrent to be effective, it doesn't have to be perfect. It merely has to significantly increase the chance that you'll suffer if you cheat. If there's (say) a 100% chance of getting caught if you do cheat but only a 5% chance of getting caught if you don't, then the incentive not to cheat is very strong. Now, it's true that there are a bunch of poor innocent suckers who get punished anyway, but of course since almost everyone, innocent or guilty, denies cheating, they're serving as a useful deterrent as well.

Please note: I'm not endorsing this reasoning, which is a little too utilitarian even for my taste, but it's easy to imagine a slightly less blunt version (a few false positives are worth it for the goal of eliminating drugs from sports) flourishing in WADA.

 
Lisa Dusseault pointed me to an interesting article in CyclingNews on EPO testing. Apparently, at least four athletes (Rutger Beke, Virginia Berasategui, Iban Rodriguez, and Bernard Lagat) have been accused of taking EPO based on positive tests and then ultimately cleared (or at least acquitted).

The basic problem appears to be, as I indicated earlier, that the Lasne technique (1-dimensional gel electrophoresis) has never been calibrated to any known accuracy rate, and indeed WADA has changed the criteria at least once:

Up until now, the urinary EPO test has been called "qualitative", but is in fact quantitative, as there has to be a minimum percentage (80%) of basic isoforms for a sample to be classed as positive. The Chatenay-Malabry laboratory in Paris is even more stringent, requiring 85% of the bands to be basic for a positive. It should be noted that 80% is quite high when considering normal urine: someone could take EPO and still pass the test if their basic bands percentage was 79%. Thus, the test can give rise to false negatives as well.

In January 2005, WADA recommended that the 80% basic bands criterion should no longer be used, and that a more qualitative system should be used:

1. In the basic area there must be at least 3 acceptable, consecutive bands assigned as 1, 2, 3 or 4 in the corresponding reference preparation.

2. The 2 most intense bands either measured by densitometry or assessed visually in the basic area must be consecutive and the most intense band must be 1, 2 or 3.

3. The two most intense bands in the basic area must be more intense than any other band in the endogenous area either measured by densitometry or assessed visually.

None of these criteria were subject to scientific review, but were unilaterally adopted by WADA, it seems. On September 5, the President of the Spanish National Anti-Doping Commission (NAC) sent a communication to the Disciplinary Committee of the Spanish Triathlon Federation, in which they were advised that the World Anti-Doping Agency (WADA) phoned the accredited laboratory in Madrid on August 31 to communicate new instructions to modify the evaluation criteria for detection of urinary EPO. The new criteria have not been published by WADA and are therefore not known.

Obviously, the first thing you need to do before changing your criteria is to scientifically calibrate them for error rate, but there's no evidence that this has been done here.

Once you get past the methodological problem, there appear to be two technical issues that may (or may not, since we have very little data) cause false positives. These are both discussed in quite a bit of detail in an article by Dr. Inigo Mujika (conflict of interest alert: he's a coach for Virginia Berasategui. However, the issues he's raising look like ones that have concerned me as well.)

The first issue is how well we understand the baseline EPO isoform mix in the control group (people who don't take EPO). This appears to be a particular issue with samples taken immediately post exercise. There are two subissues here. The first is overall high protein concentrations, which you're supposed to control for before running the Lasne test, but which Mujika claims labs aren't doing. The second is what the mix of isoforms is. Mujika cites a WADA-financed article by Kazlauskas et al. (Australian Sports Drug Testing Lab) that indicates that the isoform mix can be changed by exercise. The case of Rutger Beke is another piece of evidence here.

The second issue here is the specificity of the antibody you use to detect EPO. Mujika raises the question of whether it's really EPO-specific (actually, he raises the question of whether it's rEPO specific as well, but it seems to me that this is a non-issue since the whole point of the electrophoresis is to remove the concern about antibody specificity). If it's binding to other proteins, this makes the concern about baseline protein concentrations in urine even more significant.

The bottom line here is that, at least to this layman, it doesn't appear that WADA has done enough work on the test to be able to use it to punish athletes with any confidence. I realize they're in a tight spot because they think a lot of EPO abuse is happening but they're having trouble proving it, so they wanted to roll a test out ASAP, but that doesn't make the science get done any faster.

 

February 12, 2006

Iliotibial Band Syndrome (ITBS) is one of the most common injuries for runners and cyclists. It's quite painful and can be really hard to get rid of. While doing some research, I came across this quite nice article on treatment in The Physician and Sportsmedicine. Worth checking out if you're unfortunate enough to have ITBS.
 
Paul Hoffman, Russ Housley, and I are speaking at RSA on Tuesday about hash functions.
Session Code: STA-101
Session Title: The Future of One-way Hash Functions in the IETF
Length: 70 Minutes

This session will describe the progress of the IETF effort to look at the use of hash functions in Internet protocols, and develop evaluation criteria for new hash functions. Recent attacks on SHA-1 have focused more energy on one-way hash functions, and more information regarding the security of current hash functions may result in proposals for new hash functions.

Our panel is from 2-3:10. If you have any burning questions about hash functions, this is the time to ask them (assuming you don't mind paying RSA $1895 to get in, that is...)

 
The last-ditch treatment for severe acne is a drug called Accutane (isotretinoin). The problem is that Accutane is teratogenic (the risk of some birth defect is on the order of 35%). Because so many acne sufferers are of childbearing age, the FDA requires that Accutane be prescribed under a program called iPLEDGE, which requires, well, let them tell you:
As part of the ongoing risk management of isotretinoin products, it is crucial that a female of childbearing potential selects and commits to use two forms of effective contraception simultaneously for one month before, during, and for one month after isotretinoin therapy. She must have 2 negative urine or blood (serum) pregnancy tests with a sensitivity of at least 25 mIU/ml before receiving the initial isotretinoin prescription. The first pregnancy test is a screening test and can be conducted in the prescriber's office. The second pregnancy test must be done in a CLIA-certified laboratory according to the package insert. Each month of therapy, the patient must have a negative result from a urine or blood (serum) pregnancy test conducted by a CLIA-certified laboratory prior to receiving each prescription.

Each month, the prescriber must enter the female patient's pregnancy results and the 2 forms of contraception she has been using in the iPLEDGE system. The iPLEDGE system verifies that all criteria have been met by the prescriber, patient, and pharmacy prior to granting the pharmacy authorization to fill and dispense isotretinoin. The pharmacist must obtain authorization from the iPLEDGE system via the program web site or phone system prior to dispensing each isotretinoin prescription for both male and female patients.

Unsurprisingly, doctors and patients are finding all this rather onerous:

Dr. Kathleen Carney-Godley, a dermatologist from East Greenwich, R.I., said that she tried to enter her patients into the system from home one Sunday but could not because she did not have her patients' Social Security numbers.

Her partner tried to enter a patient into the system, failed, called for help, was put on hold and had time enough to excise a cancerous skin lesion in another patient before being able to talk to an operator, Dr. Carney-Godley said.

Other doctors complained of nonsensical instructions from the system — like requiring pregnancy tests for male patients — and long waits on the phone.

According to Public Citizen (who want to see Accutane taken off the market), the total number of Accutane-related birth defects in the period 1982-2002 was 162, and in the first year of the S.M.A.R.T. program that preceded iPLEDGE they estimate that there were 16 births with birth defects and 31 with retardation (it's not clear whether these numbers overlap). This is out of about 160,000 women taking Accutane a year. So, the risk of having an affected child while taking Accutane under the old program was about .03%. (Note that the pregnancy rate is substantially higher, but a lot of women opted for abortions.)

For comparison, the risk of fetal alcohol syndrome in the population at large is about 1.25 per 1,000 live births (on the order of 4,000 infants a year). Ignoring the jurisdictional issue (the FDA doesn't regulate alcohol), why doesn't this motivate some controls on drinking for women of childbearing age? Public Citizen calls Accutane "one of the two worst epidemics of preventable serious birth defects ever seen in the U.S.", but this is nonsense. As I've indicated, FAS is far more common than Accutane-related birth defects, as are neural tube defects, about half of which could be prevented with folic acid supplementation. Even if the claim were true, the comparison is absurd, spanning, as it does, two orders of magnitude.
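
The arithmetic here can be checked directly. The inputs below are the figures quoted in the two paragraphs above (Public Citizen's estimates and the post's population numbers), not independent data:

```python
# Rough check of the rates quoted above, using the post's own figures.
affected = 16 + 31            # births with defects + births with retardation
exposed = 160_000             # women taking Accutane per year
print(f"{affected / exposed:.2%}")    # -> 0.03%

accutane_per_year = 162 / 21  # 162 cases over 1982-2002 (~21 years)
fas_per_year = 4000           # FAS infants per year, per the post
ratio = fas_per_year / accutane_per_year
print(round(ratio))           # -> 519, i.e. more than two orders of magnitude
```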

 

February 11, 2006

From: Eric Rescorla
Date: 2006-02-10
Review of draft-merrells-dix-00.txt
 
BACKGROUND
The idea behind DIX is to have a third-party authentication system
where a user has a preexisting relationship with a third party
that then vouches for the user's identity to the site/server that
the user actually wants to communicate with. This is, of course,
a pretty common desire and there are already a lot of systems
that do it (Kerberos, PKI, etc.). The particular scheme described
in this draft is based on Web forms, JavaScript, and redirection.

SUMMARY OF THE DIX SCHEME
Here's my reconstruction of how the DIX scheme works from the
document (which, unfortunately, does not contain a helpful
summary or diagram).

The DIX scheme involves at minimum three agents:

- The user's client (browser)
- The site that the user is trying to access (called the Membersite)
- The site that authenticates the user (called the Homesite)

When the user contacts the Membersite (1), it responds with a web
page prompting the user to enter the URL of their Homesite (2). The
user then enters the Homesite URL (3). The Membersite contacts the
Homesite (4,5) to determine whether the Homesite can provide the
appropriate kind of authentication. If it can, the Membersite
sends the client a redirect (6) (using JavaScript) to the
Homesite, and the client requests a ticket (7). In some way that's
not entirely clear, the Homesite validates the request and returns
a ticket to the Client (8). The Client then (via JavaScript?)
sends the ticket to the Membersite (9). The Membersite contacts
the Homesite with a digest of the ticket in order to confirm its
validity (10). If the Homesite says it's OK (11), the Membersite
returns OK to the Client (12).


  Client                       Membersite                    Homesite

1 Hello -------------------------->
2       <------- Enter homesite URL
3 Homesite URL ------------------->             
4                                  Get capabilities ---------->
5                                         <------------- Capabilities
6       <------ Redirect to Homesite  
7 Get Ticket-------------------------------------------------->
8       <----------------------------------------------------- Ticket
9 Ticket ------------------------->
10                               Verify ticket -------------->
11                               <-----------------------  Ticket OK
12      <------------------------ OK
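
To make the reconstruction concrete, here's a toy simulation of the flow above. All class and method names, the SHA-1 digest choice, and the capabilities format are my inventions for illustration; the draft specifies none of this API.

```python
import hashlib
import secrets

class Homesite:
    """Authenticates the user and issues tickets (steps 4-8, 10-11)."""
    def __init__(self):
        self.valid_digests = set()

    def capabilities(self):
        # Steps 4-5: capabilities discovery.
        return {"auth": "dix-ticket"}

    def issue_ticket(self, user):
        # Steps 7-8: validate the (hypothetical) user session, return a ticket,
        # and remember the ticket's digest for later verification.
        ticket = f"{user}:{secrets.token_hex(8)}"
        self.valid_digests.add(hashlib.sha1(ticket.encode()).hexdigest())
        return ticket

    def verify(self, digest):
        # Steps 10-11: confirm a digest the Membersite presents.
        return digest in self.valid_digests

class Membersite:
    """Accepts a ticket from the client and checks it with the Homesite."""
    def __init__(self, homesite):
        assert homesite.capabilities()["auth"] == "dix-ticket"   # steps 4-5
        self.homesite = homesite

    def login(self, ticket):
        digest = hashlib.sha1(ticket.encode()).hexdigest()       # step 10
        return "OK" if self.homesite.verify(digest) else "FAIL"  # steps 11-12

home = Homesite()
member = Membersite(home)
ticket = home.issue_ticket("alice")   # steps 6-8 (the redirect is elided)
print(member.login(ticket))           # -> OK
print(member.login("alice:forged"))   # -> FAIL
```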


There's one more detail that I haven't mentioned: in order to
enable automatic submission of the abovementioned form (2,3),
this document specifies a particular format for the page
(i.e., particular names for form input fields) so that
clients can detect it and fill it in automatically.


COMMENTS
I have three major concerns about this system as described:

- Relationship to existing work
- Use of Javascript
- Method of ticket validation

The remainder of this review details these issues.


Relationship To Existing Work
There has been a very large amount of work on delegated/federated
authentication systems, ranging from RADIUS/DIAMETER to
Shibboleth, Liberty, and SAML. It's not clear from this document
or from the charter why something new is needed here. So, I think
the first order of business is to establish what properties are
required here that present systems don't provide.


Use of Javascript
Section 5.10.2.1 reads:

   The Membersite sends a fetch-request message to the Homesite through 
   the User's client via a redirected HTTP POST to their Homesite 
   Endpoint URL using JavaScript to autosubmit the form. 

I appreciate the rationale for this: you want things to work with
dumb clients. But given that JavaScript isn't any kind of IETF
standard, it's hard to see how we could require it in an IETF
standard (ECMAScript, perhaps). Even then, specifying this kind
of implementation detail is the kind of thing that the IETF
typically stays out of. I appreciate that this is also a wire
protocol issue, but given that there's no specification of the
exact JavaScript incantation, it's not clear it makes sense to
specify only the language.


Method of ticket validation
This draft validates the ticket by having the Membersite send a digest
to the Homesite and get an ACK. It's not clear why this is desirable.
Wouldn't it be simpler to have the Homesite digitally sign the ticket
(the key could be delivered in the initial capabilities discovery
phase) and then let the Membersite do the verification directly?
I appreciate that there's a freshness concern, but this can 
be alleviated using the usual nonce-based anti-replay techniques.

	A suggested implementation of a signature function would be to use 
	the SHA1 algorithm, which takes as input a digest of the message and 
	a secret known only to the Homesite. 

	Signature = T ( S + Digest )  

	Where, Digest is message digest (defined above), S is the Homesite 
	Secret, T is the signature generation function, and '+' means string 
	concatentation. 

The technical term for a "signature" that can only be verified by
the holder of a symmetric secret is a Message Authentication Code
(MAC), and there's a standard construction for MACs: HMAC (RFC 2104).
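
A concrete sketch of the difference, in Python for illustration (the key and message values are invented; the draft defines neither):

```python
import hashlib
import hmac

secret = b"homesite-secret"    # S in the draft's notation (invented value)
message = b"ticket contents"   # stand-in for the ticket
digest = hashlib.sha1(message).digest()

# The draft's ad hoc construction, T(S + Digest): a plain hash over a
# concatenation. Secret-prefix constructions over Merkle-Damgard hashes
# like SHA-1 are vulnerable to length-extension attacks, among other
# problems HMAC was designed to avoid.
ad_hoc = hashlib.sha1(secret + digest).hexdigest()

# HMAC-SHA1 (RFC 2104): the standard nested construction.
mac = hmac.new(secret, digest, hashlib.sha1).hexdigest()

# The verifier recomputes the MAC and compares in constant time.
expected = hmac.new(secret, digest, hashlib.sha1).hexdigest()
print(hmac.compare_digest(mac, expected))   # -> True
```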
 

February 10, 2006

The Chicago Lawyer's Committee for Civil Rights Under Law is currently suing Craigslist over some discriminatory housing ads that were posted on the Chicago site (press release here):
Among the housing ads cited as objectionable by the Chicago Lawyers' Committee for Civil Rights Under Law Inc. were ones that read "NO MINORITIES," "Requirements: Clean Godly Christian Male," and "Only Muslims apply."

While it remained unclear Thursday if the suit is the first of its kind, it signifies a burgeoning effort by housing watchdog groups to extend to the Internet the same legal restrictions facing those that publish print classifieds.

"Our goal is to have the Internet places like Craigslist treated no differently than newspapers and other media who have traditionally been posting real estate advertisements," said Stephen Libowsky, a counsel for the housing group. "All of the gains are going to get lost if the same rules don't apply."

...

The lawsuit seeks, among other things, to require Craigslist to report to the government any individual seeking to post a discriminatory ad and to develop screening software to preclude discriminatory ads from being published on its Web site.

Craigslist, which has 19 employees, maintains that screening its almost-nonstop classified listings would be impossible. Jim Buckmaster, its chief executive officer, said Thursday that the system is automated and that users can flag postings. If enough do, it comes off automatically. The "NO MINORITIES" ad was removed within two hours, he said.

Let's stipulate that these ads are discriminatory and violate the Fair Housing Act. Does Craigslist have any responsibility?

In order to make sense of this, it's important to understand how a system like Craigslist works. Basically, it's just a big database with a Web site front-end. Users enter ads, the ads are automatically added to the database, and they show up whenever someone does a particular kind of search. No human needs to be involved for a site like this to run at all, and to a first approximation, many smaller sites of this type run with minimal human intervention. Contrast this with a system like newspaper classifieds, where (at least until recently) you needed a human to transcribe ads and typeset them on the page.

Any system like this generally ends up with an abuse problem, and Craigslist is no exception. As indicated in the article, Craigslist deals with this by letting users report postings that are inappropriate. One natural response at this point--and one that apparently the CLCRUL has used--is to argue that sites should use automated filters, but in general this is only practical for really obvious cases of abuse such as first posters, swearing, etc. The problem here is analogous to spam--the attackers just adapt too quickly. All major sites I know of have to use human moderation to keep abuse under control at all. Typically the objective here isn't to totally prevent abuse but just to keep it to an acceptable level.
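
The flag-based mechanism described above can be sketched in a few lines. The threshold and all names here are invented for illustration; Craigslist hasn't published its actual parameters:

```python
FLAG_THRESHOLD = 5               # invented; the real threshold isn't public

class Listing:
    def __init__(self, text):
        self.text = text
        self.flaggers = set()    # count distinct users, not raw clicks
        self.visible = True

    def flag(self, user):
        self.flaggers.add(user)
        if len(self.flaggers) >= FLAG_THRESHOLD:
            self.visible = False # comes down with no human in the loop

ad = Listing("some objectionable listing")
for u in range(FLAG_THRESHOLD):
    ad.flag(f"user{u}")
print(ad.visible)                # -> False
```

Note that this only removes what users bother to flag, which is why the objective is keeping abuse to an acceptable level rather than preventing it.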

If you want to really reduce abuse below that level, pretty much the only alternative is to have some human actually examine each listing. This is obviously expensive and substantially reduces the cost and convenience advantage that an automated site offers over something where a human is involved in every transaction. Maybe that's a cost worth bearing to reduce housing discrimination, but it's something we should decide to do explicitly, not just by blindly applying rules from a radically different medium.

All that said, it's worth noting that Craigslist appears to carry advertisements for services that are clearly illegal, most notably prostitution. There's even a special forum for that, so the excuse that filtering is difficult doesn't really apply there. I'm not sure what the rationale is here (other than the obvious free speech one)--perhaps to avoid people spamming the other forums?

 

February 8, 2006

Cindy Cohn doesn't like the AOL/Yahoo pay-to-send e-mail scheme:
The justification is that if people have to pay to send email, they won't send junk email. Apparently AOL and Yahoo believe that if we "tax" speech then only desirable speech happens. We all know how well that works for postal mail -- that's why no one gets any "free" AOL starter disks, right?

I don't think this argument actually holds up that well. The volume of junk paper mail that most people get is far less than the volume of spam people get. So, while it's true that people get a lot of junk paper mail, it also seems true that the fact that there's some cost associated with it substantially reduces the amount you get. And note that I don't get any pornographic paper junk mail--unless you count the Victoria's Secret catalog. Now, you can say this is because the sender is identified and the USPS can track them down, but at least the identified part would be true in a pay-for-service system too.

Note, though, that as I said earlier, this system doesn't require charging for messages, because anti-spam enforcement can work through contractual mechanisms (see below). But a more general any-to-any charging system could work without enforcement.

If email senders bear a burden, who gains? Not Yahoo and AOL customers, whose email boxes are being sold off. It will presumably be harder for even desired email to reach them.

This is obviously a real concern that I raised in my original message. But, then, as Kevin Dick observes, if the market is competitive, then people can switch providers to someone who doesn't charge for access to their mailboxes--or to someone who passes through the fee to them!

In return, customers probably will now get not one but two helpings of spam. For only $.0025 cent per message, Yahoo and AOL will guarantee delivery of this extra-special "certified" paid-placement mail, served alongside your ordinary spam. They'll also preserve webbugs, little privacy invaders that report back when you look at the email. Goodmail says that it will ensure that the messages aren't spam, but it's not clear how they will enforce this. After all if a foolproof way for a third-party to distinguish wanted from unwanted messages existed, we would have solved the spam problem long ago.

I don't actually agree with this last objection. It's true that we can't mechanically filter out spam, but that doesn't mean that Goodmail can't enforce that messages aren't spam. That's comparatively simple: you have some contractual standard, and if a sender violates it (e.g., if a user receives a spam e-mail from an identified customer--and remember that you probably have to authenticate these messages anyway, so you have an audit trail), then you impose a large penalty or cancel the contract or something.

What about phishing? Remember, the problem with phishing is that ordinary end users cannot always tell when a "certification" is real. Spoofing the appearance of Goodmail certification to end users should not be much of a problem, and all of the encryption in the world won't fix that.

I don't understand this argument at all. Yahoo/AOL, etc. control their own Web UI frames and should be able to arrange that the certified indicator only appears for legitimately certified e-mail. Remember that Yahoo and AOL already suppress active content in un-certified e-mail, which makes most of the spoofing mechanisms much harder to execute. And remember that if one of the contracting parties is a phisher, we can use the penalty mechanisms I mentioned above to deal with them. This isn't a benefit of charging, of course, just of accountability.

Note that I'm not saying that charging for e-mail is a good way to suppress spam. I'm still uncertain about that myself, but I don't think it can be dismissed this glibly, either.

 

February 7, 2006

Three papers in JAMA report on a large randomized trial in women aged 50-70 indicating that a low-fat diet has no significant effect on cardiovascular disease, colorectal cancer, or invasive breast cancer. Even if we ignore the statistical significance issues, the central estimates of the effect are quite small: only a 9% reduction for breast cancer and basically no effect for cardiovascular disease or colorectal cancer.

Two things worth noting here. First, this was actually a pretty aggressive intervention: 37% of Calories from fat in the control group as opposed to 28.8% in the intervention group--though this fell short of the study targets. Second, only a very modest improvement in cardiovascular risk factors (a few percent) was achieved, although, as noted, this doesn't seem to translate into an improvement in actual events. The bottom line, then, is that either you need a much more aggressive intervention--and given the study data, this will be hard to obtain in healthy people--or we need to consider giving up on the low-fat project entirely.

 

February 6, 2006

One of the more interesting papers I saw at NDSS was Dagon, Zou, and Lee's Modeling Botnet Propagation Using Time Zones. In case you don't know, a Botnet is a set of computers that are all infected with some piece of malware and under the control of some bad actor (the botmaster). Botnets can be used to send spam, phishing e-mail, mount DDoS attacks, or commit click fraud, among other things.

In order for a botnet to be useful, the botmaster needs to be able to send it instructions, e.g., "Send this spam message". This is typically done by having the infected machine contact a command-and-control (C&C) server (typically a machine that the botmaster has compromised rather than his actual machine) and ask it for instructions. Dagon et al. took advantage of this technique to take over and measure the botnet. The basic idea is to collect a sample of the malware from a honeypot or an infected machine and then disassemble the binary to get the identity of the machine that's being used as the C&C server. Once you've done that you can contact the domain name holder or the registrar and get them to redirect the address to a machine that you control (a sinkhole). Once the bot connects to your sinkhole, you can control it. At minimum, this technique can be used to get an accurate estimate of the scale of the infection and of course it has the nice side effect that any bot you capture isn't being used in attacks.

One nice feature of this technique is that it's likely to have high accuracy because you're directly measuring infected machines rather than scanning or attack activity. In addition, because the authors actually completed TCP handshakes with the bots, this technique is fairly resistant to address spoofing--a machine with a simple forged address can't complete the TCP 3-way handshake. The authors report that they've seen botnets as large as 350,000 infected machines, which matches the estimates of botnet size you often see bandied about.
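
Here's a minimal sketch of the sinkhole side of this measurement, with a local listener standing in for the redirected C&C address. The bot protocol itself is elided and everything here is invented for illustration:

```python
import socket
import threading

def sinkhole(srv, seen, n):
    """Accept n connections and record each peer's source IP."""
    # accept() only returns after the TCP 3-way handshake completes,
    # so a simple forged source address never shows up in the count.
    for _ in range(n):
        conn, addr = srv.accept()
        seen.add(addr[0])        # record the bot's source IP
        conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # stand-in for the redirected C&C address
srv.listen()
port = srv.getsockname()[1]

seen = set()
t = threading.Thread(target=sinkhole, args=(srv, seen, 3))
t.start()

# Three "bots" (here just local clients) rendezvous with the sinkhole.
for _ in range(3):
    socket.create_connection(("127.0.0.1", port)).close()

t.join()
srv.close()
print(len(seen))                 # -> 1 (all the local clients share one address)
```

In the real measurement, each distinct source address in `seen` is one infected machine, which is why the count is a direct estimate rather than an inference from scanning traffic.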

It's interesting to ask how you'd counter this technique. One obvious choice would be to simply hardwire the IP address of the C&C machine, but then if the owner of that machine fixes it, the entire botnet is lost. The ability to retarget is why the botmasters are using DNS in the first place. Another natural thing to do is use better obfuscation techniques to make it harder for the defender to figure out what DNS address you're looking up, but eventually your binary will be reverse engineered. Periodically downloading new binaries with different rendezvous points would presumably help here if the obfuscation were done differently each time.

What you really want is to remove the reliance on a central point of control. For instance, you could post instructions to a popular newsgroup like alt.binaries.pictures and let the bots contact Google Groups to get their instructions. You could use cryptographic techniques (digital signatures) to make it impossible for anyone else to emplace new instructions though they would still be removable, of course. Similar techniques could be used with P2P/filesharing systems. FreeNet, for instance, is designed to be hard to censor, though I don't know how true that is in practice.

 

February 5, 2006

EG reader Nagendra Modadugu pointed me to this Times article on AOL and Yahoo's plans to charge mail senders for the right to bypass their spam filters:
America Online and Yahoo, two of the world's largest providers of e-mail accounts, are about to start using a system that gives preferential treatment to messages from companies that pay from 1/4 of a cent to a penny each to have them delivered. The senders must promise to contact only people who have agreed to receive their messages, or risk being blocked entirely.

The Internet companies say that this will help them identify legitimate mail and cut down on junk e-mail, identity-theft scams and other scourges that plague users of their services. They also stand to earn millions of dollars a year from the system if it is widely adopted.

AOL and Yahoo will still accept e-mail from senders who have not paid, but the paid messages will be given special treatment. On AOL, for example, they will go straight to users' main mailboxes, and will not have to pass the gantlet of spam filters that could divert them to a junk-mail folder or strip them of images and Web links. As is the case now, mail arriving from addresses that users have added to their AOL address books will not be treated as spam.

OK, so they'll be charging, but it's a little unclear what the terms are going to be here. It seems to me that there are two possibilities:

  1. Senders of non-spam messages that might otherwise be potentially flagged as spam (e.g., opt-in mailing lists) will be able to avoid false positives.
  2. Senders of spam messages will be able to bypass AOL and Yahoo's spam filtering system and deliver their messages right to consumers.

Now, from a technical perspective these are basically identical, but not at all from a social perspective. In case (1), Yahoo and AOL are acting in the interest of their users, who presumably want to receive the order confirmations and opt-in advertisements they signed up for. The payment is pure monopoly rent: delivering mail that bypasses the spam filter isn't more expensive--if anything, it's less so--and all the receiver needs to be able to do is verify that the sender is a non-spammer, which is a simple authentication problem. So, at most they would need to charge a simple setup fee to defray their costs. Also, as Lixia Zhang pointed out on a mailing list we're both on, there's a bit of a perverse incentive here, in that Yahoo and AOL can benefit by purposely tagging legitimate messages as spam and forcing the senders to pay to bypass the spam filters.

In case (2), by contrast, Yahoo and AOL are taking a payment from the senders in order to do something the users wouldn't prefer--send them unwanted e-mail. The users, of course, would prefer that Yahoo and AOL do the best possible job of filtering spam. Here, too, there is an interesting incentive issue: Yahoo and AOL want to do the best possible job of filtering spam from people who haven't paid them in order to extract the maximum amount from people who have: if the spam filters are really good, then the only way to get your spam through will be to pay.

 

February 3, 2006

Here are my slides from my talk today at ISOC NDSS.
 
Received in mail today:
X-Originating-Email: [mrk700700@msn.com]
Reply-To: sgtmarkedwards@gmail.com
From: "Sgt.Mark Edwards" 
Subject: From: Sgt.Mark Edward 
Date: Fri, 03 Feb 2006 17:35:16 +0000
To: undisclosed-recipients: ;

From: Sgt.Mark Edward

The President/CEO

Dear Sir/Madam,

My name is Mark Edward, I am an American soldier serving in the
military of the 1st Armored Division in Iraq, As you know we are being
attacked by insurgents everyday and car bombs.We managed to move funds
belonging to Saddam Hussien's family.

We want to move this money to you, so that you may invest it for us
and keep our share in a safety keep.We will take 70%, my partner and I.
You take the other 30%. no strings attached, just help us move it out
of Iraq, Iraq is a warzone. We plan on using diplomatic courier and
shipping the funds out in one silver box, using diplomatic immunity.

This Transaction is risk free and has a diplomatic coverage whereby the
consignment cannot be checked in port of Entry.

Also,I regret if this email surprises you but rather I just need your
kindest of assistance.

Sincerely Yours,

Sgt.Edward
1st Armored Division in Iraq
US ARMY

If you need me, I'll be out making a profit by supporting our troops!

 

February 1, 2006

The Times reports on a deal to cut spending by $39 billion over the next five years:
WASHINGTON, Feb. 1 House Republicans, handing a close-fought victory to President Bush on the heels of his State of the Union address, pushed through a measure today to rein in spending by nearly $40 billion over the next five years, with cuts in student loans, crop subsidies and Medicaid, the government's health insurance program for the poor.

This comes to about $8 billion a year. Shouldn't this article mention somewhere that the Federal budget for 2006 was over $2 trillion? $8 billion is .4% of that. To give you some more context, the deficit for 2005 was projected to be over $400 billion, so this represents less than a 2% cut in the deficit. For some reason the Times doesn't think this is something you need to know.
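
The percentages above can be checked directly; the dollar figures are the ones in the post, with the $39 billion spread evenly over five years:

```python
# Checking the percentages quoted above, using the post's figures.
cut_per_year = 39e9 / 5    # ~ $8 billion a year
budget = 2e12              # FY2006 federal budget, per the post
deficit = 400e9            # projected FY2005 deficit, per the post

print(f"{cut_per_year / budget:.2%}")   # -> 0.39% of the budget
print(f"{cut_per_year / deficit:.2%}")  # -> 1.95%, i.e. less than a 2% cut
```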

UPDATE: Brad DeLong makes the same point.