EKR: March 2005 Archives

 

March 31, 2005

It's time to reissue the contract for the .NET registry, and ICANN's procedure for selecting a new registry involved a beauty contest. They had a third party (Telcordia) evaluate the proposals and then selected the highest-ranking proposal:
Immediately following the announcement of the evaluators' final rankings, the applicant who was ranked the highest will be invited to begin intensive and speedy negotiations with ICANN on the terms of the .NET registry operator agreement. ICANN's proposed form of agreement will be posted online on or about 31 January 2005. If the highest ranking applicant and ICANN are unable to reach a mutually acceptable agreement within two weeks following the release of the rankings, then (i) ICANN will prepare for the ICANN Board a summary of the contractual points in dispute, upon which the applicant will be invited to comment prior to its submission to the ICANN Board, and (ii) the ICANN staff will immediately begin negotiations with the next highest ranked applicant with the goal of reaching an agreement (and related appendices, as appropriate) mutually acceptable to that applicant and ICANN.

Telcordia has issued their report. Technically, VeriSign came out on top, but basically it's a tie between VeriSign and Sentan:

Criterion                      | Afilias | CORE++        | DENIC            | Sentan  | VeriSign
Applicant Ranking (overall)    | 3       | 5             | 4                | 2       | 1
High priority criteria         | 3 Blue  | 3 Blue, 1 Red | 4 Blue, 1 Yellow | 12 Blue | 14 Blue
Medium priority criteria       | -       | 1 Blue        | -                | 1 Blue  | -
Pricing rank (medium priority) | 1       | 4             | 5                | 2       | 2

Blue is best, yellow is questionable, red is bad.

Indeed, the report explicitly says that it's a tie:

Sentan and VeriSign are the leaders, Afilias and DENIC are in the second group and CORE++ is third. Within the first group, VeriSign has a small numerical edge over Sentan that is not statistically significant given the methodology used to rate the RFP responses. The stratification between the lead group (Sentan, VeriSign) and the other vendors is statistically significant.

The results of the site visits were not used to arrive at this ranking. However, in our professional judgment the results correspond to our impressions during the site visits. Sentan and VeriSign are highly professional organizations with mature quality processes. The risk to the operation of .NET is minimal if either organization is awarded the contract.

All that said, ICANN will now "promptly enter negotiations with the top-ranked applicant to reach a mutually acceptable registry agreement," which is to say VeriSign.

My take: this is a missed opportunity for ICANN to extract concessions from the vendors. If Sentan and VeriSign would both do a good job, then why pick VeriSign basically arbitrarily? Better to make both bid for the right to be the registry, either in terms of lower prices per domain or in terms of guarantees of better service. Isn't there still contention about VeriSign's SiteFinder service? I'm not a lawyer, but could ICANN require VeriSign to agree not to use SiteFinder on .net--or even .com?

UPDATE: Corrected where I said registrar where it should have been registry. Thanks to Grumpy for pointing this out.

 
Knitbot has quite an interesting article on the economics of the yarn supply chain. This helps explain why every knitter I know seems to be maintaining their own warehouse full of yarn for projects they plan to get to any day now.
 

March 30, 2005

Here's a new feature Bloglines is offering. You can now use it to track your packages via UPS, FedEx, or USPS. A small trick, but kind of nice.
 
Terence Spies pointed me to this Amazon.com guide on how to kill somebody and get away with it. The clean-up instructions are particularly good:
After you have done the job, you now have 150+ pounds of raw meat on your hands that must be inconspicuously disposed of. While many people would suggest a trash bag and the bottom of a river, this is not an appropriate avenue for the true assassin. First, you must arrange the body in a more easy-to-carry way. The human body is made to be fairly durable, so some kind of saw ('Stanley 15-113 Contractor Grade High Tension Hacksaw') will be necessary for the task. After this, moving the meat will cause it's own problems. Walking around with a trash bag over your shoulder is fairly conspicuous, so it is better to use something that will draw less attention. Personally I've found the best way to accomplish this task, particularly in a college town, is to use a backpack ('Dana Design Sluiskin 45'). The rain-proof nature of this specific pack will be useful for keeping what fluids there may be inside of the pack. You'll also want to have a few trash bags on hand, then find some kind of a bookstore or library, place the backpack inside a trash bag, and toss it into a dumpster behind this building. Provided nothing leaks out, it will be easily ignored as a bag of thrown out books.

And of course, Amazon will gladly sell you all the tools to get the job done.

UPDATE: Looks like Amazon has taken this down.

 

March 29, 2005

The reason that the Secret Service's DNA key-cracking network works is that people are encrypting their files under keys generated from their passwords. People choose easily guessable passwords, which makes them easy to attack.

There are a number of countermeasures to this kind of attack:

  1. Use a non-guessable password or passphrase. There are algorithms for generating memorable, high-entropy passwords. See, for instance, FIPS 181. Unfortunately, retraining users is very hard.
  2. Slow down the password->key transformation. There are a large number of techniques available to make the process of computing the encryption key from the password slower. The general idea here is that users will accept a small delay (order 1 second) in the first decryption in order to get added security. If you imagine that the attacker has to try a million keys--and this is a very small number--then even at 1 second a key we're up to about 300 CPU hours of computer time.
  3. Make it more difficult for attackers to detect whether a given key is correct. For instance, some compression schemes would require you to decrypt the entire file, thus slowing down trial decryption somewhat.

The second strategy is probably the most attractive. It can be implemented by the software vendors and doesn't require new user behavior. If I were a criminal who expected my communications to be tapped by the FBI, I would definitely want my vendor to implement something like this.
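To make the second countermeasure concrete, here's a minimal sketch using OpenSSL's PBKDF2 implementation. The 16-byte key size and the iteration count are my own illustrative choices, not anything any particular vendor actually does:

    #include <string.h>
    #include <openssl/evp.h>

    /* Derive a 128-bit key from a password by iterated hashing (PBKDF2).
     * The iteration count is the cost knob: pick it so that derivation
     * takes on the order of a second, which multiplies the attacker's
     * per-guess cost by the same factor. */
    int derive_key(const char *password, const unsigned char *salt,
                   int saltlen, unsigned char key[16])
    {
        /* 2,000,000 iterations is an illustrative figure; benchmark
         * your own hardware and use the largest count users tolerate. */
        return PKCS5_PBKDF2_HMAC_SHA1(password, strlen(password),
                                      salt, saltlen, 2000000, 16, key);
    }

The nice thing about this approach is that the parameter is tunable: as machines get faster, you just crank up the iteration count.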

UPDATE: Replaced FBI with Secret Service (see the original post).

 

March 28, 2005

WaPo has an interesting article about the distributed computation network (called Distributed Networking Attack (DNA)) the Secret Service uses to break encryption. There are two interesting pieces of information in this article:

First, even with a big network of computers, brute-forcing a reasonable-sized cryptographic key is totally out of reach. The Secret Service uses quite a clever dictionary attack strategy to speed things up:

In each case in which DNA is used, the Secret Service has plenty of "plaintext" or unencrypted data resident on the suspect's computer hard drive that can provide important clues to that person's password. When that data is fed into DNA, the system can create lists of words and phrases specific to the individual who owned the computer, lists that are used to try to crack the suspect's password. DNA can glean word lists from documents and e-mails on the suspect's PC, and can scour the suspect's Web browser cache and extract words from Web sites that the individual may have frequented.

"If we've got a suspect and we know from looking at his computer that he likes motorcycle Web sites, for example, we can pull words down off of those sites and create a unique dictionary of passwords of motorcycle terms," the Secret Service's Lewis said.

The second interesting thing is that the Secret Service isn't using a dedicated computing infrastructure; DNA runs on ordinary employees' workstations when they're idle. Obviously, that saves money, but it has interesting privacy implications. The way you run an attack like this is by doing what's called trial decryption. Each computer in the network has the ciphertext, and to test a candidate key, you decrypt the ciphertext and look to see if the plaintext is plausible (e.g., it looks like ASCII text rather than random garbage). But here's the thing: the workstation which correctly guesses the key now has the plaintext as well [1]. The way you deal with this is by giving the worker machines a very small fragment of the file, say less than 100 bytes. Then, when a worker machine decrypts its chunk, it doesn't get the entire file. There's some indication that they do that now, but it's not entirely clear:

In the meantime, the agency is looking to partner with companies in the private sector that may have computer-processing power to spare, though Lewis declined to say which companies the Secret Service was approaching. Such a partnership would not endanger the secrecy of their operations, Lewis said, because any one partner would be given only tiny snippets of an entire encrypted message or file

This is an important consideration even if all the computers are operated by Secret Service employees. The "fragment" fix works fine if you have big files whose parts you can independently decrypt and verify. This is pretty much true for most modern encryption systems, but some of the techniques that crypto engineers talk about for resisting brute force (e.g., compression [2]) could force you to decrypt the entire file, making this a serious privacy issue. And of course these are some of the techniques you might want to use to counter this kind of search network.

1. Note that there is at least one special case. If you're encrypting something like an RSA private key, that has internal structure and you can test whether you have the correct key without ever seeing the actual plaintext.
2. Note that standard compression algorithms have a fixed header which makes detecting successful decryption easy, but you can design compression systems which don't have this property.
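To make the trial-decryption loop concrete, here's a rough sketch of what a worker machine might do with a small fragment. The cipher choice, the fragment handling, and the printable-ASCII plausibility test are all my assumptions for illustration; the article doesn't describe the actual mechanics:

    #include <ctype.h>
    #include <openssl/evp.h>

    /* Crude plausibility test: does the candidate plaintext look like
     * text rather than random garbage? */
    static int looks_like_text(const unsigned char *buf, int len)
    {
        int printable = 0, i;
        for (i = 0; i < len; i++)
            if (isprint(buf[i]) || isspace(buf[i]))
                printable++;
        return printable * 100 > len * 95;     /* >95% printable */
    }

    /* Trial-decrypt a small fragment (a whole number of cipher blocks,
     * at most 96 bytes here) under one candidate key. A nonzero return
     * means "promising key"--and also means this worker now holds a
     * readable chunk of the suspect's file, which is the privacy point
     * made above. */
    int try_key(const unsigned char key[16], const unsigned char iv[16],
                const unsigned char *frag, int fraglen)
    {
        unsigned char plain[128];
        int outlen = 0;
        EVP_CIPHER_CTX ctx;

        EVP_CIPHER_CTX_init(&ctx);
        EVP_DecryptInit_ex(&ctx, EVP_aes_128_cbc(), NULL, key, iv);
        EVP_CIPHER_CTX_set_padding(&ctx, 0); /* mid-file chunk: no padding */
        EVP_DecryptUpdate(&ctx, plain, &outlen, frag, fraglen);
        EVP_CIPHER_CTX_cleanup(&ctx);

        return looks_like_text(plain, outlen);
    }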

UPDATE: Florian Weimer points out that it's the Secret Service, not the FBI as I originally wrote. Fixed... I think...

 
We went up to the Royal Gorge X-C ski resort in Tahoe this weekend (staying at the Rainbow Lodge).

On the good side:

  • We started with a beginner's lesson (I've only been X-C skiing once before) and the instructor was particularly good: friendly and funny with a talent for hitting just the right instructional pace so you learned stuff but didn't feel overwhelmed.
  • The weather was absolutely beautiful, warm and clear---though we both forgot sunscreen the first day and some sunburn ensued.
  • The trails were well maintained and fairly well marked. The difficulty ratings were also pretty accurate, without any black diamond surprises on the intermediate trails.
  • The trails are equipped with these nice little warming huts which you can sit in and warm up or have a snack.
  • The people were generally very nice, with a few exceptions discussed below. In particular, the ski rental people were very accommodating in letting us try out some of the demo skis on day two.

On the bad side:

  • The Rainbow Lodge (Royal Gorge's captive hotel) rooms were a little cramped. Ours had a queen bed with only about 3-4 feet of clearance between the bed and the wall. The toilet was literally in a closet. I guess it's part of the bed-and-breakfast feel, but I would prefer a more conventional room.
  • I got the same broken ski pole two days in a row---the handle kept coming off. I told them after the first day, but apparently they didn't fix it, which was somewhat irritating.
  • The heating system in the Rainbow Lodge was seriously hosed. Our room was probably about 80 degrees. I mentioned this to the front desk after the first night and they assured me the heat had been turned down, but there I was at 2 AM the second night trying to close the heating register (no luck) and opening the window. Even then, it was too hot on my side of the room (away from the window). And this is the super-annoying bit: when I mentioned it to the front desk they basically told me to go away and fill out a complaint form if I wanted to. Writing a letter to the owners is on my TODO list...
  • We got out of Tahoe just fine, but the traffic was bad for about 25 miles on either side of Sacramento and then it started to rain pretty heavily. Overall, it took us about 5 hours to get back. On the other hand, the drive to Tahoe was nice. We left at 8.30 or so and the roads were totally clear. (Special thanks to Cullen Jennings who lent me his car when I found out that my S4 wouldn't take chains).

If you like to cross-country ski, I can definitely recommend the Royal Gorge trails. However, I can't really recommend staying with them. There appeared to be a number of bed and breakfasts in the same general area, so there should be a bunch of other options which are equally convenient and a bit more eager for your business.

 
Larry Lessig points to the NYT's editorial on the Grokster case. I'll say this about them: they understand that it's about incentives:
But when the Supreme Court takes up the issue this week, we hope it considers another party to the dispute: individual creators of music, movies and books, who need to keep getting paid if they are going to keep creating. If their work is suddenly made "free," all of society is likely to suffer.

...

The founders wrote copyright protections into the Constitution because they believed that they were necessary for progress. Movies, music and books require investments of money and time. If their creators cannot make money from them, many will be unwilling or unable to keep producing. Or they may have to finance their work in troubling ways, like by building in product placements or taking money from donors with agendas.

This is a reasonable argument on theoretical grounds, but it's important to take a step back and look at the chain of reasoning:

  1. The amount of content purchased is strongly affected by the availability of free content.
  2. Content creators' income is strongly tied to the amount of content purchased.
  3. The amount of content supplied is fairly elastic.

Each step of this argument is theoretically plausible, but the reverse is also plausible, and we don't really have the data to be sure either way.

The research on the effect of free content on content sales is at best equivocal. We don't have a definitive answer that there's no effect, but the evidence doesn't really support the assertion that there's a definite effect. Indeed, at least one study suggests that there's no significant effect in current filesharing networks. Now, it's true that those networks aren't as usable/available as one might like, but consider that lots of content is available from AllOfMP3 and that doesn't seem to have put that big a dent in music sales.

Step 2 is also fairly questionable. It's true that if you're one of the few really successful writers or musicians, then you make most of your money directly from the sales of your content, but a lot of content creators make most of their money indirectly. For instance, if you're an author of a technical book, unless you have a real blockbuster, you probably make more money indirectly from your enhanced reputation--and the increased compensation it lets you command--than from royalties directly. I imagine that this is true for a lot of non-technical authors and musicians as well.

Finally, consider Step 3, the claim that the amount of content supplied is elastic. I don't know of any evidence that this is true. On the contrary, if you look around you'll notice an enormous amount of uncompensated content being produced, in the form of free software, music, and blogs. To take only the case of music, realize that despite the shockingly small amounts of money that most professional musicians make, there seems to be a near-infinite number of people with day jobs just waiting for their shot. Why should we expect that to change if the amount of revenue those artists make shrinks by 50%?

The truth is that we don't know with any real certainty what the effect of widely available free content on the supply of new content would be. We certainly don't know that it would have a big enough negative effect to offset the deadweight loss that we know we're suffering due to restricted availability of content. Anyone who suggests differently is probably making stuff up.

 

March 25, 2005

I'm heading up to Lake Tahoe for the weekend and Highway 80 often has chain restrictions, so I went looking for chains for my 2001 S4. Unfortunately, the trip was last minute and so is my chain shopping.

Here's what the Audi owner's manual has to say:

Snow chains must not be used on the tires with 7 1/2 J x 17 rims. See page 232 for details.

If you want to mount snow chains on your vehicle, consult your Audi Dealer for proper rim/tire combination.

Snow chains can be used on the front wheels only.

[4 more paragraphs on how to use snow chains deleted]

Now, I have no idea what kind of rims I have, so I turn to page 232. Unfortunately, page 232 is entirely about spare tires and doesn't say anything about 7 1/2 J x 17.

Now, you can definitely get chains for my 225/45 R17 tires, but I'm paranoid so I call Audi to be sure. They look at the manual, which appears to have been translated from German. The customer service rep informs me that the 2001 S4 only comes with one kind of rim, which, you guessed it, is 7 1/2 J x 17. Moreover, they discourage you from using chains with any kind of sport tire (which I have). So, basically, the answer is "no", though apparently it requires some kind of automotive expert to work that out, especially in view of the fact that Audi tells you how to use chains with rims you almost certainly don't have.

 

March 24, 2005

From Harold McGee's On Food and Cooking, comes the following table of ice cream composition:
Style                 | % Milk Fat | % Other Milk Solids | % Sugar | % Yolk Solids (Stabilizers) | % Water | Overrun (% of original mix volume) | Calories per 1/2 cup (125 ml)
Premium standard      | 16-20      | 7-8                 | 13-16   | (0.3)                       | 65-56   | 20-40                              | 240-360
Name-brand standard   | 12-14      | 8-11                | 13-15   | (0.3)                       | 67-60   | 60-90                              | 130-250
Name-brand standard   | 10         | 11                  | 13-15   | (0.3)                       | 64      | 90-100                             | 120-150
"French" (commercial) | 10         | 11                  | 13-15   | (0.3)                       | 64      | 90-100                             | 120-150
French (handmade)     | 3-10       | 7-8                 | 15-20   | 6-8                         | 69-54   | 0-20                               | 150-270
Gelato                | 18         | 7-8                 | 16      | 4-8                         | 55-50   | 0-10                               | 300-370
Soft-serve            | 3-10       | 11-14               | 13-16   | (0.4)                       | 73-60   | 30-60                              | 175-190
Low-fat               | 2-4        | 12-14               | 18-21   | (0.8)                       | 68-61   | 75-90                              | 80-135
Sherbet               | 1-3        | 1-3                 | 26-35   | (0.5)                       | 72-59   | 25-50                              | 95-140
Kulfi                 | 7          | 18                  | 5-15    | -                           | 70-60   | 0-20                               | 170-230

I love gelato, but 18% milkfat... ouch.

 
/. is covering SecureScience's announcement of a new block cipher called CS2. Here's the claimed value proposition:
A simple, efficient and secure block cipher has been proposed. It was designed after the CS block cipher as well as the research into FPHT transforms of [5] and [7]. We feel that the design is a reasonable alternative to Rijndael for hardware platforms since it is equally as efficient and does not rely on a highly algebraic non-linear transform.

In other words, it's no faster than AES, but hasn't gone through the extensive rounds of vetting that AES has? That's really quite the compelling argument they've got there.

The recent results on MD5 and SHA-1, while not having any direct implications for AES, do reinforce the wisdom of not putting all of one's eggs in one basket, so I can see the attraction of having an alternative to AES. Here's the thing, though: there were five finalists in the AES competition: MARS, RC6, Rijndael (the selected AES), Serpent, and Twofish. All of them were quite fast and were believed by the evaluators to offer an adequate security margin. If you feel the need for a backup for AES, you should pick one of the other AES candidates rather than some entirely new cipher.

Another alternative here is to use 3DES, which people are pretty comfortable with from a security perspective. However 3DES has two significant drawbacks:

  1. The small blocksize (64 bits) means that you have to rekey relatively frequently---every 34 GB or so in CBC mode, since you want to stop well before the birthday bound of 2^32 blocks (2^32 64-bit blocks is about 34 GB).
  2. 3DES is quite slow compared to the more modern algorithms.

This means that 3DES isn't really suitable for very high speed applications. For such applications, one of the other AES candidates is a better choice. I'm not really a cryptographer, so I can't offer an opinion about which the best choice of the remaining four is, but Twofish seems to have the biggest mindshare.

 

March 23, 2005

Due to extremely aggressive flu shot rationing, we appear to have an extra 4.5 million doses of flu vaccine, approximately 10% of the 50 million or so doses in the US. The natural reaction here is that the rationing was too aggressive, but that's not obviously true. If we hadn't rationed the shots, then they would have been fairly evenly distributed through the population. As it is, the vaccine went mainly to old people, who received more benefit from it--at least so we thought at the time (though see here). On the other hand, if not that many more old people got the vaccine and the surplus was just wasted, that's not so good. I don't have enough data to evaluate which situation this was.
 

March 22, 2005

As I predicted, the PyMusique guys have developed a new version of PyMusique that works with Apple's modified ITMS. It's really hard for Apple to win with this tactic because it's generally easy for the attackers to reverse engineer iTunes to figure out whatever secret it uses to authenticate itself to ITMS, and there's an upper limit on how often Apple can force people to upgrade to whatever new version of iTunes they ship.

That said, as I understand it the issue is that ITMS doesn't add DRM itself but relies on iTunes to do it. Presumably if Apple is willing to throw some more CPU at the problem on the ITMS side they can render this particular form of attack kind of irrelevant. Of course, one can always bypass the DRM, as is done by PlayFair, but that's a different attack.

 

March 21, 2005

Well, that didn't take long. Apple has blocked PyMusique, which allowed you to download music from the iTunes Music Store without the DRM. Apparently, you can now only use iTunes version 4.7 to buy stuff from ITMS. Of course, this means that everyone who was running some other version of iTunes needs to upgrade. Outstanding!

Of course, this is potentially an arms race: the PyMusique guys can reverse engineer iTunes 4.7 and make PyMusique emulate it, Apple can upgrade again, etc., forcing inconvenience all around. It will be interesting to see how this turns out.

 

March 20, 2005

WaPo reports on the not-too-surprising result that abstinence pledges don't have any significant effect on STD rates. Students were divided into 3 groups: non-pledgers, inconsistent pledgers (who changed their status or responses) and consistent pledgers. Here's a summary table of the results:

Measure                | Non-pledgers | Inconsistent pledgers | Consistent pledgers | p
STD rate               | 6.9          | 6.4                   | 4.6                 | .150
Sex before marriage    | 90           | 79                    | 61                  | ?
Condom use (first sex) | 59.7         | 54.9                  | 54.6                | .017
Oral sex only          | 2            | 5                     | 13                  | <=.000

My take: On the one hand, this suggests that abstinence pledges aren't very useful in preventing STDs. On the other hand, while there's a statistically significant difference in condom usage between the three groups, it's only about a 10% difference, which really isn't that big a deal. There appears to be a real difference in the rate of premarital sex, but I wonder how much of that is an artifact. People who don't plan to have premarital sex are probably a lot more likely to take pledges in the first place, I would imagine.

It's particularly interesting to note the substantially higher rate (6 times) of "technical virgins" (those who have only had oral sex) in the pledging group. There are (at least) two possible explanations for this: (1) people who take the abstinence pledge really do want to have sex but somehow feel committed and so take advantage of the oral sex loophole. (2) people who think of oral sex and intercourse as different are more likely to take abstinence pledges. Note that when you add up the total "sexual contact" rates you get 92, 84, and 74, respectively. Nearly 3/4 of people who pledge not to have sex have some significant sexual contact anyway. Somehow I suspect that's not exactly the result the abstinence advocates were looking for.

 

March 19, 2005

I don't think all research into cracking DRM is pointless. In particular, work like John Halderman's demonstration that the shift key could be used to bypass SunnComm's CD copy-protection mechanisms serves a useful purpose: deterring CD manufacturers who might otherwise have drunk the SunnComm kool-aid from inconveniencing customers without getting any real protection for themselves.

The difference is that there's basically no way for CD copy protection to actually be effective without inconveniencing far more users than the manufacturers would ever be willing to do. Pointing that out to them is a public service. Forcing Apple to tighten their DRM is not a public service because it's something that Apple is perfectly willing to do and so the only real effect is to make everyone more miserable.

And yes, I do recognize that this implies that there is a middle ground in which the manufacturers might be able to shift which region they're in by credibly committing to using DRM no matter what the cost.... Call this the "You're just making it harder on yourself" defense.

 
Jon Johansen has just released pymusique (link /.ed, try cache), a piece of software that lets you download DRM-free music from the iTunes Music Store. It seems that when you download music from ITMS, iTunes adds the DRM itself, so pymusique just doesn't add it.

I don't really understand the point of this kind of thing from the perspective of the average user:

  1. DRM always requires the cooperation of the software that the user uses to display the content.
  2. Without trusted hardware, it will always be possible to coopt that software. You're going to hear a lot in the next week or so about how Apple's design was incompetent so of course it was cracked, so it's important to bear this point firmly in mind: there is nothing Apple could have done to make cracking impossible, just more inconvenient. It's true that this probably wasn't the best design decision, but it just doesn't matter.
  3. Every time someone figures out how to coopt the client side, the manufacturers respond by changing the formats or software. (In this particular case, the obvious fix is to add DRM on the server).
  4. Every time the formats change, it's an inconvenience to the legitimate customers, who generally have to upgrade their software. (It's also an inconvenience to the people who are using the DRM-cracking software since they need to update that.)

In other words, every time Mr. Johansen or someone else figures out how to crack Apple's DRM, the main effect is to inconvenience Apple and you the consumer. Yes, yes, it demonstrates the futility of software-only DRM against a determined attacker, but so what? We all knew that already. The chance that Apple will respond by removing DRM seems slim. The chance that when they rev the format it will involve new inconvenient restrictions (whether justified for security reasons or not) is high. What's in it for me again?

UPDATE: Chris Lightfoot argues in the comments that: "By creating an inconvenience every time the DRM is compromised, the attacker creates a disincentive for people to buy from companies which use DRM."

Absolutely true, but that serves their interests, not yours. The question here is how you should react when you hear that someone has broken Apple's (or anyone else's) DRM. What I mostly hear is "Stick it to The Man!", but I suspect a more rational response would be "Those darn hackers are at it again."

 

March 18, 2005

Eu-Jin Goh pointed me to this cartoon showing how to read the author list on a paper:

When I was in Chemistry, the rule I remember being told was:

  • First author is the person who did most of the work.
  • Last author is the principal investigator in whose lab the work was done.
  • The other authors are in descending order of contribution.
This varies a bit across fields. In crypto, for instance, it's generally alphabetical, which is good news if your name is Adleman, bad news if your name is Wang.
 

March 17, 2005

Horatiu Nastase's paper on arXiv suggests that the "fireball" phenomenon observed at the Relativistic Heavy Ion Collider (at Brookhaven National Labs) is actually a black hole:
We argue that the fireball observed at RHIC is (the analog of) a dual black hole. In previous works, we have argued that the large $s$ behaviour of the total QCD cross section is due to production of dual black holes, and that in the QCD effective field theory it corresponds to a nonlinear soliton of the pion field. Now we argue that the RHIC fireball is this soliton. We calculate the soliton (black hole) temperature, and get $T=4a /\pi$, with $a$ a nonperturbative constant. For $a=1$, we get $175.76 MeV$, compared to the experimental value of the fireball ``freeze-out'' of about $176 MeV$. The observed $\eta/ s$ for the fireball is close to the dual value of $1/4\pi$. The ``Color Glass Condensate'' (CGC) state at the core of the fireball is the pion field soliton, dual to the interior of the black hole. The main interaction between particles in the CGC is a Coulomb potential, due to short range pion exchange, dual to gravitational interaction inside the black hole, deconfining quarks and gluons. Thus RHIC is in a certain sense a string theory testing machine, analyzing the formation and decay of dual black holes, and giving information about the black hole interior.

I don't understand anywhere near enough about advanced particle physics and string theory to process this, but it seems like the kind of thing that would be good for someone, somewhere to really understand.

Peter Steinberg suggests that this is actually fairly safe, at least if you're not standing right on top of it. The idea seems to be that the math for describing what's going on is the same as the math for describing a black hole in some space-time geometry that isn't the same as our space-time geometry. At least that's my extremely sketchy understanding based on Steinberg's post, my undergrad QM classes, and my recent reading of The Elegant Universe.

 

March 16, 2005

I just got my BeyondFleece Cold Fusion jacket on Monday and wanted to write a review. Since I ordered it back in November and Beyond claims to deliver your jacket in about 2 weeks you might wonder what took so long.

It's true that Beyond shipped my jacket in a week or so. Unfortunately, the jacket that they sent me appeared to have been sized for someone who weighed about 220 pounds. Since I weigh 170 pounds the fit was rather more like a burlap bag than a custom-fitted jacket. No problem, everyone makes mistakes and Beyond offers a perfect fit guarantee. I contacted Beyond and they told me to ship it back with a note. I did that around December 21st, but by the time I got back from Christmas, I hadn't heard from them and the USPS package tracking said that it hadn't been picked up. I called them and found out that they had been on vacation but would get to the jacket shortly. We talked a little bit about the fit problem and they agreed that there was some kind of systematic error and that they'd work it out.

When the new jacket showed up a week or so later, however, it was closer, but still too big. In particular, the lower torso and the arms were way too loose. I e-mailed Beyond and asked if they wanted to try to fix it or just give me my money back. They said they were willing to try again (they did agree to pay return shipping), so I sent it back with a very extensive note, including some more measurements and photos of me in the jacket. Two weeks later, the post office indicated that the jacket was in their P.O. box and hadn't been picked up. When I got Beyond on the phone, they said that they usually had people send things to their physical address--I didn't see this on their web site--and that they would go check the P.O. box. A week later I still hadn't heard anything and I called them.

When I finally checked my spam filter, I realized that Beyond customer service had told me that they hadn't seen a note in their P.O. box. However, a week later they went back and asked at the counter and found that there just hadn't been a note but the jacket was there. Customer service and I exchanged a few messages and I told them it would be nice to get the new jacket by Friday March 4 in time to take it to IETF. They asked if the 4th was OK and I said yes, but come the 4th--no joy!

The third iteration of the jacket finally arrived Monday March 14th. After all that waiting, it's basically pretty nice. The fit in the shoulders and torso is much better than any off-the-rack jacket I've seen (I looked at Arc'Teryx, Marmot, Mountain Hardwear, and REI). The sleeves are still a little loose for my taste, but they're nice and comfortable in about any arm position, so certainly within the margin of error. Overall, the jacket looks good and the general level of comfort is quite high. I haven't had a chance to take it outside in any extreme conditions so I can't say much about how it performs in the cold and wet.

My general feeling about BeyondFleece is pretty mixed. The jacket is certainly nice, but having to wait this long (and send it back twice!) was pretty annoying. The whole reason I ordered a custom jacket was that I'm an unusual shape, so it's sort of disappointing that it was so hard for them to fit me. And while the Beyond customer service people were unerringly nice, they were fairly hard to get a hold of and didn't seem that interested in turning things around quickly. In general, if one of the off-the-rack jackets fits you fairly well, you might want to stick with that, since you can get a fairly predictable result in terms of fit and delivery time.

 

March 15, 2005

Slate has an article about the chickenpox vaccine. Here's the really interesting part:
But now the questionable durability of the immunity produced by the vaccine may alter the cost-benefit calculus. Older studies have shown that immunity to chickenpox (which historically has been virtually perfect after an attack of the virus) seems to depend on re-exposure. Those findings have been borne out in Japan, where some kids are immunized against chickenpox and others are not. It turns out that the vaccinated kids keep up high levels of protection because they are exposed over and over again to unprotected kids who catch the disease and pass it on. Each time such an exposure occurs, the immunized kids get a little "boost," which stimulates their immunity. Doctors think that the same thing happens to older patients who are at risk of shingles because they once had chickenpox--every time they're exposed to poxy youngsters, their immunity gets a kick, which helps to suppress the reactivation of the latent virus as shingles.

One of the advantages of the Sabin Oral Polio Vaccine (OPV) is that recently immunized patients shed the attenuated virus and this can stimulate immunity in people who haven't been vaccinated. This is sort of the opposite case: non-immunized people can improve the immunity of the vaccinated.

 

March 14, 2005

The main formal thing that goes on at IETF is working group meetings. This means presentations, which means PowerPoint (or Keynote or SliTeX or whatever). These presentations are mostly being given by techies, so they're typically terrible: boring, disorganized, and incomprehensible. There are an infinite number of ways to give a bad presentation, but there are a few easy mistakes to avoid.

Don't talk for too long. I know you have a lot to say, but there's a maximum amount of information that the audience can absorb and a maximum amount of time that people can pay attention. I've got a fairly low tolerance for long presentations, but in my experience other people tend to get bored at around 15-20 minutes. You can get away with longer if you're particularly entertaining, but this generally only applies to things like keynotes, invited talks, etc., not run-of-the-mill technical presentations.

Focus on the important stuff. The corollary to the previous point is that you need to pick and choose what you're going to say. Most presentations at technical conferences are about papers or standards documents. Your job isn't to convey the entire document but to convey the stuff people really need to know. If you're talking about a paper, this means the background of the work and the main results. If you're talking about a standards document, it means the important architectural points and the contentious issues.

The slides are a prop. A lot of people make the mistake of thinking that all the information they're communicating needs to be on the slides. It doesn't. If the presentation is the high points of your paper, the slides are the high points of your talk. If they contain all the information that you're going to say, then people have no reason to actually listen to you. Whatever you do, don't read your slides. I can't say this enough times: don't read your slides. This is the most fatal presentation error you can make, since it guarantees the total uselessness of listening to you speak. This also implies that your slides should not contain paragraphs of text unless you're arguing about the wording of that exact paragraph.

Anticipate the audience's questions. If something you say brings up an obvious question, then answer that question pre-emptively. Someone will ask it anyway and this makes you look prepared. Getting this right can be tricky. If you wait too long, people will want to interrupt you to ask their question. I've had this happen when giving presentations on particularly tricky topics, and it really breaks the flow of your talk. On the other hand, if you do this too often, you start to ramble. This is an area where running through your presentation in real time helps. One compromise for marginal questions that you're not sure people will ask is to have backup slides that you only show if someone asks the question they answer.

Practice your presentation skills. The pacing and cadences required for giving presentations aren't really that natural. You need to know what you're going to say before you say it, and this means practice. If you have particularly good presentation skills (e.g., you've given hundreds of presentations before) then you're probably OK. Otherwise, you should practice this particular talk. This has a secondary advantage that you get to work on the pacing and flow of this particular talk. I generally find that this helps you work out the bugs. You don't need someone to watch you—though it helps—but you can easily do it in front of the mirror. Videotaping yourself can also be illuminating—by which I mean horribly depressing. I speak really quickly and tend to mumble, so this sort of practice has been very valuable for me. I'm not great, but I'm getting better.

There's a basic principle at work here: Don't waste the audience's time. If they're sitting on their hands waiting for you to change slides, that's not good. If they're waiting for you to shut up that's even worse. Before you get up at the front of the room, ask yourself whether you would want to watch this presentation. If the answer is no, you need to rework things pretty seriously.

 

March 12, 2005

One of the amazing things about Minneapolis is that while it's cold outside, it's amazingly hot inside. For some reason, hotels and restaurants seem to be heated to temperatures well above the California norm. I suppose this is just my body overreacting to how much warmer it is than outside, but I don't think so; I noticed it even after having been outside for hours previously.
 

March 11, 2005

From Interesting People:
Hi Dave,

As an attorney, practicing in the areas of international business and immigration law, it has come to my attention through discussions with other attorneys, that DHS is pulling aside "selected" aliens at entry checkpoints and bringing them into a separate room which contains a DHS computer connected to the internet. The aliens are told to bring up their various email accounts on the screen and enter their passwords. DHS then reads the emails for information pertaining to possible unauthorized work or other matters and questions the aliens on these findings. Of course, no attorney can be present at these interrogations! People travelling to the U.S. should be aware that a possible search of them by DHS now also means a search of their email accounts!

Regards,
Rose Robbins, Esq.

As Dave Farber points out, other countries may do the same thing to Americans in retaliation. I'm really looking forward to that.

 
On the way back from Minneapolis, I noticed two things:
  • The pilots seem to be even more aggressive than usual about keeping the fasten seat belt sign on, even when there was clearly no turbulence.
  • Passengers were blatantly ignoring the sign and getting up and walking around and the flight attendants weren't making any attempt to stop them.

I doubt these are unrelated.

 

March 9, 2005

I'll be talking about the status of MD5/SHA-1 at Thursday's IETF Open Security Area Meeting. Here are my slides. BTW, using LaTeX for slides is a lot less bad than I expected.

UPDATE 3/14/05: I've uploaded the slides I actually gave, incorporating some comments from Cypherpunk and Paul Hoffman.

 

March 8, 2005

USA Today reports that the National Guard is having trouble getting recruits, because potential service members seem to find the possibility of being shot, blown up, etc. to be a disincentive to joining up. That's not surprising, but in the middle of the article, there's something that is:
O'Ferrell and 4,100 other Army Guard recruiters across the country are facing their most daunting challenge since the Vietnam War, one that may define the limits of the Bush administration's use of Guard and reserve troops in the war on terrorism.

Last year, Army Guard recruiters fell nearly 7,000 short of their goal of 56,000 soldiers. This year, the Guard's recruiting goal is an even more ambitious 63,000 soldiers, in part to make up for the 2004 shortfall. But through January, four months into the recruiting year that began in October, the Guard had recruited just 12,821 new soldiers, almost 24% below its target for that period.

63,000 soldiers? 4,100 recruiters? That's 1.3 recruits per recruiter per month! Say that you only get 10% of the people you try for, and spend an hour on each prospect... that's still only approximately 13 hrs/recruit. How does that work? Does the recruiter follow you around for hours at a time explaining the virtues of the National Guard? These guys must have some other job... what is it?

 

March 7, 2005

So, my trusty Timbuk2 messenger bag has finally started to give up the ghost--it's literally coming apart at the seams, though the vinyl lining is intact. Since this bag survived about 8 years of fairly brutal daily use by yours truly, I've got a fairly positive feeling about Timbuk2, but I'm open to being talked out of it. Anyone want to tell me why I should get a Zo, Bailey, PAC, or some other bag?
 

March 6, 2005

Adam Shostack points to the Common Vulnerability Scoring System (CVSS):
Over the past several years, a number of large computer security vendors and not-for-profit organizations have developed, promoted, and implemented procedures to rank information system vulnerabilities. Unfortunately, there has been no cohesion or interoperability among these systems. Also, existing systems tend to be limited in scope as to what they cover. Finally, all of these systems tend to be Internet-centric; that is, they tend to be concerned only with vulnerabilities affecting computers connected to the worldwide Internet. The NIAC commissioned this project to propose an open and universal vulnerability scoring system to address and solve these shortcomings, with the ultimate goal of promoting a common understanding of vulnerabilities and their impact.

To get the CVSS score for a given vulnerability, you give it individual scores along a number of axes, e.g.:

  • Access vector (local, remote, ...)
  • Access complexity (high, low)
  • Integrity impact (none, partial, complete).

CVSS then specifies an algorithm to aggregate all of these individual scores into a single linear score, which presumably gives you some impression of the severity of the vulnerability.

I certainly agree that it's useful to have a common nomenclature and system for describing the characteristics of any individual vulnerability, but I'm fairly skeptical of the value of the CVSS aggregation formula. In general, it's pretty straightforward to determine linear values for each individual axis, and all other things being equal, if you have a vulnerability A which is worse on axis X than vulnerability B, then A is worse than B. However, this only gives you a partial ordering of vulnerability severity. In order to get a complete ordering, you need some kind of model for overall severity. Building this kind of model requires some pretty serious econometrics.

CVSS does have a formula which gives you a complete ordering but the paper doesn't contain any real explanation for where that formula comes from. The weighting factors are pretty obviously anchor points (.25, .333, .5) so I'm guessing they were chosen by hand rather than by some kind of regression model. It's not clear, at least to me, why one would want this particular formula and weighting factors rather than some other ad hoc aggregation function or just someone's subjective assessment.
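To illustrate what I mean by an ad hoc aggregation function, here's a toy example in the CVSS style. The weights below are ones I made up, but they play exactly the same role as CVSS's anchor points:

    /* Toy CVSS-style aggregation: each axis multiplies a base score of
     * 10 by a hand-picked factor. Any such table of weights yields a
     * total ordering of vulnerabilities; the question is why these
     * particular numbers rather than some others. */
    double toy_severity(int remote, int low_complexity, int complete_integrity)
    {
        double score = 10.0;

        score *= remote ? 1.0 : 0.5;              /* access vector */
        score *= low_complexity ? 1.0 : 0.333;    /* access complexity */
        score *= complete_integrity ? 1.0 : 0.25; /* integrity impact */
        return score;
    }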

 

March 5, 2005

I'm at IETF 62 in Minneapolis this week, so posting may be a little spotty. Did I mention it's pretty cold here?
 

March 4, 2005

From Craigs List personals:
You are an amazingly technical geek, who is going to teach me as much as possible about computer and network security - port analysis, protocol identification, remote application discovery, remote vulnerability assessment, windows registry assessment, OS discovery, identity management, etc.
I will be your fabulous girlfriend for the weekend. We can go wherever you want, we can do whatever you want.
At the end of the weekend, I will have a deeper understanding of IT security and you will have a s*#t-eating grin on your face.
Am I a tart? Yes, but you won't have to put up with the nonsense that you usually have to go through.
Email me your credentials.
Such a deal.
 

March 3, 2005

Lisa Dusseault recently started a lifting program, but unfortunately the environment isn't necessarily that friendly:
Man: "What are you going to do with all those muscles? Beat up the boys?"
Me: "Yup."
Man: "Well then, stay away from me.

Is it any surprise that women want to go to their own gym? Here's a good rule: unless women indicate otherwise, assume that they're in the gym to work out, not to be hit on, and that they don't need your opinion of their workout plan.

 
The passive network capture system I've been working on has two features that have an interesting interaction:
  • It decrypts SSL/TLS transactions that it captures.
  • It delivers the captured data via SSL/TLS.

We had a report that the SSL delivery connection was failing with the following error:

error:0407106B:rsa routines:RSA_padding_check_PKCS1_type_2:block type is not 02
error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed

This doesn't make any sense, though, because on the delivery connection the capture system acts as an SSL client and doesn't do any RSA decryption. Also, it only happens when we're decrypting SSL. If we're just capturing HTTP data, or don't have the SSL keys, then the system works perfectly.

It should be clear at this point that we're getting some kind of error bleedthrough from the SSL decryption, but how? We need one more piece of information to work it out: it only happens when the delivery socket is in non-blocking mode. If it's in blocking mode, everything works great.

What's happening is this: it's a result of the way that OpenSSL handles errors. OpenSSL maintains a per-thread (static in our case) error stack. When you call SSL_get_error(ssl,r), it combines the information from r, ssl, and the error stack to decide what to return. Now, here's the important point: the error stack isn't cleared automatically on the call to SSL_write().

So, here's the sequence of events:

  1. We call RSA_private_decrypt() to decrypt the connection.
  2. The RSA_private_decrypt() fails, populating the error stack.
  3. Sometime later we call SSL_write() to deliver the data.
  4. SSL_write() encounters a blocking condition. This:
    • sets errno to EAGAIN (35)
    • returns -1
    • leaves the error stack untouched.
  5. When we call SSL_get_error(), we get the error from (2) because that's what's on the error stack.
  6. Since we're getting a totally unexpected error, we do the conservative thing and abort the connection.

This doesn't happen in blocking mode because you never return an error in step 4 (unless something went really wrong internally).

This problem doesn't occur normally for two reasons. First, generally when you encounter an OpenSSL error you call ERR_get_error() to find out what went wrong. ERR_get_error() clears the error stack as a side effect. We didn't bother to call it in the RSA decryption code in step (1) because we know what went wrong—the encryption block is badly formatted somehow—and there's nothing to do about it. Second, when something goes wrong in an SSL connection, you typically just throw the connection away and when you create a new connection SSL_connect() clears the error stack as a side effect.

There's a simple one line fix: call ERR_get_error() in step 1 to collect the error and clear the error stack. As a belt-and-suspenders move, we also clear the error stack before SSL_write() by calling ERR_clear_error(), just in case there's some other place we've forgotten to collect the error.
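For the curious, the fix looks roughly like this. This is a sketch with made-up function names, not our actual source:

    #include <openssl/err.h>
    #include <openssl/rsa.h>
    #include <openssl/ssl.h>

    /* Step 1: trial decryption on the capture side. On failure, drain
     * the error stack so stale RSA errors can't leak into a later
     * SSL_get_error() on the delivery connection. */
    int decrypt_premaster(RSA *rsa, const unsigned char *in, int inlen,
                          unsigned char *out)
    {
        int r = RSA_private_decrypt(inlen, in, out, rsa, RSA_PKCS1_PADDING);
        if (r < 0)
            while (ERR_get_error() != 0)
                ;  /* we know what went wrong; just collect the errors */
        return r;
    }

    /* Belt and suspenders: clear the stack before SSL_write() so that
     * SSL_get_error() reflects only this call. */
    int deliver(SSL *ssl, const void *buf, int len)
    {
        int r;

        ERR_clear_error();
        r = SSL_write(ssl, buf, len);
        if (r <= 0 && SSL_get_error(ssl, r) == SSL_ERROR_WANT_WRITE)
            return 0;  /* non-blocking socket: try again later */
        return r;      /* <= 0 here is now a genuine error */
    }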

Isn't programming fun?

 

March 2, 2005

According to NPR, DHS is testing a plan where aliens waiting for decisions on their cases will wear monitoring ankle bracelets. Obviously, this is good for people who would otherwise be put in jail, but I wonder about the equilibrium effects. DHS can only afford to put so many people in jail, but ankle bracelets are a lot cheaper, so it seems likely that at the end of the day a lot more people will be wearing bracelets than would otherwise be in jail. Whether you think this is good or bad rather depends on how important you think it is to keep close track of aliens with indeterminate status.
 

March 1, 2005

Tadayoshi Kohno, Andre Broido, and kc claffy have an interesting paper appearing in IEEE Oakland 2005 showing how to remotely fingerprint computers by measuring their clock skew. The basic idea is that you use TCP timestamps to estimate how fast or slow the remote clock is running. This doesn't give you enough information to uniquely identify the remote machine, but it does give you a way to assess whether two given machines are the same. Possible uses include determining when two machines that have the same address are in fact different machines (e.g., they're behind a NAT) or whether two machines with different IP addresses are actually the same machine (e.g., a honeypot). Interestingly, the clock skew measurements are quite stable even when the network path to the machine being measured changes and over long periods of time. Nice work.
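The core measurement is simple enough to sketch: given pairs of (local receive time, remote TCP timestamp), the slope of the best-fit line tells you how fast the remote clock runs relative to yours. The paper actually uses a more robust linear-programming fit; ordinary least squares, shown below, just conveys the idea:

    #include <stddef.h>

    /* Estimate remote clock skew from n samples: local[i] is our
     * receive time and remote[i] the peer's TCP timestamp, both
     * converted to seconds. A result of 50e-6 means the remote clock
     * gains about 50 ppm--a stable per-machine quantity. */
    double estimate_skew(const double *local, const double *remote, size_t n)
    {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        size_t i;

        for (i = 0; i < n; i++) {
            sx  += local[i];
            sy  += remote[i];
            sxx += local[i] * local[i];
            sxy += local[i] * remote[i];
        }
        /* least-squares slope of remote time vs. local time, minus 1 */
        return (n * sxy - sx * sy) / (n * sxx - sx * sx) - 1.0;
    }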
 
Eu-Jin Goh pointed me to this paper by Lenstra, Wang, and de Weger, entitled "Colliding X.509 Certificates". Lenstra et al. start with an MD5 collision and the first half of a certificate and generate a pair of RSA public keys that produce the same digest value. This produces a pair of certificates with the same signature. This isn't that surprising a result, since it's implicit in the fact that MD5 has collisions, but it's nicely written up and clearly explained.

From a security perspective, this isn't really so bad, for two reasons:

  1. The attacker doesn't actually control all of the first half of the certificate, so mounting the attack on a real CA is harder.
  2. The only thing that's different between these certificates is the public key. So, if we ignore point (1), an attacker would be able to get a certificate with a public key different from that he gave the CA. This isn't inherently that interesting, but an extension to have other differences besides the public key (e.g., the name) would be quite interesting, although you probably wouldn't be able to really control the name you got.
So, don't panic. The analysis I posted here is still pretty accurate.

You can find the colliding certificates and some more details here.