EKR: December 2004 Archives

 

December 31, 2004

Maybe you've noticed the complaining about the amount of aid that the US and other Western countries are providing in the wake of last week's tsunami. I suppose that that's a reasonable point, but focusing on stinginess or generosity misses the bigger structural problem: why do we fund disaster relief in this ridiculous ad hoc way in the first place?

We may not know exactly when it's going to happen or where it's going to occur, but it's a virtual certainty that in the next year something really bad is going to happen somewhere in the world. (Check out this list of disasters for some perspective.) This is a situation ripe for insurance if there ever was one. Now, obviously the poor countries can't afford insurance, but that's easy to fix: have the developed countries subsidize it.

What I'm thinking about here is some sort of international relief fund which would be mandated to go assist in disasters. It would be funded by the big donor countries, but on a continuing basis, and then would decide where and when to provide assistance based on some semi-objective criteria, probably with the assent of the big donors. Not only would this allow for a more equal response, it would allow aid to be better coordinated and to arrive faster because it wouldn't depend on every single donor country individually figuring out how to move the money.

Obviously, this wouldn't serve the interests of the donor countries as well, who love to use this money for leverage or political point scoring, but it would serve the interests of the victims--and isn't that the point?

 
My friend Cullen told me the following story about Canadian airport security:

Cullen and his family are flying to the US from Calgary. At the baggage check--before they've gone through the carry-on X-ray machine--they're selected for "random" baggage screening. The security types give their luggage a hand search and then hand it back to them. Cullen takes the bags about 15 feet away (still in plain view of security) and transfers a whole bunch of stuff from his carry-on (which, remember, has never been searched) into his checked luggage and puts the checked luggage on the belt.

Outstanding!

 

December 30, 2004

/. points to a Cincinnati Post article explaining what happened in the Comair crash:
The computer system was run by SBS International, a subsidiary of Boeing.


The SBS Crew Check system tracks all the details of where each crew member is scheduled and keeps a log of every scheduling change.

Tom Carter, a computer consultant with Clover Link Systems of Los Angeles, said the application has a hard limit of 32,000 changes in a single month.

"This probably seemed like plenty to the designers, but when the storms hit last week, they caused many, many crew reassignments, and the value of 32,000 was exceeded," he said.

Not exactly what I'd call a graceful failure.
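The article doesn't say how the limit was implemented, but here's a minimal sketch of the usual shape of this kind of bug, assuming a fixed-size in-memory change log (all the names are mine, not SBS's):

#include <stdio.h>
#include <stdlib.h>

#define MAX_CHANGES 32000            /* the hard limit from the article */

struct change {
    int crew_id;
    int flight_id;
};

static struct change change_log[MAX_CHANGES];
static int num_changes;

/* Log one crew reassignment. Once the month's table fills up, there's
 * nothing sensible left to do--which is roughly what "not graceful" means. */
void log_change(struct change c)
{
    if (num_changes >= MAX_CHANGES) {
        fprintf(stderr, "change log full: refusing all further changes\n");
        exit(1);   /* a graceful version would spill to disk or expire old entries */
    }
    change_log[num_changes++] = c;
}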

 
When I was in Canuckistan for the holidays, I had a number of people ask me what exactly America's fiscal problem is. I wish I'd had this Brad DeLong post to show them. You should also read Alex Tabarrok's article from early 2003.
 
The death toll is currently running at 116,000. This is almost an order of magnitude higher than the initial estimates of 14,000 dead. By contrast, if you recall, the initial 9/11 estimates were about 10,000 and the final number dropped to around 3,000. With 9/11, the estimates were mostly generated by figuring out how many people were potentially killed and then crossing them off the list as they turned up alive. This time, however, it seems that the estimates get updated as bodies turn up.

It would be interesting to compare initial fatality estimates that appear in the news to the final accounts. Another interesting question is whether it would have been possible to generate better estimates. It would seem that some geographic-based sampling could have gotten us within the right order of magnitude within the first day.
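To make the sampling idea concrete, here's a minimal sketch of the kind of stratified estimate I have in mind; the regions, populations, and death rates are all invented:

#include <stdio.h>

/* Hypothetical affected regions: total population plus the death rate
 * observed in a handful of quickly surveyed villages in each one. */
struct region {
    const char *name;
    double population;
    double sampled_death_rate;
};

int main(void)
{
    struct region regions[] = {          /* all numbers made up */
        { "Coast A", 1000000, 0.05 },
        { "Coast B", 2000000, 0.02 },
        { "Coast C",  500000, 0.10 },
    };
    double estimate = 0;
    for (int i = 0; i < 3; i++)
        estimate += regions[i].population * regions[i].sampled_death_rate;
    /* Even crude per-region rates should land within an order of magnitude. */
    printf("estimated deaths: %.0f\n", estimate);
    return 0;
}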

 

December 29, 2004

Pseudoephedrine hydrochloride, the active ingredient in the decongestant Sudafed, is also a basic ingredient in the standard recipe for methamphetamine. Over the past 10 years or so, states have been making it increasingly difficult to buy pseudoephedrine. In a number of states, you can now only buy it in a pharmacy.

Naturally, this distresses manufacturers of pseudoephedrine, and Pfizer, the maker of Sudafed, is responding by marketing an alternative. The new drug, "Sudafed PE" replaces pseudoephedrine with phenylephrine. However, it's not entirely clear that it's as good. From the abstract of Leslie Hendeles's Selecting a decongestant (Pharmacotherapy, Nov-Dec 1993):

Phenylpropanolamine, pseudoephedrine, and phenylephrine are the most common decongestants. Although all are sympathomimetic amines, their efficacy varies. In particular, phenylephrine is subject to first-pass metabolism and therefore is not bioavailable in currently recommended doses. In addition, phenylpropanolamine and pseudoephedrine, but not phenylephrine, are effective decongestants.

There appear to be conflicting studies, but unfortunately, I'm working only from the abstract--the paper doesn't seem to be online.

For the moment, at least, you'll still be able to buy pseudoephedrine, but don't be surprised if the states start using the availability of phenylephrine as a reason why it's OK to make pseudoephedrine even harder to get.

 

December 28, 2004

Allan Schiffman points to an article in Slate that argues that Iraq casualties aren't "light":
Generational contrasts are implicit today when casualties in Iraq are referred to as light, either on their own or in comparison to Vietnam. The Center for Strategic and Budgetary Assessments, for example, last July downplayed the intensity of the Iraq war on this basis, arguing that "it would take over 73 years for U.S. forces to incur the level of combat deaths suffered in the Vietnam war."

But a comparative analysis of U.S. casualty statistics from Iraq tells a different story. After factoring in medical, doctrinal, and technological improvements, infantry duty in Iraq circa 2004 comes out just as intense as infantry duty in Vietnam circa 1966--and in some cases more lethal. Even discrete engagements, such as the battle of Hue City in 1968 and the battles for Fallujah in 2004, tell a similar tale: Today's grunts are patrolling a battlefield every bit as deadly as the crucible their fathers faced in Southeast Asia.

The authors point out (no doubt correctly) that improvements in medical care, military technology, and the way we're fighting the war account for the difference. Whether that's relevant, though, depends on your perspective, and it seems that the authors aren't really addressing the CSBA's argument head-on.

Let's forget about Iraq for a second and consider a simpler case: farming. Improved technology has dramatically increased farm productivity over the past 100 years, with the percentage of the workforce that farms going from 42% in 1900 to 3% in 1998. Now, let's say for the sake of argument that the farm accident rate per worker hasn't changed (I'm too lazy to look it up). Nevertheless, we can produce the same amount of food for less than 1/10th the cost in terms of dead farmers. Now, Phil Carter comes along and complains that if we used 1900 farming technology we would have the same death rate, but we're not using 1900 farming technology, so it really doesn't matter. The bottom line is that we're much more efficient.

Similarly, if new technology lets us perform the same kind of operation (ignoring for a moment whether Viet Nam and Iraq are equally successful) with less manpower and (more importantly) fewer deaths, that's a good thing, and it's perfectly reasonable to note that. If we can sustain that rate, then we really are more militarily efficient and that enhances our military capability, which is good.

That said, I think there are several reasonable take-home points from Carter's article:

  • The "intensity" of the combat may be just as high in Iraq as in Viet Nam. This is relevant to the extent that intensity has harmful effects on soldiers, either psychologically or because the nonfatal wounds are still very damaging. If our death rate is much lower than in Viet Nam but the maiming rate is much higher, that doesn't buy us much (note to self: research maiming rates).
  • Up till now we've mostly been fighting the war using the new doctrine that Carter describes (massive air power, etc.). If we have to fight a lot more Fallujah-type battles and those aren't much better than the comparable battles in Viet Nam, then we're left with just the medical and armor improvements, which appear to reduce deaths by only about a factor of two.
  • We can't use the death rate as a proxy for the success of Iraq vs. Viet Nam because the rates are incommensurate.

However, I don't think it's fair to complain that without the technical advancements we've made our death rate would be much higher. After all, that's why we introduced the technology in the first place.

 

December 27, 2004

I recently received the following message from Amazon:
A shipment from the above referenced order has been returned to our fulfillment center due to unknown reason

Since this package was undeliverable, we have returned the item(s) to inventory and you will receive a full refund for the shipment. This refund should go through within the next few days and you will receive an e-mail confirmation once the refund has been completed. If, however, a replacement order was sent at no charge because the package was presumed lost, a refund will not be requested.

For your reference, the shipping address we have on file for the returned order is:

Eric Rescorla
[My address]

We ask that you verify the correct shipping address is listed for each open order you may have. We want to make sure you receive all orders. You may view your order status online by clicking the "Your Account" link at the top of our web site.

If you would still like to receive these items, we would encourage you to return to our web site and place a new order. Please know we are unable to reship packages which are returned as undeliverable.

Thank you for your understanding. We appreciate your business and hope to see you again soon at Amazon.com.

Ok, so you're thinking that I just wasn't home when UPS came by, but no... Here's the UPS tracking info:

Dec 11, 2004 03:17:00 PM HODGKINS IL US UNLOAD SCAN
Dec 11, 2004 04:22:00 AM HODGKINS IL US ARRIVAL SCAN
Dec 11, 2004 02:11:00 AM INDIANAPOLIS IN US DEPARTURE SCAN
Dec 11, 2004 01:44:00 AM INDIANAPOLIS IN US ARRIVAL SCAN
Dec 10, 2004 09:31:00 PM LEXINGTON KY US DEPARTURE SCAN
Dec 10, 2004 08:06:39 PM Lexington KY USA SHIPPED
Dec 10, 2004 05:29:22 PM LEXINGTON KY US ORIGIN SCAN
Dec 10, 2004 03:48:24 PM US BILLING INFORMATION RECEIVED

The darn thing didn't get anywhere near California. Maybe it had trouble crossing the Mississippi.

 
Yesterday Lisa and I flew Alaska/Horizon from Kelowna (YLW) to San Francisco (SFO) via Seattle-Tacoma (SEA). The flight to Seattle went fine but after clearing customs and getting to our gate we discovered that our flight to SFO was 80 minutes late. No problem: there's a flight to San Jose (SJC) that was scheduled to leave an hour ago but is delayed and is leaving at the same time as our scheduled flight, and SJC is actually more convenient for us.

But when we asked to be transferred, the first question we were asked was if we'd checked luggage. When informed that we had, we were told that they couldn't transfer us because they couldn't transfer the bags and so we couldn't change flights. What's going on here, of course, is positive bag-matching. The idea here is that people prefer not to explode, and so if your bags are on the same plane as you are, you're less likely to put a bomb in your bags.1

We tried again at the customer service counter and got a similar response, complete with the explanation that they didn't have enough manpower to move people's bags. Now, realize that it's not like our bag was on the plane: our plane was still on the ground in Sacramento.2 Rather, it was in some holding room or on a conveyor belt or something. To add insult to injury, we were told we could volunteer to give up our seats, in which case they would naturally transfer our bags to the new flight. Thanks, Alaska!

1. Of course, this change was a response to 9/11, an event in which the terrorists didn't seem at all concerned about not exploding. However, that's a distinction that seems to be lost on pretty much all airport and security personnel.
2. This is something I've never understood about airlines. Most of the time when your flight is late you're told that the problem is that the plane you're supposed to be flying on is late. Now, I would understand this if you were on the second leg of a multi-leg flight (e.g. SEA-SFO-Palm Springs) and the flight really hasn't come in. But many times there's no discernible connection between the passengers on two flights. Indeed, often it's a plane that just came from where you're going! Is there some reason they can't maintain a few spare planes (remember, SEA is Alaska's hub!) and just put us on one of them instead? I know planes are expensive, but surely annoyed customers are fairly expensive as well. No doubt this is easier if you're like JetBlue and you only fly one class of plane (A320s, though they're also buying a bunch of Embraer 190s), but Alaska has 26 MD-80s and 40 737-400s, which are a pretty close seating match.

 

December 26, 2004

This is not good: as a result of a magnitude 9.0 earthquake and the resulting tsunamis, 14,000 people are now dead. The truly scary part is that even with an event of this magnitude, these are 14,000 people I've never met, and in less than a week it will be off the news cycle. If I went camping for a week, I'd never even know.
 
I'm back in the Bay Area. Regular blogging should resume shortly.
 

December 24, 2004

From Samuel Bowles's Microeconomics, Behavior, Institutions, and Evolution:
Consider the following case (Gneezy and Rustichini 2000). Parents everywhere are sometimes late picking up their children at day-care centers. In Haifa, at six randomly chosen centers, a fine was imposed for lateness (in a control group of centers, no fine was imposed). The expectation was that punctuality would improve. But parents responded to the fine by even greater tardiness. The fraction picking up their kids late more than doubled. Even more striking was the fact that when after sixteen weeks the fine was revoked, their enhanced tardiness persisted, showing no tendency to return to the status quo ante. Over the entire twenty weeks of the experiment, there were no changes in the degree of lateness at the day-care centers in the control group.

The authors of the study reason that the fine was a contextual cue, unintentionally providing information about the appropriate behavior. The effect was to convert lateness from the violation of an obligation that the parents were at some pains to respect, to a commodity with a price that many were willing to pay. They titled their study "A Fine is a Price" and concluded that imposing a fine labeled the interaction as a market-like situation, one in which parents were more than willing to buy lateness. Revoking the fine did not restore the initial framing of punctuality as an obligation, it just lowered the price of lateness to zero.

Another case in which people frame the problem this way is speeding. Many people regard speeding tickets as a tax on driving fast, not an indication that they've done anything wrong. Indeed, they often resent the "slow down" lectures provided by the police more than the fine.

For discussion: Drunk driving was once regarded in much the same way as speeding, but after a concerted PR campaign (by MADD, especially) it's now widely considered to be wrong. If you actually cared about reducing drivers' speeds (I don't), would a similar campaign be likely to work better than more vigorous enforcement?

 

December 23, 2004

The main thing you need to know about Canada is that it's cold. Previous experiences running in Canada have taught me the value of good cold-weather gear, and in anticipation of this year's trip I acquired one of Sporthill's X-C tops.

The X-C is a long sleeved high collared shirt with a medium-length zipper. It's made of Sporthill's 3SP polyester blend and is rated for 20-40°F. Sporthill claims that it is windproof up to 35 mph.

Last night I went for a run in 34° weather and I can report that the shirt performed as advertised. Wearing only the top, my torso stayed warm even in significant winds that cut through my gloves and pants. The moisture management is also excellent. I stayed fairly dry throughout.

The shirt itself is quite comfortable. The outer face is slicker and stiffer than your average running shirt but the inner face is brushed and very soft. Because the shirt is stiff, it tends to fit a little weird for the first few washings. I thought mine was a little big but after a single washer-dryer cycle, it fit perfectly. (Note: you're not supposed to tumble-dry them so I've been hang drying since, which seems to go pretty quickly. I don't know that tumble-drying did any damage, but since it seemed to shrink it a bit the first time...)

I highly recommend this shirt. It's too warm for ordinary use around California (I have a number of dryline shirts for that) but it's excellent for cold weather sports and definitely worth the $86 I paid for it.

 

December 22, 2004

I'm up in British Columbia for Christmas with my SO's family, so blogging will be a bit spotty. The sort-of-in-laws live on a mountain top where the best connectivity is POTS, which is unable to satisfy my appetite for bits. I managed to find an Internet cafe which is letting me jack in for a mere $2 CDN per half hour. Pointers to better net service in the Penticton/Osoyoos area welcome.
 

December 21, 2004

First it was Vioxx and Bextra, then Celebrex, and now it turns out that Naproxen increases the risk of heart problems.

My take: Why are all of these announcements happening at the same time? These studies have an enormous lead time, so it's not like they were started in response to the Vioxx problems. These studies were mostly intended to answer other questions (the naproxen one was for Alzheimer's), so I'm guessing that the researchers unblinded them early to investigate the cardiac issue. That raises the issue of positive-outcome bias: how many researchers studying the effect of NSAIDs on X unblinded their studies and didn't find any negative effects?

 

December 20, 2004

The LA Times has an interesting article about the current state of cognitive enhancers. There are already three drugs on the market that have been shown to improve cognitive performance: modafinil (Provigil), methylphenidate (Ritalin), and donepezil (Aricept). Modafinil and methylphenidate improve concentration and donepezil seems to improve general function, and there's better stuff in the pipeline. It's going to be an interesting next 10 years or so.
 
One of the nice features of TiVo is that it tries to guess what programs you'd like to watch and will automatically record them for you. Unfortunately, it can get confused. You've probably heard about the guy whose TiVo thinks he's gay. Apparently my TiVo thinks I'm the most boring guy in the world because it's decided to record 4 hours of the cable program guide. Outstanding.
 
Tyler Cowen over at Marginal Revolution points to a New York Times article about some autistics who don't have any interest in having normal brain wiring. The arguments range from the standard "this is the way we are" identity politics (familiar from the deaf anti-cochlear implant movement) to observing that some autistics have special talents (exceptional focus, memory, lightning calculation) that can be hard for normals to match. (Reference: neurodiversity.com.)

This argument sounds ok--just as it did 40 years ago when Szasz made it about mental illness in general--when you're dealing with someone who's high functioning and just appears quirky. It's entirely different when the person in question is completely withdrawn and can't (or won't) speak or take care of themselves.

Nevertheless, this is interesting to think about: as we get better control over our brain wiring it's going to become a lot easier for people who would like different neural architectures to have them. The obvious ones that come to mind (smarter, better memory, more focused) are basically enhancements, but that's not the only way to rewire yourself. Lots of people take drugs voluntarily. Why not cut out the middleman and rewire yourself to be stoned all the time?

Science fiction is way ahead of us here. Greg Egan is particularly on point. His "Voluntary Autists" are normal (or at least mostly normal) people who want to become autistic:

"Of course, most animals will instinctively protect their young, or their mates, at a cost to themselves; altruism is an ancient behavioral strategy. But how could instinctive altruism be made compatible with human self-awareness? Once there was a burgeoning ego, a growing sense of self in the foreground of every action, how was it prevented from overshadowing everything else?

"The answer is, evolution invented intimacy. Intimacy makes it possible to attach some, or all, of the compelling qualities associated with the ego--the model of the self--to models of other people. And not just possible--pleasurable. A pleasure reinforced by sex, but not restricted to the act, like orgasm. And not even restricted to sexual partners, in humans. Intimacy is just a belief--rewarded by the brain--that you know the people you love in almost hte same fashion as you know yourself."

The word "love" had come as a shock, in the middle of all that sociobiology. But he'd used it without a hint of irony or self-consciousness--as if he'd seamlessly merged the vocabularies of emotion and evolution into a single language.

I said, "And even partial autism makes that impossible. Because you can't model anyone well enough to really know them at all?"

Rourke didn't believe in yes-or-no answers. "Again, we're not all identical. Sometimes the modeling is accurate enough--as accurate as anyone's--but it's not rewarded: the parts of Lamont's area which make most people feel good about intimacy and actively seek it out, are missing. These people are considered 'cold', 'aloof.' And sometimes the reverse is true: people are driven to seek intimacy, but their modeling is so poor that they can never hope to find it. They might lack the social skills to form lasting sexual relationships--or even if they're intelligent and resourceful enough to circumvent the social problems, the brain itself might judge the model to be faulty, and refuse to reward it. So the drive is never satisfied--because it's physically impossible for it to be satisfied.

...

He said carefully, "Many fully autistic people suffer additional brain damage and various kinds of mental retardation. In general, we don't. Whatever damage we've suffered to Lamont's area, most of us are intelligent enough to understand our own condition. We know that non-autistic people are capable of believing that they've achieved intimacy. But in VA we've decided that we'd be better off without that talent."

"Why better off?"

"Because it's a talent for self-deception."

I said, "If autism is a lack of understanding of others ... and healing the lesion would grant you that lost understanding--"

Rourke broke in, "But how much is understanding and how much is a delusion of understanding? Is intimacy a form of knowledge, or is it just a comforting false belief? Evolution isn't interested in whether or not we grasp the truth, except in the most pragmatic sense. And there can be equally pragmatic falsehoods. If the brain needs to grant us an exaggerated sense of our capacity for knowing each other--to make pair-bonding compatible with self-awareness--it will lie, shamelessly, as much as it has to, in order to make the strategy succeed.

...

I said angrily, "Cost is the least of the issues. You're talking about deliberately--surgically--ridding yourself of something... fundamental to humanity."

Rourke looked up from the floor and nodded calmly, as if I'd finally made a point in which we were in complete agreement.

He said, "Exactly. And we've lived with decades with a fundamental truth about human relationships--which we choose not to surrender to the comforting effects of a brain graft. All we want to do is make that choice complete. To stop being punished for our refusal to be deceived.

Egan's future also includes the possibility of having your brain rewired rather than having a sex change--or even having your brain rewired to make you neurally asexual. Definitely worth checking out.

UPDATE: Fixed a typo: I'd written 'statistics' instead of 'autistics'. doh. Thanks to Chris Walsh for pointing this out.

 

December 19, 2004

The FCC is considering allowing the use of cellular telephones in airplanes. What's changed to make lifting the ban attractive now? The obvious suggestion is that cell phones use much lower power now than 10-15 years ago, but that's been true for years. A more likely explanation is social. As people got used to using their cell phones everywhere, it became clear that they weren't going to pay for the rather expensive airphone service the airlines offer. Of course, the new service will be offered through a picocell provided by the airplane. I wonder what the price for that will be.
 

December 18, 2004

I originally switched to MT3 because I was having problems with my MT2 installation locking up due to comments spam. Looks like the problems may not be solved with MT3. I guess I've just been lucky so far.
 
This post should have been posted a month or so ago, but I finally got around to finishing it...

Bob McGrew has some good commentary on the proposed RFID-readable passports. I've never seen a really good rationale for why you would need these at all. My impression is that the logic goes something like this:

  1. We want to store biometrics in passports.
  2. Biometrics are big.
  3. So we need passports that can store (and let us retrieve) a large amount of digital data.
  4. RFID tags let us store and retrieve largish amounts of digital data.
  5. Therefore we need RFID.

The problem comes between steps 4 and 5. Let's take a step back and look at the available technologies for storing and retrieving digital data in this kind of environment:

Method                                   Capacity                     Dynamic?                Range
Bar code                                 ~10 bytes/inch               Static                  Centimeters (line of sight)
Mag stripe                               ~125 bytes per square inch   Static                  Contact (swipe)
2-D barcode                              ~1000 bytes per square inch  Static                  Centimeters (line of sight)
Memory chip (RFID)                       Effectively unlimited        Static                  Centimeters to meters
Chip + processor (RFID)                  Effectively unlimited        Dynamic but low power   Centimeters to meters
Chip + connector (USB, Firewire, etc.)   Effectively unlimited        Static or dynamic       Contact (must be plugged in)

If you look at this chart, it becomes clear that RFID occupies a sweet spot of sorts: it provides a (mostly) unlimited amount of storage but doesn't require physical contact. But it's not the only sweet spot, for two reasons:

  • While you can do some dynamic processing with an RFID interface, you don't get much power from the probe pulse and so you can't do much processing without some sort of battery to power the system. By contrast, if you have a connector you can supply power and do plenty of processing, as with a PCMCIA card or USB stick.
  • RFID is the only technology that allows unlimited remote read. This is a bug, not a feature, for the obvious privacy reasons. Bar codes require line of sight, so you can't realistically read the passport without actually having it in your hand. The situation with connector-based systems is even better.

Based on the above table, my impression is that you could get high enough data densities with 2-D bar codes. Iris codes are about 128 bytes and fingerprints are about 300-1000 bytes each, so you should be able to put all 10 fingerprints on the interior surface of a passport and still have some room to spare.
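As a rough sanity check on that claim, here's the arithmetic, using the density from the table above and my own guess at the usable page area:

#include <stdio.h>

int main(void)
{
    double density   = 1000.0;        /* 2-D barcode, bytes per square inch (from the table) */
    double iris      = 128.0;         /* one iris code */
    double prints    = 10 * 1000.0;   /* ten fingerprints at the worst-case size */
    double page_area = 3.5 * 4.9;     /* rough interior page in inches (my estimate) */

    double bytes  = iris + prints;    /* ~10K bytes of biometrics */
    double needed = bytes / density;  /* ~10 square inches of barcode */

    printf("bytes: %.0f, barcode area: %.1f sq in, page: %.1f sq in\n",
           bytes, needed, page_area);
    return 0;
}

So roughly 10 square inches of barcode against roughly 17 square inches of page, which is where "some room to spare" comes from.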

The big argument for RFID, of course, is that it's extensible, so if you want to store a lot more stuff on it you don't need to go making a lot of changes to the physical interface. That said, given the amount of attention the passport designers seem to be showing to privacy, it's not clear that that's a feature from the perspective of passport holders.

 

December 17, 2004

Unsurprisingly, a recent test of the "missile shield" was a failure. These are just the unit tests, folks, and they don't even work. When was the last time you saw a system that had only been unit tested work the first time you tried to run all the parts together? And considering that the first real test consists of trying to shoot down incoming nuclear missiles, I'd say we're a ways away from being ready to ship the product.
 

December 16, 2004

Oracle will acquire PeopleSoft for $26.50 a share. I can't decide if PeopleSoft's board are shrewd negotiators who extracted the maximum price or were just looking out for themselves but finally had to give in.
 
Via /., I see a story that Dan Bernstein assigned students in his MCS 494: Unix Security Holes class the project of finding 10 exploitable holes each in Unix programs. The class of 25 students found a total of 51 vulnerabilities (in 44 separate reports) which you can find here.

I've taken a preliminary look at these vulnerability reports and some observations jump out:

  1. The holes were in 42 separate programs, with only 7 programs showing more than one vulnerability and with only one showing more than 2 (CUPS with 4, 3 of which were found by the same person).
  2. There's an enormous variance in the number of holes found. Only two students, Ariel Berkman and Limin Wang, got 10 vulnerabilities. The next highest was Yosef Klein with 5. Only 17 students found any vulnerabilities at all and 9 of them found 2 or less.
  3. Only 7 vulnerabilities have more than one author listed, and those have only two authors. This tells us something about the degree of overlap between different auditors, but I'm not sure what because I don't know if the students worked completely independently. I've fired off mail to DJB but don't have a response yet.
  4. 45 of the vulnerabilities were remotely exploitable in some way. I haven't examined them too carefully, but it looks like a large number of them would let you more or less take over the victim's account. My impression is that a lot of these were classic memory handling errors. Note: I didn't really examine the bug reports. To a first order I just used the remote/local categorization in DJB's reports.

Points (2) and (3) are especially interesting for what they suggest about the population of vulnerabilities. It's pretty common for security types (myself included) to assume that software is so bug-riddled that any idiot can find an arbitrary number of vulnerabilities. Obviously, this was quite doable for some people, but others clearly found it very challenging. This project was 60% of the grade in the class, so they clearly had substantial incentive to find them.

On the other hand, the overlap between the vulnerabilities people found (even if we assume they worked totally independently) was quite small. Less than 15% of the vulnerabilities have more than one person listed. A small overlap is not what we'd expect if the reason it was hard to find vulnerabilities was that the total population was very small.1 The fact that some students were so successful suggests that perhaps the limiting factor in finding vulnerabilities isn't that there is a limited number but rather that they are hard to find and some people are just better than others. It would be interesting to know what the two people who found 10 vulnerabilities did differently from everyone else.

1. One caveat here is that I don't know how things were run. If, for instance, the bugs were posted somewhere as soon as they were found and you didn't get credit for finding a duplicate, then you would obviously see very little overlap. However, the fact that DJB obviously submitted the bugs all at the same time suggests otherwise.
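Going back to the overlap point: one way to make it quantitative is the capture-recapture technique ecologists use to estimate animal populations. Treat two independent auditors as two samples; the size of the overlap estimates the total population. A toy sketch, with invented numbers rather than anything from DJB's data:

#include <stdio.h>

/* Lincoln-Petersen estimator: auditor 1 finds n1 bugs, auditor 2 finds
 * n2, and m of them are the same bug. Estimated total population:
 * N = n1 * n2 / m. A small m relative to n1 and n2 implies a large
 * population of unfound bugs. */
static double estimate_population(double n1, double n2, double m)
{
    return n1 * n2 / m;
}

int main(void)
{
    /* Illustrative numbers only--not taken from the class's reports. */
    printf("overlap 1: estimate %.0f bugs\n", estimate_population(10, 10, 1)); /* 100 */
    printf("overlap 8: estimate %.0f bugs\n", estimate_population(10, 10, 8)); /* ~13 */
    return 0;
}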

 

December 15, 2004

If you use your TiVo a lot, you may have noticed that the clock doesn't always line up exactly with the schedule that the broadcasters use. If they're off by a minute or two, you tend to lose a minute or two at the front or the end of the program. TiVo has a sort of sloppy mechanism for dealing with this: you can tell it to record a few extra minutes. Unfortunately, it does this globally for that season pass, which interferes when there is another program that you want to record afterward--and it's particularly annoying when the next program is on the same channel. You end up having to watch the next episode of Law & Order to get the last 2 minutes of the previous episode.

Without loss of generality, let's consider the situation at the end of a program. There are basically three situations:

  1. The TiVo isn't recording anything afterward.
  2. The TiVo is recording something on the same channel afterward.
  3. The TiVo is recording something on a different channel afterward.

Now, the first case is easy: have the TiVo automatically record a few minutes extra at the end of the program. Then, if the clock is a bit off, the rest of the program is just there. This chews up a little bit of disk space, but a 5 minute buffer would consume less than 10% on hour-long programs, which seems like a small price to pay. And of course you could make it an option for people who are really space conscious.

The second case is only slightly more difficult. The last 5 minutes of Law & Order episode 973 (from 10:00 to 10:05) is being recorded as the first 5 minutes of episode 974. Now, if we were recording onto tape or something, this would be difficult, but the TiVo is basically software. There's no reason that it can't treat that 5 minute segment as both the last 5 minutes of one recording and the first 5 minutes of the next, thus giving you the same effect as in case (1).
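Here's roughly what I have in mind, sketched as code. Needless to say, I have no idea how TiVo's software is actually structured; the representation is entirely hypothetical:

#include <stdio.h>

/* A recording is just a window onto per-channel storage; two recordings
 * on the same channel can reference the same span without copying it. */
struct recording {
    const char *title;
    int channel;
    long start;     /* seconds on a channel-wide timeline */
    long end;
};

int main(void)
{
    /* Episode 973 runs 5 minutes past its nominal 10:00 end; episode 974
     * starts at 10:00. The 10:00-10:05 span is stored exactly once. */
    struct recording ep973 = { "Law & Order 973", 4, 32400, 36300 };
    struct recording ep974 = { "Law & Order 974", 4, 36000, 39900 };

    long shared = ep973.end - ep974.start;   /* 300 seconds of overlap */
    printf("%ld seconds shared between recordings, stored once\n", shared);
    return 0;
}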

In the third case, of course, there's nothing you can do without having a second tuner and you just have to live with losing the few minutes that run over, but this case only occurs a relatively small fraction of the time.

Anyone know why TiVo doesn't do this? If my experience is typical, a lot of people are losing the beginnings and ends of programs, and they don't have to be. It could just be the additional implementation cost, but it seems like it ought to be pretty easy to code up. Or is this feature hiding somewhere and I don't know how to activate it?

 
I was at the CommerceNet 10th anniversary party last night and they had one of the coolest party gizmos I've ever seen: a Chocolate Fountain. Basically, it creates a waterfall of molten chocolate which you can dip stuff (marshmallows, fruit, etc.) in.

You can also check out the video.
 
Here's an excerpt from the privacy policy I just received from one of my health care providers:
We are required by law to maintain the privacy of your medical information and to provide you with notice of our legal duties and privacy practices. We are required to abide by the terms of the Notice of Privacy Practices currently in effect. We reserve the right to change those terms and any changes made will be effective for all medical information we maintain. A copy of a revised notice will be available from our web site at [deleted], at any of our imaging centers, or from our Privacy Coordinator by calling [deleted] or by writing to [deleted]. You may also address questions regarding our privacy practices, your privacy rights, or requests for additional information regarding your privacy to this person. (emphasis mine).

So, they are required to abide by the terms, but they can change the terms at any time without notifying me, and they will then comply with the new terms? What's the point of them notifying me of their current terms, then? (Yes, I know it's legally mandated, but what's the point of the mandate other than to create new paperwork?).

 

December 13, 2004

The current issue of the Economist has an article on hybrid cars that points out an interesting small optimization:
The next step may be the plug-in hybrid, which is not the backwards step its name suggests. Unlike the electric cars of the 1990s, none of today's hybrids needs to be plugged in but if plugging were an option it would be a good idea. Andrew Frank and his team at the University of California Davis' Hybrid Electric Vehicle Centre are working exclusively on plug-in hybrids, which can operate as pure-electric vehicles over short distances (up to 60 miles, with a large enough battery pack) but can switch to a hybrid system when needed. Since the average American driver travels about 30 miles a day, plug-in hybrids could be recharged overnight, when electricity is cheaper to produce, and need never use petrol at all, except on longer trips.

Nice hack, but precisely because the cars will work fine if you don't charge them, I wonder how much incentive people will have to incur the (marginal) inconvenience of plugging them in at night.

UPDATE: Paul Hoffman asks in the comments section whether it's desirable to power one's car this way. Here's Phil Karn's analysis of his EV1 versus a conventional car. Karn estimated that the emissions from the EV1 were 3% of those from a conventional car. Now, a real hybrid in electric-only mode is more efficient than an EV1 and has an overall efficiency about 3x the 17mpg that Phil assumed for gasoline mode, but it seems at least plausible that one could obtain a substantial emissions reduction, particularly if you use non-emitting electrical generation methods such as nuclear or wind.

 
In the comments on my post about redirects and Bloglines, Joe asks
Could you give more details about the redirect... is this simply a meta-redirect, an .htaccess redirect or server-side?

All I did was put this in /movabletype/.htaccess:

Redirect /movabletype/index.rdf http://www.educatedguesswork.org/index.rdf
This produces the following effect when you fetch /movabletype/index.rdf:
HTTP/1.1 302 Found
Date: Mon, 13 Dec 2004 13:29:14 GMT
Server: Apache/1.3.26 (Unix) mod_macro/1.1.1
Location: http://www.educatedguesswork.org/index.rdf
Transfer-Encoding: chunked
Content-Type: text/html; charset=iso-8859-1

14b
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>302 Found</TITLE>
</HEAD><BODY>
<H1>Found</H1>
The document has moved <A HREF="http://www.educatedguesswork.org/index.rdf">here</A>.<P>
<HR>
<ADDRESS>Apache/1.3.26 Server at <A HREF="mailto:webmaster@rtfm.com">www.rtfm.com</A> Port 80</ADDRESS>
</BODY></HTML>
Simple, eh?
 
One of the ways in which TLS breaks when run over an unreliable data network is that there are dependencies between the data records. In particular, decrypting record X+1 depends upon knowing information about record X. When you're one of the communicating peers, this isn't generally a problem because TCP guarantees the reliability of the transport, but in passive sniffing applications, it's reasonably common to lose packets. It turns out, however, that with some thought you can actually reconstruct most of the remaining data stream.

Note: the following discussion assumes you have the traffic keys.

Block Ciphers
It's easiest to recover if a block cipher is being used. Imagine that we see a run of records, then lose a packet, producing a gap of Length bytes. Call the first record after the gap Rx, the next one Rx+1, etc. SSL uses the last cipher block of record Rx-1 as the CBC IV for record Rx. Thus, if record Rx-1 is lost, we are not able to decrypt the first cipher block of record Rx. However, the rest of the record can be easily decrypted. Thus the records we didn't see are unavailable, but we can mostly access the records we managed to capture.

Obviously, because we don't have the first plaintext block of Rx, we can't verify the MAC. However, we can verify record Rx+1, because we do have the IV (the CBC residue from Rx). Unfortunately, because we don't know how many records have been lost, we don't know what the sequence number for Rx+1 is, and it's required to verify the MAC.

We can brute-force resynchronize by iterating over all possible sequence numbers until we find one where the MAC matches, at which point we must be synchronized. The question now becomes whether resynchronization can be done efficiently. The SSL sequence number space is 64 bits. Clearly, then, searching the entire space is prohibitive. Intuitively, there cannot have been more records than the number of lost bytes, so the number of sequence numbers we need search cannot be greater than the length of the missing region.

In practice, we need search far fewer than that. The maximum sequence number can be found by asking what the maximum number of records could have been contained in the missing TCP segments. Assuming that records are non-empty, the minimum plaintext size is 1 + M where M is the size of the MAC. Because M is either 16 (for MD5) or 20 (for SHA-1) and we have to pad to a block boundary, the ciphertext will be 24 bytes for DES and 3DES and 32 bytes for AES. With the 5-byte record header, the minimum record size MRS is 29 bytes (37 bytes with AES).

If we assume that the gap is smaller than the maximum SSL record, then the minimum number of missing records is 1. Thus, if we have a gap of Length bytes, this could contain anywhere from 1 to Length/29 records. If the last record before the gap has sequence number N, then Rx+1 has a sequence number in the range [N + 3,N + 2 + (Length/MRS)]. (The first missing record has sequence number N+1 and Rx has sequence number N+2). If we've lost a single Ethernet frame, this means checking no more than 50 sequence numbers for DES/3DES and 40 for AES.

In practice it's rarely necessary to try more than a few sequence numbers. Most SSL implementations use relatively consistent large record sizes so the loss of a single frame probably only means losing a single record.
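Here's a sketch of that search loop. The record structure and verify_mac() are hypothetical stand-ins for a real SSL decoder:

struct ssl_record;   /* opaque stand-in for a parsed SSL record */

#define MRS 29  /* minimum record size for DES/3DES; use 37 for AES */

/* Try each plausible sequence number for the first verifiable record
 * after the gap (Rx+1). Returns the matching sequence number or -1. */
long resync(struct ssl_record *rx1, long n, long gap_len,
            int (*verify_mac)(struct ssl_record *, long seq))
{
    long max_missing = gap_len / MRS;        /* at most this many records lost */
    for (long seq = n + 3; seq <= n + 2 + max_missing; seq++)
        if (verify_mac(rx1, seq))
            return seq;                      /* resynchronized */
    return -1;                               /* no candidate matched */
}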

In many cases, the data being transmitted will be highly structured, in which case we will have a good chance of guessing a single cipher block. If we can guess the plaintext of the leading block of the first record after the gap, we can verify it by checking to see if the MACs match. Note that it's much more efficient to resynchronize on the next record and then go back and try to verify our incomplete record: if there are n possible sequence numbers and m possible first blocks, this strategy requires m+n operations rather than the m*n operations required to resynchronize and guess plaintext blocks at the same time.1

Stream Ciphers
If the traffic is encrypted using a stream cipher, the problem becomes figuring out exactly what section of keystream to use. We know from the TCP sequence numbers how much data we have lost but because SSL record sizes are variable, we don't know exactly how many bytes of keystream have been used (recall that the record headers are not encrypted). However, we can use a technique similar to the one we used for block ciphers to recover the appropriate keystream offset.

Naively, we could simply try each potential stream cipher offset/sequence number combination until Rx verifies correctly, but this could be very expensive. Instead, we can take advantage of the fact that the keystream offset can be predicted if you know how many records you've missed. The logic is as follows: We know that Length bytes of TCP data are missing. Some of that data is un-encrypted headers. The rest was encrypted. The amount of keystream used is the size of the data which was encrypted, which is equal to Length minus the un-encrypted headers. Because SSL record headers are 5 bytes long, we get:

Offset = Length - (Records * 5)

Thus, we simply iterate over the possible values for Records until a MAC matches. Because each value of Records corresponds to one sequence number for our target record, we don't need to iterate over sequence numbers as well.

As before, we can determine the upper and lower bounds for the number of records lost. Because the records are unpadded, the minimum SSL record size when a stream cipher is used (again, assuming a 1-byte plaintext) is given by MRS = 5 + 1 + M. Thus, there may have been anywhere from 1 to Length / MRS records lost. For example, if we've lost 1460 bytes, there could be anywhere from 1 to 66 records (if the MAC is MD5) or 1 to 56 records (if the MAC is SHA-1).
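A sketch of that iteration, with the same caveat that try_verify() stands in for a real decrypt-and-check-MAC routine:

struct ssl_record;   /* opaque stand-in, as before */

#define HDR 5                /* SSL record header length (unencrypted) */
#define MIN_REC (5 + 1 + 16) /* minimum record size with an MD5 MAC; 26 with SHA-1 */

/* Guess how many records fell in the gap; each guess fixes both the
 * keystream offset and the sequence number of the first record after
 * the gap. Returns the number of lost records, or -1. */
long resync_stream(struct ssl_record *rx, long n, long gap_len,
                   int (*try_verify)(struct ssl_record *, long offset, long seq))
{
    for (long records = 1; records <= gap_len / MIN_REC; records++) {
        long offset = gap_len - records * HDR;   /* keystream consumed in the gap */
        if (try_verify(rx, offset, n + records + 1))
            return records;
    }
    return -1;
}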

Partial Record Loss
Obviously, it's possible to lose only part of a record. With a block cipher, we can recover exactly as described above, except that we cannot verify the MAC at all on the partial record. With a stream cipher, we perform the recovery procedure described above on the next whole record and then work backwards to figure out what the keystream for the partial record must be.

Handshake Message Loss
In general, losing handshake messages is bad, but it's possible to recover from the loss of the Finished messages, simply by treating them as data messages and proceeding as described above. If the hello messages or the ClientKeyExchange are lost, however, we will be unable to generate the keying material required to decrypt the connection.

1. Note that TLS 1.1 will have an explicit IV so as long as you have a complete record you will be able to decrypt it without the previous record.

This material was adapted from Secure Auditing for SSL Transactions by myself and Kevin Dick.

 

December 10, 2004

Well, this is timely. John Perry Barlow just posted a long story about being arrested for controlled substances discovered in an airline security search. Apparently he's mounting a constitutional challenge to the use of the results of the search as evidence.
 
I just read an Economist article about Corestreet's interesting electronic lock technology. A basic problem with electronic locks is that you need to periodically update them when new people are added to the system--or more importantly when they are removed. This requires them to be connected somehow to a central server to get updates. Corestreet's solution is to use people's access cards to carry the updates around. Basically, you wire a small fraction of high traffic locks to the central server. Whenever someone uses one of those locks the lock writes the update to their key card. Then whenever they use a disconnected lock the lock reads the update off the card.
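Mechanically, I imagine it works something like the following sketch. This is my guess at the scheme, not Corestreet's actual protocol; in a real system the update would of course be signed by the server:

/* Piggybacking lock updates on access cards. All structures hypothetical. */
struct update {
    long serial;            /* monotonically increasing version of the access list */
    /* ... signed list of valid/revoked cards would go here ... */
};

struct card {
    long user_id;
    struct update carried;  /* freshest update this card has seen */
};

struct lock {
    int online;             /* wired to the central server? */
    struct update current;  /* freshest update this lock has seen */
};

void touch(struct lock *l, struct card *c)
{
    if (l->online) {
        /* A connected lock is always current: stamp the card. */
        c->carried = l->current;
    } else if (c->carried.serial > l->current.serial) {
        /* A disconnected lock picks up newer data carried in on the card. */
        l->current = c->carried;
    }
    /* ...then check c->user_id against l->current and open or stay shut. */
}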

This is clever but I'm not sure how well it will ultimately work. One obvious problem is that it requires that every lock gets accessed by someone who also accesses a high traffic lock. I'm no expert on lock patterns, but I wouldn't be surprised if there are locks that only get accessed very infrequently by a small number of people. That's not a big problem when new users are added, but it could clearly be for revocation. Second, it seems like there's an easier solution: fit the locks with wireless and then let the locks update each other (you can ignore the security issues with both the wireless and card solutions, since there are known ways to address both). I'm skeptical that there are a lot of environments where wireless won't work but this solution will...

None of this makes it a bad idea, of course. Sometimes these clever crypto tricks pay off and even if it's not appropriate in this setting, there's probably some situation where it will be.

 

December 9, 2004

I recently had occasion to read the Marijuana Policy Project's amicus brief in Ashcroft v. Raich. If you ever wondered why the research on the medicinal uses (if any) of marijuana is so spotty, this brief makes it clear: the feds have made it incredibly difficult to do any research on the topic by denying access to the only legal source of cannabis. It's easier to do research on MDMA than it is on cannabis.
 

December 8, 2004

Now, here's something interesting. Terrapass lets you invest in greenhouse gas reduction by taking your money and subsidizing energy projects that reduce greenhouse emissions. They claim that for about $50 you can offset the carbon emissions of your average car.
 

December 7, 2004

Alex Tabarrok writes about "me-too" drugs and specifically cites Nexium:
AstraZeneca introduced Nexium just as their drug Prilosec was going off patent. Nexium is very similar to Prilosec and for almost all patients it offers few additional benefits - it is widely cited as a me-too drug. The Nexium problem, however, is quite different from the gold-mine problem. The Nexium problem is, Why do people buy the expensive brand when the cheap generic would be just as good? The Nexium problem is that customers think, or act as if they think, that Nexium is not a me-too drug.

The situation with Nexium is actually quite interesting. The basic chemical in both Nexium and Prilosec, omeprazole, comes in two stereoisomers. Prilosec came first and is an equal (racemic) mixture of the R- and S- isomers. Both isomers are metabolized to the same non-chiral drug. However, it turns out that the S-form is metabolized 70-90% more effectively than the R-form. Nexium (esomeprazole) consists solely of the S-form. As far as I know, there's no evidence that esomeprazole is any more effective than omeprazole in equivalent serum concentrations.

While it's certainly fair to call Nexium a me-too drug, that doesn't mean that people are buying it because they don't know that. It's true that there are generic versions of Prilosec (prescription-only), but the more important factor is that Prilosec is now available over the counter, which means that your insurance may not cover it. Nexium is only available by prescription, so it's still covered. This seems less like an issue of customer ignorance than of customer rational choice.

UPDATE: A similar situation obtains with Claritin and Clarinex. Claritin (loratadine) turns into Clarinex (desloratadine) in the body. Generic OTC loratadine is readily available. Clarinex is still under patent and prescription only.

 

December 6, 2004

While we're on the topic of consumer electronic device annoyances, has anyone noticed that their iPod has a really ferocious self-discharge rate? It seems like if I just leave it sitting for 2-3 days, when I turn it back on the battery is totally drained. Mega-annoying, since it means that I need to bring the charger with me even for short trips where I don't use the iPod very much.
 
The other day I had occasion to use Lisa's Minolta Dimage Xi to photograph a moving target (my cat) and I noticed something really annoying: the time between my pressing the button and the thing actually snapping a picture could be measured in seconds. It's bad enough that the time it takes to store a photo is like 5 seconds. I chalk that down to slow flash cards. But why is the shutter delay so long? Is it just the Minolta or are other digital cameras like this? My old Nikon SLR certainly wasn't...
 

December 5, 2004

From ABC News:
French police on Sunday issued the description of a hold-all which left Paris' main airport containing a slab of plastic explosive put there in a bungled security training exercise.

"It is a small blue case between 50 and 60 centimetres long. It is quite possible that the person who owns it has still not found the explosives," said a spokesman for the police at Charles-de-Gaulle airport.

In a routine test on Friday evening two dog-handlers placed the bar of explosives inside a random bag as it passed on a conveyor belt between check-in and the aircraft loading bays.

One dog successfully detected the item, but the other did not.

Before it had the chance to take another sniff, the bag had been whisked off towards its destination.

"The explosives are totally harmless. They cannot react to shock or fire, and there is no detonator," assured an official.

Police have no idea which of about 80 possible flights the bag was placed on, but they have informed all the relevant airlines.

So far there is no news.

An investigation is underway and the two men may face disciplinary procedures, a spokesman said.

Outstanding!

 
I see from Alan Hawrylyshen that iTunes Music Store Canada is open. Also, the price of songs is $0.99 CDN, or about $0.85 USD, as opposed to $0.99 USD. If Apple maintains their standard practice, they'll try to block Americans from getting the $0.14 discount by buying from iTMS Canada. Any Canucks want to set up a service to buy songs for $0.99 CDN and sell them to Americans for $0.92 USD?
 
This week's WaPo "Help File" column contains a question about copying your songs off the iPod, which of course Apple won't let you do:
You can't copy those music files to your desktop or laptop using Apple's iTunes software because that program blocks iPod-to-computer song transfers. Apple sees that as an invitation to widespread copying.

But when it's your own music at stake, you shouldn't feel guilty about using third-party software. A wide variety of programs can tackle this job, but if you're using Windows, try CopyPod (www.copypod.net), which is free for 14 days and then costs $9.50. On Mac OS X, iPodRip (www.thelittleappfactory.com), $10 shareware with 10 free sessions, has many fans, but I was impressed by a free, open-source download called Senuti (wbyoung.ambitiouslemon.com). All three programs worked well, even with songs purchased from Apple's iTunes Music Store. But make sure you enable "disk use" for your iPod, which allows it to serve as an external hard drive, before using any of them.

To me, the existence of so many widely used iPod file-transfer tools is proof that Apple was wrong to thwart iPod-to-computer copying.

What's going on here, as I understand it, is that the files are all just sitting on the iPod but in a scrambled directory structure so that you can't just access them directly without special tooling. Now, I suppose I can understand why Apple won't let you copy music you bought from iTunes Music Store. But what we're talking about here is that Apple won't even let you copy the CDs you ripped yourself--the ones that were at least at one point most likely sitting on your hard drive.

I guess the threat model here is that I'm going to walk my iPod over to my friend's house and let him copy the music off it. No doubt that's a possibility but it's worth pointing out that since the iPod can also be used as a hard drive, you can simply make a duplicate copy of all your music in the partition you control. It's true that that wastes space, but given the amount of wasted storage in most people's iPods, I'm not sure how big an imposition that actually is. So, now all you've managed to do is deter (very) casual copying, at the price of inconveniencing every single iPod user. Outstanding!

 

December 4, 2004

Most of the sports I've been involved in have their benchmarks, the minimum performance you need in order to be credible. Typically, these benchmarks are somewhat arbitrary, chosen mostly for being round numbers. Here are the ones I know about:
Sport          Benchmark
Running        40 minute 10K
Cycling        1 hour 40K
Weightlifting  Bench your own weight

Do readers know the benchmarks for other sports?

 
Ok, so I realize that Bloglines is a free service, but that's not going to stop me from complaining. The new EG only has about 25% as many Bloglines readers as the old EG. I've asked Bloglines to adjust the feed but their response was less than helpful:
Date: Mon, 22 Nov 2004 12:58:40 -0800
From: "Bloglines Customer Support" 
To: ekr@rtfm.com
Subject: Re: [#11626] Web Form: [Other]
User-Agent: Neotonic Trakken/2.13.2
X-CRM114-Status: Good  ( pR: 4.0723 )

Hi there

The Bloglines directory lists blogs that our users have subscribed to. So
to add your blog to the directory, all you have to do is become a
Bloglines user and subscribe to your blog. The directory is regenerated
nightly, so once your blog is added, it should appear within 24 hours.

--
Kate  
kate@bloglines.com
Bloglines
http://www.bloglines.com

Tell a Friend about Bloglines! http://www.bloglines.com/sendsubs

Original Message Follows:
------------------------
From: ekr@rtfm.com
Subject: Web Form: [Other]
Date: 20 Nov 2004 05:49:41 -0000

I've moved my site, Educated Guesswork (formerly
http://www.rtfm.com/movabletype, now http://www.educatedguesswork.org/).
Is there some way for you to edit the feed to point to the new location at
http://www.educatedguesswork.org/index.rdf?

Thanks,
-Ekr

However, it turns out that there's an easy (though slightly gross) fix: Bloglines honors HTTP redirects, so I just set the old site's RSS feed to redirect to the new one. So, to my old readers--welcome back.
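For anyone else moving a feed, the redirect itself is a one-liner. Assuming the old server runs Apache (adjust for your own setup), something like this in the old site's configuration does the job, using the paths from my message above:

    # Permanently redirect the old RSS feed to the new location.
    Redirect permanent /movabletype/index.rdf http://www.educatedguesswork.org/index.rdf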

 
Lisa and I picked up her new Toyota Prius on Wednesday and I spent a while driving it around today. It seems perfectly nice, though (unsurprisingly) without quite the pick-up of my Audi S4. The most noticeable thing about the Prius is that it's really aggressively technical. The standard Toyota cars (Camry, I'm looking at you) just get the job done quietly and competently. By contrast, all the elements of the Prius's design, from the teardrop-shaped body to the digital displays, scream "look how high-tech this is".

The coolest part of the car, actually, is the keyless entry and ignition system. You don't actually need to use the remote for anything: you just keep it in your pocket and the car doors automatically unlock as you approach. The ignition is pushbutton; as long as you're in the car with the remote, the car will start. Turning the car off is pushbutton as well. It's actually a little unnerving, combined with the fact that the gasoline engine automatically shuts off at idle anyway--it can be a bit hard to tell whether the car is off at all. The least cool part is the rear windshield-wiper support bar on the hatchback. It sits about 20% of the way up the window and has a tendency to block your rear view--at least it blocks mine.

The car drives fine. It's nothing special, but the acceleration is smooth and consistent and the handling is solid, just like you'd expect from Toyota. To my mind, the hybrid drivetrain is actually the most impressive piece of technology: without the digital display, it's basically impossible to tell whether it's the gasoline engine or the electric motor (or both) providing the power. The transition is beautifully smooth.

 

December 3, 2004

Amit Yoran says that we need to have better tools for finding bugs in code:
About 95% of software bugs come from 19 "common, well-understood" programming mistakes, Yoran said, and his division pushed for automation tools that comb software code for those mistakes.

"Today's developers ... often times don't have the academic discipline of software engineering and software development and training around what characteristics would create flaws in the program or lead to bugs," Yoran said.

Government research into some such tools is in its infancy, however, he added. "This cycle will take years if not decades to complete," he said. "We're realistically a decade or longer away from the fruits of these efforts in software assurance."

There are already a number of such tools available, including MOPS, Splint, and SWAT. These tools aren't perfect and it certainly would be nice to have better tooling, but it's worth noting that a lot of the bugs they find are the kind of thing that could be entirely eliminated if people would just program in safer languages. For instance, the buffer overflow vulnerabilities which have been so troublesome to eliminate in C/C++ code are basically a non-problem with Java.
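To make that concrete, here's my own sketch (not from the article) of the kind of bug at issue. In Java, the equivalent out-of-bounds write throws an exception at the offending index; in C, it silently corrupts memory:

    #include <string.h>

    /* Classic C buffer overflow: strcpy() does no bounds checking, so
     * any name longer than 15 characters writes past the end of buf and
     * tramples whatever is adjacent on the stack. This is exactly the
     * class of bug tools like Splint try to flag. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);          /* overflows if strlen(name) >= 16 */
        /* ... use buf ... */
    }

    /* The fix static analyzers push you toward: bound the copy and
     * terminate explicitly, since strncpy() won't always do it for you. */
    void greet_safely(const char *name) {
        char buf[16];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }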

 
The last time I flew, I was selected for secondary security screening, and I happened to strike up a conversation with the guy tasked with frisking me. He mentioned that they find a substantial number of drug and currency smugglers. Apparently, when they find such miscreants, they turn them over to the police, who presumably arrest them.

Certainly, if the Feds set up checkpoints for the explicit purpose of searching people for drugs before they got on the plane, people would be likely to object--and of course it would be a clear Fourth Amendment violation. Is it any better if the search is an incidental result of some other valid purpose (assuming for the moment that security screening is a valid purpose)?

So, what's good about this policy? Well, you catch criminals. (If you think drugs should be legal, imagine that we're catching people smuggling pirated CDs or something.) Obviously this is a benefit. The cost is the increased sense of fear induced in people going through the screening process.1 Of course, you can argue that the only people who will be afraid are criminals, but then ask yourself this: imagine there were some form of covert electronic surveillance that could detect people carrying drugs. Would you be in favor of the police being able to drive by people's houses and scan them? If not, what's the difference that makes the behavior of the TSA screeners acceptable?

1. Of course, the smugglers aren't happy about being caught, but then they're criminals, so it's at least arguable that they don't have a right to complain. Note, though, that that's not an argument consistent with cost-benefit analysis.

 

December 2, 2004

The standard config file works fine with a VGA display (for which you need to use the DVI-VGA dongle). However, to make it work over a DVI connection to your display, you need to add:
    Option	"DigitalScreen1" "on"
To the "Device" section of your XF86Config file.

See here for notes on getting it to work on FreeBSD with a VGA display.

 
TLS is the standard approach for securing your garden-variety TCP-based protocol. Unfortunately, because TLS assumes that the transport is reliable, you can't use it to secure protocols that run over datagram transports such as UDP and DCCP. This presents a problem because a number of important protocols, such as SNMP and SIP, are often run over UDP.1 Nagendra Modadugu, Dan Boneh, and I have been working to develop Datagram TLS (paper, Internet-Draft), which is a version of TLS that works over datagram transport.

Nagendra has developed a DTLS implementation that runs within OpenSSL and yesterday he committed DTLS support to the popular reSIProcate open source SIP stack. (Commits 3680, 3681, 3684). Nice work!
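For the curious, here's a rough sketch of what the client side of a DTLS connection looks like with an OpenSSL-style API (treat the method and function names as assumptions about the DTLS-enabled OpenSSL tree; error handling and certificate setup are omitted). The point to notice is that the structure is identical to a TLS client--only the method and the datagram BIO change:

    #include <openssl/ssl.h>

    /* Minimal DTLS client sketch. fd must be a UDP socket that has
     * already been connect()ed to the server's address. */
    int dtls_hello(int fd) {
        SSL_CTX *ctx;
        SSL *ssl;
        BIO *bio;

        SSL_library_init();                        /* one-time library setup */
        ctx = SSL_CTX_new(DTLSv1_client_method()); /* DTLS, not TLS, method */
        ssl = SSL_new(ctx);
        bio = BIO_new_dgram(fd, BIO_NOCLOSE);      /* datagram-aware BIO */
        SSL_set_bio(ssl, bio, bio);

        if (SSL_connect(ssl) != 1)                 /* handshake, over UDP */
            return -1;
        return SSL_write(ssl, "hello", 5);         /* one record per datagram */
    }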

1. These protocols can also be run over TCP, but in many situations UDP is more convenient.

 

December 1, 2004

Some more thoughts on the shrimp tariffs.

In the standard free trade models, if country X can produce a product more cheaply than country Y, then it's efficient for country Y to import rather than produce locally, and tariffs in this environment are bad. Now, in this particular case the claim is that the Chinese and Vietnamese are selling shrimp below natural cost, presumably with the intention of raising prices once the US shrimpers have been driven out of business. This tactic works best when there is a large barrier to entry for the industry in question: if it's really expensive to build your production facilities, then a country (or manufacturer) can (at least potentially) drive its competition out of business. In this kind of scenario, it might make sense for the US to temporarily subsidize US manufacturers to prevent this kind of attack.

That said, I'm skeptical that shrimp fishing is this kind of situation. Are shrimp boats so expensive that startup costs are prohibitive? Are they so specialized that you can't just buy another boat and convert it? Indeed, there's some argument that the barriers to entry are actually negative: in a lot of parts of the US, overfishing has substantially reduced the return from fishing. If shrimping is the same way, then having the US industry temporarily shut down would actually make future entry easier.