EKR: April 2008 Archives


April 30, 2008

Nick Weaver pointed me to this article in the NYT about the heparin incident. The FDA seems to be heading towards a theory that the contamination was deliberate (as I observed in my original post on this, though I'm not claiming that the idea is original to me). Here's the key bit:
"F.D.A.'s working hypothesis is that this was intentional contamination, but this is not yet proven," Dr. Janet Woodcock, director of the Food and Drug Administration's drug center, told the House Subcommittee on Oversight and Investigations in written testimony given Tuesday.

"A third of the material in some batches of the blood thinner heparin were contaminants, "and it does strain one's credulity to suggest that might have been done accidentally," Dr. Woodcock said.


The F.D.A. has identified Changzhou SPL, a Chinese subsidiary of Scientific Protein Laboratories, as the source of the contaminated heparin. A Congressional investigator said the contaminant, oversulfated chondroitin sulfate, cost $9 a pound compared with $900 a pound for heparin.

Mr. Strunce said that his company tried to find the original source of the contamination but was stopped by the Chinese authorities.

Robert L. Parkinson, Baxter's chairman and chief executive, told the committee, "We're alarmed that one of our products was used in what appears to have been a deliberate scheme to adulterate a life-saving medication."

Chinese officials have disputed the F.D.A. contention that the contaminant caused death and injury, and they have insisted on the right to inspect American drug plants if the F.D.A. insists on inspecting Chinese ones.

Again, when you're close to some equilibrium of high compliance, inspections are an important part of maintaining that equilibrium. However, if you're far away from that equilibrium, i.e., you're dealing with people who regularly don't comply, you need an entirely different enforcement regime with much stricter checking. If there's no punishment for noncompliance (which sounds like the case here), then it's extremely difficult to make any regime work in the face of someone who's actively trying to cheat you, since there's little cost to them for trying. That said, one might think that if Americans have been poisoned and the Chinese government is stonewalling the investigation, this might be something the US government could push aggressively on.

Incidentally, I'm not sure there really is a symmetry between US inspections of Chinese plants and Chinese inspections of US plants. It's not crazy to want to inspect the plants that produce your imports (it's not scalable if everyone wants to do it, but this is presumably delegatable to some extent, as with Reg. Dept. Penna. Agr.), but that doesn't necessarily extend to a reciprocal right to inspect random plants in other countries unless you're doing a lot of importing from them. China represented only about $94 million in US pharmaceutical exports in 2004 [*]. And of course this becomes more important if there's evidence that whatever mechanisms are being employed in the other country aren't working. Have any Chinese been poisoned by defective American drugs?


April 29, 2008

I'm currently working my way through Lolita (Appel annotated edition) and finding the annotation a bit heavy. Here's a not-so-randomly chosen but not-totally-unrepresentative page from the endnotes:
158/6 Christopher Columbus' flagship: the zoo exists, in Evansville, Indiana. Its monkeys—kept out-of-doors on the ship from April to November—continue to be the zoo's most popular attraction.

158/7 Little Rock, near a school: rereading this passage in 1968, Nabokov called it "nicely prophetic" (the larger "row" over school desegregation, September 1957). For further "prophecy," see 226/3.

158/8 à propos de rien: French; not in relation to anything else; casually.

159/1 town... first name: "his" refers to Quilty, Clare, Michigan; an actual town.

159/2 species ... Homo pollex: H.H. combines the familiar Latin homo, "the genus of mammals consisting of mankind," with pollex, or "thumb."

159/3 viatic: H.H. sustains his "scientific" vocabulary; a coinage from the Latin root via. Viaticum is English—an allowance for travelling expenses—but H.H. has gone back to the Latin word viaticus, which specifically refers to the road.

159/4 priapically: from Priapus, the god of procreation.

159/6 man of my age...face à claques: Quilty, with "a face that deserves to be slapped; an ugly mischievous face." For an index to his appearances see 31/9.

159/6 concupiscence: lustfulness.

159/7 coulant un regard: French; casting a sly glance.

This is a bit less than one page of endnotes1 (I've omitted a note about Burma Shave2, and note that the reference XXX/Y means "note Y on page XXX"), so this represents about half the notes covering two pages of the text, since page 158 has 5 more notes which I haven't transcribed. You can of course ignore all these notes and just read the text, but if you're interested in a careful reading, you may well want to read them, with the concomitant risk of Wallacitis3. The problem here is that while these notes are all indicated in the text in the same way (with numbers in the margin), they're actually of quite different types:

  • 158/6 and 158/7 are sort of irrelevant asides that don't add much to the text.
  • 158/8 and 159/7 are translations from French.4
  • 159/1 and 159/6 indicate references to Clare Quilty of which there are a huge number.
  • 159/2 and 159/3 are translations from Latin.
  • 159/4 and 159/6 are simply explanations of English words you might have found difficult.

So, we have at least three categories: (1) translations of language you might find difficult (2) explanations of subtle allusions in the text [Quilty] and (3) more or less irrelevant asides that you might be interested in. If, for instance, you knew that reference 158/8 was just a translation from French, and you already knew what à propos de rien meant, you wouldn't need to go look it up in the endnotes at all, but as it is your reading flow is totally broken up while you flip to the back of the book.

The natural fix here is to have multiple types of annotation in the main text so you can tell at a glance what you're working with. Foster Wallace5 attacks this problem by using the notation IYI to indicate that a note is parenthetical, but this is not wholly satisfactory because the notation appears in the note and so your flow is already broken (though the fact that Wallace uses footnotes as opposed to endnotes does help). Given the exemplars above, we might do something like:

  • Translations/definitions: no notation but they're explained in notes if you flip to the back.
  • Subtle allusions: numbers as superscripts on the main text.6
  • Irrelevant asides: numbers in the margin.

The point of all this is to let you ignore the notes that you want to.7 This isn't wholly satisfactory, since we either have to intermix the allusions and asides at the end of the book (though of course you should be using footnotes) or have two separate sets of notes, both of which are clumsy (even if you have the allusions as footnotes instead of endnotes). Another possibility with a high enough note density is to put them on the facing page, but this chews up a lot of real estate if the note density is sufficiently low or highly variable.8

This is of course one of the cases where technology could really help. If you had an e-book, you could stop worrying about how the note text (as opposed to the indicator in the main text) was rendered. And if notes simply popped up when you selected them instead of taking the full context switch of a new page, you could minimize the flow interruption. Also, you could presumably program the e-book to display only notes you were interested in,9 while eliding the ones you don't care about. Of course, this would require there to be enough customers for e-books to bother giving them a treatment more sophisticated than just re-rendering the manuscript as it was typeset on the paper.

1. For more on endnotes see Rescorla 07
2. Famous for its progressive roadside advertising signs (1925-1963).
3. After David Foster Wallace; observation due to Hovav Shacham.
4. 159/6 is also a translation, but the primary purpose of the note is to point us at Quilty.
5. Everything and More: A Compact History of Infinity.
6. Given the particular nature of many of these allusions, it might make sense to mark Quilty references with a symbol rather than a number.
7. But of course this creates a hierarchy that's fixed in the text. This is sort of inherent in the fact that things are printed on paper, unless you want to have them printed in color/somehow plane polarize and wear filters on your glasses or something.
8. None of this applies to a book like Pale Fire where the notes are part of the text; Shacham again.
9. Note that you could also use colors, but many e-paper displays, such as the Kindle, don't have color displays, and since such a small fraction of the text will be color, this would add significantly to the cost of goods.


April 28, 2008

In the April 22 PNAS, Coates and Herbert report on a study of the correlation between testosterone/cortisol levels and performance by traders:
Little is known about the role of the endocrine system in financial risk taking. Here, we report the findings of a study in which we sampled, under real working conditions, endogenous steroids from a group of male traders in the City of London. We found that a trader's morning testosterone level predicts his day's profitability. We also found that a trader's cortisol rises with both the variance of his trading results and the volatility of the market. Our results suggest that higher testosterone may contribute to economic return, whereas cortisol is increased by risk. Our results point to a further possibility: testosterone and cortisol are known to have cognitive and behavioral effects, so if the acutely elevated steroids we observed were to persist or increase as volatility rises, they may shift risk preferences and even affect a trader's ability to engage in rational choice.

I don't have access to the paper (it's behind the PNAS paywall), so I don't know if they address the obvious correlation/causation issues. If it's just the case that better results lead to increased testosterone levels, that's not very interesting.

What's more interesting is the suggestion that there's some set of cognitive enhancements that would make you a better trader. One interesting question is whether these traders are outperforming the market (contra the efficient market hypothesis) or just themselves. Even more interesting would be the (implied) claim that performance increases because of more risk-taking behavior. As I understand it, the general finding on conventional gambling is that it's not really to your benefit to become more aggressive and/or risk-taking.


April 22, 2008

Nalgene is announcing they are going to phase out their polycarbonate bottles:
ROCHESTER, N.Y. (April 18, 2008) - In response to consumer demand, Nalgene will phase out production of its Outdoor line of polycarbonate containers that include the chemical Bisphenol-A (BPA) over the next several months, it announced today. Nalgene's existing product mix, including the recently launched Everyday line, already features a number of containers made from materials that do not contain BPA.

"We have always been focused on responding to the needs and concerns of our customers," said Steven Silverman, general manager of the Nalgene business. "With 10 different product lines in several different materials, we have the largest bottle offering on the market today. By eliminating containers containing BPA from our consumer product mix, our customers can have confidence that their needs are being met."

The company recently unveiled its Everyday line, an assortment of bottles manufactured with Eastman's Tritan copolyester. The line includes favorites such as the OTG ("On the Go"), the iconic 32-ounce Wide Mouth and the Grip-N-Gulp sippy cup. Tritan is impact resistant, withstands a wide range of temperatures and does not contain BPA. The new Everyday products are already available in stores and will be available through www.nalgene-outdoor.com next month.

I guess once you have an alternative, it's pretty easy to get rid of the offending product. I wonder if Nalge will start lobbying for a ban on BPA now.


April 21, 2008

In any cost/benefit analysis of vulnerability policy, we have to factor in the impact of exploitation that results from fixing the vulnerability. In particular, if you provide a full description of the vulnerability at the same time as you patch it, then it's generally easy for an attacker to construct an exploit. Since patch distribution and installation can take between hours and weeks, this gives the attacker a significant window of opportunity to mount attacks before people patch their machines.

A natural response to this is to simply release patches but not descriptions of vulnerabilities, on the theory that the patches disclose less. It's obvious that this isn't true with open source systems, since it's trivial to examine a given change and determine what attack it's designed to stop, but there have also been reports that attackers reverse engineer binary patches (in some cases within hours) to construct exploits. In this year's USENIX, Brumley et al. take this to its logical conclusion and describe a technique for automatically generating exploits based on patches. This doesn't really change the situation much as far as I can tell; it was widely believed this was possible, and while this tool takes seconds to minutes instead of hours, it was never plausible that you'd get complete patch deployment inside of 12-24 hours anyway, so shaving a few hours off the attacker time may not make much of a difference.

The authors describe a number of techniques (obfuscation, encrypted patches, P2P patch distribution) one might imagine using to reduce the impact of fast attack generation, and conclude (correctly IMO) they're not that likely to work. As I understand it, the critical path item in patch installation for important systems isn't obtaining the patch but testing it on sacrificial systems to make sure it doesn't introduce instability, and that creates an inherent lag that probably can't be removed with a new distribution method.

Another alternative (though it goes against the trend in recent practice) is to be less aggressive about releasing patches for vulnerabilities that haven't already been disclosed. The faster that attackers can respond to new vulnerabilities by comparison to defenders, the more that fixes released in an orderly fashion look like zero-day vulnerabilities and so the less attractive it looks to fix vulns that aren't generally known (Res04 has some analysis of this issue.)


April 20, 2008

Voltage (full disclosure: I have a number of friends there and I'm on their TAB) have released a technology they call Format-Preserving Encryption (FPE). The basic technology here is fairly old and is described in a paper by Black and Rogaway, but as far as I know, this is the first attempt to try to put it together in a single commercial package. Below I attempt to describe some of the relevant technical issues, which are sort of interesting.

Why FPE?
The use case for FPE is simple: say you have a database that contains information with multiple levels of sensitivity. So, for instance, if you're Amazon you might have a customer database that any employee can access but you'd like the credit card numbers to be accessible only to employees that really need it.

The classic approach here would be to use database access controls. This works well as long as you trust the DB server, but if, for instance, you want to send a copy of the DB to someone else, then you may not be able to trust their server, so you need to redact the database, which can be a pain. Another problem here is that sometimes sensitive information like CCNs is used for customer identification, which means you can't just redact the CCN. Rather, you need to replace it with something that's unique but doesn't leak the CCN itself. And of course, if someone compromises your database server, then all bets are off.

The problem with simple encryption
The natural alternative is to use encryption. Encrypting the whole database doesn't help, because you want users to have access to most of the database, just not to the sensitive fields. So, what you need to do is encrypt just the sensitive fields. This turns out to be trickier than it looks.

For example let's say we want to encrypt the social security number 123 45 6789 using AES-ECB. So, we might do:

  • Encode into ASCII to give 31 32 33 34 35 36 37 38 39.
  • Pad with 00 to give 31 32 33 34 35 36 37 38 39 00 00 00 00 00 00 00.
  • Encrypt with AES to give 77 6e 2c a5 02 17 7a 5b 19 e4 28 65 26 f3 7e 14

This kind of sucks. Not only have we managed to start with a 9 digit string and end up with a 128-bit random-appearing value, but none of the bytes of the output are ASCII digits. So, if our database or database software is expecting values for this field that look like SSNs, we've just broken that invariant.

The source of the problem, of course, is that we're using a block cipher in ECB mode, and most block ciphers come in a small number of sizes (64, 128, and 256 bits are the standard ones). A block cipher just randomly maps the input space onto the output space, so ECB mode encryption effectively selects a random b-bit value (where b is the block size). The smaller the fraction of the possible values that are valid, the higher the probability that the output will be invalid. To take the specific case of SSNs, there are approximately 2^{30} valid values (if we think of the trailing zeros as not counting), so the chance of producing a valid value by random chance is vanishingly small (order 2^{-98}).
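As a quick sanity check on that estimate, here's a couple of lines of Python, assuming only the set size (~10^9 valid SSNs) and the 128-bit block size from the text:

```python
import math

valid = 10**9    # roughly 2^30 valid SSN encodings
total = 2**128   # possible values of one AES block

# Probability that a random 128-bit block happens to decode
# to a valid SSN.
p = valid / total
print(math.log2(p))  # about -98, i.e. on the order of 2^-98
```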

One thing you might think would make sense would be to use a different mode than ECB, say counter mode. The problem with counter mode in this case is that you need to use a different section of keystream (or a different key) to encrypt each value to avoid easy cryptanalytic attacks. So, you need some per-value distinguisher that gets carried along with the ciphertext, which expands the amount of storage you need for the encrypted values, even though the ciphertext itself stays small.

As noted above, our big problem is that our block size is too large. Even though SSNs are 9 digits long, the space of encodings is sparsely packed (letters, for instance, aren't allowed), so there are only approximately 2^{30} valid SSNs, as long as we use a better mapping than a straight 1-1 digit-to-byte correspondence. For instance, think of the 9 digit SSN as a value from 1 to 999,999,999 (not all 9-digit numbers are valid SSNs, but for simplicity, let's pretend they are). We can represent that in binary as a 30 bit quantity. If we had a 32 bit block cipher, we could encrypt this value with less than 10% expansion, which might be OK under some circumstances (we'll describe how to do better below).

Ordinary block ciphers have blocks much larger than this, of course, but it turns out that there's a generic technique, called Luby-Rackoff (L-R), for building block ciphers of arbitrary size (actually, of any even number of bits, since the block is split into two halves). The nice thing about L-R is that it's a general construction based on a pseudorandom function (PRF), which we know how to build with standard cryptographic techniques.
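To make this concrete, here's a minimal sketch of a Feistel/Luby-Rackoff-style 30-bit block cipher in Python, using HMAC-SHA256 as the PRF. This is my own illustrative toy, not Voltage's product or the exact Black-Rogaway construction; the round count, key handling, and encoding are simplifying assumptions:

```python
import hmac
import hashlib

HALF_BITS = 15                 # two 15-bit halves make a 30-bit block
MASK = (1 << HALF_BITS) - 1

def prf(key: bytes, round_no: int, x: int) -> int:
    """Round function: a PRF built from HMAC-SHA256, truncated to 15 bits."""
    msg = round_no.to_bytes(1, "big") + x.to_bytes(2, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big") & MASK

def encrypt(key: bytes, block: int, rounds: int = 4) -> int:
    """Feistel encryption of a 30-bit block."""
    left, right = block >> HALF_BITS, block & MASK
    for r in range(rounds):
        left, right = right, left ^ prf(key, r, right)
    return (left << HALF_BITS) | right

def decrypt(key: bytes, block: int, rounds: int = 4) -> int:
    """Run the rounds backwards to invert encrypt()."""
    left, right = block >> HALF_BITS, block & MASK
    for r in reversed(range(rounds)):
        left, right = right ^ prf(key, r, left), left
    return (left << HALF_BITS) | right
```

Because each Feistel round is invertible regardless of the PRF, this gives a permutation on 30-bit values, which is exactly the property we need.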

Cycle Walking
We can use L-R to build a block cipher with a block size of any (even) number of bits we want, but the function still produces 2^b possible values, where b is the block size, and this generally won't line up perfectly with the set of values we want to encipher. To return to our SSN example, we have 10^9 possible values, which means we need a block size of 30 bits, which implies a set size of 2^{30} = 1073741824. So, for any given input value, there's about a 7% chance that it will encrypt to an invalid SSN (greater than 10^9). If the database (or software) is really aggressive about validity checking, then you'll have an unacceptable rejection rate.

To deal with this issue, Black and Rogaway describe a technique they call "cycle-walking". The idea is that we start with an initially valid value (1-999,999,999) and then encrypt it. If the ciphertext is also valid, we stop and emit it. If it's invalid (greater than 999,999,999), we encrypt again, and repeat until we have a valid value. This gives us an encryption procedure that is guaranteed to produce an in-range output. Decryption is done in the same way.
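Here's a sketch of cycle-walking in Python. To keep it self-contained I stand in for the 30-bit block cipher with a toy affine permutation (the constants are arbitrary and it's in no way cryptographically secure); the walking logic is the same regardless of the underlying cipher:

```python
MAX_SSN = 999_999_999      # valid plaintexts/ciphertexts: 1..999,999,999
BLOCK = 1 << 30            # the 30-bit cipher's domain

# Toy stand-in for the block cipher: an affine map with an odd
# multiplier is a permutation mod 2^30.  NOT secure -- only the
# cycle-walking logic matters here.
A = 0x2545F491 | 1
B = 0x1B873593
A_INV = pow(A, -1, BLOCK)  # modular inverse (Python 3.8+)

def toy_encrypt(x: int) -> int:
    return (A * x + B) % BLOCK

def toy_decrypt(y: int) -> int:
    return ((y - B) * A_INV) % BLOCK

def cycle_encrypt(x: int) -> int:
    """Encrypt, then keep re-encrypting until we land back in range."""
    y = toy_encrypt(x)
    while not (1 <= y <= MAX_SSN):
        y = toy_encrypt(y)
    return y

def cycle_decrypt(y: int) -> int:
    """Decrypt, walking back through any out-of-range intermediates."""
    x = toy_decrypt(y)
    while not (1 <= x <= MAX_SSN):
        x = toy_decrypt(x)
    return x
```

Termination is guaranteed because the underlying map is a permutation: starting from a valid value, the cycle you're walking must eventually return to a valid value, and decryption simply walks the same cycle in reverse.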

Bottom Line
So, why can't we just use cycle-walking by itself? Because it only works well if the block size is approximately right—if the valid set is a lot smaller than the cipher's 2^b possible outputs, then you have to do a lot of iterations in order to get an in-range result. So, you can't use a 64-bit block cipher to encrypt an SSN, because you'd end up having to do a prohibitive number of iterations (on the order of 2^{34} on average); you need to use L-R to construct a block cipher of approximately the right size and then use cycle-walking to shave off the last few values.

UPDATE: Paul Hoffman pointed out to me privately that it's not clear how this all relates to FPE. Basically, FPE means the combination of L-R plus cycle walking. This lets you do one-to-one and onto encryption for most set sizes. If the set size is really small, there's another technique (also due to Black and Rogaway): you encrypt all possible input values and then sort the ciphertexts. You then use the index of the ciphertext in the sorted list as the encrypted value. This is obviously prohibitively expensive unless the number of possible values is small because it requires encrypting all possible values and then keeping a very large mapping table.
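Here's a sketch of that small-set technique in Python, using HMAC-SHA256 as the stand-in "encryption" of each value (my own illustrative choice of PRF and encoding):

```python
import hmac
import hashlib

def small_domain_table(key: bytes, n: int) -> list:
    """Black-Rogaway small-set trick: 'encrypt' every value in the
    domain 0..n-1, sort the ciphertexts, and use each ciphertext's
    index in the sorted list as the encrypted value."""
    def ct(i: int) -> bytes:
        return hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()
    # order[rank] is the plaintext whose ciphertext sorts into position rank
    order = sorted(range(n), key=ct)
    table = [0] * n
    for rank, plaintext in enumerate(order):
        table[plaintext] = rank
    return table

table = small_domain_table(b"demo key", 10)
print(table)  # a key-dependent permutation of 0..9
```

Since every domain element gets a distinct rank, the result is one-to-one and onto by construction; the cost is that you must enumerate and store the whole table, which is why it only makes sense for tiny sets.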


April 19, 2008

Opinion has been shifting against polycarbonate plastics for a while now, and now Canada has decided to ban polycarbonate plastics for baby bottles:
OTTAWA -- The Canadian government moved Friday to ban polycarbonate infant bottles, the most popular variety on the market, after it officially declared one of their chemical ingredients toxic.

The action, by the departments of health and environment, is the first taken by any government against bisphenol-a, or BPA, a widely used chemical that mimics a human hormone. It has induced long-term changes in animals exposed to it through tests.


The health minister, Tony Clement, told reporters that after reviewing 150 research papers and conducting its own studies, his department concluded that children up to the age of 18 months were at the most risk from the chemical. Mr. Clement said that animal studies suggested "behavioral and neural symptoms later in life."

Clement claims that adults aren't at significant risk (note: I haven't really reviewed the literature myself at all), but MEC and Patagonia have already pulled polycarbonate drinking bottles (aka Nalgene bottles) off their shelves, and Nalgene has already introduced a new line of bottles called "Everyday" which are based not on BPA-containing polycarbonate but on Eastman's Tritan, which is supposed to be comparably tough. Also, according to this article, Charles Schumer has introduced a bill to ban the use of BPA-based polycarbonates in food and drink applications. Industry has been pretty actively opposing this kind of regulation, but given that alternatives are starting to appear, I suspect we've reached an inflection point where they'll just start replacing polycarbonate in most applications instead.


April 18, 2008

At the FCC hearing yesterday, there was a lot of talk about metered Internet. (See also Rob Malan arguing that net neutrality legislation will inevitably result in metered Internet service.) It's certainly true that metered Internet is one possible outcome of network neutrality regulation, but I'm not convinced that it's the only one. It's interesting to note that at the same time as we're arguing about this, all the major wireless service providers (which have historically been incredibly stingy about per-minute charging) have recently rolled out unlimited voice offerings.

Now, you could certainly argue (as Rob's argument implies) that there's an upper limit on how much bandwidth can be used by voice and so as the technology has improved, it's become cost effective to offer unlimited voice service, but that the bandwidth consumed by video will be much greater. On the other hand, just last year Apple pushed Cingular/AT&T into offering an unmetered data plan for the iPhone, and apps like YouTube for iPhone clearly encourage users to consume larger amounts of bandwidth than they would if just checking their email. Now, obviously the cell providers have a lot of latitude to manage the data portions of their network, but it still seems to me that there are a lot of factors in play here and that metered Internet is not the only possible outcome, even in a more highly regulated regime.


April 15, 2008

Jelly Belly has just brought out their BeanBoozled product:
BeanBoozled jelly beans come in 20 flavors, 10 weird and wild flavors matched up with 10 look-alike tasty flavors. Is the black jelly bean Licorice, or is it Skunk Spray? Perhaps the blue bean is Toothpaste flavor, or maybe it's delicious Berry Blue. Think you can tell them apart? We dare you!

You might not know when you will be bamboozled by a weird flavor. A key on the back of each box gives clues to the surprises found inside, but the beans look so similar, every bite will be a surprising dare.

I actually already have this problem with Jelly Belly bulk packs. Mrs. Guesswork buys these ginormous tubs of mixed jelly beans at Costco and I already find a pretty substantial fraction of the beans (cafe latte, cappuccino, a&w cream soda, licorice, ...) revolting, and they look a lot like other flavors that I like, so I have to be on my guard anyway.

On a related topic, why, when you go to Costco, do they insist on selling you mixed packs of Clif Bars, PowerBars, etc.? Are there really people who like vanilla crisp PowerBars, or is this just some scheme to get you to throw away 1/3 of the bars so you buy more that much sooner?


April 14, 2008

Joe Hall asks why one would want a GPS-enabled watch. Roughly speaking, there are three features I want:
  • Altitude measurement (though note you can get sports watches with a barometric altimeter, which is actually more accurate, at least when you want to measure elevation gain/loss).
  • Speed and distance. It's nice to be able to get some sense of how fast you're running and I find the GPS more convenient and comfortable than the foot pod pedometers that are the alternative.
  • Performance comparison. For my money, the coolest feature of a GPS sports watch is that you can get real time display of where you stand compared to a previous performance on the same course, which is a lot easier than remembering your time at multiple checkpoints. I can't figure out whether this is really useful—in fact I suspect it encourages you to push your workouts too hard to beat your previous pace—but it's still pretty sweet.

In principle a gizmo like this might be useful for getting you un-lost, but the fact that you don't have a real map, just a view of where you've been, makes it pretty hard to use for anything other than backtracking. If, for instance, you're doing a loop and there are multiple trails but not a dense enough network that you can just vector in on your start point directionally, then without a trail map a GPS is pretty useless. Pretty good for out and back trips, though.

Can someone explain to me why, whenever you go to a Mexican restaurant, they short you on tortillas? I doubt it's cost, because tortillas are incredibly cheap (less than $.04/ea retail for your standard corn tortilla), and these self-same places will gladly provide you with a mountain of tortilla chips whether you ask for them or not. And yet time after time I find myself sitting at my table waiting for some server to notice that I'm out of tortillas and bring me some more so I can assemble my fajitas. I suppose it's possible that I use more tortillas than average, but really, who can assemble a whole plate of fajitas with 3 tortillas? Baffling.

April 13, 2008

I already own the Garmin 305 and I'm reasonably pleased with it, though it's still fairly clunky. Garmin just brought out the new model, the 405, which looks to be quite a bit sleeker, a tiny bit smaller, and about 25% lighter. Not enough to justify buying a new unit, but a bit slicker if you're in the market.


April 12, 2008

Schneier expresses concern about the tire pressure monitoring system (TPMS). The way TPMS works is that each wheel contains a pressure sensor and a radio transmitter which transmits pressure data to a receiver in the car, which somehow alerts you if the pressure is too low. The alleged problem is that in order to allow distinguishing wheels from each other (and from those in adjoining cars), each wheel has a unique identifier, raising the possibility that one could build a radio receiver which would listen for these transmissions and track your car.

Obviously, this isn't that attractive a feature, as Hexview observes:

What problems exactly does the TPMS introduce? If you live in the United States, chances are, you have heard about the "traffic-improving" ideas where transportation authorities looked for the possibility to track all vehicles in nearly real time in order to issue speeding tickets or impose mileage-adjusted taxes. Those ideas caused a flood of privacy debates, but fortunately, it turned out that it was not technically or financially feasible to implement such a system within the next 5-10 years, so the hype quickly died out.

Guess what? With minor limitations, TPMS can be used for the very purpose of tracking your vehicle in real time with no substantial investments! TPMS can also be used to measure the speed of your vehicle. Similarly to highway/freeway speed sensors that measure traffic speed, TPMS readers can be installed in pairs to measure how quick your vehicle goes over a predefined distance. Technically, it is even plausible to use existing speed sensors to read TPMS data!


As every other tracking technology, the TPMS was introduced as a safety feature "for your protection". One might wonder why NHTSA (a government agency) would care so much about a small number of accidents related to under-pressurized tires. And why would it choose to mandate TPMS and not run-flat technology? Are we being tracked already? I hope not.

It's absolutely true that NHTSA required TPMS. It doesn't look to me, however, like NHTSA required this particular implementation, or any particular implementation. They just required that the car be able to detect a loss of pressure of more than 25%. As Hexview observes, there is a simple implementation that dramatically reduces the privacy problem: encrypt the sensor readings, and as far as I can tell this would be quite compatible with the NHTSA requirements (this doesn't totally eliminate the problem because of radio fingerprinting, but that's harder than just reading the ID out of the air). The good news is that since there's no need for my car to be able to read your car's tire pressure, it's quite possible for manufacturers to do the right thing without any kind of new standard.

Hexview implies that NHTSA may have required TPMS in order to enable them to monitor your whereabouts, but I find that somewhat unlikely. Certainly, when I was involved with DSRC/WAVE, privacy was foremost on everyone's minds, so it would be strange if NHTSA were to deliberately attempt to violate driver privacy. That said, the manufacturers were also pretty concerned about privacy, so if they have rolled out a system that enables tracking, that's a little surprising.


April 11, 2008

I recently found myself in the market for a new water filter to replace my venerable Katadyn Hiker (don't ask what happened, suffice to say it wasn't Katadyn's fault). Anyway, I cruised over to REI and after a bunch of dithering let the sales guy talk me into an MSR Sweetwater. In theory the Sweetwater has some advantages over the Hiker:
  • The handle gives it more mechanical advantage so it's supposed to be easier to pump.
  • The fitting for the nalgene bottle screws on so there's less risk that it pops off and lands in the contaminated water you're filtering out of.
  • You leave the tubes permanently connected and the pump has grooves to let you wrap the tubing around it. There's a cap that screws onto the nalgene fitting so you don't need to bag it to protect it from the contaminated hoses.
  • There's an overpressure port that's supposed to squirt out water when the filter clogs, giving you warning.

That's the theory. The practice was rather underwhelming. First, the pumping action promises to be convenient, but I actually found it quite awkward and worse than the Hiker's. Worse, I hadn't pumped 10 liters of clean water before water started spurting out of the overpressure port. I first attributed this to a tight seal with the container I was pumping into, but even after I vented the container (which seemed to help some), it kept spurting. When this happens you are supposed to scrub the inside of the filter. That seemed to help temporarily, but later in the day, when I was forced to pump from some murkier water, it clogged again and, worse yet, the output seemed to be slightly green. We were able to scrub the filter and get OK-looking output from a cleaner stream later, but this did not leave me feeling very warm about the whole thing, and seeing as I originally bought it from REI, I returned it at the conclusion of our trip. It's surprising, actually, since I've had other MSR gear (including the classic Whisperlite stove) and found it to work pretty well.

At this point, I'm trying to choose between another Hiker (I've had several and they're quite solid) and the brand new MSR Hyperflow. The Hyperflow is only about 2/3 the weight of the Hiker or Sweetwater, and pumps twice as fast. It uses a different filter technology than the Sweetwater, so there's no reason to think I'll have the same clogging problem. On the other hand, it's absolutely brand new, so I'm tempted to wait for others to gain some experience with it before forking over my money. The one bad thing I've heard so far is that letting the filter freeze destroys it, so for cold-weather camping you'd need to sleep with it, which is kind of lame.


April 10, 2008

UK ISPs say that the BBC's iPlayer video service is "threatening to bring the network to a halt":
The success of the BBC's iPlayer is putting the internet under severe strain and threatening to bring the network to a halt, internet service providers claimed yesterday.

They want the corporation to share the cost of upgrading the network - estimated at £831 million - to cope with the increased workload. Viewers are now watching more than one million BBC programmes online each week.

The BBC said yesterday that its iPlayer service, an archive of programmes shown over the previous seven days, was accounting for between 3 and 5 per cent of all internet traffic in Britain, with the first episode of The Apprentice watched more than 100,000 times via a computer.


The whole reason I get Internet service is so I can download stuff off the Intertubes, which means that whenever I find something cool and want to suck it down I don't want my ISP complaining that I'm using too much bandwidth. Now, it's true that when there is a lot of data flowing from a server on ISP A to ISP B, it's a bit unclear whether A should pay B, B should pay A, or whether money should change hands at all. Note that these aren't networking issues but pure economic issues; questions like this have been relevant ever since the days of paper mail. (In the Internet world, a lot of this is social. The amount of money you pay in settlements scales roughly with bandwidth until you get big enough that people want to peer with you, at which point you don't pay.) And I'm not saying there isn't a need for the ISPs to upgrade their infrastructure and figure out whose rates to jack up in order to pay for it. But these issues aren't effectively solved by every ISP trying to hold up any service that gets popular.


April 9, 2008

It's well known among authors that it's incredibly dangerous to look at your book once it's published: you're sure to find some embarrassing error as soon as you open it. (Don't get me started about some of the errors in SSL and TLS.) In that vein, I recently picked up Matthew Yglesias's Heads in the Sand: How the Republicans Screw Up Foreign Policy and Foreign Policy Screws Up the Democrats. Yglesias is famous for writing quickly and having numerous typos, homonym mixups, etc., in his posts. Sure enough, I hadn't gotten past page xviii when I discovered he had misspelled the name of his roommate, Kriston Capps, as "Krison." It's reasonably interesting otherwise, though.

April 8, 2008

A group in Colorado recently reported that Comcast had introduced a new traffic shaping mechanism which was blocking even non-P2P TCP connections:
Recently, it has been observed that Comcast is disrupting TCP connections using forged TCP reset (RST) packets [1]. These reset packets were originally targeted at TCP connections associated with the BitTorrent file-sharing protocol. However, Comcast has stated that they are transitioning to a more "protocol neutral" traffic shaping approach [2]. We have recently observed this shift in policy, and have collected network traffic traces to demonstrate the behavior of their traffic shaping. In particular, we are able (during peak usage times) to synthetically generate a relatively large number of TCP reset packets aimed at any new TCP connection regardless of the application-level protocol. Surprisingly, this traffic shaping even disrupts normal web browsing and e-mail applications. Specifically, we observe two different types of packet forgery and packets being discarded.


We synthetically generated TCP SYN packets at a rate of 100 SYN packets per second using the hping utility [3]. The packets were destined for the reserved IP address, on which no host is present. We simultaneously collect network traces using tcpdump [4]. This data collection process was repeated at various times throughout multiple days. In addition, we could monitor a destination host to determine if outgoing packets reached their destination, and to determine if responses are generated by the destination host or by a third-party. Finally, this data collection was conducted from multiple Comcast accounts, all within close geographical proximity.

My initial reaction was that 100 SYN packets per second is a pretty pathological load, especially when the packets are being sent to a bogus net block (2/8 is unallocated by IANA), and that this sounded a lot more like some sort of anti-DoS measure than traffic shaping. The answer turns out to be even simpler:

A note regarding our findings: Further experiments have led us to believe that our initial conclusions that indicated Comcast's responsibility for dropping TCP SYN packets and forging TCP SYN, ACK and RST (reset) packets was incorrect. Our experiments were conducted from behind a network address translator (NAT). The anomalous packets were generated when the outbound TCP SYN packets exceeded the NAT's resources available in it's state table. In this case, TCP SYN, ACK and RST packets were sent. We would like to thank Don Bowman, Robb Topolski, Neal Krawetz, and Comcast engineers for bringing this to our attention. We sincerely apologize for any inconvenience that this posting may have caused.

This sounds pretty plausible to me; NATs often have fairly small state tables and 100 cps could definitely overload a residential NAT pretty quickly, and it's certainly easy enough to forget that the NAT box you're sitting behind is anything other than transparent. One thing that surprises me a bit is that the researchers claim to be seeing two kinds of anomalies: RSTs and spoofed SYN/ACKs. One would generally expect to see only one kind of response to overload. Maybe they have more than one NAT, though. It would certainly be interesting to hear what exact network gear they are using.
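The state-table explanation is easy to model. Here is a toy NAT (my own sketch, not the researchers' actual gear): outbound SYNs create table entries, and once the table is full, new flows get refused, which from the host's side can look like forged RSTs.

```python
# Toy model of a residential NAT with a fixed-size connection state table.
# Each new outbound flow consumes an entry; when the table is exhausted,
# further connection attempts fail, e.g., by the NAT answering with an RST.

class ToyNAT:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.table = set()

    def outbound_syn(self, flow) -> str:
        if flow in self.table:
            return "ok"           # retransmission on an existing flow
        if len(self.table) >= self.capacity:
            return "rst"          # no room: the attempt is refused
        self.table.add(flow)
        return "ok"
```

At 100 new flows per second against a table of a few thousand entries, the table fills within tens of seconds, matching what the researchers saw.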


April 7, 2008

Went by the UPS store to mail something today. I had a prepaid shipping label so it was just a matter of slapping a pouch on the box and shoving in the label and dropping it off. Turns out they don't actually have pouches and they usually charge you a dollar to tape it on, though the clerk waived the fee. According to her "We're not UPS, though a lot of people think so." I can't imagine why anyone would think "The UPS Store" was UPS. As I understand the situation, The UPS Store is the new name for Mail Boxes etc., and they're franchised, which, I guess, is why they want to sell you supplies rather than give them to you for free like FedEx (and as I recall regular UPS locations) do.

April 6, 2008

A treasure trove of Tom Lehrer videos.

More Tom Lehrer: Silent E and LY from The Electric Company.

Chris Sharma doing the first ascent of Dreamcatcher, 5.14c/d in Squamish, BC.

On the security of traffic school

After a recent ticket (received while on a conference call, believe it or not), I opted for traffic school. In Santa Clara County, if you want Web traffic school, you need to take it from DriversEd.com. Luckily, as of 2008 you no longer need to go in and take a final exam in person; you can do the whole thing online, which makes the process comparatively painless.

Unsurprisingly, DriversEd.com has some features designed to ensure you receive the full educational value of the traffic school experience:

  • Timers that require you to stay on a page for a specified amount of time.
  • Intermediate tests.
  • Security questions (e.g., what are the last four digits of your SSN).

Of course, as a security guy the first thing I think about is how to bypass this stuff. The timers are easy: they're in JavaScript. If you run your browser without JavaScript they go away, so you can in principle zip through the pages as fast as you want. I didn't see any evidence this was enforced on the server side.

Of course, then you're not paying attention, so you may have some trouble with the intermediate tests. Luckily, if you get an answer wrong, they give you the right answer and then a slightly different selection of questions, with a lot of overlap with the previous ones. I wasn't brave enough to try this, since there might be some limit on the number of tries, but it looks like you could just fail your way to having all the answers. And, of course, you could Google the answers. So, clearly, one could just zip through all the pages and then flail through the self-test.
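The "fail your way to the answers" strategy is easy to simulate (a toy model of mine, not the actual site's behavior): each quiz draws overlapping questions from a fixed pool, and every wrong answer reveals the correct one, so the set of known answers only grows until you can pass.

```python
# Toy simulation: each attempt samples quiz_size questions from the pool,
# and any question shown teaches you its answer (the site displays the
# correct answer when you miss). Count attempts until the whole pool is known.
import random

def attempts_to_learn(pool: dict, quiz_size: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    known = {}
    attempts = 0
    while len(known) < len(pool):
        attempts += 1
        for q in rng.sample(sorted(pool), quiz_size):
            if q not in known:
                known[q] = pool[q]  # the right answer is revealed to us
    return attempts
```

With a pool of, say, 10 questions and 5 per quiz, coverage is complete after a handful of attempts; the overlap between quizzes is what makes the strategy work.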

The security questions are obviously designed to stop you from outsourcing the task of taking the class to someone else: you'd need to give them some personal information. Most of it (weight, DL#, height, DOB, zip code) isn't really private. You might not want to give your contract click monkey your SSN, though. Weirdly, the registration program prompts you for this stuff even though a lot of it is on your license. I wonder if you could just type fake answers, in which case it would presumably be fine to have someone else take the class for you.

If you were willing to do some programming work, you could probably just screen-scrape the pages: clicking through the instructional pages, picking out the self-tests and answering them by random guessing plus corrections (nicely highlighted in red and green), and then answering the security questions. With some luck, I suspect I could do this in about 20% more time than it would take to just go through the class the old-fashioned way. That's what a real programmer would do.


April 5, 2008

In the comments section Brian Korver points me to Priority Start, which is a third party gizmo that claims to detect excessive battery drain and disconnect the battery, thus avoiding complete drain. Looks pretty slick. Anyone have experience with one of these?
Computerworld has a good article about hard drive failures. The bottom line here is that (unsurprisingly) real world drive failure rates far exceed the failure rates (MTBF and AFR) reported by manufacturers. This won't surprise anyone who has operated systems with a reasonable number of disks.
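The gap the article describes turns on the relationship between the two numbers manufacturers quote. Under a constant-failure-rate model, a stated MTBF implies an annualized failure rate (AFR); this is the standard conversion, shown here as a quick calculation:

```python
# AFR implied by a quoted MTBF under a constant-failure-rate (exponential)
# model: AFR = 1 - exp(-hours_per_year / MTBF), which for large MTBF is
# approximately 8760 / MTBF.
import math

def afr_from_mtbf(mtbf_hours: float, hours_per_year: float = 8760.0) -> float:
    return 1.0 - math.exp(-hours_per_year / mtbf_hours)

# A "1,000,000-hour MTBF" drive implies an AFR of about 0.87%, while the
# field studies report real-world annual replacement rates several times that.
```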

Fundamentally, though, what's annoying about disk drive failures isn't that they happen but that they're unpredictable. After all, the gas in my car keeps running out (every three hundred miles or so I need to put more in), but that's not a big problem because I have a gauge that tells me when I need to refill the tank. If hard drives behaved the same way, you could just treat them as a consumable. The problem is that (as Pinheiro et al. report) disk drive failures are random and the SMART diagnostics don't provide reliable warning. Instead, you're left with failures as surprise events requiring emergency recovery. Even if you have backups, this kind of failure isn't fun, coming as it always seems to right when you're about to go home for the weekend.

The standard answer here is to use RAID and swap drives whenever one fails, but my experience (and that of other home users I've talked to) is that RAID systems fail to recover from a drive failure often enough that you not infrequently end up with something that looks more like a backup-and-restore than a seamless replacement.


April 4, 2008

Ed Felten has another post up about the discrepancies in the Sequoia machines used in the New Jersey primary. At the moment, I don't have much to add to Ed's analysis of the situation (except to say that this sure looks like some corner case where some counter is being incremented where it shouldn't or not incremented where it should), but check out the image of the summary tape he posts:

Note the empty space where the number of the seal is supposed to be written down but isn't. That's not really good.

The background here is that there are lots of parts of the voting machine (e.g., the ability to open the case or swap memory cards) which, if accessed by the wrong people, could lead to various forms of attack. It's hard to build the system to guarantee that nobody can obtain access, and what's really needed is to limit access to authorized personnel, which is even harder. Instead, the systems are generally designed so that a tamper-evident seal can be placed in such a way that you need to breach the seal in an obvious way in order to obtain access. In practice it turns out that this isn't always true, both because the seal points aren't always placed correctly and because it's actually known to be possible to open a lot of seals without creating evidence (see the California TTBR reports for more on this). But that's the theory.

However, there are a lot of seals, and they need to be broken and replaced pretty frequently (e.g., to insert and remove the memory cards before and after the election), and seals are available on the open market. So it's not enough just to have a seal; you need to be able to detect when the seal is replaced with another, similar seal. Unsurprisingly, this is done by giving each seal its own serial number. Every time a seal is breached or placed you're supposed to record the seal number (that's what that space on the summary tape is for), and part of the job of verifying a seal is checking that the number is what it's supposed to be. If you don't record the seal numbers, then the system falls apart, since anyone who has access to any tamper seals at all can just break the seals, do whatever they want, and replace them with their own seals. And since a lot of the security of the current systems depends on the seals working (for instance, in many of the systems the seals cover access points which would allow complete subversion of the device; see the TTBR reports again), this is fairly serious.
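The verification logic this implies is simple, which is what makes the blank on the tape so damning. A toy model (mine, not any jurisdiction's actual procedure): each replacement event logs the serial removed and the serial applied, and a check passes only if the chain is unbroken and the seal now on the machine matches the last serial logged.

```python
# Toy chain-of-custody check for seal serial numbers. Each log entry is a
# (removed_serial, applied_serial) pair, oldest first. The seal currently on
# the machine must be the last one applied, and each removal must match the
# previous application; otherwise someone swapped in a look-alike seal.

def seal_ok(log, observed_serial) -> bool:
    if not log:
        return False          # no record at all: the blank space on the tape
    for (_, applied), (next_removed, _) in zip(log, log[1:]):
        if next_removed != applied:
            return False      # the seal removed wasn't the one we put on
    return observed_serial == log[-1][1]
```

Without the recorded serials, `seal_ok` degenerates to "is there a seal at all", which any attacker with off-the-shelf seals can satisfy.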

Now, I don't know that much about the Sequoia machines or the chain-of-custody procedures in NJ. It could easily be that NJ has some procedural control that renders this particular seal unimportant. But absent any further information, this seems like it might not be a particularly great practice.


April 3, 2008

A month or so ago, Mrs. G. bought us one of these Microplane cheese graters. It's pretty cool technology, actually: the blades are chemically etched rather than stamped. It goes through parmesan like butter. Unfortunately, it also goes through knuckles like butter. Just FYI.
Martin Rex points to this whitepaper about a claimed security flaw in RFC 3280 (the RFC for X.509/PKIX certificates). The issue is that certificates can contain a variety of URLs, including:
  • Intermediate certificates (for the signer of this cert).
  • Pointers to the certificate policy statement.
  • Pointers to where you can get a CRL.
  • Pointers to an OCSP server.

When your client goes to validate the certificate, it may (automatically) try to retrieve what's on the other end of the URL. Arguably, this is bad:

However, elegant does not usually mean secure. The problem in this case is that until the certificate chain is verified, the user sending the certificate is usually untrusted. This means that the specified URI has to be treated as potentially evil input from an unauthenticated user. This simple fact is missing from the "Security Considerations" section of the RFC and thus implementors have gotten it wrong.

When implemented naively, this means that an unauthenticated user can embed arbitrary URIs within a certificate and can thus force the verifier to send out arbitrary HTTP requests on its behalf -- for example to networks formerly unreachable to an attacker. The response itself is not forwarded to the attacker, so he is limited to blind attacks. A specific case of this can be used to gain information about the verifier -- for example whether he has opened a certain email or office document. As more than one URI can be embedded in the certificate, it would also theoretically be possible to gain information on internal networks using timing information. For this, one would create a certificate with one URI controlled by the attacker, one URI internal to the attacked, one URI controlled by the attacker and measure the timing distance between the two accesses to the attacker-controlled URIs. In practical experience, this is still theoretical, though.
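The timing trick quoted above reduces to simple arithmetic, sketched here as a toy simulation (mine, not the paper's actual experiment). The verifier fetches the certificate's URLs in order: attacker, internal, attacker. The attacker sees only the timestamps of the two requests to its own server, but the gap between them leaks roughly how long the internal fetch took.

```python
# The attacker observes when each of its two URLs is hit. The time between
# hits is the attacker's own response time plus the duration of the internal
# fetch sandwiched between them, so the internal latency can be inferred.

def infer_internal_latency(t_attacker_hit1: float, t_attacker_hit2: float,
                           attacker_response_time: float) -> float:
    return (t_attacker_hit2 - t_attacker_hit1) - attacker_response_time

# If the two hits are 0.35 s apart and the attacker served its first response
# in 0.05 s, the internal fetch took about 0.30 s: long enough to distinguish
# a responding internal host from a connection that failed fast.
```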

This certainly is technically true, but it's unclear how serious the issue is. After all, if your mail client is willing to read HTML mail (and many are) and automatically retrieves inline images (many do), then it's pretty easy for the sender to verify that you read a given message, and potentially to mount the timing attacks these researchers describe (though it's an open question whether that would work). There are actually a number of protocols that have automatic URL dereference built in.

There's actually something more interesting, though trickier to exploit (and harder to deal with), here. The white paper talks about probing internal servers, but depending on what services are running there and on what ports, there's some possibility that the client (presumably behind the firewall) could do more than just detect the internal server. Theoretically, it might be able to give the server instructions that it would follow. How well this actually works depends a lot on what the internal service is. The attacker doesn't get much control of the message the client sends to the server; it just gets to embed the URL in an HTTP GET request. Obviously, since the client is talking HTTP, it would be most convenient if the target were an HTTP server, since you'd be protocol compliant. In theory, HTTP GETs are not supposed to have side effects on the server, but of course that happens all the time.

If the server isn't HTTP, then you have to get pretty lucky. You need to be able to encode a meaningful protocol request in the URI and the server needs to be resilient enough to nonconformant traffic that it's willing to ignore the bogus HTTP request wrapper and process whatever request is embedded in the URI. This obviously isn't superconvenient for the attacker, who would like a much finer degree of control of the protocol messages, like you'd get with a client-side program (hence the browser same origin policy).
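To make the constraint concrete, here is roughly what a naive HTTP fetcher emits for an attacker-supplied URL (my sketch, not any particular client's behavior). A non-HTTP server on the target port sees these exact bytes, and the attacker controls only the path portion of the request line:

```python
# Render the raw request bytes a simple HTTP client would send for a URL.
# Everything except the path and query is fixed protocol framing, which is
# why smuggling a request to a non-HTTP service requires a very tolerant
# server on the other end.
from urllib.parse import urlsplit

def raw_request(url: str) -> bytes:
    parts = urlsplit(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {parts.hostname}\r\n"
            f"\r\n").encode()
```

A forgiving line-oriented server might skip the unrecognized `GET ...` line and act on whatever the attacker embedded in the path; a strict one will just error out, which is the attacker's problem.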


April 1, 2008

Went to start our Prius today and the door wouldn't unlock. I thought it might be the key fob, but when I manually unlocked the car, got inside, and put the fob in the ignition, none of the console lights would come on. It turns out that we'd left the dome light on and run the battery down to zero. How can this be, you ask? It's a hybrid car with a 6.5 Ah battery. Surely it should be able to power a crappy roof-mounted lightbulb more or less indefinitely. It turns out, however, that that battery just runs the drive system. There's a dinky little lead-acid battery that runs the onboard electrics and (presumably) starts the gas engine. This is easy to run down. Or at least so says the guy who came by to jump-start it. Sure would be nice to have some gizmo that detected when you were running down the battery and shut off the interior lights. Not exactly a complicated piece of science to add...
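Back-of-the-envelope numbers make the failure unsurprising. These figures are my assumptions, not Toyota specs: say the dome light draws about 8 W at 12 V and the auxiliary lead-acid battery holds roughly 35 Ah.

```python
# Hours until a battery of a given capacity is nominally flat under a
# constant load. Assumed numbers: ~8 W dome light, ~35 Ah auxiliary battery.

def hours_to_flat(capacity_ah: float, load_watts: float, volts: float = 12.0) -> float:
    return capacity_ah / (load_watts / volts)

# hours_to_flat(35, 8) is about 52 hours, and in practice the electronics
# stop booting well before the battery is fully drained -- so a light left
# on overnight or over a weekend is quite enough.
```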