EKR: July 2009 Archives

 

July 29, 2009

Wired reports on Apple's response to EFF's proposed DMCA exception for iPhone jailbreaking. I'm not qualified to have a position on the legal arguments Apple is advancing, but check out their technical arguments:
More generally, as Mr. Joswiak testified at the hearings in Palo Alto, a critical consideration in the development of the iPhone was to design it in such a way that a relationship of trust could be established with the telecommunication provider (AT&T in the case of users in the U.S.). Before partnering with Apple to provide voice and data services, it was critical to AT&T that the iPhone be secure against hacks that could allow malicious users, or even well-intentioned users, to wreak havoc on the network. Because jailbreaking makes hacking of the BBP software much easier, jailbreaking affords an avenue for hackers to accomplish a number of undesirable things on the network.

For example, each iPhone contains a unique Exclusive Chip Identification (ECID) number that identifies the phone to the cell tower. With access to the BBP via jailbreaking, hackers may be able to change the ECID, which in turn can enable phone calls to be made anonymously (this would be desirable to drug dealers, for example) or charges for the calls to be avoided. If changing the ECID results in multiple phones having the same ECID being connected to a given tower simultaneously, the tower software might react in an unknown manner, including possibly kicking those phones off the network, making their users unable to make phone calls or send/receive data. By hacking the BBP software through a jailbroken phone and taking control of the BBP software, a hacker can initiate commands to the cell tower software that may skirt the carrier's rules limiting the packet size or the amount of data that can be transmitted, or avoid charges for sending data. More pernicious forms of activity may also be enabled. For example, a local or international hacker could potentially initiate commands (such as a denial of service attack) that could crash the tower software, rendering the tower entirely inoperable to process calls or transmit data. In short, taking control of the BBP software would be much the equivalent of getting inside the firewall of a corporate computer - to potentially catastrophic result. The technological protection measures were designed into the iPhone precisely to prevent these kinds of pernicious activities, and if granted, the jailbreaking exemption would open the door to them.

This is an odd set of arguments: if what I want to do is bring down the cell network, I've got a lot of options other than hacking my iPhone. For instance, I could buy a less locked-down phone or a standard programmable GSM development kit on the open market. In general, GSM chipsets and radios just aren't controlled items. Second, if a misbehaving device is able to bring down a significant fraction of the cellular system, then this represents a serious design error in the network: a cell phone system is a distributed system with a very large number of devices under the control of potential attackers; you need to assume that some of them will be compromised and design the network so that it's resistant to partial compromise. The firewall analogy is particularly inapt here: you put untrusted devices outside the firewall, not inside. I'm not an expert on the design of GSM, but my impression is that it is designed to be robust against handset compromise. The designs for 3GPP I've seen certainly assume that the handsets can't be trusted.

That leaves us with more mundane applications where attackers want to actually use the iPhone in an unauthorized way. Mainly, this is network overuse, toll fraud, etc. (Anonymous calling isn't that relevant here, since you can just buy cheap prepaid cell phones at the 7/11. You'd think someone at Apple would have watched The Wire.) As far as toll fraud goes, I'm surprised to hear the claim that hacking the iPhone itself lets you impersonate other phones. My understanding was that authentication in the GSM network was primarily via the SIM card, which is provided by the carrier and isn't affected by phone compromise. [The GSM Security site sort of confirms this, but I know there are some EG readers who know more about GSM security than I do, so hopefully they will weigh in here.] It's certainly true that control of the iPhone will let you send traffic that the provider doesn't like, and the phone can be programmed to enforce controls on network usage, so this is probably getting closer to a relevant concern. On the other hand, controls like this can be enforced in the network in a way that can't be bypassed by tampering with the phone.
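
For concreteness, here's a minimal sketch of the SIM-centric challenge-response design as I understand it. The point is that the long-term secret (Ki) lives in the SIM, which the carrier provisions, so compromising the phone's firmware doesn't directly give you the credentials needed to impersonate a subscriber. This is schematic only: the real A3 algorithm is operator-chosen (historically COMP128), and the code structure here is my illustration, not anything from a real handset.

    # Schematic GSM-style challenge-response; illustrative only, not real A3.
    import hmac, hashlib, os

    class SIM:
        """The long-term secret Ki never leaves the card."""
        def __init__(self, imsi, ki):
            self.imsi = imsi
            self._ki = ki                     # provisioned by the carrier

        def a3(self, rand):
            # Stand-in for A3: a keyed function of the network's challenge
            return hmac.new(self._ki, rand, hashlib.sha256).digest()[:4]

    def network_authenticate(subscriber_db, imsi, sim):
        rand = os.urandom(16)                 # fresh challenge (RAND)
        sres = sim.a3(rand)                   # computed on the card (SRES)
        expected = hmac.new(subscriber_db[imsi], rand, hashlib.sha256).digest()[:4]
        return hmac.compare_digest(sres, expected)

    ki = os.urandom(16)
    sim = SIM("001010123456789", ki)
    assert network_authenticate({"001010123456789": ki}, sim.imsi, sim)

Note that nothing in this exchange depends on trusting the phone's software: the phone is just a pipe between the card and the tower.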

While I'm not that concerned about jailbreaking leading to the parade of horrors Apple cites here, it's arguable that Apple's insistence on locking down the platform has made the problem worse. What people want to do is primarily: (1) load new software on the phone and (2) unlock the phone so they can use it with other carriers. However, because Apple won't let you do either of these, a lot of effort has been put into breaking all the protections on the phone, which naturally leads to the development of expertise and tooling for breaking the platform in general. There's an analogy here to the observation (I think I heard Mark Kleiman make this) that minimum drinking ages lead to the development of a fake ID industry, which then makes it easier for criminals and terrorists to get fake IDs.

 

July 27, 2009

LaTeX is great at generating math symbols, but it's hard to remember whatever bizarre command the LaTeX guys thought was appropriate for each one. Luckily, Joe Hall recently pointed me to Detexify, a tool that lets you draw the symbol you want; it then tries to guess what the character is and gives you the appropriate codes. It generally gives you a bunch of options, and often some of them are pretty comically wrong, but so far it's always given me the right one as well, so that's good.
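
For instance (my made-up example, not one from Detexify's actual output): draw a bowtie-ish shape and it will point you at \bowtie, which you can then drop into math mode with no extra packages:

    % Hypothetical Detexify lookup: you drew something like a bowtie,
    % and it suggested \bowtie (math mode, standard LaTeX).
    \documentclass{article}
    \begin{document}
    The natural join is written $R \bowtie S$.
    \end{document}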
 

July 26, 2009

Hovav Shacham just alerted me to an Internet emergency: AT&T is blocking 4chan. I don't know any more than you, but I think it's probably time to upgrade to threatcon orange.
 
The NYT's somewhat overwrought article about the putative future of AI includes the following gem:
Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, "Oh no, sorry to hear that."

A physician told him afterward that it was wonderful that the system responded to human emotion. "That's a great idea," Dr. Horvitz said he was told. "I have no time for that."

Maybe I'm just too close to the problem, but I pretty regularly get apologies from pieces of machinery and I don't find them satisfying at all. For instance, nearly every phone tree in the universe apologizes for your having to wait, and United's IVR apologizes for not understanding you. Maybe the first time you get this it's a surprise, but it doesn't take long to realize it's the same insincere recorded voice and then it's just "Must. Control. Fist. Of. Death." Also, anger doesn't help the IVR understand you.

 

July 25, 2009

The Times reports that German theater security is using night vision goggles in an attempt to detect people pirating the new Harry Potter movie:
Keep your hands where we can see them! Warner Bros. Pictures is resorting to drastic measures to prevent unauthorized video recordings of its newest Harry Potter epic. Security guards in Germany have been using night vision goggles in theaters running Harry Potter and the Half-Blood Prince to find camcorders that might be otherwise hard to spot once the theater lights are off.

...

Warner has since officially acknowledged the use of the surveillance gear. The company said that it was restricted to 10 theaters that have been known to be visited by pirates armed with camcorders before. Security guards don't take any video recordings of the audience, and theaters clearly warn customers about the measures, it told the German press. A theater owner told reporters that Warner threatened to stop the distribution of any future titles to her theater if she hadn't agreed to the measure, according to a report by Die Welt.

Seems like being a theater employee in Germany is a lot cooler than it was when I was a kid. As far as privacy goes, I seem to remember that movie theaters are not infrequently used as make-out venues. That might make things a bit more interesting...

P.S. This seems like totally reasonable law enforcement practice.

 

July 24, 2009

As you may have heard, Palm and Apple are currently in an arms race over whether the Palm Pre can sync with iTunes. When the Pre first came out, it synced with iTunes. Apple recently released a patch to block it, and Palm released an update to the Pre that counters Apple's blocking. The current round centers on USB vendor IDs. USB devices have a vendor ID which identifies who makes the product. iTunes apparently checks for Apple's vendor ID, and Palm is impersonating it, so the Pre appears to be an iPod.

It should be readily apparent that there's no technical way for Apple to prevail with this kind of strategy; as long as there is a single fixed string that a valid device emits, Palm just needs to get a copy of that string and send it to iTunes (communications security people call this a replay attack). That doesn't mean that Apple can't win, of course. For instance, they could convince the USB Implementers Forum that Palm is violating the rules (Palm has already complained about Apple). I don't know what, if any, enforcement powers USB-IF has, but if they have any, Apple might conceivably convince them to stop Palm. [Question for any lawyers: does this change by Palm "circumvent a technological measure that effectively controls access to a work protected under this title" in the sense of the DMCA?] Another way to get past the technical replay problem is to make the replayed string something that Palm can't legally replay, like a random section of the iPod firmware.
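
To see why a static identifier can't work as an authenticator, here's a minimal sketch. This is my illustration, not iTunes' actual logic; 0x05AC is Apple's assigned USB vendor ID, but the product IDs are made up:

    # Why a fixed vendor-ID check is replayable: a sketch, not iTunes' code.
    APPLE_VENDOR_ID = 0x05AC  # assigned to Apple by the USB-IF

    def itunes_accepts(device_descriptor):
        # The check only inspects a fixed field the device itself reports...
        return device_descriptor["idVendor"] == APPLE_VENDOR_ID

    ipod = {"idVendor": 0x05AC, "idProduct": 0x1200}  # product IDs illustrative
    pre  = {"idVendor": 0x05AC, "idProduct": 0x8002}  # Palm just claims the same value

    assert itunes_accepts(ipod)
    assert itunes_accepts(pre)   # ...so any device can "replay" it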

Even if we limit ourselves to technical approaches (which is much more fun) there are straightforward technical measures by which Apple could have built the system to make what Palm has done essentially impossible. For instance, they could have given every i{Pod,Phone} an asymmetric key pair and certificate and forced each device to authenticate prior to syncing. This would have made Palm's job very hard: even if they were to recover the keys from some devices, Apple could quickly blacklist those devices—including having an online blacklist which iTunes checks. Since the whole point of the exercise is to make things easy for the user, forcing them to constantly download fresh keys to their Pre seems like a real imposition.
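
Here's roughly what that could look like: a sketch under my own assumptions, using Ed25519 from the Python cryptography package. A real scheme from Apple could differ in every detail, but the structure (fresh challenge, per-device key, revocation list) is the point:

    # Sketch of per-device challenge-response auth with a revocation list.
    # Assumptions mine throughout; this is not Apple's protocol.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At manufacture: each device gets its own key pair; the vendor records
    # the public key (in practice, a certificate chained to the vendor's CA).
    device_key = Ed25519PrivateKey.generate()
    device_id = "serial-0001"
    registry = {device_id: device_key.public_key()}
    blacklist = set()                          # devices whose keys leaked

    def itunes_allows_sync(device_id, sign):
        if device_id in blacklist:
            return False
        challenge = os.urandom(32)             # fresh per sync, so replay fails
        signature = sign(challenge)            # computed by the device
        try:
            registry[device_id].verify(signature, challenge)
            return True
        except (KeyError, InvalidSignature):
            return False

    assert itunes_allows_sync(device_id, device_key.sign)
    blacklist.add(device_id)                   # extracted key? revoke just that device
    assert not itunes_allows_sync(device_id, device_key.sign)

Because the challenge is fresh each time, there's no fixed string for Palm to capture and replay; they'd have to extract an actual private key, and each extracted key can be individually revoked.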

However, it seems that Apple hasn't built anything like this into their systems and it's a bit of a challenge to do it now; we somehow need to initialize each device with a key and a certificate. There's of course no problem in loading new firmware and having it generate a key pair, certificate signing request, etc., and having it signed by Apple. But of course the Pre can do the same thing, so we've reduced it to a previously unsolved problem. One could imagine that Apple could force the key generation/certification process to happen online and torture the device with a bunch of forensics. Palm can of course try to defeat those, but since Apple just needs to change their servers which they can do rapidly, this makes Palm's job somewhat harder. And of course if we're willing to allow legal measures, Apple could force you to click through some license attesting that you have an Apple device, maybe check your serial number, etc. Ultimately, though, I'm not sure you can get past this bootstrapping problem with purely technical measures.

 
Ed Felten writes about the economic forces that drive cloud computing, arguing that a prime driver is the desire to reduce administrative costs:
Why, then, are we moving into the cloud? The key issue is the cost of management. Thus far we focused only on computing resources such as storage, computation, and data transfer; but the cost of managing all of this -- making sure the right software version is installed, that data is backed up, that spam filters are updated, and so on -- is a significant part of the picture. Indeed, as the cost of computing resources, on both client and server sides, continues to fall rapidly, management becomes a bigger and bigger fraction of the total cost. And so we move toward an approach that minimizes management cost, even if that approach is relatively wasteful of computing resources. The key is not that we're moving computation from client to server, but that we're moving management to the server, where a team of experts can manage matters for many users.

This certainly is true to an extent and it's one of the driving factors behind all sorts of outsourced hosting. Educated Guesswork, for instance, is hosted on Dreamhost, in large part because I didn't want the hassle of maintaining yet another public Internet-accessible server. I'm not sure I would call this "cloud computing", though, except retroactively.

That said, the term "cloud computing" covers a lot of ground (see the Wikipedia article), and I don't think Felten's argument holds up as well when we look at examples that look less like outsourced applications. Consider, for example Amazon's Elastic Compute Cluster (EC2). EC2 lets you rapidly spin up a large number of identical servers on Amazon's hardware and bring them up and down as required to service your load. Now, there is a substantial amount of management overhead reduction at the hardware level in that you don't need to contract for Internet, power, HVAC, etc., but since you're running a virtualized machine, you still have all the software management issues Ed mentions, and they're somewhat worse since you have to work within Amazon's infrastructure (see here for some complaining about this). Much of the benefit of an EC2-type solution is extreme resource flexibility: if you have a sudden load spike, you don't need to quickly roll out a bunch of new hardware, you just bring up some EC2 instances. When the spike goes away, you shut them down.

A related benefit is that this reduces resource consumption via a crude form of statistical multiplexing: if EC2 is running a large number of Web sites, they're probably not all experiencing spikes at the same time, so the total amount of spare capacity required in the system is a lot smaller.
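
A toy simulation makes the point (all the workload numbers here are invented; it's the shape of the result that matters):

    # Toy model of statistical multiplexing: provisioning for the aggregate
    # peak is far cheaper than provisioning every site for its own peak.
    import random

    random.seed(1)
    N_SITES, N_HOURS = 100, 1000
    BASE, SPIKE, SPIKE_PROB = 1.0, 20.0, 0.01   # made-up workload parameters

    loads = [[BASE + (SPIKE if random.random() < SPIKE_PROB else 0.0)
              for _ in range(N_HOURS)] for _ in range(N_SITES)]

    # Each site provisions alone: pay for every site's individual peak.
    dedicated = sum(max(site) for site in loads)

    # Shared infrastructure: pay only for the peak of the summed load.
    shared = max(sum(site[h] for site in loads) for h in range(N_HOURS))

    print(f"dedicated capacity: {dedicated:.0f}")
    print(f"shared capacity:    {shared:.0f}")   # much smaller: spikes don't align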

Both of these benefits apply as well to applications in the cloud (for instance, Ed's Gmail example). If you run your own mail server, it's idle almost all the time. On the other hand, if you use Gmail (or even a hosted service), then you are sharing that resource with a whole bunch of different people, and so the provider just needs enough capacity to service the projected aggregate usage of all those people, most of whom aren't using the system very hard (what, you thought that Google really had 8G of disk for each user?). At the end of the day, I suspect that the management cost Ed cites is the dominant issue here, though, which, I suppose, argues that lumping outsourced applications ("software as a service") together with outsourced/virtualized hardware as "cloud computing" isn't really that helpful.

 

July 20, 2009

The NYT reports that 94% of Americans are gullible enough to believe that NASA really landed on the moon.

In an interview, Mr. Sibrel said that his efforts to prove that men never walked on the Moon has cost him dearly. "I have suffered only persecution and financial loss," he said. "I've lost visitation with my son. I've been expelled from churches. All because I believe the Moon landings are fraudulent."

Ted Goertzel, a professor of sociology at Rutgers University who has studied conspiracy theorists, said "there's a similar kind of logic behind all of these groups, I think." For the most part, he explained, "They don't undertake to prove that their view is true" so much as to "find flaws in what the other side is saying." And so, he said, argument is a matter of accumulation instead of persuasion. "They feel if they've got more facts than the other side, that proves they're right."

Mark Fenster, a professor at the University of Florida Levin College of Law who has written extensively on conspiracy theories, said he sees similarities between people who argue that the Moon landings never happened and those who insist that the 9/11 attacks were planned by the government and that President Obama's birth certificate is fake: at the core, he said, is a polarization so profound that people end up with an unshakable belief that those in power "simply can't be trusted."

What I find more interesting than the elaborate explanations that people come up with here is the intensity of their belief. This is especially true with the moon landing, since basically nothing rides on the question of whether it happened or not. I mean, say you had definitive proof that the moon landing was faked, what then? You'd basically succeed in embarrassing a lot of people who are mostly either very old (Armstrong is 78) or very dead. On the other hand, if Obama was really not a US Citizen, you might be able to change who was president, and if 9/11 was really a government conspiracy that would presumably have a fairly significant political impact.

So, the bottom line here is that you believe that you have an obvious line on the truth and more or less everyone else is delusional. Equally obviously, everyone else thinks you're crazy and they don't want to hear about it. But of course the same personality type that lets you believe everyone else is crazy appears to preclude you just feeling quietly superior.

 

July 19, 2009

Unsurprisingly, defenses of Amazon's behavior in the Kindle affair have started to emerge. I ran into this argument from Peter Glaskowsky today (original source: Hovav Shacham):
The listing for the illegal copy of "1984" is still present on Amazon, though it can no longer be purchased. The page for "Animal Farm" from the same publisher still appears in Google's listings, but is no longer available on Amazon--though another pirated copy is still listed but not purchasable. (I'm not sure these are exactly the same copies at issue in this case, but at least that copy of "1984" was yanked in the same way, according to an Amazon customer discussion.)

Note the caveat placed on the 1984 page by the publisher:

"This work is in the public domain in Canada, Australia, and other countries. It may still be copyrighted in some countries. The user should determine whether the work is in the public domain in their own country before using it."

But of course, verifying the copyright status of a book isn't just the user's responsibility. It's the publisher's, too, and Amazon's.

When Amazon discovered these unauthorized sales, it did the right thing: it reversed them.

The police would do the same thing if they discovered a stolen car in your driveway: just take it away. You never owned it.

First, this argument elides the difference between actual theft and copyright infringement. You'd hardly think this would need to be pointed out, but if I steal your car, that pretty much precludes your driving it. If I violate copyright on a book you wrote, at most I've deprived you of whatever revenue you would have made had I bought it instead. That's a pretty significant difference.

Even if we ignore that, this is a pretty tendentious analogy: Amazon is not the police. Say that the original vendor had stolen a box of copies of 1984 and was selling them on Amazon marketplace. If I bought one and then Amazon later determined that they were stolen, it's not like they would be allowed to break into my house and repossess it, even if they gave me my money back. The original owner might be able to call the police and arrange for return of the property, but that's a pretty different story.

And of course, this isn't a case of theft, but rather (alleged) copyright infringement. I don't even know what the case law is on the original rights holder repossessing material when the person in possession of it acted in good faith, but it's not clear to me it's that straightforward a proposition.

 

July 18, 2009

As you may have heard, Amazon recently decided that they shouldn't have sold electronic copies of two George Orwell novels and deleted them from people's Kindles (found via TGDaily):
In George Orwell's "1984," government censors erase all traces of news articles embarrassing to Big Brother by sending them down an incineration chute called the "memory hole." On Friday, it was "1984" and another Orwell book, "Animal Farm," that were dropped down the memory hole - by Amazon.com.

In a move that angered customers and generated waves of online pique, Amazon remotely deleted some digital editions of the books from the Kindle devices of readers who had bought them.

An Amazon spokesman, Drew Herdener, said in an e-mail message that the books were added to the Kindle store by a company that did not have rights to them, using a self-service function. "When we were notified of this by the rights holder, we removed the illegal copies from our systems and from customers' devices, and refunded customers," he said.

Amazon effectively acknowledged that the deletions were a bad idea. "We are changing our systems so that in the future we will not remove books from customers' devices in these circumstances," Mr. Herdener said.

Customers seem pretty surprised that Amazon has this capability, and I admit that I'm a little surprised that they have it as a built-in feature, but a Kindle isn't like a PC, or even an iPhone: it's basically a device that Amazon controls that you just happen to have in your hands. Here's how software updates from Amazon happen:

All Kindles are designed to automatically check for and download updates when one is available. If an update is available, your Kindle will download and install the update the next time the wireless connection is activated and Kindle goes into sleep mode.

During the update, you'll see screens that show the update progress. The update should take less than 10 minutes and is complete when Kindle displays the Home screen. Do not power off or reset your Kindle until the update is complete.

So, even if the current software load doesn't include remote control features, tomorrow's load could, and you don't really have the option of refusing the update.

Of course, this is just a generalization of what digital rights management software has always done: outsourced control of some of the functions of your computer to whoever (allegedly) has copyright over the contents you're displaying. With a typical DRM scheme this just extends to stopping you from making copies, maybe exporting to untrusted devices, etc., but you still generally have control of your own computer, and the terms don't suddenly change in unexpected ways after you've bought the thing. In principle, of course, Microsoft or Apple or whatever could force new updates on you, but in practice they always seem to ask you whether you want to install an update. But in the case of a Kindle, Amazon controls it more or less completely. As you've just seen, we don't have any real idea of what Amazon can do at the moment and as I said they can change the terms at any time.

Addendum: This is twice in a week that Amazon has had to walk back a customer-unfriendly move (the first was cracks in the case caused by Amazon's protective cover, where Amazon was initially going to charge $200 to fix the screen). Looking at the general pattern from Amazon, and from companies in general (think Apple's $200 price cut on the first-generation iPhone), it seems like the vendor starts by ignoring fairly obvious customer dissatisfaction and then has to fold due to bad PR. Any readers have a sense of the cost/benefit analysis here? Do companies consciously decide to blow the customers off and figure they'll just weather the bad press, or is it one of those things where they just have lousy customer service policies and it doesn't get escalated to a high enough level until after the PR situation has gotten pretty bad?

 

July 17, 2009

First reports on today's bombings in Indonesia are that they were suicide bombings. But here's the confusing part:
At around 7:47 am local time (0:47 UTC) on 17 July 2009,[4] the Marriott and Ritz-Carlton Hotels in Jakarta, Indonesia, were hit by separate bombings five minutes apart.[1][5] Nine fatalities, including four foreigners were reported. Among the foreigners were one person from Australia and one from New Zealand.[1][6] More than 50 others were injured in the blasts.[2][6][7] Both blasts appear to have been the work of suicide bombers, who may have smuggled the bombs into the hotels by checking in as paying guests several days earlier.[8]

Maybe I'm missing something, but if you've managed to smuggle yourself into the hotel and assemble your bomb, why bother to make it a suicide bombing? Just assemble it, put it on a timer, and then head out for a drink. Wouldn't that be a lot more convenient? When I was in Bali a few years ago, it didn't seem like the hotel bothered to search your room other than whatever searching they do incidental to doing housekeeping, and it's not like you need to get very far, so you don't need a lot of lead time.

There's obviously some strategic benefit to being willing to do a suicide bombing, since it requires somewhat less sophistication (no remote detonators or timers or whatever) and it's harder to stop someone who is willing to die. But that doesn't mean that you have to actually die in the attempt if it's not absolutely necessary. Perhaps there's some signaling benefit in the occasional suicide bombing even if it's not strictly necessary just to preserve a level of strategic uncertainty, but that seems like a pretty high price to pay.

 

July 16, 2009

Jennifer Granick writes about EFF's concerns over the use of GPS measurements for insurance pricing. (EFF's comments are here.) The background here is that your risk of an accident is correlated with the amount you drive, which is why your insurance company asks how many miles you drive a year and occasionally asks what your odometer reading is. The proposed new regulations include a "price by mile" option, in which insurance rates would be much more tightly coupled to your driving than the current low/high mileage setup. They also include a provision to allow "verified actual mileage" via "technological devices provided by the insurer or otherwise made available to the insured that accurately collect vehicle mileage information." EFF's concern is that these devices may collect a lot more than mileage (e.g., where you drive, how you drive) and that that could be used by insurance companies to make policy decisions.

As EFF observes, cars are already fitted with a device that accurately measures how far you've driven: the odometer. It's worth asking what advantages a new device offers. There are a number of possibilities:

  • Remote read/Timeliness— The insurance company only gets your odometer reading very infrequently (yearly or so). One could imagine adding a new device which would regularly report back to the insurance company (e.g., via the cell network) so they would know how much you had driven each month.
  • Unforgeability— There's nothing really stopping you from lying about your odometer reading. The proposal includes a clause about having a verifiably dated photograph of your odometer, but that would be easy to photoshop. In principle, one could imagine an external device using cryptography to verify its results. Of course, that device would then need to be attached to your car in a way that prevented it from being removed and left at home while you drove to Vegas.
  • Accuracy— Odometers aren't really that accurate since they just count wheel revolutions. Tire size, inflation, etc. can produce errors. Also, they're often not that well calibrated to start with. That said, however, GPS mileage readings aren't really that accurate either, especially if there's a lot of interference in "urban canyon" type environments. My GPS routinely misreads by 5-10% on backpacking trips, which is about what I remember hearing for odometers.
  • Richer data collection—A GPS device offers the possibility of collecting a lot more data, including where and how you drive. Obviously this might be useful for actuarial purposes.

From a technical perspective, then, the major advantage of a new device is precisely the one that poses the biggest privacy threat. In particular, one could imagine adding remote read and unforgeability to an odometer-based device, without any GPS at all. If the insurance companies insist they need to add their own device, it's certainly reasonable to ask exactly what data they are collecting.
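
To give a sense of how little machinery the odometer-plus-crypto design would take, here's a toy sketch (my own construction, not anything from the proposal): the device shares a per-device key with the insurer and authenticates each monthly reading.

    # Toy signed-odometer report: remote read + unforgeability, no GPS.
    # My construction for illustration; not any insurer's actual design.
    import hmac, hashlib, json

    DEVICE_KEY = b"per-device secret installed at manufacture"  # insurer keeps a copy

    def device_report(device_id, month, miles):
        body = json.dumps({"id": device_id, "month": month, "miles": miles},
                          sort_keys=True).encode()
        tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
        return body, tag

    def insurer_verify(body, tag):
        expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    body, tag = device_report("dev-42", "2009-07", 812)
    assert insurer_verify(body, tag)                              # genuine reading
    assert not insurer_verify(body.replace(b"812", b"112"), tag)  # tampering caught

Note what this does and doesn't buy you: the insurer gets timely, tamper-evident mileage, but (as above) nothing here stops you from unplugging the device and leaving it in the garage.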

 

July 15, 2009

This Slate article reports on the publishing industry's attempt to jack up e-book prices on Amazon. Executive summary: Amazon charges a more-or-less fixed $9.99 for Kindle books regardless of the selling price (books which go for less than $9.99 get discounted more). The publishers want more pricing flexibility. Jack Shafer argues that the publishers are likely to price themselves out of the legitimate market and create a black market for bootleg e-books:

Right now, the electronic-book market finds itself roughly in the same place the market for MP3s was in 1999, the year after the release of the first portable MP3 player. First adopters of e-books, who are filling their devices with content and proselytizing to their friends, have it better than the early MP3 users. The iTunes store, which was established in early 2003, was among the first online sites where music fans could easily buy music files, a la carte, from a huge selection. The other commercial sites, wrote the New York Times, were "complex, expensive and limiting" and "failing because they were created to serve the interests of the record companies, not their customers." Basically, before iTunes arrived, if you wanted portable tracks, you had to rip your own, borrow collections from friends, or grab "free" tunes from the "pirates" at Napster or other file-sharing sites.

It doesn't make me a defender of illegal file-sharing to say that the music industry goofed by waiting until 2003 to agree to sell individual tracks for the reasonable price of 99 cents. Its absence from the electronic-music market in those early years allowed illegal file-sharing to take root and spread, and it helped shape the perception, especially among younger consumers, that music "should" be free.

...

No title is safe from file-sharing. As the Instructables Web site detailed a couple of months ago, a do-it-yourself, high-speed book scanner can be made for about $300. The file for a hefty book like Gödel, Escher, Bach: An Eternal Golden Braid is about the size of a five-minute MP3 and can be downloaded in a couple of minutes. Does the book industry want to join the digital flow, the way the TV industry has with Hulu and TV.com? Or by its obstruction does it intend to encourage the establishment of a Bookster?

I'm not a huge fan of Amazon's pricing strategy; you get a pretty big discount (~60%) on hardcover books, but the discount on paperbacks is pretty marginal (~20%). E-books certainly are nice in some respects, but given that they're not infinitely portable between devices and that Amazon has restrictions on the number of devices you can download to, I think there's at least an argument that e-books are less valuable than paper books. Of course, I don't download MP3s either, so maybe I'm not the target audience here, but it seems to me that people aren't going to be excited to pay $19.99 for an electronic version of the next Dan Brown book.

However, I'm not sure I find Shafer's argument about the imminent Napsterization of the publishing industry that convincing. First, it's a lot less convenient to rip books than it is CDs. Pretty much every computer comes with a CD reader, which more or less makes it a convenient ripping platform. If you want to scan books, you're going to need to lay your hands on a book scanner. Even if we stipulate that you're going to build a $300 scanner for yourself (and it doesn't look incredibly simple), that's pretty different from having it bundled with your PC. In addition, this kind of scan is a pretty inferior alternative to a professionally produced e-book: you get scanning/OCR artifacts, page layout issues, etc., which need to be corrected semi-manually. By contrast, a ripped CD is effectively a perfect copy (it's the popular compression schemes which are lossy), and if you want you can make a more or less identical copy of the CD any time you want.

More importantly, a redistributable copy of your music is a natural side effect of something you probably want anyway: your music on your computer or iPod. Most people listen to a lot of their collection semi-regularly and having your music all in one compact form is so much more convenient that I suspect that people who have CDs would be happy to copy everything onto their computer/iPod. The consequence is that as soon as you turn on sharing, it's natural to share all your stuff. But people read books differently than they listen to music; even if you're a very active reader you probably are only working on a few books at a time, so the value of having all your books ripped is a lot lower, which means that there's less raw material for broad scale sharing of people's naturally acquired collections, as opposed to people who deliberately set out to develop a large corpus for the purpose of sharing it.

That said, I do actually want to have an electronic copy of a single book which isn't available from Amazon. If anyone out there has a book scanner they'd let me use, it wouldn't be unappreciated.

 

July 12, 2009

It's reasonably common for MMA fights to be stopped due to excessive bleeding by one of the fighters. In fact, in some cases fighters will deliberately try to open up a cut on their opponent in order to get a stoppage. Apparently, some fighters are more susceptible to cuts than others. The NYT has an interesting article about plastic surgery to make them more resistant to bleeding:
So last summer, Davis, 35, contacted a plastic surgeon in Las Vegas. He wanted to make his skin less prone to cutting.

The surgeon, Dr. Frank Stile, burred down the bones around Davis's eye sockets. He also removed scar tissue around his eyes and replaced it with collagen made from the skin of cadavers.

There appear to be two claimed underlying problems: (1) sharp bone ridges in the skull, which result in cuts when strikes to the face force the skin against the bone, and (2) poor treatment of cuts in the ring, resulting in "unstable scar tissue" that is more prone to future cuts.

As usual with medical procedures applied to athletes, we are immediately faced with the question of whether this is simple treatment or an enhancement. To the extent to which you're fixing incompletely healed injuries, that certainly looks like medical treatment. The bone shaving, on the other hand, starts to look more like enhancement. On the other hand, I guess you could think of sharp bones the same way you would think of, say, asthma, in which case treatment starts to look appropriate. On the third hand, I think we can agree that implanting a plastic plate over your forehead, while an effective anti-cut measure, would probably be outside the rules. All this just reinforces that these distinctions are basically arbitrary; if we ban this kind of surgery, it's an advantage to people with good bone structure. Contrariwise, if we allow this kind of surgery, people who formerly had the advantage of good bone structure lose that advantage.

Of course, all this assumes that the surgery actually works. But if it doesn't, likely something that works will eventually come along.

 

July 11, 2009

Miguel Helft has a somewhat confusing/confused article in today's NYT about Google's ChromeOS and moving applications into the cloud. In order to make sense of this, it's important to understand what we're talking about and what its properties are.

Your typical computer (say, a Windows machine) is loaded with an OS, whose job it is to let you run applications, talk to the various devices on the system, mediate access to shared resources, etc. Most of the work is done by a variety of applications that run on top of the operating system. Almost everything you ordinarily use (Word, IE, Firefox, Safari, Exchange, ...) is an application that runs on top of the OS, relying on the OS to talk to all those resources, including the network, the disk, and to some extent the system memory. What we're talking about with a Web-based system is something where the operating system has been stripped down and only really runs one program: the Web browser. In the case of ChromeOS, the operating system is Linux and the browser is Chrome.

With that in mind, let's take a look at this article.

PCs can be clunky and difficult to maintain. They're slow to start up and prone to crashing, wiping out precious files and photographs. They are perennially vulnerable to virus attacks and require frequent upgrades. And if you lose your laptop, or worse, it's stolen, you lose not only your machine but everything stored in its circuitry that's not backed up - those files, contacts, e-mail messages, pictures and videos.

But what if a computer were nothing more than an Internet browser - a digital window pane onto the Web? You could forget about all the software that now powers your computer and everything it does. All of those functions would be done through the Web. And none of the information that's now inside your computer would be there anymore. Instead, it would all be on the cloud of connected computers that is the Internet.

There are a number of points here, so let me take them in turn: maintainability, availability, and security.

It's certainly true that PCs are a pain to maintain. This is especially true if you're running a PC with software from a variety of manufacturers, since any of those packages can need upgrading. However, it's not like you won't need to upgrade your ChromeOS machine as well: both Linux and Chrome will require periodic upgrades. Now, if Google does a good job, then those upgrades will be automatic and relatively painless—both Microsoft and Apple already do this, though I guess it's a matter of opinion how painless they are and of course it remains to be seen if ChromeOS's mechanisms are any better. The good news with a Web-based system is that the number of pieces of software you're running is strictly limited and that new functionality can be implemented by software that runs inside the browser, so you don't need to download or manage it.

As for the reliability/availability of your data, having your PC crash is of course a not-infrequent occurrence, and data certainly does get lost. If your data is stored on some machine on the Internet, then failures of your PC don't cause data loss. But the flip side of this feature is that if you don't have access to the Internet, then you may not be able to get at your data at all. It's possible to design networked systems that depend on remote storage but cache data on your local PC so you can work on it when you're disconnected—I routinely use systems like this for collaboration—but it's hard to make that sort of thing work purely in the browser context, since the whole point is that the software resides on the Web server, not on your local machine.

I'm less convinced by the security story. The basic way that your computer gets infected with malware is that a program with a vulnerability processes some malicious data supplied by an attacker. On a typical PC, there are only a few programs which an attacker can send data to: primarily the Web browser, mail client (unless you use Webmail), IM client (unless you use Web-based IM), word processor (unless you use Google docs), maybe a PDF previewer, spreadsheet, etc. Note how prominently the browser appears here; a putative web-based operating system will presumably be running a standard browser, so vulnerabilities in the browser still potentially lead to viruses. It's possible to run the browser in an isolated environment which resets itself after each invocation (think VMware here), but you could at least in principle do the same thing on a commodity PC. Fundamentally, the security of a system like this depends on the security of the browser, which is to a great extent the situation with commodity PCs as well.

Speaking of security, I should mention that the following seems pretty implausible:

Any device, anywhere - from a desktop PC to a mobile phone - could give users instant access to all their files and programs so long as it had a Web browser. At the same time, new kinds of devices would be possible, from portable computers that are even lighter than today's thinnest PCs, to, say, a Web-connected screen in a hotel room that would give guests full access to their digital lives.

Now, obviously it's technically possible to build a Web-based system where you can remotely access all your data from a machine in your hotel room, but that's not really something you would want; remember that you have no idea what software is running on the PC in your hotel room, so you have no way of knowing that it's not just pretending to run ChromeOS (or whatever), but actually stealing your password and all your data. [Technical note: it could also be virtualized, running a keylogger, or an inline keyboard logger, etc.] I can see that you might want to have a very lightweight machine that you carry around and that does most of its thinking in the cloud—to some extent that's what an iPhone already is—but it really needs to be a device you control.

Moving on...

In the past few years, phones have started to act more like computers, and devices like the iPhone have whetted consumers' appetite for a combination of power and simplicity. Now that power and simplicity could migrate from the phone to the PC.

"The experience that we have on our iPhones and smart phones prefigures what the PC will become," said Nicholas Carr, the author of "The Big Switch," a book about cloud computing.

This is a particularly odd argument. Originally the iPhone was precisely this sort of Web-based system: you couldn't load your own software and the only extensibility point was that you could write new web applications. It quickly became clear that due to intentional restrictions in the browser environment (largely intended as security features) this was a really inadequate way of adding new functionality, which was one of the major original motivations for people to jailbreak the iPhone. Then, of course, the app store became available and now all sorts of new functionality is added by loading new programs onto the iPhone operating system, just like you would if you were running a regular PC (except, of course, for having to get all the software via Apple). If anything the iPhone seems like an argument against this concept, not for it.

 

July 10, 2009

If you have any interest in the Bush Administration's warrantless wiretapping program, you should read the report prepared by the Office of the Inspector General of the DOD, DOJ, CIA, NSA, and ODNI. This is the unclassified summary of a somewhat longer classified report, but nevertheless there's some interesting information here. The high points include:
  • The President's Surveillance Program (PSP) comprised the Terrorist Surveillance Program (TSP) and still classified Other Intelligence Activities (OIA).
  • The TSP program appears to have included surveillance of "communications into and out of the United States where there was a reasonable basis to conclude that one party to the communication was a member of al-Qa'ida or related terrorist organizations. ... The Attorney General subsequently publicly acknowledged the fact that other intelligence activities were also authorized under the same Presidential Authorization, but the details of those activities remain classified."
  • The program was periodically reauthorized and prior to each reauthorization, the NCTC would prepare a threat assessment justifying the need to reauthorize it:
    NCTC personnel involved in preparing the threat assessments told the ODNI OIG that the danger of a terrorist attack described in the threat assessments was sobering and "scary," resulting in the threat assessments becoming known by ODNI and IC personnel involved in the PSP as the "scary memos."
  • The Administration's legal justification for these activities relied heavily (it seems almost exclusively) on an analysis by John Yoo arguing that FISA couldn't constitutionally restrict the president's Article II wartime intelligence gathering activities and that these activities didn't violate the 4th amendment.
  • After Yoo left DOJ, new DOJ officials Jack Goldsmith, Patrick Philbin, and James Comey became concerned about the adequacy of Yoo's analysis. The timeline here is complicated but ultimately a standoff ensued between DOJ and the White House, with the White House on the side of continuing the PSP. This was resolved, as far as I can tell, by the White House effectively telling the DOJ that the President had determined the position of the executive branch. Here's Alberto Gonzales:
    Your memorandum appears to have been based on a misunderstanding of the President's expectations regarding the conduct of the Department of Justice. While the President was, and remains, interested in any thoughts the Department of Justice may have on alternative ways to achieve effectively the goals of the activities authorized by the Presidential Authorization of March 11, 2004, the President has addressed definitively for the Executive Branch in the Presidential Authorization the interpretation of the law.

  • Despite the above, the administration ultimately modified the program, presumably along lines more acceptable to DOJ.
  • It's extremely hard to assess the extent to which the PSP was at all useful. The OIG reports people from various agencies calling it useful, but mostly as one tool among many, and there doesn't seem to have been any real attempt to quantify the importance of the program.

The second and sixth points will be especially familiar sounding to people who remember the extensive debate about controls on cryptography: extensive claims about how dire the consequences of not being able to listen to everyone's communications would be, coupled with extremely limited evidence that that capability was actually that important. I'm not qualified to assess the legal questions about whether this program complied with FISA and/or the Constitution. However, obviously this program does have some impact on the privacy of US Citizens (and "reasonable basis" is a pretty low standard), so it would be nice if there were somewhat more evidence that that was a tradeoff worth making.

 

July 9, 2009

Andy Zmolek of Avaya reports on VoIP security research company VoIPshield's new policy requiring vendors to pay for full details of bugs in their products. He quotes from a letter VoIPshield sent him:
"I wanted to inform you that VoIPshield is making significant changes to its Vulnerabilities Disclosure Policy to VoIP products vendors. Effective immediately, we will no longer make voluntary disclosures of vulnerabilities to Avaya or any other vendor. Instead, the results of the vulnerability research performed by VoIPshield Labs, including technical descriptions, exploit code and other elements necessary to recreate and test the vulnerabilities in your lab, is available to be licensed from VoIPshield for use by Avaya on an annual subscription basis.

"It is VoIPshield's intention to continue to disclose all vulnerabilities to the public at a summary level, in a manner similar to what we've done in the past. We will also make more detailed vulnerability information available to enterprise security professionals, and even more detailed information available to security products companies, both for an annual subscription fee."

In comments, Rick Dalmazzi from VoIPshield responded at length. Quoting some of it:

VoIPshield has what I believe to be the most comprehensive database of VoIP application vulnerabilities in existence. It is the result of almost 5 years of dedicated research in this area. To date that vulnerability content has only been available to the industry through our products, VoIPaudit Vulnerability Assessment System and VoIPguard Intrusion Prevention System.

Later this month we plan to make this content available to the entire industry through an on-line subscription service, the working name of which is VoIPshield "V-Portal" Vulnerability Information Database. There will be four levels of access (casual observer; security professional; security products vendor; and VoIP products vendor), each with successively more detailed information about the vulnerabilities. The first level of access (summary vulnerability information, similar to what's on our website presently) will be free. The other levels will be available for an annual subscription fee. Access to each level of content will be to qualified users only, and requests for subscription will be rigorously screened.

So no, Mr. Zmolek, Avaya doesn't "have to" pay us for anything. We do not "require" payment from you. It's Avaya's choice if you want to acquire the results of years of work by VoIPshield. It's a business decision that your company will have to make. VoIPshield has made a business decision to not give away that work for free.

It turns out that the security industry "best practice" of researchers giving away their work to vendors seems to work "best" for the vendors and not so well for the research companies, especially the small ones who are trying to pioneer into new areas.

As a researcher myself—though in a different area—I can certainly understand Dalmazzi's desire to monetize the results of his company's research. One of my friends used to quote Danny DeVito from Heist on this point: "Everybody needs money. That's why they call it money." That said, I think his defense of this policy elides some important points.

First, security issues are different from ordinary research results. Suppose, for instance, that Researcher had discovered a way to significantly improve the performance of Vendor's product. They could tell Vendor and offer to sell it to them. At this point, Vendor's decision matrix would look like this:

                Not Buy     Buy
                0           V - C

Where V is the value of the performance improvement to them and C is the price they pay to Researcher for the information. Now, if Researcher is willing to charge a low enough price, they have a deal and it's a win-win. Otherwise, Vendor's payoff is zero. In no case is Vendor really worse off.

The situation with security issues is different, however. As I read this message, Researcher will continue to look for issues in Vendor's products regardless of whether Vendor pays them. They'll be disclosing these vulnerabilities in progressively more detail to people who pay them progressively more money. Regardless of what vetting procedure Researcher uses (and "qualified users" really doesn't tell us that much, especially as "security professional" seems like a pretty loose term), the probability that potential attackers will end up in possession of detailed vulnerability information seems pretty high. First, information like this tends to leak out. Second, even a loose description of where a vulnerability is in a piece of software really helps when you go to find it for yourself, so even summary information increases the chance that someone will exploit the vulnerability. We need to expand our payoff matrix as follows:

                Not Buy     Buy
Not Disclose    0           V - C
Disclose        -D          ?

The first line of the table, corresponding to a scenario in which Researcher doesn't disclose the vulnerability to anyone besides Vendor, looks the same as the previous payoff matrix: Vendor can decide whether or not to buy the information depending on whether it's worth it to them or not to fix the issue [and it's quite plausible that it's not worth it to them, as I'll discuss in a minute.] However, the bottom line on the table looks quite different: if Researcher discloses the issue, then this increases the chance that someone else will develop an exploit and attack Vendor's customers, thus costing Vendor D. This is true regardless of whether or not Vendor chooses to pay Researcher for more information on the issue. If Vendor chooses to pay Researcher, they get an opportunity to mitigate this damage to some extent by rolling out a fix, but their customers are still likely suffering some increased risk due to the disclosure. I've marked the lower right (Buy/Disclose) cell with a ? because the costs here are a bit hard to calculate. It's natural to think it's V - C - D but it's not clear that that's true, since presumably knowing the details of the vulnerability is of more value if you know it's going to be released—though by less than D, since you'd be better off if you knew the details but nobody else did. In any case, from Vendor's perspective the top row of the matrix dominates the bottom row.
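
To make the dominance argument concrete, here's a tiny sketch with illustrative numbers (V, C, and D are my inventions; nothing above assigns actual values, and I've taken the optimistic V - C - D reading of the "?" cell):

    # Vendor's payoffs under illustrative numbers; V, C, D are assumptions.
    V, C, D = 100, 30, 60   # value of the fix, price of the info, disclosure cost

    payoff = {
        ("no_disclose", "no_buy"): 0,
        ("no_disclose", "buy"):    V - C,
        ("disclose",    "no_buy"): -D,
        ("disclose",    "buy"):    V - C - D,  # optimistic reading of "?"
    }

    for researcher in ("no_disclose", "disclose"):
        best = max(("no_buy", "buy"), key=lambda v: payoff[(researcher, v)])
        print(researcher, "-> vendor's best move:", best,
              "payoff:", payoff[(researcher, best)])

    # Whatever Vendor does, its payoff is exactly D lower when Researcher
    # discloses: the top row dominates the bottom row, as the matrix shows.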

The point of all this is that the situation with vulnerabilities is more complicated: Researcher is unilaterally imposing a cost on Vendor by choosing to disclose vulnerabilities in their system and they're leaving it up to Vendor whether they would like to minimize that cost by paying Researcher some money for details on the vulnerability. So it's rather less of a great opportunity to be allowed to pay for vulnerability details than it is to be offered a cool new optimization.

The second point I wanted to make is that Dalmazzi's suggestion that VoIPshield is just doing Avaya's QA for them and that they should have found this stuff through their own QA processes doesn't really seem right:

Final note to Mr. Zmolek. From my discussions with enterprise VoIP users, including your customers, what they want is bug-free products from their vendors. So now VoIP vendors have a choice: they can invest in their own QA group, or they can outsource that function to us. Because in the end, a security vulnerability is just an application bug that should have been caught prior to product release. If my small company can do it, surely a large, important company like Avaya can do it.

All software has bugs and there's no evidence that it's practical to purge your software of security vulnerabilities by any plausible QA program, whether that program consists of testing, code audits, or whatever. This isn't to express an opinion on the quality of Avaya's code, which I haven't seen; I'm just talking about what seems possible given the state of the art. With that in mind, we should expect that with enough effort researchers will be able to find vulnerabilities in any vendor's code base. Sure, the vendor could find some vulnerabilities too, but the question is whether they can find enough bugs that researchers can't find any. There's no evidence that that's the case.

Finally, I should note that from the perspective of general social welfare, disclosing vulnerabilities to a bunch of people other than the vendor, but not to the vendor itself, seems fairly suboptimal. The consequence is that there's a substantial risk of attack which the vendor can't mitigate. Of course, this isn't the researcher's preferred option—they would rather collect money from the vendor as well—but if they have to do it occasionally in order to maintain a credible negotiating position, that has some fairly high negative externalities. Obviously, this argument doesn't apply to researchers who always give the vendor full information. There's an active debate about the socially optimal terms of disclosure, but I think it's reasonably clear that a situation where vulnerabilities are frequently disclosed to a large group of people but not to the vendors isn't really optimal.

Acknowledgement: Thanks to Hovav Shacham for his comments on this post.

 

July 6, 2009

I know it's a mistake to try to make sense of Sarah Palin's resignation speech, but even amidst the general incoherency, the following struck me:
And so as I thought about this announcement that I wouldn't run for re-election and what it means for Alaska, I thought about how much fun some governors have as lame ducks... travel around the state, to the Lower 48 (maybe), overseas on international trade - as so many politicians do. And then I thought - that's what's wrong - many just accept that lame duck status, hit the road, draw the paycheck, and "milk it". I'm not putting Alaska through that - I promised efficiencies and effectiveness! That's not how I am wired. I am not wired to operate under the same old "politics as usual." I promised that four years ago - and I meant it.

Huh?

Maybe I'm missing something, but as far as I can tell the reason that a politician who is a short-timer has trouble being effective isn't that they don't have to run again—that's mostly empowering, since you don't need to worry about consequences—but that they aren't going to be around long enough to reward or punish you. But Palin's term has two more years to run, not two months. That's plenty of time for people to have to worry about doing what she wants. And of course Palin is (or at least before this was) likely to remain politically powerful even after leaving office, so this seems like less of a concern for her than for the average politician.

 

July 5, 2009

The problem with climbing grades is that unlike running, cycling, lifting, etc., there's no objective measure of difficulty. Routes are just graded by consensus of other climbers, in this case the gym's routesetters. As a result, some routes are easier than others at the same grade—and of course, since different climbers have different styles, which routes are easiest depends on the climber as well—so as a practical matter some routes really are harder or easier than their rated grade.1 Of course, given that there's no objective standard, you could argue that this isn't a meaningful statement, but that's not really true: a difficulty grade is really a statement about how many people can do a route, so if there's a bunch of routes rated at 5.10 that I can't climb, but I jump on a new route rated 5.10 and race up it with no effort, that's a sign it's not really a 5.10. This is actually a source of real angst to people just starting to break into a grade—at least it is for me—since if I can do a route, I immediately suspect that the rating is soft.

It would be nice to have a more objective measurement of difficulty. While we can't get one just by measuring the route (the way we can with running, for instance), that doesn't mean the problem is insoluble; we just need to take a more sophisticated approach. Luckily, we can steal a solution from another problem domain: psychological testing. The situations are actually fairly similar: in both cases we have a trait (climbing skill, intelligence) which isn't directly measurable. Instead, we can give our subjects a bunch of problems which are generally easier the higher the subject's level of ability. In the psychological domain, what we want to do is evaluate people's level of ability; in the climbing domain, we want to evaluate the level of difficulty of the problems. With the right methods, it turns out that these are more or less the same problem.

The technique we want is called Item Response Theory (IRT). IRT assumes that each item (question on the test or route, as the case may be) has a certain difficulty level; if you succeed on an item, that's an indication that your ability is above that level, and if you fail, that's an indication that your ability is below it. Given a set of items of known difficulties, then, we can quickly home in on someone's ability, which is how computerized adaptive tests work. Similarly, if we take a small set of people of known abilities and their performance on each item, we can use that to fit the parameters for those items.

It's typical to assume that the probability of success on each item is a logistic curve. The figure below shows an item with difficulty level 1.
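For reference, the standard two-parameter logistic form (this is textbook IRT, not anything climbing-specific) gives the probability that a subject of ability $\theta$ succeeds on an item with difficulty $b$ and discrimination $a$:

$$P(\mathrm{success} \mid \theta) = \frac{1}{1 + e^{-a(\theta - b)}}$$

An item with $b = 1$ is one whose 50% success point sits at ability 1, which is what the figure shows.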

Of course, this assumes that we already know how difficult the items are, but initially we don't know anything: we just have a set of people and items without any information about how good/difficult any of them are. In order to do the initial calibration, we start by collecting a large, random sample of people and have them try each item. You end up with a big matrix recording whether each person succeeded or failed at each item, but since you don't know how good anyone is other than by the results of this test, things get a little complicated. The basic idea behind at least one procedure, due to Birnbaum (it's not entirely clear to me whether this is how modern software works; the R ltm documentation is a little opaque), is to use an iterative technique, sketched below: you assign an initial set of abilities to each person and use those to estimate the difficulty of each problem. Given those difficulty estimates, you re-fit to determine people's abilities, then use those to reestimate the problem difficulties, and iterate back and forth until the estimates converge, at which point you have estimates of both the difficulty of each item and the ability of each individual. (My description here is based on Baker.)
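Here's a minimal sketch of that back-and-forth in R for the one-parameter (Rasch) case. This is my reconstruction from Baker's description, not ltm's actual internals, and it glosses over real-world complications (in particular, people or items with all successes or all failures make the fits diverge):

    ## Alternating estimation for a Rasch model. resp is a 0/1 matrix:
    ## rows are people, columns are items.
    joint_estimate <- function(resp, iters = 25) {
      # Crude starting values from standardized row/column success rates.
      theta <- as.vector(scale(rowMeans(resp)))   # person abilities
      b     <- -as.vector(scale(colMeans(resp)))  # item difficulties
      for (i in seq_len(iters)) {
        # Holding abilities fixed, re-estimate each item's difficulty:
        # logit P(success) = theta - b_j, so b_j is minus the intercept.
        b <- sapply(seq_len(ncol(resp)), function(j)
          -coef(glm(resp[, j] ~ 1 + offset(theta), family = binomial))[1])
        # Holding difficulties fixed, re-estimate each person's ability.
        theta <- sapply(seq_len(nrow(resp)), function(s)
          coef(glm(resp[s, ] ~ 1 + offset(-b), family = binomial))[1])
        # Re-anchor the (arbitrary) scale each round for identifiability.
        theta <- as.vector(scale(theta))
      }
      list(ability = theta, difficulty = b)
    }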

As an example I generated some toy data with 20 items and 100 subjects with a variety of abilities and fit it using R's ltm package. The figure below shows the results with the response curves for each item. As you can see, having a range of items with different difficulties lets us evaluate people along a wide range of abilities:
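In case anyone wants to reproduce this, something along the following lines works (the toy numbers here are my own; rasch() fits the one-parameter model and plot() draws the response curves):

    library(ltm)   # install.packages("ltm") if needed

    set.seed(1)
    n_subj <- 100; n_item <- 20
    theta <- rnorm(n_subj)                    # simulated subject abilities
    b     <- seq(-2, 2, length.out = n_item)  # simulated item difficulties
    p     <- plogis(outer(theta, b, "-"))     # Rasch success probabilities
    resp  <- matrix(rbinom(n_subj * n_item, 1, p), n_subj, n_item)

    fit <- rasch(resp)   # fit the Rasch model
    coef(fit)            # estimated difficulty per item
    plot(fit)            # item characteristic curves, one per item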

Once you've completed this rather expensive calibration stage, however, you can easily calculate someone's ability just by plugging in their performance on a small set of items. Actually, you can do better than that: you can perform an adaptive test where you start with an initial set of items and then use the responses on those items to determine which items to use next, but even if you don't do this, you can get results fairly quickly.
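Continuing the sketch above, scoring a new person against the already-calibrated items is a single call (factor.scores is ltm's function for this; the response pattern here is invented):

    ## One row per person; 1 = success, 0 = failure on each of the 20 items.
    new_results <- matrix(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                            1, 1, 0, 1, 0, 0, 0, 0, 0, 0), nrow = 1)
    factor.scores(fit, resp.patterns = new_results)  # ability estimate (z1)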

That's nice if you're administering the SATs, but remember that what we wanted was to solve the opposite problem: rating the items, not the subjects. However, as I said earlier, these are the same problem. Once we have a set of subjects with known abilities, we can use that to roughly calibrate the difficulty of any new set of items/routes. So, the idea is that we create some set of benchmark routes and then we send our raters out to climb those routes. At that point we know their ability level and can use that to rate any new set of climbs.
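In the terms of the sketch above, calibrating a single new route against raters of known abilities theta is one logistic regression (the same trick as in the iteration earlier; results_new is a hypothetical vector of each rater's success/failure on the new route):

    b_new <- -coef(glm(results_new ~ 1 + offset(theta), family = binomial))[1]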

There's still one problem to solve: the difficulty ratings we get out of our calculations are just numbers along some arbitrary range (it's conventional to aim for a range of about -3 to +3 with the average around 0), but we want to have ratings in the Yosemite Decimal System (5.1-5.15a as of now). It's of course easy to rescale the difficulty parameter to match any arbitrary scale of our choice, but that's not really enough, because the current ratings are so imprecise. We'll almost certainly find that there are two problems A and B where A is currently rated harder than B but our calibrated scale has B harder than A. We can of course choose a mapping that minimizes these errors, but because so many routes are misrated it's probably better to start with a smaller set of benchmark routes where there is a lot of consensus on their difficulty, make sure they map correctly, and then readjust the ratings of the rest of the routes accordingly.
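One way to build such a mapping (the method is my suggestion, and the numbers below are invented) is isotonic regression, which finds the best fit subject to the constraint that a harder calibrated difficulty never maps to an easier grade:

    ## Benchmark routes: calibrated IRT difficulties plus consensus grades,
    ## coded numerically (e.g., 5.9 -> 9, 5.10 -> 10, 5.11 -> 11, ...).
    bench_irt <- c(-2.1, -1.4, -0.8, -0.2, 0.3, 0.9, 1.6, 2.2)
    bench_yds <- c(   7,    8,    9,    9,  10,  10,  11,  12)
    fit_map   <- isoreg(bench_irt, bench_yds)   # monotone fit
    # Grade a new route with calibrated difficulty 0.6 by interpolation:
    approx(fit_map$x, fit_map$yf, xout = 0.6)$y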

Note that this doesn't account for the fact that problems can be difficult in different ways; one problem might require a lot of strength and another a lot of balance. To some extent, this is dealt with by having a smooth success curve which doesn't require that every 5.10 climber be able to climb every 5.10 route. However, ultimately, if you have a single scalar ability/difficulty metric, there's only so much you can do in this regard. IRT can handle multiple underlying abilities, but the YDS scale we're trying to emulate can't, so there's not too much we can do along those lines.

Obviously, this is all somewhat speculative—it's a lot of work, and I don't get the impression that routesetters worry too much about the accuracy of their ratings. On the other hand, at least in climbing gyms, if you were able to integrate it into a system that let people keep track of their success on their climbs (I do this already, but most people find it to be too much trouble), you might be able to get the information you need to calibrate new climbers and, through them, get a better sense of the ratings for new climbs.

Acknowledgement: This post benefitted from discussions with Leslie Rescorla, who initially suggested the IRT direction.

1. This seems to be especially bad for very easy and very hard routes. I think the issue with very easy routes is that routesetters are generally good climbers and so find all of them super-easy. I'm not sure about harder problems, but it may be that they're near the limit of the routesetters' abilities and so the grades are heavily dependent on whether the route matches their style.

 

July 2, 2009

You may have heard that California has started paying some of its bills by issuing IOUs, which are sort of weak-looking bonds which pay 3.75% annual interest:
A registered warrant is a "promise to pay," or an IOU, that is issued by the State when there are not enough funds to pay all of its General Fund obligations. Registered warrants bear interest and are redeemable by the State Treasury only when the General Fund has sufficient money. If the Legislature and Governor fail to enact budgetary solutions that provide enough cash for the State to pay all of its bills by July 2, the Controller will begin issuing registered warrants. Assuming there is adequate cash in the Treasury, those warrants may be redeemed on October 2, 2009. Both the issue and the maturity date will be printed on the warrant. If the Pooled Money Investment Board (PMIB) determines there is sufficient cash available for redemption at an earlier date, the warrants may be redeemed earlier than October 2, 2009.

Now, ordinarily, if you had an instrument like this (say, a T-bill), it would be at least semi-negotiable and you'd have a chance at selling it to someone else; and since the state will be paying over face value, you'd have some shot at not taking too big a hit. Unfortunately, as California looks like it's well on its way to insolvency, with a bond rating of A and potentially going down to B [*], I suspect you'd need to accept a real discount, even if there were a secondary market for these instruments, which, as far as I'm aware, there isn't. And of course there's no fixed maturity date: California can pay them off early or late, depending on the state of its finances, which makes them harder to price.
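To put a number on the interest (my arithmetic): a warrant that runs the full 92 days from July 2 to October 2 earns, per $1,000 of face value,

$$\$1000 \times 0.0375 \times \frac{92}{365} \approx \$9.45,$$

which isn't much compensation for three months of exposure to a shaky credit.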

The good news is that if you have an account at BofA or Wells Fargo, they will cash these IOUs for you. Even then, it appears that you may be subject to fees if California fails to pay up. [*]. On the other hand, if you don't have such an account you're pretty much SOL. Of course, California might regard this as a virtue: if people have to hold these IOUs for months at a time, a certain percentage of them will get lost, which means California gets to keep the money a while longer, if not indefinitely.

 

July 1, 2009

I noticed the other day that if I'm driving my car on the freeway and close the sunroof my ears pop. After a bit of thinking, I concluded that what was going on was the Bernoulli effect: the air flowing over the sunroof lowers the pressure of the interior of the car. Then when you close it you get a sudden pressure change back to ambient pressure.

Initial experiments confirm this: my Polar S625X has a built-in barometric altimeter. I repeatedly opened and closed the sunroof while watching the altimeter, and the readings seemed to consistently differ by about 75 feet. Obviously, there's some uncertainty here because the road isn't totally flat; if you wanted to be really sure, you'd go over the same sections of road again and again with the sunroof open and closed and measure the difference. Still, since I'm not exactly publishing this in Nature, it seems good enough for now.
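As a rough plausibility check (my numbers, assuming about 65 mph, i.e., 29 m/s, and standard sea-level air density): pressure near sea level falls by roughly 12 Pa per meter of altitude, so a 75-foot (about 23 m) shift on the altimeter corresponds to

$$\Delta P \approx 12\ \mathrm{Pa/m} \times 23\ \mathrm{m} \approx 275\ \mathrm{Pa},$$

while the full dynamic pressure of the airstream is

$$\tfrac{1}{2}\rho v^2 = \tfrac{1}{2} \times 1.2\ \mathrm{kg/m^3} \times (29\ \mathrm{m/s})^2 \approx 505\ \mathrm{Pa}.$$

A cabin depression of roughly half the dynamic pressure is the right order of magnitude for the Bernoulli story, since the flow over the sunroof opening shouldn't produce the full stagnation-pressure difference.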