January 2009 Archives


January 31, 2009

A while back I wrote about Blizzard's suit against MDY, which produces a WoW bot called Glider. The judge in the case has just ruled that MDY violated the DMCA. (Ars Technica article here; ruling here, link thanks to Joseph Calandrino.) I'm not a lawyer, but as far as I can tell from reading the ruling, the reasoning is that the visual and audio elements that emerge from the act of playing WoW constitute a copyrighted work, the Warden (WoW's anti-bot measure) controls access to that copyrighted work, and Glider allows you to circumvent that access control; hence it violates the DMCA anti-circumvention provisions.

It's interesting to ask how far you could extend this reasoning. Consider this alternate design for a WoW bot: you run WoW in a VM and then have your bot interact with the VM to scrape the screen, simulate key and mouse presses, etc. [This was originally suggested to me by Terence Spies.] The Warden can't detect your bot because it's shielded by the VM (it might detect the VM, but there are legitimate reasons to run WoW in a VM). The VM itself isn't a DMCA violation because it has significant legitimate uses. And the bot doesn't need any measures specifically designed to circumvent the Warden; it just processes the video output and simulates user input. Would the same reasoning still apply in this case?


January 30, 2009

While listening to KQED's latest pledge drive, I noticed something funny about their thank-you gift schedule. This time, they offered the option of taking no gift and instead donating its value to the SF Food Bank. The schedule looks like this:

Donation ($)   Meals
40             2
60             5
144            33
360            180

This seems strangely non-linear, which suggests something interesting: the fraction of your pledge that KQED uses to pay for thank-you gifts, as opposed to funding their operations, varies with the size of the pledge. There are way too few points here to do a proper fit, but I can't help myself. Playing around with curves a bit, a quadratic seems to fit pretty well, with parameters: Meals = .0014 * Donation^2 + 1.2. It's not just the $360 data point that throws it out of whack, either; there's apparent nonlinearity even in the first three points. (Again, don't get on me about overfitting: with only four points there's only so much you can do.)

I'm not sure what this suggests about their business model. Naively, I would have expected the fraction of your donation that goes to gifts to go down as your gift went up. Indeed, you might have thought that they would take a small loss on the smallest pledges just to get people involved and then move to the upsell at some later date. Thinking about it some more, I guess the natural model is that KQED is trying to extract money from you up to the point where the marginal dollar they extract from you costs them a marginal dollar in gifts (or in this case food bank donations), at which point they stop. So, as people's marginal utility of having given something, anything, to KQED declines, KQED needs to keep jacking up gift quality faster than the size of the donation to keep extracting your cash. Other theories are of course welcome.


January 29, 2009

I can't say I'm that enthusiastic about what's starting to look like a trend of national governments requiring ISPs to cut off Internet service to alleged pirates:

To try to curb unauthorized file-sharing, which the music industry blames for its woes, the report recommends requiring Internet service providers to send warning letters to persistent pirates.

Some British Internet providers have already sent such letters under a voluntary agreement. Under the proposal outlined Thursday, they would be required by law to do so. Internet providers would also be required to turn over personal details of repeat offenders to rights holders, like music companies, so that the offenders could be sued.

The music industry, however, is increasingly reluctant to pursue file-sharers through the courts, fearing a backlash from listeners. The Recording Industry Association of America, which represents the major record companies, moved to end a multiyear legal campaign against file-sharers, for example.


In France, legislation that would require service providers to disconnect pirates is working its way through Parliament.

I can certainly understand why this is what the music industry would like. No doubt they'd prefer to be able to just fine you directly without going through the hassle of suing you or anything like that. I doubt I'd like that very much, however. I, on the other hand, would prefer to have music shipped to my house over the Internet for free, which I doubt they would like very much. What's a lot less clear is why it's good for society to put its thumb on the scales in the music industry's favor. It's not like there's no chance of collateral damage here; Internet service is pretty important to a lot of people, and having it cut off is a pretty substantial punishment to impose on the say-so of a party who, it should be obvious, isn't completely disinterested.


January 28, 2009

I'm trying to puzzle out what this NYT article is about. I get the problem: you've got digital records and you want some way to establish their provenance and contents for posterity. What I don't get is what the claimed solution is:
Designing digital systems that can preserve information for many generations is one of the most vexing engineering challenges. The researchers' solution is to create a publicly available digital fingerprint, known as a cryptographic hash mark, that will make it possible for anyone to determine that the documents are authentic and have not been tampered with. The concept of a digital hash was pioneered at I.B.M. by Hans Peter Luhn in the early 1950s. The University of Washington researchers are the first to try to simplify the application for nontechnical users and to try to offer a complete system that would preserve information across generations.


After capturing five gigabytes of video in 49 interviews, the group began to work on a system that would make it possible for viewers to prove for themselves that the videos had not been tampered with or altered even if they did not have access to powerful computing equipment or a high-speed Internet connection.

Despite the fact that there are commercial applications that make it possible to prove the time at which a document was created and verify that it has not been altered, the researchers wanted to develop a system that was freely available and would stand a chance of surviving repeated technology shifts.

At the heart of the system is an algorithm that is used to compute a 128-character number known as a cryptographic hash from the digital information in a particular document. Even the smallest change in the original document will result in a new hash value.

It doesn't help here that the Times doesn't actually link to the relevant site. They have links all right, but they go to the Times's explanations of the relevant terms, not to the site you want. So the above is based purely on the Times article. If someone has a pointer to the project site, I'd be interested.

Anyway, using cryptographic hashes for document integrity like this is a pretty standard technique, and there's plenty of free software for it; it's built into most machines. The difficult problem isn't actually establishing integrity, though; it's establishing authentication and (more importantly in this case) time of creation. To see why, consider the threat model. Someone gives you a recording that they claim represents a historical event that was observed by someone long since dead. Classic public key cryptography isn't a complete solution here: how do you validate the original creator's public key, and even if you had it, how would you know that they didn't tamper with the data themselves some time years afterwards? What's more useful is to know that the recording existed at (or before) the time it was allegedly made. Hashes can help here, but what you need is some independent channel that carries the hash along with some sort of evidence of when it was made. So, for instance, you might print a hash of your recording in the newspaper classified section, and then anybody who could lay hands on the paper could independently verify the recording. [Technical note: there are some cooler techniques using chains of hash functions, but this is the basic principle.]

Note that it doesn't help to have the hash just attached to the document without some other form of cryptographic protection (e.g., a digital signature). This doesn't buy you any protection against attackers, because they can change the document and then change the attached hash to match. The way to think about this is that hashing is a technique for bootstrapping your confidence in a small data value (the hash) into confidence in the entire data object that was hashed. But you still need a secure channel for that small value.

With that in mind, I don't really understand how the live CD thing works either. Just like you need the hash to be carried independently, you also need an independent code base to do your own hash computation.


January 27, 2009

Maybe it's because I don't watch broadcast TV at all, but I just can't wrap my head around all the angst about the DTV drop-dead date. According to Nielsen (via NYT), there are 6.5 million households that can't receive digital TV. Presumably mine is one of them. That said, if I wanted a digital tuner (or converter or whatever), I'd go buy one. For what fraction of people who actually watch broadcast TV and don't already have a converter or digital tuner is $40 a real hardship?

To give you a sense of perspective, if every single one of those households got two coupons for a $40 converter, that would represent $520 million, less than .1% of the proposed stimulus package. I guess nobody wants to be known as the Senator who took away the people's bread and circuses...


January 26, 2009

Rep. Peter King (R-NY) has introduced the Camera Phone Predator Act that would require camera phones to emit an audible indication whenever a picture is taken:
Congress finds that children and adolescents have been exploited by photographs taken in dressing rooms and public places with the use of a camera phone.

(a) Requirement- Beginning 1 year after the date of enactment of this Act, any mobile phone containing a digital camera that is manufactured for sale in the United States shall sound a tone or other sound audible within a reasonable radius of the phone whenever a photograph is taken with the camera in such phone. A mobile phone manufactured after such date shall not be equipped with a means of disabling or silencing such tone or sound.
(b) Enforcement by Consumer Product Safety Commission- The requirement in subsection (a) shall be treated as a consumer product safety standard promulgated by the Consumer Product Safety Commission under section 7 of the Consumer Product Safety Act (15 U.S.C. 2056). A violation of subsection (a) shall be enforced by the Commission under section 19 of such Act (15 U.S.C. 2068).

OK, so the value proposition for this is something like "protects children (think of the children!) from surreptitious photography". Except that it doesn't, because the bill doesn't apply to cameras that aren't phones, which can be made just as small as camera phones. So if you're willing to plonk down $150 or so for a compact camera, you can evade this restriction and get much higher quality pictures. We need to sharpen the value proposition somewhat, to something like "protects children from surreptitious photography by people without digital cameras."

And of course, despite the "no disabling" provision, it's not like the tone is an essential function of the camera, like the sound of a physical shutter release; it's just a speaker. So, unless you're going to totally redesign the phone, the miscreants can just open the phone, disable the speaker, and go to town. It's true this does render your phone useless as a phone, but seeing as used Motorola Razrs (remember, you don't need to connect it to the network) go for $30 or so on eBay, this isn't much of a problem. We need to revise the value proposition yet again, to something like "protects children from surreptitious photography by people without digital cameras or who don't have $30 and a screwdriver."

Actually, it's even worse than that: since newer camera phones do video recording, it's going to be pretty unacceptable to have the phone making an annoying noise the whole time video is being shot. So now we've got something like "protects children from surreptitious photography by people without digital cameras, who don't have $30 and a screwdriver, and whose camera phones don't take video." And let's not even talk about people who are willing to replace the software on their phones.

Other than that, this seems like a great idea.

Acknowledgement: I borrowed this argument technique from Allan Schiffman.

I'm not a music guy, but even I can appreciate this. Apparently Microsoft has developed some new software called Songsmith which will take a vocal track and automatically generate backing music. Clearly, the most useful thing to do here is to take existing songs, strip out the vocals, run them through Songsmith, and post the rather suboptimal results to YouTube. Examples include: Tom Sawyer, White Wedding, Toxic, Eye Of The Tiger, and I Heard It Through The Grapevine. Really, the only one of these that's even halfway tolerable is I Heard It Through The Grapevine, which may have something to do with the fact that Marvin Gaye, unlike many rock musicians, could actually sing.

January 25, 2009

Brad DeLong reports his own coyote sighting. This isn't much of a picture but it's better than anything I have from my recent Rancho sighting.

For some reason I see dramatically more wildlife running through local parks and open space areas (at Rancho alone: tons of deer, turkeys, several bobcats, and a coyote) than I've ever seen on a backpacking trip, even though backpacking trips are far more remote and I've spent plenty of time in bear country. I don't have a good explanation for this. Is it because of the high population density (the animals get used to people and become unafraid, so I see them more)?


January 24, 2009

  • 2:43???? Whoa, this is really a long movie.
  • Say what you want about Oliver Stone's politics—or whatever it is—the man can definitely shoot a movie.
  • Al Pacino is overacting at about 1.2 Shatners.
  • Nobody does asshole like James Woods.
  • Being tackled looks extremely unpleasant.
  • Football seems to involve a lot of shouting.
  • Wait, after losing four games in a row I need to listen to some clown give a post-game sermon?
  • OK, so there's a lot of guys pushing and shoving each other, but what's the score?
  • I'm trying to figure out if there's any character in this movie I don't hate. Dennis Quaid maybe... There's still another 100+ minutes left, though, so I'll probably hate him soon enough.

January 23, 2009

Occasionally programmers want to get a backtrace (the set of function calls that the program is currently in: A calls B, which calls C, which calls D, etc.). This is particularly useful for writing various kinds of instrumentation like memory checkers, profiling tools, etc. It's straightforward with a debugger, but that's not really convenient if there are a lot of events you want backtraces for. It's also possible to do this nonportably: if you know how the compiler lays out the stack, you can grovel through memory and dig out the stack trace yourself, but that's a pain and requires reading a lot of documentation about the memory layout, so people typically don't bother.

Conveniently, if the program was compiled with GCC there's a much easier way. GCC has built-in primitives that let you get the frame address or return address of any function in the call stack (documentation here). Obviously, this is just an address, but you can then use dladdr() to extract the name of the enclosing function. Put together, you've got a backtrace. Note that this only gives you the function name and (with some work) the offset into the function; you'd need to grovel through the symbol table or something to actually figure out what line of source you were on. Still, that's information you didn't have before, and it's enough for a lot of purposes.

On FreeBSD, there's actually a library in ports called libexecinfo, which implements this trick. That's where I learned about it. Glibc also includes this as a native facility. I assume it works like this, but haven't bothered to examine the source.


January 22, 2009

I recently did an email interview (well, he sent me questions, I sent answers) with a reporter from the Buffalo News, and it's up on the Web here. Lemme tell you, it looks a lot cooler in print, since the lede at least was on the front page. Since it was in email, I seem to have managed not to say anything too stupid, at least not that made it into the article.
"The effect of the Internet is to make it much cheaper for scammers to send out solicitations. This means that scams that formerly were unprofitable because the response rate was so low are now profitable," Eric Rescorla, chief scientist at Network Resonance and an adviser for Voltage Security, said in an e-mail interview Thursday.


Users have to be more cautious and treat someone they talk to online the same way they would treat someone in real life. "Many of these scams work because users are insufficiently skeptical of e-mail they receive via the Internet. If users are a little more worried and it causes them to be more careful about trusting e-mail, that's a good thing," Rescorla said.

Obviously, I spouted a bunch more stuff that didn't make it in, probably deservedly so, but here's one point I am a bit fond of:

With regard to emails that purport to be from your bank, I suspect the problem is simple: people don't understand that it's easy to forge mail and that just because a site looks like your banking site, that doesn't mean it is. This is a case where your real-world intuitions fail you: there aren't a lot of fake brick and mortar bank branches floating around, but it's trivial to set up a fake site that looks like your bank's site.

Of course, this isn't unique to the Internet: you can't trust people who call you on the phone either, even if their Caller-ID information looks OK, but the visual cues on the Web really are good for suckering people in. It's just natural to say to yourself "no criminal would have a site that looks this good, it must be my bank", but of course that's totally wrong.

Thanks to Wasim Ahmad and Stephanie Mode for making the contact, and to Hovav Shacham and Terence Spies for looking over my answers before I shipped them. The point that it would be good if people were more skeptical is Hovav's.


January 21, 2009

I've written before about Kentucky's attempt to seize a bunch of domain names from gambling sites. They prevailed at the trial level and were able to take control of the names, but just lost at the appellate level. From the Register article:
The lower-court ruling rested on Franklin County Circuit Judge Thomas Wingate's highly specious finding that internet casino domain names constitute "gambling devices" that are subject to the state's anti-gambling statutes. Tuesday's decision disabused Wingate of that notion in no uncertain terms.

"Suffice it to say that given the exhaustive argument both in brief and oral form as to the nature of an internet domain name, it stretches credulity to conclude that a series of numbers, or internet address, can be said to constitute a machine or any mechanical or other device ... designed and manufactured primarily for use in connection with gambling," they stated. "We are thus convinced that the trial court clearly erred in concluding that the domain names can be construed to be gambling devices subject to forfeiture under" Kentucky law.

(Decision here. BTW, can you believe that in 2008 they're still distributing documents as scans turned into PDFs? Clearly this was sourced on a computer, so what's the problem?)

While I agree that it doesn't make a lot of sense to view domain names as gambling "devices", I'm not sure that this is as broad a ruling as I would have liked. As far as I can tell, this is just a ruling that this particular Kentucky law is inapplicable, and it's not clear what would stop Kentucky from passing a law explicitly giving itself the right to seize domain names used in gambling, which would put us right back where we started. The problem here isn't so much the overreach of this particular Kentucky law, but rather the potential for a situation where every political unit has joint universal jurisdiction over DNS entries just because the owners of the domain names exchange traffic with people in that political unit. It's understandable that the court didn't want to address that when it could find on narrower grounds, but presumably we'll eventually run into a case where the applicability of the local laws is clearer, and we'll have to take the major jurisdictional issue head-on.

Thanks to Hovav Shacham and Danny McPherson for pointing me to this ruling.


January 20, 2009

It's amazing how fast things go from "cool toy" to "something you depend on". I first got a Roku a little over six months ago. The other day it crapped out for about an hour, leaving me sitting around wondering what to watch. Six months ago if I'd wanted to sit around watching something I would have ordered something from my Netflix queue or gone to the library, but it's so much more convenient to just stream from Netflix that I've more or less stopped doing that, so I was stuck. Luckily, Netflix seems to have fixed whatever was wrong on their end (I'm assuming it was that since my Internet was working fine and when I tried to call support I got a busy signal, which is usually a reliable sign of an outage) and now I can get back to vegging.

January 19, 2009

Mrs. G. and I were up in San Francisco last weekend and while on our way to Fog City News we ran into someone we knew. This was sort of surprising, so I got to thinking about how probable it was (or wasn't). Grossly oversimplifying, my reasoning goes something like this:

The population of San Francisco is about 800,000. Let's call it 10^6. I know perhaps 100 people in the city at any given time. There are maybe 20-50 people on any given stretch of city block. Say I walk for an hour at 3 mph and that the average block is 100m long, so I walk about 50 blocks in that time and pass on the order of 10^3 people. If we assume people are randomly distributed (this is probably pessimistic, since I spend most of my time in SF in a few places and I assume my friends tend to be somewhat similar), then I have a .9999 chance of not knowing any given person I pass. If we assume that these are independent events, then I have a .9999^1000 chance of not knowing any of those people [technical note: this is really (999900/1000000) * (999899/999999) * ..., but these numbers are large enough, and we've made enough other approximations, that we can ignore this]. .9999^1000 = .90, so if I walk around the city for an hour, I have about a 1/10 chance of meeting someone I know. That doesn't sound too far out of line.


January 18, 2009

Sorry it took me so long to get back to this topic. In previous posts I started talking about the possibility of replacing DNSSEC with certificates. Obviously, this can be done technically, but is it a good idea? The basic argument here (advanced by Paul Vixie, among others) is that putting keys in DNSSEC would be better than the existing X.509 infrastructure.

From a cryptographic perspective, this is probably not true: DNSSEC generally uses the same cryptographic techniques as X.509, including supporting MD5 (though it's marked as non-recommended). It may actually be weaker: one potential defense against collision attacks with X.509 is to randomize the certificate serial number, but it's not clear to me that there's anything as simple for DNSSEC RRSIG RRs, though presumably you could insert a random-looking record if you really wanted to. Cryptographic security isn't the main argument, though. Rather, the idea is that DNSSEC would be more secure for non-cryptographic reasons.

I can see two basic arguments for why DNSSEC would be more secure, one quasi-technical and one administrative. The quasi-technical argument is that there is a smaller attack surface. Because a Web client (this is more or less generally true, but the Web is the most important case) will accept a certificate from any CA on the trust anchor list, the attacker just needs to find the CA with the weakest procedures and get a certificate from that CA. This works even if the target site already has a certificate from some other CA, because the client has no way of knowing that. By contrast, since at least in theory only one registrar is allowed to authorize changes to your domain, the attacker needs to target that registrar. That said, domain name thefts have occurred in the past, so it's not clear how good a job the registrars actually do. Moreover, it wouldn't be that hard to import this sort of semantics into X.509; the CAs would need to coordinate with the registrars to verify the user's identity, but that's not impossible. Another possibility would be to have a first-come-first-served type of system where CA X wouldn't issue certificates for a domain if CA Y had already done so, absent some sort of enhanced verification. Obviously, this is imperfect, but it would at least make hijacking of popular domains difficult.

The second argument is purely administrative, namely dissatisfaction with CA operational practices. Here's Vixie: "frankly i'm a little worried about nondeployability of X.509 now that i see what the CA's are doing operationally when they start to feel margin pressure and need to keep volume up + costs down." That said, it's not clear that registrar procedures will be any better (again, there have been DNS name thefts as well), and it's quite possible that they would be worse. Note that in some cases, such as GoDaddy, they're actually the same people. Moreover, there needs to be some coordination between the registrars and the registries, and that's a new opportunity for a point of failure. Anyway, this seems to me to be an open question.

The most difficult problem with this plan seems to be deployment. So far, DNSSEC deployment has been extremely minimal. Until it's more or less ubiquitous, users and clients need to continue trusting X.509 certificates, so adding DNSSEC-based certificates to the mix marginally increases the attack surface, making the situation worse (though only slightly), not better. And as usual, this is a collective action problem: as long as clients trust X.509 certs, there's not a lot of value for server operators in transitioning to DNSSEC-based certificates, which, of course, means that there's not a lot of incentive for clients (or more likely implementors) to start trusting them either. It's not really clear how to get past this hurdle in the near future.


January 17, 2009

As you may have heard, President George W. Bush is disappointed that the US didn't find WMDs in Iraq:
There have been disappointments. Abu Ghraib obviously was a huge disappointment during the presidency. Not having weapons of mass destruction was a significant disappointment. I don't know if you want to call those mistakes or not, but they were -- things didn't go according to plan, let's put it that way.

Jon Stewart complains here that "disappointment" is the wrong word: "TiVo not recording the Project Runway finale is a disappointment." I tend to agree with Stewart that "disappointment" is the wrong word, but the problem is the direction of the vector, not the magnitude (would "very disappointed" be better?). So, you invade and you don't find any WMDs. How should you feel? I can imagine a bunch of reactions.

Narrow Disappointment
OK, this is bad. After all, you predicated a major policy initiative on the basis of something that wasn't true. Now you look stupid, or like liars, or both. Plus, we burned money and lives for no reason.

Cautious Optimism
That's small picture thinking. It's true that some set of government officials look stupid, but look, our purpose was to get rid of WMDs and they're gone. Better yet, if there had been a WMD program, we probably wouldn't have found everything and so there would potentially be terrorists with WMDs floating around. So, all in all, this is actually better than if there had been WMDs, though of course, the best world would be if there hadn't been WMDs and we hadn't invaded. Note: this analysis relies to some extent on your original belief that there were WMDs. If you didn't believe that and were just using that as an excuse to invade, then the above analysis doesn't really apply.

Broad Disappointment
OK, so it's good news that there weren't any WMDs, but what does this say about our decision-making process, and in particular our intelligence apparatus, if they could be that wrong? Even if you think that the Bush administration deliberately misled Congress and the public, many people clearly did believe that Iraq had WMDs, and it says something bad about our decision-making process that it could get an issue like that so wrong.

Yeah, so our intelligence apparatus/decision making sucks, but it doesn't matter much whether or not there were WMDs: decisions were made based on the information we had, and whether those were good decisions or not is contingent only on the data we had, not on how it worked out in the end.

General Equilibrium (Negative)
The real problem here isn't that we screwed up but rather that we got caught. This seriously damages the US government's credibility, so the next time we want other countries to fall in line behind us on something, other countries won't trust us without much stronger evidence.

General Equilibrium (Positive)
Wait, that's not bad, that's good. This acts as a restraint on US unilateralism, which has not always been employed in the best way possible.


January 16, 2009

I was pretty surprised to hear about the successful A320 landing in the Hudson. My impression had always been that water landings generally weren't survivable, and the whole "your seat cushion can serve as a flotation device" thing was theater, but apparently not:

In all cases where a passenger plane has undergone an intentional water landing or ditching, some or all of the occupants have survived. Examples of water landings in which passengers survived after a planned and intentional water landing after an in-flight emergency are:
The page then goes on to list a bunch of them.

A number of these incidents (e.g., knee-deep water) seem like you probably don't need your life jacket, but others seem like some sort of flotation device would be in order. Here's one example:

The aircraft remained relatively intact after the water landing, but sank after the accident in about 5,000 feet of water, and was never recovered. The accident resulted in 23 fatalities and 37 injuries, with three additional uninjured survivors. Both pilots survived. The injured survivors waited for hours in the water to be rescued.

I'm a pretty good swimmer, but I don't think I'd want to jump out of a potentially burning plane and then have to tread water for hours until being picked up by helicopter.

þ James Wimberley, who pointed me to the Wikipedia page on survival of water landings.


January 15, 2009

OK, so it's sort of ironic that the guy who will be in charge of the IRS screwed up his taxes, but seriously, Geithner is an economist, not a CPA. Sure, they both involve money, but that's about it. Expecting him to understand the intricacies of tax law is kind of like expecting me to be able to build a semiconductor fab: sure, they're both computer related, but otherwise there's not much of a connection. That's not to say that he did or didn't deliberately underpay (I have no knowledge one way or the other), but according to the NYT it's a pretty common mistake, and it sounds like his actual accountant signed off:
Mr. Geithner fully paid his state and federal income taxes. In failing to pay his payroll taxes, he in effect kept the money the I.M.F. had contributed toward his liability. However, Mr. Geithner's accountant told him he was exempt from self-employment taxes, according to Obama transition officials.

As Obama officials pointed out, and I.R.S. documents attest, the failure to pay Social Security and Medicare taxes is common among Americans who work for international organizations, including foreign embassies. A 2007 I.R.S. notice reported that up to half of such employees incorrectly file their tax returns.

That said, you'd think that the IRS could manage to design some set of forms that avoids this mistake. Geithner was issued a W-2 which pretty clearly shows that no FICA was withheld (it states "NONE"). How hard would it be to put something in that field that triggered your tax software to suggest you file a Schedule SE? "FILE SE", for instance? Or maybe "PAY UP".
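To make the idea concrete, here's a minimal sketch of the check tax software could apply. Everything here is invented for illustration: the sentinel values, the function name, and the idea that the W-2 field carries a string rather than a dollar amount are all assumptions, not anything the IRS actually specifies.

```python
# Hypothetical sketch: tax software could flag a W-2 whose FICA-withheld
# box carries a sentinel string instead of an amount, and prompt the
# filer to attach Schedule SE. Sentinel values are made up.

SENTINELS = {"NONE", "FILE SE", "PAY UP"}

def needs_schedule_se(w2_fica_field: str) -> bool:
    """Return True if the FICA box suggests no payroll tax was withheld,
    so the filer probably owes self-employment tax (Schedule SE)."""
    return w2_fica_field.strip().upper() in SENTINELS
```

The point isn't the three lines of code, of course; it's that the signal is already sitting right there on the form, machine-readable.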


January 14, 2009

As I mentioned earlier, the IETF managed to pass copyright terms that more or less precluded the preparation of revisions of any existing standard. Opinions differ about whether this was understood before it was passed, (see the comments on the linked post), but it seems clear that many IETFers didn't understand the implications of the new requirements when they were published. As far as I can tell, potential submissions fall into three categories:

  • Documents which contain all new text and which can be safely submitted.
  • Documents which contain at least some old text but the new contributors aren't paying attention and submit them anyway.
  • Documents which contain at least some old text and are being held because they can't be safely submitted.

In principle, there might be a fourth category: documents which contain old text but where the contributors have obtained licenses from all the copyright holders. Unfortunately, the form that the IETF meant to provide for this purpose is, uh, broken so you're kind of on your own, unless, that is, you can convince people to sign a blank signature page. I'm not aware of any documents that fall into this category, but maybe there are some. In any case, I know a number of authors who are holding back documents because they don't believe they can obtain the necessary rights.

The current state of play is that the IETF Trustees have proposed some new boilerplate that will go onto submissions that more or less disclaims the 5378 rights grants. Unfortunately, the current text is inadequate and it's not clear when new text will be posted, let alone approved. IETF San Francisco (March) may turn out to be pretty interesting.


January 13, 2009

This morning my copy of Safari stopped working with Gmail. Apparently I'm not the only one; it appears to be some kind of bug in 10.5.6, though reading the threads doesn't give that clear a picture of what the actual problem is. I see a lot of different explanations and a number of different reported workarounds: some people report Firefox works and some that it doesn't. Odd. Anyway, at least if you see this you know you're not going nuts.

Thanks to Hovav Shacham for pointing me at the relevant threads.


January 12, 2009

After my experience with the Dyson Handchopper, I was interested to check out some of the other 2nd generation hand dryers. This weekend I was in the Irish Bank at SF and noticed that they have Excel Dryer's Xlerator dryer:

I'm glad to report that while no more effective than the Dyson it's about 82% less scary and doesn't seem to have the problem of recontaminating your freshly washed hands as you attempt to dry them—try not to think about the other surfaces in your average San Francisco bar; at least people drying their hands probably made some attempt to wash them, which is more than you can say for the table you're sitting at. [*]


January 11, 2009

Ken Hirsch rightly nails me on the topic of Uncle Ben's:
All you really need to know here is that Uncle Ben's is owned by Mars, the company that makes M&Ms.
That actually doesn't tell you anything about the taste or nutrition of Uncle Ben's. You usually don't make such careless statements.

Your "converted" link goes to the "instant rice" wikipedia page, but the "parboiled rice" page is more appropriate. Converted rice has more nutrients than white rice. And, although converted rice is what made Uncle Ben's famous, they sell other kinds, too.

Fair enough. I was mostly trying to be clever, but I agree that it's not much of an argument. I generally find Uncle Ben's "Original" to be tasteless and insipid, whereas a good basmati or jasmine is, as you know, a joy to eat. I've never had the other versions, except, I think, for the minute version, which is ghastly. You're totally right about the wikipedia link. That's where it took me when I entered "converted" rice and I didn't check any further. I didn't even know about the nutrient thing. My bad!

I just looked in our kitchen and we have two kinds of basmati (one from India and one from Lundberg in California), three kinds of brown rice, and a 12-pound sack of Uncle Ben's Original, which my wife bought a couple of months ago when she was worried that the food distribution system might collapse at any moment.

I fully endorse this use of Uncle Ben's Original. I too would eat it after the apocalypse, probably after the canned tuna and freeze dried camping meals ran out but before emergency rations, MREs, and my neighbors.


January 10, 2009

Joe Hall is a man of good taste but his recommendations for making rice omit some important information:
  • Buy good rice. Depending on taste, you can go with either jasmine rice or basmati rice. Do not get Uncle Ben's, which has been "converted" so it cooks faster. All you really need to know here is that Uncle Ben's is owned by Mars, the company that makes M&Ms.
  • Buy a rice cooker. You can make perfectly good rice in a pot, but it requires a bit of attention to take it off the stove when it's done. Rice cookers automatically shut off when the rice is finished [technical note: the way this works for cheap rice cookers is simple and elegant, and makes use of the fact that a boiling liquid stays at the boiling point. There's a thermostat which automatically shuts off the heat when the temperature goes above 100C, indicating the water has boiled away]. A good rice cooker will then go into a warming mode. You don't need something expensive here. I think I paid $25 for mine.
  • You can make a variety of cheap and easy dishes by mixing them into the rice while it cooks. For instance, coconut rice by substituting coconut milk for some of the water, or lentils or beans and rice by mixing in the add-in with the rice and adjusting the water accordingly. Lentils, rice, and a few spices (you can basically just throw them into the rice cooker; no doubt there's a more high-end prep, but this works fine) make a fairly complete meal with about 5 minutes of prep time.
  • If you live in an area with minimal rice choices, look for an ethnic (Chinese, Indian...) food store, which will often have cheap, high quality rice in big bags.
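The rice cooker thermostat trick in the technical note above is simple enough to capture in a toy simulation. This is purely illustrative, not a physical model; the tick sizes and numbers are arbitrary:

```python
# Toy simulation of the cheap-rice-cooker mechanism: while liquid water
# remains, the pot sits pinned at 100C (the heat goes into boiling);
# once the water is gone, the temperature climbs past 100C and the
# thermostat trips the cooker into warming mode.

def cook(water_grams: float, heat_per_tick: float = 10.0) -> list[str]:
    temp, mode, log = 100.0, "cook", []
    while mode == "cook":
        if water_grams > 0:
            water_grams -= heat_per_tick  # energy boils water; temp stays at 100
        else:
            temp += 5.0                   # dry pot: temperature rises past boiling
        if temp > 100.0:
            mode = "warm"                 # thermostat trips; switch to warming
        log.append(f"{mode}:{temp:.0f}")
    return log
```

The elegance is that the cooker never measures the rice at all; the phase change of the water does the timing for free.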

That is all.


January 9, 2009

I disclosed responsibly. You published without warning. He leaked a zero-day.

January 8, 2009

I recently had to send my Macbook Air in for repair and as a precaution I burned most of my data off the hard drive. Of course, when it came back I wanted to restore some but not all of my data. I have Time Machine backups, but since I actually only want some of the data and I want new versions of the software, I decided to treat this as a new machine install. Naturally, I figured I could just use iTunes to sync my data off my iPhone. In principle this should work fine. In practice, not so much.

The first problem I had when I plugged things in and pressed sync is that iTunes decided not to actually copy my calendar, contacts, etc. off. After a few minutes I remembered that you have to actually frob some button in a dialog for each of these. Once I had that figured out I tried to sync it again and after asking me if I realized that I was massively changing the data on my computer (which, remember, knows nothing at this point) it just popped up the useless progress bar and spun. And spun. For hours. I tried this a few times with the same results. At this point I figured I had been bitten by the dreaded slow iPhone sync, and was all ready to start trolling my disk for crash files to delete, slaughter a rubber chicken, etc. when somehow I noticed that hiding under all my other windows was a dialog saying "hey, you know you're replacing the contacts list on your computer with the one on your iPhone, right". Clicking that dialog made everything proceed in a few seconds. Remind me again why that dialog isn't modal? The spinning progress bar isn't exactly the UI indicator you would ordinarily use to indicate that I needed to click on some button. For that matter, why do I need to click this dialog at all? I just installed the operating system: why wouldn't I want to copy stuff off my phone onto my totally empty calendar and contact list?


January 7, 2009

Pete Lindstrom points to an article about what went wrong with Twitter. The short story: one of the admins had a weak password and Twitter has no limited-try lockout on their system, so the attacker was able to mount an online dictionary attack. He wasn't even trying to crack an admin account; he just got lucky. Outstanding!
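For anyone who hasn't seen one, the missing countermeasure is about ten lines of code. This is a bare-bones sketch (no backoff, no unlock path, in-memory state); the class and method names are mine, not anything from Twitter:

```python
# Minimal per-account lockout: after max_tries failed guesses the
# account locks, capping an online dictionary attack at max_tries
# guesses per account instead of unlimited tries.

class Account:
    def __init__(self, password: str, max_tries: int = 5):
        self.password = password
        self.max_tries = max_tries
        self.failures = 0

    def login(self, guess: str) -> str:
        if self.failures >= self.max_tries:
            return "locked"          # locked even for the correct password
        if guess == self.password:
            self.failures = 0        # successful login resets the counter
            return "ok"
        self.failures += 1
        return "bad password"
```

Real deployments usually prefer escalating delays over a hard lock (a hard lock hands an attacker a denial-of-service lever), but either one kills a naive dictionary attack.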

UPDATE: Fixed Pete's name. I'd thinkoed it...


January 6, 2009

So, I promised to write about the security issues of DNSSEC versus certificates but I realized that it helps to have a sense of how the DNS registration system works, so this post is about that. I'll get back to the topic soon, I promise.

The important thing to remember is that DNS is a distributed system, where authority can be delegated. So, when I want to get the address of www.educatedguesswork.org, something like the following happens:

  1. Go to the root servers and ask who is responsible for .org. The answer turns out to be a bunch of servers operated by Afilias.
  2. Go to one of the Afilias servers and ask who is responsible for educatedguesswork.org. The answer turns out to be a bunch of servers operated by Dreamhost.
  3. Go to one of the Dreamhost servers and ask for the record for www.educatedguesswork.org
[Technical note: I'm massively oversimplifying here, but this is close enough to give the big picture.] If I'm an attacker and I want to hijack traffic to www.educatedguesswork.org, I can manipulate any of these records. Let's ignore tampering with the records for .org, which is probably a bigger job. First, I can arrange to have the Afilias servers point to somewhere other than Dreamhost for educatedguesswork.org, i.e., to some server I control and which will give out any result I want. Alternately, I can have the Dreamhost servers give out a different address for www.educatedguesswork.org. Either of these will direct traffic where I want it.
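If it helps, the delegation chain above can be modeled as a simple data-structure walk; no real DNS involved. The table entries mirror the three steps in the example, but the structure itself and the final address are invented for illustration:

```python
# Toy model of iterative DNS resolution: each "server" is a table of
# delegations, and we follow referrals from the root down until we hit
# a leaf answer. The final address is hypothetical.

DELEGATIONS = {
    ".": {"org.": "afilias-servers"},
    "afilias-servers": {"educatedguesswork.org.": "dreamhost-servers"},
    "dreamhost-servers": {"www.educatedguesswork.org.": "203.0.113.50"},
}

def resolve(name: str) -> str:
    """Follow referrals from the root until we reach the final record."""
    server = "."
    labels = name.rstrip(".").split(".")
    # Ask about progressively longer suffixes: org., then the zone, then the host.
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:]) + "."
        if suffix in DELEGATIONS[server]:
            answer = DELEGATIONS[server][suffix]
            if answer not in DELEGATIONS:   # a leaf: an address, not a referral
                return answer
            server = answer                 # a referral: ask the next server
    raise LookupError(name)
```

The attacks described above amount to editing one row of this table: change the Afilias row to point at your own server, or change the Dreamhost row's final address, and every resolver dutifully walks to you instead.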

What's often surprising (and confusing) to people is that there are potentially at least three different organizations involved here. Two of these are obvious:

  • The registry (Afilias), who is responsible for the whole top level domain (.org).
  • The DNS server operator (Dreamhost) who actually serves the records for my zone.

The third is less obvious: many (most?) users don't deal directly with the registries. Rather, they deal with a registrar which stores their domain name data and then transmits updates to the registry. In my case, Dreamhost is also my registrar (or at least is frontending for my registrar), but there's no logical connection between the two functions it's performing. Indeed, back in the old days people often ran their own DNS servers but they still had to deal with a registrar. Dreamhost is just providing one-stop shopping. Any registry may be served by multiple registrars, but any given domain is associated with a single registrar. A user can transfer their domain between registrars (though not registries, since those are defined by the delegation structure), but it's a bit of a pain, since the registrars obviously want to keep your business.

So, ignoring tampering with the DNS protocol itself (remember, DNSSEC is supposed to secure that), you could imagine trying to attack any of these entities: you could try to convince the DNS server operator to change the victim's records, the registrar to point to a server of your choice, or the registry to hand the records over to a different registrar which you could control somehow. If you could manage to do any of these, you could arrange to serve up any records you wanted with the result that traffic got redirected to you rather than the owner of the domain.

Obviously, the victim has an account with both the registrar and the DNS server operator, so you can mount all the standard attacks on the account, login system, password recovery system, etc. For instance, Dreamhost's password recovery feature is of the classic "email confirmation" type. Remember: the operators are not in the business of making your life miserable, but rather of providing service. This makes it hard for them to enforce really rigid recovery policies. You might imagine, of course, that you could have an arrangement with your registrar/operator where you told them that they needed to enforce a much stricter policy, but I don't know if anyone does that now. I do imagine that Google's registrar would think twice before accepting my emailed request for password recovery, but I doubt that would apply to requests for educatedguesswork.org. You can also use ordinary social engineering mechanisms to do fraudulent transfers of a domain name to yourself. This sort of thing has actually happened. Finally, you could imagine fraudulently moving a domain from one registrar to another and then changing the information. This has happened as well, though there are mechanisms you can use (e.g., "registrar locking") to protect your domain from this to some extent.

While I've been primarily talking about addresses above, it should be clear that more or less everything I've just said applies just as well to keying material (public keys, certificates), stored in the DNS. Next: comparing DNS to the certificate infrastructure.

In the wake of the recent attack on MD5 in certificates (see my summary here), the topic of replacing X.509 certificates with DNSSEC has come up.
i wasn't clever but i was in that hallway. it's more complicated than RFC 2538, but there does seem to be a way forward involving SSL/TLS (to get channel encryption) but where a self-signed key could be verified using a CERT RR (to get endpoint identity authentication). the attacks recently have been against MD5 (used by some X.509 CA's) and against an X.509 CA's identity verification methods (used at certificate granting time). no recent attack has shaken my confidence in SSL/TLS negotiation or encryption, but frankly i'm a little worried about nondeployability of X.509 now that i see what the CA's are doing operationally when they start to feel margin pressure and need to keep volume up + costs down.

The basic observation is as follows: the role of certificates in SSL is to bind domain names (e.g., www.educatedguesswork.org) to public keys. But this creates a parallel authentication hierarchy; the CA is attesting to your ownership of some DNS zone, but you also need to be able to control the zone itself, or people won't be able to talk to you. It's natural, then, to think about just stuffing your public key in the DNS. Unfortunately, you can't trust the DNS, at least not without DNSSEC. However, if you had DNSSEC (so the argument goes) you could stuff public keys into the DNS and people could then trust them and skip the whole certificate step. This is a fairly natural idea and has been proposed many times over the years (I remember proposing it back in 1996 or so).

It's not clear that this is really as useful as it sounds, for reasons I'll get into in a subsequent post. However, right now let's assume it's useful and explore the design space a bit.

Mostly when this topic comes up, there is a lot of discussion of encoding. In particular, there are two big issues:

  • Should we store certificates or keys?
  • What resource record type should we use?

Certificates versus keys
The argument for keys rather than certificates is straightforward: certificates are redundant. Most of the stuff in the certificate is various kinds of control information (issuer, serial, the signature, etc.) that isn't relevant here because the DNSSEC record is itself a quasi-certificate. And since this stuff is big and we'd like DNS records to fit within 512 bytes, it's natural to strip things down as much as we can. There's an RR type for this for IPsec, but not SSL.

The counterargument here is that most of our software is already designed to deal with certificates and it's a pain to make it use raw public keys. For instance, if you're doing SSL, the server is going to send you a certificate and you can just memcmp() it against the certificate you get out of DNSSEC. Note that you don't need to use a real certificate here. Self-signed works fine. This also has the advantage that it's a useful transition mechanism in settings where you don't have DNSSEC; if you have a mechanism for putting certificates in DNS, potentially you could use it as part of a mechanism to reduce client-server traffic (the client tells the server "I think I have your cert, don't send it to me if it has hash X") which doesn't rely on the DNS to provide authenticated data. Oh, and by the way, there already is an RR for doing this as well.

There's actually a third possibility: have the DNS store a digest of the certificate. This lets the DNS act as an authentication channel for the server certificate but doesn't chew up as much bandwidth as the certificate or the public key. Of course, that still requires some channel for the certificate, but if you're doing SSL that happens by default. It obviously works less well for non-cert protocols like SSH, and doesn't give you any kind of preloading benefit.
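The matching logic for the two cert-shaped options is trivial either way; the difference is purely what you have to fit in the DNS record. Here's a sketch, with byte strings standing in for real DER encodings (the function names and sample bytes are mine):

```python
# Two of the strategies discussed: store the whole certificate in DNS,
# or store just a digest of it. Byte strings stand in for DER encodings.
import hashlib

server_cert = b"...DER-encoded certificate bytes..."  # what the TLS server sends

def match_full(dns_record: bytes) -> bool:
    # The "just memcmp() it" approach: DNS holds the entire certificate.
    return dns_record == server_cert

def match_digest(dns_record: bytes) -> bool:
    # DNS holds only a hash: a much smaller record with the same
    # authentication value, but the certificate itself has to arrive
    # over some other channel (which TLS gives you for free).
    return dns_record == hashlib.sha256(server_cert).digest()
```

Note that the digest approach collapses the DNS payload to a fixed 32 bytes regardless of certificate size, which is exactly why it's attractive given DNS message-size limits.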

RR Type
The second issue is what resource record type should be used. While this seems like a topic that ought to be of interest only to DNS weenies, it turns out to be important for at least one reason: you can have multiple services with different keys running on different ports on the same host. As far as I can tell, none of the proposed mechanisms really provide for this. It's not really an issue for IPsec, where there is more or less one cert for each IP, but it's definitely an issue for SSL or SSH. One could imagine inventing a new record type which segregated by port, or using a TXT record, or... Anyway, my impression is that this isn't totally trivial.
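One obvious way to get per-port keys, sketched below, is to borrow the underscore-label convention that SRV records already use, scoping the record name by port and protocol. To be clear, this is just a thought experiment, not any deployed record type:

```python
# Hypothetical naming scheme for per-service key records, borrowing the
# _service._proto label convention from SRV records. Nothing standard
# about this; it just illustrates how port segregation could work.

def key_record_name(host: str, port: int, proto: str = "tcp") -> str:
    """Name under which a per-port key record could be published."""
    return f"_{port}._{proto}.{host}"
```

With something like this, the HTTPS server on port 443 and some other TLS service on port 8443 of the same host could each publish their own key without colliding, which none of the host-level proposals handle.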

Of course, we've totally ignored the question of whether this is a good plan. But this post is getting awfully long, so I'll deal with it in a subsequent post.


January 5, 2009

Computerworld reports that a bunch of famous people's Twitter accounts were subverted and used to send quasi-embarrassing messages:
"This morning we discovered 33 Twitter accounts had been 'hacked,' including prominent Twitter-ers like Rick Sanchez and Barack Obama," Twitter co-founder Biz Stone said in post to the company blog. "We immediately locked down the accounts and investigated the issue. Rick, Barack and others are now back in control of their accounts."

Earlier in the day, the hacked accounts had been used to send malicious messages, many of them offensive. CNN correspondent Rick Sanchez's account, for example, tweeted a message claiming that "i am high on crack right now might not be coming to work today," while Fox News' Twitter update reported "Breaking: Bill O Riley [sic] is gay," referring to the network's conservative talk show host.

According to Twitter, the accounts were hijacked using the company's own internal support tools. "These accounts were compromised by an individual who hacked into some of the tools our support team uses to help people do things like edit the e-mail address associated with their Twitter account when they can't remember or get stuck," Stone admitted. "We considered this a very serious breach of security and immediately took the support tools offline. We'll put them back only when they're safe and secure."

I would be interested to hear more about exactly what happened with the support tools—though I doubt we will. It's easy to imagine a bunch of vulnerabilities in these tools (remote compromise, predictable URLs, insecure address changes) and most of them are easily fixed. However, even if the tools are implemented correctly, account recovery is one of the most challenging problems for this kind of application. The basic problem is that you don't know that much about your user other than their username and password, so it's very hard to distinguish an attacker from a user who has forgotten his password. The two conventional techniques are "security questions" ("what's your mother's first pet's maiden name and when did it graduate from high school?") and email recovery ("we've sent a message to the email address you registered with. Please click on the link in the message"), but neither of these is really that great for reasons the security community has hashed over ad nauseam. Obviously, we can make account theft harder by creating a tighter relationship with the user (get more personal information, have him pay for access so you can double check his credit card number), etc. However, this comes at a convenience and effort cost. Unless you're willing to make password recovery incredibly painful, it's pretty hard to reduce this risk down to the level where a dedicated attacker wouldn't have a shot at cracking some people's accounts (and that's not to mention the use of weak passwords). This isn't to say, of course, that Twitter's tools aren't broken. As I said, I don't know much about that.

What I find interesting about this attack and most other "content" hijacking attacks you hear about (e.g., Web sites) is how lame they are. The attackers take over someone's site and then post something transparently forged that is supposed to be embarrassing to the victim. Surely if you were serious about it you could generate some content which would be credible and much more damaging (remember when Google replayed an old article and tanked United's stock price?). The more that organizations use the Web, Twitter, etc. as primary communication mechanisms, the more effective this is going to be.


January 4, 2009

Was running at Rancho this afternoon (Lower Meadow, Wildcat, Upper Wildcat, Upper High Meadow, Rogue Valley; Rancho Runner code: 2DYcEF3UTS6RKLNM3FEcYD2) and right as I was coming up from Wildcat to Upper High Meadow what do I see but a coyote. Unfortunately, I neglected to read the instructional placard about what to do if you see one, but I remembered something about making yourself look big. Anyway, he (actually, I don't know it was a he—I didn't get close enough to check out its crotch) was on the trail in my way so we sort of edged past each other, me on one side of the trail and him on the other until we'd sort of swapped positions. At this point I started slowly backing away and he started to follow me a bit, but I gradually made some distance. Once I thought I was far enough away, I started running but at that point he started running after me. I don't expect to be able to outrun a coyote, so I turned around, raised my hands (trying to look big, remember) and yelled "aaargh" at him. He looked pretty startled and started to walk away, which seemed sort of promising. I backed away and finally after I turned the corner I looked back and he didn't seem to be following so I took off. Sorry, I don't have any pictures. This may be the one run where I regretted not having a camera; that and maybe an AK.

January 3, 2009

One of the truly odd things about the US (and indeed the world) financial system is the degree to which we seem willing to leave the state of the economy in the hands of a bunch of unelected technocrats (i.e., the Fed). We don't make other decisions that way: even the heads of scientifically oriented organizations like EPA, FDA, or DOE are often non-scientists, and even when they are scientists, they're subject to political control, unlike the Fed Governors, who are appointed for 14-year terms and in practice don't get fired (though they can be removed for cause). Can you imagine appointing 7 scientists to serve as the "carbon emissions board" with power to decide on the price of carbon emissions (which seems fairly analogous to the Fed's power over interest rates)? Even our process for deciding how much lead and mercury get emitted into the atmosphere (and I think we can all agree that they're not good for you) isn't anywhere near that independent. I don't have an answer to this; I just find it puzzling.

January 2, 2009

As I mentioned earlier, I've been thinking of swapping out my ASICS 2130s for one of the Inov-8 trail shoes. After an abortive attempt at ordering the Roclite 305s from Zappos (wrong size), I decided it was best to buy them from a store where I could try them on, in this case Zombie Runner Palo Alto. After trying on a bunch of different shoes I ended up in the Roclite 295.1

So far, I've done one run in them at Rancho San Antonio (Rogue Valley Trail out to the hairpin turnaround up to the High Meadow Trail: 5.3 miles). They're extremely comfortable, with a nice, close fit, especially when you're wearing Injinji Tetrasoks which tend to make your feet fill out the toe box a little more. The heel counter is substantially lower than your average running shoe, so initially you feel like you're coming out of the shoe, but the lacing system keeps your heel locked in with minimal slippage, and you get used to the feel quickly.

The ride is interesting—the shoe is low with a flat sole/heel and minimal bounce so you end up a little more forward on your toes and don't roll off the heel as much as with a conventional running shoe like my ASICS. There's also very little support, which is odd after all the shoes I've worn which try to correct my flat feet; there's no need to break down the arch before the shoe is comfortable. Hard to know how it will hold up a couple hundred miles in, though. Traction in mud is outstanding: my partner was in the 2007 ASICS DS Trainer and was sliding all over the place, but I felt like I was running on dry asphalt, just much softer. More on these later once I've had a chance to wear them for a longer run.

P.S. Netflix now has Endurance, which I've previously claimed is the greatest running movie ever, available for instant play (though not on DVD). It's the story of the Ethiopian distance runner Haile Gebrselassie, who dominated the 10K in the 1990s and currently holds the WR in the marathon, having brought it down a full minute to sub 2:04 since 2003. A little hard to get if you're not a runner, but well worth watching if you are.

1. Note for those who aren't familiar with Inov-8: the name refers to the tread pattern and the number refers to the weight in grams. This is a semi-lightweight trail runner.


January 1, 2009

And just as mysteriously as it broke, Google Reader is now picking up all my posts from the Atom feed. This could be the result of anything from my upgrade to MT 4.23 (Grrr!), my whining to friends who work for Google, or just random software frobbing. Incidentally, this illustrates an annoying feature of RSS-style blog reading: blogs just drop off the radar and the software doesn't always tell you (or even know, if there's some bug) that something is hosed, as opposed to authors just not writing anything. So your subscription list just rots.