August 2010 Archives


August 30, 2010

Last time I reported on my experience running with VFFs things were going pretty well. I wrote:
Bottom Line

I suspect I'd be able to run much longer in VFFs (and I'll try a 10 this weekend), but given how much trouble I had when I ran on grave [gravel --EKR] of the wrong size, I'm not sure I would want to do something like an ultra, where I couldn't turn around and didn't know that the surface would be good. In view of that, I'll probably start mixing it up more to make sure I still can run in shoes if I want to.

Since then, things have taken a turn for the worse. About 10 weeks ago I felt like my overall fitness was good enough to start introducing intervals back into my training plan. I started out relatively easy with 1/2 mile repeats and things were going well. In keeping with my "mixed footwear" strategy I was trying to run something like:

Day        Workout       Surface   Footwear
Tuesday    Intervals     Asphalt   Inov-8 295s
Wednesday  Easy 3-5      Asphalt   VFFs
Friday     Moderate 5-7  Trails    VFFs
Sunday     Easy (8+)     Trails    Inov-8 295s

This was going OK, and then after one interval workout (note: regular shoes) I noticed pain in my right foot at the first metatarsophalangeal joint (where the big toe meets the foot), spreading across the metatarsals towards the little toe. I'd noticed some pain like this before, right when I first started running in VFFs, but it went away. Figuring it would go away again and not wanting to interrupt my workout plan, I tried my Wednesday run as planned, but only got about 1/2 mile before I had to turn around and walk back; every impact hurt.

At this point I knew I had an injury but not how bad it was. I limped around for a day or two but then it seemed to get better, so I waited a week and then tried a two-mile run, which had a little bit of discomfort but was mostly OK. I decided to try my ordinary Friday run (you've probably heard endurance athletes are stupid) but with Inov-8s instead of VFFs so I'd get some shock absorption. Bad idea. About 2 miles in I was in bad enough pain that I couldn't run at all (thanks to Kyle Welch for convincing me that running in intense pain was bad) and had to walk the two miles back. I spent the next 3-4 days barely able to walk. Since then, I haven't been brave enough to run more than 2-3 miles at a stretch, and even after doing that, I have discomfort if not pain. Visits to doctors produced some nonspecific diagnoses—possibly sesamoiditis, possibly tendonitis—and the all-purpose referral to PT, the go-to plan for hard-to-diagnose joint-related injuries. We'll see if that helps.

It's obviously tempting to attribute this to the VFFs. The evidence for that view is that you tend to push off a lot harder with your toes, that it's a new injury occurring after a change in training regime, that I was experiencing pain there even before the acute injury, and that walking around the house barefoot seems to hurt more than wearing shoes. The evidence against that view is that the actual acute injury happened after running in regular shoes and that it happened after a substantial change/ramp-up in training load, which is often a cause of injuries. I don't have an answer here and it's clear that—since we don't even really know what the problem is—the doctors don't know either. Once I'm able to run again in regular shoes I'll reassess whether I want to try minimal footwear again. For now, I'm supposed to stop running and to wear stiff-soled shoes all the time, so the question is kind of moot.


August 29, 2010

The second major attack described by Prasad et al. is a clip-on memory manipulator. The device in question is a small microcontroller attached to a clip-on connector. You open the control unit, clip the device onto the memory storage chip, rewrite the votes, and disconnect it. There are a number of countermeasures we could consider here.

Physical Countermeasures
We've already discussed seals, but one obvious countermeasure is to encase the entire memory chip in plastic/epoxy. This would make it dramatically harder to access the memory chip. One concern I have about this is heat: were these chips designed to operate without any cooling? That seems like a question that could be answered experimentally. I think you'd want to use transparent epoxy here, to prevent an attacker from drilling in, accessing the memory chip, and covering the hole over, maybe with a small piece of plastic to permit future access. I also had an anonymous correspondent suggest encasing the entire unit in epoxy, but at most this would be the circuit board, since the device has buttons and the like; this would of course make the heat problem worse.

Cryptographic Countermeasures
Another possibility would be to extend the cryptographic checksum technique I suggested to deal with the dishonest display. At the end of the election when the totals are recorded the CPU writes a MAC into the memory over all the votes (however recorded) as well as writing a MAC over the totals. It then erases the per-election key from memory (by overwriting it with zeros). This makes post-election rewriting attacks much harder—the attacker would need to also know the per-election key (which requires either insider information or access to the machine between setup and the election) and the per-machine key, which requires extensive physical access. I think it's plausible to argue that the machine can be secured at least during the election and potentially before it. Note that this system could be made much more secure by having a dedicated memory built into the CPU for storage of the per-unit key, but that would involve a lot more reengineering than I'm suggesting here.
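As a sketch of what this close-of-election step might look like (the function names, the use of HMAC-SHA256, and the way the two keys are combined are all my illustrative assumptions, not anything from the actual machines):

```python
import hashlib
import hmac


def close_election(per_election_key: bytes, per_machine_key: bytes,
                   vote_records: bytes, totals: bytes):
    """Hypothetical close-of-election step: MAC the raw vote records and
    the totals, then destroy the per-election key so that post-election
    rewrites can no longer be re-authenticated."""
    # Combining the keys by hashing them together is an illustrative
    # choice; any sound key-derivation scheme would do.
    key = hashlib.sha256(per_machine_key + per_election_key).digest()
    votes_mac = hmac.new(key, vote_records, hashlib.sha256).digest()
    totals_mac = hmac.new(key, totals, hashlib.sha256).digest()
    # Stand-in for zeroizing the key: a real device would overwrite the
    # key's physical storage (Python bytes are immutable, so this
    # reassignment is only symbolic).
    per_election_key = b"\x00" * len(per_election_key)
    return votes_mac, totals_mac
```

On a real device the erasure step would overwrite the key's actual memory cells; an attacker who shows up after the election then has nothing left to steal.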


August 27, 2010

In their paper on Indian EVMs, Prasad et al. demonstrate that you can easily pry off the LED segment display module and replace it with a malicious display. At a high level, it's important to realize that no computer system can be made secure if the attacker is able to replace arbitrary components, since in the limit he can just swap everything out with lookalike components.

The two basic defenses here are to use anti-counterfeiting techniques and to use cryptography with hardware security modules. Most of the proposals for fixing this problem (and the memory overwriting problem) are of the anti-counterfeiting variety; you seal everything up with tamper-evident seals and make it very hard to get/make the seals. Then any attacker who wants to swap components needs to break the seals, which is in theory obvious. Unfortunately, it's very hard to make seals that resist a dedicated attacker. In addition, sealing requires good seal procedures for placing the seals and checking them; with this many machines in the field, it's going to be quite hard to actually do that in a substantially better way than we are doing now.

The other main defense is to use cryptography. The idea is that you embed all your sensitive stuff in a hardware security module (one per device). That module has an embedded cryptographic key and is designed so that if anyone tampers with the module it erases the key. When you want to make sure that a device is legitimate, you challenge the module to prove it knows the key. That way, even if an attacker creates a lookalike module, it can't generate the appropriate proof and so the substitution doesn't work. Obviously, this means that anything you need to trust needs to be cryptographically verified (i.e., signed) as well. Periodically one does see the suggestion of rearchitecting DRE-style voting machines to be HSM-based, but this seems like a pretty big change for India, both in terms of hardware and in terms of procedures for managing the keying material, verifying the signatures, etc.
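A minimal sketch of the challenge-response idea, assuming an HMAC-based proof of key knowledge (real HSMs typically use stronger mechanisms such as digital signatures, but the structure is the same):

```python
import hashlib
import hmac
import os


def make_challenge() -> bytes:
    # The verifier picks a fresh random nonce so an attacker can't
    # replay a response recorded from the genuine module.
    return os.urandom(16)


def hsm_respond(device_key: bytes, challenge: bytes) -> bytes:
    # Inside the (hypothetical) module: prove knowledge of the embedded
    # key without ever revealing the key itself.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()


def verify(expected_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A lookalike module without the key fails verification for any fresh challenge, which is exactly what defeats the component-swap attack.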

However, there is an intermediate approach which would make a Prasad-style attack substantially harder without anywhere near as much effort. The idea is that each machine would be programmed by the Election Commission of India with a unique cryptographic key. This could be done at the same time as it was programmed for the election to minimize logistical hassle. Then at the same time that the vote totals are read out, the machine also reads out a MAC (checksum) of the results computed using that key. That MAC is reported along with the totals and if it doesn't verify, that machine is investigated. Even though the malicious display can show anything the attacker wants, the attacker cannot compute the MAC and therefore can't generate a valid report of vote totals. The MAC can be quite short; even 4 decimal digits reduces the chance of a successful attack on a machine to 1/10000.
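A short-MAC scheme along these lines could be sketched as follows; reducing the full MAC to a decimal code by taking it modulo 10^digits is my illustrative choice:

```python
import hashlib
import hmac


def results_mac(key: bytes, totals: str, digits: int = 4) -> str:
    """Reduce a full-length MAC over the reported totals to a short
    decimal code that poll workers can read out alongside the results."""
    full = hmac.new(key, totals.encode(), hashlib.sha256).digest()
    # Take the MAC modulo 10^digits to get a short, human-readable code.
    code = int.from_bytes(full, "big") % (10 ** digits)
    return str(code).zfill(digits)
```

With 4 digits, a forged report still passes by sheer luck with probability 1/10000; that residual risk is the price of keeping the code short enough to read out by hand.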

This approach is significantly less secure than a real HSM, since an attacker who recovers the key for a given machine can program a display for that machine. But it means that the window of opportunity for that attack is much shorter; if the key is reprogrammed for each election then you need to remount the attack between programming time and election time, instead of attacking the machine once and leaving the display in place indefinitely. It's also worth asking if we could make it harder to recover the key; if it's just in the machine memory, then it's not going to be hard to read out using the same technique that Prasad et al. demonstrate for rewriting vote totals. However, you could make the key harder to read by, for instance, having two keys, one of which is burned into the machine at manufacture time in the unreadable (hard to read) firmware which is already a part of each machine and another which is reprogrammed at each election. The MAC would be computed using both keys. This would require the attacker to attack both the firmware on the machine (once) and the main memory (before each election).
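The two-key variant might be sketched as a nested MAC; the nesting is my assumption about how the keys would be combined, but any construction requiring both keys would do:

```python
import hashlib
import hmac


def dual_key_mac(firmware_key: bytes, election_key: bytes,
                 totals: bytes) -> bytes:
    """Nest the two MACs so that forging a valid report requires both
    the manufacture-time firmware key (hard to read out) and the
    per-election key (reprogrammed before each election)."""
    inner = hmac.new(election_key, totals, hashlib.sha256).digest()
    return hmac.new(firmware_key, inner, hashlib.sha256).digest()
```

An attacker who recovers only the per-election key from main memory still can't produce the outer MAC without also extracting the firmware key.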

Clearly, this isn't an ideal solution but as I said at the beginning of this series, the idea is to improve things without doing too much violence to the existing design. Other approaches welcome.

As I mentioned earlier, Prasad et al.'s research clearly demonstrates that there are attacks on the election machines used in India. On the other hand, during the panel at EVT/WOTE the Indian election officials argued that there were serious fraud problems (especially ballot box stuffing) with the paper ballot-based system they used before the introduction of the EVMs, so there's going to be a huge amount of resistance to just going back to paper ballots. Without taking a position on paper versus machines, it's worth asking whether it's possible to build a better version of the existing EVMs (bearing in mind that there are something like a million of these machines out there, so change isn't going to be easy).

Prasad et al. have three major complaints about the EVMs:

  • It's possible to replace the display, causing it to show any vote totals the attacker chooses.
  • It's possible to rewrite the memory chip that stores the vote totals.
  • The firmware on the devices is secret and the devices are designed so that the firmware cannot be read off. This makes it difficult to determine whether the devices have malware (either installed at manufacture time or later).

These are obviously real problems, though how serious they are of course depends on whatever procedural controls are used with the machines. Obviously, it would be better to have a machine without those problems. In DC I asked the panel to assume that they were stuck with something like the existing DREs (this isn't hard for Indiresan and Shukla, of course) and consider how they would improve them. I didn't get much of an answer, but I still think it's worth considering.

Over the next few days, I'll be talking a bit about how to address some of these issues.


August 24, 2010

I've held off on writing much about EVT/WOTE because I've been waiting for the A/V recordings to be posted. Most of them are up now, including an unfortunately partial recording of the most dramatic part of the conference, the panel on Indian EVMs. (There's some other good stuff like the rump session that's not up or only partially up.)

As background, Indian elections are conducted on a relatively simple hardware-based DRE machine, i.e., a small handset with buttons for each candidate; votes are recorded in memory and then totals read out on a control module. Hari Prasad, Alex Halderman, Rop Gonggrijp, Scott Wolchock, Eric Wustrow, Arun Kankipati, Sal Sakhamuri, and Vasavya Yagati got ahold of one of the machines and managed to demonstrate some attacks on it (see their analysis here). This naturally provoked a lot of controversy, and we decided this made a good topic for a panel. The panelists were:

  • P.V. Indiresan, Former Director, IIT-Madras
  • G.V.L Narasimha Rao, Citizens for Verifiability, Transparency, and Accountability in Elections, VeTA
  • Alok Shukla, Election Commission of India
  • J. Alex Halderman, University of Michigan

Unsurprisingly, the panel was extremely contentious, with Joseph Lorenzo Hall doing a great job of keeping the various strong personalities to the agreed upon format. It's definitely worth watching for yourself: we have complete audio and video for the last hour or so.

You may have heard that Hari Prasad has been arrested. This has obviously raised some very strong feelings, but I don't think it really bears one way or another on the arguments about whether EVMs are a good choice or not. The issues here aren't really that technical; the attacks reported by Prasad et al. are straightforward, as are the attacks that the representatives of the Election Commission of India report were common on paper ballot systems before the introduction of EVMs. It's definitely worth watching/listening to this panel and making your own assessment.


August 18, 2010

I flew through Amsterdam on the way back from IETF Maastricht and got the opportunity—well, maybe opportunity isn't quite the right word, since I think it was mandatory—to try out the new body scanners they've installed at Schiphol. (My understanding is that they're millimeter wave, but they could be backscatter x-ray.) Anyway, it's pretty straightforward: you walk into the portal, hold your hands up in a goofy position for 5-10 seconds, and then walk on through.

I did get to see what it is the security screeners see on their display for a few seconds. Looks like the public reports were right and they really don't get to see much. The display was maybe 8" diagonal with a sort of stylized figure (including hair, so either it's someone else or it's really stylized) with boxes that apparently indicate stuff that was detected. As I understand it, what's going on here is that the real image is shown somewhere else and then some screener elsewhere points out the regions of interest for local handling.

Here's something I've been wondering about: how are those signals transmitted to/from the screening room? Is it wireless or wired? If wireless, what's the security? If wired, do the cables run through an area that's potentially user-accessible? Interestingly, I didn't walk through the magnetometer, which means that the scanner is the sole line of defense for anything you carry on your body. An attacker who could control this network could, it seems to me, suppress warnings from the remote screener and walk through carrying anything he wanted. (They don't really do a complete pat down in many cases.) Another possibility would be to remotely subvert either the screening consoles or the scanner itself. There's sure to be plenty of software in both. Finally, even with a wired network, an attacker could monitor RF emissions off that network, which constitutes a privacy threat.

Anyone want to loan me a scanner?


August 15, 2010

I spent some time today listening to Michael Sandel's popular Justice course on iTunes U. The first two episodes are kind of a "greatest hits" of ethics hypos: he starts with the trolley problem (switching and fat man variants) and then moves on to the transplant problem. As usual, a lot of people are willing to switch the track, a lot fewer are willing to push the fat man, and practically nobody is willing to kill a healthy person to harvest his organs. There's all the usual flailing around as the students try to differentiate cases which are similar in body count but provoke radically different moral intuitions.

The organ transplant case is supposed to be a sort of reductio ad absurdum for consequentialists. Like the basic trolley case, it involves trading the death of one person to save five, but unlike the basic trolley cases, you're actually murdering the guy. Unsurprisingly, practically nobody is willing to sign on to that being morally acceptable, but since the math is the same, this suggests that there's something morally icky about consequentialism. Obviously, you can just embrace the cold (but allegedly logical) result, but in this particular case there's an alternative response within the framework of consequentialism: argue that sacrificing one unwilling donor to save five people isn't actually the right consequentialist choice. Obviously, if we're talking about this one case, then the cost/benefit analysis works out, but you need to think about the general equilibrium scenario. If innocent people regularly walk into the ER and come out in pieces, then everyone is going to be a lot less likely to visit the doctor, which is not only bad for them, but also means that there won't be anyone to use as organ donors, in which case there won't be any recipients to trade off against donors anyway. (Note that the hypo here is that the doctor doesn't get caught, but if the base rate of going in healthy and coming out dead goes up a lot, this deters people from visiting doctors even if they think the causes of death are innocent).

Obviously, you can try to sharpen the case to remove this problem (it's a one-time thing for some reason, for instance). This is the philosopher's natural response, as evidenced when one student suggests waiting for one of the sick people to die and harvesting his organs instead; Sandel's response is that he's ruined the philosophical point; more on this in a second. But now you run into the objection that the scenario is basically so artificial that it doesn't tell you anything useful about day-to-day moral reasoning, which is of course the generic objection to most of the philosophical ethics hypos. The nice thing about the organ donor hypo is that it does seem plausible at some level, but the more you adjust it to sharpen it, the less it works as an intuition pump.


August 14, 2010

One thing I've noticed about the Kindle is that it rebalances your reading priorities a bit. As any of my friends can tell you, I'm fairly cheap, so much of my pre-Kindle reading was either library books or used books (Bookbuyers in Mountain View has an excellent science fiction selection). But with the Kindle (and in particular the fact that it doesn't let you transfer books), my options have become a bit more limited. You can of course buy books from Amazon, but there's a lot less discount than you might like and you of course can't amortize the cost over multiple readers.1 (I'm assuming for the sake of argument here that you're not interested in breaking the DRM, though a little searching suggests this isn't that hard.)

As I mentioned earlier there are a number of sources of free Kindle books. However, due to the copyright situation they tend to be predominantly older books, and since they generally have to be manually converted, this mostly means "classics". Unsurprisingly, I find myself reading a lot less modern fiction and a lot more "literature". Recently, I've read Jack London's "The Scarlet Plague", Joseph Conrad's "The Heart of Darkness", "The Adventures of Sherlock Holmes", and "Brave New World" (I actually paid $0.99 for this at Amazon). I don't think I'm alone here, either: on a recent flight back from DC I noticed the guy next to me reading "Moby Dick" on his iPad.

1. My other big source of books was loaners from Kevin Dick, who is rather less cheap. Unfortunately, he bought a Kindle.


August 13, 2010

I spent the first half of this week at the 2010 Electronic Voting Technology/Workshop on Trustworthy Elections (EVT/WOTE) workshop. I only had one paper this time, a collaboration with Kai Wang, Hovav Shacham, and Serge Belongie called OpenScan. The basic idea behind OpenScan is really simple: in conventional optical scan systems the ballots are read by a single scanner operated by the election officials and there's no direct way for a third party to verify that the scanner is reading the ballots accurately. For all you know, the scanner (which after all is a computer) has been totally reprogrammed by some attacker.

There have been two major lines of development designed to provide some third-party verifiability for opscan systems. The first is a hand-counted audit, in which you randomly draw a sample of ballots, hand count them, and use statistical techniques to determine whether the results provide convincing evidence that the original count was correct. Audits require considerable statistical sophistication and have some other logistical drawbacks, so uptake has been a bit slow. The second approach, exemplified by the Humboldt Election Transparency Project, involves rescanning all the ballots and publishing images so that anyone can verify the count themselves. The problem with ETP-style approaches is that it's not practical to have everyone scan independently, so you're still at the mercy of the scanner. In principle, you now have two independent scans but of course the people who are allowed to do their own scan necessarily have a special relationship with election officials, so this doesn't create as high a level of assurance as you might like. (You can find my discussion of this issue here).

OpenScan takes the ETP approach to its logical conclusion by allowing an arbitrary number of independent scans. The basic idea is that instead of using a scanner, you have a mechanical paper transport and then use video cameras to record the ballots as they go by. You can then use computer vision techniques to isolate the individual frames corresponding to each ballot, rectify them so they're square, and then merge them into a single image suitable for processing with standard optical scan processing software. Because the system works even when the camera doesn't have a head-on view, you can support multiple cameras, which means that a large number of independent observers can bring their own cameras, thus verifying the election without needing to trust the scanner (aside from trusting it to feed the ballots correctly). The nice thing about this design is that in principle you can build your own ballot processing framework completely independently out of commodity components (though this is a lot of work), so you don't really need to trust anything but that your generic camera and generic PC are behaving correctly. This means that election officials can allow third-party verifiability just by staging a single event where they run all the ballots through the sheet-feeding apparatus and let people video them. Note that like ETP this has privacy implications, so we would need to do some cost/benefit analysis before deploying such a system.
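The rectification step here is essentially a planar homography: given the four corners of a ballot as detected in a (possibly oblique) video frame, you solve a small linear system for the 3x3 transform that maps them onto a square, then warp the frame through it. A pure-Python sketch of just the math, assuming corner detection has already been done by some other computer-vision stage:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def homography(src, dst):
    """Find the 3x3 projective transform mapping the four detected
    ballot corners (src) onto the corners of a square (dst)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = solve(A, b) + [1.0]  # fix the scale by setting h33 = 1
    return [h[0:3], h[3:6], h[6:9]]


def rectify_point(H, x, y):
    """Map one pixel through the homography; applying this to every
    pixel warps the oblique frame into a square, head-on view."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

A production pipeline would use an optimized vision library for this, but the point is that the whole transform is determined by four corner correspondences, which is why multiple independently positioned cameras can all recover the same ballot image.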

It turns out that this idea isn't original to us, as we unfortunately learned right after the camera-ready deadline. Anwar Adi suggested the same concept in 2006 and built a prototype, but it's quite a bit less capable than ours: it requires you to manually identify a single video frame for each ballot, doesn't produce ballot images (it uses simple edge detection algorithms to process the ballots itself) and won't work with the standard kind of opscan ballots in widespread use.

We (by which I mean Kai) have built a prototype of the system and it works pretty well. With the limited number of ballots we're processing, we get extremely good quality output; our test set of 50 ballots (600 opscan targets) produced no errors. The major drawback is performance: it takes about 10 seconds to process a single frame, so the system operates quite a bit slower than real-time with respect to the video camera. This can of course be dealt with by throwing hardware at the problem (we estimate that you would need something like 3-5 8-core machines to process at 30 ballots/minute, which is already pretty reasonable), but we've been working on faster algorithms and hope to be able to at least lower the required amount of hardware. Once we get this figured out, we're hoping to release an open source version of the software.


August 4, 2010

I recently bought a Kindle DX. I've only had it for about 10 days, but after taking it to the Maastricht IETF, I'm ready to report some initial impressions.

The best part of the Kindle is the screen. The new high-contrast display just looks fantastic. Even on the smallest font size, it's still very readable, and you can easily take your average academic paper (typically Times Roman or Computer Modern in 10-12 point), copy it onto the Kindle, and read it there with no adjustments. When you're reading books, which aren't really formatted for this display, you do have to compromise between number of words on the page and line length, but I find that the smallest size usually works well. And of course, you can always use a bigger font size.

All e-paper displays are fairly slow to change, and when you change pages, the entire screen goes black briefly and then you get the new page. This is annoying at first but eventually you stop noticing it. The new display refreshes much faster than the old Kindle display and it makes a big difference here.

Since the display isn't backlit, you need some light source; most sources will do, but there is enough glare off the screen under really direct light that I can't use a headlamp to read in bed, the way I sometimes do with paper books. This is a minor flaw but is obviously something that could use some improvement.

The worst part of the Kindle is the UI. To some extent, this is dictated by the performance of the screen. Because the screen response is so slow, you just can't build as responsive a UI as you can with an LCD. So, whenever you want to do anything complicated, you end up waiting for the UI to do stuff, which is annoying. The UI is better than with the first generation Kindle. Amazon (or maybe the display manufacturer) has figured out how to change parts of the display without blanking everything, so you can at least tab through links on the page at a reasonable pace (though there's still a fair amount of ghosting).

The UI problems can't be laid entirely at the feet of the display, though: for instance, I'm right handed but I like to hold books left-handed while I'm doing other stuff (brushing my teeth, making an espresso; yes, I'm that guy). Unfortunately, the navigation buttons are only on the right, so this doesn't really work out. The original Kindle had buttons on both sides but Amazon seems to have given up on that. Regardless, it's annoying.

Another irritating feature is the keyboard, which just stinks, even by the standards of cruddy chiclet keyboards. Just typing a search term using the keyboard is annoying and using it for actual annotations is out of the question (at least for me).

Other Ergonomic Factors
The balance of the Kindle DX is a little off. You tend to want to hold it towards the bottom, but then there is a long lever arm, and this puts some stress on your hands which you don't get with pocket-sized paperbacks. The situation isn't any worse than with hardcover or trade books, so you just need to find somewhere to rest the device if you're going to do any really extended reading.

Kindle Store
Amazon, of course, runs a Kindle e-book store. You can access it via your Kindle, but because of the aforementioned UI issues, that's kind of a last resort. Instead, you want to use the online store, which is just like Amazon's ordinary book store, except that when you buy something it gets wirelessly delivered to the Kindle (the claim about "around a minute" delivery is no lie, btw. I ordered a book from my PC in Europe and had it downloaded in a minute or so.) That's all fine, but to be honest Amazon's prices aren't that great, with Kindle books generally running between $6.99 and $9.99, so more than the corresponding paperback but less than a hardcover. Obviously e-books have advantages, but it's annoying to be paying more than you would for a physical book which you could lend to a friend when you're done. Of course, I knew this was the situation going in and planned to mostly use free books (see below).

Free Sources
There is a really excellent supply of free e-books for the Kindle. Mostly what's available is books (classics and otherwise) which are out of copyright. Some authors have also made their work available under Creative Commons or other free licenses. The best place to start is probably Feedbooks, which provides a "book" called the Kindle Download Guide. Really, the Guide is a meta-book, since it consists mainly of book descriptions and links to where you can download the books from Feedbooks. This all works relatively smoothly. You can also apparently get books from Project Gutenberg and The Internet Archive, but I haven't really tried either. There seems to be a fair amount of overlap between these sources. For instance, many of the books on Feedbooks come from Project Gutenberg. See here for a more complete page of free Kindle books.

Amazon also provides a selection of free books in the Kindle store, but unfortunately (though unsurprisingly) it's not really that easy for me to browse through; in particular, the organization into categories is kind of messed up. For instance, it thinks Lorentz's "Einstein's Theory of Relativity" is fiction.

Loading Your Own Files
Finally, you can load your own files, which is useful for papers, documentation, etc. The Kindle DX will display PDFs directly and, as I mentioned above, doesn't need to reformat them the way the smaller Kindle does. All you need to do here is download the PDF onto your computer, then plug the Kindle in via the USB cable, where it appears as a hard drive. You just copy the files into the Kindle's documents folder and they're available as soon as you unmount the Kindle.

The major drawback here is organizational. The basic Kindle interface is just that you have a pile of documents which get sorted in some order (like most recent). They recently added a "collections" feature where you can tag documents as belonging to a particular collection, but the UI for this, like all the UI, is clunky, and there's no official way to manage it from your computer. (More on how to make this work later).


August 2, 2010

I do a lot of reading and I'd been thinking for a while of buying some kind of e-book reader. The advantages (light weight, no need to compromise on selection, etc.) were obvious, but so were the drawbacks (lack of content portability, no real ability to mark up), so I'd been holding off in hopes that something better would come along. But faced with 30,000+ miles in 6 weeks, I just couldn't face lugging my usual pile of books and decided I'd better do something.

I do two major kinds of reading on the road: books and papers. Any of the major devices seems to do reasonably well on books, though there is some variation in how extensive the available library is. For papers, what I really want is the ability to copy over my own PDFs and then mark them up on the device; my existing workflow was to print everything out, mark it up, and then transcribe from the marked up ms., so ideally I would want the same workflow on a device. I don't need any kind of OCR, just to be able to see my marks (which are mostly circles, strikeouts, etc. anyway) and transcribe them. Unfortunately, none of the major devices seem to have this kind of capability, and while there are some fringy devices that seem to (e.g., the iRex), they're expensive and I didn't want to deal with some device that nobody else had and that I couldn't try before I bought.1

This left three major options:

  • An iPad
  • The Kindle (either regular or DX)
  • The Sony reader

I never even really considered the Sony reader. I do a lot of buying from Amazon and it just seemed convenient to have something integrated with an existing popular library. Also, I'd seen early Sonys and wasn't that impressed. This may have been a mistake, but that was my decision process.

This left iPad versus Kindle. I tried a friend's first generation Kindle on one trip and was pretty impressed with the battery life and general readability. The UI is pretty kludgy but eventually I got used to it. Obviously, the regular Kindle is nowhere near large enough to display a full page of text, but if I couldn't annotate onscreen I thought it might be worth sacrificing a full page view for size. Ultimately, though, I tried a DX and was pretty impressed with its usability, and concluded that there were lots of times when I would want to read papers and do only light editing, if any, where the DX would work well. In the end, I bought one of the new Graphite DXs.

So, why didn't I buy an iPad? Obviously an iPad is a far more capable device, but I already have a Macbook Air, so if I want to play games or watch movies, Apple has already sold me a perfectly good general computing device which isn't annoyingly handcuffed to their App store, so that extra capability doesn't buy me a lot. The iPad also has a number of drawbacks. The screen is bright and clear but in terms of readability for a long book I prefer the matte unlit Kindle display (though of course the e-ink display lag is annoying). Also, the iPad is quite a bit heavier (about 4oz/20%) and the battery lifetime is significantly worse. I used my Kindle quite heavily over a week with the wireless on much of the time and only ran out of battery at the very end. This doesn't match what I hear about people's iPad experiences.

Finally, there's the issue of price: the Kindle DX is $389 and the bare bones iPad with 3G is $699, but then you have to pay for the data plan. By contrast, you can use the Kindle internationally for free; I had several books wirelessly delivered to me in Holland and didn't even think about the cost. This is a huge advantage for me, since it's precisely in settings where I don't want to pay hefty 3G roaming fees that I most want to be able to read for free. And of course you can use the Kindle as a free (bad) Web browser if you get desperate enough.

All in all, I'm reasonably happy with the Kindle (full review to come later) though I wouldn't have paid $600 for it. If a device that lets me mark up directly appears, I'd definitely seriously consider that (heck, if there is one now, I'm still within my 30 day return window) but in the meantime the Kindle seems like a reasonable compromise.

1. A friend of mine recently attempted to order an iRex and reports that it's more or less eternally back-ordered. After talking to me he decided on a DX.