EKR: September 2009 Archives


September 28, 2009

My comments can be found here. You may also be interested in ACCURATE's comments which can be found here.

September 27, 2009

[See here]

S 5.2.2 requires that systems be written in programming languages which support block-structured exception handling:

The above requirement may be satisfied by using COTS extension packages to add missing control constructs to languages that could not otherwise conform. For example, C99[2] does not support block-structured exception handling, but the construct can be retrofitted using (e.g.) cexcept[3] or another COTS package.

The use of non-COTS extension packages or manufacturer-specific code for this purpose is not acceptable, as it would place an unreasonable burden on the VSTL to verify the soundness of an unproven extension (effectively a new programming language). The package must have a proven track record of performance supporting the assertion that it would be stable and suitable for use in voting systems, just as the compiler or interpreter for the base programming language must.

One could interpret this requirement as simply being that the language must support this functionality, not that it be used, in which case the requirement is unobjectionable. However, S 5.2.5 makes clear that programmers are expected to actually use exception constructs, which is significantly more problematic.

The first issue is that an exception-oriented programming style is significantly different from an error code-oriented programming style, so complying with the spirit of this requirement implies a very substantial rewrite of any existing system which uses error codes. However, experience with previous VVSG programming style requirements suggests that vendors will do the minimum required to comply with the letter of the VVSG. Because the two styles are similar enough that the conversion process can be done semi-mechanically, the likely result will be a program which superficially works but which now has subtle bugs which were introduced during the conversion process.

One class of bugs that deserves particular attention is the proper cleanup of objects before function exit. When a function creates a set of objects and then encounters an error, it is important to clean up those objects before returning with the error. Failure to do so can leak memory and, more importantly, lead to improper finalization of objects which may be connected to non-memory resources such as network connections, hardware, etc. In languages such as Java, this is handled by garbage collection, but C is not garbage collected. Thus, when an exception is thrown from a function deep in the call stack and caught in a higher function, it bypasses all explicit cleanup routines in intermediate functions, which can lead to serious errors. Handling this situation correctly requires extreme care comparable to that required with conventional error handling. Writing correct code under these circumstances is challenging under the best of conditions, but is likely to be impractical under conditions where programmers are required to convert existing error code-based software.

While it might be the case that a completely new system written along exception-oriented lines would be superior, I am aware of no evidence that a retrofitted system would be superior and there is a substantial risk that it will be worse.

The second issue is that because C has no native exception handling, systems written in C will need to use a COTS package. Unfortunately, because exception handling is not a native feature of C, any attempt to retrofit it involves tradeoffs. As an example, the cexcept[3] package cited above does not support conditional exception handling. With C++ exceptions, it is possible to have a catch statement which only catches some exceptions, e.g.,

   try {
      // code that may throw
   }
   catch (memory_exception &e) {
      // handle memory errors
   }
   catch (data_exception &e) {
      // handle data errors
   }
   // Other exceptions get passed through
But in cexcept, a Catch statement catches exceptions of all types, and you need to use an explicit conditional in order to discover which exception was thrown. This creates much the same opportunity to ignore or mishandle unexpected exceptions that error codes do.

Another problem with cexcept is that it is very brittle whenever exception handling is intermixed with conventional error handling. Any function which jumps out of a Try/Catch block (e.g., via an early return) can result in "undefined" behavior (i.e., the implementation can do anything whatsoever). This, of course, is an easy mistake to make when converting from return codes to exceptions.

cexcept is not, of course, the only C exception package. For instance, Doug Jones has developed a different exception package, which makes different tradeoffs (though the above intermixed exception/return problem seems to exist here too).

Third, the use of the term "COTS" to cover these packages seems to require a fairly loose definition of COTS. While it is true that there are a number of exception packages available for free download, it is not clear to what extent they are in wide use by programmers. In my experience as a professional programmer, I have yet to work on a system written in C which used one of these packages. As the stated purpose of the COTS requirement is to ensure that the packages have seen enough deployment that we can have confidence in their quality, it seems questionable whether any of the available packages meet this standard.

I'm watching Mega Shark versus Giant Octopus and I just saw a huge shark jump out of the water and eat an airplane. Did I mention that this movie stars Lorenzo Lamas and Debbie Gibson (yes, that Debbie Gibson)? The best news? It's available on Netflix instant play.

September 26, 2009

Last week my local public radio station (KQED) was fund raising again, and they've introduced a new gimmick where Ira Glass from This American Life calls up some poor listener and hassles them about why they're not donating to public radio. I've done a fair bit of thinking about this topic, so I'd certainly be more than willing to receive such a call.

I think the facts of the situation are fairly well agreed upon:

  1. Public radio stations produce programs and broadcast them over the airwaves.
  2. The programs are available to anyone with a radio receiver (non-excludable).
  3. My choice to listen to a given program does not increase the station's costs or interfere with anyone else's ability to listen (non-rivalrous).
  4. The station has a program where people can donate money, but only a relatively small fraction (10% is the number they claim) of listeners actually do so.
  5. The station sells some advertisements (they call it underwriting) and receives some government funding, but neither of these is sufficient to cover their costs.

I don't think there's any real controversy about the above; the only question is what these facts imply. KQED's implicit (and sometimes explicit) argument is that if you listen to (and presumably enjoy; an economist would say these are the same thing) their programming, then you have an obligation to donate. [Actually, they drift between this and the argument that you should give them as much as you would be willing to pay for the service, but that's pretty clearly crazy; it's hard to see what the ethical justification is for them capturing the entire consumer surplus.] They recognize a narrow exception if you can't afford it, but let's stipulate for the moment that you can.

I don't find this argument particularly convincing. Actually, it's not really an argument, just simple assertion: you use our free service, which wouldn't exist without people chipping in, so you have an obligation to chip in. Even so, we can plug other values into this equation to see how it holds up.

The wide availability of computer networks has brought us a huge number of products and services that are offered on a free basis: much of the software that runs the Internet is distributed for free, as are many web sites, blogs, etc. In many cases, those services are partially supported by grants and advertising, but in almost all of those, this support is nowhere near enough to cover all the costs, which are typically donated either explicitly or implicitly by the people providing the service. So, here we have a very similar situation (points 1-5 all apply) and yet my experience is that practically nobody donates money to support projects like this (for instance, I don't ask you to hand over money to read EG) and people generally don't feel like they have any obligation to do so.

So what's the difference? As far as I can tell, the principal way in which public radio is different from Web sites and free software projects is that in the latter two cases, the people providing the service have day jobs and are donating their time instead of expecting to be paid for it. This allows them to keep their costs a lot lower so the activity doesn't absolutely depend on donations to keep it operational. By contrast, because the people in public radio have this as their job, their costs are a lot higher. Maybe I'm missing something but it's not clear to me why the desire of public radio employees to get paid rather than working for free somehow creates an obligation on my part to fund their lifestyle.

Oh, one more thing: KQED's web server runs on Apache; I wonder if they've donated any money to the Apache Software Foundation. I don't see them on the sponsorship page (minimum placement $5K). Do you think we can get someone from ASF to give Ira Glass a call?

Full Disclosure: This is a little bit of a cheap shot, but only a little bit; Chicago Public Radio, which does This American Life, runs IIS, but this ad is airing on KQED, so I don't think it's a totally unfair point.


September 23, 2009

Nominum is introducing a new "cloud" DNS service called Skye. Part of their pitch for this service is that it's supposedly a lot more secure. Check out this interview with Nominum's John Shalowitz where he compares using their service to putting fluoride in the water:
In the announcement for Nominum's new Skye cloud DNS services, you say Skye 'closes a key weakness in the internet'. What is that weakness?

A: Freeware legacy DNS is the internet's dirty little secret - and it's not even little, it's probably a big secret. Because if you think of all the places outside of where Nominum is today - whether it's the majority of enterprise accounts or some of the smaller ISPs - they all have essentially been running freeware up until now.

Given all the nasty things that have happened this year, freeware is a recipe for problems, and it's just going to get worse.


What characterises that open-source, freeware legacy DNS that you think makes it weaker?

Number one is in terms of security controls. If I have a secret way of blocking a hacker from attacking my software, if it's freeware or open source, the hacker can look at the code.

By virtue of something being open source, it has to be open to everybody to look into. I can't keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker.

By its very nature, something that is freeware or open source [is open]. There are vendors that take a freeware product and make a slight variant of it, but they are never going to be ever able to change every component to lock it down.

Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.

First, I should say that I don't have any position on the relative security of Nominum's software versus the various open source DNS products. With that said, I'm not really that convinced. The conventional argument goes that it's harder for attackers to find vulnerabilities in closed source software because it's harder to work with the binaries than the source. This is a proposition which I've seen vigorously argued but for which there isn't much evidence. Now, it's certainly true that if nobody can get access to your program at all, then it's much harder to figure out how it works and how to attack it. However, Nominum does sell DNS software, so unless the stuff they're running on Skye is totally different, it's not clear how much of an advantage this is.

Shalowitz also argues that being closed source lets him hide "secret way[s] of blocking a hacker from attacking my software". This seems even less convincing, primarily because it's not really clear that such techniques exist; there's been a huge amount of work on software attack and defense in the public literature, so how likely is it that Nominum has really invented something fundamentally new? And if you did in fact have such a technique, but one that's only secure as long as it's secret, then it's far more vulnerable to reverse engineering than programs ordinarily are, since the attacker just needs to reverse engineer it once and it's insecure forever. By contrast, if attackers reverse engineer your program to find a vulnerability, you can close that vulnerability and then they need to find a new one.

Again, this isn't to say that Nominum's system is or isn't more secure than other DNS servers (though DJBDNS, for instance, has a very good reputation). I don't have any detailed information one way or the other. However, this particular argument doesn't seem to me to establish anything useful.


September 20, 2009

Transportation to and from the JMT seems like a perennial problem for people. The basic issue here is that JMT is a point-to-point trip and so if you want to drive, you end up leaving your car at Whitney or Yosemite. Here are your major logistical options:
  1. Get someone to drop you off and pick you up.
  2. Drop your car off at the finish and have someone shuttle you to the start. I suppose you could do the reverse too.
  3. Take mass transit to the start and back from the end.
  4. Drive to the start and take mass transit back.

The first two options require having friends, so I was left with the latter two. I originally intended to take mass transit both ways, but due to an airline scheduling screwup (details here), I didn't have time to take transit and had to drive out to Yosemite. I figured that maybe I could bum a ride back from Whitney from some other hikers and, in the worst case, I could take transit back.

I really do mean the worst case here, since the transit situation is pretty grim. The nearest trailhead to Whitney is at Whitney Portal, which is basically just a parking lot, campground, and store in the middle of nowhere. The nearest town to Whitney Portal is Lone Pine, 11 miles away. The official story about getting from Lone Pine to Yosemite Valley is that you take CREST from Lone Pine to Mammoth (leaves Mon, Tue, Thur, Fri at 6:15 AM, arrives Mammoth at 8:20 AM). You then take YARTS from Mammoth to Yosemite Valley (leaves 7:00 AM, arrives 10:55 AM). Yes, you're reading that right: CREST arrives in Mammoth 80 minutes after YARTS leaves for Yosemite. Now, I've never personally been to Mammoth and I hear it's kind of nice, but after a couple of weeks on the trail, I suspect most people are ready to get home—I know I was—so this doesn't seem like a real attractive prospect, and I went looking for alternatives. First, I'll tell you what I did and then I'll tell you what I probably should have done.

You may have noticed that I've left something out of our little trip. When you get off the trail you're still stuck at Whitney Portal and you need to get to Lone Pine. There are shuttle services, but really this part is pretty simple; the road to Whitney Portal (cleverly named Whitney Portal Road) from 395 runs right through the center of Lone Pine, so more or less anyone leaving Portal is going to go to Lone Pine. A decent fraction of them have room in their cars and are happy to pick up backpackers, so hitch-hiking this section is easy. That said, I was hoping that I could get a ride to Yosemite or Mammoth from someone who was executing the aforementioned option 2. I hadn't met anyone who fit the bill, but I figured some people might come off the trail so I made a "Yosemite/Mammoth" sign and stood by the exit to Portal for an hour or so. After about 10 people had stopped and offered to take me to Lone Pine, I got the idea, put down my sign and stuck out my thumb and got a ride within 5 minutes.

This got me as far as the Whitney Portal Hostel, but again I was hoping to avoid two overnights (one in Lone Pine and one in Mammoth). One of the other people staying at the hostel suggested I try hitchhiking [remember, 395 is the main strip in Lone Pine]. After a few minutes with my sign, I ran into a woman pushing her kid in a stroller who said that her family was driving to Tuolumne the next morning and they could probably take me. After some negotiation with her and her husband, we were on. I met them at their hotel the next morning and they dropped me off at the Tuolumne store around 11 AM. Now remember, I really wanted to be in Yosemite Valley, which isn't exactly an easy walk from Tuolumne. Luckily, there is a bus that leaves from the store at 2:15. This isn't a terrible option but it still leaves me sitting around for three hours, so I figured why not try to hitch it the rest of the way? Five minutes with a marker and a cardboard box and I had a new sign and was standing out on the road. A few minutes later, another hiker who had come off the JMT the same day as I had walked over. She helped me improve my sign (with a better pen) and we chatted as people drove by without picking me up. After 45 minutes or so a van full of Italian tourists (I hear Europeans are better about this than Americans) came by and picked us both up, taking us to one of the Valley parking lots by around 2 PM. From there, you can take a quick shuttle to the trailhead parking and your car.

Anyway, what I learned from the other hiker was this: don't take the bus to Mammoth. Instead, take the same bus all the way to Lee Vining (a town right near Tioga Pass and the entrance to Yosemite). Because Lee Vining is more or less at the intersection of 395 and 120, it's easy to hitch a ride from people driving into Yosemite from the East Side; she said that she spent about 5 minutes waiting before she had a ride into Tuolumne. Even if you can't hitch a ride, the worst case scenario is that you take the same YARTS bus the next morning into Yosemite Valley, so aside from having to spend the night in exciting Lee Vining instead of Mammoth, you're no worse off. Of course, this advice only makes sense if you don't mind hitch-hiking; I usually wouldn't be willing to, but it's common enough near Yosemite that you don't have to feel too awkward and I generally feel like I can take care of myself. Your mileage may of course vary.

UPDATE: The CREST bus arrives in Mammoth after, not before, the YARTS bus leaves. Thanks to Eu-Jin Goh for pointing this out.


September 19, 2009

A number of political blogs (e.g., Obsidian Wings, Matthew Yglesias, etc.) seem to have a problem with comment impersonation. The general pattern is that someone will show up and post something more or less blatantly offensive under the name of a well-known commenter. This is then followed by a series of posts asking "was that really John or just a comment spoofer?" "Can someone check/block their IP?", and often eventual removal of the offending comment, leaving everyone confused about what the fuss is about.

Obviously, the underlying source of the problem is that most blog software has completely open commenting: you don't need to register and you can provide any identity you want and it will just accept it. This is convenient if you regularly post from random machines, but makes this sort of impersonation trivial. The natural "security guy" defense here is of course to require actual user authentication [this seems to be supported by most blog software], but that's really overkill for this situation, where all we really want to do is stop random people from impersonating random other people. Here, then, is my suggestion for a small set of changes which would make most casual impersonation very difficult:

  1. The first time a given identity is used, record the IP from which it is used and install a cookie in the browser which is used to make the comment.
  2. In future, restrict use of that identity to requests which either come from that source IP or present the right cookie.
  3. If you see a request from a different source IP that presents the right cookie, add that source IP to the whitelist.
  4. If you see a request from a whitelisted source IP without a cookie, install the cookie.
  5. Have a manual mechanism (e.g., e-mail challenge response) for allowing a new computer to post comments under an existing name.

This isn't perfect in a number of respects. First, it doesn't provide perfect security. For instance, if I ever post from a hotspot (which generally has a NATted network) anyone else from that hotspot will be able to post as me. However, that seems relatively unlikely given the form of attack which we're generally seeing here, which is mostly trolls trying to disrupt the conversation. The second problem, of course, is that it's a little inconvenient if you have multiple computers, but even people who do post from multiple computers generally only have a few and those would quickly be whitelisted. The big advantage of this scheme is that it provides reasonable deterrence against a common attack and is generally pretty transparent to most users. We don't have a comment impersonation problem here on EG, and I'm too lazy to implement it for the public good, but I'm a little surprised that hosting services like Typepad haven't implemented something similar.


September 18, 2009


September 15, 2009

Ed Felten writes about the problem of fleeing voters:
Well designed voting systems tend to have a prominent, clearly labeled control or action that the voter uses to officially cast his or her vote. This might be a big red "CAST VOTE" button. The Finnish system mistakenly used the same "OK" button used previously in the process, making voter mistakes more likely. Adding to the problem, the voter's smart card was protruding from the front of the machine, making it all too easy for a voter to grab the card and walk away.

No voting machine can stop a "fleeing voter" scenario, where a voter simply walks away during the voting process (we conventionally say "fleeing" even if the voter leaves by mistake), but some systems are much better than others in this respect. Diebold's touchscreen voting machines, for all their faults, got this design element right, pulling the voter's smart card all of the way into the machine and ejecting it only when the voter was supposed to leave -- thus turning the voter's desire to return the smart card into a countermeasure against premature voter departure, rather than a cause of it. (ATM machines often use this same trick of holding the card inside the machine to stop the user from grabbing the card and walking away at the wrong time.) Some older lever machines use an even simpler method against fleeing voters: the same big red handle that casts the ballot also opens the curtains so the voter can leave.

I was at the Fidelity office in Palo Alto today and I noticed an ingenious solution to a related problem: fleeing customers. Their investment terminals have dead-man switches; well, mats:

The way that this works (apparently) is that there's a pressure sensitive mat in front of the terminal, positioned so that you need to (or at least it's really inconvenient not to) stand on the mat in order to use the terminal. When you step off the mat to walk away, the terminal logs you out, so there's only a minimal window of vulnerability where you're logged in but not present. Now, obviously, a real attacker could tamper with the mats to keep you logged in, but this seems like a pretty good safeguard against simple user error being exploited by subsequent customers. You could imagine building a similar safeguard into voting machines, where the machine rings some alarm if you step away.

UPDATE: Fixed blockquote...


September 14, 2009

David Coursey complains about how long it took IEEE to develop 802.11n:
802.11n is the poster child for a standards process gone wrong. Seven years after it began and at least two years after 802.11 "draft" devices arrived, the IEEE has finally adopted a final standard for faster, stronger, more secure wireless.

Ideally, standards arrive before the products that implement them. However, the IEEE process moved so slowly that vendors adopted a draft standard and started manufacturing hardware. After a few little glitches, the hardware became compatible and many of us have--for years--been running multivendor 802.11n networks despite the lack of an approved standard.


If standards bodies expect to be taken seriously, they need to do their work in reasonable periods. Releasing a "final" standard long after customer adoption has begun is not only anti-climatic but undercuts the value of the standards making process.

In this case, the process failed. The IEEE should either improve its process or get out of the way and let industry leaders just create de facto standards as they see fit. That is not preferable, but if the IEEE process is stuck, it will be what happens.

My experience with IEEE standards making is limited, but I have extensive experience with IETF's process, and I'm a little puzzled as to what Coursey thinks the problem is here. Developing standards is like developing any other technical artifact: you start out with an idea, do some initial prototypes, test those prototypes, modify the design in response to the testing, and iterate till you're satisfied. Now, in the case of a protocol standard, the artifact is the document that defines how implementations are supposed to behave, and the testing phase, at least in part, is implementors building systems that (nominally) conform to the spec and seeing how well they work, whether they interoperate, etc. With any complicated system, this process needs to include building systems which will be used by end-users and seeing how they function in the field. If you don't do this, you end up with systems which only work in the lab.

There's not too much you can do to avoid going through these steps; it's just really hard to build workable systems without a certain level of testing. Of course, that still leaves you with the question of when you call the document done. Roughly speaking, there are two strategies: you can stamp the document "standard" before it's seen any real deployment and then churn out a revision a few years later in response to your deployment experience. Alternately, you can go through a series of drafts, refining them in response to experience, until eventually you just publish a finished standard, but it's based on what people have been using for years. An intermediate possibility is to have different maturity levels. For instance, IETF has "proposed standards", "draft standards", and then "standards". This doesn't work that well in practice: it takes so long to develop each revision that many important protocols never make it past "proposed standard." In all three cases, you go through mostly the same system development process, you just label the documents differently.

With that in mind, it's not clear to me that IEEE has done anything wrong here: if they decided to take the second approach and publish a really polished document, and 802.11n is indeed polished enough that the document won't need a revision for 5+ years, then this seems like a fairly successful effort. I should hasten to add that I don't know that this is true: 802.11n could be totally broken. However, the facts that Coursey presents sound like pretty normal standards development.


September 13, 2009

One of the results of Joe Wilson (R-South Carolina) calling President Obama a liar on national TV was that money started pouring in, both to Wilson and his likely opponent in 2010 (Rob Miller). Piryx, who hosts Wilson's site, claims that on Friday and Saturday they were subject to a 10 hour DoS attack against their systems:
Yesterday (Friday) around 3:12pm CST we noticed the bandwidth spike on the downstream connections to Piryx.com server collocation facility. Our bandwidth and packet rate threshold monitors went off and we saw both traditional DOS bandwidth based attacks as well as very high packet rate, low bandwidth ICMP floods all destined for our IP address.

...At this point we have spent 40+ man hours, with 10 external techs fully monopolized in researching and mitigating this attack.

To give a sense of scale, the attacks were sending us 500+ Mbps of traffic, which would run about $147,500 per month in bandwidth overages.

I think most people would agree that technical attacks on candidates' Web sites, donation systems, etc. aren't good for democracy—just as it would be bad if candidates were regularly assassinated—and it would be good if they didn't happen. While there are technical countermeasures against DoS, they're expensive and only really work well if you have a site with a lot of capacity so that you can absorb the attack, which isn't necessarily something that every HSP has.

This may turn out to be a bad idea, but it occurred to me that one way to deal with this kind of attack might be for the federal government to simply run its own HSP, dedicated solely to hosting sites for candidates and to accepting payments on their behalf. Such a site could be large enough (though still small compared to big service providers) to resist most DoS attacks. Also, to the extent to which everyone ran their candidate sites there, it would remove the differential effect of DoS attacks: sure, you can DoS the site, but you're damaging your own preferred candidate as much as the opposition. Obviously, this doesn't help if the event that precipitates the surge of donations massively favors one side, but in this case, at least, both sides saw a surge. I don't know if this is universally true though.

Of course, this would put the site operator (either the feds or whoever they outsourced it to) in a position to know who donated to which candidate, but in many cases this must be disclosed anyway, and presumably if the operation was outsourced, one could put a firewall in to keep the information not subject to disclosure away from the feds.


September 10, 2009

I've finally managed to get my JMT trail pictures up. You can find the gallery here. Each picture has meta-data indicating when it was taken, so you can work out some of it by reference to my itinerary, except that all the dates are off by one day (i.e., the pictures allegedly taken on the 14th were really on the 13th). I was just shooting semi-randomly, but here are some good ones if you're short on time:


September 9, 2009

Eu-Jin Goh pointed out to me that Patagonia is cancelling their relationship with Sigg:
Patagonia formally announced on September 4th that it would terminate all co-branding and co-marketing efforts with SIGG, Inc. It has come to Patagonia's attention from recent news reports that a Bisphenol A (BPA) epoxy coating was used in most aluminum SIGG bottles manufactured prior to August 2008, despite earlier assurances from SIGG that the liners of their bottles did not contain BPA. Bisphenol A is a chemical that Patagonia does not support the use of in consumer products, hence the company has terminated its co-branding relationship with SIGG. In addition, Patagonia is ceasing the sale of SIGG bottles in its stores, as well as through its catalog and on-line distribution.


"We did our homework on the topic of BPA, going all the way back to 2005 when this subject first emerged in discussions in scientific journals" Rick Ridgeway, Patagonia's VP of environmental initiatives states. "We even arranged for one of the leading scientists on BPA research to come to our company to educate us on the issue. Once we concluded there was basis for concern, we immediately pulled all drinking bottles that contained BPA from our shelves and then searched for a BPA-free bottle. We very clearly asked SIGG if there was BPA in their bottles and their liners, and they clearly said there was not. After conducting such thorough due diligence, we are more than chagrined to see the ad that is appearing in Backpacker, but we also feel that with this explanation our customers will appreciate and understand our position."

The last paragraph is the most interesting for me. In Sigg's public statements, it seems like they were mostly evasive, but it would be interesting to know if they flat-out lied to Patagonia. I'm starting to think Sigg may take a pretty big hit here: people bought their product because they were trying to get away from BPA, and they seem more upset with Sigg than with Nalge, who never denied their product had BPA in it but just kept saying it was OK until they finally caved and brought out a non-BPA bottle. So, even though it seems like there was more BPA risk from Nalgene (if you believe the studies), people seem angrier at Sigg because they feel like Sigg wasn't honest.

Also, check out this interview (also via Eu-Jin) with Adam Bradley, who just set a new record for thru-hiking the Pacific Crest Trail (from Mexico to Canada).


September 7, 2009

Brett Maune just shattered the unsupported JMT speed record, going from Whitney Portal to Happy Isles in 3 days, 14 hours, and 13 minutes. By the way, this also breaks the supported record, held by Sue Johnston.

September 6, 2009

You may recall that a year ago, when people started to have serious concerns about Bisphenol-A, Sigg was supporting research on BPA leaching and generally marketing themselves as a safer alternative to BPA-based polycarbonate bottles such as Nalgene. [Note: Nalgene stopped selling polycarbonate bottles.] During this time period, Sigg was fairly evasive about the exact construction of their bottles. As I wrote last August:
Sigg bottles (yes, the ones that look like fuel bottles) are a backpacking standard and have had a resurgence since people went off Nalgene. They're aluminum, not stainless, with a plastic cap. Because of concerns over aluminum leaching into your drink, they're coated with some unspecified (but they swear it's safe!) proprietary enamel-type coating. I like the Sigg a lot better than the Klean Kanteen, but it's not perfect.

It's recently become clear why they were evasive: the coating on the bottles contained BPA. They've replaced the coating with a new co-polyester based "EcoCare" coating. (I'm curious if this is the same material used in the new Nalgene bottles.) Sigg's defense is that they never said anything untrue: they claim (and this is supported by a Sigg-sponsored study) that the old bottles contained BPA but didn't leach BPA. I'm not sure how seriously one needs to take this: if you believe Sigg's study, then the bottles really don't leach BPA, but on the other hand, I have one Sigg bottle and the liner seems to be wearing away near the cap, which isn't encouraging in terms of feeling like you're not ingesting anything. Anyway, if you decide you want to replace your bottles, Sigg is currently running an exchange program; you ship them your bottles and they somehow replace them. On the other hand, I bought my Sigg at REI, so it's easier for me to return it there.

Acknowledgement: Thanks to Eu-Jin Goh for pointing out this story to me.


September 5, 2009

Probably the two pieces of backpacking gear where fit is most important are your pack and whatever you wear on your feet. In both cases, the gear is an interface between your body and a heavy load, so it's important to have something that works for you or you're likely to end up in serious discomfort. Back in the old days, everyone used to wear hiking boots, but as lightweight backpacking has started to take off, it's become a lot more popular (and more practical) to wear something lighter, generally some sort of trail runner. I've always worn hiking boots, but this time I decided to transition to trail runners. Since I already had experience with them, I decided to go with Inov-8 Roclite 295s: I've worn these for plenty of trail miles and I know they fit well and are comfortable. They wear fast, though, so I bought a new pair and just lightly broke them in before my trip.

I had two major concerns about transitioning to a trail shoe: ankle protection and water resistance. One of the claimed benefits of a hiking boot is that the high top protects your ankles, but after my most recent trip to Emigrant Wilderness my ankles were still pretty beat up in my boots, so I figured trail shoes weren't likely to be much worse. A few short hikes in them seemed to confirm that. My second concern was water resistance. Like many hiking boots, mine are Gore-Tex lined and so waterproof, at least until you step into water above the top of the boot. The Inov-8s are largely mesh and so not water resistant at all. I considered getting a Gore-Tex trail shoe, but the problem with those is that they don't drain, and since a low shoe increases the chance you'll step into water above the top of the shoe, I figured it was better to have a mesh shoe that drains fast. I also brought a pair of VFFs for stream crossings and for use as a camp shoe.

As far as socks go, standard procedure is to wear two pairs: a liner sock and a thick hiking sock, but with a shoe this light I decided to skip that and just wear Injinji Tetrasoks. I've worn these for plenty of runs and races and know they're comfortable and wanted to give my feet some space to breathe. I initially brought two pairs of Injinjis and one of hiking socks as a backup, but I never wore the hiking socks and traded them in for Injinjis at Muir Trail Ranch.

Overall, this system worked out moderately well. While I was initially worried about the water issue, it turned out not to be a problem. On day 4 or so I stepped ankle deep in a stream and it just wasn't that bad: my feet dried quickly and I was comfortable enough that I didn't feel like I needed a water shoe. Unlike on other trips I've done, my feet didn't feel horribly beaten up at the end of the day, and I found myself just wearing the Inov-8s without socks and unlaced to walk around camp. I never wore the VFFs, and when I got to Muir Trail Ranch I shipped them home: no point in carrying an extra 300g of useless shoe. The Injinjis got dirty fast, but I was able to wash them in streams and keep them from getting too filthy.

I said I was reasonably comfortable, but I did experience two problems. First, by day 7 or so, some combination of rocky terrain forcing constant pronation and supination, fatigue, and maybe just being poked by the occasional rock was starting to wear on me, and the outside of both feet started to hurt mid-metatarsal. I was worried this would be trip-ending, but keeping up a high load of naproxen and wrapping a couple of strips of tape around each foot seemed to relieve the pain enough that I only got occasional twinges if I really stepped wrong. This was uncomfortable but not fatal, and after the 9th day I was no longer seriously worried about it killing the trip—two weeks later my right foot still hurts, though, so I'll have to see how long it takes to recover. It's hard to know if this would have been a problem in hiking boots, since it only happened after a week or so and I've never been out that long before.

The second problem is that the Injinjis wear fast, and by days 9 and 10 the pair I was wearing had gotten so threadbare that I got a blister on the ball of my right foot. This was my only blister the entire trip and I just drained it and kept going, so overall this was very minor. Still, it serves as a reminder that you need to pay attention to your sock wear, and in the future I might bring one more pair of socks.

All things considered, I don't think I'd go back to boots. They're less comfortable, at least for short trips, and the weight penalty is just too extreme. However, I might try out other trail shoes or experiment to see why I started to develop foot problems towards the end. I should also mention that I beat up the Inov-8s pretty badly—200-300 miles is about normal for a trail shoe, and the soles on these had worn pretty far down and the synthetic leather part of the uppers was starting to peel off the mesh. I suspect another week and they might have started to fall apart on me. Even as things were, I had to replace the Engo patches that stopped me from getting heel blisters. I'm not complaining here: it's just something you'd need to keep an eye on if you were doing a lot of backpacking in lightweight shoes.


September 3, 2009

On my way to lunch Wednesday, I stopped by Barefoot Coffee to pick up some beans. As I was checking out, the cashier asked me if I'd like a free coffee or espresso (this is pretty standard with bulk bean purchases). I'd already had way too much caffeine that day, but I'm not one to turn down a free espresso, so I said sure and slid down the counter to wait. The place was pretty packed and the baristas were backed up, so I sat there for about 10 minutes getting increasingly antsy but unwilling to just walk away (Daniel Kahneman, call your office).

Eventually my espresso came up, but at that point Brian had been waiting for me in the car for about 15 minutes and I figured he was starting to get antsy, so I asked for my drink in a paper cup, only to be told "We don't do that. It kills it." Now, first, I strongly suspect this of being mostly BS: the objection seems to be that you lose the crema, but that mostly stays stuck to the side of the ceramic cup anyway. Even if it were true, I'm the customer, and if I want to ruin my own espresso, that seems to be my right. Had I had the presence of mind, I would have told them to pour it into one of the 12 oz paper cups they had for drip coffee, but instead I grumbled something about having to go and they told me I could leave the cup on the table outside (what a huge concession!).

In the future, I'll just be ordering from Blue Bottle, which is cheaper [if you have it shipped] and arguably better. Also, their Web site won't lecture me on how I should drink my coffee.