EKR: October 2009 Archives

 

October 31, 2009

Another Halloween, another bowl full of candy, and another night when only a few kids show up. Tonight we had 3 groups of kids, each in the 5-7 range. It's not clear to me why Halloween in the Bay Area is so lame; when I was a kid and went out trick-or-treating, the streets would be thick with other trick-or-treaters to the point where you actually had to wait in line at some houses. Here, though, the streets are empty, and in a good year we'll see maybe 5 groups. One of my friends lives in San Francisco and says that in the 10 years he's been there he's never had anyone come by. I would understand if this were some crime-ridden area, but I actually live in suburban Palo Alto, about 5 blocks from the local elementary school. It's a totally safe neighborhood with lots of kids living nearby. Moreover, this year Halloween is on a Saturday, so you'd think it would be an especially big night.

One theory—popular with the denizens of Fark—is that overprotective parents have ruined trick-or-treating. See, for instance, this article and the related Fark thread. However, this doesn't seem to be correct: no less authoritative a source than the National Confectioners Association reported in 2005 that 95% of children intended to go out. I would be interested in hearing reports from people in other parts of the country about their trick-or-treater flow rates.

Another explanation is that it's a collective action problem: it's only worth trick-or-treating when enough houses dispense candy to make it worthwhile. [I've written about this before]. Similarly, it's only worth dispensing candy when enough kids come by: I was invited to several Halloween parties and stayed home to give out candy instead, but I'm not sure I would do that again. So, if you have a neighborhood which is in a no-trick-or-treating equilibrium, it's hard for trick-or-treating to get restarted. I'm sure there's some effect here. For instance, Belvedere Street in San Francisco has a huge party every year, with the result that kids come by from across the city and residents put in huge amounts of effort decorating their houses. On the other hand, when I went out at 5:30 to pick up some more candy (ending up with way more than I needed), I had to stand in line at the register, so obviously people are giving out candy, which means I don't have a good explanation.

 

October 19, 2009

Apparently Coke is introducing a new 7.5 oz Coke "mini-can" and William Saletan thinks it's a bad idea or dishonest, or something:
These messages sound a lot like what tobacco companies said when they introduced light cigarettes. According to a 2001 U.S. government report, internal documents obtained from tobacco companies
reveal the industry's efforts to produce cigarettes that could be marketed as acceptable to health-conscious consumers. Ultimately, these low-tar/low-nicotine cigarettes were part of the industry's plan to maintain and expand its consumer base. ... [T]obacco companies set out to develop cigarette designs that markedly lowered the tar and nicotine yield results as measured by the Federal Trade Commission (FTC) testing method. Yet, these cigarettes can be manipulated by the smoker to increase the intake of tar and nicotine. The use of these "decreased risk" cigarettes [has] not significantly decreased the disease risk. In fact, the use of these cigarettes may be partly responsible for the increase in lung cancer for longterm smokers who have switched to the low-tar/low-nicotine brands. Finally, switching to these cigarettes may provide smokers with a false sense of reduced risk, when the actual amount of tar and nicotine consumed may be the same as, or more than, the previously used higher yield brand.

Coca-Cola's promotional video for its mini cans delivers a similar pitch. It features Jan Tilley, a "registered dietitian" and consultant to beverage companies. "The new 90-calorie mini-can is a great way for people to enjoy the taste of Coca-Cola that they love, while still managing their calorie intake," says Tilley, smiling all the way:

The size of the packaging really reinforces moderation. ... Part of maintaining a healthy lifestyle is not feeling deprived. ... The new 90-calorie mini-can is a great way for people who like Coca-Cola to enjoy the taste with built-in portion control. A treat or a favorite food or beverage is a wonderful way to ensure that you're going to be able to practice a healthy lifestyle for life.

...

So you'll drink Coke mini for the same reason you already drink Coke: to sate your addiction. And if you don't get enough "sparkle" from the smaller can, no problem. The mini containers "will be sold in eight-packs," says the company. Just open a second 7.5-ounce can, and you'll get 20 percent more sparkle than you used to get from a 12-ounce hit.

You'll also get 20 percent more calories. According to the company's nutrition information page, an 8-ounce serving of Coca-Cola classic has 97 calories. That's roughly 145 calories in each 12-ounce can. At 90 calories per shot, the 7.5-ounce Coke mini can keeps pace with the original calorie rate, and the second mini can brings you to a sparkling 180 calories. But you'll feel better about yourself, because now you're practicing "portion control" and "a healthy lifestyle." Just like you felt better about smoking light cigarettes.

Saletan, of course, doesn't offer any actual argument, just snark, but the underlying argument you're supposed to infer presumably goes something like this:

  1. Cigarettes are bad for you.
  2. The tobacco companies introduced light cigarettes and suggested they were healthier.
  3. Tobacco companies are really not very nice.
  4. Coke isn't good for you.
  5. Coke is introducing a smaller portion size and suggesting that it's healthier.
  6. Coke is really bad just like the tobacco companies.

Of course, this form of argument is clearly bogus ("you know who else was a vegetarian? Hitler" [note: apparently this is a myth.]), and there are some pretty clear dissimilarities between Coke and cigarettes. First, Coke really isn't anywhere near as bad for you as cigarettes. Then there's the small problem that light cigarettes were basically a huge scam, for two reasons: (1) the tar/nicotine measurements taken by the test machines didn't accurately reflect what happened when people smoked them, and (2) there was reason to worry that people would compensate by smoking more cigarettes or inhaling more deeply.

The first of these issues doesn't exist with mini Cokes: they're just Coke in smaller containers, so we're left with the compensation issue. Saletan implies that people will just drink a second can (15 oz total) and thus be left worse off than before, but that's not at all obvious: there's extensive data suggesting that how much people consume is strongly influenced by the size of the portions in front of them, and it's not at all crazy to think that if you had a bunch of smaller Coke cans you would drink less Coke overall. It's true that Coke contains caffeine, which is a potential confounding factor for the portion control effects we see with ordinary food, but most people really aren't that addicted to caffeine (and respond to it in quite small doses), so it's not at all clear that people would over-compensate. It's easy to do the math here: if you replace every 12 oz Coke with a 7.5 oz Coke, you're getting 62.5% of the usual dose. Even if you drink a second mini-can half the time, you still come out slightly ahead (11.25 oz on average). Obviously, people's real behavior is an empirical question, but it seems plausible to me that they would reach for a second can infrequently enough for it to be a net win. I know that personally I tend to drink the whole bottle of whatever beverage I have, so when I buy 20 oz Cokes I seem to drink more Coke than when I buy 12 oz Cokes; sometimes I'll have an extra 12 oz-er, but I don't think often enough to compensate for all the times I drank 20 oz just because it was in front of me.
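For what it's worth, here's that arithmetic spelled out (a trivial sketch; the 50% second-can rate is my hypothetical, not data):

```python
# Expected consumption if every 12 oz can is replaced by a 7.5 oz
# mini-can and you reach for a second mini-can with probability p.
MINI_OZ = 7.5
REGULAR_OZ = 12.0

def expected_oz(p_second_can):
    return MINI_OZ * (1 + p_second_can)

print(expected_oz(0.0) / REGULAR_OZ)   # 0.625: the 62.5% figure
print(expected_oz(0.5) / REGULAR_OZ)   # ~0.94: 11.25 oz, still a net win
print(expected_oz(1.0) / REGULAR_OZ)   # 1.25: Saletan's always-two-cans case
```

On these numbers you'd have to drink a second mini-can more than about 60% of the time before you came out behind.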

In any case, the tobacco comparison seems at best premature: the tobacco companies knew that light cigarettes weren't any healthier; as far as I know, Coke doesn't know any such thing, and it may not even be true.

 

October 17, 2009

Eugene Kaspersky argues that one should need to have a "passport" to use the Internet (via /.):
That's it? What's wrong with the design of the Internet?

There's anonymity. Everyone should and must have an identification, or Internet passport. The Internet was designed not for public use, but for American scientists and the U.S. military. That was just a limited group of people--hundreds, or maybe thousands. Then it was introduced to the public and it was wrong to introduce it in the same way.

I'd like to change the design of the Internet by introducing regulation--Internet passports, Internet police and international agreement--about following Internet standards. And if some countries don't agree with or don't pay attention to the agreement, just cut them off.

Isn't it enough to have everyone register with ISPs (Internet service providers) and have IP addresses made known?

You're not sure who exactly has the connection. I can have a Wi-Fi connection and connect using a password, or give away the password for someone else to use that connection. Or the connection could be hacked. Even if the IP address is traced to an Internet café, they will not know who the customer or person is behind the attacks. Think about cars--you have plates on the cars, but you also have driver licenses.

Unfortunately, Kaspersky didn't elaborate on how this would actually work, which is too bad, because it's not really clear to me how one would develop such a system. Let's stipulate for the moment that we had some mechanism for giving everyone who was allowed to access the Internet some sort of credential (the natural thing here would be an X.509 certificate, but of course you could imagine any number of other options). All you would need to accomplish this would be to somehow positively identify every person on the planet, get them to generate an asymmetric key pair, issue them a certificate, and give them some way to move it around between all their Internet-connected devices (and it's not at all unusual to have both a PC and a smartphone), as well as find some way for them to use it in Internet cafes, libraries, etc. And of course, having the credential is the easy part: we still need to find some way to actually verify it.
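To make the "easy part" concrete, here's a minimal sketch of the issuance step in Python using the pyca/cryptography library. The "Global Passport CA" and the subject name are invented for illustration, and the actually hard part—positively identifying the subject—happens entirely offstage:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The user generates an asymmetric key pair...
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# ...and the CA (which has somehow verified who they are) issues a
# certificate binding their identity to the user's public key.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Jane Q. Citizen")])
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Global Passport CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(user_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)
```

The crypto is the trivial part; repeating this for every person on the planet, and securely moving user_key between all their devices, is the part nobody knows how to do.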

At a high level, there are three places we could imagine verifying someone's credentials: (1) at the access point, (2) in the network core, or (3) at the other endpoint. None of these is particularly satisfactory:

Access Point
The naive place to verify people's identity is at the point where they connect to the Internet. Of course, in the vast majority of cases (home service, mobile, etc.), no "passport" is required, because the user has a subscriber relationship with the service provider, so as long as the service provider keeps adequate records it's relatively straightforward to track down who was using any given IP address at a given time. This leaves us with a variety of "open access" type situations where someone has a network that anyone can use, such as libraries, conferences, and people's home networks. One could imagine requiring that those people program their network access elements to authenticate anyone who wanted to use them, but since this would require reprogramming an untold number of Linksys access points which are currently running whatever firmware they were loaded with when they were manufactured in 2006, this doesn't sound like a very practical proposition. Even if one did somehow manage to arrange for a mass upgrade, people who run open APs don't have a huge amount of incentive to keep them secure, so it wouldn't be long before there was a large population of APs which couldn't be trusted to properly report who had used them and we're back to where we started.

Network Core
Moving outward from the access point, one could imagine doing authentication somewhere in the network core (which is sort of what Kaspersky's comments imply). Unfortunately, this would involve some pretty major changes to the Internet architecture. Remember that as far as the core is concerned, there are just a bunch of packets flowing from node to node and being switched as fast as possible by the core routers, which don't have any real relationship with the endpoints. Unless we're going to change that (pretty much out of the question no matter how ambitious you are), then about all that's left is having the endpoints digitally sign their packets with their credentials. Those signatures would then have to be verified at something approaching wire speed (if you don't verify them in real time, then people will just send bogus signatures; if you only verify a fraction, then you need some sort of punishment scheme, because otherwise you've merely cut the bogus traffic by that fraction). On top of that, the signatures would create massive packet bloat. So, this doesn't sound like a very practical retrofit to the existing Internet.
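Some rough numbers on what per-packet signatures would cost (my own back-of-the-envelope, not anything from Kaspersky):

```python
# An RSA-1024 signature is 128 bytes, appended to every packet, and a
# core router would have to verify them at line rate. The average
# packet size is an assumed round number.
SIG_BYTES = 128                    # RSA-1024 signature
AVG_PACKET_BYTES = 500             # rough Internet average (assumed)
LINK_GBPS = 10

bloat = SIG_BYTES / AVG_PACKET_BYTES
pkts_per_sec = LINK_GBPS * 1e9 / 8 / AVG_PACKET_BYTES

print(f"packet bloat: {bloat:.0%}")                              # ~26%
print(f"RSA verifies/sec on one 10G link: {pkts_per_sec:,.0f}")  # 2,500,000
```

A quarter of your bandwidth gone and millions of public-key operations per second per link is not the kind of thing you retrofit onto existing routers.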

Other Endpoint
This leaves us with verifying the user's identity at the other endpoint, which is probably the most practical option, given that we already have technology for this in the form of IPsec, SSL/TLS, etc. Again, we have the retrofit problem, and also a huge incentive issue; most sites are primarily interested in having a lot of visitors and don't much care who they are, so they're not really incentivized to verify user identities, especially during the (extended) transition period when requiring authentication would mean rejecting traffic from legitimate visitors. Still, it's at least technically possible, though it's not clear to me why one would want to require this form of authentication through some regulatory process: the major entity which is hurt by being unable to verify whoever is sending them traffic is, after all, the other endpoint, so if they don't care to authenticate their peer, why would we want to require it?
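For concreteness, endpoint-side verification is roughly what TLS client authentication already does. Here's a minimal Python sketch of a server that refuses any client whose certificate doesn't chain to a hypothetical passport CA; all the file names are placeholders:

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
ctx.verify_mode = ssl.CERT_REQUIRED            # demand a client certificate
ctx.load_verify_locations(cafile="passport-ca.pem")

with socket.create_server(("", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()     # handshake verifies the client
        print("visitor's 'passport':", conn.getpeercert())
        conn.close()
```

The technology has been sitting in every TLS stack for a decade; the reason sites don't turn it on is the incentive problem above, not a missing mechanism.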

 

Unfortunately, even the above issues (which aren't very promising) aren't the real obstacle. Remember that we're going to require everyone who wants to access the Internet to have one of these credentials. That includes your grandmother, who hasn't ever run Windows Update and has over half of her hard drive taken up with assorted varieties of malware. It's not going to be at all difficult for attackers to get their hands on an arbitrary number of "Internet passports" belonging to other people (remember that attackers don't have any trouble getting credit card numbers, which people actually do have some interest in protecting).

The bottom line, then, is that unless I'm missing something, it's not clear to me that anything fitting Kaspersky's description is likely to be particularly useful.

 

October 11, 2009

Earlier today I was listening to an NPR report on the early winter snowstorms in Alberta and Manitoba. Apparently there's been a lot of snow and people are driving off the roads, rolling their cars, etc. Anyway, at the end of the report, what do I hear but "This is Dan Karpenchuk in Toronto." Now, this makes sense until you realize that Alberta is in Western Canada, roughly north of Montana. Even Manitoba is somewhere north of Minnesota. In terms of flight time, Calgary is over an hour closer to San Francisco than Toronto is (2:35 versus 3:40). Winnipeg is closer to Toronto, but it's still over 2 hours away by air and something over 2000 km away. There's no good reason to think that someone in Toronto is going to be any better informed about events in Calgary or Winnipeg than someone located in San Francisco, unless you conveniently forget that Canada is a huge country. (This isn't particularly uncommon for NPR: I'm pretty sure I've heard reports about Northern Africa from their correspondent in Johannesburg.)

More generally, it's not entirely clear to me what value NPR's foreign correspondents bring to the party. Generally, they spend 20-30 seconds delivering some report that could just as well have been delivered by someone in the US reading whatever came over AP or Reuters. I suppose it lends an air of authority to the proceedings, but as far as I can tell it's primarily false authority.

 

October 10, 2009

This isn't my ordinary type of science fiction, but I was recently looking for something light to read and grabbed Walter Jon Williams's Rock Of Ages, the third of his Drake Maijstral novels, which I originally picked up for half price at the used book store. Williams has written a lot of straight SF, including the rather good Days of Atonement and Aristoi, but the Maijstral novels are something different: kind of a cross between science fiction, Restoration comedy, and a Donald Westlake caper novel.

The setting is that the human race has been conquered by some extremely stuffy humanoid aliens called the Khosali. The Khosali have reconstructed human society in their image, with behavior mostly bound by "High Custom". Drake Maijstral is an impoverished human aristocrat who takes up a life of crime of sorts, becoming an "Allowed Burglar", one of the odder pieces of High Custom. It seems a previous Khosali Emperor was a kleptomaniac, and since the Emperor defines High Custom, the Khosali rationalized it by creating the institution of Allowed Burglary: Allowed Burglars are allowed to steal as long as they keep the loot in their possession for 24 hours after the theft and don't get caught within that period. Because Allowed Burglars record their capers and broadcast them, Maijstral (together with his long-suffering manservant Roman [think Jeeves]) is a huge celebrity, with an admiring fan base and a video program based on his exploits.

One of the recurring elements in the novels is the complete distortion of Earth's history and culture resulting from centuries of Khosali domination. For instance:

Once in his suite, Maijstral settled his unease by watching a Western till it was time to dress. This one, The Long Night of Billy The Kid, was an old-fashioned tragedy featuring the legendary rivalry between Billy and Elvis Presley for the affections of Katie Elder. Katie's heart belonged to Billy, but despite her tearful pleadings Billy rode the outlaw trail; and finally, brokenhearted Katie left Billy to go on tour with Elvis as a backup singer, while Billy rode on to his long-foreshadowed death at the hands of Greenhorn inventor-turned-lawman Nikola Tesla.

There are three Maijstral novels: The Crown Jewels, House of Shards, and Rock of Ages. Unfortunately, they all seem to be out of print, but you can get them used. Highly recommended.

 

October 7, 2009

Over at Slate, Tim Wu writes about the appeal of vintage (60s and 70s) Hondas:
To the faithful, among whom I count myself, the Hondas made in the 1960s and '70s are objects of mystic beauty, each a mechanical Helen of Troy. Look at the photo above, and you'll see what I mean. In the 1960s, Honda sought to capture and improve on the spirit of the English motorcycles of the day, and rarely has East met West with more pleasing consequences. The Japanese take on British motorbike aesthetics is, to my mind, a cross of cultures unrivaled since Italians began mixing tomatoes with Chinese noodles.

The machine looks ready to go. It is full of derring-do. It has plenty of shiny bits. It speaks of a controlled power that stops short of aggression. The vintage Hondas are, in the lingo, "naked"--you can see everything that makes them run. The pistons, less mighty than faithful, chug away. The chain snakes, and oil and gas drip here and there.

...

If the vintage Hondas are so great and so popular, why did Honda stop making them? I don't know the answer to that question, but I do know that tragedy struck in the 1980s (as with many things aesthetic). The bikes got fat. The flat back sank, like a worn-out horse. Most of today's Honda motorcycles are, effectively, two-wheeled SUVs: obese creatures, covered with too much plastic. The kick-start is long gone--and what's the fun in a motorcycle that starts every time?

As it happens, I used to own a very similar bike: a 1980 Honda CM-400T. The primary differences between this bike and those of a few years earlier are an electric start and a seat that isn't entirely flat. I did plenty of miles on the CM-400 and then rode a 1984 Honda CB700SC (Nighthawk S) for over 10 years. I've also spent a fair bit of time on two more modern bikes borrowed from a friend: a KTM Duke 650 single and a 1998 CBR900RR. With that as background, I can say with some confidence that my CM-400T was a POS, and I strongly suspect that the same applies to the bikes that Wu is raving about in his piece. Certainly, my Nighthawk S was a far better bike: more powerful, better handling, lower maintenance (shaft drive), more comfortable, better looking, etc. I will admit that the CBR900RR feels a bit beefy and overpowered, but it's fantastically responsive, if a little terrifying. So, no, I don't think it's really accurate to say that the older bikes were better.

It's certainly true that modern sportbikes are covered in plastic, but it's not like you can't get a bike with an aesthetic pretty similar to those vintage bikes. Indeed, the Honda CB250 Nighthawk is fairly similar to the 400-class bikes of years past, both in aesthetics (though the back isn't as flat) and in power: it's a 234cc parallel twin instead of a 400cc one, but with more modern technology you get up to 20 HP (the CB350 developed 24 HP). This is pretty much a better bike in every dimension.

As far as I can tell, the major argument that Wu has to offer for older bikes is nostalgia and unreliability. I suppose there is a certain charm here if you think of a motorcycle as a toy instead of a form of transportation. On the other hand, if you want a bike that isn't covered in plastic, rides well, and needs a ton of maintenance, Ducati has got you covered.

 

October 6, 2009

Richard Barnes pointed me to the joint ICANN/VeriSign presentation from RIPE 59 (today in Lisbon) on their plans for rolling out signing of the root zone. For those who aren't up on DNSSEC: each TLD (.com, .net, .us, etc.) will sign the domains under it, but the design calls for each TLD's own records to be signed at the root of the tree as well. There's some question about how important this really is from a technical perspective, but the DNSSEC community seems convinced (wrongly, in my opinion) that it's essential, so it's socially important even if not technically important.

Anyway, Richard pointed out something interesting to me: they plan to roll over the root Zone Signing Key (ZSK) four times a year (see slide 19) which doesn't really make sense to me. Actually, the whole key rollover scheme doesn't make much sense to me.

It might be helpful to start with a little background. The way things are going to work is this: ICANN is going to have a long-lived (2-5 years) Key Signing Key (KSK). The public half of this key will be built into people's resolvers. But the KSK will not be used to directly sign any user data. Rather, it will be used to sign a short-lived (3 months) ZSK [held by VeriSign] which will be used to sign the data. Because the relying party (i.e., your computer) knows the KSK, it can verify any new ZSK without having to get it directly.
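Here's a toy illustration of that two-level scheme (plain RSA signatures via the pyca/cryptography library, not actual DNSSEC record formats):

```python
# The long-lived KSK signs the short-lived ZSK, and the ZSK signs the
# actual zone data. A resolver that ships with only the KSK public
# key can verify the whole chain.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

ksk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
zsk = rsa.generate_private_key(public_exponent=65537, key_size=1024)

# ICANN signs the current ZSK with the KSK...
zsk_pub_der = zsk.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
zsk_sig = ksk.sign(zsk_pub_der, padding.PKCS1v15(), hashes.SHA256())

# ...and VeriSign signs zone data with the ZSK.
record = b"example. 86400 IN A 192.0.2.1"
record_sig = zsk.sign(record, padding.PKCS1v15(), hashes.SHA256())

# The resolver knows only the KSK public key; verify() raises on failure.
ksk.public_key().verify(zsk_sig, zsk_pub_der, padding.PKCS1v15(), hashes.SHA256())
zsk.public_key().verify(record_sig, record, padding.PKCS1v15(), hashes.SHA256())
print("chain verifies")
```

Note that the per-record signatures (the ones that dominate packet size) come from the 1024-bit ZSK; only the rarely-transmitted ZSK signature uses the 2048-bit key.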

Why are they doing this? As far as I can tell the rationale is as follows:

  • The security of RSA key pairs is directly connected to key length, which is also the length of the signature that the key pair produces.
  • Space in DNS packets is limited.

The combination of these two factors means that if you want to use longer (higher security) key pairs to sign zone data, you start running into size limitations in the packet. That's perfectly understandable, but why does having two keys help? The idea here is that you have a big (2048-bit) KSK and a short (1024-bit) ZSK. But because the ZSK is changed frequently, you don't need as strong a key and can still get good security. I wasn't able to find a good description of this in the DNSSEC documents, but Wikipedia came through:

Keys in DNSKEY records can be used for two different things and typically different DNSKEY records are used for each. First, there are Key Signing Keys (KSK) which are used to sign other DNSKEY records and the DS records. Second, there are Zone Signing Keys (ZSK) which are used to sign RRSIG and NSEC/NSEC3 records. Since the ZSKs are under complete control and use by one particular DNS zone, they can be switched more easily and more often. As a result, ZSKs can be much shorter than KSKs and still offer the same level of protection, but reducing the size of the RRSIG/NSEC/NSEC3 records.

The only problem with this reasoning is that it's almost completely wrong, as can be seen by doing some simple calculations. Let's say we have a key with a lifespan of one year that requires C computations to break. An attacker buys enough hardware to do C computations in two months and is then able to use the key to forge signatures for the next 10 months (I'll try to write about keys used for confidentiality at some later point). If we think about a series of such keys, they will be vulnerable 10/12 of the time. Now, let's say that we halve the lifespan of the key to 6 months, which shortens the window of vulnerability to 4 months per key, or 2/3 of the time. But if the attacker just buys 2C worth of compute power, he can break each key in 1 month, at which point we're back to having the keys vulnerable 10/12 of the time. If we generalize this computation, we can see that if we increase the frequency of key changes by a factor of X, we only increase the attacker's required workload by a factor of X.
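That toy model is easy to check (this is just the calculation from the paragraph above in code form):

```python
# A key takes the attacker break_months of computation to crack,
# after which it's forgeable for the rest of its lifespan.
def vulnerable_fraction(lifespan_months, break_months):
    return max(lifespan_months - break_months, 0) / lifespan_months

print(vulnerable_fraction(12, 2))  # 0.83: 1-year keys, attacker has C compute
print(vulnerable_fraction(6, 2))   # 0.67: 6-month keys, same compute
print(vulnerable_fraction(6, 1))   # 0.83: 6-month keys, attacker buys 2C
```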

More concretely, if we originally intended to change keys every 4 years and instead we change them every quarter, this is a factor of 16 (4 bits) improvement in security. Opinions vary about the strength of asymmetric keys, but if we assume that 1024-bit RSA keys have a strength of about 72 bits [*] then this increases the effective strength to around 76 bits, which is somewhere in the neighborhood of 1100 bit RSA keys, a pretty negligible security advantage and nowhere near the strength of a 2048 bit RSA key (> 100 bits of security). It's certainly not correct that this offers the "same level of protection".
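In code, the entire benefit of quarterly rollover is one log2 (using the same 72-bit assumption for 1024-bit RSA as above):

```python
import math

# Rolling the ZSK 16x more often (quarterly instead of every 4 years)
# forces a 16x larger attacker workload, i.e. log2(16) = 4 bits.
RSA_1024_BITS = 72            # assumed strength, per the [*] estimate above
speedup = (4 * 12) / 3        # 4-year lifetime -> 3-month lifetime = 16x

effective_bits = RSA_1024_BITS + math.log2(speedup)
print(effective_bits)         # 76.0 -- nowhere near 2048-bit RSA (> 100 bits)
```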

The more general lesson here is that changing keys rapidly is nearly useless as a method of preventing analytic attacks. It's almost never practical to change keys frequently enough to have a significant impact on the attacker's required level of effort. If you're that close to the edge of a successful attack, what you need is a stronger key, not to change your weak keys more frequently. In the specific case of DNSSEC, just expanding the size of the packet by 10 bytes or so would have as much if not more security impact at a far lower system complexity cost.

 

October 3, 2009

For obvious reasons, California law forbids cities from sharing revenue from red light cameras with the vendors who operate the cameras. Apparently, cities have found a way to get around this restriction:
California law explicitly bans local jurisdictions from rewarding red light camera companies with payments based on the number of citations issued or as a percentage of fines generated. At least fifty cities have attempted to skirt this requirement with a clever arrangement known as cost neutrality. These contract provisions allow a city to pay the contractor based on the number of citations issued up to a certain monthly amount. After this cap is reached, the city keeps all of the revenue generated. The provisions are designed to ensure that cities can only profit from photo ticketing and will never pay to operate the program.

"If the total compensation paid to Redflex pursuant to this agreement exceeds that portion of fines received by customer for citations issued during the same twelve (12) month period, then Redflex agrees to absorb, eliminate, or reimburse customer for the excess expense thereby covering the cost for system operation so that the customer achieves cost neutrality in accordance with the representation that the system(s) shall pay for themselves," Section 6.5 of San Mateo's contract states.

This pretty clearly gives the vendor an incentive to issue more tickets, and the judge in the case linked above concluded that it violated the law and struck down the program. With that said, I'm not aware of any evidence that red light camera companies do anything to generate bogus tickets. There probably are ways to issue tickets to people who weren't actually running the light (e.g., by issuing bogus timestamps; I don't think the photos include the light in the frame), but that isn't to say that the companies which operate the cameras actually do anything like that.
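The "cost neutrality" structure is easy to see in miniature (the dollar figures here are invented for illustration, not from any actual contract):

```python
# Under a Section 6.5-style provision, the vendor's monthly take is
# capped at the fines the city collects, so below the cap every
# additional ticket is additional vendor revenue.
def vendor_payment(citations, fine_share=150.0, monthly_fee=12000.0):
    return min(citations * fine_share, monthly_fee)

for n in (20, 50, 80, 100):
    print(n, vendor_payment(n))   # rises until 80 citations, then flat
```

Until the cap is hit, the vendor is effectively being paid per citation, which is exactly what the statute was trying to prevent.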

 

October 2, 2009

The Times has an interesting article about how life extension research is getting more respectable. Obviously, really good life extension/anti-aging therapies would be world-changing, but even modestly effective drugs would have a big impact (imagine the effect of having to pay another 5 years of Social Security and Medicare for half the senior citizens in the US).

I want to focus on a rather more trivial effect, though: age grouping in athletic events. Athletic performance is fairly strongly correlated with age, so many amateur sports provide awards in different age categories (typically 5 or 10 year brackets). Even one age bracket can make a real difference: for instance, at last year's Ironman World Championship, the 5th place finisher in M35-39 would have won M40-44. Similarly, the 6th place finisher in M40-44 would have won M45-49. The gap between first place in M35-39 and first place in M45-49 is around 30 minutes (~5%), so a medication which reduced your physiological age by 5-10 years would represent a huge advantage. The problem from a doping control perspective is that you can't really ban people from taking a drug which extends their lifespan by 5-10 years, but people on this drug (which certainly won't be everyone, at least initially) are going to have a big advantage over competitors who aren't on it. It will be interesting to see how the doping control regime responds to this sort of development (assuming it actually happens).

 
Those of you from the US may know the Civil War-era marching song "John Brown's Body" ("John Brown's body lies a-mouldering in the grave ... but his soul goes marching on," etc.), which was later rewritten as "The Battle Hymn of the Republic". I'd always assumed that this song was about the John Brown, i.e., the famous abolitionist.

This seemed like a logical assumption, but apparently that's unclear at best. Wikipedia has two origin stories, at least one of which is only tenuously connected to the famous John Brown:
The tune arose out of the folk hymn tradition of the American camp meeting movement of the 1800s. During the American Civil War the lyrics referenced Sergeant John Brown of the Second Battalion, Boston Light Infantry Volunteer Militia, a Boston based unit. Later, people mistakenly believed it referenced the abolitionist John Brown and later verses were added referencing him.[1]

...

Maine songwriter, musician, band leader, and Union soldier Thomas Brigham Bishop (1835-1905) has also been credited as the originator of the John Brown Song.[19] Bishop's biographer and friend James MacIntyre, in an interview with Time Magazine in 1935, stated that this version was first published by John Church of Cincinnati in 1861.[20] Bishop, who would later command a company of black troops in the American Civil War, was in nearby Martinsburg when Brown was hanged at Charles Town in 1859 and, according to MacIntyre, Bishop wrote the first four verses of the song at the time. The "Jeff Davis" verse was added later when it caught on as a Union marching song. According to MacIntyre, Bishop's account was that he based the song on an earlier hymn he had written for, or in mockery of, a pious brother-in-law, taking from this earlier song the "glory hallelujah" chorus, the phrase "to be a soldier in the army of the Lord", and the tune. According to MacIntyre, this hymn became popular at religious meetings in Maine.[21] The phrase "to be a soldier in the army of the Lord" is not found in any extant copies of "Say, Brothers"--either those published before or after 1860. [22]

Hard to see how to reconcile these two stories, but the first version seems to at least have some reasonable sourcing, and is certainly more amusing.

 

October 1, 2009

Over at the Volokh Conspiracy, Eugene Volokh asks people to weigh in on whether 0 is even or odd. This is, as they say, a simple question with a simple answer: 0 is even. Despite this, nearly half the people in Volokh's poll (around 1500) get the answer wrong. (Most of those people say it's neither odd nor even). Moreover, this question spawned two threads of over 100 comments each, with people seriously arguing—despite extremely clear arguments to the contrary from people with real mathematical expertise—that zero was not even.
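For whatever it's worth, the answer follows directly from the definition (an integer n is even iff n = 2k for some integer k, and 0 = 2 * 0), and the check is mechanical:

```python
# 0 = 2 * 0 satisfies the definition of even, and the usual parity
# test agrees:
print(0 % 2 == 0)                               # True
print([n for n in range(-4, 5) if n % 2 == 0])  # [-4, -2, 0, 2, 4]
```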

Part of the problem is educational: apparently some schools, textbooks, etc. teach that zero is neither odd nor even. Volokh cites McGraw Hill's Catholic High School Entrance Exams:

An integer is even if it is a member of the following set: [...,-6,-4,-2,2,4,6,...]. An integer is odd if it is a member [of] the following set: [...,-5,-3,-1,1,3,5,...]. The number zero (0) is neither even nor odd.

Reading the comments, though, there seems to be something else going on: many of the commenters seem to assume that they can just reason out the answer from (incorrect) first principles, and that expert opinion doesn't matter. For no doubt bad reasons (primarily boredom), I have read through a number of Volokh Conspiracy threads on other scientific topics, and this seems to be a fairly common pattern. Usually, though, it's confined to discussions where the scientific questions have some political implications (global warming, evolution, etc.), but in this case it just seems to be that people have the wrong intuitions and don't want to listen to anything that contradicts them, and at some level are actively hostile to being told that actual expertise might count for something. I'd be interested to see whether there's any correlation between commenters' positions on the zero parity issue and their positions on issues with more political weight behind them.