May 2007 Archives


May 28, 2007

A few impressions from the EFHW, but first some tutorial material.

For those of you who don't know how hash functions are constructed, the basic idea is iteration. The current hash functions all use a construction called Merkle-Damgard (MD). You start with some compression function F which takes a block M of size m and turns it into a block S of size s where m > s. For SHA-1, these are 512 bits and 160 bits respectively. F also takes as input a state variable of length s. This allows you to chain by taking the output value and feeding it into F as the next state.

So, the idea here is that we start with state IS and then compute S0 = F(IS, M[0]). That's the first block. We compute the second block as S1 = F(S0, M[1]) and so on. Once we've processed every block, the final S value is our hash output. [Yes, I know this only works on messages whose length is an exact multiple of the block size. There's a padding technique to fix this.] There are other ways to connect up compression functions, but given that all the compression functions anyone is seriously proposing can only process blocks of limited length, pretty much all the hash functions need some chaining mode, whether it's M-D or something else.
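To make the chaining concrete, here's a toy sketch in Python with SHA-1's parameters (m = 512 bits, s = 160 bits). The compression function here is just truncated SHA-256 standing in for a real F—purely for illustration; it bears no relation to SHA-1's actual compression function:

```python
import hashlib

BLOCK_SIZE = 64   # m = 512 bits
STATE_SIZE = 20   # s = 160 bits

def compress(state: bytes, block: bytes) -> bytes:
    # Toy compression function F: (s-bit state, m-bit block) -> s-bit state.
    # Truncated SHA-256 is a stand-in, not a serious proposal.
    assert len(state) == STATE_SIZE and len(block) == BLOCK_SIZE
    return hashlib.sha256(state + block).digest()[:STATE_SIZE]

def md_hash(message: bytes, iv: bytes = b'\x00' * STATE_SIZE) -> bytes:
    # Zero-pad to a whole number of blocks (real padding also encodes the length).
    if len(message) % BLOCK_SIZE:
        message += b'\x00' * (BLOCK_SIZE - len(message) % BLOCK_SIZE)
    state = iv                                    # S = IS
    for i in range(0, len(message), BLOCK_SIZE):
        state = compress(state, message[i:i + BLOCK_SIZE])  # Si = F(Si-1, M[i])
    return state                                  # the final S is the hash output
```

The loop is the whole construction: everything interesting about a particular hash function lives inside `compress`.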

So, in order to define a new hash function you need to specify:

  • The compression function.
  • The chaining mode.

As I said earlier, all the major hash functions (MD5, SHA-1, SHA-256, ...) use M-D, but they differ in the compression function. Why use M-D? It's simple and you can prove that if the compression function is collision resistant then so is the final hash construction. Of course, as we've seen with MD5 and SHA-1, if the compression function isn't collision resistant, then the game changes.

So, with that in mind, the workshop:

  • Everybody wants to have a hash function that's provably secure. Unfortunately, it's not something we actually know how to do. We actually have a pretty good idea how to prove stuff about the chaining modes but the problem is the compression function. To a first order, the compression functions fall into two categories: bit-shuffling algorithms like all the current hash functions (and like block ciphers, incidentally) and algebraic functions like VSH. We know how to prove the security of the algebraic functions (or at least nobody is willing to propose one they can't prove security for) but the performance is too slow and as far as I can tell nobody has any practical ideas for how to make it much faster. Nobody knows how to prove anything about the bit shuffling algorithms.
  • The security model for hash functions is turning out to be a lot harder to define than people would like. In particular, the properties aren't limited to the classic preimage resistance and collision resistance. People would also like it to act "like" a random oracle. Why? Because the proofs of security of other constructions (e.g., OAEP, PSS) depend on random oracles. Note that current hashes definitely aren't random oracles because given a hash value H and the fact that it corresponds to some message M of known length, we can compute the hash of M || X without knowing M (this is called extension and is a basic property of the simple M-D construction). This isn't allowed of a random oracle.
  • To make matters worse, hash functions are used in a whole bunch of ways that we don't even have analyzed security models for. We have no idea what kind of properties those uses demand, so it's pretty hard to know what properties the hash function has to have in order not to break them. It seems likely that a new hash in the style of the ones we have now would be ok—or at least not obviously bad—but that narrows the design space a lot.
  • Not only do we not have a good theory of how to build a provable hash function, we don't really have a good theory of how to break them. We just have a bunch of techniques which people are still improving. Nobody seemed to feel confident that they could design a bit shuffling hash function that would resist a concerted attack. There's a fair amount of skepticism about SHA-256, too, mostly on the principle that it was designed before these new attacks and that it was designed by the same organization which produced SHA-0 (since badly broken) and SHA-1 (significantly weakened).
  • NIST is planning to have a competition to define what I think John Kelsey called the Advanced Hash Standard (AHS) [which as Hovav Shacham points out would presumably be pronounced "ass"]. Slides not on the Web yet, unfortunately. Summary: They don't really have any idea how many entries they're going to get, how they're going to prune them down, or what the final design criteria are going to be. There seems to be a real chance that they're going to end up with a bunch of functions none of which they know how to break but without any good way to decide between them. SHA-256 will not be part of the competition. This actually seems to me to be not that great an idea, since it's going to be kind of weird if the new functions aren't measurably better than SHA-256 but NIST is telling us to replace it, especially since by the time a decision is reached we'll have just gotten SHA-256 wedged into everything. I joked to John Kelsey that I was going to submit SHA-256 myself, along with all the papers devoted to attacking it as my submission.
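The extension property mentioned above is easy to demonstrate with the toy M-D construction (again using truncated SHA-256 as a stand-in compression function, and ignoring the length-encoding padding, which complicates but doesn't prevent the attack on real hashes):

```python
import hashlib

BLOCK_SIZE = 64

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in compression function, for illustration only.
    return hashlib.sha256(state + block).digest()[:20]

def md_hash(message: bytes, iv: bytes = b'\x00' * 20) -> bytes:
    assert len(message) % BLOCK_SIZE == 0   # padding omitted for clarity
    state = iv
    for i in range(0, len(message), BLOCK_SIZE):
        state = compress(state, message[i:i + BLOCK_SIZE])
    return state

# The victim hashes a secret message M that the attacker never sees.
secret_m = b'S' * 128
h = md_hash(secret_m)

# Knowing only h and len(M), the attacker computes H(M || X) by simply
# resuming the chain from h:
x = b'appended data'.ljust(BLOCK_SIZE, b'\x00')
forged = md_hash(x, iv=h)

assert forged == md_hash(secret_m + x)   # extension works without knowing M
```

A random oracle would make `forged` useless without M; the chained construction hands it to you for free.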

May 24, 2007

If you're not already in Barcelona, it's probably too late to catch my ECRYPT talk, "Indigestion: Assessing the impact of known and future hash function attacks". The slides, however, are here.

May 21, 2007

As is widely known, the RSA cryptosystem is based on the problem of factoring large numbers which are the product of two large primes. So, whenever progress is made in factoring, that gives us some information about how secure RSA is against brute-force attacks. There are two kinds of relevant benchmarks here:
  • Special numbers which are (relatively) easy to factor.
  • Actual RSA moduli formed by multiplying two large primes.

The latter are substantially harder to factor.

On March 6, a team factored a 307 digit (1020 bit) special number using three large clusters. Wall clock time was 11 months. The largest non-special number ever factored was RSA 640 (640 bits), factored in 2005, at a cost of 20 2.2 GHz Opteron years. For comparison, 1024 bits is sort of the standard size for RSA keys (though most paranoid security types recommend larger keys). Given the gap between general and special numbers, we're likely still a ways off from the point where it's practical for an attacker to go after a single person's RSA key, even at 1024 bits.
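As a sanity check on the sizes (my arithmetic, not from the announcements):

```python
import math

# 307 decimal digits is about 1020 bits:
print(round(307 * math.log2(10)))    # 1020

# and RSA-640's 640 bits is about 193 decimal digits:
print(round(640 * math.log10(2)))    # 193
```

So the special number is roughly 380 bits bigger than the largest general number factored, which is why the gap matters.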


May 20, 2007

A while back, I received a spam with the following contents:
Send data without opening your ports.

Steelcape has taken an innovative approach to security, instead of trying to repair TCP/IP, we have built a solution inside the TCP/IP protocol. This method allows Steelcape to secure environments without opening ports on the firewall.

The packets are encrypted at 256 bits and signed with a 48 bit digital signature. Also works with IPv6. Please take a look at our website

This rang in at about 10W40 on my snake-oil-ometer, but seeing as they'd been kind enough to give me their information, I figured I'd check it out and if it was snake oil, make fun of them publicly.

First, you can check out their Web site, which reiterates their basic claims:

  • Steelcape has the only solution that does not require the opening of ports on your firewall.
  • Our technology is up to 30% to 40% faster than TCP/IP based technologies.
  • Fast installation and minimal administration.
  • Cross platform compatible.

This doesn't really tell you how the thing works, though. Points 2-4 are easily understood, but what does point 1 mean? Not opening any ports? It's not TCP/IP? How does the data get through? Figuring that out required reading their somewhat confusing white paper (WP; link omitted because you don't want to give them your name and email address any more than I do), an exchange of emails, and finally a con call with one of the sales/marketing guys and some technical guys.

Making sense of this requires some background on how Enterprise networks, firewalls, and VPNs work. Figure 1 shows the world's simplest firewalled Enterprise network.

Figure 1: A basic firewall configuration

You've got a bunch of hosts on an internal network separated from the Internet via a firewall, which also doubles as the Internet access router. The firewall prevents all connections to the internal hosts from the Internet. Defining what an incoming connection is turns out to be a bit tricky, so let's just talk about TCP here. Your classic firewall forbids incoming TCP connections but allows outgoing ones, at least to some ports. This works fine if you only want to have client machines, but if you want to have servers (e.g., a Web server), you need to do something else. To a first order, you can either punch a hole in your firewall or put the server outside your firewall. Actually, people often do both; they run two firewalls, with the Web server in between them in what's called the DMZ. The outside firewall has a hole punched in it to allow access to the Web server, but because the server is outside the inner firewall there's no need to punch a hole there. Figure 2 shows what I'm talking about.

Figure 2: Firewall with DMZ
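A toy model of the classic policy described above may help (the allowed port list is purely illustrative, and real firewalls track per-connection state rather than a boolean flag):

```python
# Toy model of a classic stateful firewall: outgoing TCP connections
# allowed (to some ports), new incoming connection attempts refused.
ALLOWED_OUTBOUND_PORTS = {22, 80, 443}   # illustrative policy

def allows(direction: str, dst_port: int, is_new_connection: bool) -> bool:
    if direction == 'outbound':
        # Clients inside can reach (some of) the outside world.
        return dst_port in ALLOWED_OUTBOUND_PORTS
    # Inbound: only traffic belonging to a connection we initiated gets in,
    # so new inbound connection attempts (TCP SYNs) are dropped.
    return not is_new_connection

assert allows('outbound', 443, True)       # browsing works
assert not allows('inbound', 80, True)     # no hole punched for a server
assert allows('inbound', 80, False)        # replies to our own traffic
```

The "hole punching" for a Web server or VPN router amounts to adding an exception that lets some new inbound connections through.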

This DMZ strategy of course works less well if you want to allow VPN access to your network, since the whole point of a VPN is to allow remote—albeit secured—access to the internal network. Either you make the firewall and VPN access router one and the same or you place the VPN access router inside the firewall, i.e.,

Figure 3: Firewall with VPN router

And, of course you need to have a hole in the firewall in order to let packets to/from the VPN router.

So, this gives us the background for understanding the "no holes in the firewall" claim. The first thing you need to understand is that this is a VPN-only solution: you can't use it to secure access to your public mail server or Web server. Those need to have open firewall ports. So, the idea is that you install Steelcape's stuff on both sides of the system, for instance as an appliance on your home network and then on the laptops of your remote users.

But I've just said that the VPN server needs to be able to talk directly to the Internet. So, how does Steelcape work? Their WP says:

In TCP/IP, data is sent though ports, over a network, to a firewall or other network device, and sent on to listening hosts within a businesses infrastructure. This a potential point of exposure, since unwanted packets could end up on host systems. Steelcape leaves hostile data behind at the firewall as enabled by a "pull" architecture. Steelcape-enabled hosts pull packets from the firewall, validate them, and route them on to host systems. If Steelcape does not qualify the packets, they are kept at arm's length from the system and dropped at the firewall. This is accomplished without any sacrifice network bandwidth.

This description sounds initially coherent, but doesn't make sense in light of the claim (in e-mail) that it works with commodity firewalls, which don't have any such buffering or pull built into them. You can't really get this out of the WP, but it turns out that what's going on is that you have a topology like that shown in Figure 4.

Figure 4: Basic Steelcape installation

Whereas your classic VPN installation requires one box at each site, the Steelcape design requires two, one inside and one outside the firewall. The way this works is that the gateway keeps a permanent connection up to the enterprise server (ES). This connection traverses the firewall. When a remote user wants to VPN in, he contacts the ES and authenticates. The ES sends a message to the gateway over this permanent signalling channel telling the gateway that a client wants to come in, along with the client's IP address. The gateway then uses NAT/firewall hole-punching techniques (like ICE, but the Steelcape guys say it's not ICE) to let the remote agent talk to the VPN server (it helps here that Steelcape pushes their traffic over UDP). If this strategy sounds familiar to you, it should. It's exactly the topology used by VoIP systems, with the SIP proxy taking the place of the ES.2 It should also now be clear why this can't work with generic Internet services: they're not set up to contact Steelcape's ES.
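Here's a rough sketch of that rendezvous flow, run entirely over loopback so there's no actual NAT involved. On a real network, the gateway's outbound packet in step 3 is what creates the firewall/NAT state that lets the client's traffic back in. The message contents and flow are my reconstruction of the generic hole-punching technique, not Steelcape's actual protocol:

```python
import socket

# 1. Client contacts the ES and authenticates (not shown).
# 2. ES tells the gateway the client's address over the standing channel.
# 3. Gateway sends an outbound UDP packet toward the client, opening the hole.
# 4. Client can now send traffic that the firewall treats as a reply.

gateway = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
gateway.bind(('127.0.0.1', 0))
gateway.settimeout(2)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(('127.0.0.1', 0))
client.settimeout(2)

client_addr = client.getsockname()       # step 2: relayed by the ES
gateway.sendto(b'punch', client_addr)    # step 3: outbound "hole punch"
client.recvfrom(1500)                    # client sees the punch packet...
client.sendto(b'vpn traffic', gateway.getsockname())   # step 4
data, _ = gateway.recvfrom(1500)
print(data)   # b'vpn traffic'
```

UDP makes this much easier than TCP, since there's no handshake state to coordinate—which is presumably part of why Steelcape runs over UDP.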

This approach has two claimed advantages: security and ease of installation (the other claimed advantages of Steelcape's stuff come from different techniques). In terms of security, it doesn't require that the VPN server be accessible from arbitrary Internet hosts all the time. Holes only get punched for hosts which have been authorized by the ES. Think of this as decreasing the attack surface. Of course, the flip side of this is that the ES does have to be accessible from any host. However, it's true that in order to attack the internal network you do need to compromise two hosts instead of one, so if implemented properly this could give you a measure of defense in depth. I'm not really confident the benefit is that large, but all other things being equal (i.e., Steelcape's software being equally secure to other people's software), it's a real, if modest, improvement.

In terms of ease of installation, it's certainly true that you should be able to install a Steelcape system without the cooperation of the firewall administrator. This is of course the kind of feature that users love and firewall administrators hate, because (from their perspective) it's far too often used to bypass enterprise security policies.1 Certainly, I would think that most enterprises would want any VPN appliance to have the approval of the firewall admin, so getting him to punch a hole hardly seems that onerous. And on the downside, of course, you now have two machines to maintain, which increases your maintenance effort.

The other major claim, increased performance, derives from Steelcape's use of a proprietary protocol which is allegedly faster than TCP. It was a little hard to work out what the optimizations were, but it sounded like principally they used compression and maybe more aggressive congestion control.3 It's certainly the case that you can get improved network performance via link-layer compression, so that sounds plausible. That can of course be done within standard TCP/IP (see IPComp), so there's not likely to be anything proprietary there. And of course as soon as you're doing TCP/IP translation on Steelcape's boxes rather than TCP end-to-end there's lots of opportunity for things to go wrong. So, the "replaces TCP/IP" stuff looks mostly like marketing special sauce to me.

The other thing that should probably make you seriously nervous here is that Steelcape isn't using a standard security protocol (e.g., IPsec, SSL/TLS) but rather something they designed themselves. I didn't drill down on this too far, but apparently it uses Blowfish (a fine algorithm, but generally not a sign of crypto sophistication; the pros use AES) with (according to their product overview) a "48 bit digital signature" which is "randomized every few milliseconds". Since the security of a VPN system fundamentally depends on the crypto in use, using an unevaluated protocol isn't exactly confidence inspiring.

It should be obvious here that it's possible to design a standards-based system that uses the same firewall/NAT traversal techniques used by Steelcape. That would have some advantages and disadvantages vis-a-vis ordinary VPN approaches, plus you'd have a high level of confidence that the security protocol was secure. It doesn't look to me like that's what Steelcape is providing, however.

1. See also draft-saito-mmusic-sdp-ike-00 for a very similar standards-based approach.
2. It's interesting to note that in this case there's no requirement that the ES actually be in the DMZ. It could be some entirely different third party server on the network—after all, this is how SIP works—which puts a somewhat different spin on the above security argument.
3. They also told me that they eliminate the TCP checksum computation and that this speeds up intermediate routers. As far as I know, checksum computations are not a significant part of packet processing overhead.

UPDATE: Ed Felten hypothesizes that the 48-bit "digital signature" is actually a MAC. Another possibility is that it's not data dependent at all; it's just a secret value that gets appended to the packet. That would be consistent with the "randomized every few milliseconds" claim, whereas a MAC would be different for every packet. Needless to say, if that's what it is, it's not adequate.
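For contrast, here's what a real (if uncomfortably short) 48-bit MAC looks like next to the hypothesized static-tag design, using truncated HMAC-SHA256 purely as an example construction—there's no indication this is what Steelcape actually does:

```python
import hashlib
import hmac
import os

key = os.urandom(16)

def mac48(key: bytes, packet: bytes) -> bytes:
    # A genuine (if short) 48-bit MAC: truncated HMAC-SHA256 over the packet.
    return hmac.new(key, packet, hashlib.sha256).digest()[:6]

# A data-dependent MAC differs for (essentially) every packet:
assert mac48(key, b'packet 1') != mac48(key, b'packet 2')

# ...whereas a static appended secret is identical on every packet until
# it's re-randomized, so anyone who observes one packet can forge
# arbitrary traffic until the next rotation:
secret_tag = os.urandom(6)
forged_packet = b'attacker-chosen payload' + secret_tag   # accepted as valid
```

Even the real MAC variant is weak at 48 bits, but the static tag isn't a cryptographic check at all—it's a password sent in the clear.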


May 16, 2007

Terence Spies pointed me at Someecards, snarky electronic greeting cards:

Pretty entertaining.


May 15, 2007

In the Fellowship of the Ring, as everyone knows, the eponymous fellowship sets out from Rivendell to RMA the One Ring. Clearly this is an important mission, but for some reason they're incredibly poorly equipped. Relatively early on we discover that Frodo is the owner of some really cool mithril body armor, which saves his life. Yet none of the others are issued any, something Boromir no doubt comes to regret later when orcs turn him into a pincushion. Similarly, when they pass through Lothlorien, the elves give them elven cloaks, some sort of elf superbread, and some other gifts.

So, here's my question: why does their gear suck so much? If you're going to send 9 guys out to save the world, wouldn't you want them to have the best gear you could scrape up? You'd think the elves could have managed to find enough. Seeing as they started out from Rivendell, would it really have been so hard for Elrond to have given them bread himself? Maybe he was out, but surely he could have had Galadriel FedEx him some. Similarly with the armor, I get that Mithril is expensive, but the dwarves had whole mines of the stuff, so it's pretty hard to believe the elves couldn't get their hands on it, unless some sort of Elvish Donald Rumsfeld was out to prove he could win the war against Sauron on the cheap.


May 14, 2007

Was rereading Niven's Neutron Star the other day and the following passage from Flatlander struck me:
"You picked my pocket?"
"Sure! Think I found it? Would I risk my precious hand under all these spike heels?"
"How if I call a cop?"
"Cop? Oh, a stoneface." She laughed merrily. "Learn or go under, man. There's no law against picking pockets. Look around you." I looked around me, then looked back fast, afraid she'd disappear. Not only my cash but my Bank of Jinx draft for forty thousand stars was in that wallet. Everything I owned.

"See them all? Sixty-four million people in Los Angeles alone. Eighteen billion in the whole world. Suppose there was a law against picking pockets? How would you enforce it?" She deftly extracted the cash from my wallet and handed the wallet back. "Get yourself a new wallet and fast. It'll have a place for your address and a window for a tenth-star stamp. Put your address in right away, and a stamp too. Then the next guy who takes it can pull out the money and drop your wallet in the nearest mailbox—no sweat. Otherwise you lose your credit cards, your ident, everything."

This all sounds very plausible initially—Niven has a talent for sounding convincing—but upon a moment's reflection it doesn't make any sense. I suppose it's possible that with a high enough population density it would become impossible to enforce laws against pickpocketing, but the rest of the reasoning doesn't make sense. You wouldn't expect people to react to an epidemic of pickpocketing by just accepting it, but rather by taking countermeasures. Sure, it's inconvenient to lose your entire wallet, but losing your cash isn't much fun either.

Of course, there are simple countermeasures. First, it's dramatically harder to steal your wallet if you're wearing it inside your clothes rather than in your hip pocket. Second, if people are actually having their wallets stolen often enough that they need to put a stamp in them, you'd expect them to simply stop using bearer instruments entirely—or at least stop keeping them in their wallets. And of course, once people stop carrying cash, there's no point in picking their pockets. This certainly seems like a more likely equilibrium than one where people frequently lose their hard-earned cash to pickpockets and don't take any countermeasures.

Even stranger, it emerges later in the story that the pickpocket is an otherwise perfectly nice woman with a good job. Even if we accept that pickpocketing is legal, I think we can also agree that it's not exactly what you'd call nice. There are lots of things that are legal but still aren't done by people who desire not to hurt others. I would expect that even in a pickpocket-legal world, stealing others' wallets would fall into that category.


May 13, 2007

Bellovin writes:
Those who remember the Crypto Wars of the 1990s will recall all of the claims about "we won't be able to wiretap because of encryption". In that regard, this portion of the latest DoJ wiretap report is interesting:
Public Law 106-197 amended 18 U.S.C. 2519(2)(b) to require that reporting should reflect the number of wiretap applications granted for which encryption was encountered and whether such encryption prevented law enforcement officials from obtaining the plain text of communications intercepted pursuant to the court orders. In 2006, no instances were reported of encryption encountered during any federal or state wiretap.

The situation may be different for national security wiretaps, but of course that's where compliance with any US anti-crypto laws is least likely. There was no mention of national security or terrorism-related wiretaps in the report, possibly because they've all been done with FISA warrants.

This is interesting data, but consider if you will the contrary interpretation: encrypted telephony has seen practically no deployment. During the crypto wars it was widely believed that if the government just got out of the way encryption would become ubiquitous. But export controls were loosened in 1999 and that still obviously hasn't happened. The one exception, of course, is that mobile communications are often encrypted for transmission over the air interface, but (1) they're not end-to-end encrypted so you can wiretap at the junction with the PSTN and (2) the algorithms have historically been quite weak. So, all this really does is impede wiretapping by RF collection, which is an issue for intelligence agencies but not really for law enforcement, which can just serve a warrant on the mobile provider. So, who won the crypto wars again?


May 11, 2007

I'm in the market for a new phone to replace my venerable Treo 600 and I'm thinking about the Blackberry 8300/Curve (allegedly available from Cingular/ATT in the US in a few weeks). Do any EG readers have one of these already and want to chime in with their experiences? I'm particularly interested in (1) call quality and (2) how the keyboard compares to the Treo's.

UPDATE: Here's CNET's Curve review.

Kieran Healy posts about how annoying endnotes are:
Via Andrew Gelman, a post by Aaron Haspel about the evils of poorly-done endnotes, and endnotes in general. This is something John has written about before, too. Endnotes really are a problem in scholarly books. In general, footnotes are better. Both are better than author-in-text citations (Healy 2006).

Indeed. It's important to distinguish between references used as citations and what are basically sidebar comments. It's not that bad to have to flip to the end of the book to figure out which exact publication someone is citing, especially if there is an inline explanation. The right form here is "Rescorla [9] argues that...". On the other hand, having to flip to the end of the book to see some endnote that explains a subtle point of the argument is quite intolerable.

One particularly horrid practice is using the same code point for multiple endnotes (Haspel's suggested practice of using symbols rather than numbers is particularly problematic here). If you must use endnotes, it's best to number them consecutively from the front of the text. Otherwise one is forced to remember not only the note number but also the chapter—or worse yet, the page—on which it appears.

All that aside, while I hate endnotes I rather like footnotes. The linear nature of manuscripts formatted on paper (as opposed to electronic hypertext) lends itself to a particular expository style with a fairly short maximum context stack depth. Footnotes provide a limited escape hatch to that linearity (kind of the way programmers think about goto). The context switch overhead makes this effect a lot harder to achieve with endnotes.


May 9, 2007

OK, this is a clever countermeasure to DVD theft:
A chip smaller than the head of a pin is placed onto a DVD along with a thin coating that blocks a DVD player from reading critical information on the disc. At the register, the chip is activated and sends an electrical pulse through the coating, turning it clear and making the disc playable.

The radio frequency identification chip is made by NXP Semiconductors, based in the Netherlands, and the Radio Frequency Activation technology comes from Kestrel Wireless Inc., based in Emeryville.

Some obvious questions:

  • What's the incremental cost of producing a DVD with this chip in it? DVDs are incredibly cheap to manufacture, so even a few pennies are a significant increase in costs. And remember that while it's the retailers who benefit from this technology, it's the manufacturers who have to put it in. Will the retailers be willing to have the cost passed on to them?
  • There's a pretty substantial collective action problem. The retailers need to put the activators in, but they're of no value unless a substantial number of disks have the chip—and at least initially, when most disks don't have it, the clerks will forget to activate the ones that do, which means unhappy customers with broken disks. Similarly, the manufacturers won't put it in unless they see an advantage, which depends on the retailers activating it.
  • Is it really that hard to build or steal an activating unit? Typically this stuff isn't cryptographically strong; it just relies on the obscurity of the design of the RFID activators. If so, you should expect to see bootleg activators. Obviously, a bootleg activator isn't useful for single-unit theft, but if you operate a DVD theft ring you can presumably afford one. Heck, can you just scrape off the coating with an X-Acto knife?

Not saying it's not clever, though.


May 7, 2007

Mike O'Hare makes an interesting point about current efforts to convert incandescent lights to compact fluorescents (CFLs). Like many O'Hare posts, the writing is a bit hard to track, but the point is simple: any lighting system has two outputs, light and heat. The heat is typically thought of as waste and so CFLs are more efficient than incandescents in that they have a far higher light/heat ratio. If you're heating your home anyway, though, that heat isn't waste, but rather something you want, so if you switch from incandescents to CFLs you end up heating your home some other way. Some of those ways are more efficient than others,1 but you're certainly not getting the rated efficiency gain of CFLs. By contrast, if you're cooling your home (i.e., via AC), then not only is the heat waste but you also consume more energy by running the AC more to cool the house.

1 The physics here is just a little complicated, but again the basic idea is not. When you burn fossil fuels locally to produce heat, you can arrange to capture the energy of the reaction quite efficiently (the losses being in emitting exhaust that's hotter than the ambient temperature). So, if you're burning oil or gas to heat your house, this is quite efficient. By contrast, if you're using resistive electric heat, the fossil fuels are burned remotely, turned into electricity, and run to your house, where you push the current through wires to produce heat—just like a lightbulb, except here the light is the waste and the heat is the intended output. Luckily, for second-law reasons it's pretty easy to tune for a very high heat/light ratio. So, because of generation inefficiencies and transmission loss, electric resistive heat is less efficient than locally burned fossil fuels. But there's not much efficiency difference between electric resistive heat and lightbulbs, as long as you're lighting your house anyway. By the way, note that while it's easy to burn fossil fuels for heat efficiently locally, it's not so easy to burn them for power locally, which is why plugin hybrids and electric cars are more efficient than regular internal combustion engines or even regular hybrids.
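To put rough numbers on this (the efficiency figures below are my illustrative assumptions, not figures from O'Hare's post):

```python
# Heat delivered to the house per unit of fuel energy burned.
# All efficiency numbers are illustrative assumptions, not measurements.
furnace_eff = 0.90       # modern gas furnace: fuel -> heat, locally
plant_eff = 0.35         # typical thermal power plant: fuel -> electricity
grid_eff = 0.93          # transmission/distribution losses
resistive_eff = 1.00     # resistive heat: electricity -> heat, essentially lossless

local_gas = furnace_eff
electric_resistive = plant_eff * grid_eff * resistive_eff

print(f'gas furnace: {local_gas:.2f}, resistive electric: {electric_resistive:.2f}')
# An incandescent bulb is just the resistive case with the roles reversed:
# the few percent emitted as visible light is the intended output, the
# heat the nominal "waste".
```

On these (assumed) numbers, locally burned gas delivers nearly three times the heat per unit of fuel that resistive electric heat does, which is the sense in which "heating with lightbulbs" is inefficient even when the heat isn't wasted.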

Coming from a background in spectroscopy, it's always bugged me that diabetics have to take physical samples to measure their blood glucose levels. If we can remotely measure the composition of the sun, 8 light minutes away, we should be able to measure the composition of some blood vessel 1 millimeter away. But no... You need to first take a sample (ouch!) and then do some chemical testing (that's what the test strips are for) [wikipedia background]. The meter is just a mechanical way of reading the result of the blood/strip reaction. We're talking pretty old-school analytical chemistry here. It looks like the situation is improving, though: some scientists in Hong Kong have developed a remote technique based on transdermal infrared spectroscopy.

May 6, 2007

Matthew Yglesias links to this depressing NYT article about some AEI conference where a bunch of conservatives expressed skepticism about evolution:
For some conservatives, accepting Darwin undercuts religious faith and produces an amoral, materialistic worldview that easily embraces abortion, embryonic stem cell research and other practices they abhor. As an alternative to Darwin, many advocate intelligent design, which holds that life is so intricately organized that only an intelligent power could have created it.


The reference to stem cells suggests just how wide the split is. "The current debate is not primarily about religious fundamentalism," Mr. West, the author of "Darwin's Conservatives: The Misguided Quest" (2006), said at Thursday's conference. "Nor is it simply an irrelevant rehashing of certain esoteric points of biology and philosophy. Darwinian reductionism has become culturally pervasive and inextricably intertwined with contemporary conflicts over traditional morality, personal responsibility, sex and family, and bioethics."


Skeptics of Darwinism like William F. Buckley, Mr. West and Mr. Gilder also object. The notion that "the whole universe contains no intelligence," Mr. Gilder said at Thursday's conference, is perpetuated by "Darwinian storm troopers."

"Both Nazism and communism were inspired by Darwinism," he continued. "Why conservatives should toady to these storm troopers is beyond me."

A few points worth making here. First, it would be great if the NYT would stop referring to the theory of evolution as "Darwinism". As far as I know, no biologist uses the term "Darwinism" to refer to the theory of evolution. The term is used more or less exclusively by Creationists as part of their frame that it's a religion rather than, you know, the consensus scientific theory of the development of more or less all life on Earth. It would be great if the NYT didn't implicitly buy into that frame. If they're going to call the IDers by their chosen name rather than "Creationists", the least they can do is use an accurate name in this case.

Second, it's not like Buckley, West, or Gilder are qualified to have any reasonable opinion about the truth value of the theory of evolution. Moreover, when you look at these quotes it becomes clear that, at least in the case of Gilder, the problem isn't some evidence-based objection but rather that he doesn't like the cultural/moral implications of the theory, or even more ridiculously, that he doesn't like some of the conclusions that others have drawn. But of course that's not the criterion we use to judge the truth of a scientific theory, as Derbyshire points out:

As for Mr. Derbyshire, he would not say whether he thought evolutionary theory was good or bad for conservatism; the only thing that mattered was whether it was true. And, he said, if that turns out to be "bad for conservatives, then so much the worse for conservatism."


Washington Post's Security blog claims that AOL only verifies the first eight characters of the password:
A reader wrote in Friday with an interesting observation: When he went to access his account, he accidentally entered an extra character at the end of his password. But that didn't stop him from entering his account. Curious, the reader tried adding multiple alphanumeric sequences after his password, and each time it logged him in successfully.

It turns out that when someone signs up for an account, the user appears to be allowed to enter up to a 16-character password. AOL's system, however, doesn't read past the first eight characters.

How is this a bad set-up, security-wise? Well, let's take a fictional AOL user named Bob Jones, who signs up with AOL using the user name BobJones. Bob -- thinking himself very clever -- sets his password to be BobJones$4e?0. Now, if Bob's co-worker Alice or arch nemesis Charlie tries to guess his password, probably the first password he or she will try is Bob's user name, since people are lazy and often use their user name as their password.

I don't use AOL, so I have no idea whether this is true, but if it is, it's pretty clearly bad. As suggested above, it's actually worse than just allowing 8-character passwords. In order to resist password search it's important to have a high-entropy password, and the shorter the password, the more random-looking it has to be. Of course, users have trouble remembering random-looking strings, which is why people are often encouraged to use passphrases: they're easier to remember, but each character of a passphrase typically carries fairly low entropy, so a passphrase needs to be long to be strong. So, if users think they can use long passwords and actually can't, the situation is worse than if they had just been told to use short passwords.

That said, when people talk about the need for high entropy passwords, the attack they're usually concerned with is dictionary attacks on the encrypted password file. If some attacker can get their hands on AOL's password file then there are much bigger problems than this. So, the relevant attack is one where people's passwords are really easy to guess (like their username). It's not clear that people who use passwords that bad are going to use passwords longer than 8 characters.
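To make the failure mode concrete, here's a minimal sketch of what a check that silently truncates to eight characters looks like. This is purely illustrative (the password and the `truncating_verify` function are made up, and real systems compare password hashes, not plaintext); the point is just that any string sharing the first eight characters is accepted:

```python
# Hypothetical example of a password check that truncates to 8 characters.
STORED_PASSWORD = "BobJones$4e?0"  # the user's full 13-character password

def truncating_verify(candidate: str) -> bool:
    # Buggy check: only the first 8 characters are compared, so any
    # candidate with the right 8-character prefix is accepted.
    return STORED_PASSWORD[:8] == candidate[:8]

assert truncating_verify("BobJones$4e?0")   # the real password works...
assert truncating_verify("BobJones")        # ...but so does the bare username
assert truncating_verify("BobJonesXXXXXX")  # ...or the prefix plus any junk
```

In other words, the characters after position eight contribute zero entropy, so the effective search space is that of an 8-character password no matter what the user typed.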

At the Republican presidential debate, the candidates were asked whether they believed in evolution and apparently Tancredo, Brownback, and Huckabee raised their hands for no. Huckabee has since issued some kind of clarification:
"And the main thing ... I'm not sure what in the world that has to do with being president of the United States," said the former Arkansas governor.

Huckabee said he has no problem with teaching evolution as a theory in the public schools and he doesn't expect schools to teach creationism.

He said it was his responsibility to teach his children his beliefs though he could accept that others believe in evolution.

"I believe that there is a God and that he put the process in motion," Huckabee said.

Well, I suppose that not expecting schools to teach creationism is better than expecting them to (though I wonder about "intelligent design"). I'm not sure I really believe him, since Wikipedia quotes him as saying something rather different:

Huckabee has voiced his support of creationism. He was quoted in July 2004 on "Arkansans Ask," his regular show on the Arkansas Educational Television Network: "I think that students also should be given exposure to the theories not only of evolution but to the basis of those who believe in creationism." Huckabee also stated "I do not necessarily buy into the traditional Darwinian theory, personally."

Moreover, I can't agree that it doesn't have much to do with being president. Even if you ignore the fact that the president does have to decide on policy positions that require some knowledge of biology (and so it would be helpful to know something about it), what does it say about someone that they've managed to get to be 52 years old and be nearly completely ignorant of the foundations of biology (or at least is willing to pretend to be so to get elected)? Truth be told, I can't decide which is more depressing.


May 4, 2007

I've been backpacking for years with the venerable Sierra Designs Clip 3. A nice, comfortable tent (though like all n-person tents more suitable for n-1 people), but not freestanding and more importantly, not exactly light. For my trip to Emigrant, I updated my gear with a Seedhouse SL2. The SL2 is a freestanding double-wall two-person tent (though I think two people would be pretty cozy) with a rated (trail) weight of 2 lbs 14 oz, lighter than many single wall tents of equivalent size. This weight reduction is achieved at least partly by having the tent body itself (except the bathtub floor) made nearly entirely of mesh.

Putting the tent up is easy. It's a single pole design with the pole being sort of an H shape, or rather >-<. One end goes in each corner, giving you a freestanding pole, and then you clip the tent to the pole. Getting the pole inserted in the corners is a bit tricky, since once you get the first two ends in the other ends tend to spring out a bit, but it's fairly straightforward once you get the hang of it. Although the tent is technically freestanding, as a practical matter you want to put in at least two stakes—one per side—to pull the walls of the tent away from your body. The ground I was camping on was fairly hard and the stakes hard to get in, so I settled for those two and two more to stake out the vestibule. Also, if you're using the rainfly you can clip the walls of the tent to the rainfly to tension the walls even more. This wasn't necessary with one person but with two it would probably be good to do this or (if you're not using the rainfly) to guy out the tent walls.

Generally, this tent worked well and was comfortable. My only complaint is that because the tent body is solely mesh, air tended to come up through the vestibule and into the tent, which wasn't that great on a cold night at 8000 feet. I only noticed this effect in the middle of the night so just dealt with it by unstaking the vestibule and letting it sit against the tent wall, which worked fine. This could probably be ameliorated in a number of better ways, either by carefully staking the vestibule to the ground (this requires getting your stakes pretty much all the way in) or by just sleeping with your feet to the door.

Aside from this issue, I'm happy with the SL2. Also, currently, it's on massive sale at REI for $219, down from $319.


May 3, 2007

I spent a couple days in the Emigrant Wilderness (trail conditions report to follow shortly) earlier this week. A short trip, but a good opportunity to try out some of my new gear, including Cilogear's 40 liter worksack (the older model).

Cilogear is a tiny company run by a guy named Graham Williams (he's the one who answers the email when you write to them). Their special sauce is the extreme configurability of the packs. In particular:

  • It's an internal frame pack but the framesheet and backpad fit into a sleeve in the back of the pack, so that they can be removed and the pack used as a frameless backpack. This feature isn't unique to Cilogear, but is nice. The backpad also doubles as a bivy pad.
  • The hip belt can be easily removed to make the pack even more of a simple knapsack. The lid/loft comes off as well.
  • Instead of fixed compression straps like most packs have, each side of the pack has two vertical rows of attachment points, one in the front and one in the back. This lets you use a variety of strap arrangements to transfer/control the load as appropriate for how you've packed the pack.

I got a dynamite (super-tough) model on clearance for $80 (just before the new models came out) and then got the upgrade kit, consisting of a new, thinner hip belt, an improved lid, and some extra straps for $40, for a total cost of $120. Total pack weight is advertised as 3.6 lbs, which is heavier than an ultralight pack but still lighter than the much larger Gregory Forester (just under 5 lbs) it replaces.

Generally, I'm quite happy with this pack. I was carrying a moderate load of around 30 lbs, but I never felt overly weighed down. The pack is narrow and rides close to your back so you're agile over even fairly rough terrain. I definitely felt like that aspect was better than with the Gregory. The hip belt conforms extremely well to your hips. With the Gregory my hips started to hurt after only a few hours (some sort of pressure point over the ischial spines), but with the Cilogear they were never sore at all. After two days, my supraspinatus muscles (between neck and shoulder) were somewhat sore, but this is to some extent a feature of every pack I've ever used and I think is just a consequence of wanting to carry a bit more weight than average on my shoulders rather than my hips. It could probably be ameliorated with better pack positioning, once I have a chance to tune this pack a bit. One thing you'd think would be uncomfortable is that the backpad is flat and fairly hard, so you'd expect it to hurt your spine and also collect a lot of sweat. Neither seemed to happen much, though.

I definitely like the ability to use this bag as a frameless pack—it's a nice feature when traveling. I haven't come to a decision about how I feel about the strapping system yet. In theory you're supposed to be able to do a lot of adjustments to optimize load carrying but I haven't experimented enough to really think I've done substantially better than the fixed strapping systems on standard packs. Maybe this is something I'll get used to as I work with the pack some more. It's at least not worse.

A few small quibbles:

  • There's a second, smaller rip-stop sleeve in front of the main frame/bivy pad sleeve, presumably to let you put small items in. The top inch or so of stitching on mine has torn out. This isn't a big deal and the pocket still works, but it is a small bug nonetheless.
  • This pack doesn't have any pockets for water bottles in the sides of the pack—though the crampon pocket on the back can be used for this purpose. This means you absolutely have to take your pack off to get a drink (unless you use a drinking bladder, which I don't). On the other hand, my experience with those pockets is that they tend to either be hard to get your bottles into/out of or that they eject the bottles unexpectedly (or both), so that's not as big a drawback as one might think. This does seem like something Cilogear could easily fix.

All in all, I'm quite happy with this pack. It's comfortable and carries well and I expect to use it for future trips rather than my Gregory, unless I really need to carry a lot of stuff. I imagine that the new version is even nicer. If you're in the market for a new pack, I encourage you to check out Cilogear.


May 2, 2007


May 1, 2007

As I've mentioned previously, simple password systems suffer from a variety of capture and replay attacks. All the best solutions involve cryptographic authentication but this involves changing both client and server, which is a pain (the client is the real problem). One of the early approaches to this problem was to give the users physical cryptographic tokens which they could use to generate a supplemental password. This had the big advantage that it could be deployed without changing the user's client software.

The most successful of these tokens was the SecurID card (now part of RSA). A SecurID is a card with an LCD display that generates a new "random" value every 30 seconds. The server side is synchronized to the token and so can verify the token value. I mention this because VeriSign is introducing a credit card with a similar technology, aimed at login for e-banking.

Rosch explained that, at this point anyway, the cards would not be geared toward online retailers. Instead, they're aiming the concept at businesses and consumers who set up online accounts, like banks, brokerages and PayPal.

The cards would hold an algorithm that could generate the six-digit passwords, which are only good for 30 seconds.

When a consumer wanted to log onto her online banking account, for instance, she would log on with her user name and password, as usual. Then the site would ask for her secondary password. She would press a button on her credit card and the numerical password would flash up on the LCD screen. The next time she needed to log into her account, her card would give her a different number, which the site would match up with the card's unique serial number, which corresponds to the algorithm it uses.


Rosch added that even if a key logger planted surreptitiously on the user's computer picked up that second password, a hacker wouldn't be able to use it because subsequent transactions would require a different password.

This certainly seems reasonable as a phishing countermeasure. Web login and telnet are similar in a number of respects—in particular in the desire to avoid touching the client. It shouldn't be too hard to train users to type in the password from the LCD, given that it's going to appear more or less next to the credit card number. My main concerns would be the cost and durability of the cards. Durability is especially an issue. I have no particular information about durability of the ICT cards, but a card with electronics and an LCD, plus a magstripe, seems likely to fail more often than just a magstripe. Even if the cards can be made as cheap as mag stripe cards, if they break a lot that means a lot of calls to customer service to get new cards, and customer service calls are expensive.
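The actual SecurID algorithm is proprietary, but the general time-synchronized scheme is easy to sketch: both the card and the server hold a shared secret, and each derives a short code by MACing the current 30-second interval. Here's a minimal illustration in Python along the lines of the public TOTP construction (the secret and function names are made up; this is not VeriSign's or RSA's actual code):

```python
import hashlib
import hmac
import struct
import time

def six_digit_code(secret: bytes, t: float, step: int = 30) -> str:
    """Derive a 6-digit code from a shared secret and the current
    30-second time interval (in the style of the public TOTP scheme)."""
    interval = int(t // step)                 # which 30-second window we're in
    msg = struct.pack(">Q", interval)         # interval as a big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 31 bits from the digest, reduce mod 10^6.
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

secret = b"per-card shared secret"  # provisioned to both card and server
t = time.time()
# Card and server independently compute the same code within a window:
assert six_digit_code(secret, t) == six_digit_code(secret, t)
print(six_digit_code(secret, t))    # a 6-digit code that rolls over every 30s
```

Because the code is a function of the time interval, a captured value becomes useless as soon as the window rolls over, which is exactly the replay-resistance property claimed above.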

I note that they don't seem to be targeting this towards online retailers. That's not surprising since getting that right seems a lot harder. Online retail is a substantially less close fit for the password login model. In particular, merchants want to be able to both batch transactions and retain the credit card numbers for future transactions (think Amazon one-click). Obviously either of these is inconsistent with at least naive implementations of very short-term authenticators. This isn't to say it's not possible to make something along these lines work for credit cards, just that it's more complicated.