December 2007 Archives

 

December 30, 2007

The elevator in the International Terminal at SFO only goes to two floors. Internally, it has two buttons, for the top floor and the bottom floor. But of course, when you get in it's either at the top floor or at the bottom and the only place it could go is the other floor. So, at most, you only need one button: next floor. I guess this would need a different internal design for the elevator software/firmware/wiring, but the programmer in me does find the current arrangement a bit inelegant.
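Here's the one-button design as a toy Python sketch (entirely invented, of course; nobody programs elevators this way):

    class TwoFloorElevator:
        def __init__(self):
            self.at_top = False    # starts at the bottom floor

        def next_floor(self):
            # The single button: with only two floors, the current state
            # fully determines the destination.
            self.at_top = not self.at_top
            return "top" if self.at_top else "bottom"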
 

December 29, 2007

Dropped in at two Vancouver area climbing gyms this week, Vertical Reality in Surrey and Cliffhanger in Coquitlam. Both gyms have bouldering and routes, but I just bouldered.

Vertical Reality ($10 to boulder, $15 to toprope) is in a sort of industrial park in Surrey. It's a bit ghetto, dimly lit, with short walls. There's a lot of bouldering, though, and the bouldering walls are incredibly dense, with multiple colors of tape on nearly every hold. They've even run out of tape colors so a lot of the labels are like yellow with red stripe. This all makes it fairly hard to track the problems—you definitely need to scope them before you get on. Also, the problems are deliberately rated about one V-grade harder than normal. Bonus features: free coffee (though it is Folgers) and during the 1-3 drop-in period the staff will belay you (I didn't do this).

Cliffhanger ($13.50 to boulder, $15 to toprope, includes gear) is a fair bit bigger, but still substantially smaller than the gyms in the Bay Area. There's one big bouldering wall that's fairly overhanging, plus a small doorway/roof, and problems scattered on the route walls. The problems here seem to be a bit more accurately rated, though they'd just had a comp, so the problems were all numbered and you had to mentally map 1-10 to V0, etc., which was a bit irritating. The problem density was pretty high again, with some hard-to-see tape colors, so you still had to really scope the problems out before getting on the wall. Some of the doorway/roof problems were especially cool—powerful and reachy. Negatives: if you want to top-rope you need to pay a $6 belay test fee the first time. I've never seen this anywhere else.

Other notable features of both gyms:

  • Tricky starts. Apparently this is a common local feature because Squamish starts are difficult.
  • The same ropes for toprope and lead. Note that this means if you lead, you either need to finish or someone else needs to climb to the top to reset the rope. The way things work at the gym I climb at is that there are separate fixed ropes for toprope and then you bring your own rope for lead.

Also, next time I'll bring my own shoes. Rental climbing shoes really suck, especially since you definitely want to wear socks with them—which screws up your feel for the rock—unless you'd enjoy a case of foot fungus.

 

December 28, 2007

Schneier notes the TSA's new rules about lithium ion batteries. Here's their overall policy:
The following quantity limits apply to both your spare and installed batteries. The limits are expressed in grams of “equivalent lithium content.” 8 grams of equivalent lithium content is approximately 100 watt-hours. 25 grams is approximately 300 watt-hours:
  • Under the new rules, you can bring batteries with up to 8-gram equivalent lithium content. All lithium ion batteries in cell phones are below 8 gram equivalent lithium content. Nearly all laptop computers also are below this quantity threshold.
  • You can also bring up to two spare batteries with an aggregate equivalent lithium content of up to 25 grams, in addition to any batteries that fall below the 8-gram threshold. Examples of two types of lithium ion batteries with equivalent lithium content over 8 grams but below 25 are shown below.
  • For a lithium metal battery, whether installed in a device or carried as a spare, the limit on lithium content is 2 grams of lithium metal per battery. Almost all consumer-type lithium metal batteries are below 2 grams of lithium metal. But if you are unsure, contact the manufacturer!

This seems like it will be a lot of fun. I'm really looking forward to watching TSA reps try to figure out whether a given device has over 8 grams of equivalent lithium in it, let alone trying to add up the watt-hours in various devices to decide if they are over 300. (Note that 8 grams is claimed to be about 100 watt-hours, so at that ratio 302 watt-hours is over the 300 watt-hour line but still under 25 grams.) This "contact the manufacturer" thing is pretty nuts. TSA needs to have a list to decide what they want to accept anyway, so why don't they just publish it?
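Here's the arithmetic behind that parenthetical, using only the TSA's own stated equivalences:

    # TSA: 8 g of equivalent lithium ~ 100 Wh, i.e., 12.5 Wh per gram.
    WH_PER_GRAM = 100.0 / 8.0

    print(302 / WH_PER_GRAM)   # ~24.2 g: over the "300 Wh" line but under 25 g
    print(25 * WH_PER_GRAM)    # 312.5 Wh: so "25 g" and "300 Wh" don't even agree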

Another thing that's weird is that you can't have spare batteries in your checked luggage, but you are allowed to have such batteries installed in your devices. I'm sure my laptop will contain any fires or explosions. Outstanding!

 
Just paid a traffic ticket (see here) via Santa Clara County's not very impressive IVR (amazingly, there is no Web system). Among the high-tech features:
  • Extreme amounts of clipping and distortion, to the extent that you can only understand about 3/4 of the prompts. It's strangely inconsistent—some of the prompts sound fine, but some are almost incomprehensible.
  • The interface for entering your citation number and your last name is pretty bad. Say your citation number is H01234. First it asks you if there are any letters. Then it asks you to key in the letter, then prompts you with each possible letter on that number key so you can confirm which one you meant. Then it asks if there is another letter. If not, it asks you to key in any numbers there are. Then it asks if there are any more letters. Repeat. I'm not sure this is actually the IVR's fault; more likely they have to deal with tickets from a wide variety of jurisdictions. Still, there seem to be a bunch of ways to make this better (standardize citation numbers, add a jurisdiction/format code so that the format is predictable, etc.)
  • After you've paid your fine, it gives you a (really long) receipt number and then asks you to press one to repeat, two to continue. If you press two, it asks you to key in your citation number. I assume you're done at this point and can hang up, but if not there may soon be a warrant out for my arrest.

Oh, there's also a $12.95 "convenience fee" for using this system to pay your fine by credit card.

 

December 27, 2007

Linos, Linos, and Colditz's BMJ paper on airport screening is getting a lot of attention. They write:
We systematically reviewed the literature on airport security screening tools. A systematic search of PubMed, Embase, ISI Web of Science, Lexis, Nexis, JSTOR, and Academic Search Premier (EBSCOhost) found no comprehensive studies that evaluated the effectiveness of x ray screening of passengers or hand luggage, screening with metal detectors, or screening to detect explosives. When research teams requested such information from the US Transportation Security Administration they were told that evaluating new screening programmes might be useful, but it was overshadowed by "time pressures to implement needed security measures quickly."16 In addition, we noticed that new airport screening protocols were implemented immediately after news reports of terror threats (fig 1).

It's unsurprising that there are no real studies on this topic, but it's not at all clear that it would be practical, or even possible, to do them even if we wanted to. The authors suggest a controlled trial of screening effectiveness at detecting specific types of attacks:

After informing the airport managers, gaining approval from research ethics committees and police, and registering our trial with one of the acceptable International Committee of Medical Journal Editors trial registries, we would select passengers at random at the check-in desks and give each traveller a small wrapped package to put in their carry-on bags. (We would do this after they have answered the question about anyone interfering with their luggage.) A total of 600 passengers would be randomised to receive a package, containing a 200 ml bottle of a non-explosive liquid, a knife, or a bag of sand of similar weight (control package) in a 1:1:1 ratio. Investigators and passengers would be blinded to the contents of the package. Our undercover investigators would measure how long it takes to get through security queues and record how many of the tagged customers are stopped and how many get through. A passenger who is stopped and asked to open the wrapped box would be classed as a positive test result, and any unopened boxes would be considered a negative test result.

This study design seems problematic as a measure for screening effectiveness. Security screening is fundamentally different from screening for diseases because disease screening isn't adversarial.

To take the simplest case, consider genetic diseases. When you screen for Tay-Sachs, the Tay-Sachs gene isn't trying to figure out how to evade your screen. Even in cases like cystic fibrosis where there are genotypes which produce pathology but aren't detectable with standard screening methods (the basic CF screen only detects 80% of mutations), there's no selective pressure for the undetectable genotype, just pressure against the detectable ones. The undetectable genotypes don't increase in the population.

To take a slightly more complicated case, consider non-genetic diseases, which do evolve. HIV, for instance, regularly evolves resistance to the antiretrovirals we use to treat it. [Warning: I'm working from general principles here. If there are cases of evolved resistance to screening, I'd love to hear about them.] Screening is a different case, though, for at least two reasons. First, HIV drug resistance arises to a great extent from selective pressure between the genotypes present in a given patient: treating that patient with antiretrovirals exerts selective pressure against the susceptible genotypes, so you end up with a much higher fraction of resistant genotypes within the patient. But when you're doing screening, any nontrivial fraction of detectable organisms leads to a positive result and (presumably) treatment, so you don't get as much selective pressure between the detectable and undetectable variants. Second, viruses and bacteria aren't intelligently trying to evade your screening, so even if some stealth did evolve, you would likely have plenty of time to adapt and test your screening technology.

By contrast, in the case of airline screening, you have an intelligent attacker with a very short reaction cycle, so as soon as they know what kind of screening you are using they can move to evade it. Also, you don't need each attacker to independently evolve defenses—as soon as someone figures out a defense technique, they can tell a lot of other attackers about it. (This is also why signature-based virus detection is such a hard problem, with relatively high false negative rates.) This makes the kind of evaluation the authors propose of whether a given set of screening techniques works very problematic: by the time you've done your effectiveness study, it's already obsolete.

More importantly, this study design sort of confuses a technique (stopping people from bringing weapons through the security checkpoint) with the goal (stopping people from blowing up airplanes). But of course these aren't the same thing. For instance, you could jump the fence and smuggle explosives into the sterile area. So, the question you really want to ask is whether airport security decreases the chance of planes being bombed. In order to answer it, you need a different study design: one which compares various security regimes in terms of the number of terrorist attacks that occur under them. This is a much harder study to do, for a number of reasons.

First, you have the "outrun the bear" problem. Say that you have both good and bad security, and terrorists preferentially attack airports with bad security. This doesn't necessarily tell you that if everyone adopted good security you would see fewer attacks. The terrorists might just be lazy enough to choose the softer targets, but would mount attacks anyway—this is a variant of the adaptiveness problem. We just don't understand the supply model that well.

Second, ignoring this problem, it's not clear we have enough data to do a meaningful study, because the number of terrorist attacks is so low. Remember that there have been no successful US airline hijackings or bombings since September 11th 2001, so if you'd run a study of this type starting in 2002, you would not be able to reject the null hypothesis that good airline security (assuming, as seems likely, that there's existing variation in screening quality) was useless. We just don't know whether the reason we haven't had any attacks in over five years is because of good security or because people aren't trying, and you'd need a lot more data to get a significant result.
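To get a feel for how underpowered such a study would be, here's a back-of-the-envelope calculation. The attack rates are pure invention for illustration; the point is only that rare events generate almost no signal:

    import math

    # Invented rates: attacks succeed at 0.2/year under bad screening,
    # and good screening cuts that in half. Model attack counts as Poisson.
    years = 5
    lam_bad = 0.2 * years       # expected attacks over the study, bad screening
    lam_good = 0.1 * years      # expected attacks over the study, good screening

    print(math.exp(-lam_bad))   # P(zero attacks | bad screening)  ~ 0.37
    print(math.exp(-lam_good))  # P(zero attacks | good screening) ~ 0.61

    # Both regimes usually produce zero attacks, so five years of data
    # gives you almost no power to distinguish them.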

Given these issues, it's pretty hard to imagine what kind of study would let you settle the question. That's not to say that I think that the current flavor of airport security is useful, but the absence of studies showing that it works isn't that meaningful a criticism either.

 

December 25, 2007

Mrs. Guesswork and I are in White Rock, BC visiting her parents for the holidays. White Rock is only a few miles from the US, and I needed to go for a run, so I figured two great tastes that taste great together, and headed for the border. Anyway, I usually fly into Canada (passport required) but my understanding was that you just needed ID to travel between the US and Canada, so I shoved my driver's license in my pocket and headed out.

I ran out to the 175th street border crossing and after a little screwing around figured out which building to go into. I showed the DHS guy my license and he asked me where my passport was (in my room) and said that due to the WHTI I would soon need a passport to enter the US. He asked me a bunch of questions about where I was born, etc. and then said that while he would let me into the US, without proof of citizenship the Canadians might not let me back in. I asked if he thought that was likely and he sort of waffled, but finally said that they might make me sit around until someone brought me a passport but that it probably wouldn't be a problem, especially if I had recently used my passport to enter Canada so they had records (I flew in on Monday).

I entered the US and ran to the Peace Arch border crossing. I went into the office there and showed my license and explained that I didn't have a passport. The woman asked me a bunch of questions (where I was staying, who I was with, etc.), then called over another agent who asked me some more questions, and then filled out some form, gave it to me, admonished me to carry proper ID, and sent me over to another window where the agent asked me some more questions and said I could go on through.

A few notes about this:

  • It's not clear to me that you're actually required to show proof of citizenship just yet. The WHTI proof of citizenship requirements don't come into effect till January 31, 2008, so it seems like a driver's license should be enough for now.
  • A few minutes looking around and it's not as easy as you'd think to find a concrete statement of what the current identification requirements are. For Canadian citizens entering the US, it appears you need to present ID but that there's no actual requirement that you show proof of citizenship. The officer can accept an oral declaration of citizenship. According to the US customs officer, Canadian policy tends to track the US.
  • The American CBP officer did some sort of computer lookup. The Canadians didn't, so they clearly didn't check that I had actually ever presented a passport.
  • Regardless of the policy, letting me through seems to me the right plan—though of course I would say that—it's not like it's that hard to forge the relevant documents, so who would bother to come up with a story like mine and memorize all the details?
  • In neither case did people really try to physically stop me. In both cases, I went into and came out of the same door, so there was no real mechanism to make sure that I actually talked to anyone. In the CA->US direction, the CBP officer just gave me a piece of paper with a note on it to show to some other officer. Pretty hard to believe I couldn't have forged that. In the other direction there wasn't even that. And of course there aren't fences across the entire border.

Next time I'll bring my passport, though.

 

December 23, 2007

In my original post on Lauren Weinstein's suggested adoption of universal HTTPS, I said that MITM attacks were an issue I would address in a separate post. This is that post. As cryptographers and COMSEC engineers never tire of pointing out, if your channel isn't authenticated then you're very vulnerable to active attackers. The classic attack is what's called a man-in-the-middle (MITM) attack,1 but in general the problem is that you can end up talking to the attacker when you think you're talking to the right person. There are a lot of proposed solutions for this, but the only one that really works when you're trying to talk to someone you don't know is to have someone you do know (or at least trust) vouch for them. In TLS this is done with certificates, and the third party you trust is the certificate authority.

Whenever this topic comes up, you hear a lot of complaining about the difficulty and expense of obtaining certificates. For instance, here's Weinstein:

Certificates are required to enable TLS encryption in these environments, of course. And while the marketplace for commercial certs is far more competitive now than it was just a few years ago, the cost and hassle factors associated with their purchase and renewal are very relevant, especially for larger sites with many operational server names and systems.

It's certainly true that certs are an obstacle, though not as big an obstacle as people think. You can get a certificate for as little as $9/year. It's a little inconvenient, but it wouldn't be hard for Web hosting providers (who typically charge rather more than that) to simply issue you a certificate (or work with a CA to do so) as part of your Web site setup. But still, this is obviously more inconvenient than not doing anything. So, do you need a certificate at all? Here's Weinstein again:

However, in a vast number of applications where absolute identity confirmation is not required (particularly when commerce is not involved), self-signed certificates are quite adequate. Yes, as I alluded to in my previous blog posting, there are man-in-the-middle attack issues associated with this approach, but in the context of many routine communications I don't feel that this is as high a priority concern as is getting some level of crypto going as soon as possible.

Given their significant capabilities, why then are self-signed certs primarily employed within organizations, but comparatively rarely for servers used by the public at large, even where identity confirmation is not a major issue?

A primary reason is that most Web browsers will present a rather alarming and somewhat confusing (for the typical user) alert as part of a self-signed certificate acceptance query dialogue. This tends to scare off many people unnecessarily, and makes self-signed certificate use in public contexts significantly problematic.

Security purists may bristle at what I'm going to say next, but so be it. I believe that we should strongly consider something of a paradigm shift in the manner of browsers' handling of self-signed certificates, at the user's option.

When a browser user reaches a site with a self-signed certificate, they would be presented with a dialogue similar to that now displayed, but with additional, clear, explanatory text regarding self-signed certificates and their capabilities/limitations. The user would also be offered the opportunity to not only accept this particular cert, but also to optionally accept future self-signed certs without additional dialogues (this option could also be enabled or disabled via browser preference settings).

This topic has been debated endlessly on mailing lists, so I have no intention of detailing all the arguments. Instead, here's the bullet point version.

For

  • Active attacks aren't a major issue anyway, but passive attacks are, and you can use SSH-style leap of faith (remembering the server's cert) to block most active attacks; see the sketch after these lists.
  • What's important is to get people to use any crypto at all, and this whole certificate thing is an impediment.
  • Self-signed certs can be made to install automatically with the server.
  • In the real world, certificates are often screwed up in some way and people already ignore that.
  • Certificates are practically worthless since the CAs barely check who you are (the standard thing is to force you to respond to an email sent to an admin address at your domain).
  • Many of the important attacks (e.g., phishing) aren't even detectable by certificate checks because the certs are right.

Against

  • What do you mean active attacks aren't important? This is an active attack we're looking at right here.
  • It's true that the certificate thing is an impediment, but that's really a social and engineering issue. Given how cheap certs are and how little checking the CAs do, cert issuance could easily be practically automated.
  • Encouraging people to accept self-signed certs undermines security for all the sites which want real certs—all the reasons why people don't check certs go double for why they won't check that the cert is not self-signed if they see both cases a lot and there's no way to tell what kind of site you should expect to talk to.
  • If you want to encourage the client authors to do something, encourage a free CA (like OpenCA used to be) that does simple email checking. At least that has the same security model as self-signed certs.
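For concreteness, here's roughly what the SSH-style leap of faith from the first bullet looks like in practice. This is a minimal sketch: the function names and in-memory store are invented, and a real client would persist the pins and handle legitimate certificate changes:

    import hashlib, ssl

    known_hosts = {}  # host -> cert fingerprint; a real client persists this

    def fingerprint(host, port=443):
        # Fetch the server's certificate and hash it.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(pem.encode()).hexdigest()

    def leap_of_faith_check(host):
        fp = fingerprint(host)
        if host not in known_hosts:
            known_hosts[host] = fp       # first contact: accept on faith
        elif known_hosts[host] != fp:
            # Cert changed: a legitimate re-key or an active attack.
            raise Exception("certificate mismatch for %s" % host)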

This doesn't really cut in either direction, but another possibility is to reserve the https: URL scheme for real certs but to have clients auto-negotiate SSL/TLS silently where possible (like RFC 2817 but done right). This at least gives you channel confidentiality, and if you cache the fact that you negotiated SSL/TLS, then some active attack resistance.

Note that an active attacker can of course downgrade you to straight HTTP (who knows how people respond to whatever warning accompanies "hey, I just negotiated HTTP even though before I was doing HTTPS?") but, then, they could MITM self-signed certs and Weinstein's argument that they won't:

Any ISP that was caught playing MITM certificate substitution games on encrypted data streams without explicit authorization would certainly be thoroughly pilloried and, to use the vernacular, utterly screwed in the court of public opinion -- and quite possibly be guilty of a criminal offense as well. I doubt that even the potentially lucrative revenue streams that could be generated by imposing themselves into users' Web communications would be enough to entice even the most aggressive ISPs into taking such a risk. But if they did anyway, the negative impacts on their businesses, and perhaps on their officials personally as well, would be, as Darth Vader would say, "Impressive. Most impressive."

seems kinda shaky to me.

1. Amusingly, the Wikipedia entry on Interlock, a protocol designed to stop MITM, reads in part:

Most cryptographic protocols rely on the prior establishment of secret or public keys or passwords. However, the Diffie-Hellman key exchange protocol introduced the concept of two parties establishing a secure channel (that is, with at least some desirable security properties) without any such prior agreement. Unauthenticated Diffie-Hellman, as an anonymous key agreement protocol, has long been known to be subject to man in the middle attack. However, the dream of a "zipless" mutually authenticated secure channel remained.

As far as I know, this use of the term "zipless" comes from Erica Jong's novel Fear of Flying, where it refers to a somewhat different type of interaction.

 

December 20, 2007

Proaxiom writes:
I've been trying for some time to find a globe oriented with Antarctica at the top. I could do this with a regular globe, but the writing would be upside-down.

I presume demand isn't great enough for anyone to manufacture such a globe. You can find maps with south-at-top orientation, mostly from Australia, but no globes.

I want to have such a globe to put it in my office. For me it serves as a reminder that many things we naturally think of as immutable are in fact completely arbitrary.

Here's Iain Banks:

"Sma, believe me; it has not all been 'fun.'" He leaned against a cabinet full of ancient projectile weapons. "And worse than that," he insisted, is when you turn the godamn maps upside down."

"What?" Sma said, puzzled.

"Turning the maps upside down," he repeated. "Have you any idea how annoying and inconvenient it is when you get to a place and find they map the place the other way up compared to the maps you've got? Because of something stupid like some people think a magnetic needle is pointing up to heaven, when other people think it's heavier and pointing down? Or because it's done according to the galactic plane or something? I mean, this might sound trivial, but it's very upsetting."

Incidentally, a lot of GPS-based navigation systems seem to be configured by default to orient the map in the direction you're travelling. I suppose you could get used to this, but really I'd rather have it oriented North up.

 

December 19, 2007

Ryan Singel reports that despite the rather lax standards required for wiretaps, some FBI agents seem to have decided that they could skip procedure:
The revelation is the second this year showing that FBI employees bypassed court order requirements for phone records. In July, the FBI and the Justice Department Inspector General revealed the existence of a joint investigation into an FBI counter-terrorism office, after an audit found that the Communications Analysis Unit sent more than 700 fake emergency letters to phone companies seeking call records. An Inspector General spokeswoman declined to provide the status of that investigation, citing agency policy.

The June 2006 e-mail (.pdf) was buried in more than 600-pages of FBI documents obtained by the Electronic Frontier Foundation, in a Freedom of Information Act lawsuit.

The message was sent to an employee in the FBI's Operational Technology Division by a technical surveillance specialist at the FBI's Minneapolis field office -- both names were redacted from the documents. The e-mail describes widespread attempts to bypass court order requirements for cellphone data in the Minneapolis office.

Remarkably, when the technical agent began refusing to cooperate, other agents began calling telephone carriers directly, posing as the technical agent to get customer cellphone records.

Federal law prohibits phone companies from revealing customer information unless given a court order, or in the case of an emergency involving physical danger.

The actual document is here.

 

December 18, 2007

The other day I was listening to one of Thomas Laqueur's History 5 lectures and he mentioned that many older maps were centered on Jerusalem [*]. Laqueur observes that the center of a map is arbitrary and that there's nothing wrong with using Jerusalem as the center. Well, sort of. It's true that the Earth is roughly a sphere, but remember that it spins on an axis going between the North and South poles which gives it a natural asymmetry. So, while the longitude of the center of a map is certainly arbitrary and there's nothing particularly special about Greenwich1, the Equator is special and it would be sort of weird to center the map vertically anywhere else—and at about 31 degrees North, Jerusalem is way off the Equator.

Note that this isn't purely a matter of latitude not having been discovered yet. The Greeks knew that the Earth was a sphere and already had the idea of latitude and longitude. Techniques for measuring latitude (and impractical techniques for measuring longitude) were also known in Medieval Europe at the time such maps were produced. The choice of the center of the map was an issue of religious commitments, not simple ignorance.

1. See Einstein's Clocks, Poincare's Maps: Empires of Time for a lucid description of the political maneuvering around the selection of GMT as the zero reference for time.

 
Sorry about the posting outage. My laptop died (yes, I have a backup of the data) and it took me a while to get another machine prepped and usable.

I'm also now in the market for a new laptop and may finally go Mac this time. They're incredibly heavy, though. I hear rumors that maybe finally this time they'll give us a subnotebook. Any readers want to share?

 

December 14, 2007

Orin Kerr points to the decision in In re Boucher, where a magistrate ruled that forcing someone to disclose their PGP password violates the Fifth Amendment. This question has been the topic of an unbelievable amount of amateur lawyering on cypherpunks and associated mailing lists, and a lot of that gets repeated in the Volokh Conspiracy comments. The key question seems to be whether disclosing the password is a testimonial or non-testimonial act. I'm no expert on this topic, but as I recall, in past discussions people have suggested having a password which is inherently self-incriminating (e.g., "I murdered John Doe") in an attempt to create a Fifth Amendment situation, which always seemed to me to be too clever by half.
 
Ohio just published the results of their voting system review. They examined Hart, Premier Election Systems (formerly Diebold), and ES&S. Hart and Diebold were part of the California TTBR, and a skim of the TOC and a sampling of the sections suggest that these reports mostly confirm the California/UC results (with one interesting new Hart vuln that allows an attacker with physical access to the interfaces to emulate keypresses, thus automating vote injection). ES&S was not part of the UC review (but a smaller review was done afterward) and the Ohio team seems to have found serious problems in the ES&S system as well.
 

December 13, 2007

San Francisco has selected Sequoia Voting Systems voting machines, replacing their current ES&S systems. Reading the coverage, it's clear that whether SVS would open-source their software was a big issue. It's not clear, however, whether SVS actually agreed to do so. Here's the Merc:
"This is a good system in front of us," Alioto-Pier said. "We should be excited for having it." She added that she felt open-source voting was important, but not readily achievable currently.

"Many of us are holding our noses around this vote," said Supervisor Tom Ammiano, who also opposed the resolution but acknowledged the agreement contained "some positives."

Ammiano said Sequoia had agreed to a third-party inspection of its source code and to bring its software into open-source compliance within a year.

Ammiano said a long-term solution might be for San Francisco to consider furnishing its own computer voting systems, to 'ensure that open source and transparency will happen,' he said.

The Examiner doesn't say anything about such a commitment:

"Until we take a stand and either force the vendors to open their source code to us or develop or own open source voting systems, really, what we did this past cycle is the only way that we can guarantee that every voters vote gets counted," he said.

Supervisor Sean Elsbernd countered that opponents "need to be a little bit more real" about The City's choices.

"Let's vote down this contract, and then what? We get to keep ES&S, the frauds. We cannot do that," Elsbernd said.

Supervisor Gerardo Sandoval said that with three elections on the horizon that The City needed to approve the contract.

"These elections are too important to have the results tabulated a month after the rest of the country knows what happens," he said, adding, "We can deal with the open- source issue at a later time."

I can't really tell whether SVS promised anything or not, but I'm not sure it really matters much one way or the other. First, if what you're concerned about is openness about the software running the election, you don't need open source; you need published source. Open source software is about your ability to copy it and use it for your own purposes, but it's not clear why that's important for running elections. It's not like we're going to have election software up on SourceForge and then have the State of California compile the head of tree six weeks before the election. I suppose it's sort of possible that you would want to allow independent vendors to use Sequoia's software on their own hardware platforms, but given the certification requirements for the hardware—and the fact that each vendor has their own hardware—that seems fairly problematic.

If you're concerned about the security of elections, what you really need is to know that the software running the elections can be and has been reviewed. At most, though, that would require that the vendors publish copies of their source code so that people could review it. In this case, though, the software in question has already been reviewed fairly intensively, and the reviewers found fairly serious vulnerabilities. Given that, it's not clear how much you'd really learn about the security of the system from having the source code public so that people could informally review it.

 

December 12, 2007

Lauren Weinstein points out that the assclowns at Rogers are prototyping a system for splicing their own messages into other people's Web pages.

Lauren argues that it's time to abandon unprotected web surfing:

That first, key action is to begin phasing out, as rapidly as possible and in as many application contexts as practicable, the use of unencrypted http: Web communications, and move rapidly to the routine use of TLS/https: whenever possible.

This is of course but an initial step in a rather long path toward pervasive Internet encryption, but it would be an immensely important one.

TLS is not a total panacea by any means. In the absence of prearranged user security certificates, TLS is still vulnerable to man-in-the-middle attacks, but any entity attempting to exploit that approach would likely find themselves in significant legal difficulty in short order.

Also, while TLS/https: would normally deprive ISPs -- or other intermediaries along the communications path -- of the ability to observe or modify data traffic contents, various transactional information, such as which Web sites subscribers were visiting (or at least which IP addresses), would still be available to ISPs (in the absence of encrypted proxy systems).

Another potential issue is the additional computational cost associated with setting up and maintaining TLS communication paths, which could become significant for busy server sites. However, thanks to system speed improvements and a choice of encryption algorithms, the additional overhead, while not trivial, is likely to at least be manageable.

Weinstein raises a number of issues here, namely:

  • Vulnerability to MITM attacks.
  • The effect of TLS on deep packet inspection engines.
  • The computational cost of TLS.
In this post, I want to address the second and third issues; MITM attacks deserve their own post. First, we need to be clear on what we're trying to do. The property the communicating parties (the client and server) want to ensure isn't that third parties can't read the traffic going by (the technical term here is confidentiality) but rather that they can't modify it (the technical terms here are data origin authentication (knowing who sent the message) and message integrity (knowing that it hasn't been modified)). Obviously, there's no way to stop your ISP from sending you any data of its choice, but you can arrange to detect that and reject the data.

The general way that this is done is to have the server compute what's called a message integrity check (MIC) value over the data. The server sends the MIC along with the data to the client. The client checks the MIC (I'm being deliberately vague about how this works), and if it isn't correct the client knows that the data has been tampered with and discards it. The way this works in TLS is that the client and the server do an initial handshake to exchange a symmetric key. This key is then used to key a message authentication code (MAC)1 function which is used to protect individual data records (up to 16KB each).
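In code, the generic MAC idea looks something like this (a sketch only; TLS's actual record format and key derivation are rather more involved):

    import hashlib, hmac, os

    key = os.urandom(32)        # stand-in for the handshake-derived MAC key
    record = b"HTTP/1.1 200 OK\r\n\r\nhello"

    # Sender: compute a MAC over the record and transmit both.
    tag = hmac.new(key, record, hashlib.sha256).digest()

    # Receiver: recompute the MAC and discard the record on any mismatch.
    expected = hmac.new(key, record, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)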

So, going back to Issue 2, TLS actually provides confidentiality and message integrity/data origin authentication separately. In particular, there are modes which provide integrity but not confidentiality (confidentiality without integrity is only safe in some special cases so these modes aren't provided)—the so-called NULL modes. So, it's quite possible to arrange matters in such a way that intermediaries can inspect the traffic but not modify it. Of course, whether this is desirable is a separate issue, but I think it's pretty clear that many enterprises, at least, want to run various kinds of DPI engines on the traffic going by. Indeed, they want to so much that they deploy solutions to intercept encrypted traffic, so presumably they would be pretty unhappy if they couldn't see any Web traffic.

There are at least two major difficulties with providing a widely used integrity-only version of HTTPS. The first is that clients don't generally offer to negotiate it, at least in part because it's easier to just have users expect that HTTPS = the lock icon = security than to try to explain the whole thing about integrity vs. confidentiality. This brings us to the second issue, which is how we provide a UI which gives users the right understanding of what's going on. More on the UI issue in a subsequent post, but it should be clear that from a protocol perspective this can be made to work.

Moving on to the performance issue: HTTP over TLS is a lot more expensive than raw HTTP [CPD02]. So, TLS-izing everything involves taking a pretty serious performance hit. The basic issue is that each connection between the client and the server requires establishing a new cryptographic key to use with the MAC. This setup is expensive, but it's a more or less fundamental requirement of using a MAC because the same key is used to verify the MAC as to create it. So, in order to stop Alice from forging traffic to Bob from the server, Alice and Bob need to share different keys with the server. The situation can be improved to some extent by aggressive session reuse, thus amortizing the cost of the really expensive public key operations. Client-side session caching/TLS tickets can help here to some extent as well, but the bottom line is that (1) there's some per-connection cost and (2) it breaks proxy caches, which obviously puts even more load on the server.

One approach that doesn't have this performance drawback is to have the server authenticate with a digital signature. Because different keys are used to sign and verify, a single signed message can be replayed to multiple recipients. This reduces the load on the server, as well as (if the protocols are constructed correctly) playing nicely with proxy caches. Obviously, this only works well when the pages the server is serving are exactly identical. If each page you're generating is different, this technique doesn't buy you much (though note that even dynamic pages tend to incorporate static components such as inline images.) Static signatures of this type were present in some of the early Web security protocols (e.g., S-HTTP) but SSL/TLS is a totally different kind of design and this sort of functionality would be complicated to retrofit into it at this point.
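The sign-once/verify-many property is easy to see in code. A sketch (using Ed25519 from the pyca/cryptography package purely for brevity; protocols of the S-HTTP era used RSA):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    server_key = Ed25519PrivateKey.generate()
    page = b"<html>a static page</html>"

    # Signed once; the signature can be cached alongside the page and
    # replayed to any number of clients.
    signature = server_key.sign(page)

    # Each client verifies with the public key (distributed, say, in the
    # server's certificate); no per-client secret is needed.
    server_key.public_key().verify(signature, page)  # raises on tampering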


1. Yes, this whole MIC/MAC thing is incredibly confusing. It's even better when you're doing layer 2 communication security and MAC means the Ethernet MAC.

 

December 8, 2007

The TV in the apartment I was staying at in Vancouver featured some unusual extra channels:
  • Surveillance cameras covering the parking garage and the street in front of the apartment.
  • An apparently permanent head-on shot of a wood-burning fireplace.
  • A (live) screenshot of an Agilent spectrum analyzer attached to something or other.

The first two are a little weird—though I imagine a video fireplace might be of some value to someone—but I have to admit I don't have a good explanation for the spectrum analyzer thing.

 

December 7, 2007

You know, I never thought that I would need to worry about hard drive neutrality. When I first heard about this I just sort of assumed it would be something vaguely sensible that people were overreacting to, but no, when you go to the site it sure seems to be true.
Due to unverifiable media license authentication, the following file types cannot be shared by different users using WD Anywhere Access.

If these file types are on a share on the WD My Book World Edition system and another user accesses the share, these file will not be displayed for sharing. Any other file types can be shared using WD Anywhere Access.

The list includes: MP3, AVI, WMA, AAC, etc. Outstanding!

 

December 4, 2007

Yesterday Dan Harkins cornered me in the hall and asked me the following question:
Given a random 128-bit integer d and another random x-bit integer n, where x >> 128, what is the probability that n is an even multiple of d?

My immediate answer was 2^-128. My second was to retract this and suggest that Dan ask a mathematician. My third was to try to work the problem. My reasoning is below.

  • The probability that a random number is divisible by 1 is 1, divisible by 2 is 1/2, divisible by 3 is 1/3, etc.
  • If we assume that d is uniformly distributed over 1..2^128-1, then each value of d is equiprobable, so we can average the divisibility probabilities 1/d.
  • That sum (1 + 1/2 + 1/3 + ...) is the harmonic series.
  • The harmonic series diverges, but very slowly. As the Wikipedia page says, the first 10^43 terms sum to less than 100.
  • Since we're interested in the mean probability, we divide the sum by the number of terms, 2^128, and since 10^43 isn't that far off 2^128, this means that we're looking at something like 2^-120.

Unless I've screwed something up (always possible), I guess my intuition isn't completely broken.
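Here's the same estimate done numerically, as a sanity check (using the standard ln(N) approximation for the harmonic sum):

    import math

    BITS = 128
    N = 2.0 ** BITS                      # number of possible values of d

    # H_N ~ ln(N) + gamma, so the mean of 1/d over d = 1..N is ~ ln(N)/N.
    harmonic_sum = math.log(N) + 0.5772
    print(math.log2(harmonic_sum / N))   # ~ -121.5, i.e., about 2^-121

That's the same ballpark as the 2^-120 above.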

UPDATE: 2/3 -> 1/3. Thanks to Dan for the fix.

 

December 3, 2007

OpenSSL has a FIPS-140 validated module. One of the requirements is self-testing of the PRNG. Unfortunately, it somehow doesn't quite work:
A significant flaw in the PRNG implementation for the OpenSSL FIPS Object Module v1.1.1 (http://openssl.org/source/openssl-fips-1.1.1.tar.gz, FIPS 140-2 validation certificate #733, http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm#733) has been reported by Geoff Lowe of Secure Computing Corporation. Due to a coding error in the FIPS self-test the auto-seeding never takes place. That means that the PRNG key and seed used correspond to the last self-test. The FIPS PRNG gets additional seed data only from date-time information, so the generated random data is far more predictable than it should be, especially for the first few calls.

This vulnerability is tracked as CVE-2007-5502.

There's no real deep lesson here. This is the kind of mistake anyone can accidentally make. It's true that the more options you have in a piece of code, the higher the chance that there will be some code path that doesn't work right, and in this case it's particularly striking because (1) there's no need to self-test a software PRNG1 and (2) it's the addition of the self-test that broke it, but it could easily have been something else.

1. In general, self-testing any cryptographic PRNG is difficult. The standard way to build a CSPRNG is to take whatever your entropy source is and run it through a bunch of hash functions. The result of this is that the output looks random under standard entropy tests. This is true even if the seed is very low entropy. All a self-test really means is that the hashing part of the PRNG is working correctly, but usually it's the seeding part that goes wrong (as seen here).
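To see why an output-side self-test can't catch a seeding bug, consider this toy hash-chain PRNG (nothing like OpenSSL's actual design; just an illustration):

    import hashlib

    class ToyPRNG:
        """Iterate a hash over internal state; output the chain."""
        def __init__(self, seed):
            self.state = hashlib.sha256(seed).digest()

        def read(self, n):
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state).digest()
                out += self.state
            return out[:n]

    # A hopelessly predictable one-byte seed still yields output that
    # looks random to statistical tests; the hashing masks the bad seeding.
    print(ToyPRNG(b"\x00").read(16).hex())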

 

December 2, 2007

I spent yesterday at the IETF coding sprint. The idea here was to rewrite a bunch of the IETF software tools in a more modern system (Django), as well as write a bunch of new tools. I'd never worked with Python or Django before—other than writing test programs—but that didn't stop Cullen Jennings and me from trying to write an IETF charter management tool (still in development). Some initial notes after 15 hours or so of screwing around:

  • This kind of framework really does let you get an app up and running quickly. I figure I could have gotten slightly more done working directly in CGI and Perl, but when you factor in that I didn't really figure out how to get Django to do anything useful until about 3:30, Django seems to come out pretty far ahead.
  • Django embeds a lot of data in the URL itself rather than in arguments. The way this works is that there is a map table from URL patterns (regexes) to handler functions (Jamie Zawinski, call your office). So, you get something like this:
            (r'^(?P<wgname>[a-z0-9]+)/$',views.current),
            (r'^(?P<wgname>[a-z0-9]+)/all/$',views.list),
            (r'^(?P<wgname>[a-z0-9]+)/fake/$',views.fake_wg),
            (r'^(?P<wgname>[a-z0-9]+)/add/$',views.add),
    

    So, looking at the first of these lines, it says that any URL that matches the pattern ^(?P<wgname>[a-z0-9]+)/$ gets handled by the function views.current and the first parenthesized match gets passed as an argument via a parameter named wgname. This is clever, but kind of weird, especially when you realize that these expressions are evaluated in sequence, so there's a chance for collisions. I got bitten by this once already.
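    Concretely, with the first pattern above, a request for /tls/ would land in a view along these lines (the body is invented; only the signature matters):

            # views.py (sketch): Django calls this with wgname="tls" for /tls/
            from django.http import HttpResponse

            def current(request, wgname):
                return HttpResponse("charter page for working group %s" % wgname)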

  • It's great to have automated mapping from data types to database schema, but it would be a lot better if it could hide the behavior of the relational DB a little more. To take an example, when you want to have a many-to-one mapping (e.g., cards -> deck of cards), you use a "foreign key", like so:
    class Deck(models.Model):
        brand = models.CharField(max_length=20)

    class Card(models.Model):
        suit = models.CharField(max_length=10)
        value = models.IntegerField()
        deck = models.ForeignKey(Deck)   # many Cards point to one Deck
    

    Thinking about this as a data structure, this does two things, one obvious and one unobvious:

    • Create a pointer from any given Card object to the deck it belongs to. This is the forward mapping you'd expect, since it's explicitly declared in Card.
    • Create a slot in the Deck class called card_set which contains pointers to all the Card objects that belong to the Deck. This is fairly unobvious, since it's not explicitly declared, it just happens automatically.

    Of course, these aren't just data structures; they are mapped to underlying stuff in the database, so this creates some weirdness. To give you an example, I spent about 30 minutes trying to figure out why, when I created a Deck and then inserted a Card (ok, not really, but analogous structures), I ended up with null pointers in both directions. It turns out that you need to do a save() of the container (Deck) before creating the contained object (Card); otherwise it ends up pointing at nothing (see the sketch below). I don't really know why—SQL experts should feel free to tell me—and ultimately had to have Fenner tell me how to make it work.
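    For what it's worth, here's the order of operations that works, using the toy Deck/Card models above. The likely explanation is that the foreign key column stores the container's primary key, which doesn't exist until the container row has been saved:

        deck = Deck(brand="Bicycle")
        deck.save()        # without this, deck has no primary key yet,
                           # so the Card's deck_id would end up null

        card = Card(suit="spades", value=1, deck=deck)
        card.save()        # now deck_id points at a real Deck row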

  • It's pretty clear that there are sophisticated and arguably elegant ways to do jobs like rendering HTML, but I don't know any of them, so I end up just doing things crudely, hardwiring forms into the HTML, etc. Probably if I were going to really work on this kind of stuff regularly, that would be worth learning.