EKR: November 2011 Archives

 

November 29, 2011

As I wrote earlier, many oversubscribed races use a performance-based qualification process to select participants. What I mostly passed over, however, is whether different people should have to meet different qualifying standards. If your goal is to get the best people, you could simply pick the top X%. However, if you did that, you would get primarily men in the 20-40 age range. To give you an idea, consider Ironman Canada 2011, which had 65 Hawaii qualifying slots. If you just took the first 65 non-pro finishers, the slowest qualifier would come in around 10:17. That standard would admit only two amateur women, Gillian Clayton (W30-34) at 10:01:58 (a pretty amazing performance, since she's 18 minutes ahead of the next woman) and Rachel Ross (W35-39) at 10:12:17, and no men 55 or older.

If you're going to have a diversified field, then, you need to somehow adjust the qualifying standard for age and gender. The standard practice is to have separate categories for men and women and five-year age brackets within each gender. (Some races also have "athena" and "clydesdale" divisions for women and men, respectively, who are over a certain weight, but at least in triathlon these are used only for awards and not for Hawaii qualifying purposes.) However, it's also well known that these categories do a fairly imperfect job of capturing age-related variation: it's widely recognized that "aging up" from the oldest end of your age group to the youngest end of the next age group represents a significant improvement in your expected results.
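To make the bracket discontinuity concrete, here's a minimal Python sketch of how age-group assignment typically works; the labels and the bracket-by-five convention are illustrative, not any particular federation's rulebook.

```python
def age_group(gender, age):
    """Assign a five-year age-group label such as 'M40-44' or 'W45-49'.
    Illustrative only: real races key the bracket off race-day age or
    age on December 31, depending on the sanctioning body."""
    lower = (age // 5) * 5            # 44 -> 40, 45 -> 45
    return f"{'M' if gender == 'M' else 'W'}{lower}-{lower + 4}"

# A 44-year-old and a 45-year-old are nearly the same age, but the
# 45-year-old "ages up" into a typically slower field and so has a
# better expected placing for the same finish time.
print(age_group("M", 44))   # M40-44
print(age_group("M", 45))   # M45-49
```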

UPDATE: I forgot to mention that Western States 100 has a completely gender-neutral qualifying standard, but it's comparatively very soft.

 

November 28, 2011

One of the common patterns in endurance and ultra-endurance sports is to have one or two races that everyone wants to do (the Hawaii Ironman, the Boston Marathon, Western States 100, etc.). Naturally, as soon as the sport gets popular you have more people who want to do race X than the race organizers can accommodate. [Interestingly, this seems to be true no matter the size of the event: Hawaii typically has around 1800 participants, Boston over 20,000.] As a race organizer, then, you are faced with the problem of deciding how to pick the actual participants from those who have the desire to participate.

The first problem seems to be deciding what to optimize for, with the two most common objectives being:

  • Choose fairly among everyone who wants to do the race.
  • Choose the best athletes.

Fair Selection
The easiest way to choose fairly is generally to run a lottery. You take applications for a race up until date X and then just draw however many entrants you want out of that list. [Note that there is always a yield issue, since some people register who never show because of injuries or whatever, so the number of actual participants is never totally predictable.] For races which are only mildly oversubscribed, it's more common to take entries until you're full and then close entry under the "you snooze, you lose" principle. Ironman Canada does this, but now it basically fills up right away every year, so you more or less have to be there the day after the race when registration for the next year opens up.
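For concreteness, here's a minimal sketch of a yield-adjusted lottery draw; the 85% show rate and the field size are made-up numbers for illustration, not anything a real race publishes.

```python
import random

def lottery_draw(applicants, field_size, expected_show_rate=0.85):
    """Offer enough spots that, after no-shows, roughly field_size start.
    The show rate is a guess; actual yield is never totally predictable."""
    n_offers = min(len(applicants), round(field_size / expected_show_rate))
    return random.sample(applicants, n_offers)

applicants = [f"runner-{i}" for i in range(5000)]
entrants = lottery_draw(applicants, field_size=400)
print(len(entrants))   # ~470 offers aimed at a field of about 400
```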

Merit-Based Selection
Choosing the best athletes is a more difficult proposition, since you first need to identify them. You might think that you could just have one big qualifying race with everyone who wants to race and pick the top X participants, but this clearly isn't going to work. Since the size of the target event is generally (though not always) set to be about the maximum practical size of a race, if you're going to pick out the top people to race in your target event, the qualifying event would have to be much, much larger, well beyond the practical size. Instead, you somehow have to have a set of qualifying races and draw the best candidates from each race. In some cases this is easy: if you are drawing national teams for the world championship, you can just have each nation run its own qualifying race, and since each such race only needs to draw from a smaller pool, it's still manageable. However, many events (e.g., Ironman) aren't laid out along national lines, so this doesn't work.

There are two basic strategies for drawing your qualifying candidates from a number of races. First, you can have a qualifying time. For instance, if I wanted to run the Boston Marathon, I would need to run some marathon under 3:10. Obviously, there is a lot of variation in how difficult any given race is, and so this leads to people forum shopping for the fastest race. It's extremely common to see marathons advertised as good Boston qualifiers. The key words here are "flat and fast" (a qualifying race can only have a very small amount of net downhill, so non-flat means uphill, which slows you down). Obviously, a qualifying time doesn't give you very tight control over how many people you actually admit, so you still have an admissions problem. As I understand it, Boston used to just use a first-come-first-served policy for qualifiers, but in 2012 they're moving towards a rolling admissions policy designed to favor the fastest entrants. At the other end of the spectrum, Western States has its qualifying time set so that there are vastly more qualifiers than eventual participants (it looks to me like it's set so that practically anyone who can finish can qualify [observation due to Cullen Jennings]), and they use a lottery to choose among the qualifiers.
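As a sketch of how a fastest-first rolling admission might work, here's the obvious implementation: rank qualifiers by how far under their standard they ran and admit from the top. The standards table is an illustrative placeholder, not the actual Boston chart.

```python
# Illustrative qualifying standards (in seconds); not the real Boston chart.
STANDARDS = {
    ("M", "18-34"): 3 * 3600 + 10 * 60,   # 3:10:00
    ("W", "18-34"): 3 * 3600 + 40 * 60,   # 3:40:00
}

def rolling_admit(applicants, field_size):
    """applicants: (name, gender, age_group, finish_seconds) tuples.
    Admit qualifiers in order of margin under their own standard."""
    qualifiers = []
    for name, gender, group, seconds in applicants:
        margin = STANDARDS[(gender, group)] - seconds
        if margin >= 0:                       # made the standard at all
            qualifiers.append((margin, name))
    qualifiers.sort(reverse=True)             # biggest margin first
    return [name for _, name in qualifiers[:field_size]]
```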

The other major approach, which gives much more predictable numbers, is the one used for the Hawaii Ironman. The World Triathlon Corporation (which runs Hawaii) has made certain races "Hawaii qualifiers" (my understanding is that a race pays for this privilege), and each race gets a specific number of slots for each gender/age combination. The way this works is that if there are 5 slots in your age group, then the top 5 finishers get them. If any of those people don't want the slot (for instance, they may have already qualified), then the slots roll down to the 6th person, and so on. All of this happens the day of or the day after the race, in person. This method gives the race organizer a very predictable race size but poses some interesting strategic issues for participants: because participants compete directly against each other for slots, what you want is to pick a qualifying race that looks like it is going to have a weak field this year. Unfortunately, just because a race had a weak field last year doesn't mean that that will be true again, since everyone else is making the same calculation!
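The rolldown logic is simple enough to sketch; the names, slot count, and accept/decline behavior below are made up, and the hypothetical accepts set stands in for "showed up at the rolldown and claimed the slot".

```python
def allocate_slots(finishers, slots, accepts):
    """finishers: age-group finishers in finish order.
    slots: Hawaii slots allocated to this age group.
    accepts: finishers who actually claim a slot when offered.
    Declined or unclaimed slots roll down to the next finisher."""
    qualified = []
    for athlete in finishers:
        if len(qualified) == slots:
            break
        if athlete in accepts:
            qualified.append(athlete)     # took the slot
        # otherwise the slot rolls down to the next finisher
    return qualified

finishers = ["Ann", "Bea", "Cat", "Dot", "Eve", "Fay", "Gia"]
accepts = {"Ann", "Cat", "Dot", "Eve", "Fay"}   # Bea already qualified elsewhere
print(allocate_slots(finishers, slots=5, accepts=accepts))
# ['Ann', 'Cat', 'Dot', 'Eve', 'Fay'] -- Bea's slot rolled down to Fay
```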

Arbitrary Selection
One thing that I've only seen in ultrarunning is invitational events with arbitrary (or at least unpublished) selection criteria. For instance, here's the situation with Badwater:

The application submission period begins on February 1, 2012 and ends on February 15, 2012. A committee of five race staff members, one of whom is the race director, will then review and rank each application on a scale of 0 to 10. The ranks will be tallied on February 18 and the top 45 rookie applicants with the highest scores, and the top 45 veterans with the highest scores, will be invited (rookies and veterans compete separately for 45 slots for each category). At that time, or later, up to ten more applicants (rookie and/or veteran) may be invited at the race director's discretion, for a total of approximately 100 entrants, and 90 actual competitors on race day.

I guess that's one way to do it.
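For what it's worth, the mechanical part of that process is easy to sketch; the 45/45/10 numbers come from the quoted rules, while the scoring inputs and the tally-by-sum assumption are mine.

```python
def badwater_invites(applications, discretionary=()):
    """applications: (name, is_veteran, [five committee scores, 0-10]) tuples.
    Top 45 rookies and top 45 veterans by tallied score, plus up to ten
    discretionary picks by the race director."""
    def top45(entries):
        ranked = sorted(((sum(scores), name) for name, _, scores in entries),
                        reverse=True)
        return [name for _, name in ranked[:45]]

    rookies = [a for a in applications if not a[1]]
    veterans = [a for a in applications if a[1]]
    return top45(rookies) + top45(veterans) + list(discretionary)[:10]
```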

 

November 8, 2011

The MacBooks (Air, Pro, etc.) are great computers, but the sealed battery is a real limitation if you want to travel with one. My Air gets about 5-6 hours of life if I'm careful, which is fine for a transcontinental flight but not a transatlantic one. The fix, of course, is to buy a HyperMac external battery, which plugs into the laptop at the only real point of access, the MagSafe connector. Unfortunately, in 2010 Apple sued HyperMac for patent infringement and HyperMac stopped selling the relevant cable (which, as I understand it, was actually a modified version of an official Apple cable). Without the cable, of course, the battery is pretty useless.

I'm lucky enough to have one of the pre-lawsuit battery/cable combinations but recently a friend wanted one, so I looked again. It seems that HyperMac is back in business, but they've resorted to a do-it-yourself kind of ethos. Basically, you have two choices:

  1. HyperMac will sell you a connector that impersonates a 12V air/auto power connector. You then buy the Apple air/auto to MagSafe adaptor and plug it into your Mac.
  2. They sell you a pair of jacks that you splice into the cable of a legitimate Apple power supply. The way this works is that you take a standard Apple power supply and cut the MagSafe half of the cable in two. You strip the wires and attach them to one of the jacks, then repeat for the other side of the cut.

Without taking a position on the merits of Apple's legal claims, this seems like a pretty lame state of affairs. First, the original HyperMac design was better because you could charge your battery at the same time as you powered your Mac with it. This works with the air/auto version but not with the DIY jack version. Second, while it's not exactly microsurgery to splice the cables, it's still something you could mess up.

Moreover, it's not like Apple has some super-expensive power-expansion product of its own that HyperMac is competing with and that the patent is protecting. Rather, they're just making life harder for people who want to use Apple's products in situations which are just more extreme versions of the ones that motivated giving the device a battery in the first place. I just don't see how this makes anyone's life better.

 

November 5, 2011

A while ago I promised to write about countermeasures to the Rizzo/Duong BEAST attack that didn't involve using TLS 1.1. For reasons that the rest of this post should make clear, I had to adjust that plan a bit.

To recap, mounting this attack requires a channel that:

  1. Is TLS 1.0 or older.
  2. Uses a block cipher in CBC mode (e.g., DES or AES).
  3. Is controllable by an attacker from a different origin.
  4. Allows the attacker to force the target secret to appear at a controllable location.
  5. Allows the attacker to observe ciphertext block n and control a subsequent block m with only a small number of uncontrolled bits in between n and m.

I know this last requirement is a bit complicated, so for now just think of it as "observe block n and control the plaintext of n+1", but with an asterisk. It won't really enter into our story much.
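To see why this matters, here's a toy Python sketch (using the `cryptography` package) of the underlying CBC problem in TLS 1.0, where the IV of the next record is the last ciphertext block of the previous one. It is not the Rizzo/Duong exploit itself, just a demonstration that an attacker who controls a later plaintext block can test a guess at an earlier secret block.

```python
# Toy demonstration of the predictable-IV problem behind BEAST. Not the
# actual exploit: it just shows that if the attacker knows the "IV" of the
# next record (the last ciphertext block of the previous one, in TLS 1.0),
# they can verify a guess at an earlier secret block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def cbc_encrypt(iv, plaintext):
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Victim encrypts a secret 16-byte block; attacker observes iv0 and c_secret.
secret = b"cookie=SECRET!!!"          # 16 bytes, unknown to the attacker
iv0 = os.urandom(16)
c_secret = cbc_encrypt(iv0, secret)

# The next record's IV is the last ciphertext block, already observed.
iv1 = c_secret[-16:]
guess = b"cookie=SECRET!!!"           # the attacker's guess at `secret`
chosen = xor(xor(guess, iv0), iv1)    # cancels the chaining difference
c_test = cbc_encrypt(iv1, chosen)

# If the guess was right, the new ciphertext block equals the old one.
print(c_test[:16] == c_secret[:16])   # True
```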

So far, there are two publicly known channels that meet these criteria:

  • WebSockets-76 and previous (only relevant on Safari).
  • Java URLConnection.

Note that requirements 1 and 2 are about the TLS stack and requirements 3 and 4 are about the Web application. Requirement 5 is about both. This suggests that there are two main angles for countermeasures: address the TLS stack and address the Web application. Moreover, there are three potential actors here: users, operators of potential victim sites, and implementors.

The TLS Stack
TLS 1.1
First let's dispose of the TLS 1.1 angle. As has been apparent from the beginning, the ultimate fix is to upgrade everyone to TLS 1.1. Unfortunately, the upgrade cycle is really long, especially as many of the popular stacks don't have TLS 1.1 support at all. To make matters worse, due to a number of unfortunate implementation decisions which I'll hopefully get time to write about later, it's likely to be possible for an attacker to force two TLS 1.1 implementations to speak TLS 1.0, making them vulnerable. So, upgrading to TLS 1.1 is basically a non-starter.

RC4
The next obvious angle (per requirement 2) is to force the use of RC4, which isn't vulnerable to this attack. This isn't really a general solution for a number of reasons, including that there are also (more theoretical) security concerns about the use of RC4, and that there are a number of government and financial applications where AES is required.

The only really credible place to restrict the use of non-RC4 ciphers is the server. The browsers aren't going to turn them off because some sites require them. Users aren't going to turn them off en masse for the usual user-laziness reasons (and because some browsers make it difficult or impossible to do). Even if users do restrict their cipher suite choices, Java uses its own SSL/TLS stack, so configuring the cipher suites on the browser doesn't help here. The server, however, can choose RC4 as long as the client supports it, and this provides good protection. [Note that TLS's anti-downgrade countermeasures do help here; the server can use RC4 with clients which support both AES and RC4, and the attacker can't force the server to believe that the client supports only AES.] However, as I've said, this isn't really a permanent solution.
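As a rough sketch of what that looks like on the server side (assuming an OpenSSL build that still ships RC4 suites, which modern builds generally don't, and hypothetical certificate paths), a Python server could pin its preference like this:

```python
# Minimal sketch: a server that prefers RC4 over CBC suites and enforces
# its own ordering. Assumes the linked OpenSSL build still includes RC4.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")    # hypothetical paths
ctx.set_ciphers("RC4-SHA:AES128-SHA")              # RC4 listed first
ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE     # honor server order, not the client's
```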

Record Splitting
A number of techniques have been suggested to randomize the CBC state. The general form is to split each plaintext write (i.e., the unit the attacker is required to provide) into two records, with the first containing less than one cipher block's worth of plaintext. So, for instance, each time the application does a write you could send an empty record (zero-length plaintext) first. Because TLS encrypts the MAC, this means that the first plaintext block is actually the MAC, which the attacker can't predict, thus randomizing the CBC state.

In theory, this should work fine, since TLS doesn't guarantee any particular mapping between plaintext writes and records (just as TCP doesn't guarantee a mapping between writes and segments). However, it turns out that some SSL/TLS servers don't handle this kind of record splitting well (i.e., they assume some mapping), and so this technique causes problems in practice. Client implementors tried a bunch of techniques and ultimately settled on one where the first byte of the plaintext is sent in a record by itself and then the rest is sent in as many records as necessary (what has been called 1/n-1 splitting). This seems to be mostly compatible, though apparently some servers still choke.
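Here's a minimal sketch of what a 1/n-1 split looks like at the write layer; send_record is a hypothetical stand-in for the record-layer encrypt-and-send call, not any particular library's API.

```python
def tls_write(plaintext: bytes, send_record) -> None:
    """1/n-1 split: put the first byte of each write in its own record.

    Once that one-byte record is MAC'd and encrypted, its ciphertext
    (and hence the CBC state for the following record) is unpredictable
    to the attacker, which breaks the chosen-boundary trick.
    """
    if not plaintext:
        send_record(b"")                 # nothing to split
        return
    send_record(plaintext[:1])           # 1 byte
    send_record(plaintext[1:])           # remaining n-1 bytes
```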

If you use Chrome or Firefox, you should either have this fix already or get it soon. However, as mentioned above, those browsers aren't vulnerable to attack via WebSockets, and Java uses a different stack, so the fix doesn't help with Java. The good news is that Oracle's October 18 Java patch claims to fix the Rizzo/Duong attack, and running it under ssldump reveals that they are doing a 1/n-1 split. The bad news is that the version of Java that Apple ships for Lion hasn't been updated, so if you have a Mac you're still vulnerable.

Web-based Threat Vectors
The other angle is to remove the Web-based threat vectors. How feasible this is depends on how many such vectors there are. As I noted above, the only two known ones are WebSockets ≤ 76 and Java. I've heard claims that Silverlight is vulnerable, but Microsoft says otherwise here. This could of course be wrong. It's also possible to introduce a new threat vector in the future. For instance, if we were to add an XHR variant that allowed streaming uploads, this combined with CORS would create a new potential vector. So, going forward we all need to adopt one of the TLS-based countermeasures or be really careful about what Web features we add; probably both, to be honest.

We also need to subdivide these vectors into two categories: those which the server can protect itself against (WebSockets) and those which it really cannot (Java). To see the difference, consider that before the client is allowed to use WebSockets, the server needs to agree. So, if you have a standard non-WebSockets server, there's no WebSockets threat. By contrast, Java allows URLConnections to the server without any agreement, so there's no way for the server to protect itself from the Java threat vector (other than trying to fingerprint the Java SSL/TLS stack and refuse service, which seems kind of impractical). Obviously, then, the Java vector is more severe, especially since it's present even if the browser has been fixed.

To make matters worse, the previous version of Java is not only vulnerable to the Rizzo/Duong attack, but it also has what's called a "universal CSRF" issue. It's long been known that Java treats two hosts on the same IP address as being on the same origin. It turns out that if you manage to be on the same IP address as the victim site (easy if you're a network attacker), then you can inject a Java applet which will make an HTTPS request to the victim site. That request (a) passes cookies to the site and (b) lets you read the response. These are the two elements necessary to mount a CSRF even in the face of the standard CSRF token defenses. (A related issue was fixed a few years ago, but only by suppressing client-side access to the cookie, which is an incomplete fix.) Obviously, this also serves as a vector for the Rizzo/Duong attack, though I don't know if it's the vector they used, since I don't have all the details of their procedure. Adam Barth and I discovered (or rediscovered, perhaps) the problem while trying to figure out how Rizzo and Duong's attack worked and notified Oracle, who fixed it in the most recent Java patch by suppressing the sending of the cookie in this type of request. (Obviously, I put off writing this post to avoid leaking the issue.) The fix in question would also close this particular vector for the Rizzo/Duong attack, even without the 1/n-1 split, though that doesn't mean that this is the one they were using or that there aren't others.

The bottom line, then, is that you should be upgrading Java, or, if you can't do that, disabling it until you can.