EKR: June 2007 Archives


June 24, 2007

Nick Weaver suggests that the real "security" motivation for the sandboxing in the iPhone is to preserve the security of Apple (and Cingular's revenue stream):
The sandboxing depends on the objective...

Since the objective is to allow the cellphone company to keep charging discriminatory pricing based on traffic intent (e.g., ~80 Mb/$ for bulk best-effort data, but .2 Mb/$ for SMS messages which can actually be worse-than-best-effort), the same origin sandboxing is the perfect policy.

After all, an IM client, at 80 Mb/$ would really hurt the income of the phone company when they could have you sending SMS instead.

Admittedly, this is a fairly plausible-sounding theory (though it's worth noting that AT&T does seem to offer an unlimited messaging plan), unifying, as it does, the sandboxing, and the various pieces of functionality (IM, Flash, turning MP3s into ringtones, etc.) that Apple seems to have decided to omit. But let's imagine for the moment that Apple isn't just trying to extract maximum revenue, but merely to make the iPhone work as well as possible. What sort of sandboxing would be most appropriate then?

In that case, you'd mostly be looking to ensure that 3rd-party apps (henceforth 3PAs) don't interfere with the smooth functioning of the phone's native apps (NAs). At minimum, you want 3PAs not to be able to crash the phone. Of course, this functionality is provided by pretty much any modern multiprocessing operating system, and is all you'd really need for most of the apps on the iPhone (Web browser, address book, maps, etc.) But for real-time apps (especially telephony), you probably want better guarantees. In particular, you want to avoid:

  • 3PAs interfering with NAs' access to the processor.
  • 3PAs interfering with NAs' access to the network.
  • 3PAs interfering with NAs' access to the speaker and microphone.

Note that in all of these cases you don't want to deny the 3PA access to these resources, just to ensure that the NA gets priority when it needs it. This can be sort of a tricky engineering problem at times but is far from insoluble.
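To make the priority point concrete, here's a minimal sketch of the CPU half of the problem (entirely my own illustration, using POSIX nice values; nothing here describes what the iPhone actually does):

```python
import os

# Sketch: the scheduler favors native apps (NAs) if third-party apps
# (3PAs) run at a worse "nice" value. Demoting a process you own is
# always allowed; promoting one requires privilege, so we only demote.
def demote_third_party_app(pid, niceness=10):
    """Lower the scheduling priority of a (hypothetical) 3PA process."""
    current = os.getpriority(os.PRIO_PROCESS, pid)
    # Never raise priority here; only make it worse (higher nice value).
    os.setpriority(os.PRIO_PROCESS, pid, max(current, niceness))
    return os.getpriority(os.PRIO_PROCESS, pid)

# In a real phone OS you'd apply something like this (or put the
# telephony process in a real-time scheduling class) when a call starts.
```

Of course, this only covers the processor bullet; keeping 3PAs from starving the NAs' network and audio access would need analogous prioritization in the network stack (QoS queues) and the audio mixer.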


June 23, 2007

Matthew Yglesias and Tyler Cowen complain about stores which ask you for ID if you haven't signed your credit card:
I usually forget to sign the back of my credit cards. Or, with one of my cards -- the one I use most frequently -- the signature rubs off quickly. Every now and then the card will be rejected because it doesn't have my signature on it. Or they will require ID.

I then offer to sign the card, but they never accept this possibility. Hrrmph.

Could not a thief have signed a previously unsigned card before using it? In fact I would expect precisely that behavior from a thief. Wouldn't a thief take more care to sign than would a lazy, careless card holder? Upon seeing the unsigned credit card, their estimate of my honesty should go up not down (well, that's not quite a stable equilibrium...).

I've often complained about the same thing, though in my experience merchants do accept you signing the card in front of them—though the clerk tends to look annoyed. However, let's consider for a moment the possibility that this isn't totally pointless and ask what the reason might be. First, we have to assume that the signature is of any use at all; that is, that merchants do check signatures against the card and that attackers aren't good at forging signatures. I'm not sure that either of these is true, but let's assume that they are. If so, Visa has an incentive to force you to sign your card immediately to avoid situations in which your unsigned card falls into the hands of criminals, who then sign it. Since they can't send a CSR out with your card to make you sign, having your card denied if it's not signed is probably the best they can do.

This sort of matches up with Visa's card acceptance policy, which actually requires the customer to sign the card in the merchant's presence (thanks to Tangurena on MR for pointing to this):

While checking card security features, you should also make sure that the card is signed. An unsigned card is considered invalid and should not be accepted. If a customer gives you an unsigned card, the following steps must be taken:
  • Check the cardholder's ID. Ask the cardholder for some form of official government identification, such as a driver's license or passport. Where permissible by law, the ID serial number and expiration date should be written on the sales receipt before you complete the transaction.
  • Ask the customer to sign the card. The card should be signed within your full view, and the signature checked against the customer's signature on the ID. A refusal to sign means the card is still invalid and cannot be accepted. Ask the customer for another signed Visa card.
  • Compare the signature on the card to the signature on the ID.
If the cardholder refuses to sign the card, and you accept it, you may end up with financial liability for the transaction should the cardholder later dispute the charge.

Of course, if this theory is true, then it just makes the question of why they don't make the strips so that the signature doesn't rub off even more puzzling.


June 20, 2007

It appears that at least initially the iPhone will not have instant messaging, though SMS will "look like" instant messaging (cf. bacon juice vs. lemonade). Because the iPhone is semi-extensible, people are working to close this gap. Unfortunately, since all you can do is write Web apps, this has some limitations. The most important of these is, well, let him tell it:

Log in with your AOL IM account. No data is logged, but all of your information does pass through my server. I am not harvesting any information. This app is server intensive, so I'm limiting sessions to 10 minutes for now.

Obviously, having all my IMs go through somebody's server is pretty weak. Yeah, yeah, I know. I should be using crypto, and I already trust AOL, etc., but that doesn't mean I want to bring another person I have to trust into the loop. Unless I've missed something, this is a pretty inherent limitation of the design decision to only let you write Webapps and to sandbox them in the usual Webapp way. It works fine as long as your goals are limited, but as soon as you want to write something that (for instance) is a generic network client, you're hosed.

Here's Steve Jobs from WWDC:

And so you can write amazing Web 2.0 and AJAX apps that look and behave exactly like apps on the iPhone, and these apps can integrate perfectly with iPhone services. They can make a call, check email, look up a location on Gmaps... don't worry about distribution, just put 'em on an internet server. They're easy to update, just update it on your server. They're secure, and they run securely sandboxed on the iPhone. And guess what, there's no SDK you need! You've got everything you need if you can write modern web apps...

It's probably worth taking the time to unpack the sandboxing issue a bit. The sandboxing used in Java (and JavaScript) has two major purposes: (1) to stop the applet from being a threat to your computer and (2) to stop the applet from being a threat to your network. So, when Java won't let an applet write to the disk, that's protecting your computer. When it won't let the applet connect to any host other than the originating one (the same origin policy), that's to stop your computer being used to attack other computers on the network. The classic example of this is some applet which you download to your computer (behind the firewall) and which then connects to hosts that would be firewalled off from a host on the public Internet.
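To be concrete, the same origin test is just a comparison of the (scheme, host, port) triple; here's a sketch in Python (real browsers have additional wrinkles, like document.domain):

```python
from urllib.parse import urlsplit

# Default ports, so http://example.com and http://example.com:80 compare equal.
DEFAULT_PORTS = {"http": 80, "https": 443}

def same_origin(url_a, url_b):
    """Classic same origin test: scheme, host, and port must all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    port_a = a.port or DEFAULT_PORTS.get(a.scheme)
    port_b = b.port or DEFAULT_PORTS.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)
```

Under this rule an applet served from im.example.com can open connections back to im.example.com, and nowhere else.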

But of course, if you want to write a real app (an IM or mail client, say), connecting to hosts other than the one you downloaded from is an absolute necessity—unless, that is, you think it's really cool that when you download Exchange you can only send mail to people at Microsoft. So, the idea with sandboxing is that there is a large (surprisingly large, actually) class of tasks that doesn't need quite so much functionality, and that you can visit sites that perform those tasks without having to think "should I give this program the right to write files onto my disk?" But it should also be clear that this isn't the only class of tasks people want their software to perform, which is why people want to be able to load applications on their computers, even if that whole code signing thing didn't work out as well as we hoped. It's just that you want to have to take positive action before allowing a more powerful app onto your computer (or phone). To the extent that 3rd-party iPhone apps are restricted to the Webapp sandbox, the device is only marginally extensible, and that's not really that sweet.

Next: what sort of sandboxing makes sense for a device of this type?


June 17, 2007

Saw an iPhone on Friday and got a demo but wasn't allowed to touch it and can't say anything except that it looks pretty sweet. Which I guess makes this post fairly useless.

June 13, 2007

Paul Hoffman asks (in comments):
Is this really easier than a dictionary attack after one unsuccessful attempt? I guess that this attack works when the APOP password is not in any attack dictionary or algorithm, but I would still like to see a comparison between the work effort for the attack and a very deep dictionary run. Note that the dictionary attack is *much* less likely to raise suspicions since there is only one failure, not many.

So, the first thing you need to realize is that this is a byte-by-byte attack. For simplicity, ignore dictionary attacks and assume that you have a 64-bit password. Searching that entire space takes, you got it, 2^64 operations (all offline). Now, say you mount the Leurent-Sasaki attack on only the last byte. This requires intercepting on average 256 (worst case 512) connections and finding a collision for each of those connections. It seems to be a little hard to map the cost of finding a collision directly onto hash operations, but Leurent reports about 5 seconds per collision, so we're probably looking at order 100,000 (call it 2^16) operations per collision, so this is something like 2^24 operations total. But look what's happened here: we now know the last byte of the password, so we can mount a search on the remaining bytes. Searching the remaining bytes requires only 2^56 operations, compared to which 2^24 is negligible. So, we've reduced our computational complexity by around a factor of 256, admittedly at the cost of intercepting a lot more connections.

If we could extend this technique to the whole password we'd need to intercept about 8*256 connections and do about 8*2^24 == 2^27 hash computations, a tiny fraction of the 2^64 exhaustive search. However, as I mentioned in the original post, this technique can only be used to extract the last three bytes of the password. To make a long story short, Leurent estimates that with 8 character passwords with 6 bits of entropy per character this brings the work factor down to 2^30, a reduction of 2^18. This is obviously a big improvement.
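If you want to check the arithmetic yourself, it's just powers of two (the 2^16-per-collision figure is the rough estimate from above):

```python
# One byte: ~256 guesses, each needing a collision at roughly 2**16
# hash operations, per the estimate above.
per_byte = 256 * 2**16
assert per_byte == 2**24

# Knowing one byte shrinks a 64-bit exhaustive search to 56 bits.
assert 2**64 // 2**8 == 2**56

# Hypothetically extending to all 8 bytes: 8 * 2**24 is 2**27, not 2**32.
assert 8 * 2**24 == 2**27

# Leurent's realistic case: 8 characters at 6 bits each is a 48-bit
# space, reduced to 2**30 -- an improvement of 2**18.
assert 2**48 // 2**30 == 2**18
```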

Of course, this improvement depends on a fairly unrealistic assumption about the entropy of the password. In general, the lower the entropy, the more attractive dictionary search looks, and with typical passwords, it probably is better, especially when you factor in the negative effects of interfering with the user's connections.

UPDATE: Roy Arends points out that I apparently can't multiply and that 8*2^24 == 2^27. So, no breakeven point, like I'd previously suggested. I guess 8:00 PM must be past my bedtime. Fixed.


June 11, 2007

So, I'd been going around saying that the collision attacks against MD5 and SHA-1 were pretty useless against real protocols. At ECRYPT, Dan Bernstein pointed out to me that someone has actually used a collision attack successfully against APOP, an old challenge-response style protocol. The paper, by Leurent, is here (the attack was independently rediscovered by Sasaki here).

The way that APOP works (like pretty much all challenge-response protocols) is that the server sends the client some challenge (a fresh value) M and the client sends back F(K,M) where K is the shared key and F is some function. In modern systems, people tend to like HMAC for F but APOP was designed before HMAC and in an era where people were pretty loose about hash functions. APOP computes the response as: MD5(M || K). This turns out to enable an attack, provided that the attacker can control the challenge.
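In code, the whole client side of APOP is one line; the challenge and secret below are the worked example from RFC 1939:

```python
import hashlib

def apop_response(challenge, password):
    """APOP response: hex MD5 of the challenge concatenated with the secret."""
    return hashlib.md5((challenge + password).encode("ascii")).hexdigest()

# RFC 1939's example: secret "tanstaaf", server greeting below.
print(apop_response("<1896.697170952@dbc.mtview.ca.us>", "tanstaaf"))
# -> c4c9334bac560ecc979e58001b3e22fb
```

Note that the secret goes last, which is exactly what makes the attack below possible: the attacker controls everything that comes before it.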

The basic attack allows the attacker to determine one character of the password. Say he thinks the first character of the password is C. The attacker generates two colliding messages M1 and M2 with a special structure.

  • They are one hash block long.
  • The last byte of M1 is C.
  • The last byte of M2 is also C [fixed -- EKR]

So, we have:

  • M1 = xxxxxxxxxxC
  • M2 = yyyyyyyyyyC

Where the x's and y's are arbitrary and come out of the collision finding algorithm (actually, these messages are longer, but nobody wants to see 63 x's).

The attack then requires the user to authenticate twice. The first time the attacker gives the challenge xxxxxxxxxx. This causes the user to return H(xxxxxxxxxx || P1 || P2 || P3 || ... Pn) where P1 is the first character of his password, P2 is the second, etc. The second time the attacker sends yyyyyyyyyy and gets back H(yyyyyyyyyy || P1 || P2 || P3 || ...). Now, here's the key point: these challenges have been specifically arranged so that the first byte (P1) of the password lines up with the last byte of the first hash block. If P1 == C then the two first hash blocks will collide. And since P2 ... Pn are the same, the entire hash output will collide. In other words, if the attacker has guessed C correctly, then the responses to these two different challenges will be the same. Otherwise they will not be (with high probability).

We've now extracted the first character of the password. That's not bad, but what about the rest? Well, it's straightforward to extend this once we know the first character. We build two new messages:

  • M1 = xxxxxxxxxP1C
  • M2 = yyyyyyyyyP1C
And we can repeat the same procedure extracting the password one character at a time.

This attack strategy has been known for quite some time and is originally due to Preneel and van Oorschot. However, the bottleneck was always that finding collisions was too expensive. The discovery of efficient paths for finding collisions changes that. If it's easy to find collisions, then this method becomes very practical.

Of course, "easy" is where things get complicated. There are two factors to consider here. The first is that it can be difficult to control the collision values, so you don't always get to choose that the last n characters of the colliding blocks are equal. Indeed, Leurent reports that he can only recover the first three bytes of the password. The second complication (and this applies only to APOP) is that in APOP challenges are RFC-compliant message ids, and the colliding blocks above most likely contain non-ASCII characters and so don't fit. Implementations which check for compliance are probably not vulnerable. However, Leurent reports that he's successfully mounted this attack on a variety of clients and it works, which isn't too surprising. Note that improved collision-finding techniques could lead to relaxation of both of these constraints.

The bottom line here is that if you're using APOP without TLS you should probably stop. On the other hand, if you're using APOP without some kind of encryption you should have stopped long ago...


June 4, 2007

While we're on the topic of vampires... The central plot driver of Blade is Deacon Frost's plan to turn himself into the "blood god" who will somehow turn anyone in his path into vampires. Frankly, this is a pretty stupid plan, since it's kind of unclear why it would be useful to turn everyone into a vampire. What would they eat?

In a deleted scene, Dr. Karen Jenson (the female lead) makes this point and Frost shows her his incubator full of brainless corpses being used as blood production machines. This makes it a much less stupid plan. As I pointed out earlier, once you have a ready source of blood, vampirism becomes a sort of mild inconvenience, consisting of staying indoors at night and avoiding garlic pizza. Obviously it's uncool to turn people into vampires without their consent, but it's somewhere short of killing everyone and feasting on their blood. On the other hand, it's kind of unclear why it's in a vampire's interest for everyone else to be a vampire. Isn't the cool part of being a vampire having an edge over everyone else...


June 3, 2007

Rumor has it that the scientists over at NIH have developed a new life extension technique: Viral Augmentation and Mortality Prevention (VAMP). Subjects treated with VAMP develop substantially improved strength, reflexes, and eyesight. They also heal dramatically faster and appear to have significantly improved life expectancy, at least four times as long in animal models.

Like any medical procedure, VAMP has side effects. The two big ones are extreme sensitivity to sunlight and severe, chronic anemia. As a consequence, subjects require frequent transfusions of whole blood. Obviously, this is an undesirable side effect and one we'd like to fix in future development, but one would expect that plenty of people would be willing to trade off a bit of a blood dependency for being immortal and superstrong. That seems like an especially good trade if you were old, terminally ill, etc.

So, ignoring the issue of whether the FDA would approve VAMP, is this something good for society? Once we get past objections about how immortality is inherently bad, we've got a situation that's inherently exclusive. Each subject requires a support base of some number of normal humans to provide transfusions for them, so this means that only a small fraction of the population can be treated—at least until we find some way to produce blood in vitro, which is likely to suddenly become a pretty high research priority, along with finding a version of VAMP without these side effects.

Even with the current set of conditions, though, it's not clear that VAMP is unacceptable. There are lots of drugs we can't afford to give to everyone, but that doesn't stop us from manufacturing them. Of course, VAMP is slightly different in two respects. First, you voluntarily acquire the condition but then we have to treat the side effects, but it seems to me that you still stand in the same relationship to the poor schlubs who can't afford the treatment. Second, while the side effects are unpleasant, being treated with VAMP arguably puts you in a superior position to others, rather than an inferior position, as with, say, HIV.

Of course, VAMP doesn't exist, but it's a useful way to think about vampires, or as they prefer to be known, vampiric-Americans. Vampires basically have a disease with some positive side effects (long life, strength, fast healing), and some negative ones (sun sensitivity, garlic allergy, etc.) And of course, the need for blood, but as we saw with VAMP, that's just a matter of a missing market for blood. If we had some cool blood mass production technique, then vampirism would just become another treatable condition. Without it, vampires are reduced to uh... freelance blood collection. With it, people would be lining up to get bitten.


June 2, 2007

Seeing as I was in the mood for gourmet dining experience, Terence and I repaired to Choix Frais. Everything was quite normal until we got to the steam trays with the pasta, etc. And there I saw it: a tray full of macaroni, a tray full of tortilla chips, and there, between them, a tray of fluorescent goop labelled cheddar cheese sauce. A closer inspection revealed the awful truth: the macaroni was labelled something like "macaroni and cheese" and the tortilla chips "build your own nachos". This was dual-use cheese sauce!

Ever the connoisseur of the disgusting, I figured it was time to give it a shot. I piled up some tortilla chips, poured the sauce over top, and then headed back to my table with water glass at the ready. Terence and I steeled ourselves and each took a chip. It wasn't the most disgusting thing I've ever had, but honestly, it was pretty bad. The first taste is vaguely like cheese, or rather, cheez [Terence says salt + cheetoey acidity], even reminiscent of cheddar, but then there's this weird starchy taste/texture that's presumably some thickening or other texture-modifying agent (it feels a little like a fast food milkshake) [Terence says: something that melts above body temperature]. Then, after you've finished you keep getting this horrible syntho-cheese aftertaste.

We weren't brave enough to try it on the macaroni.


June 1, 2007

Via Matthew Yglesias, here's this quite entertaining study of douchebaggery:
I'm waiting for a friend at a wine bar and I see that the guy a couple of stools down from me keeps ostentatiously checking the late-model smartphone that lies before him on the granite countertop. He has the all-black Samsung BlackJack, which happens to be the coolest-looking smartphone there is—at least until the iPhone comes out—and he's wearing jeans that look like they cost $400, and his haircut was probably half that. I also notice that he's got an expensive-looking European leather briefcase at his feet that he no doubt calls an attaché.

I'm thinking, what a douchebag.

And then I think, wait a second. I'm here, at this wine bar, just as he is. And frankly, when the iPhone does come out, I intend to get it (even though it's slated to cost more than $500) to replace the Treo I'm currently carrying. (Also: I really should check my e-mail right now.) And I'm due for a (quasi-expensive) haircut, in fact. And where's the freaking bartender already? And . . . and . . . and . . . am I a douchebag? I have met the enemy, and he is . . . me?

[Checks self]

OK, I don't drink wine, my "haircut" is courtesy of Mr. Mach 3 and runs about $.50/each time, but I have a Treo and may well buy an iPhone. So far so good...