COMSEC: July 2009 Archives


July 29, 2009

Wired reports on Apple's response to EFF's proposed DMCA exception for iPhone jailbreaking. I'm not qualified to have a position on the legal arguments Apple is advancing, but check out their technical arguments:
More generally, as Mr. Joswiak testified at the hearings in Palo Alto, a critical consideration in the development of the iPhone was to design it in such a way that a relationship of trust could be established with the telecommunication provider (AT&T in the case of users in the U.S.). Before partnering with Apple to provide voice and data services, it was critical to AT&T that the iPhone be secure against hacks that could allow malicious users, or even well-intentioned users, to wreak havoc on the network. Because jailbreaking makes hacking of the BBP software much easier, jailbreaking affords an avenue for hackers to accomplish a number of undesirable things on the network.

For example, each iPhone contains a unique Exclusive Chip Identification (ECID) number that identifies the phone to the cell tower. With access to the BBP via jailbreaking, hackers may be able to change the ECID, which in turn can enable phone calls to be made anonymously (this would be desirable to drug dealers, for example) or charges for the calls to be avoided. If changing the ECID results in multiple phones having the same ECID being connected to a given tower simultaneously, the tower software might react in an unknown manner, including possibly kicking those phones off the network, making their users unable to make phone calls or send/receive data. By hacking the BBP software through a jailbroken phone and taking control of the BBP software, a hacker can initiate commands to the cell tower software that may skirt the carrier's rules limiting the packet size or the amount of data that can be transmitted, or avoid charges for sending data. More pernicious forms of activity may also be enabled. For example, a local or international hacker could potentially initiate commands (such as a denial of service attack) that could crash the tower software, rendering the tower entirely inoperable to process calls or transmit data. In short, taking control of the BBP software would be much the equivalent of getting inside the firewall of a corporate computer - to potentially catastrophic result. The technological protection measures were designed into the iPhone precisely to prevent these kinds of pernicious activities, and if granted, the jailbreaking exemption would open the door to them.

This is an odd set of arguments: if what I want to do is bring down the cell network, I've got a lot of options other than hacking my iPhone. For instance, I could buy a less locked down phone or a standard programmable GSM development kit on the open market. In general, GSM chipsets and radios just aren't a controlled item. Second, if a misbehaving device is able to bring down a significant fraction of the cellular system, then this represents a serious design error in the network: a cell phone system is a distributed system with a very large number of devices under the control of potential attackers; you need to assume that some of them will be compromised and design the network so that it's resistant to partial compromise. The firewall analogy is particularly inapt here: you put untrusted devices outside the firewall, not inside. I'm not an expert on the design of GSM, but my impression is that it is designed to be robust against handset compromise. The designs for 3GPP I've seen certainly assume that the handsets can't be trusted.

That leaves us with more mundane applications where attackers want to actually use the iPhone in an unauthorized way. Mainly, this is network overuse, toll fraud, etc. (Anonymous calling isn't that relevant here, since you can just buy cheap prepaid cell phones at the 7/11. You'd think someone at Apple would have watched The Wire.) As far as toll fraud goes, I'm surprised to hear the claim that hacking the iPhone itself lets you impersonate other phones. My understanding was that authentication in the GSM network was primarily via the SIM card, which is provided by the carrier and isn't affected by phone compromise. [The GSM Security site sort of confirms this, but I know there are some EG readers who know more about GSM security than I do, so hopefully they will weigh in here.] It's certainly true that control of the iPhone will let you send traffic that the provider doesn't like, and the phone can be programmed to enforce controls on network usage, so this is probably getting closer to a relevant concern. On the other hand, controls like this can be enforced in the network in a way that can't be bypassed by tampering with the phone.
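To make the SIM point concrete, here's a toy sketch of GSM-style challenge-response authentication. The class names are mine, and HMAC-SHA256 stands in for the operator-specific A3 algorithm (which is not published); the point is just that the subscriber key Ki lives in the SIM, so compromising the phone's own software doesn't yield the credential needed to impersonate a subscriber.

```python
import hmac
import hashlib
import os

class Sim:
    """Toy SIM: holds Ki and answers challenges. Ki never leaves the card."""
    def __init__(self, ki: bytes):
        self._ki = ki

    def run_a3(self, rand: bytes) -> bytes:
        # SRES = A3(Ki, RAND); HMAC-SHA256 truncated to 32 bits stands in
        # for the operator's real A3.
        return hmac.new(self._ki, rand, hashlib.sha256).digest()[:4]

class Network:
    """Toy authentication center: knows each subscriber's Ki."""
    def __init__(self):
        self._subscribers = {}  # IMSI -> Ki

    def provision(self, imsi: str) -> Sim:
        ki = os.urandom(16)  # burned into the SIM at issue time
        self._subscribers[imsi] = ki
        return Sim(ki)

    def authenticate(self, imsi: str, sim: Sim) -> bool:
        rand = os.urandom(16)  # fresh challenge each time: no replay
        expected = hmac.new(self._subscribers[imsi], rand,
                            hashlib.sha256).digest()[:4]
        return hmac.compare_digest(sim.run_a3(rand), expected)

network = Network()
sim = network.provision("310410123456789")
assert network.authenticate("310410123456789", sim)

# A hacked phone without the SIM's Ki can't answer the challenge:
rogue = Sim(os.urandom(16))
assert not network.authenticate("310410123456789", rogue)
```

The design choice that matters here is that the phone is just a conduit for the challenge and response; jailbreaking the handset gets you no closer to Ki.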

While I'm not that concerned about jailbreaking leading to the parade of horrors Apple cites here, it's arguable that Apple's insistence on locking down the platform has made the problem worse. What people want to do is primarily: (1) load new software on the phone and (2) unlock the phone so they can use it with other carriers. However, because Apple won't let you do either of these, a lot of effort has been put into breaking all the protections on the phone, which naturally leads to the development of expertise and tooling for breaking the platform in general. There's an analogy here to the observation (I think I heard Mark Kleiman make this) that minimum drinking ages lead to the development of a fake ID industry, which then makes it easier for criminals and terrorists to get fake IDs.


July 24, 2009

As you may have heard, Palm and Apple are currently in an arms race over whether the Palm Pre can sync with iTunes. When the Pre first came out, it synced with iTunes. Apple recently released a patch to block it, and Palm released an update to the Pre that counters Apple's blocking. The current round centers on USB vendor IDs. USB devices carry a vendor ID that identifies who makes the product. iTunes apparently checks for Apple's vendor ID. Palm is impersonating it, so the Pre appears to be an iPod.

It should be readily apparent that there's no technical way for Apple to prevail with this kind of strategy; as long as there is a single fixed string that a valid device emits, Palm just needs to get a copy of that string and send it to iTunes (communications security people call this a replay attack). That doesn't mean that Apple can't win, of course. For instance, they could convince the USB Implementers Forum that Palm is violating the rules (Palm has already complained about Apple). I don't know what, if any, enforcement powers USB-IF has, but if they have any, Apple might conceivably convince them to stop Palm. [Question for any lawyers: does this change by Palm "circumvent a technological measure that effectively controls access to a work protected under this title" in the sense of the DMCA?] Another way to get past the technical replay problem is to make the replayed string something that Palm can't legally replay, like a random section of the iPod firmware.
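The replay problem is easy to see in a few lines of code. This is a toy model, not Apple's actual check: the host accepts any device presenting Apple's registered USB vendor ID (0x05ac), and since that "secret" is a fixed value the device emits, any other device can emit the same value.

```python
# Toy model of a host that trusts a fixed USB vendor ID.
# The class and function names are illustrative, not Apple's code.

APPLE_VENDOR_ID = 0x05AC  # Apple's registered USB vendor ID

class UsbDevice:
    def __init__(self, vendor_id: int, product: str):
        self.vendor_id = vendor_id
        self.product = product

def host_accepts(device: UsbDevice) -> bool:
    # A static equality check: nothing binds the ID to the hardware,
    # so whatever one device sends, another can replay verbatim.
    return device.vendor_id == APPLE_VENDOR_ID

ipod = UsbDevice(APPLE_VENDOR_ID, "iPod")
pre = UsbDevice(APPLE_VENDOR_ID, "Palm Pre")  # replaying the same ID

assert host_accepts(ipod)
assert host_accepts(pre)  # the replay succeeds; the host can't tell them apart
```

Any check of this form fails the same way: the defender is comparing against a constant, and constants can be copied.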

Even if we limit ourselves to technical approaches (which is much more fun) there are straightforward technical measures by which Apple could have built the system to make what Palm has done essentially impossible. For instance, they could have given every i{Pod,Phone} an asymmetric key pair and certificate and forced each device to authenticate prior to syncing. This would have made Palm's job very hard: even if they were to recover the keys from some devices, Apple could quickly blacklist those devices—including having an online blacklist which iTunes checks. Since the whole point of the exercise is to make things easy for the user, forcing them to constantly download fresh keys to their Pre seems like a real imposition.
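Here's a sketch of what that would look like. One caveat: Python's standard library has no asymmetric signature primitives, so a per-device HMAC key stands in for the asymmetric key pair and certificate; the challenge-response and blacklist structure is the same either way, and all the names here are mine.

```python
import hmac
import hashlib
import os

class SyncHost:
    """Toy iTunes-side authenticator with per-device keys and a blacklist."""
    def __init__(self):
        self._device_keys = {}   # serial -> per-device key
        self._blacklist = set()  # serials whose keys are known to have leaked

    def enroll(self, serial: str) -> bytes:
        key = os.urandom(32)  # installed in the device at manufacture
        self._device_keys[serial] = key
        return key

    def revoke(self, serial: str):
        self._blacklist.add(serial)

    def authenticate(self, serial: str, respond) -> bool:
        if serial in self._blacklist:
            return False
        challenge = os.urandom(16)  # fresh nonce, so replay doesn't work
        expected = hmac.new(self._device_keys[serial], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(respond(challenge), expected)

host = SyncHost()
key = host.enroll("SN123")
device = lambda c: hmac.new(key, c, hashlib.sha256).digest()

assert host.authenticate("SN123", device)

# Suppose the key is extracted from one device and reused in a clone;
# blacklisting that serial defeats the clone without touching other devices:
host.revoke("SN123")
assert not host.authenticate("SN123", device)
```

Unlike the vendor-ID check, there's no fixed string to replay here: the response depends on a fresh challenge, and extracting one device's key only buys the attacker that device's identity, which can be revoked.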

However, it seems that Apple hasn't built anything like this into their systems and it's a bit of a challenge to do it now; we somehow need to initialize each device with a key and a certificate. There's of course no problem in loading new firmware that generates a key pair and a certificate signing request, and submitting that to Apple for signing. But of course the Pre can do the same thing, so we've reduced it to a previously unsolved problem. One could imagine that Apple could force the key generation/certification process to happen online and torture the device with a bunch of forensics. Palm can of course try to defeat those, but since Apple just needs to change their servers, which they can do rapidly, this makes Palm's job somewhat harder. And of course if we're willing to allow legal measures, Apple could force you to click through some license attesting that you have an Apple device, maybe check your serial number, etc. Ultimately, though, I'm not sure you can get past this bootstrapping problem with purely technical measures.


July 10, 2009

If you have any interest in the Bush Administration's warrantless wiretapping program, you should read the report prepared by the Offices of the Inspectors General of the DOD, DOJ, CIA, NSA, and ODNI. This is the unclassified summary of a somewhat longer classified report, but nevertheless there's some interesting information here. The high points include:
  • The President's Surveillance Program (PSP) comprised the Terrorist Surveillance Program (TSP) and still classified Other Intelligence Activities (OIA).
  • The TSP program appears to have included surveillance of "communications into and out of the United States where there was a reasonable basis to conclude that one party to the communication was a member of al-Qa'ida or related terrorist organizations. ... The Attorney General subsequently publicly acknowledged the fact that other intelligence activities were also authorized under the same Presidential Authorization, but the details of those activities remain classified."
  • The program was periodically reauthorized and prior to each reauthorization, the NCTC would prepare a threat assessment justifying the need to reauthorize it:
    NCTC personnel involved in preparing the threat assessments told the ODNI OIG that the danger of a terrorist attack described in the threat assessments was sobering and "scary," resulting in the threat assessments becoming known by ODNI and IC personnel involved in the PSP as the "scary memos."
  • The Administration's legal justification for these activities relied heavily (it seems almost exclusively) on an analysis by John Yoo arguing that FISA couldn't constitutionally restrict the president's Article II wartime intelligence gathering activities and that these activities didn't violate the 4th amendment.
  • After Yoo left DOJ, new DOJ officials Jack Goldsmith, Patrick Philbin, and James Comey became concerned about the adequacy of Yoo's analysis. The timeline here is complicated but ultimately a standoff ensued between DOJ and the White House, with the White House on the side of continuing the PSP. This was ultimately resolved, as far as I can tell, by the White House effectively telling the DOJ that the President had determined the position of the executive branch. Here's Alberto Gonzales:
    Your memorandum appears to have been based on a misunderstanding of the President's expectations regarding the conduct of the Department of Justice. While the President was, and remains, interested in any thoughts the Department of Justice may have on alternative ways to achieve effectively the goals of the activities authorized by the Presidential Authorization of March 11, 2004, the President has addressed definitively for the Executive Branch in the Presidential Authorization the interpretation of the law.

  • Despite the above, the administration ultimately modified the program, presumably along lines more acceptable to DOJ.
  • It's extremely hard to assess the extent to which the PSP was at all useful. The OIG reports people from various agencies calling it useful, but mostly as one tool among many, and there doesn't seem to have been any real attempt to quantify the importance of the program.

The second and sixth points will sound especially familiar to people who remember the extensive debate about controls on cryptography: sweeping claims about the dire consequences of not being able to listen to everyone's communications, coupled with extremely limited evidence that that capability was actually that important. I'm not qualified to assess the legal questions about whether this program complied with FISA and/or the Constitution. However, this program obviously does have some impact on the privacy of US citizens (and "reasonable basis" is a pretty low standard), so it would be nice if there were somewhat more evidence that this was a tradeoff worth making.