SYSSEC: August 2008 Archives


August 27, 2008

Wired complains about a "massive iPhone security hole", namely that the keyboard lock does not work as expected:
You're a smart, safety conscious iPhone user, right? You keep the phone set to require a 4-digit passcode every time it wakes up, so if you ever lose your baby, all your personal information is safe. But if you are running v2.0.2 of the iPhone operating system, you might as well not bother. A simple hack will get anybody past your PIN code with free access to all your mail, contacts and bookmarks. Ouch!

Acting on a tip from the Mac Rumors forums, Gizmodo's Jesus Diaz whipped up a video of the exploit in action, a ridiculously easy two-step process:

1. Tap emergency call.

2. Double tap the home button.

This drops you into the iPhone's "favorites" section. From here you can make calls or send e-mail, and with a few steps you can browse to the Address Book and then on to Mail, Safari or the SMS application.

I'm not saying this is the best-designed feature I've ever seen. Obviously, if you have a PIN lock on your phone you'd prefer that it not be easily bypassed. That said, it's important to be realistic about what a PIN-based lock like this can do, even in principle, remembering that the attacker has your phone in hand. There are two things you might want to prevent:

  • Someone making phone calls with your account.
  • Someone getting at your data.

As far as phone calls go, your account information is embedded in the SIM card, which an attacker can just pop out and cram into his own phone. You can block this by setting a SIM PIN (the iPhone supports this), which must be entered every time the phone is powered on, but that's a separate mechanism from the keyboard/screen lock.

With regard to your personal data, remember that the iPhone stores it in flash memory somewhere. In principle, that data could be encrypted (though I don't think it is), but unless there is a hardware security module, the only source of keying entropy is the PIN, and anyone who takes an image of the flash memory can mount a dictionary attack on the PIN, as sketched below. Based on the iPhone teardowns I've seen, there doesn't seem to be an HSM anyway. Interestingly, you don't seem to be able to extract the data from the iPhone by syncing to it, at least not the trivial way: iTunes prompts you for the PIN before syncing. Of course, I don't know whether that's enforced on the phone or just in iTunes. If the latter, you should be able to write your own program that sucks the data off without ever asking for a PIN.
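To make the dictionary-attack point concrete, here's a minimal sketch in Python. Everything in it is assumed for illustration: the KDF, the XOR "cipher", and the SQLite header are stand-ins, since the real on-flash format isn't documented. The point is just that a key derived solely from a 4-digit PIN has at most 10,000 possible values, so trying all of them is trivial:

```python
import hashlib
from itertools import product

# Plausible plaintext header to test candidate decryptions against
# (e.g., the contacts database might be a SQLite file).
MAGIC = b"SQLite format 3"

def derive_key(pin):
    # Stand-in KDF: any deterministic function of the PIN alone has at
    # most 10,000 possible outputs, no matter how slow you make it.
    return hashlib.pbkdf2_hmac("sha1", pin.encode(), b"fixed-salt", 1000, dklen=16)

def xor_cipher(data, key):
    # Stand-in cipher (XOR) purely so the sketch runs end to end;
    # substitute the real cipher and file format if they were known.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def crack(image):
    # Try all 10^4 PINs against the first few bytes of the flash image.
    for digits in product("0123456789", repeat=4):
        pin = "".join(digits)
        if xor_cipher(image[:len(MAGIC)], derive_key(pin)) == MAGIC:
            return pin
    return None

# Demo: "encrypt" a header under PIN 4217, then recover the PIN.
image = xor_cipher(MAGIC, derive_key("4217"))
print(crack(image))  # -> 4217
```

Note that even a deliberately slow KDF only multiplies the attacker's work by a constant; the fix is to mix in entropy the attacker doesn't have, which is exactly what a hardware security module provides.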

The bottom line here is that the iPhone isn't some sort of vault for your data. If you want it protected, use strong encryption or keep it on a device you don't plan to lose.


August 14, 2008

Declan McCullagh reports on the MBTA's claim that the MIT researchers have no First Amendment right to publish their research:
First Amendment protection does not extend to speech that advocates a violation of law, where the advocacy "is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." The Individual Defendants' conduct falls squarely within this well established zone of no protection.

First, unless restrained, the Individual Defendants would have given their Presentation, and related materials (which have not yet been made available) to one of the world's largest hacker conferences. Advocacy in favor of illegal behavior, in this context, is likely to incite or produce illegal behavior. Second, the Presentation, and likely the related code and materials, unequivocally constitute advocacy in favor of a violation of law.... the Individual Defendants are vigorously and energetically advocating illegal activity, and this advocacy, in the context of the DEFCON Conference, is both directed to inciting or producing imminent lawless action, and likely to produce such action. Therefore, the Individual Defendants enjoy no protections under the First Amendment.

I've reviewed the MIT group's slides, and while they do involve a certain level of hype, the general tone isn't that out of place in the security community. They didn't strike me as "advocacy in favor of illegal behavior". Rather, they simply describe a set of vulnerabilities, how those vulnerabilities could be exploited, and the impact of exploitation. Obviously, this sort of disclosure could result in some illegal behavior, but that's a potential result of any paper describing vulnerabilities. Unless I'm missing something, the rule the MBTA is proposing would effectively allow banning the publication of any security vulnerability. Incidentally, the bit about the "context of the DEFCON conference" is odd. Perhaps the MBTA would be so good as to provide a list of venues at which it's OK to publish your results. Full Disclosure? W00T? USENIX Security? The New York Times?

The Individual Defendants' DEFCON presentation constitutes commercial speech. Commercial speech is any "speech that proposes a commercial transaction." Here, the Presentation is full of marketing, and self-promotional statements. It is not a research paper. As commercial speech advertising illegal activity, it receives no First Amendment protection.

What a bizarre statement. Leaving aside the question of whether self-promotion is sufficient to make something commercial speech (I'm not a lawyer, but my understanding is that it isn't), when was the last time you saw a research paper that wasn't full of marketing and self-promotional statements?


August 12, 2008

SF Gate has this article about the mysterious semi-disappearance of a laptop belonging to Clear (the airport security-line bypass service run by Verified Identity Pass) at SFO:
The Clear service speeds registered travelers through airport security lines. Verified Identity Pass operates the program at about 20 airports nationwide.

New enrollments in the program were suspended after a laptop with names, addresses and birthdates for people applying to the program disappeared from a locked Verified Identity Pass office at the airport. The files on the laptop were not encrypted, but were protected by two passwords, a company official said.

A preliminary investigation showed that the information was not compromised, said Steven Brill, CEO of Clear, but the TSA is still reviewing the results of its forensic examination of the computer.

In case you didn't know this already, multiple passwords don't add much value when the attacker has physical possession of the computer. Passwords only protect access while the operating system that enforces them is running. However, computers can typically be booted from media other than the hard drive, e.g., a CD-ROM or a USB stick. In that case, you can boot any operating system you want and read the laptop's hard drive directly, regardless of what passwords are set (see the sketch below). On many computers you can configure the BIOS so that the machine can only be booted from the hard drive, with a password required to reconfigure the BIOS. I can't tell whether this machine was configured that way, but even if it were, you could try guessing the BIOS password, or just open the case and read the hard drive directly in another machine. This, of course, is why you want to encrypt the hard drive.
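As a concrete illustration of why the passwords are irrelevant here: once you've booted your own OS from a CD-ROM or USB stick (or moved the drive to another machine), the drive is just a raw block device. A sketch, assuming a Linux rescue environment where the laptop's drive shows up as /dev/sdb (a hypothetical path) and you're running as root:

```python
# Copy a drive's raw contents from a rescue OS, ignoring any account
# passwords set by the installed operating system. Assumes the target
# drive appears as /dev/sdb (hypothetical) and that we're running as root.

CHUNK = 1024 * 1024  # read 1 MB at a time

with open("/dev/sdb", "rb") as disk, open("disk-image.bin", "wb") as image:
    while True:
        chunk = disk.read(CHUNK)
        if not chunk:
            break
        image.write(chunk)

# disk-image.bin now contains every (unencrypted) file on the laptop;
# the two passwords never came into play. Disk encryption, not
# passwords, is the relevant countermeasure.
```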

I'd also be interested in hearing what forensics were performed. Neither of these procedures would leave much in the way of electronic evidence, especially if the computer was already off. Both attacks would require rebooting the computer, though the attacker could just let the battery run down afterward, which would help cover up an intentional reboot. Removing the hard drive might leave tool marks on the case, screws, etc., but then you'd have to know what marks were already there from assembly, repair, and the like. In any case, it's not clear to me that this sort of attack would be readily detectable.


August 11, 2008

In an interview in today's WSJ, Steve Jobs confirms that the iPhone has a remote "kill switch":
Apple raised hackles in computer-privacy and security circles when an independent engineer discovered code inside the iPhone that suggested iPhones routinely check an Apple Web site that could, in theory, trigger the removal of the undesirable software from the devices.

Mr. Jobs confirmed such a capability exists, but argued that Apple needs it in case it inadvertently allows a malicious program -- one that stole users' personal data, for example -- to be distributed to iPhones through the App Store. "Hopefully we never have to pull that lever, but we would be irresponsible not to have a lever like that to pull," he says.

I don't find this rationale very convincing. As far as I know, neither Windows nor OS X has any sort of remote software deactivation feature, and we know there are malicious programs out there that steal users' personal data. In fact, the situation is quite a bit better on the iPhone than on either of those operating systems because, unlike the iPhone, they allow the user to install arbitrary software. The only ways a user could get malicious software onto an iPhone are if Apple distributes it through the App Store or if the user jailbreaks the phone, and it's hard to see why Apple needs to protect you if you've deliberately done something unauthorized. So, a kill switch seems less necessary for an iPhone than for a commodity PC.

While a switch like this might not be useful against routine malware, one could argue that because the iPhone runs on a closed network (AT&T in the US), the network operator needs to be able to deactivate software that poses a serious threat to the network (e.g., a rapidly spreading worm). However, unless you expect to be constantly plagued by such worms, you don't really need this fine-grained a kill switch; you just want to pull the phone off the network entirely. This is especially true since it seems unlikely that the feature would work in the face of truly malicious code. All it takes is one iPhone privilege escalation vulnerability and the malware can simply stop the remote check from happening at all, thus protecting itself, as the sketch below illustrates. There's no reason to believe the iPhone's security is much better than that of your average software system, so such vulnerabilities are likely to exist.
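Apple hasn't said how the check actually works, so here's a purely hypothetical reconstruction to illustrate the structural problem: imagine some daemon periodically fetches a blacklist from an Apple server and disables matching apps. The URL and function names here are invented:

```python
# Purely hypothetical reconstruction of a "kill switch" check loop.
# Apple's real mechanism, URL, and protocol are not public; everything
# here is invented for illustration.
import time
import urllib.request

BLACKLIST_URL = "https://iphone-services.example.com/clbl"  # made up

def fetch_blacklist():
    # Download the list of banned application IDs, one per line.
    with urllib.request.urlopen(BLACKLIST_URL) as resp:
        return set(resp.read().decode().split())

def kill_switch_daemon(installed_apps, disable_app, interval=24 * 3600):
    # Periodically disable any installed app that appears on the list.
    while True:
        for app_id in fetch_blacklist() & set(installed_apps):
            disable_app(app_id)
        time.sleep(interval)
```

The structural weakness is that this is just ordinary code running on the device: malware that gains root through a privilege-escalation bug can kill the daemon, block the URL, or substitute an empty list, so the switch only works against apps that aren't actively trying to evade it.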

What a switch like this is really good for, however, is letting Apple retroactively decide that a given app is something they don't want you running, even if you do want to run it, and take it away from you. That explanation seems a lot more consistent with Apple's general policy of deciding yes or no on every app people might want to run.