It would be irresponsible not to have a lever

In an interview in today's WSJ, Steve Jobs confirms that the iPhone has a remote "kill switch":
Apple raised hackles in computer-privacy and security circles when an independent engineer discovered code inside the iPhone that suggested iPhones routinely check an Apple Web site that could, in theory, trigger the removal of the undesirable software from the devices.

Mr. Jobs confirmed such a capability exists, but argued that Apple needs it in case it inadvertently allows a malicious program -- one that stole users' personal data, for example -- to be distributed to iPhones through the App Store. "Hopefully we never have to pull that lever, but we would be irresponsible not to have a lever like that to pull," he says.

I don't find this rationale very convincing. As far as I know, neither Windows nor OS X has any sort of remote software deactivation feature, and we know that there are malicious programs out there that steal users' personal data. In fact, the situation is quite a bit better on the iPhone than on either of those operating systems because (unlike the iPhone) they allow the user to install arbitrary software. The only ways that a user could get malicious software onto their iPhone are if Apple distributes it through the App Store or the user jailbreaks their phone—and it's hard to see why Apple needs to protect you if you've deliberately done something unauthorized. So, a kill switch seems less necessary for an iPhone than for a commodity PC.
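The checking mechanism the quoted passage describes, a device periodically fetching a vendor-controlled blacklist and removing any matching apps, might look roughly like the sketch below. This is purely illustrative: the URL, the JSON blacklist format, and every function name are invented, and nothing here reflects Apple's actual implementation.

```python
import json
import urllib.request

# Hypothetical endpoint -- invented for illustration.
BLACKLIST_URL = "https://vendor.example/app-blacklist.json"

def installed_apps():
    """Hypothetical inventory of installed app identifiers."""
    return {"com.example.goodapp", "com.example.spyware"}

def fetch_blacklist(url=BLACKLIST_URL):
    """Fetch the vendor's current list of banned app identifiers."""
    with urllib.request.urlopen(url) as resp:
        return set(json.load(resp))

def uninstall(app_id):
    """Stub: a real client would delete the app and its data here."""
    print(f"removing {app_id}")

def enforce(blacklist):
    """Remove any installed app that appears on the blacklist."""
    to_remove = installed_apps() & blacklist
    for app_id in sorted(to_remove):
        uninstall(app_id)
    return to_remove
```

A client would call `enforce(fetch_blacklist())` on some periodic schedule. Note that the entire scheme depends on this check itself continuing to run unmodified on the device, a point taken up below.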

While a switch like this might not be useful for routine malware, one could argue that because you're on a closed network (AT&T in the US), the network operator needs to be able to deactivate software that is a serious threat to the network (e.g., a rapidly spreading worm). However, unless you expect to be constantly plagued with such worms, you don't really need this fine-grained a kill switch—you just want to pull the phone off the network entirely. This is especially true since it seems unlikely that this feature will work in the face of truly malicious code. All it takes is one iPhone privilege escalation vulnerability and the malware will simply be able to disable the remote check entirely, thus protecting itself. There's no reason to believe that the iPhone's security is much better than that of your average software system, so such vulnerabilities are likely to exist.

What a switch like this is really good for, however, is letting Apple retroactively decide that a given app is something they don't want you running—even if you do want to run it—and take it away from you. That explanation is a lot more consistent with Apple's general policy of deciding yes or no on every app people might want to run.
