SYSSEC: July 2009 Archives

 

July 29, 2009

Wired reports on Apple's response to EFF's proposed DMCA exemption for iPhone jailbreaking. I'm not qualified to have a position on the legal arguments Apple is advancing, but check out their technical arguments:
More generally, as Mr. Joswiak testified at the hearings in Palo Alto, a critical consideration in the development of the iPhone was to design it in such a way that a relationship of trust could be established with the telecommunication provider (AT&T in the case of users in the U.S.). Before partnering with Apple to provide voice and data services, it was critical to AT&T that the iPhone be secure against hacks that could allow malicious users, or even well-intentioned users, to wreak havoc on the network. Because jailbreaking makes hacking of the BBP software much easier, jailbreaking affords an avenue for hackers to accomplish a number of undesirable things on the network.

For example, each iPhone contains a unique Exclusive Chip Identification (ECID) number that identifies the phone to the cell tower. With access to the BBP via jailbreaking, hackers may be able to change the ECID, which in turn can enable phone calls to be made anonymously (this would be desirable to drug dealers, for example) or charges for the calls to be avoided. If changing the ECID results in multiple phones having the same ECID being connected to a given tower simultaneously, the tower software might react in an unknown manner, including possibly kicking those phones off the network, making their users unable to make phone calls or send/receive data. By hacking the BBP software through a jailbroken phone and taking control of the BBP software, a hacker can initiate commands to the cell tower software that may skirt the carrier's rules limiting the packet size or the amount of data that can be transmitted, or avoid charges for sending data. More pernicious forms of activity may also be enabled. For example, a local or international hacker could potentially initiate commands (such as a denial of service attack) that could crash the tower software, rendering the tower entirely inoperable to process calls or transmit data. In short, taking control of the BBP software would be much the equivalent of getting inside the firewall of a corporate computer - to potentially catastrophic result. The technological protection measures were designed into the iPhone precisely to prevent these kinds of pernicious activities, and if granted, the jailbreaking exemption would open the door to them.

This is an odd set of arguments: if what I want to do is bring down the cell network, I've got a lot of options other than hacking my iPhone. For instance, I could buy a less locked-down phone or a standard programmable GSM development kit on the open market. In general, GSM chipsets and radios just aren't controlled items. Second, if a misbehaving device is able to bring down a significant fraction of the cellular system, then this represents a serious design error in the network: a cell phone system is a distributed system with a very large number of devices under the control of potential attackers; you need to assume that some of them will be compromised and design the network so that it's resistant to partial compromise. The firewall analogy is particularly inapt here: you put untrusted devices outside the firewall, not inside. I'm not an expert on the design of GSM, but my impression is that it is designed to be robust against handset compromise. The designs for 3GPP I've seen certainly assume that the handsets can't be trusted.

That leaves us with more mundane applications where attackers want to actually use the iPhone in an unauthorized way. Mainly, this is network overuse, toll fraud, etc. (Anonymous calling isn't that relevant here, since you can just buy cheap prepaid cell phones at the 7/11. You'd think someone at Apple would have watched The Wire.) As far as toll fraud goes, I'm surprised to hear the claim that hacking the iPhone itself lets you impersonate other phones. My understanding was that authentication in the GSM network was primarily via the SIM card, which is provided by the carrier and isn't affected by phone compromise. [The GSM Security site sort of confirms this, but I know there are some EG readers who know more about GSM security than I do, so hopefully they will weigh in here.] It's certainly true that control of the iPhone will let you send traffic that the provider doesn't like, and the phone can be programmed to enforce controls on network usage, so this is probably getting closer to a relevant concern. On the other hand, controls like this can be enforced in the network in a way that can't be bypassed by tampering with the phone.
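For readers who don't follow GSM closely, the relevant point is structural: authentication is a challenge-response exchange between the carrier's authentication center and the SIM, and the long-term subscriber key (Ki) never leaves the SIM. The sketch below is a minimal illustration of that flow; the class names are mine, and HMAC-SHA1 stands in for the real A3/A8 algorithms (which are operator-specific), so treat it as a picture of the architecture rather than of the actual crypto.

    # Conceptual sketch of GSM challenge-response authentication.
    # HMAC-SHA1 is a stand-in for the operator-specific A3/A8 algorithms.
    import hmac, hashlib, os

    class Sim:
        """Models the SIM card: it holds Ki and never exposes it to the handset."""
        def __init__(self, imsi, ki):
            self.imsi = imsi
            self._ki = ki                     # secret key, provisioned by the carrier

        def run_gsm_algorithm(self, rand):
            # A real SIM runs A3/A8 internally; we fake it with HMAC.
            digest = hmac.new(self._ki, rand, hashlib.sha1).digest()
            return digest[:4], digest[4:12]   # (SRES response, Kc session key)

    class Network:
        """Models the carrier's authentication center, which keeps a copy of each Ki."""
        def __init__(self):
            self.subscribers = {}             # IMSI -> Ki

        def provision_sim(self, imsi):
            ki = os.urandom(16)
            self.subscribers[imsi] = ki
            return Sim(imsi, ki)

        def authenticate(self, imsi, sim):
            rand = os.urandom(16)             # fresh challenge for every attempt
            expected = hmac.new(self.subscribers[imsi], rand, hashlib.sha1).digest()[:4]
            sres, _kc = sim.run_gsm_algorithm(rand)   # handset only ferries RAND/SRES
            return hmac.compare_digest(sres, expected)

    network = Network()
    sim = network.provision_sim("310150123456789")
    print(network.authenticate("310150123456789", sim))   # True

Since the handset software only shuttles RAND and SRES back and forth, compromising the phone's software shouldn't, by itself, hand you another subscriber's credentials.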

While I'm not that concerned about jailbreaking leading to the parade of horrors Apple cites here, it's arguable that Apple's insistence on locking down the platform has made the problem worse. What people want to do is primarily: (1) load new software on the phone and (2) unlock the phone so they can use it with other carriers. However, because Apple won't let you do either of these, a lot of effort has been put into breaking all the protections on the phone, which naturally leads to the development of expertise and tooling for breaking the platform in general. There's an analogy here to the observation (I think I heard Mark Kleiman make this) that minimum drinking ages lead to the development of a fake ID industry, which then makes it easier for criminals and terrorists to get fake IDs.

 

July 11, 2009

Miguel Helft has a somewhat confusing/confused article in today's NYT about Google's ChromeOS and moving applications into the cloud. In order to make sense of this, it's important to understand what we're talking about and what its properties are.

Your typical computer (say, a Windows machine) is loaded with an OS, whose job it is to let you run applications, talk to the various devices on the system, mediate access to shared resources, etc. Most of the work is done by a variety of applications that run on top of the operating system. Almost everything you ordinarily use (Word, IE, Firefox, Safari, Exchange, ...) is an application that runs on top of the OS. These applications rely on the OS to let them talk to all those resources, including the network, the disk, and to some extent the system memory. What we're talking about with a Web-based system is something where the operating system has been stripped down and only really runs one program: the Web browser. In the case of ChromeOS, the operating system is Linux and the browser is Chrome.

With that in mind, let's take a look at this article.

PCs can be clunky and difficult to maintain. They're slow to start up and prone to crashing, wiping out precious files and photographs. They are perennially vulnerable to virus attacks and require frequent upgrades. And if you lose your laptop, or worse, it's stolen, you lose not only your machine but everything stored in its circuitry that's not backed up - those files, contacts, e-mail messages, pictures and videos.

But what if a computer were nothing more than an Internet browser - a digital window pane onto the Web? You could forget about all the software that now powers your computer and everything it does. All of those functions would be done through the Web. And none of the information that's now inside your computer would be there anymore. Instead, it would all be on the cloud of connected computers that is the Internet.

There are a number of points here, so let me take them in turn: maintainability, availability, and security.

It's certainly true that PCs are a pain to maintain. This is especially true if you're running a PC with software from a variety of manufacturers, since any of those packages can need upgrading. However, it's not like you won't need to upgrade your ChromeOS machine as well: both Linux and Chrome will require periodic upgrades. Now, if Google does a good job, then those upgrades will be automatic and relatively painless—both Microsoft and Apple already do this, though I guess it's a matter of opinion how painless they are and of course it remains to be seen if ChromeOS's mechanisms are any better. The good news with a Web-based system is that the number of pieces of software you're running is strictly limited and that new functionality can be implemented by software that runs inside the browser, so you don't need to download or manage it.

As for the reliability/availability of your data, having your PC crash is of course a not-infrequent occurrence, and data certainly does get lost. If your data is stored on some machine on the Internet, then failures of your PC don't cause data loss. But the flip side of this feature is that if you don't have access to the Internet, then you may not be able to get at your data at all. It's possible to design networked systems that depend on remote storage but cache data on your local PC so you can work on it when you're disconnected—I routinely use systems like this for collaboration—but it's hard to make that sort of thing work purely in the browser context, since the whole point is that the software resides on the Web server, not on your local machine.
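For concreteness, here is a minimal sketch of the cache-plus-sync pattern I have in mind. It's entirely my own illustration (not anything ChromeOS or any particular product does): writes always land in a local store, and a best-effort sync step pushes pending changes to the remote server whenever the network happens to be available.

    # Minimal sketch of local-cache-plus-remote-sync, purely illustrative.
    class RemoteStore:
        def __init__(self):
            self.data = {}
            self.online = True

        def put(self, key, value):
            if not self.online:
                raise ConnectionError("network unavailable")
            self.data[key] = value

    class CachingClient:
        def __init__(self, remote):
            self.remote = remote
            self.cache = {}      # local copy, usable offline
            self.pending = {}    # edits not yet pushed upstream

        def write(self, key, value):
            self.cache[key] = value
            self.pending[key] = value
            self.sync()          # opportunistic: best effort, never blocks the user

        def read(self, key):
            return self.cache[key]

        def sync(self):
            for key in list(self.pending):
                try:
                    self.remote.put(key, self.pending[key])
                    del self.pending[key]
                except ConnectionError:
                    break        # still offline; try again later

    remote = RemoteStore()
    client = CachingClient(remote)
    remote.online = False
    client.write("draft.txt", "work done on the plane")   # succeeds locally
    remote.online = True
    client.sync()
    print(remote.data["draft.txt"])

Making something like this work purely inside the browser is hard precisely because the application code itself lives on the Web server rather than on your machine.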

I'm less convinced by the security story. The basic way that your computer gets infected with malware is that a program with a vulnerability processes some malicious data supplied by an attacker. On a typical PC, there are only a few programs which an attacker can send data to: primarily the Web browser, mail client (unless you use Webmail), IM client (unless you use Web-based IM), word processor (unless you use Google docs), maybe a PDF previewer, spreadsheet, etc. Note how prominently the browser appears here; a putative web-based operating system will presumably be running a standard browser, so vulnerabilities in the browser still potentially lead to viruses. It's possible to run the browser in an isolated environment which resets itself after each invocation (think VMware here), but you could at least in principle do the same thing on a commodity PC. Fundamentally, the security of a system like this depends on the security of the browser, which is to a great extent the situation with commodity PCs as well.

Speaking of security, I should mention that the following seems pretty implausible:

Any device, anywhere - from a desktop PC to a mobile phone - could give users instant access to all their files and programs so long as it had a Web browser. At the same time, new kinds of devices would be possible, from portable computers that are even lighter than today's thinnest PCs, to, say, a Web-connected screen in a hotel room that would give guests full access to their digital lives.

Now, obviously it's technically possible to build a Web-based system where you can remotely access all your data from a machine in your hotel room, but that's not really something you would want; remember that you have no idea what software is running on the PC in your hotel room, so you have no way of knowing that it isn't merely pretending to run ChromeOS (or whatever) while actually stealing your password and all your data. [Technical note: it could also be virtualized, running a keylogger, have an inline keyboard logger attached, etc.] I can see that you might want to have a very lightweight machine that you carry around and that does most of its thinking in the cloud—to some extent that's what an iPhone already is—but it really needs to be a device you control.

Moving on...

In the past few years, phones have started to act more like computers, and devices like the iPhone have whetted consumers' appetite for a combination of power and simplicity. Now that power and simplicity could migrate from the phone to the PC.

"The experience that we have on our iPhones and smart phones prefigures what the PC will become," said Nicholas Carr, the author of "The Big Switch," a book about cloud computing.

This is a particularly odd argument. Originally the iPhone was precisely this sort of Web-based system: you couldn't load your own software and the only extensibility point was that you could write new web applications. It quickly became clear that due to intentional restrictions in the browser environment (largely intended as security features) this was a really inadequate way of adding new functionality, which was one of the major original motivations for people to jailbreak the iPhone. Then, of course, the app store became available and now all sorts of new functionality is added by loading new programs onto the iPhone operating system, just like you would if you were running a regular PC (except, of course, for having to get all the software via Apple). If anything the iPhone seems like an argument against this concept, not for it.

 

July 9, 2009

Andy Zmolek of Avaya reports on VoIP security research company VoIPshield's new policy requiring vendors to pay for full details of bugs in their products. He quotes from a letter VoIPshield sent him:
"I wanted to inform you that VoIPshield is making significant changes to its Vulnerabilities Disclosure Policy to VoIP products vendors. Effective immediately, we will no longer make voluntary disclosures of vulnerabilities to Avaya or any other vendor. Instead, the results of the vulnerability research performed by VoIPshield Labs, including technical descriptions, exploit code and other elements necessary to recreate and test the vulnerabilities in your lab, is available to be licensed from VoIPshield for use by Avaya on an annual subscription basis.

"It is VoIPshield's intention to continue to disclose all vulnerabilities to the public at a summary level, in a manner similar to what we've done in the past. We will also make more detailed vulnerability information available to enterprise security professionals, and even more detailed information available to security products companies, both for an annual subscription fee."

In comments, Rick Dalmazzi from VoIPshield responded at length. Quoting some of it:

VoIPshield has what I believe to be the most comprehensive database of VoIP application vulnerabilities in existence. It is the result of almost 5 years of dedicated research in this area. To date that vulnerability content has only been available to the industry through our products, VoIPaudit Vulnerability Assessment System and VoIPguard Intrusion Prevention System.

Later this month we plan to make this content available to the entire industry through an on-line subscription service, the working name of which is VoIPshield "V-Portal" Vulnerability Information Database. There will be four levels of access (casual observer; security professional; security products vendor; and VoIP products vendor), each with successively more detailed information about the vulnerabilities. The first level of access (summary vulnerability information, similar to what's on our website presently) will be free. The other levels will be available for an annual subscription fee. Access to each level of content will be to qualified users only, and requests for subscription will be rigorously screened.

So no, Mr. Zmolek, Avaya doesn't "have to" pay us for anything. We do not "require" payment from you. It's Avaya's choice if you want to acquire the results of years of work by VoIPshield. It's a business decision that your company will have to make. VoIPshield has made a business decision to not give away that work for free.

It turns out that the security industry "best practice" of researchers giving away their work to vendors seems to work "best" for the vendors and not so well for the research companies, especially the small ones who are trying to pioneer into new areas.

As a researcher myself—though in a different area—I can certainly understand Dalmazzi's desire to monetize the results of his company's research. One of my friends used to quote Danny DeVito from Heist on this point: "Everybody needs money. That's why they call it money." That said, I think his defense of this policy elides some important points.

First, security issues are different from ordinary research results. Suppose, for instance, that Researcher had discovered a way to significantly improve the performance of Vendor's product. They could tell Vendor and offer to sell it to them. At this point, Vendor's decision matrix would look like this:

                      Not Buy        Buy
    Vendor's payoff   0              V - C

Where V is the value of the performance improvement to them and C is the price they pay to Researcher for the information. Now, if Researcher is willing to charge a low enough price, they have a deal and it's a win-win. Otherwise, Vendor's payoff is zero. In no case is Vendor really worse off.

The situation with security issues is different, however. As I read this message, Researcher will continue to look for issues in Vendor's products regardless of whether Vendor pays them. They'll be disclosing these vulnerabilities in progressively more detail to people who pay them progressively more money. Regardless of what vetting procedure Researcher uses (and "qualified users" really doesn't tell us that much, especially as "security professional" seems like a pretty loose term), the probability that potential attackers will end up in possession of detailed vulnerability information seems pretty high. First, information like this tends to leak out. Second, even a loose description of where a vulnerability is in a piece of software really helps when you go to find it for yourself, so even summary information increases the chance that someone will exploit the vulnerability. We need to expand our payoff matrix as follows:

                      Not Buy        Buy
    Not Disclose      0              V - C
    Disclose          -D             ?

The first row of the table, corresponding to a scenario in which Researcher doesn't disclose the vulnerability to anyone besides Vendor, looks the same as the previous payoff matrix: Vendor can decide whether or not to buy the information depending on whether it's worth it to them to fix the issue [and it's quite plausible that it's not worth it to them, as I'll discuss in a minute.] However, the bottom row of the table looks quite different: if Researcher discloses the issue, then this increases the chance that someone else will develop an exploit and attack Vendor's customers, thus costing Vendor D. This is true regardless of whether or not Vendor chooses to pay Researcher for more information on the issue. If Vendor chooses to pay Researcher, they get an opportunity to mitigate this damage to some extent by rolling out a fix, but their customers are still likely suffering some increased risk due to the disclosure. I've marked the lower right (Buy/Disclose) cell with a ? because the costs here are a bit hard to calculate. It's natural to think it's V - C - D but it's not clear that that's true, since presumably knowing the details of the vulnerability is of more value if you know it's going to be released—though by less than D, since you'd be better off if you knew the details but nobody else did. In any case, from Vendor's perspective the top row of the matrix dominates the bottom row.
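To make the dominance argument concrete, here's a toy calculation with invented numbers; V, C, and D are as above, and I've pessimistically scored the Buy/Disclose cell as V - C - D even though, as I said, the true value of that cell is murkier.

    # Toy numbers for the payoff matrix above (all values invented for illustration).
    V = 100.0   # value to Vendor of learning about/fixing the issue
    C = 30.0    # subscription price charged by Researcher
    D = 250.0   # expected damage to Vendor from third-party disclosure

    payoffs = {
        ("not disclose", "not buy"): 0.0,
        ("not disclose", "buy"):     V - C,
        ("disclose",     "not buy"): -D,
        ("disclose",     "buy"):     V - C - D,   # rough guess; see the caveat above
    }

    for row in ("not disclose", "disclose"):
        best = max(("not buy", "buy"), key=lambda col: payoffs[(row, col)])
        print("%-13s Vendor's best response: %-8s payoff: %6.1f"
              % (row, best, payoffs[(row, best)]))

    # Every entry in the "disclose" row is lower than the corresponding entry in
    # the "not disclose" row: disclosure imposes a cost on Vendor no matter what,
    # and buying merely limits the damage.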

The point of all this is that the situation with vulnerabilities is more complicated: Researcher is unilaterally imposing a cost on Vendor by choosing to disclose vulnerabilities in their system and they're leaving it up to Vendor whether they would like to minimize that cost by paying Researcher some money for details on the vulnerability. So it's rather less of a great opportunity to be allowed to pay for vulnerability details than it is to be offered a cool new optimization.

The second point I wanted to make is that Dalmazzi's suggestion that VoIPshield is just doing Avaya's QA for them and that they should have found this stuff through their own QA processes doesn't really seem right:

Final note to Mr. Zmolek. From my discussions with enterprise VoIP users, including your customers, what they want is bug-free products from their vendors. So now VoIP vendors have a choice: they can invest in their own QA group, or they can outsource that function to us. Because in the end, a security vulnerability is just an application bug that should have been caught prior to product release. If my small company can do it, surely a large, important company like Avaya can do it.

All software has bugs and there's no evidence that it's practical to purge your software of security vulnerabilities by any plausible QA program, whether that program consists of testing, code audits, or whatever. This isn't to express an opinion on the quality of Avaya's code, which I haven't seen; I'm just talking about what seems possible given the state of the art. With that in mind, we should expect that with enough effort researchers will be able to find vulnerabilities in any vendor's code base. Sure, the vendor could find some vulnerabilities too, but the question is whether they can find and fix enough of them that outside researchers can't find any remaining ones. There's no evidence that that's the case.

Finally, I should note that from the perspective of general social welfare, disclosing vulnerabilities to a bunch of people who aren't the vendor, while withholding them from the vendor itself, seems fairly suboptimal. The consequence is that there's a substantial risk of attack which the vendor can't mitigate. Of course, this isn't the researcher's preferred option—they would rather collect money from the vendor as well—but if they have to do it occasionally in order to maintain a credible negotiating position, that has some fairly high negative externalities. Obviously, this argument doesn't apply to researchers who always give the vendor full information. There's an active debate about the socially optimal terms of disclosure, but I think it's reasonably clear that a situation where vulnerabilities are frequently disclosed to a large group of people but not to the vendors isn't really optimal.

Acknowledgement: Thanks to Hovav Shacham for his comments on this post.