More on the cybersecurity bill

From Section 6 of the cybersecurity bill:

(3) SOFTWARE SECURITY.--The Institute shall establish standards for measuring the software security using a prioritized list of software weaknesses known to lead to exploited and exploitable vulnerabilities. The Institute will also establish a separate set of such standards for measuring security in embedded software such as that found in industrial control systems.

Now, I'm not saying that this is totally impossible, but it's not a straightforward matter of standardization like defining a set of screw-thread gauges. The problem is that we don't have a meaningful model for the severity of security vulnerabilities, CVSS notwithstanding, let alone for the probability that they will be exploited. Quoting myself:

I certainly agree that it's useful to have a common nomenclature and system for describing the characteristics of any individual vulnerability, but I'm fairly skeptical of the value of the CVSS aggregation formula. In general, it's pretty straightforward to assign linear values along each individual axis, and, all other things being equal, if vulnerability A is worse than vulnerability B on axis X, then A is worse than B. However, this only gives you a partial ordering of vulnerability severity: if A is worse on one axis but B is worse on another, the axes alone don't tell you which is worse overall. In order to get a complete ordering, you need some kind of model of overall severity, and building that kind of model requires some pretty serious econometrics.

CVSS does have a formula which gives you a complete ordering, but the paper doesn't contain any real explanation of where that formula comes from. The weighting factors are pretty obviously anchor points (.25, .333, .5), so I'm guessing they were chosen by hand rather than fit by some kind of regression model. It's not clear, at least to me, why one would want this particular formula and these weighting factors rather than some other ad hoc aggregation function, or just someone's subjective assessment.
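To make this concrete, here's a sketch of the v1 base-score computation as I read the spec; the metric values and the .25/.333/.5 impact-bias weights are exactly the hand-picked anchor points in question, so treat the constants as illustrative rather than authoritative:

```python
# Sketch of the CVSS v1 base score as I read the spec; the metric
# values and impact-bias weights (.25/.333/.5) are the hand-chosen
# anchor points discussed above, reproduced here for illustration.

ACCESS_VECTOR = {"local": 0.7, "remote": 1.0}
ACCESS_COMPLEXITY = {"high": 0.8, "low": 1.0}
AUTHENTICATION = {"required": 0.6, "not_required": 1.0}
IMPACT = {"none": 0.0, "partial": 0.7, "complete": 1.0}

# ImpactBias weights the three impact axes: "normal" treats them
# equally; the others favor one axis over the other two.
IMPACT_BIAS = {
    "normal":          (0.333, 0.333, 0.333),
    "confidentiality": (0.500, 0.250, 0.250),
    "integrity":       (0.250, 0.500, 0.250),
    "availability":    (0.250, 0.250, 0.500),
}

def base_score(av, ac, au, conf, integ, avail, bias="normal"):
    """Collapse the six base metrics into one number from 0 to 10."""
    w_c, w_i, w_a = IMPACT_BIAS[bias]
    impact = (IMPACT[conf] * w_c + IMPACT[integ] * w_i
              + IMPACT[avail] * w_a)
    return round(10 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac]
                 * AUTHENTICATION[au] * impact, 1)

# A remote, low-complexity, no-authentication vulnerability with
# partial impact on all three axes scores 7.0:
print(base_score("remote", "low", "not_required",
                 "partial", "partial", "partial"))
```

Nothing in the formula tells you why multiplying the exploitability terms and taking a weighted sum of the impact terms is the right way to collapse six axes into a single number; that's exactly the modeling question that goes unanswered.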

Even if we assume that something like CVSS works, we just have the same problem writ large. Say we have two systems: one with three vulnerabilities ranked moderate, and another with one vulnerability ranked severe. Which system is more secure? I don't know how to go about answering that question without a massive amount of research. We don't even know how to estimate the probability that a single vulnerability will be exploited, let alone the probability that a system with some given vulnerability profile will be compromised.
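To see how underdetermined this is, consider a toy example (the numbers are purely hypothetical): two plausible-sounding aggregation rules rank that same pair of systems in opposite orders.

```python
# Toy illustration with hypothetical severity numbers: two plausible
# per-system aggregation rules disagree about which system is worse.

system_a = [5.0, 5.0, 5.0]  # three moderate vulnerabilities
system_b = [9.0]            # one severe vulnerability

def total_severity(vulns):
    # "More total badness is worse": sum the severities.
    return sum(vulns)

def worst_case(vulns):
    # "You're only as strong as your weakest link": take the max.
    return max(vulns)

print(total_severity(system_a), total_severity(system_b))  # 15.0 vs 9.0: A is worse
print(worst_case(system_a), worst_case(system_b))          # 5.0 vs 9.0: B is worse
```

Neither rule is obviously wrong, and without a model connecting vulnerability profiles to the actual probability of compromise, there's no principled way to choose between them.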

This isn't to say, of course, that NIST can't come up with some formula for ranking systems based on their vulnerability profiles; after all, you could just invent an ad hoc formula for combining vulnerabilities with arbitrarily chosen weights. But it wouldn't be principled, and while it would be "objective", it's not clear it would be meaningful. That said, this is an awfully specific proposal for a lawmaker or his staff to come up with on their own; I wonder who suggested to them that this was a good plan.
