SYSSEC: April 2009 Archives

 

April 8, 2009

The WSJ reports that there has been significant penetration of the US power grid and other infrastructure networks:
The spies came from China, Russia and other countries, these officials said, and were believed to be on a mission to navigate the U.S. electrical system and its controls. The intruders haven't sought to damage the power grid or other key infrastructure, but officials warned they could try during a crisis or war.

"The Chinese have attempted to map our infrastructure, such as the electrical grid," said a senior intelligence official. "So have the Russians."

...

The U.S. electrical grid comprises three separate electric networks, covering the East, the West and Texas. Each includes many thousands of miles of transmission lines, power plants and substations. The flow of power is controlled by local utilities or regional transmission organizations. The growing reliance of utilities on Internet-based communication has increased the vulnerability of control systems to spies and hackers, according to government reports.

So, obviously this is bad.

The first question you should be asking at this point is why these infrastructure systems are connected to the Internet at all. Protecting a computer in an environment where the attacker is allowed to transmit arbitrary traffic is an extremely difficult problem. I'm not sure that anyone I know would feel comfortable guaranteeing that they could secure a computer under conditions of concerted attack by a dedicated attacker. This doesn't mean that nobody should ever connect their computer to the Internet. After all, it's not like the entire resources of some national intelligence agency are going to be trained on the server where your blog is hosted. But the situation is different with things like the electrical grid, which are attractive national-scale attack targets. [And rumor in the system security community is that these targets are not that well secured.]

The natural response is to set up a totally closed network with separate cables, fiber, etc., but I'm not sure how much that actually helps. If you're going to connect geographically distributed sites, then that's a lot of cable to protect, so you need to worry about attackers cutting into the cable at some point in the middle of nowhere and injecting traffic there. The next step is to use crypto: if you have point-to-point links, then you can use simple key management between them, and it's relatively simple to build hardware-based link encryptors which reject any traffic that wasn't protected with the correct key. Obviously you still need to worry about subversion of the encryptors, but they're a much harder attack target than a general-purpose computer running some sort of crypto or firewall or whatever.
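To make this concrete, here's a minimal sketch, in Python, of the check a link encryptor performs: every frame carries a MAC computed under the shared link key, and anything that doesn't verify is silently dropped. The key handling and frame format here are hypothetical, just to illustrate the reject-unauthenticated-traffic behavior.

```python
import hashlib
import hmac
import os

# Hypothetical shared link key, provisioned out of band on both encryptors.
LINK_KEY = os.urandom(32)

def protect(frame: bytes, key: bytes = LINK_KEY) -> bytes:
    """Append a MAC so the far end can verify the frame came over the link."""
    tag = hmac.new(key, frame, hashlib.sha256).digest()
    return frame + tag

def accept(wire_data: bytes, key: bytes = LINK_KEY):
    """Return the frame if the MAC verifies; otherwise drop it (return None)."""
    if len(wire_data) < 32:
        return None
    frame, tag = wire_data[:-32], wire_data[-32:]
    expected = hmac.new(key, frame, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None          # injected or tampered traffic: reject
    return frame

# Traffic injected by someone who cut into the cable fails verification:
assert accept(b"OPEN_BREAKER_7") is None
assert accept(protect(b"OPEN_BREAKER_7")) == b"OPEN_BREAKER_7"
```

A real link encryptor would also encrypt the payload and carry sequence numbers to prevent replay; the point here is just that unauthenticated traffic never reaches the control network at all.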

Unfortunately, this is only a partial answer because you still have to worry about what happens if one end of the link gets compromised. At that point, the attacker can direct traffic to the other end of the link, so we're back to the same problem of securing the end systems, but at least the attack surface is a lot smaller because someone first has to get into one of the systems. So, you need some kind of defense in depth where the end systems are hardened behind the link devices.
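As a sketch of what "hardened behind the link devices" might look like in practice: even frames that arrive over the authenticated link get checked against a narrow allowlist of expected message types before the end system acts on them, so a compromised peer can't simply issue arbitrary commands. The operation names and handler below are made up for illustration.

```python
# Hypothetical allowlist of operations the remote site is ever expected to send.
ALLOWED_OPERATIONS = {"READ_METER", "REPORT_STATUS", "SYNC_CLOCK"}

def dispatch(op: str, args: str) -> None:
    """Stand-in handler for the small set of allowed operations."""
    print(f"executing {op} {args!r}")

def handle_frame(frame: bytes) -> None:
    """Second layer of defense: act only on a small set of expected messages."""
    try:
        op, _, args = frame.decode("ascii").partition(" ")
    except UnicodeDecodeError:
        return  # malformed frame from a (possibly compromised) peer: drop
    if op not in ALLOWED_OPERATIONS:
        return  # authenticated but unexpected command: drop
    dispatch(op, args)

handle_frame(b"READ_METER substation-12")  # expected: dispatched
handle_frame(b"OPEN_BREAKER 7")            # unexpected: silently dropped
```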

Ideally, of course, you wouldn't network these systems at all, but I suspect that's pretty much a nonstarter: the grid is pretty interdependent and the control networks probably need to be as well. Most likely the best we can do here is to put in as many airgaps and choke points as possible, to make it hard to get into the system in the first place and hard for malware to spread once it's in.

P.S. It's not a computer security issue per se, but it's worth observing that the electrical grids have non-computational cascading failure modes. See, for instance, the Wikipedia article on the 2003 blackout. This implies that even if you have excellent informational isolation, you still need to worry about localized informational attacks leading to large-scale failures by propagation through the grid itself rather than through the computer network.

 

April 6, 2009

From Section 6 of the Cyber security bill:

(3) SOFTWARE SECURITY.--The Institute shall establish standards for measuring the software security using a prioritized list of software weaknesses known to lead to exploited and exploitable vulnerabilities. The Institute will also establish a separate set of such standards for measuring security in embedded software such as that found in industrial control systems.

Now, I'm not saying this is totally impossible, but it's not a straightforward matter of standardization like defining a set of screw thread gauges. The problem here is that we don't have a meaningful model for the severity of security vulnerabilities, CVSS notwithstanding, let alone for the probability that they will be exploited. Quoting myself:

I certainly agree that it's useful to have a common nomenclature and system for describing the characteristics of any individual vulnerability, but I'm fairly skeptical of the value of the CVSS aggregation formula. In general, it's pretty straightforward to determine linear values for each individual axis, and all other things being equal, if you have a vulnerability A which is worse on axis X than vulnerability B, then A is worse than B. However, this only gives you a partial ordering of vulnerability severity. In order to get a complete ordering, you need some kind of model for overall severity. Building this kind of model requires some pretty serious econometrics.

CVSS does have a formula which gives you a complete ordering but the paper doesn't contain any real explanation for where that formula comes from. The weighting factors are pretty obviously anchor points (.25, .333, .5) so I'm guessing they were chosen by hand rather than by some kind of regression model. It's not clear, at least to me, why one would want this particular formula and weighting factors rather than some other ad hoc aggregation function or just someone's subjective assessment.
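Here's a small sketch of the partial-ordering point from the quote above, with made-up axes and scores rather than the actual CVSS metrics: one vulnerability dominates another only if it's at least as bad on every axis, and otherwise the two are simply incomparable without some aggregation model.

```python
def dominates(a: dict, b: dict) -> bool:
    """True if vulnerability a is at least as bad as b on every axis
    and strictly worse on at least one (higher score = worse)."""
    axes = a.keys()
    return (all(a[k] >= b[k] for k in axes) and
            any(a[k] > b[k] for k in axes))

# Hypothetical per-axis scores (not real CVSS metrics).
vuln_a = {"exploitability": 3, "impact": 2, "exposure": 2}
vuln_b = {"exploitability": 2, "impact": 2, "exposure": 1}
vuln_c = {"exploitability": 1, "impact": 3, "exposure": 3}

print(dominates(vuln_a, vuln_b))  # True: worse or equal on every axis
print(dominates(vuln_a, vuln_c))  # False
print(dominates(vuln_c, vuln_a))  # False: a and c are incomparable
```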

Even if we assume that something like CVSS works, we just have the same problem writ large. Say we have two systems, one with three vulnerabilities ranked moderate, and another with one vulnerability ranked severe. Which system is more secure? I don't even know how to go about answering this question without a massive amount of research. We don't even know how to answer the question of the probability of a single vulnerability being exploited, let alone the probability that a system with some vulnerability profile will be exploited.

This isn't to say, of course, that NIST can't come up with some formula for ranking systems based on their vulnerability profiles. After all, you could just invent some ad hoc formula for combining vulnerabilities with arbitrarily chosen weights. But it wouldn't be anything principled, and while it would be "objective", it's not clear it would be meaningful. That said, this is an awfully specific proposal for some lawmaker or his staff to come up with on their own; I wonder who suggested to them that this was a good plan.
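To see how little an ad hoc formula settles, here's a toy example with two equally arbitrary aggregation rules and made-up severity numbers. Both are "objective" in the sense that anyone can compute them, but they disagree about which of the two systems from the example above is worse, and there's no principled way to pick between them.

```python
# Toy severity scale: moderate = 4, severe = 9 (arbitrary choices).
system_1 = [4, 4, 4]   # three moderate vulnerabilities
system_2 = [9]         # one severe vulnerability

def score_sum(vulns):
    """Ad hoc rule 1: severity adds up."""
    return sum(vulns)

def score_max(vulns):
    """Ad hoc rule 2: only the worst vulnerability matters."""
    return max(vulns)

print(score_sum(system_1), score_sum(system_2))  # 12 vs 9: system 1 looks worse
print(score_max(system_1), score_max(system_2))  # 4 vs 9: system 2 looks worse
```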