Why is remote software verification hard?

One problem that seems to come up a lot in computer security is remotely verifying what software is running on a given computer. There are a number of contexts in which this is important, for instance:

  • Verifying that voting machines are running the correct software.
  • Allowing content providers to verify that customers are running software with the right DRM.
  • Preventing the use of cheat software in MMORPGs [*].

Obviously, this is easy if you're not worried that the software is malicious, but if you do have to worry about malice, it's well known in the computer science community that outside of some special cases (basically, having trusted hardware on the target machine), this problem is essentially insoluble. The reasons why are interesting and involve some fundamental principles of computer science, but often aren't obvious to laypeople.

Let's start with a simple and obviously wrong approach—though one that would work fine if you weren't worried about malice—a protocol where the verifier simply asks the target what software it's running. The verifier says "What software are you running?" and the target says "FooSoft 1.1." Clearly, if the target is running malicious software, that software can easily lie and return any value of its choice, including the correct value, so this test doesn't really work.
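
To make this concrete, here's a minimal sketch of the naive protocol (in Python; the function names and version string are made up for illustration). The point is that an honest target and a lying one produce identical bytes on the wire, so the verifier can't tell them apart:

```python
# Minimal sketch of the naive "just ask the target" protocol.
# Both responders are hypothetical stand-ins for the target machine.

EXPECTED_VERSION = "FooSoft 1.1"

def honest_target(query: str) -> str:
    if query == "What software are you running?":
        return "FooSoft 1.1"          # genuinely running FooSoft 1.1

def malicious_target(query: str) -> str:
    if query == "What software are you running?":
        return "FooSoft 1.1"          # lying: actually running malware

def verify(target) -> bool:
    return target("What software are you running?") == EXPECTED_VERSION

print(verify(honest_target))     # True
print(verify(malicious_target))  # True -- the check tells you nothing
```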

Another approach that people often think of is to ask the target for a copy of the software it's running. The verifier can then compare the returned data to a locally stored reference copy [aside for crypto people: it's sufficient to store a digest of the reference copy] and if they match, conclude that the target is running the software. This technique is actually used in some systems; for instance, the Hart Voting system (see [IRSW08], Issue 11). However, it's equally clearly flawed. Just as the malicious target software could return any version string in the previous approach, here it can return a much longer string: a copy of the correct software. So, for instance, a virus which wanted to hide from verification would first make a copy of the real software somewhere else on the disk and then modify the operating system (or whatever) to install itself. When asked for a copy of the running software, it simply returns the pristine copy instead of what's actually executing, as the sketch below shows.
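
In code, the digest-comparison version looks something like this (a sketch; the stashed-copy path and `malicious_response` are hypothetical). Note that the check only constrains the bytes the target chooses to send back, not the bytes it actually executes:

```python
import hashlib

# The verifier only needs to store a digest of the reference copy.
# (Placeholder bytes here; in reality this is the real FooSoft image.)
REFERENCE_DIGEST = hashlib.sha256(b"...bytes of FooSoft 1.1...").hexdigest()

def verify(returned_image: bytes) -> bool:
    return hashlib.sha256(returned_image).hexdigest() == REFERENCE_DIGEST

# An honest target returns the image it is actually running.
# A malicious target returns the pristine copy it stashed on disk:
def malicious_response() -> bytes:
    with open("/tmp/stashed_foosoft.bin", "rb") as f:  # hypothetical path
        return f.read()   # the correct bytes, but not what's executing
```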

So malicious software can clearly return any static value, which makes this useless as an integrity check. What about a dynamic check? For instance, the verifier could run a local copy of the software and verify that the remote target reacts the same way as the local copy to a variety of inputs. This would presumably catch some kinds of malware, but a smart piece of malware could emulate the correct software and return the correct answers. Now, you might say that it's really hard to write a piece of malware that perfectly emulates the right software, but you don't have to write one: you can instead just run the real software in an emulator or virtual machine. This depends on a fundamental fact about computers: within some constraints, any computer can emulate any other computer. I.e., given the same program and inputs, it can produce the same outputs. [for technical people: any Universal Turing Machine can emulate any other Universal Turing Machine.]
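
Here's a sketch of the dynamic check and its defeat (hypothetical names throughout; `real_software` stands in for the genuine program). The verifier fires random challenges and compares answers against its local copy, but the malware simply forwards every challenge to an untouched copy of the real software, so the transcripts match exactly:

```python
import random

def real_software(x: int) -> int:
    return (x * x + 7) % 1000        # stand-in for the genuine program

def malware(x: int) -> int:
    # The malware doesn't need to understand the real software at all;
    # it just runs the genuine copy (here via a direct call, in practice
    # inside an emulator or VM) and passes the answer through.
    return real_software(x)

def dynamic_verify(target, trials: int = 100) -> bool:
    for _ in range(trials):
        challenge = random.randrange(10**6)
        # The verifier's "local copy" is real_software itself.
        if target(challenge) != real_software(challenge):
            return False
    return True

print(dynamic_verify(real_software))  # True
print(dynamic_verify(malware))        # True -- indistinguishable
```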

Having said that, I'm misrepresenting a little bit. The systems for which this is literally true are Turing Machines, which are a mathematical model of a computer, with an infinite amount of memory. If you're working with a real world machine, which has a fixed resources, then there are situations in which emulation is imperfect. For instance:

  • Virtual machines and emulators aren't as fast as running directly on the bare metal. So, for instance, if you knew what hardware the target machine had, and you had a very high resolution timer, you might be able to distinguish a real machine from an emulator, VM, or patched version of the target software (see [SPDK04] and the sketch after this list). In practice, these conditions very rarely apply, and even when they do, the results are often imperfect. For instance, one might be able to couple the emulator with an optimizer which improved the performance of the program under test so that the net system was as fast as it was on the bare metal [technical note 1: [SPDK04] argue that they can optimize the check code to the point where there's no room for further optimization. It's not clear this is true in general, especially on a complex architecture like x86]. [technical note 2: the optimized version might actually be faster, in which case you could use timing loops to get matching timing.]
  • Another potential approach is to exploit the fact that computers have a limited amount of memory. For instance, you could arrange that the target computer had just enough memory (or disk) to store the target program. Since the emulator takes up space, running the emulator plus the target won't fit in memory, so perhaps the system won't work, or will require the use of virtual memory (causing swapping and timing errors). The problem for the verifier is that most pieces of software are not this finely tuned to the target hardware—this isn't at all convenient to do, and it requires work for each new release of the software and hardware. Moreover, we can again use optimization techniques: the emulator can compress the target software, leaving some memory for itself to run in.
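
Here's a rough sketch of the timing idea from the first bullet (assuming, unrealistically, a quiet machine, known-speed hardware, a calibrated baseline, and a high-resolution clock; all the constants are invented). The emulation overhead shows up as a longer response time, but as noted above, an optimizing emulator can close that gap:

```python
import time

# Assumed calibration: how long the genuine software takes on the known
# target hardware, measured in advance by the verifier. Hypothetical values.
EXPECTED_SECONDS = 0.010
TOLERANCE = 0.001

def timing_verify(target, challenge, expected_answer) -> bool:
    start = time.perf_counter()
    answer = target(challenge)
    elapsed = time.perf_counter() - start
    # An unoptimized emulator runs slower than bare metal, so a correct
    # answer that arrives too late is suspicious. In practice, network
    # jitter and optimizing emulators make this check unreliable.
    return answer == expected_answer and elapsed <= EXPECTED_SECONDS + TOLERANCE
```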

The bottom line, then, is that it's extraordinarily hard to verify the software running on a computer. The only techniques that work reliably require unmediated access to the system hardware, either locally (e.g., by extracting the hard drive) or via some trusted agent attached to the target machine which can cryptographically attest to the verifier about the contents of memory. In settings where neither of these is available, we don't know of any solutions to this problem, and as far as I know, most computer scientists believe none are possible.
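
As an illustration of the trusted-agent idea (a sketch only; real remote attestation systems involve measured boot, PCRs, nonces, and certificate chains, none of which appear here), the essential point is that hardware the malware cannot modify signs a digest of what's actually in memory, using a key the malware cannot extract:

```python
import hashlib
import hmac
import os

# Key baked into the trusted hardware at manufacture; the verifier
# holds the matching key. (Real systems use asymmetric keys and certs.)
ATTESTATION_KEY = os.urandom(32)

def trusted_agent_quote(memory_contents: bytes, nonce: bytes) -> bytes:
    # Runs inside the trusted hardware: the host OS (and any malware on
    # it) can neither alter this computation nor extract the key.
    digest = hashlib.sha256(memory_contents).digest()
    return hmac.new(ATTESTATION_KEY, nonce + digest, hashlib.sha256).digest()

def verifier_check(quote: bytes, nonce: bytes, expected_digest: bytes) -> bool:
    expected = hmac.new(ATTESTATION_KEY, nonce + expected_digest,
                        hashlib.sha256).digest()
    # A fresh verifier-chosen nonce prevents the target from replaying
    # an old quote made when the memory contents were still honest.
    return hmac.compare_digest(quote, expected)
```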

1 Comment

The only techniques that work reliably require unmediated access to the system hardware, either locally (e.g., by extracting the hard drive) or via some trusted agent attached to the target machine which can cryptographically attest to the verifier about the contents of memory.

Note that, of course, this does not tell you what the software running on the system will actually do; at most it tells you that the software running there is an exact copy of some other piece of software. It will not magically detect whether the software is actually written to do what you expect it to be doing; cf. http://underhanded.xcott.com/
