The state's highest court ruled that defendants in drunken-driving cases have the right to make prosecutors turn over the computer "source code" that runs the Intoxilyzer breath-testing device to determine whether the device's results are reliable. But there's a problem: Prosecutors can't turn over the code because they don't have it.
The Kentucky company that makes the Intoxilyzer says the code is a trade secret and has refused to release it, thus complicating DWI prosecutions.
"There's going to be significant difficulty to prosecutors across the state to getting convictions when we can't utilize evidence to show the levels of the defendant's intoxication," said Dakota County Attorney James Backstrom.
"In the short term, it's going to cause significant problems with holding offenders accountable because of this problem of not being able to obtain this source code."
I can't find the original filings, which include an affidavit from David Wagner, so I'm not sure I'm seeing the best argument for this position. That said, I'm not sure that source code analysis is really the best way to determine whether breathalyzers are accurate.
At a high level, a breathalyzer is a sensor (apparently either an IR spectrometer or some sort of electrochemical fuel cell gizmo) attached to a microprocessor and a display. The microprocessor reads the output of the sensor, does some processing, and emits a reading. Obviously, there are a lot of things that can go wrong here, and this page describes a number of problems in the source code of another machine, mostly that there seems to be a bunch of ad hoccery in the way the measurements are handled. For instance:
3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.
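To make the arithmetic concrete, here's a minimal Python sketch of the quantization the review is describing. The 12-bit range, the divide-by-256 step, and the extra halving on the IR path come from the quoted passage; the names and the integer-division detail are my guesses at what the code does.

```python
# Sketch of the quantization described above. The 12-bit range (0-4095), the
# divide-by-256 step, and the extra halving on the IR path come from the quoted
# review; the function names and integer-division detail are guesses.
ADC_MAX = 4095
DIVISOR = 256

def fuel_cell_value(raw_adc: int) -> int:
    return raw_adc // DIVISOR              # collapses 4096 possible readings to 16

def ir_value(raw_adc: int) -> int:
    return fuel_cell_value(raw_adc) // 2   # the further halving: only 8 possible values

print(fuel_cell_value(1024), fuel_cell_value(1279))           # both 4: a 255-count spread is lost
print(len({fuel_cell_value(v) for v in range(ADC_MAX + 1)}))  # 16 distinct outputs
print(len({ir_value(v) for v in range(ADC_MAX + 1)}))         # 8 distinct outputs
```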
So, maybe this is bad and maybe it isn't. But it's not clear that you can determine the answer by examining the source code. Rather, you want to ask what the probability is that a system constructed this way would produce an inaccurate reading. If, for instance, the A/D converters have an inherent error rate/variance that's large compared to the precision they're read out at, then it's not crazy to divide down to some smaller number of significant digits (though I might be tempted to do that later in the process). More to the point, any piece of software you look at closely is going to be chock full of errors of various kinds, but it's pretty hard to tell whether they actually impact performance without some careful analysis.
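Here's the kind of analysis I mean, as a rough Monte Carlo sketch. Every number in it is invented (the noise figure in particular would have to come from the actual hardware); the point is only that the question is quantitative, not something you can read off the division in the source.

```python
# Rough Monte Carlo: does the divide-by-256 step add much error on top of the
# sensor's own noise? NOISE_SD and the uniform "true" readings are invented for
# illustration; real figures would have to come from the device itself.
import math
import random

ADC_MAX = 4095
DIVISOR = 256
NOISE_SD = 200        # assumed sensor noise, in raw ADC counts
TRIALS = 100_000

sq_err_full = sq_err_coarse = 0.0
for _ in range(TRIALS):
    true = random.uniform(DIVISOR, ADC_MAX - DIVISOR)         # stay away from the rails
    noisy = min(max(true + random.gauss(0, NOISE_SD), 0), ADC_MAX)
    coarse = (int(noisy) // DIVISOR) * DIVISOR + DIVISOR / 2  # bucket center, in counts
    sq_err_full += (noisy - true) ** 2
    sq_err_coarse += (coarse - true) ** 2

print("RMS error, full precision:     ", math.sqrt(sq_err_full / TRIALS))
print("RMS error, after divide-by-256:", math.sqrt(sq_err_coarse / TRIALS))
```

With noise that's large compared to the bucket size, the extra error from the division is small; with a very quiet sensor it would dominate. Either way, you can't tell which regime you're in just by staring at the source.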
On the flip side, actually reading the source code is a pretty bad way of finding errors. First, it's not very efficient: I've written and reviewed a lot of source code and it's just really hard to get any but the most egregious bugs out with that kind of technique. Second, even if we find things that could have gone wrong (missed interrupts, etc.) it's very hard to determine whether they caused problems in any particular case. [Note that you could improve your ability to recover from some kinds of computational error by logging the raw data as well as whatever readings the system produces; a sketch of what I mean follows this paragraph.] Third, there are a lot of non-software things that can go wrong. In particular, you need to establish that what the sensors are reading actually corresponds to the alcohol level in the breath, that that actually corresponds to blood alcohol level, that the sensors are reading accurately, etc.
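For what it's worth, the logging idea in the note above could be as simple as something like this. The file layout and field names are made up; nothing here comes from the actual device.

```python
# Sketch of the logging idea: record the raw sensor samples alongside the
# reported reading, so a disputed result can later be recomputed from the raw
# data instead of argued about. The CSV layout and names are hypothetical.
import csv
import time

def log_measurement(path: str, raw_samples: list[int], reported_bac: float) -> None:
    """Append one measurement: timestamp, raw ADC samples, and the reported result."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), " ".join(map(str, raw_samples)), reported_bac])

log_measurement("breath_test_log.csv", [1432, 1441, 1427, 1439], 0.09)
```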
Stepping up a level, it's not clear what our policy should be about how to treat evidence from software-based systems; all software contains bugs of one kind or another (and we haven't even gotten to security vulnerabilities yet). If that's going to mean that all software-based systems are useless for evidentiary purposes, the world is going to get odd pretty fast.