Security: Airport: October 2007 Archives


October 6, 2007

I have to admit that I was initially pretty skeptical of U. Buffalo's proposed automatic terrorist threat assessment tool, but now that Cory Doctorow—my lodestar to the reflexive geek position—has rubbished it (he compares it to phrenology), I figured I'd take another look. The basic idea seems to be to apply machine learning to videos of suspects being interviewed:
"We are developing a prototype that examines a video in a number of different security settings, automatically producing a single, integrated score of malfeasance likelihood," he said.

A key advantage of the UB system is that it will incorporate machine learning capabilities, which will allow it to "learn" from its subjects during the course of a 20-minute interview.

That's critical, Govindaraju said, because behavioral science research has repeatedly demonstrated that many behavioral clues to deceit are person-specific.

"As soon as a new person comes in for an interrogation, our program will start tracking his or her behaviors, and start computing a baseline for that individual 'on the fly'," he said.

The researchers caution that no technology, no matter how precise, is a substitute for human judgment.

"No behavior always guarantees that someone is lying, but behaviors do predict emotions or thinking and that can help the security officer decide who to watch more carefully," said Frank.

He noted that individuals often are randomly screened at security checkpoints in airports or at border crossings.
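The "on the fly" baseline idea from the press release is, at its core, just online per-subject statistics: track the running mean and variance of each behavioral feature for the current interviewee, then score later observations by how far they deviate from that person's own baseline. Here's a minimal sketch of that idea (purely illustrative and hypothetical; this is not the UB prototype, and the feature names are made up) using Welford's online algorithm:

```python
import math

class OnlineBaseline:
    """Running per-subject baseline via Welford's online algorithm.

    Hypothetical illustration of the 'on the fly' baseline idea:
    each call to update() folds one observation (e.g., a vector of
    behavioral features like blink rate or gaze shifts per window)
    into the subject's running mean/variance.
    """

    def __init__(self, n_features):
        self.n = 0                       # observations seen so far
        self.mean = [0.0] * n_features   # running mean per feature
        self.m2 = [0.0] * n_features     # running sum of squared deviations

    def update(self, features):
        """Incorporate one observation into the baseline."""
        self.n += 1
        for i, x in enumerate(features):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def deviation_score(self, features):
        """Mean absolute z-score of an observation against the
        subject's own baseline so far (0.0 until we have variance)."""
        if self.n < 2:
            return 0.0
        zs = []
        for i, x in enumerate(features):
            sd = math.sqrt(self.m2[i] / (self.n - 1))  # sample std dev
            zs.append(abs(x - self.mean[i]) / sd if sd > 0 else 0.0)
        return sum(zs) / len(zs)
```

In use, early interview windows mostly build the baseline, and later windows that deviate sharply from the subject's own norm produce high scores. Note that this captures only the person-specific normalization point from the article; turning deviation scores into a "malfeasance likelihood" is exactly the hard, contested part.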

The question of whether this will work involves two subquestions:

  • Is this possible in principle?
  • Are our machine learning techniques up to the job?

It's certainly widely believed that techniques like this work in principle, and indeed that human interviewers can make them work. After all, the police regularly use interviews to attempt to figure out whether suspects are guilty, and interviews are the basis of El Al's vaunted security measures. That said, there's data suggesting that humans aren't that great at detecting lies either. So, I'd say the jury is still out on whether it's possible to detect terrorists by observing their behavior in interviews. But believing that it will work certainly wouldn't put you outside mainstream opinion.

That leaves us with the question of whether our current machine learning techniques can do the job. That seems a bit less likely; even our facial recognition technology doesn't really work that well, and this seems like a rather harder problem. But that's why this is a research project being done at the University at Buffalo, as opposed to something contracted out to Lockheed Martin.