By John Rennie
In his inaugural column, John Rennie argues that advanced biometric technology is still far too immature to reliably detect guilt or criminal intent.
"Who knows what evil lurks in the hearts of men?" began one of the most famous of the classic radio mystery serials. "The Shadow knows!" If so, the Shadow is still the only one who does. Despite decades of advances in biometric technologies and neuroscience, investigators have at best only a rudimentary and untrustworthy ability to find telltale changes in the body or in brain activity that might betray guilt or bad intentions. That doesn't stop the authorities from continuing to look, however.
Witness the U.S. government's recent dabbling in Future Attribute Screening Technology (FAST). With thermal cameras, microphones and a laser radar that can measure heart rate and perspiration, FAST is designed to surreptitiously scan airport travelers for nervous behaviors, rapid blinking, or any other signs that might indicate an intention to commit violent terrorist acts -- what the system's developers call "malintent." The Department of Homeland Security acknowledged running preliminary tests of the technology in northeastern airports early this year. In September it tested a prototype more thoroughly at another undisclosed location, screening a cohort of 140 volunteers to single out those who had been instructed to cause a disruption. Predictably, that news kicked off an uproar over FAST's potential for civil liberties abuses, invasive public surveillance, and entrapment of the innocent. (Comparisons to the science-fiction movie Minority Report, with its telepathic technology for predicting "pre-crime," also arrived right on cue.)
But such scruples aside, how well did FAST do? A public eager to thwart terrorism will tolerate a lot of inconvenience and loss of privacy in the name of safety, as the 10 years since 9/11 have shown. The details of the tests weren't made public, but in a widely reported statement, DHS spokesman John Verrico said with some apparent approval that FAST was 78 percent accurate at detecting malintent and 80 percent accurate at detecting deception.
Even if we put aside reservations about self-reported scores from trials under unspecified conditions and grant that FAST is a technology in its infancy, that track record doesn't inspire confidence. No one should be satisfied with a screening method that lets through more than one out of every five would-be plane bombers. Far more troubling, however, is that we don't know what the rates of false positives and false negatives were. A system that missed 20 percent of the terrorists in an airport would be bad, but terrorists are rare, so disastrous mistakes would be few. A system that flagged 20 percent of innocent travelers as terrorist suspects, however, would destroy air travel overnight.
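The base-rate arithmetic behind that worry can be made concrete. The numbers below are illustrative assumptions, not figures from the DHS tests: suppose one actual attacker among 10 million travelers, a system that catches 80 percent of attackers, and a 20 percent false positive rate.

```python
# Illustrative base-rate arithmetic (assumed numbers, not DHS data).
travelers = 10_000_000      # passengers screened
attackers = 1               # actual would-be attackers among them
sensitivity = 0.80          # fraction of attackers flagged
false_positive_rate = 0.20  # fraction of innocents flagged

innocents = travelers - attackers
true_alarms = attackers * sensitivity            # attackers correctly flagged
false_alarms = innocents * false_positive_rate   # innocents wrongly flagged

# Of everyone flagged, what fraction is actually an attacker?
precision = true_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:,.0f} innocent travelers flagged")
print(f"precision: {precision:.8f}")
```

Under these assumed numbers, roughly two million innocent travelers get flagged for every attacker caught, which is the sense in which even a "mostly accurate" screener can be useless when the targets are vanishingly rare.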
To be fair, no one has suggested that FAST is even close to readiness, and its eventual performance might turn out to be far better than anything yet seen. But the point remains that its false positive rate -- its tendency to misidentify innocent people as harboring "malintent" -- will need to be vanishingly small for it to be useful.
Wanted: a mind-reading machine
FAST is only the latest in a series of methods and technologies employed to probe suspects for scientific evidence of guilt, deceit, or criminal designs, stretching back to the earliest days of the polygraph lie detector in 1921. Nowadays practically everyone knows that polygraphs are unreliable, particularly with highly nervous subjects, and that savvy ones can often stymie a reading by tensing their muscles, twitching, or pressing against a tack hidden in a shoe. Nevertheless, though not all court jurisdictions allow polygraph evidence, police investigators still use the machines routinely, and they see wide use in business and elsewhere outside the legal system.
Paul Ekman, professor emeritus of psychology at the University of California Medical School in San Francisco, won renown in the 1970s for developing a "facial action coding system" for reading tiny, fleeting expressions that, he maintains, can allow a skilled, trained observer to discern a person's cloaked feelings and thoughts with more than 70 percent accuracy. Celebrated in a well-known 2002 Malcolm Gladwell article in The New Yorker, Ekman has consulted with law enforcement agencies and helped to train agents for the Transportation Security Administration. (The 2009-2011 FOX network television program Lie to Me was loosely based on Ekman and his work.)
Ekman has adherents among his peers, but he also has sharp critics. Psychologists Charles Honts of Boise State University, David Raskin of the University of Utah in Salt Lake City, and Charles Bond of Texas Christian University have questioned the empirical foundations of Ekman's work. As journalist Sharon Weinberger pointed out in her news story "Airport Security: Intent to deceive" in Nature earlier this year, peer-reviewed studies of Ekman’s methods are mostly several decades old, and he avoids publishing the full details of his more recent work in peer-reviewed journals (reportedly in the interest of U.S. security).
But what about brain scanning methods? Faces and bodies may be undependable, but surely evidence for evil-doing ought to be there inside the skull: criminal intentions, malevolent thoughts and guilty memories must correspond to some neurological activity that is, at least in theory, detectable by methods such as functional magnetic resonance imaging (fMRI).
The principle may be true, but it’s a long way from practice. Various studies have indeed shown that deceptive behavior seems to involve an uptick in activity in the areas of the brain known as the prefrontal cortex and the anterior cingulate. Yet that insight suffers from severe limitations.
Inadequate as evidence
For example, a 2009 study by Joshua D. Greene and Joseph M. Paxton of Harvard University that appeared in Proceedings of the National Academy of Sciences found that fMRI brain scans could very clearly distinguish a group of people who frequently lied about a coin-tossing challenge from a group who rarely if ever lied. Yet that result meant only that the scans could distinguish the average readings of one group from those of the other. They were worthless for determining whether any individual was generally a liar or whether any single answer he or she gave was a lie.
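That group-versus-individual distinction can be illustrated with a toy simulation. The effect size and noise level below are invented for illustration, not taken from the Greene and Paxton study: two groups whose average signal separates cleanly, while any single reading is too noisy to classify one person.

```python
import random
import statistics

random.seed(42)

# Toy model: per-subject "brain activity" readings with heavy overlap
# between frequent liars and honest subjects (invented parameters:
# group means differ by 1.0, within-group spread is 2.0).
liars = [random.gauss(1.0, 2.0) for _ in range(100)]
honest = [random.gauss(0.0, 2.0) for _ in range(100)]

# Group averages separate reliably with enough subjects...
print(f"mean(liars)  = {statistics.mean(liars):.2f}")
print(f"mean(honest) = {statistics.mean(honest):.2f}")

# ...but classifying individuals by thresholding at the midpoint
# misfires constantly, because the two distributions overlap heavily.
threshold = 0.5
correct = sum(x > threshold for x in liars) + sum(x <= threshold for x in honest)
accuracy = correct / (len(liars) + len(honest))
print(f"individual-level accuracy: {accuracy:.2f}")
```

With these assumed parameters the group means are easy to tell apart, yet the single-subject accuracy hovers far closer to a coin flip than to courtroom-grade certainty, which is essentially Wagner's point.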
The rest of the scientific literature tells much the same story. As neuroscientist Anthony Wagner of Stanford University wrote in "Can Neuroscience Identify Lies?," his 2010 comprehensive review, "no relevant published data ... unambiguously answer whether fMRI-based neuroscience methods can detect lies at the individual-subject level."
So far, the courts have wisely declined to accept brain-scan evidence for lie-detecting purposes. In the precedent-setting federal case U.S. v. Semrau, the court considered but ultimately disallowed fMRI scans that the defendant's attorneys had offered as evidence that he had not knowingly defrauded the government. Nevertheless, in its decision, the court allowed that brain scans might become admissible once the scientific community had greater confidence in such interpretations and the method's error rates were better known.
Moreover, the courts' willingness to admit brain scan data as evidence of deceit or truthfulness might not hinge on scientists’ satisfaction. As Francis X. Shen and Owen D. Jones of the Vanderbilt University School of Law wrote in a symposium paper on "Brain Scans as Evidence" published earlier this year: "It is essential to recognize that law's concern is not solely whether the techniques are up to the justifiably robust standards of science. Law’s concern is whether the techniques are meaningfully better than the next best alternative technique currently deployed in the legal process -- which is often having a group of untrained jurors sit passively as they watch and listen to witnesses."
Shen and Jones also noted that, lie detection aside, the legal system could still consider brain scans highly relevant to decisions about future behavior (such as at parole hearings) or estimations of mental state. So there seems little doubt that society will remain keenly interested in trying to apply FAST, fMRI and other technologies to fighting terrorism and judging criminal intent: the desire for such methods to work may outstrip scientific uncertainties, now and for the foreseeable future, about their appropriateness for that purpose.
This past September, probably right around the time FAST was being tested and about 57 years after The Shadow left radio, CBS premiered its new TV program Person of Interest. The show's premise involves omnipresent computerized surveillance technology that constantly watches everyone and, by analyzing those oceans of data, pinpoints which people will soon be involved in a violent crime. The catch is that the system doesn’t predict whether those individuals will be the perpetrators of that crime or its victims. As a commentary on the state of the applicable science, that's about right.
Photos: Dept. of Homeland Security/Transportation Security Administration
Nov 7, 2011
Homeland Security doesn't just do counter-terrorism. Immigration & Customs Enforcement is one of its agencies. Biometric tech isn't mature enough, and maybe never should be trusted, to determine guilt or innocence by itself, but that doesn't mean it's useless either. For example, someone who subconsciously displays nervousness when asked to present ID or when their baggage goes through a scan might, in combination with other signs, get a more thorough vetting by human security.
The MERMER test is an evolution of, and includes, the P300-only test. It detects the brain's spark of recognition, prior to attempts at conscious suppression. If crime-scene info is controlled from the outset, only the perpetrator will know certain details. An innocent suspect will understand what is being shown; a guilty one will recognize what is being shown... the essential difference. Admissible in court. http://brainfingerprinting.com/Chemistry.php
It is government bureaucrats and their ilk who are "eager to thwart terrorism" and "will tolerate a lot of inconvenience and loss of privacy in the name of safety." Data from more objective sources prove that the flying public is far past fed up with the whole boondoggle. The terrorism card has been played to create a multi-billion dollar bureaucracy and an even larger potential for private-sector profits. There is talk of shutting down the ATF in the wake of the "Fast and Furious" scandal (after transferring the responsible parties to the safety of jobs in other DOJ fiefdoms), but I submit that the DHS - if not the bulk of the DOJ itself - should be shown the door. They have consistently and repeatedly lied to both the public and Congress on countless issues, from gun-running to unlawful surveillance to false swearing of affidavits to legitimize even more unlawful searches and wiretapping. Where does it end?
Acting nervous or suspicious is enough to get unwanted attention, but having low-paid airport security think they can spot a possible terrorist is silly. I rarely fly anywhere anymore because of the weird security requirements that change frequently (but not for the better). I would fail a thought-crime detector because I find it incredibly stupid to frisk old ladies as if they were hardened terrorists. It is silly to rely on technology to fix these problems, to believe that all the parameters of human behavior can be programmed into a device that eliminates all the mistakes of human security. There are always countermeasures against any technological advance; as the article points out, polygraph tests can be gamed if you know how. There is also the risk of using technology to reinforce prejudicial suspicions that focus on one particular group and not on others. And there needs to be a way to challenge an accusation of distrust by airport security that does not punish the innocent for being nervous or agitated.