Researchers say functional magnetic resonance imaging (fMRI) can probe the workings of the brain as never before—revealing everything from when you tell a lie to how you fall in love—while critics counter that reports of digital mind readers are premature, and that we should think twice before admitting the technology into our public and private lives.
In the past decade, a wave of researchers using scans has laid bare the schematics of how our brains handle fear, memory, risk-taking, romantic love and other mental processes. And the technology is going even further, pulling back the curtain guarding our most private selves–promising, among other things, a nearly foolproof lie detector based on brain scanning.
The government, employers–even your spouse–might turn to the technology to determine whether you are a law-abiding citizen, a promising new hire or a faithful partner. The prospect raises thorny questions about Fourth and Fifth Amendment rights. Is [an involuntary fMRI scan] an illegal search and seizure, since something was taken from you without your permission? And how do you protect your right not to incriminate yourself if people have a way of asking your brain questions, and you can’t say no or refuse to answer? These are serious questions we have to begin to ask.
The underlying technology involved in functional magnetic resonance imaging has been around for decades. What’s new is the growing sophistication in how it is being used. Inside a massive doughnut-shaped magnet, an fMRI scanner generates powerful fields that interact with the protons inside a test subject’s body. The hemoglobin molecules in red blood cells, for instance, exhibit different magnetic properties depending on whether they are carrying a molecule of oxygen. Since regions of the brain use more oxygen when they’re active, an fMRI scanner can pinpoint which areas are busiest at a given moment. These can be correlated with our existing anatomical understanding of the brain’s functions–and, as our knowledge of these functions improves, so does the accuracy of neuroimaging data.
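The last step–linking oxygen-use hot spots to mental activity–can be illustrated with a toy calculation. The sketch below uses entirely synthetic numbers; the function name, threshold and block design are invented for illustration, and real analysis pipelines rely on far more elaborate statistical models. It simply flags voxels whose blood-oxygenation signal rises and falls in step with a task:

```python
import numpy as np

def active_voxels(bold, task, threshold=0.5):
    """Flag voxels whose BOLD time series correlates with the task design.

    bold: (n_voxels, n_timepoints) array of signal intensities
    task: (n_timepoints,) array, 1 during task blocks, 0 at rest
    Returns indices of voxels whose correlation exceeds the threshold.
    """
    task_c = task - task.mean()
    bold_c = bold - bold.mean(axis=1, keepdims=True)
    corr = (bold_c @ task_c) / (
        np.linalg.norm(bold_c, axis=1) * np.linalg.norm(task_c) + 1e-12
    )
    return np.where(corr > threshold)[0]

# Synthetic demo: 3 voxels, 40 timepoints, alternating rest/task blocks.
rng = np.random.default_rng(0)
task = np.tile([0] * 5 + [1] * 5, 4).astype(float)
bold = rng.normal(0.0, 0.2, (3, 40))
bold[0] += 1.5 * task  # voxel 0 "activates" during the task
print(active_voxels(bold, task))  # expect voxel 0 among the flagged indices
```

In practice the "task regressor" is also smoothed to reflect the sluggish hemodynamic response, but the principle is the same: active regions are those whose signal tracks what the subject was doing.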
With fMRI, then, researchers can see what is going on across the entire brain, almost in real time, without danger or discomfort to the test subject. “It’s like being an astronomer in the 16th century after the invention of the telescope,” says Joshua Freedman, a psychiatrist at the University of California, Los Angeles. “For millennia, very smart people tried to make sense of what was going on up in the heavens, but they could only speculate about what lay beyond unaided human vision. Then, suddenly, a new technology let them see directly what was there.”
In the last decade, an explosion of brain-scan studies that tapped into fMRI’s astonishing abilities has greatly enhanced neuroscience’s understanding of how the mind works. Some experiments have revealed vast differences between how our mental apparatus operates and how we perceive it to function. Other studies support common-sense intuition.
Feroze B. Mohamed, an associate professor of radiology at Philadelphia’s Temple University, conducted an experiment in which he instructed some test subjects to fire a pistol and then falsely answer questions about the event while undergoing fMRI. Compared to others who truthfully said they did not fire a weapon, the liars showed increased activity in twice as many brain regions, including those associated with memory, judgment, planning, sentence processing and inhibition. The findings lend credence to what we’ve all realized at one time or another: It takes a lot more effort to lie than to tell the truth.
In the wake of Sept. 11, the potential for fMRI to distinguish liars from truth-tellers generated particular interest as the U.S. government sought more reliable ways to extract information from detainees in the global war on terror. The Pentagon’s Defense Academy for Credibility Assessment at Fort Jackson, S.C., formerly the Polygraph Institute, has financed over 20 projects aimed at developing improved lie detectors. DARPA, the Pentagon’s high-tech research arm, also jumped into fMRI work. “Researchers, funded by the Department of Defense,” a recent article in the Cornell Law Review noted, “have developed technologies that may render the ‘dark art’ of interrogation unnecessary.”
Entrepreneurs, meanwhile, are looking for civilian applications. In 2006, a California company called No Lie MRI, which had conducted a DARPA-funded study, began touting its commercial lie-detection services, offering $10,000 brain scans that it says can determine whether subjects are telling the truth. Among the first customers: an arson suspect who wanted to establish his innocence.
Even some of fMRI’s most enthusiastic supporters recognize that using the technology in this way could pose gigantic risks to civil liberties. Joel Huizenga, chief executive officer of No Lie MRI, says he anticipates a potential backlash against his firm–and welcomes it. “There should be controversy,” he says. “If I were the next Joe Stalin, I could use this technology to figure out who my friends and enemies are very simply, so I’d know who to shoot.” To allay concerns, No Lie only scans those who ask to be scanned: “We will only test individuals who come forward of their own free will,” Huizenga says. “We don’t want to be forcing anyone’s head into the machine.”
Huizenga’s firm may advocate strict limits on the technology, but there’s no reason to expect that other companies will. What if employers want to use this technology as part of a standard job interview? How about a classroom scanner to detect plagiarism and other forms of cheating? What if airport security agents could screen our state of mind along with our luggage?
Such applications are still hypothetical, of course, but their implications are already being hotly debated by bioethicists and legal scholars. The Cornell Law Review article asserts that “fMRI is one of the few technologies to which the now clichéd moniker of ‘Orwellian’ legitimately applies.” The article goes on to conclude that “fMRI’s use remains legally questionable” and that “the involuntary use of fMRI scanning in interrogation most likely violates International Humanitarian Law.”
Since 2001, several companies have sprung up offering to decode thoughts for the benefit of retailers. One pioneering firm, The BrightHouse Institute for Thought Sciences, in Atlanta, claims to be the first neuromarketing research firm to land a Fortune 500 client–though it wouldn’t identify the company.
Consumer advocates worry that corporations will use fMRI to hone ever more insidiously effective marketing campaigns. In 2004, the executive director of Commercial Alert, a group co-founded by Ralph Nader, sent a letter to members of the U.S. Senate committee that oversees interstate commerce, noting that marketers were using fMRI “… not to heal the sick but rather to probe the human psyche for the purpose of influencing it … in a democracy such as ours, should anyone have such power to manipulate the behavior of the rest of us?”
Bitmapping the Brain
Recent fMRI studies have enabled scientists to expand the intricate cartography that represents the mind at work. –E.M.
Truth Machine

For many, establishing guilt or innocence is fMRI’s holy grail.
Study: Temple University
Protocol: Six graduate students were asked to fire a gun loaded with blanks, then lie about their actions. Five students who didn’t fire a gun were told to be truthful. Could fMRI scans reveal who was lying?
Results: Fourteen areas of the brain, including the anterior cingulate cortex (top yellow dot) and the hippocampus (bottom), were active when subjects lied; seven areas were active when subjects told the truth.
Proof of Purchase

One controversial use of fMRI is neuroeconomics–the study of mental and neural processes that drive economic decisions.
Study: Carnegie Mellon University, Stanford University, MIT Sloan School of Management
Protocol: Twenty-six adults were given $20 each to spend on consumer items. Could researchers predict intent to purchase based on brain regions registering activity?
Results: When areas of the brain associated with product preference and evaluation of gains and losses–the nucleus accumbens (right red dot) and the medial prefrontal cortex (left), respectively–were activated, the person bought a product. Accuracy rate: 60 percent.
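The kind of prediction reported here can be caricatured with a minimal sketch, assuming only that activity in the two named regions runs somewhat higher on “buy” trials. Everything below is invented for illustration–the simulated effect sizes, the threshold rule and the resulting accuracy bear no relation to the study’s actual methods:

```python
import numpy as np

# Synthetic stand-in: activation levels (arbitrary units) in two reward-
# related regions for 26 shopping decisions, plus whether the item was bought.
rng = np.random.default_rng(1)
n = 26
bought = rng.random(n) < 0.5
nacc = rng.normal(0.0, 1.0, n) + 0.8 * bought  # nucleus accumbens
mpfc = rng.normal(0.0, 1.0, n) + 0.8 * bought  # medial prefrontal cortex

# Predict "buy" whenever combined activation exceeds its median.
score = nacc + mpfc
predicted = score > np.median(score)
accuracy = (predicted == bought).mean()
print(f"accuracy: {accuracy:.0%}")
```

Even this crude rule beats coin-flipping on average when the regions carry real signal–which is roughly what a 60 percent accuracy rate amounts to: better than chance, but far from mind reading.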
Big Love

Love might be nothing more than a chemical reaction.
Study: State University of New York, Stony Brook; Albert Einstein College of Medicine; Rutgers University
Protocol: Researchers asked 17 young men and women to look at photos of the people they professed to love, then analyzed their brain activity in an fMRI scanner.
Results: Early-stage romantic love is about motivation and reward: it lights up subcortical reward regions such as the right ventral tegmental area (top blue dot) and the dorsal caudate (bottom). Subjects in longer-lasting romantic relationships showed more activity in the ventral pallidum (middle), a region linked to attachment in prairie voles–and, scientists surmise, in humans.
The Oops Factor

What happens when you make a costly mistake?
Study: University of Michigan
Protocol: Scientists asked 12 adults to complete 360 visually based tests that carried monetary rewards and penalties between 25 cents and $2.
Results: When subjects made errors with consequences–in this case, losing money–the rostral anterior cingulate cortex (rACC, orange dot) was much more active. It was less active when mistakes carried no penalty. The rACC’s involvement suggests the importance of emotions in making decisions.
Better to Give

Does our brain find paying taxes satisfying?
Study: University of Oregon
Protocol: Scientists gave 19 women $100 each, then scanned their brains as they watched their money go to charity via both mandatory taxation and voluntary contribution.
Results: The caudate nucleus (right green dot) and nucleus accumbens (left), the same regions that fire when basic needs like hunger and social contact are met, were activated when subjects saw some of their tax money go to charity; activity was even greater when they gave money of their own accord. Scientists cite this as tentative evidence of altruism.
William Uttal, a professor emeritus of psychology at the University of Michigan who has written a book about fMRI’s potential shortcomings, points out that researchers don’t know how brain activity correlates to the mechanisms of thought. “The big problem is that the brain is far more complex than we understand,” he says. “With this MRI stuff, it’s very, very easy to misunderstand and to simplify things that are much more complicated.”
The most withering criticism centers on using fMRI scans as lie detectors. “Some people claim they can show you pictures of suspected terrorists, and even if you say you don’t know them, they can tell by looking at an fMRI scan whether you know them or not,” says Yale’s Andy Morgan. “Well, a positive result doesn’t necessarily mean you’re lying, because no one’s done studies involving faces that look alike. A familiar-seeming face may give you the same response as one you actually know.” And, regardless of Huizenga’s promise that his No Lie staffers won’t force anyone’s head into an fMRI machine, such assurances might not be necessary: Current scanning technology does not work with nonconsenting subjects. In fact, even tiny movements inside the scanner can negate results.
Unfortunately, any doubts about fMRI accuracy hardly lessen its potential for misuse. For decades, polygraph tests have been widely used despite their flaws. (Even proponents of polygraphy admit a 10 percent failure rate.) And junk science has long been rife in the courtroom. Earlier this year, law professor Brandon L. Garrett of the University of Virginia published a study analyzing 200 cases in which innocent people were wrongly convicted of a crime. In 55 percent of the cases, he found that jurors had been presented with faulty forensic evidence. “I personally am quite concerned,” says Vanderbilt’s Frank Tong. “If brain scans were admissible in court, and became popular enough, then even if they were not mandatory they would become in a sense obligatory. Because if you didn’t voluntarily undergo it, then there would be the question, ‘Why didn’t you take the test?’”
No doubt many brain-scan applications that critics most fear will never come to pass, and others as yet unseen will arise. What’s certain is that the technology will be transformative, with hardly an area in the public or private spheres that won’t be affected. Like it or not, the new brain science is here, and the world inside our heads is never again going to be completely private.