Perusing Mark Liberman’s Language Log, I came across a post about an awesome study (PDF). Yale psychology student Deena Skolnick Weisberg and her colleagues noticed that people seemed to like psychological explanations that contained a certain amount of neuroscience.
Weisberg and her colleagues came up with a list of interesting psychological phenomena. For each one, they created two explanations: a “good” one corresponding to the usual explanation given, and a “bad” one, which was usually circular. They found that their subjects (who were not particularly knowledgeable about psychology) were capable of distinguishing the bad explanations from the good ones.
The researchers then modified the explanations, adding a few words of neuroscience that were consistent with each one. They were careful to ensure that this added information was redundant with the explanation itself, so it contributed nothing new. Subjects who received the explanations with neuroscience were much worse at distinguishing the bad explanations from the good ones. Specifically, the researchers write, “the addition of such neuroscience information encouraged them to judge the explanations more favorably, particularly the bad explanations. That is, extraneous neuroscience information makes explanations look more satisfying than they actually are, or at least more satisfying than they otherwise would be judged to be.”
The study was repeated with students taking an introductory neuroscience course; unfortunately, “a semester’s worth of instruction is not enough to dispel the effect of neuroscience information on judgments of explanations.” It was repeated once more with neuroscience experts, who turned out to be immune to the effect.
The authors conclude, “Since it is unlikely that the popularity of neuroscience findings in the public sphere will wane any time soon, we see in the current results more reasons for caution when applying neuroscientific findings to social issues.” In other words: be skeptical, and try to compensate for this effect.