Science’s under-discussed problem with confirmation bias

– Molly Banta

Confirmation bias in the media is rampant and recognized. Why don't we acknowledge that peer-reviewed scientific papers are subject to the same human manipulations?

We are all subject to confirmation bias — a cherry-picked selection of the information we receive — in our choices of media, often without even realizing it. Sometimes it’s not only realized but embraced: political liberals have MSNBC, and conservatives have Fox News. The human being, as Warren Buffett once helpfully explained, is best at “interpreting all new information so that their prior conclusions remain intact.”

This human tendency is not limited to the media. Science, often sold as the clear-headed, unbiased antidote to confirmation bias, is open to the same human manipulations, purposeful and accidental — and significantly more often than we might guess.

Research scientists are under pressure to publish in the most prominent journals possible, and their chances increase considerably if they find positive (and thus “impactful”) results. For journals, the appeal is clear, writes Philip Ball for Nautilus: they make a bigger splash by announcing some new truth than by simply refuting old findings. The reality is that science rarely produces data so appealing.

The quest for publication has led some scientists to manipulate data, analysis, and even their original hypotheses. In 2014, John Ioannidis, a Stanford professor conducting research on research (or ‘meta-research’), found that across scientific fields, “many new proposed associations and/or effects are false or grossly exaggerated.” Ioannidis, who estimates that 85 percent of research resources are wasted, claims that the frequency of positive results far exceeds how often one should expect to find them. He pleads with the academic world to put less emphasis on “positive” findings.
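To see why positive results can outnumber true ones, consider a back-of-the-envelope calculation in the spirit of Ioannidis’s argument (the numbers below are illustrative assumptions, not figures from his paper). If only 10 percent of tested hypotheses are actually true, studies have 80 percent statistical power, and the usual 5 percent false-positive threshold is applied, then the share of positive results that reflect a real effect is

\[
\text{PPV} \;=\; \frac{\text{power}\times\text{prior}}{\text{power}\times\text{prior} + \alpha\,(1-\text{prior})}
\;=\; \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.05 \times 0.9} \;\approx\; 0.64 .
\]

In other words, roughly one in three published “positives” would be wrong even before any bias or flexible analysis enters the picture; add those, and the proportion only grows.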

Ironically, the scientific method is meant to combat confirmation bias: scientists are encouraged to search primarily for falsifying evidence rather than confirmation of their hypotheses. The rigors of science, however, are often outweighed by the realities of getting and keeping a job. With their academic careers and tenure contingent on getting published, scientists have moved from asking “How am I wrong?” to simply asking “How am I right?” “At present, we mix up exploratory and confirmatory research,” Brian Nosek, a psychologist at the University of Virginia, told Philip Ball. “You can’t generate hypotheses and test them with the same data.”

Though the scientific method’s elementary steps call for a preconceived hypothesis and purpose, these are rarely written down or explicitly defined. The “Chrysalis effect,” as psychologists at the University of Iowa call the bias, occurs when scientists retroactively formulate their hypotheses, allowing for a cleaner but biased presentation of data and analysis. When asked to define the purpose and hypothesis of their studies explicitly before beginning their experiments, researchers have been amazed by how conscious or subconscious decisions caused their analyses to deviate significantly from their original objectives. “Having this awareness helps me to separate which results I trust and which ones I trust less,” Susann Fiedler, a behavioral economist in Germany, told Nautilus.

Perhaps more alarming than the retroactive manipulation of research is the effect an experimenter’s original hypothesis and subtle expectations can have on the data itself. This can lead to problematic science not because hypotheses or data are being manipulated, but because the data is wrong — biased from the point of collection. If the observer intimately understands the experiment and naturally develops a preferred hypothesis, their observations will likely skew toward supporting it.

This phenomenon is apparent in the considerable difference between the findings of “blinded” scientists — those from whom critical, potentially biasing information is withheld — and non-blinded scientists.

Research on ants offers a constructive example. It is a common expectation that ants are significantly less aggressive among their own nestmates than among ants from a different nest. In 156 published studies on aggression in ants — only 29 percent of which were conducted blind — “aggression among nestmates was three times more likely to be reported in blinded than non-blinded experiments.” Non-blinded experimenters were less likely to report behavior that fell outside what they expected, and reported twice as much aggression among ants surrounded by non-nestmates. Not only did researchers vulnerable to bias tend to ignore aggression among nestmates, they also exaggerated aggression between non-nestmates, producing much more definitive and exciting ‘discoveries’ than those found in blind experiments.

Protecting science from confirmation bias is relatively simple, at least in theory. In most cases, experimenters can be “blinded” simply by coding or concealing information that reveals too much about the hypothesis or sample being tested. Despite such an easy-to-meet standard, many scientific reports fall woefully short. When meta-researchers at the University of Texas examined 248 bias-prone articles on ecology, evolution, or behavior, they found that just 13.3 percent of the studies were conducted blind. Though blinding is not always practical, 78.6 percent of these studies could have been blinded easily, either by masking the samples or by using a second, naïve experimenter. Even in special education research, where the role of confirmation bias has been studied extensively, 75 percent of studies took no precautions against it.

It is easy to be wary of ‘studies’ with an obvious bias — whether from political “astroturf” groups or corporate ads promising that “four out of five doctors agree” their toothpaste is the best. It is much harder to be cognizant of the subtler biases that creep into observed data. Distinguished, peer-reviewed science is not immune to human influence — a fact that many scientists recognize, but which laypeople may not.

* * *

The Source: Philip Ball, “The Trouble With Scientists,” Nautilus, May 14, 2015.

Photo courtesy of Flickr/acj1