The New England Journal of Medicine went out on a limb with an editorial that advised readers to let the data speak for themselves.
And ignore the funding source.
Jeffrey M. Drazen’s September 20 editorial says that a study’s “validity should ride on the study design, the quality of data accrual and analytic processes, and the fairness of results reporting.”
Not, he writes, on the funding source.
In other words, we shouldn’t be swayed by the fact that a study might be funded by an organization with a special interest.
Huh?
Drazen is writing on the heels of a study that showed physicians were less likely to think research was valid if it had been funded by a pharmaceutical company.
My guess is that the docs figured the funders might have a stake in the outcome.
The study found that internists (doctors trained in internal medicine) put more faith in studies funded by the NIH (National Institutes of Health) or by no source than in those funded by the pharmaceutical industry—even when the results were the same across all the studies.
Should we be skeptical of studies that are funded by special interests?
Yes.
As scientists we think we are immune to pressures that bend and fold the data so that the results take shape the way we want.
Research is lumpy and temperamental: it rarely shape-shifts into crisp origami with perfect edges.
As scientists we are also humans, subject to the same foibles as all muggles.
We want results from our studies. This desire can gently push us to frame our research and bend our results in ways we don’t even acknowledge.
We try to get the results we want.
What happens when you add to this picture a funding agency that also desires certain results?
One of the best descriptions of this phenomenon was written by Jonah Lehrer for The New Yorker.
Lehrer talks about the decline effect in scientific research.
He found that, in some cases, scientists just can't replicate a study and get results as robust as the original's.
That is, an initial study might look promising and show large effects. But as other researchers replicate the study, the effects diminish—hence, the decline effect.
Lehrer quotes a scientist, who says, “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
What that means is that scientists bring their baggage with them to the study, and those prior beliefs influence their results.
But when others repeat the studies, the results are less impressive, perhaps because the initial studies were swayed by a desire for a certain outcome.
Data don't speak for themselves, and facts are not always just facts.
And despite our best intentions, sometimes we frame the facts to suit our desires.
Read the editorial in the September 20, 2012 issue of NEJM at http://www.nejm.org/doi/full/10.1056/NEJMe1207121
Read Lehrer's article in the December 13, 2010 issue of The New Yorker at http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer