The subject of retractions has been gaining a lot of steam in the media recently, with several recent studies (outlined nicely in this New York Times article) showing that retractions are on the rise, and that misconduct and data falsification are among the most common reasons. Yesterday on Twitter, I noted that, while the language used to describe these retractions was very inclusive (“scientists,” “science articles”), the fields in which the retractions occurred were largely limited to the biomedical and life sciences. A discussion ensued, with folks like Ed Yong pointing out that lack of evidence doesn’t necessarily indicate lack of fraud in other fields. At this stage, it would be useful to know whether this is a real or a perceived difference in misconduct rates, not only because the culture of how we do science may be relevant, but also because we need to make sure that the public doesn’t lose trust in science and scientists in general.
“Scientists” are not a monolithic group, and it generally irks me when we’re represented that way. Still, I’m increasingly convinced that misconduct is a problem that all of us should start taking seriously. As these retraction announcements have been coming out, I found myself shaking my head and chalking up the difference to biomedical and life science institutions and cultures. I was assuming that falsifying data was not something that would happen in my fields; it just didn’t fit with the ecologists, geologists, and climate scientists I know. I couldn’t imagine a world where people who cared about their research and its implications for the environmental problems we face would want to falsify results; it seems so counterintuitive. I trust my colleagues, for one. “I know people are collecting field data,” I thought. “I have access to their code. I just don’t see it happening.”
But after yesterday’s discussion, I’m thinking more deeply about this. Data can be manipulated. Statistics can be tweaked. I don’t want to mistrust my friends and fellow researchers, but a healthy dose of skepticism has helped me realize that “oh, it’ll never happen in my field!” is precisely the wrong way to think about this issue. As Ed Yong pointed out yesterday, it’s not like folks in retraction-heavy fields were thinking, “we’re really at risk and need to be on our guard against things like this happening!” There’s also a large continuum of misconduct. On one end, you’ve got falsification of entire datasets. On the other end, you delete a data cell or drop a sample from an analysis because it stands between you and statistical significance (I’ve never, ever done this, but it’s not difficult to imagine). You know your data: that data point is weird, and it’s ruining the last year’s worth of work you’ve done. It’s such a simple fix, right? But it’s still wrong.
Obviously, this is a good time for a lot of self-reflection within the scientific community as a whole, and its many branches. One thing I pointed out is that there are very large institutional and cultural differences among various scientific disciplines when it comes to funding, publishing, important metrics for hiring, collaborations, data sharing, etc. In my fields (ecology and earth science), single-investigator studies (and single-authored papers) are rare, and open data is more common than in some other fields. If there is an actual (and not just perceived) difference in misconduct, do these cultural differences play a role? Not everyone publishes 170+ papers in twenty years (like anesthesiology researcher Dr. Yoshitaka Fujii, who is believed to hold the record number of retractions). Do things like research output expectations, cutthroat grant requirements, or other pressures contribute to increasing fraud? Are biomedical retractions more common because the immediate stakes (human health and safety) are so much higher than those of, say, a new fossil discovery, meaning biomedical papers are under more scrutiny and falsifications are more likely to be caught? Have things always been this way, with improved communication simply allowing us to catch cheaters more often?
On one level, I hate this. I hate that there are people out there who are shaking the public’s trust, when that trust has been so hard-earned to begin with (and is often shaky anyway). As someone for whom research is an act of love, it makes me sick that people would just make things up for personal gain; it undermines the entire pursuit of learning about the universe (beyond the one fact we already know: that people lie). On the other hand, maybe science is due for a good shake-up. If this helps us take a good look at how science is funded, reviewed, published, disseminated, and shared, that’s a good thing.
And finally, I can’t talk about retractions without a nod to one of my favorite sites, Retraction Watch. I wish we didn’t need it, but I’m very glad it’s there.