
Monday, October 14, 2013

Bad Science: Michael Eisen and sloppy peer review

The following is reposted with the kind permission of the author, Michael Eisen, a biologist at the University of California, Berkeley. The original appeared on his blog on October 3 with the title "I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals."

The ease with which substandard research papers slip through the reviewing system and into publication is worrying in its implications, since (for example) we are in the middle of very expensive programs to tackle what many scientists are telling us is the threat of climate change. Clearly more work needs to be done in quality control if we are to retain our confidence in scientific expertise.

In 2011, after having read several really bad papers in the journal Science, I decided to explore just how slipshod their peer-review process is. I knew that their business depends on publishing “sexy” papers. So I created a manuscript that claimed something extraordinary - that I’d discovered a species of bacteria that uses arsenic in its DNA instead of phosphorus. But I made the science so egregiously bad that no competent peer reviewer would accept it. The approach was deeply flawed – there were poor or absent controls in every figure. I used ludicrously elaborate experiments where simple ones would have done. And I failed to include a simple, obvious experiment that would have definitively shown that arsenic was really in the bacteria’s DNA. I then submitted the paper to Science, punching up the impact the work would have on our understanding of extraterrestrials and the origins of life on Earth in the cover letter. And what do you know? They accepted it!

My sting exposed the seedy underside of “subscription-based” scholarly publishing, where some journals routinely lower their standards – in this case by sending the paper to reviewers they knew would be sympathetic - in order to pump up their impact factor and increase subscription revenue. Maybe there are journals out there who do subscription-based publishing right – but my experience should serve as a warning to people thinking about submitting their work to Science and other journals like it.

OK – this isn’t exactly what happened. I didn’t actually write the paper. Far more frighteningly, it was a real paper that contained all of the flaws described above that was actually accepted, and ultimately published, by Science.

I am dredging the arsenic DNA story up again, because today’s Science contains a story by reporter John Bohannon describing a “sting” he conducted into the peer review practices of open access journals. He created a deeply flawed paper about molecules from lichens that inhibit the growth of cancer cells, submitted it to 304 open access journals under assumed names, and recorded what happened. Of the 255 journals that rendered decisions, 157 accepted the paper, most with no discernible sign of having actually carried out peer review. (PLOS ONE rejected the paper, and was one of the few to flag its ethical flaws.)

The story is an interesting exploration of the ways peer review is, and isn’t, implemented in today’s biomedical publishing industry. Sadly, but predictably, Science spins this as a problem with open access. Here is their press release:
Spoof Paper Reveals the “Wild West” of Open-Access Publishing
A package of news stories related to this special issue of Science includes a detailed description of a sting operation — orchestrated by contributing news correspondent John Bohannon — that exposes the dark side of open-access publishing. Bohannon explains how he created a spoof scientific report, authored by made-up researchers from institutions that don’t actually exist, and submitted it to 304 peer-reviewed, open-access journals around the world. His hoax paper claimed that a particular molecule slowed the growth of cancer cells, and it was riddled with obvious errors and contradictions. Unfortunately, despite the paper’s flaws, more open-access journals accepted it for publication (157) than rejected it (98). In fact, only 36 of the journals solicited responded with substantive comments that recognized the report’s scientific problems. (And, according to Bohannon, 16 of those journals eventually accepted the spoof paper despite their negative reviews.) The article reveals a “Wild West” landscape that’s emerging in academic publishing, where journals and their editorial staffs aren’t necessarily who or what they claim to be. With his sting operation, Bohannon exposes some of the unscrupulous journals that are clearly not based in the countries they claim, though he also identifies some journals that seem to be doing open-access right.
Although it comes as no surprise to anyone who is bombarded every day by solicitations from new “American” journals of such-and-such seeking papers and offering editorial positions to anyone with an email account, the formal exposure of hucksters out there looking to make a quick buck off of scientists’ desires to get their work published is valuable. It is unacceptable that there are publishers – several owned by big players in the subscription publishing world – who claim that they are carrying out peer review, and charging for it, but not doing it.

But it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than that the study didn’t do the control of submitting the same paper to subscription-based publishers (UPDATE: The author, Bohannon, emailed to say that, while his original intention was to look at all journals, practical constraints limited him to OA journals, and that Science played no role in this decision). We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper (it has many features in common with the arsenic DNA paper, after all). Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity. When Elsevier and other big commercial publishers pitch their “big deal”, the main thing they push is the number of papers they have in their collection. And one look at many of their journals shows that they also will accept almost anything.

None of this will stop anti-open access campaigners (hello Scholarly Kitchen) from spinning this as a repudiation of open access for enabling fraud. But the real story is that a fair number of journals who actually carried out peer review still accepted the paper, and the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high impact journals – that are deeply, deeply flawed – the arsenic DNA story is one of many recent examples. As you probably know, there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work.

And the real problem isn’t that some fly-by-night publishers hoping to make a quick buck aren’t even doing peer review (although that is a problem). While some fringe OA publishers are playing a short con, subscription publishers are seasoned grifters playing a long con. They fleece the research community of billions of dollars every year by convincing them of something manifestly false – that their journals and their “peer review” process are an essential part of science, and that we need them to filter out the good science – and the good scientists – from the bad. Like all good grifters playing the long con, they get us to believe they are doing something good for us – something we need. While they pocket our billions, with elegant sleight of hand they get us to ignore the fact that crappy papers routinely get into high-profile journals simply because they deal with sexy topics.

But unlike the fly-by-night OA publishers who steal a little bit of money, the subscription publishers’ long con has far more serious consequences. Not only do they traffic in billions rather than thousands of dollars while denying the vast majority of people on Earth access to the findings of publicly funded research; the impact and glamour they sell us to make us willing participants in their grift also has serious consequences. Every time they publish because it is sexy, and not because it is right, science is distorted. It distorts research. It distorts funding. And it often distorts public policy.

To suggest – as Science (though not Bohannon) is trying to do – that the problem with scientific publishing is that open access enables internet scamming is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.

There are deep problems with science publishing. But the way to fix this is not to curtail open access publishing. It is to fix peer review.

First, and foremost, we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important. Even when people take peer review seriously, it still just represents the views of two or three people at a fixed point in time. To invest the judgment of these people with so much meaning is nuts. And it’s far worse when the process is distorted – as it so often is – by the desire to publish sexy papers, or to publish more papers, or because the wrong reviewers were selected, or because they were just too busy to do a good job. If we had, instead, a system where the review process was transparent and persisted for the useful life of a work (as I’ve written about previously), none of the flaws exposed in Bohannon’s piece would matter.


Saturday, August 08, 2009

Bias

Thought-provoking article in The Guardian today, about how even medical research can end up with an unreliable consensus skewed by influential reviews:

A small number of review papers funnelled large amounts of traffic through the network. These acted like a lens, collecting and focusing citations on the papers supporting the hypothesis.

Worse, science can be "spun":

One paper reported no beta amyloid in three of five patients with IBM, and its presence in only a "few fibres" in the remaining two patients; but three subsequent papers cited this data, saying that it "confirmed" the hypothesis.

This is an exaggeration at best, but the power of the social network theory approach is to show what happened next: over the following 10 years these three supportive citations were the root of 7,848 supportive citation paths, producing chains of false claim in the network, amplifying the distortion.
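To make the "citation paths" figure concrete: if you treat the literature as a directed graph, with an edge from each citing paper to each paper it cites, then a supportive citation path is simply a chain of citations leading back to the original claim. The sketch below is a minimal illustration in Python, using an invented toy graph rather than the real IBM/beta-amyloid literature the article analysed; the paper names and numbers are hypothetical. It shows how the number of distinct paths multiplies once later papers cite both the original claim and the reviews built on it.

```python
# Hypothetical sketch: counting "supportive citation paths" in a citation graph.
# The papers and edges below are invented for illustration only.

from collections import defaultdict
from functools import lru_cache

# Directed edges: citing paper -> papers it cites (published literature forms a DAG).
citations = {
    "review_A": ["primary_1"],
    "review_B": ["primary_1", "review_A"],
    "paper_C":  ["review_A", "review_B"],
    "paper_D":  ["paper_C", "review_B"],
    "paper_E":  ["paper_D", "paper_C"],
}

graph = defaultdict(list, citations)

@lru_cache(maxsize=None)
def paths_to(source: str, target: str) -> int:
    """Count distinct citation chains from `source` back to `target`."""
    if source == target:
        return 1
    return sum(paths_to(cited, target) for cited in graph[source])

# Each later paper contributes all of its chains back to the original claim,
# so the total grows multiplicatively as the literature builds on itself.
total = sum(paths_to(p, "primary_1") for p in citations)
print(total)  # 19 paths from just six toy papers
```

Even in this six-paper toy example there are 19 distinct routes back to the founding claim; scale the same branching up over a decade of publications and a handful of influential reviews can plausibly sit at the root of thousands of paths, which is the amplification effect the article describes.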

This leads one to consider the implications of social network theory. I suppose it's a talent for this that helped Mao and Stalin rise, but it may also explain how people in other fields (e.g. finance and banking, the media) can be both successful and dangerously dumb. (Remember Mao's bright idea of 1958, culling sparrows because they ate crops? The resulting explosion in the crop-gobbling insect population forced him to ask Russia for thousands of birds to restock.)

And have you watched the celeb version of "Who wants to be a millionaire?" and been struck by the ignorance of some of them? Yet they know enough (of what they need to know) to make a sight more than most of us. The technique seems to be, get the job first, then learn how to do it from those around you. Duffers try to learn first, then apply for the post, by which time it's gone. Look at chancer Blair as against plod-towards-it Brown. (Some say that Blair has never read a book; but then, he doesn't need to. As Disraeli said, "When I want to read a book, I write one.")

Connected to social network theory is Cass Sunstein's notion of "group polarisation", where like-minded people get together, not only reinforcing their views but making them more extreme. I suppose this has implications not just for political caucuses and media advisers, but for how we choose our newspapers.

And what blogs we read.