The seven sins of peer review


More than one million scientific articles are published every year. The process that was established to control their quality is increasingly being called into question.


Jan Hendrik Schön, Yoshitaka Fujii, Woo Suk Hwang, Diederik Stapel: these scientists are famous not for their discoveries, but for having cheated the system. They manipulated – and in some cases even invented – experimental results to get their work published in such prestigious scientific journals as Science and Nature.

These fraudulent cases suggest problems with both the integrity of researchers and the quality control of journals. One of the system’s foundations is peer review, the process in which two or three experts in the field examine an article before publication and recommend whether it should be published.

Often criticised as slow and cumbersome, peer review seems archaic in the era of Web 2.0. Technologist presents both the problems and some possible solutions.

Problems

1. INEFFICIENCY Peer review does not always fulfil its primary purpose, which is to prevent the publication of erroneous results. This problem was brilliantly exposed in two studies, one in Science in 2013 and the other in the British Medical Journal in 2008, in which articles with intentionally erroneous results were sent to hundreds of journals, the majority of which accepted them for publication. The experts who were consulted either failed to detect the errors or simply overlooked them.

An efficient system should also select the most pertinent articles and promote high-quality research. Yet the system tends to stifle original thought; in fact, some research that ultimately proved Nobel-worthy was originally rejected. It is often difficult for reviewers to identify truly groundbreaking research because it contradicts established theories.

2. BIAS Consciously or not, experts tend to favour articles from renowned institutions. A 1982 study in Behavioural and Brain Sciences demonstrated that previously accepted articles originating from prestigious universities were often rejected when resubmitted under the authorship of scientists at second-tier institutions.

The experts themselves, who usually remain anonymous, also tend to favour articles from colleagues with whom they have worked regularly – sometimes even to the point of fraud. A 2014 survey by Nature revealed that some authors went so far as to create false identities to review their own articles or those of their friends.

3. SPEED The rhythm of scientific progress and the advancement of careers are accelerating, but publishing an article still takes as long as ever – from several months to more than a year if authors are asked to make corrections or challenge a rejection.

4. CULTURE The famous “publish-or-perish” culture encourages scientists to work on projects whose results are likely to be published quickly by reputable journals. These are often trendy subjects with practical applications and, above all, only positive results.

But science does not always work that way: progress often comes in small increments. As such, it is critical to share negative and positive results alike to ensure that scientists do not waste time on hypotheses that have already been dismissed. Reproducing existing results is an essential step in the scientific method, even if it does not lead to publication.

5. COST Peer review is founded on the unpaid work of thousands of university experts, as well as the paid work of journal staff. Subscriptions are expensive, and even open-access journals simply transfer their costs to researchers, who have to pay to publish their articles. This is ultimately profitable for the publishers: Reed Elsevier, for example, has an operating margin of more than 30 per cent.

6. OBSOLESCENCE Publishing in a scientific journal is not the only way to disseminate research results. In the era of Web 2.0 and social networking, there are plenty of platforms on which scientists can write their articles, publish them online and respond to comments from not one or two peers but the entire scientific community. The current system is archaic in comparison because results cannot be updated or corrected quickly, nor can post-publication comments be taken into account.

7. A NECESSARY EVIL Despite the system’s drawbacks, many scientists still consider peer review a necessary evil. A 2011 report from the British Parliament described it as inefficient but irreplaceable. This is because the articles a scientist publishes in prestigious journals are essential to career advancement, and peer review plays a central role in their evaluation.

Solutions

UNIVERSAL JUDGEMENT In the traditional peer-review process, only two or three experts are consulted. Yet a published scientific article is read by tens, hundreds or even thousands of experts, many of whom may want to weigh in on its strengths and weaknesses. Participative evaluation, in the form of online comments and feedback, would make it possible to gather and consider all of these opinions.

PUBLISH, THEN REVIEW An article could be published before it is reviewed, enabling the entire community to read it quickly and assess its quality. In less than two months in 2011, for example, the scientific community published some 60 articles on Arxiv.org (see below) in response to the claim that neutrinos produced at CERN had travelled faster than light, a much more rapid and complete response than traditional peer review could have provided.

LIFE AFTER PUBLICATION On some platforms, every online article can be commented on, evaluated and even graded by experts. Authors then have the opportunity to respond to criticism, explain obscure points and even modify their articles. In this way, the results remain up to date even after publication.

THE END OF ANONYMITY Revealing reviewers’ identities would instil a form of social control, curbing the tendency towards cronyism. Such openness could even encourage peers to participate through comments, as on specialised forums where experts who answer questions earn points – developing the “gamification” of peer review and rewarding constructive criticism.

Sites like F1000.com already encourage a thousand recognised experts in a scientific field to publicly recommend articles they have read by explaining why they found the research interesting.

THE EXAMPLE OF ARXIV.ORG Physicists, mathematicians and computer scientists use the Arxiv.org platform to distribute an open-access copy of a manuscript they are submitting to a journal. Founded in 1991 and funded by Cornell University, Arxiv.org now includes nearly one million articles.

The site allows quick sharing of results; each year about 100,000 new articles are published. Instead of waiting months for articles to appear in journals, the community stays up to date on research results in real time. Despite the absence of peer review before online publication, the site has published only a small number of articles with questionable content.

A POSSIBLE TRANSITION The system should not be changed abruptly. A process of online review could be developed in parallel to the current journal review, replacing it gradually. Collective review could begin with open-access articles that have already been published. This, among other things, would avoid copyright issues.

By Daniel Saraga

