The sins of peer review
More than one million scientific articles are published every year. The process that was established to control their quality is increasingly being called into question.
Jan Hendrik Schön, Yoshitaka Fujii, Woo Suk Hwang, Diederik Stapel: these scientists are famous not for their discoveries, but for having cheated the system. They manipulated – and in some cases even invented – experimental results to get their work published in such prestigious scientific journals as Science and Nature.
These fraudulent cases suggest problems with both the integrity of researchers and the quality control of journals. One of the system’s foundations is peer review, the process under which two or three experts in the field examine an article before publication and recommend whether it should be published.
Often criticised as slow and cumbersome, peer review seems archaic in the era of Web 2.0. Technologist presents both the problems and some possible solutions.
Peer review does not always fulfil its primary purpose, which is to prevent the publication of erroneous results. This problem was brilliantly exposed in two studies, one in Science in 2013 and the other in the British Medical Journal in 2008, in which articles with intentionally erroneous results were sent to hundreds of journals, the majority of which accepted them for publication. The experts who were consulted either failed to detect the errors or chose to overlook them.
An efficient system should also select the most pertinent articles and promote high-quality research. Yet the system tends to stifle original thought; in fact, some research that ultimately proved Nobel-worthy was originally rejected. It is often difficult for reviewers to identify truly groundbreaking research because it contradicts established theories.
Consciously or not, experts tend to favour articles from renowned institutions. A 1982 study published in Behavioural and Brain Sciences demonstrated that previously accepted articles originating from prestigious universities were often rejected when resubmitted under the authorship of scientists at second-tier institutions.
The experts themselves, who usually remain anonymous, also tend to be favourable to articles from colleagues with whom they have worked on a regular basis – even to the point of fraud. A 2014 survey by Nature revealed that some authors went so far as to create false identities to review their own articles or those of their friends.
The rhythm of scientific progress and the advancement of careers are accelerating, but publishing an article still takes as long as ever – from several months to more than a year if authors are asked to make corrections or challenge a rejection.
The famous “publish-or-perish” culture encourages scientists to work on projects whose results are likely to be published quickly by reputable journals. These are often trendy subjects with practical applications and, above all, only positive results. But science does not always work that way: progress often comes in small increments. As such, it is critical to share negative and positive results alike to ensure that scientists do not waste time on hypotheses that have already been dismissed. Reproducing existing results is an essential step in the scientific method, even if it does not lead to publication.
Peer review is founded on the unpaid work of thousands of university experts, as well as the paid work of journal staffs. Subscriptions are expensive; even open-access journals just transfer their costs to researchers, who have to pay to publish their articles. This is ultimately profitable for the publishers: Reed Elsevier, for example, has an operating margin of more than 30%.
Publishing in a scientific journal is not the only way to disseminate research results. In the era of Web 2.0 and social networking, there are plenty of platforms on which scientists can write their articles, publish them online and respond to comments from not one or two peers but the entire scientific community. The current system is archaic in comparison because results cannot be updated or corrected quickly, nor can post-publication comments be taken into account.
A necessary evil
Despite the system’s drawbacks, many scientists still consider peer review a necessary evil. A 2011 report from the British Parliament described it as inefficient but irreplaceable. This is because the articles a scientist publishes in prestigious journals are critical to career advancement and peer review plays a critical role in their evaluation.
In the peer review process, two or three experts are consulted. But a published scientific article is read by tens, hundreds or potentially thousands of experts who may want to weigh in on its strengths and weaknesses. Participative evaluation in the form of online comments and feedback would make it possible to assemble and consider all these views.
Publish, then review
An article could be published before it is reviewed, enabling the entire community to quickly read and assess its quality. In less than two months in 2011, for example, the scientific community published 60 articles on Arxiv.org (see below) responding to the claim that neutrinos produced at CERN travelled faster than light – a much more rapid and complete response than traditional peer review.
Life after publication
On some platforms, every online article can be commented on, evaluated and even graded by experts. Authors then have the opportunity to respond to criticism, explain obscure points and even modify their articles. In this context, the results remain up to date even after publication.
The end of anonymity
Revealing reviewers’ identities would instil a sort of social control, curbing the tendency towards cronyism. Such openness could even encourage peers to participate through comments, such as on specialised forums where experts who answer questions earn points – developing the “gamification” of peer review and the recognition for constructive criticism.
Sites like F1000.com already encourage a thousand recognised experts in a scientific field to publicly recommend articles they have read by explaining why they found the research interesting.
The example of Arxiv.org
Physicists, mathematicians and computer scientists use the Arxiv.org platform to distribute an open-access copy of a manuscript they are submitting to a journal. Founded in 1991 and funded by Cornell University, Arxiv.org now includes nearly one million articles. The site allows quick sharing of results; each year about 100,000 new articles are published. Instead of waiting months for articles to appear in journals, the community stays up to date on research results in real time. Despite the absence of peer review before online publication, the site has published only a small number of articles with questionable content.
A possible transition
The system should not be changed abruptly. A process of online review could be developed in parallel to the current journal review, replacing it gradually. Collective review could begin with open-access articles that have already been published. This, among other things, would avoid copyright issues.
“Do authors trust the reviewers?”
A neuroscientist and her peers imagine a different system
Together with her colleague Nikolaus Kriegeskorte, neuroscientist Diana Deca has gathered, analysed and summarised 18 visions imagined by scientists around the world to replace peer review. Open evaluation could allow many experts to voice their views, but any radical change should come gradually, says the researcher from Technische Universität München.
Technologist: Many scientists agree that peer review has its flaws. Why don’t we replace it?
Diana Deca: A lot of researchers still like it – and all of them need it. You cannot tell a young researcher today not to publish in the best journal they can. It’s simply too important for their career, as hiring committees and funding agencies count the number of articles you’ve published in high-profile journals. We’re talking about several million researchers who would have to change the way they think about their work.
T. How can we make the transition?
D. D. It’s still useful to have pre-publication peer review, but open evaluation after publication would allow many experts to publish their comments. It should be introduced gradually and tested to see what works best.
T. What’s the best system to replace peer review?
D. D. Our conclusion is that reviewers would have to log in. One idea would be to allow each of them to clearly state their preferences, for example the potential impact of the research, its novelty or its reliability. This way, individual users could define their own personal weightings. It would also really help if researchers would upload the raw data from their studies, to allow other scientists to do their own analysis.
T. What challenges do you see in changing the system?
D. D. Designing an open review is complicated. Many options are possible: the comments can be anonymous or not, quantitative or qualitative, and could be voted on by other reviewers. It could create a huge centralised – and expensive – website. We can’t avoid the main question raised by peer-review: do authors trust the reviewers?
T. Are you working to build such new tools?
D. D. Yes, we and others are. But creating the new website requires a great deal of time, energy and funding. We’re full-time researchers, so we would welcome more people joining the effort.