There are many flaws in our current system of Peer Review. Going by my previous articles, you may think that it’s dying. But it’s not over yet. Smart people have been working on ways to fix Peer Review.
The problem is not that science is broken. Science is fine. It’s research papers that are broken. More specifically, the system that publishes science is now getting in the way of it. It doesn’t have to be that way. The Peer Review process was never set in stone. The current system wasn’t even that popular until fifty years ago. It is possible to change it without science blinking out of existence. It’s been done before, and it can be done again.
Scientists, by their nature, are problem solvers. Many smart people have been coming up with their own solutions to the problems with Peer Review.
Here is a tour through the three big ideas that might change the way Peer Review gets done in the future.
Strengthening Peer Reviewers
Surely the best place to start is with the peer reviewers themselves? What would happen if we trained them to critique scientific articles better? That might sound strange. We are talking about experts here. They should already know the scientific process back to front. Wouldn’t it be patronising to teach them experimental design and statistics all over again? Surely they would know how to critique obvious mistakes… right?
The British Medical Journal decided to put this to the test. They took a sample of reviewers and divided them into three groups. One group would get face-to-face training in how to assess papers. The second group would receive a self-teaching package. The last group got no training at all. All three groups then received a scientific paper containing nine deliberate errors. So how well did each group do?
The untrained peer reviewers could only find 2 of the 9 errors in the paper. That’s pretty damning. So let’s look at the groups that were trained to spot the specific errors that would turn up in this paper.
They managed to catch… 3 out of 9 errors. So training can improve the quality of peer reviewers. But not by much. Not by nearly enough.
The main assumption of this paper was that ignorance caused mistakes to pass Peer Review. That is a factor, but as we can see, it’s not a big one. To build a better reviewer, we need to know what makes them good. We still aren’t exactly clear on that.
Younger scientists who are just starting out tend to make better reviewers. Which doesn’t make sense if the skills needed for review are entirely intellectual. If that were the case, older and more experienced scientists would be better.
But research tends to show that the opposite occurs. It could be that young scientists working at the coalface of science can empathise with the people they are reviewing. It could even be as simple as them having more time to mull over scientific papers than their superiors.
If it is time that determines the quality of a review, then journals need to get better at bargaining for time from their reviewers. Some publishers offer their best peer reviewers discounts on books, subscriptions, and even on the fees to publish papers. But some have gone beyond that, offering incentives that aren’t a barely concealed up-sell.
eLife has started compensating its reviewers for their time with actual money. Not much money, but enough to let the reviewers know that they are not being taken for granted. It’s too early to tell whether paid reviewers are better, or whether eLife has just managed to snag the best reviewers by offering a better deal.
Double Blind Peer Review
Not too long ago, the science twittersphere was set on fire by yet another peer review scandal. “PLOS One” had rejected work submitted by Dr Ingleby and Dr Head. The two researchers were surprised when they discovered the reason for their rejection.
Reviewer comments are occasionally less than pleasant, especially on papers that have been rejected. But most of the ire tends to focus on the substance of the paper itself. This time the ire focused on the nature of the authors. The peer reviewer’s problem was that Dr Ingleby’s and Dr Head’s first names were Fiona and Megan. They were women.
“It would probably also be beneficial to find one or two male biologists to work with (or at least obtain internal peer review from, but better yet as active co-authors)” to prevent the manuscript from “drifting too far away from empirical evidence into ideologically biased assumptions”
Oh, but the reviewer didn’t stop at criticising the paper. What followed was a rant that went like this:
Perhaps it is not so surprising that on average male doctoral students co-author one more paper than female doctoral students, just as, on average, male doctoral students can probably run a mile a bit faster than female doctoral students
Wowed by the strength of this negative review, PLOS One rejected the paper. When Dr Ingleby and Dr Head tried to appeal, they were rejected once more. It was only after this final insult that they blew the lid on this entire festering affair on Twitter. It was only then that PLOS One did something.
They fired both the peer reviewer and the editor who rejected the article based on those comments. Dr Ingleby and Dr Head got to re-submit their paper.
This debacle shows how some peer reviewers will happily judge the authors of a paper rather than its contents.
This kind of prejudice works both ways. Certain labs can get past review simply on the strength of their reputation. No need to actually critique their work, because everyone assumes they’ll get published anyway.
This is why there have been calls for journals to bring in “Double Blind” peer review. The idea is simple. The reviewer should only be allowed to judge a paper by its contents. The gender, race and institution of the authors should no longer matter. Concealing those details takes them out of the equation. Reviewers only need to concentrate on the scientific quality. The authors never need to know the reviewer’s identity, and vice versa.
The journal “Behavioural Ecology” switched to Double-Blind review in 2001, with surprising results. People noticed that after 2001, the journal published more work by women. As you can imagine, this finding was immediately controversial, with two main counterarguments.
For a start, it’s only one journal. It’s hard to rule out other factors playing a role. In short, it’s too early to throw a ticker tape parade, because there isn’t enough evidence for it yet.
The other counterpoint is that the number of female authors didn’t increase by all that much. It doesn’t look like it’s “the” solution to the gender equality problem in science. But none of these critiques says that double blind peer review is bad, merely that we need to wait and see.
The biggest argument against Double Blind Peer Review is a practical one. Individual scientific fields tend to be incredibly small. Scientists have a good idea of what their peers are doing when they meet at conferences. Let’s say you’ve just heard a colleague explain the new antibiotic they discovered. Later you get sent a paper about the discovery of that antibiotic. You don’t need to be Sherlock Holmes to figure out who the author is. In tests, 30%–40% of double blinded reviewers could identify the author of a paper they’d been sent. Which means that the blinding might not work on many occasions.
Which makes it difficult to argue that double blind peer review will ever fully work as intended. The main benefit of double blinding articles is that there is no way it can possibly harm peer review. In cases where the blinding fails, you get peer review the way it’s done now.
Which is why Nature has decided to offer double blind peer review as an option for its journals. According to internal surveys, about a fifth of their submissions use it, and people seem pretty happy with it. It’s still an opt-in system, so famous labs can still profit from their reputations. And reviewers don’t get the choice of blinding themselves to prevent bias in what they write. It’s not a perfect system, but if Nature keeps leading the way, it could be something we see more of in the future.
We may never know the identity of the sexist reviewer who panned Ingleby and Head’s paper. If they had been blinded, maybe they would have been fairer. But what if the opposite had occurred? What if the reviewer had to write his review knowing that it would be public?
What if, instead of introducing an extra layer of anonymity, we took that anonymity away? What if we made reviewers accountable?
Open Peer Review
When an article is published with obvious errors or plagiarism, a question inevitably arises: how did this happen?
For most published research, we will never know. We will never know what the peer reviewers actually said, or did not say. There is no accountability.
At this moment, reviewers are only held to account by the journal editors. The problem is that these journal editors are often very overworked. For every published paper you see, there are so many more that do not see the light of day. Editors have to sift through an avalanche of articles before they select what to publish. The pressure is high, and many editors don’t have the time to double check the comments of peer reviewers.
This is where Open Peer Review comes in. By exposing the reviewer comments to the world, accountability expands beyond the editor. The public suddenly gets to see the various dissenting opinions surrounding an article before publication. It makes reviewers accountable to everyone. Which can be a double-edged sword. Whilst it means that we can punish bad review and reward good critique, there are good reasons why reviewers need anonymity. Even constructive critique of a work can earn you enemies. In an academic world that’s already cut-throat when the powers that be merely ignore you, making an enemy can be a career killer.
Which is why opening up peer review can make it difficult to attract reviewers. When Nature attempted their own experiment with open peer review, the uptake was so low that they eventually abandoned it.
Which is why journals like the BMJ are enacting a slightly different form of peer review. They know that the identities of the peer reviewers are largely irrelevant to readers. So they have opted to keep the reviewers anonymous, and made the reviews themselves public. It carries the benefits of open peer review, without its main problem.
Open peer review exposes the logic that causes articles to be published. It allows us to see the critique side by side with the actual article. That allows us to make more informed decisions about that article.
That is the most important point here. Open Peer Review allows the reader to take more responsibility for what they read.
This is by no means an exhaustive list of the kinds of things happening on the publisher side of peer review. That’s because the most interesting developments have only begun in recent years, and those have not been led by the journals. Readers are no longer taking a passive role in accepting science articles. They are finding new ways of holding journals to account, and carving out new avenues for discussion. The real surprise is that any of this came as a surprise at all. Because in this new era, the very act of reading is its own revolution. So join me next time, when we delve into the world of “Post-Publication” peer review.