The scientific peer review system needs improvement. The goals of the system – to ensure that scientific work is technically accurate and properly contextualized – are important in building public trust in the work we do. They also help to improve the quality and presentation of our work.
However, peer review is slow and inefficient. It takes time for journal editors to find scientists willing to review manuscripts, time for those scientists to carefully carry out their reviews, and more time for editors to synthesize this feedback and come to a decision on acceptance, revision, or rejection. Because the whole system runs on volunteered time and goodwill, daily obligations can take precedence, and delays are inevitable. Some journals like Ecology Letters promise turnarounds in under eight weeks, and others like Science can be even faster for transformative or urgent work (e.g. on the origin of the recent Ebola outbreak). But in most cases, the process drags on for months, slowing scientific progress. The problem gets worse if a manuscript is rejected: the authors must take their feedback and begin the process all over again at another journal, with a new editor and a new set of reviewers. I myself have a manuscript that has bounced between three journals over two years and has still not been accepted for publication. In other cases I have waited half a year just to receive a preliminary decision.
The process would clearly be more efficient if a new target journal could draw on the feedback and comments of the previous reviewers and editors. This would reduce unnecessary duplication of effort for all parties and let everyone spend more time (and public money) doing science rather than reviewing it. The process would also be more efficient if authors could have their work considered simultaneously by multiple journals in a marketplace, with the peer review process decoupled from the editorial and publication process. Historically, neither of these things has happened, and we muddle along with our sequential volunteer-based system.
Recently a company called Axios Review has tried to change this model. They offer a service to scientists in which they independently obtain peer reviews of manuscripts, then negotiate with journals to find one that wants the manuscript for publication.
Axios looks like a great idea, but I think their implementation creates more problems than it solves. The reason is that they charge scientists a fee to use the service, currently $250. Reviewers and journal editors do not receive any of this money; the company (now a non-profit) retains it to cover their management efforts. It looks like money well spent – according to the Axios website, papers sent through the service have a very high acceptance rate at target journals. However, this process pushes us towards a two-tier model, building a divide between researchers who have the resources to pay for this extra level of treatment and those who do not. It puts those with money at the front of the line and those without money at the back – even if their work is of equal quality. A pay-to-publish system builds inequality rather than reducing it. If we are dedicated to improving access to science and scientific careers in our own country, and especially in developing countries, I do not think we can support this model. $250 may not seem like a lot of money to the head of a well-funded American research group, but it is a large sum to an American graduate student with no other project funding, or indeed to many researchers outside the developed world. Do we leave them to slow peer review, low manuscript acceptance rates, and a self-reinforcing system that disadvantages their work? I do not think we can.
I had a constructive conversation with Tim Vines, Axios' managing editor, about their system. He made several good arguments. First, he pointed out that their solution clearly does streamline an inefficient process. I agree. Second, he argued that managing peer review does incur costs, which traditionally have been spread across all scientists through journal subscription fees; in this model, they are passed on to the author instead. This is true, but I think it is fundamentally equivalent to a regressive taxation system. It is similar to what has happened with open-access publishing, where in many cases the author must still bear the cost of publication. Third, he suggested that if the service gains enough traction with authors, it may eventually be possible to flip the revenue model back onto publishers. So far, though, publisher-pays models (e.g. Peerage of Science) have not gained much traction. This point is fair too, but the road may be slow.
What would be better? At minimum, I think Axios needs to offer a sliding price scale or fee waivers to reflect the range of financial resources available to different scientists – or an option for well-funded groups to pay these fees forward for others. More broadly, I think it would be reasonable for journals to request a copy of the peer reviews authors received during an earlier stage of the process – and to consider making a decision on manuscripts without further peer review. This is difficult because editors often want to know reviewers’ identities, and may not be able to trust that authors have not edited the reviews. A between-journal transfer system (for example, built into the EditorialManager system) could solve this if the journal community wanted it. A community-supported system in which journals compete to accept manuscripts would also be useful (e.g. see Peerage of Science, Expresso). Neither of these solutions would pass financial costs directly to authors, as the Axios system currently does. If Axios could modify their revenue model, I would be much more supportive of their work. Indeed, I admire their innovative efforts to make change – they have received excellent feedback from several ecologists (1) (2) – but their change is not the change we should ultimately be looking for. Perhaps it will take this sort of innovation to catalyze future change, as is slowly happening with open-access publication. However, I am not sure I want to be part of the first wave, and I think it is important to remember our values, however they may align with market forces.
I normally review fifteen to twenty manuscripts per year, to match the number of reviews my own work requires from other scientists. This is only fair for all of us; no more, no less. But when I was asked a few weeks ago to review for Axios, I declined. And I will continue to decline until we find a more equitable way to improve the current system.