By Francois Massol, Denis Bourguet, Benoît Facon, Dominique Gravel, Thomas Guillemaud, Ruth Hufbauer, and Cyrille Violle
Fig. 1: How the Peer Community system works (© T. Guillemaud and I. Gounand)
Something is rotten in academia. Telltale symptoms of the problem include plagiarism, scientific misconduct leading to retraction, the focus on numbers of publications rather than their quality, manipulation of citation rates, the intense pursuit of publication in high impact journals, and the sometimes shallow examination of accomplishments by hiring committees or when reviewing research grant applications. The problems with academia are multifaceted. The intense competition for positions and funding rewards numbers of publications and grant dollars brought in, rather than advances in understanding. Individual researchers cannot change this state of affairs without uniting to improve the system. While many issues need to be addressed (indeed, we do not even touch on sexism, racism, and abuses of power in the litany above), there is one revolution that researchers can start immediately: we can change the model of academic publication.
The current system of publishing in traditional journals needs to be reformed for at least five different reasons. First, publishing is incredibly costly to the readers, to the authors, or sometimes to both. For readers or institutions to gain access to publications, or for authors to pay for the public to have free access, subscriptions and page charges are paid either out-of-pocket or with taxpayer money. Paying to access the results of research that is funded by public grants is particularly indefensible. The irrationality of this system is brought into sharper focus as funds allocated to pay for publications are not available for research, teaching or training.
Second, publishing companies rely upon the work of researchers as authors, editors, and reviewers, but pay nothing for this work; the researchers are paid instead by their institutions or universities. While serving the publishing companies, however, those researchers are not doing research or teaching. In other words, the business of selling researchers their own results is sustained by work those researchers provide for free. Journals affiliated with scientific societies can use the funds to support events for their community (e.g. conferences and workshops), but this is the exception rather than the rule.
Third, the peer-review system, which should improve the quality of published work, fails to do so. Reviewers and editors alike are overloaded by the huge volume of submitted manuscripts. This leads to reviews that can be far too succinct to ensure scientific quality. As Kravitz and Baker (2011) eloquently explained, “the feedback provided by reviewers is not focused on scientific merit but on whether to publish in a particular journal, which is generally of little use to authors and an opaque and noisy basis for prioritizing the literature”. Furthermore, the system is not transparent enough. Most often, editorial correspondence remains anonymous, and when it is not, the goal is not always to highlight the strengths and weaknesses of a study, but rather to make reviewing work accountable on CVs (e.g. the recent Publons initiative).
Fourth, the publishing system is dominated by a few big for-profit companies. The strategic decisions of these academic mammoths can endanger certain disciplines or decrease the access of poorer universities to research results, as publishing costs steadily increase despite substantial profit margins. Additionally, publishing companies now influence every aspect of the production of academic knowledge, from evaluating research institutions to finding reviewers for grant applications, networking scientists, and organizing research.
Fifth, the shift towards the economic model of open access means that almost anything can get published, whether it is high quality or not. The open access movement is laudable in many respects: research findings should be a public good, accessible to anyone. But because the authors pay and the publishers benefit, this sets up a system that is intrinsically corruptible. Publishing companies have an economic interest in accepting as many articles as possible, not necessarily worthy research, and this appetite finds a match in the underlying “publish or perish” environment that researchers find themselves in. That this environment inadvertently selects for poor-quality and even fraudulent publications is being made more explicit in some regions, where bonuses are offered to researchers who publish in high-profile journals. While that type of incentive is not purposely geared to reduce the quality of research, quality will inevitably go down as the pressure to get something, anything, into certain journals increases.
Pushing the logic of open access venues for publishing research one step further, manuscripts are more and more commonly deposited by researchers in open archives such as arxiv.org, bioRxiv.org or preprints.org, making research results available more quickly than when submitted to a journal, as well as free of charge. This immediate availability of “preprints” also allows the use of social networks to comment on the results, thus promoting contact between science and the public. However, preprints are not formally evaluated by the scientific community, making it difficult to discern whether the studies are valid and relevant. This is a problem and we are not the only ones to highlight it.
Overall, keeping the old system means continuing to pay to read results of research paid for by taxpayers and reviewed and edited for free by other researchers, which is untenable. Transitioning to open access journals means having to pay to publish, which expands the opportunity for journals to profit from authors’ “publishing money” and increases selection for poor-quality and fraudulent publications. Thus, open access is also an untenable long-term solution. Posting papers on public archives implies an absence of gatekeepers (editors and referees) separating the wheat from the chaff, and so is not an adequate solution either. Yet given that the work of editors and referees is not paid for by journals, be they open access or behind a paywall, the problem of separating the wheat from the chaff is actually quite straightforward to resolve.
After much brainstorming, discussion and one full year of trial, we are happy to propose a completely free solution for authors and readers, called Peer Community In (PCI), https://peercommunityin.org. In a nutshell, this project aims to establish communities of researchers (Peer Communities) in different fields of science to evaluate articles. If the research is deemed to be sound and worthy, it is then validated and highlighted with a public recommendation to the broader field. These communities would serve as the “gatekeepers” needed to filter the volume of preprints submitted on public repositories.
Evolutionary biologists started with the creation of a Peer Community in Evolutionary Biology in 2017, which already counts more than 350 registered recommenders and 46 recommendations of papers. In January 2018, a Peer Community in Paleontology and a Peer Community in Ecology were launched. In the coming months, a Peer Community in Computational Statistics will follow.
The process for a new manuscript is as follows (there is also a short video available here).
- The authors of a manuscript deposit it into an open archive;
- The authors then request that the manuscript undergo an evaluation coordinated by a relevant member of one of the existing Peer Communities. The only condition for requesting review by a Peer Community is that the preprint is neither already published nor under review at a journal;
- A researcher within the relevant Peer Community (researchers in these communities are called “recommenders” because they have the privilege of recommending high-quality manuscripts to the research community) handles the preprint and obtains at least two reviews of the manuscript, and the standard cycle of revision and re-review occurs;
- If the recommender deems the work to be valid, as well as interesting to the field, they will publish a recommendation of the manuscript. Recommendations are short essays that highlight the merits of the manuscript (e.g. https://tinyurl.com/example-recommentation). Along with the recommendation (signed), the reviews (signed or anonymous) and responses to reviews are also published. The recommendations themselves have a DOI and can be cited;
- Importantly, evaluation and recommendation of a preprint by a Peer Community will not prevent subsequent submission of the article for publication in a journal.
The whole process is free of charge: authors pay nothing for the recommendation of the preprint they submit, and readers can freely access the preprint, the recommendation and the associated reviews.
This new system has some similarity with conventional academic journals. However, Peer Communities do not publish scientific articles; they publish recommendations, reviews and commentaries on preprints posted in open archives. The manuscript itself is already deposited in an open archive and thus already disclosed. One of the major differences between a Peer Community and a conventional journal is that a Peer Community can count several hundred recommenders (as Peer Community in Ecology and Peer Community in Evolutionary Biology already do), rather than the ten or so associate editors of a classic journal. Recommenders thus handle fewer preprints (and are under no obligation to do so) than associate editors handle manuscripts, which are often assigned to editors whether they wish it or not. Consequently, when PCI recommenders handle preprints, they are often closer to the topic of the preprint and can more easily motivate colleagues to review it.
Because recommendation of a preprint by a Peer Community does not prevent it from being submitted for publication in an academic journal, the Peer Community In project can coexist with the current journal system. Indeed, most journals today accept submission of articles that have been deposited as preprints on open archives (to check your favorite journal, use sherpa/romeo’s handy service). Furthermore, reviews and recommendations by Peer Community In can be used by journals, avoiding duplicate review efforts. Actually, this has already happened for preprints recommended by Peer Community in Evolutionary Biology.
We would like to tackle three concerns that have been raised in previous discussions of Peer Community In. The first is that this new system does not use impact factors or other notoriety metrics, and that this could deter colleagues from using the services of Peer Community In. Notwithstanding the fact that preprints are not going to be given impact factors anytime soon (preprint repositories are not publishing entities per se and are not asking for them), one should first ask: why should we need an impact factor at all? A vast majority of scientists agree that impact factors and other journal notoriety metrics are inane and dangerous, as evidenced by the many proposals for alternative metrics. At the same time, there is an implicit agreement that these metrics are needed because evaluation committees use them. However, evaluation committees are not an abstraction; they are made up of scientists who mostly view impact factors as outdated tools. If speed and efficiency of these committees are of the essence, what would work better to assess the value of a small selection of papers than the matching list of recommendations, signed by known recommenders? Besides, this argument overlooks the fact that good papers will still get cited, even if posted on preprint archives, and those citations will likely appear in open access databases such as Google Scholar.
A second concern is that Peer Community In might not scale up easily because it lacks the services of a managing editor (or a board thereof). The concern goes along lines familiar to readers of the Scholarly Kitchen blog: managing editors are essential because they screen articles for potential misconduct (plagiarism included), check manuscript formatting, verify that all files needed for review are present, and so on. However, the system used by Peer Community In takes care of these classic needs of paper management. Preprint repositories now routinely run plagiarism checks before accepting a deposit (bioRxiv, for instance, has an automatic plagiarism check), and it is the duty of referees to look for more subtle cases of scientific misconduct. The presentation of preprints is entirely up to the authors, which removes formatting requirements altogether. Most importantly, the system scales up easily because Peer Community In is under no obligation to find a recommender for each and every paper that is submitted. For a paper to be recommended, the essential step is for the authors to submit a manuscript that covers a reasonably interesting topic, includes all the material necessary for review, and is well presented. A poorly presented manuscript does indeed risk not finding a recommender willing to handle it. So rather than paying managing editors to sift through the sea of bad submissions, these will simply sort themselves out through competition for recommenders’ attention. And because the group of recommenders is large and diverse, a wide variety of topics is covered, so solid, well-prepared science should get reviewed, avoiding dramatic swings as scientific fads arise.
Third, authors and readers have expressed concern that Peer Community In provides an opportunity for cronyism and is too much of a “club” rather than an open network of researchers. Several characteristics limit this risk. It is straightforward for experts to join the group of recommenders simply by asking: the managing board of a PCI evaluates expertise, and generally accepts those holding a terminal degree and conducting research in the relevant field. Thus, participation is quite open. In agreement with the Peer Community code of ethical conduct, recommenders and reviewers must declare that they have no conflict of interest with the authors or the content of the preprint they handle or evaluate. Reviewers are solicited by the recommenders and cannot be proposed by the authors, which limits the risks of ring review and faked reviewers. To further prevent cronyism, all recommenders are strictly limited to five recommendations per year. Finally, the guidelines for forming a Peer Community state that those initiating the effort must make deliberate and honest efforts to recruit recommenders in the relevant field from all around the world, and to attain a good gender balance. This should prevent any Peer Community from being overly influenced by a few well-known personalities.
In summary, the idea behind the Peer Community In project is to establish a free and public review system to evaluate preprints and to validate and highlight preprints of good quality. While the project is in its infancy, it has the support of several French research organizations (including the National Institute of Agricultural Research [INRA], the French National Center for Scientific Research [CNRS]), the French Society for Ecology and Evolution, the Spanish Society for Evolutionary Biology and the American Society for the Study of Evolution. The ultimate goal is that the Peer Community In recommendations will be recognized by the international scientific community, including research institutions and funding agencies, as a high-quality label, thus rendering submissions to conventional journals optional. We hope this endeavor will help solve some of the problems facing academia today by removing the financial burden of publication, by redirecting scientists’ focus from impact factors to meaningful recommendations (and thus probably changing the way publication lists are appraised) and by decreasing the overall time spent reviewing papers as submissions will not jump from one journal to the next in search of a publication venue.
While high-cost commercial publication is not the only challenge academia faces, it is a crucial one. Peer Community In is a start towards replacing the usurpers with rigorous and open scholarly exchange.
Authors’ Biography: The authors are researchers in ecology and evolution who got together to launch some of the first Peer Communities (@PCI_Ecology, @PCIEvolBiol) after spending years being appalled at the practices of the publishing industry.