The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret the paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.

Peer review is a cornerstone of scientific publishing that, ideally, provides high-quality assessments of large numbers of submitted manuscripts. Rising publication rates have increasingly strained this system. While many papers benefit from peer review, problematic papers are still published. These may include papers with fundamental flaws in design, analysis or inference, or fraudulent papers. Correcting errors after publication is extremely burdensome; hence, focusing on prevention may be more efficient. Inadequate reporting is also common in published studies, making it difficult for reviewers to evaluate manuscripts, and published papers are routinely missing information needed to assess the risk of bias. Evidence that peer review substantially improves reporting, or catches errors or questionable research practices, is limited. The lack of a comprehensive reviewer training system may also contribute to problems with peer review.

Automated screening in academic publishing is not new, and it may offer a unique opportunity to improve scientific papers. Publishers have been using automated tools to detect plagiarism for more than a decade. Journals could potentially use screening tools to improve reporting before sending papers to reviewers, or enhance peer review by drawing reviewers' attention to opportunities for improvement. The growing adoption of preprints offers another opportunity to use automated tools to help authors improve their papers: while preprints allow scientists to receive feedback before publishing their work in a journal, comments on preprints are uncommon, and automated tools could help to fill this gap. Some publishers are experimenting with automated tools that check for factors such as statistical reporting errors, ethics statements, blinding, randomization, and sample size calculations. Our experience suggests that automated screening is most powerful when many tools are applied simultaneously to assess various aspects of reporting. The ScreenIT pipeline, which includes a growing set of automated tools, has been used to post public reports on more than 23,000 bioRxiv and medRxiv COVID-19 preprints. While this approach was adopted to support authors and readers in assessing the flood of COVID-19 preprints, it demonstrates the feasibility and potential of widespread automated screening. Table 1 provides a brief overview of some tools that have been used to screen preprints or papers. Given these developments, it is important to consider the strengths and limitations of automated screening, and how one might responsibly integrate these tools into the editorial process.
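To make the idea of checking for statistical reporting errors concrete, here is a minimal sketch in the spirit of statcheck, a real tool that recomputes p-values from reported test statistics. The regular expression, tolerance, and function name below are illustrative assumptions rather than any published tool's implementation, and the example assumes scipy is installed.

```python
import re
from scipy import stats

# Hypothetical pattern for APA-style t-test reports, e.g. "t(28) = 2.05, p = .049".
PATTERN = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+(?:\.\d+)?),\s*p\s*=\s*(?P<p>0?\.\d+)",
    re.IGNORECASE,
)

def check_t_tests(text, tolerance=0.01):
    """Flag t-test reports whose stated p-value disagrees with the recomputed one."""
    findings = []
    for m in PATTERN.finditer(text):
        df, t, p_reported = int(m["df"]), float(m["t"]), float(m["p"])
        p_computed = 2 * stats.t.sf(abs(t), df)  # two-tailed p from t and df
        if abs(p_computed - p_reported) > tolerance:
            findings.append((m.group(0), round(p_computed, 4)))
    return findings

# t(28) = 2.05 implies p just under .05, so the stated p = .50 gets flagged.
print(check_t_tests("The groups differed, t(28) = 2.05, p = .50."))
```

The tolerance is a deliberate design choice: reported p-values are usually rounded, so an exact-match comparison would flood editors with false positives, which is one reason users need training to interpret such reports.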
How can automated screening help peer review?

Peer review includes three areas of assessment: journal fit, research and reporting quality, and compliance. The "fit" assessment considers whether the manuscript aligns with the journal's aims and scope and is typically performed by journal editors or administrators. Fit may also include basic assessments, such as confirming that the submission is a legitimate scientific paper that falls into one of the journal's accepted article types.
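As an illustration of how simple a screen for straightforward reporting items can be, the hypothetical sketch below flags manuscripts that never mention items such as ethics approval, blinding, randomization, or a sample size calculation. The item names and keyword patterns are toy assumptions for demonstration only; production tools rely on far more robust methods, and a keyword hit does not guarantee adequate reporting.

```python
import re

# Illustrative patterns only; real screening tools need curated vocabularies and NLP.
CHECKS = {
    "ethics statement": r"\bethic(?:s|al)\b|\bIRB\b|institutional review board",
    "blinding": r"\bblind(?:ed|ing)?\b",
    "randomization": r"\brandomi[sz](?:ed|ation)\b",
    "sample size": r"sample size|power (?:analysis|calculation)",
}

def screen_manuscript(text):
    """Report which reporting items were detected in the manuscript text."""
    return {item: bool(re.search(pattern, text, re.IGNORECASE))
            for item, pattern in CHECKS.items()}

report = screen_manuscript(
    "Mice were randomized to treatment groups and outcome assessors were "
    "blinded. The protocol was approved by the institutional review board."
)
print([item for item, found in report.items() if not found])  # -> ['sample size']
```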