Metascience on Peer Review: Testing the Effects of a Study’s Originality and Statistical Significance in a Field Experiment

2020 ◽  
Vol 3 (1) ◽  
pp. 53-65
Author(s):  
Malte Elson ◽  
Markus Huff ◽  
Sonja Utz

Peer review has become the gold standard in scientific publishing as a selection method and a refinement scheme for research reports. However, despite its pervasiveness and conferred importance, relatively little empirical research has been conducted to document its effectiveness. Further, there is evidence that factors other than a submission’s merits can substantially influence peer reviewers’ evaluations. We report the results of a metascientific field experiment on the effect of the originality of a study and the statistical significance of its primary outcome on reviewers’ evaluations. The general aim of this experiment, which was carried out in the peer-review process for a conference, was to demonstrate the feasibility and value of metascientific experiments on the peer-review process and thereby encourage research that will lead to understanding its mechanisms and determinants, effectively contextualizing it in psychological theories of various biases, and developing practical procedures to increase its utility.


2018 ◽  
Author(s):  
Cody Fullerton

For years, the gold standard in academic publishing has been the peer-review process, and for the most part, peer review remains a safeguard against the publication of intentionally biased, misleading, and inaccurate information. Its purpose is to hold researchers accountable to the publishing standards of their field, including proper methodology, accurate literature reviews, and so on. This presentation will establish the core tenets of peer review, discuss whether certain types of publications should qualify as such, offer possible solutions, and discuss how this issue affects a librarian's reference interactions.


Author(s):  
Ann Blair Kennedy, LMT, BCTMB, DrPH

Peer review is a mainstay of scientific publishing and, while peer reviewers and scientists report satisfaction with the process, peer review has not been without criticism. Within this editorial, the peer review process at the IJTMB is defined and explained. Further, the editors identify seven steps to improve the efficiency of the peer review and publication process: 1) ask authors to suggest possible reviewers; 2) ask reviewers to update their profiles; 3) ask reviewers to “refer a friend”; 4) thank reviewers regularly; 5) ask published authors to review for the Journal; 6) reduce the time allowed to accept a peer review invitation; and 7) reduce the time requested to complete a peer review. We believe these small requests and changes can have a big effect on the quality of reviews and the speed with which manuscripts are published. This editorial also presents instructions for completing peer review profiles. Finally, we more formally recognize and thank our peer reviewers from 2018–2020.


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 683 ◽  
Author(s):  
Marco Giordan ◽  
Attila Csikasz-Nagy ◽  
Andrew M. Collings ◽  
Federico Vaggi

Background: Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications. Methods: Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,750 were sent for peer review, using R and Python to perform the statistical analysis. Results: The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and five days faster for papers that were rejected after peer review (n=1,099). There was no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates for published articles where the Reviewing Editor served as one of the peer reviewers. Conclusions: An important aspect of eLife’s peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.
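As an illustration of the kind of group comparison described above, here is a minimal Python sketch. The file name, the column names, and the use of Welch's t-test are all assumptions for illustration; the abstract states only that R and Python were used, not the dataset schema or the specific tests.

```python
# Minimal sketch of a decision-time comparison like the one described above.
# The CSV file and its columns ('decision', 'editor_reviewed',
# 'days_to_final_decision') are hypothetical, not the actual eLife schema.
import pandas as pd
from scipy import stats

df = pd.read_csv("elife_submissions.csv")  # hypothetical file name

for decision in ("accept", "reject"):
    subset = df[df["decision"] == decision]
    with_editor = subset.loc[subset["editor_reviewed"], "days_to_final_decision"]
    without_editor = subset.loc[~subset["editor_reviewed"], "days_to_final_decision"]
    # Welch's t-test: compares group means without assuming equal variances
    t_stat, p_value = stats.ttest_ind(with_editor, without_editor, equal_var=False)
    print(f"{decision}: mean difference = "
          f"{without_editor.mean() - with_editor.mean():.1f} days (p = {p_value:.3g})")
```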


2019 ◽  
Author(s):  
Damian Pattinson

In recent years, funders have increased their support for early sharing of biomedical research through the use of preprints. For most, such as the cOAlition S group of funders (ASAPbio 2019) and the Gates Foundation, this takes the form of active encouragement, while for others it is mandated. But despite these motivations, few authors routinely deposit their work as a preprint before submitting to a journal. Some journals have started offering authors the option of posting their work early, at the point at which it is submitted for review. These include PLOS, who offer a link to bioRxiv; the Cell journals, who offer SSRN posting through ‘Sneak Peek’; and Nature Communications, who offer posting to any preprint server with a link from the journal page labelled ‘Under Consideration’. Uptake has ranged from 3% for the Nature pilot to 18% for PLOS (The Official PLOS Blog 2018). To encourage more researchers to post their work early, we have been offering authors who submit to BMC Series titles the opportunity to post their work as a preprint on Research Square, a new platform that lets authors share and improve their research. To encourage participation, authors who opt in are offered greater control and transparency over the peer review process. First, they are given a detailed peer review timeline that updates in real time whenever an event occurs on their manuscript (reviewer invited, reviewer accepts, etc.). Second, they are encouraged to share their preprint with colleagues, who are able to post comments on the paper; these comments are sent to the editor when they are making their decision. Third, authors can suggest potential peer reviewers, recommendations which are passed on to the editor to vet and invite. Together, these incentives have had a positive impact on authors choosing to post a preprint. Among the journals that offer this service, the average opt-in rate is 40%, which translates to over 3,000 manuscripts (as of July 2019) posted to Research Square since the launch of the service in October 2018. In this talk I will demonstrate the functionality of Research Square and provide demographic and discipline data on which areas are most and least likely to post.


BMC Medicine ◽  
2019 ◽  
Vol 17 (1) ◽  
Author(s):  
Anthony Chauvin ◽  
Philippe Ravaud ◽  
David Moher ◽  
David Schriger ◽  
Sally Hopewell ◽  
...  

Background: The peer review process has been questioned as it may fail to allow the publication of high-quality articles. This study aimed to evaluate the accuracy of early career researchers (ECRs) in identifying inadequate reporting in RCT reports when using an online CONSORT-based peer-review tool (COBPeer), compared with the usual peer-review process. Methods: We performed a cross-sectional diagnostic study of 119 manuscripts, from BMC series medical journals, BMJ, BMJ Open, and Annals of Emergency Medicine, reporting the results of two-arm parallel-group RCTs. One hundred and nineteen ECRs who had never reviewed an RCT manuscript were recruited from December 2017 to January 2018. Each ECR assessed one manuscript. To assess accuracy in identifying inadequate reporting, we used two tests: (1) ECRs assessing a manuscript using the COBPeer tool (after completing an online training module) and (2) the usual peer-review process. The reference standard was the assessment of the manuscript by two systematic reviewers. Inadequate reporting was defined as incomplete reporting or a switch in primary outcome and considered nine domains: the eight most important CONSORT domains and a switch in primary outcome(s). The primary outcome was the mean number of domains accurately classified (scale from 0 to 9). Results: The mean (SD) number of domains (0 to 9) accurately classified per manuscript was 6.39 (1.49) for ECRs using COBPeer versus 5.03 (1.84) for the journal’s usual peer-review process, with a mean difference [95% CI] of 1.36 [0.88–1.84] (p < 0.001). Concerning secondary outcomes, the sensitivity of ECRs using COBPeer versus the usual peer-review process in detecting incompletely reported CONSORT items was 86% [95% CI 82–89] versus 20% [16–24], and in identifying a switch in primary outcome, 61% [44–77] versus 11% [3–26]. The specificity of ECRs using COBPeer versus the usual process in detecting incompletely reported CONSORT domains was 61% [57–65] versus 77% [74–81], and in identifying a switch in primary outcome, 77% [67–86] versus 98% [92–100]. Conclusions: Trained ECRs using the COBPeer tool were more likely to detect inadequate reporting in RCTs than the usual peer-review process used by journals. Implementing a two-step peer-review process could help improve the quality of reporting. Trial registration: ClinicalTrials.gov NCT03119376 (registered April 18, 2017).
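For context on the confidence intervals quoted above, here is a small Python sketch of how a sensitivity estimate with a 95% CI of this kind can be computed from raw counts. The counts below are illustrative values chosen only to roughly match the reported 86% and 20% sensitivities, not the study's data, and the choice of a Wilson score interval is an assumption.

```python
# Hedged sketch: proportion estimates with 95% Wilson score intervals,
# of the kind reported in the abstract above. Counts are illustrative,
# not the study's raw data.
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(successes: int, total: int, label: str) -> None:
    """Print a proportion with its 95% Wilson score interval."""
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="wilson")
    print(f"{label}: {successes / total:.0%} [95% CI {lo:.0%}-{hi:.0%}]")

# Illustrative counts chosen to roughly match the reported sensitivities
rate_with_ci(516, 600, "COBPeer sensitivity (incomplete CONSORT items)")
rate_with_ci(120, 600, "Usual review sensitivity (incomplete CONSORT items)")
```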


2017 ◽  
Vol 33 (1) ◽  
pp. 129-144 ◽  
Author(s):  
Jay C. Thibodeau ◽  
L. Tyler Williams ◽  
Annie L. Witte

In the new research frontier of data availability, this study develops guidelines to aid accounting academicians as they seek to evidence data integrity proactively in the peer-review process. To that end, we explore data integrity issues associated with two emerging data streams that are gaining prominence in the accounting literature: online labor markets and social media sources. We provide rich detail surrounding academic thought about these data platforms through interview data collected from a sample of former senior journal editors and survey data collected from a sample of peer reviewers. We then propound a set of best-practice considerations designed to mitigate the perceived risks identified by our assessment.


2018 ◽  
Vol 115 (12) ◽  
pp. 2952-2957 ◽  
Author(s):  
Elizabeth L. Pier ◽  
Markus Brauer ◽  
Amarette Filut ◽  
Anna Kaatz ◽  
Joshua Raclaw ◽  
...  

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
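The core quantity behind a finding like this is an inter-rater reliability statistic. As a toy illustration only, the Python sketch below computes a one-way intraclass correlation, ICC(1), for a fully crossed ratings matrix; the study itself used a different design (different reviewers per application) and analysis, so this is not a reproduction of its method.

```python
# Toy illustration of inter-rater agreement via ICC(1), computed from
# one-way ANOVA mean squares. Not the study's actual design or analysis.
import numpy as np

def icc1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1) for a (targets x raters) matrix."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares (one-way ANOVA)
    ms_between = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: 25 "applications" scored 1-9 by 3 "reviewers" each
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=(25, 3)).astype(float)
print(f"ICC(1) = {icc1(ratings):.2f}")  # near 0 for random ratings
```

With random ratings, ICC(1) hovers near zero, which is the pattern consistent with the abstract's finding of no agreement among reviewers evaluating the same application.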

