The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation

1991 ◽  
Vol 14 (1) ◽  
pp. 119-135 ◽  
Author(s):  
Domenic V. Cicchetti

Abstract
The reliability of peer review of scientific documents and the evaluative criteria scientists use to judge the work of their peers are critically reexamined, with special attention to the consistently low levels of reliability that have been reported. Referees of grant proposals agree much more about what is unworthy of support than about what has scientific value. In the case of manuscript submissions, this seems to depend on whether a discipline (or subfield) is general and diffuse (e.g., cross-disciplinary physics, general fields of medicine, cultural anthropology, social psychology) or specific and focused (e.g., nuclear physics, medical specialty areas, physical anthropology, and behavioral neuroscience). In the former there is also much more agreement on rejection than on acceptance, but in the latter both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggest that reviewers and editors agree more on acceptance than on rejection. Several suggestions are made for improving the reliability and quality of peer review. Further research is needed, especially in the physical sciences.
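Reliability in studies of this kind is typically quantified with chance-corrected agreement statistics such as Cohen's kappa or the intraclass correlation. As a minimal sketch (the referee decisions below are invented for illustration, not data from the study), Cohen's kappa for two referees making binary accept/reject recommendations can be computed as:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on binary (0/1) decisions."""
    n = len(rater_a)
    # Proportion of cases on which the raters actually agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal accept rate.
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical accept (1) / reject (0) decisions on ten proposals.
referee_1 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
referee_2 = [0, 0, 0, 1, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(referee_1, referee_2), 3))  # → 0.524
```

Note how the correction for chance matters here: the two referees agree on 8 of 10 proposals, but because both reject most submissions, much of that agreement is expected by chance, so kappa is well below the raw 0.8 agreement rate.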

2021 ◽  
Vol 27 (2) ◽  
Author(s):  
Stephen A. Gallo ◽  
Karen B. Schmaling ◽  
Lisa A. Thompson ◽  
Scott R. Glisson

Abstract
The primary goal of the peer review of research grant proposals is to evaluate their quality for the funding agency. An important secondary goal is to provide constructive feedback to applicants for their resubmissions. However, little is known about whether review feedback achieves this goal. In this paper, we present a mixed-methods analysis of responses from grant applicants regarding their perceptions of the effectiveness and appropriateness of the peer review feedback they received on their grant submissions. Overall, 56–60% of applicants judged the feedback to be appropriate (fair, well-written, and well-informed), although their judgments were more favorable if their recent application was funded. Importantly, independent of funding success, women rated the feedback as better written than men did, and a greater proportion of white applicants than non-white applicants found the feedback to be fair. Perceptions of a variety of biases were also specifically reported in respondents’ feedback. Fewer than 40% of applicants found the feedback very useful in informing their research and improving their grantsmanship and future submissions. Further, negative perceptions of the appropriateness of review feedback were correlated with more negative perceptions of its usefulness. Importantly, respondents suggested that highly competitive funding pay-lines and poor inter-panel reliability limited the usefulness of review feedback. Overall, these results suggest that more effort is needed to ensure that appropriate and useful feedback is provided to all applicants, bolstering the equity of the review process and likely improving the quality of resubmitted proposals.


2019 ◽  
Author(s):  
Stephen A. Gallo ◽  
Karen B. Schmaling ◽  
Lisa A. Thompson ◽  
Scott R. Glisson

Abstract
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to draw on a range of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.
Methods: Here we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality and facilitation of panel discussion from their last peer review experience.
Results: Reviewers viewed panel discussions favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. While respondents strongly acknowledged the chair's importance in facilitating the discussion appropriately and in limiting the influence of potential sources of bias on scoring, nearly a third of respondents did not find that the chair of their most recent panel performed these roles effectively.
Conclusions: Improving chair training in the management of discussion, as well as creating review procedures informed by the science of leadership and team communication, would likely improve review processes and proposal review reliability.


2013 ◽  
pp. 130-151 ◽  
Author(s):  
A. Muravyev

In this paper, we attempt to classify Russian journals in economics and related disciplines according to their scientific significance. We show that currently used criteria, such as a journal’s presence in the Higher Attestation Committee’s list of journals and the Russian Science Citation Index (RSCI) impact factor, are not very useful for assessing the academic quality of journals. Based on detailed data, including complete reference lists for 2010–2011, we find significant differentiation among Russian journals, including those at the top of the RSCI list. We identify two groups of Russian journals, tentatively called category A and category B journals, that can be regarded as the most important in terms of their contribution to economic science.
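For context, the impact factor mentioned above is conventionally computed over a two-year citation window. A minimal sketch of the generic calculation follows; the RSCI's exact methodology may differ in details, and the figures are hypothetical:

```python
def two_year_impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Citations received in year Y to papers published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 150 citations in 2011 to its 2009-2010 papers,
# of which there were 120 citable items.
print(round(two_year_impact_factor(150, 120), 2))  # → 1.25
```

The simplicity of this ratio is part of the paper's point: a single citation average says little about which journals actually anchor a national literature, which is why the authors turn to complete reference-list data instead.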


2010 ◽  
Vol 96 (1) ◽  
pp. 20-29
Author(s):  
Jerry C. Calvanese

ABSTRACT Study Objective: The purpose of this study was to obtain data on various characteristics of peer reviews performed for the Nevada State Board of Medical Examiners (NSBME) to assess physician licensees' negligence and/or incompetence, in the hope that these data could help identify and define certain characteristics of peer reviews. Methods: This study examined two years of data collected on peer reviews. Complaints were initially screened by a medical reviewer and/or a committee composed of Board members to assess the need for a peer review. Data were then collected from the peer reviews performed, including costs, specialty of the peer reviewer, location of the peer reviewer, and timeliness of the peer reviews. Results: During the two-year study, 102 peer reviews were evaluated. Sixty-nine percent of the peer-reviewed complaints originated from civil malpractice cases and 15% from complaints made by patients. Eighty percent of the complaint physicians were located in Clark County and 12% in Washoe County. Sixty-one percent of the physicians who performed the peer reviews were located in Washoe County and 24% in Clark County. Twelve percent of the complaint physicians had been in practice in the state for 5 years or less, 40% from 6 to 10 years, 20% from 11 to 15 years, 16% from 16 to 20 years, and 13% for 21 years or more. Forty-seven percent of the complaint physicians had three or fewer total complaints filed with the Board, 10% had four to six, 17% had 7 to 10, and 26% had 11 or more. The overall quality of peer reviews was judged to be good or excellent in 96% of the reviews. Malpractice was found in 42% of the reviews ordered by the medical reviewer and in 15% of those ordered by the Investigative Committees; overall, malpractice was found in 38% of peer reviews. The average total cost of a peer review was $791. In 47% of the peer reviews requested, materials were sent from the Board to the peer reviewer within 60 days of the original request, while 33% took more than 120 days. In 48% of the reviews, the peer reviewer completed the review in less than 60 days; 27% of the peer reviews took more than 120 days to be returned. Conclusion: Further data are needed to draw meaningful conclusions about certain peer review characteristics reported in this study. However, useful data were obtained regarding timeliness in sending out peer review materials, total times for the peer reviews, and costs.


Author(s):  
TO Jefferson ◽  
P Alderson ◽  
F Davidoff ◽  
E Wager

Author(s):  
Jeasik Cho

This book provides the qualitative research community with insight into how to evaluate the quality of qualitative research, a topic that has received little attention during the past few decades. We, qualitative researchers, read journal articles, serve on master's and doctoral committees, and make decisions on whether conference proposals, manuscripts, or large-scale grant proposals should be accepted or rejected. It is assumed that various perspectives or criteria, depending on paradigms, theories, or disciplinary fields, have been used in assessing the quality of qualitative research. Nonetheless, until now, no textbook has been specifically devoted to exploring the theories, practices, and reflections associated with the evaluation of qualitative research. This book constructs a typology for evaluating qualitative research, examines actual information from websites and qualitative journal editors, and reflects on some challenges currently encountered by the qualitative research community. Many different kinds of journals’ review guidelines and available assessment tools are collected and analyzed, and the core criteria that stand out among these evaluation tools are presented. Readers are invited to join the author in confidently proclaiming: “Fortunately, there are commonly agreed, bold standards for evaluating the goodness of qualitative research in the academic research community. These standards are a part of what is generally called ‘scientific research.’ ”


Logistics ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 6
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Logistics maintains its standards for the high quality of its published papers [...]


2021 ◽  
Vol 11 (2) ◽  
pp. 138
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Brain Sciences maintains its standards for the high quality of its published papers [...]

