Grant Review Feedback: Appropriateness and Usefulness

2021, Vol 27 (2)
Author(s): Stephen A. Gallo, Karen B. Schmaling, Lisa A. Thompson, Scott R. Glisson

Abstract: The primary goal of the peer review of research grant proposals is to evaluate their quality for the funding agency. An important secondary goal is to provide constructive feedback to applicants for their resubmissions. However, little is known about whether review feedback achieves this goal. In this paper, we present a multi-methods analysis of responses from grant applicants regarding their perceptions of the effectiveness and appropriateness of peer review feedback they received from grant submissions. Overall, 56–60% of applicants judged the feedback to be appropriate (fair, well-written, and well-informed), although their judgments were more favorable if their recent application was funded. Importantly, independent of funding success, women found the feedback better written than men did, and more white applicants than non-white applicants found the feedback to be fair. Perceptions of a variety of biases were also specifically reported in respondents' feedback. Fewer than 40% of applicants found the feedback very useful in informing their research and improving grantsmanship and future submissions. Further, more negative perceptions of the appropriateness of review feedback were correlated with more negative perceptions of its usefulness. Importantly, respondents suggested that highly competitive funding pay-lines and poor inter-panel reliability limited the usefulness of review feedback. Overall, these results suggest that more effort is needed to ensure that appropriate and useful feedback is provided to all applicants, bolstering the equity of the review process and likely improving the quality of resubmitted proposals.
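As an illustration of the kind of funded-versus-unfunded comparison reported above, the sketch below cross-tabulates hypothetical survey responses and tests the association with a chi-square test; all counts are invented, not the study's data.

```python
# Hypothetical illustration of a funded-vs-unfunded comparison of
# appropriateness ratings; the counts below are invented, not study data.
import pandas as pd
from scipy.stats import chi2_contingency

# Rows: funding outcome of the most recent application.
# Columns: did the applicant rate the review feedback as appropriate?
table = pd.DataFrame(
    {"appropriate": [120, 180], "not_appropriate": [40, 160]},
    index=["funded", "not_funded"],
)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```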


2019
Author(s): Stephen A. Gallo, Karen B. Schmaling, Lisa A. Thompson, Scott R. Glisson

Abstract
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to bring a range of expertise and perspectives to funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.
Methods: Here we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality and facilitation of panel discussion from their last peer review experience.
Results: Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. While respondents strongly acknowledged the chair's role in facilitating the discussion appropriately and in limiting the influence of potential sources of bias on scoring, nearly a third of respondents did not find the chair of their most recent panel to have performed these roles effectively.
Conclusions: It is likely that improving chair training in the management of discussion, as well as creating review procedures informed by the science of leadership and team communication, would improve review processes and proposal review reliability.


2020, Vol 7, pp. 238212052093660
Author(s): Troy Camarata, Tony A. Slieman

Constructive feedback is an important aspect of medical education, helping students improve performance on cognitive and clinical skills assessments. However, for students to act appropriately on feedback, they must recognize quality feedback and have the opportunity to practice giving, receiving, and acting on feedback. We incorporated feedback literacy into a case-based, small-group learning course built around concept mapping. Student groups engaged in peer review of group-constructed concept maps and provided written peer feedback. Faculty also provided written feedback on group concept maps and used a simple rubric to assess the quality of peer feedback. Groups received feedback weekly, giving them an opportunity for timely improvement. Precourse and postcourse evaluations, along with peer-review feedback assessment scores, were used to show improvement in both group and individual student feedback quality. Feedback quality was compared to that of a control student cohort that took the identical course without peer review or feedback assessment. Student feedback quality was significantly improved with feedback training compared to the control cohort. Furthermore, our analysis shows that this skill transferred to the quality of student feedback on course evaluations. Feedback training using a simple rubric, along with opportunities to act on feedback, greatly enhanced student feedback quality.
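The study assessed peer-feedback quality with a simple rubric. The sketch below shows one way such a rubric might be represented and scored; the criteria, scale, and function are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a simple feedback-quality rubric; criteria and the
# 0-2 scale (absent/partial/present) are invented, not taken from the study.
RUBRIC = {
    "specific": "Feedback points to a concrete element of the concept map",
    "actionable": "Feedback suggests a change the group could make",
    "justified": "Feedback explains why the change would help",
}

def score_feedback(ratings: dict[str, int]) -> float:
    """Average a 0-2 rating over all rubric criteria."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in RUBRIC) / len(RUBRIC)

print(score_feedback({"specific": 2, "actionable": 1, "justified": 0}))  # 1.0
```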


2018, Vol 115 (12), pp. 2952-2957
Author(s): Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, ...

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers' evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers' ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers "translated" a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than on the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
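The (lack of) agreement described above can be quantified with an inter-rater reliability statistic. The sketch below computes a one-way intraclass correlation, ICC(1), on an invented ratings matrix; this is an illustrative measure, not necessarily the statistic the authors used.

```python
# Minimal one-way ICC(1) sketch for reviewer agreement; the ratings matrix
# is invented (rows = applications, columns = reviewers), not study data.
import numpy as np

def icc1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1) for an n_targets x k_raters matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares from one-way ANOVA.
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
scores = rng.integers(1, 10, size=(25, 3)).astype(float)  # near-random ratings
print(f"ICC(1) = {icc1(scores):.3f}")  # close to 0: little agreement
```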


2001, Vol 33 (3), pp. 605-612
Author(s): Mary A. Marchant

Abstract: This article seeks to demystify the competitive grant recommendation process of scientific peer review panels. The National Research Initiative Competitive Grants Program (NRICGP), administered by the U.S. Department of Agriculture's Cooperative State Research, Education, and Extension Service (USDA-CSREES), serves as the focus of this article. It provides a brief background on the NRICGP and discusses the application process, the scientific peer review process, guidelines for grant writing, and ways to interpret reviewer comments if a proposal is not funded. The essentials of good grant writing discussed in this article are transferable to other USDA competitive grant programs.


2018
Author(s): Stephen A. Gallo, Lisa A. Thompson, Karen B. Schmaling, Scott R. Glisson

Abstract: Scientific peer reviewers play an integral role in the grant selection process, yet very little has been reported on the levels of participation or the motivations of scientists to take part in peer review. The American Institute of Biological Sciences (AIBS) developed a comprehensive peer review survey that examined the motivations and levels of participation of grant reviewers. The survey was disseminated to 13,091 scientists in AIBS's proprietary database. Of the 874 respondents, 76% indicated they had reviewed grant applications in the last 3 years; however, the number of reviews was unevenly distributed across this sample. Higher review loads were associated with respondents who had submitted more grant proposals over this time period, some of whom were likely to be study section members for large funding agencies. The most prevalent reason to participate in a review was to give back to the scientific community (especially among frequent grant submitters), and the most common reason to decline an invitation to review was lack of time. Interestingly, few suggested that expectation from the funding agency was a motivation to review. Most felt that review participation positively influenced their careers through improving grantsmanship and exposure to new scientific ideas. Of those who reviewed, respondents reported dedicating 2–5% of their total annual work time to grant review and, based on their self-reported maximum review loads, it is estimated they are participating at 56–89% of their capacity, which may have important implications for the sustainability of the system. Overall, it is clear that participation in peer review is uneven and in some cases near capacity, and more needs to be done to create new motivations and incentives to increase the future pool of reviewers.
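As a back-of-the-envelope illustration of the capacity estimate described above (reviews actually performed divided by self-reported maximum load), here is a minimal sketch with invented figures:

```python
# Hypothetical sketch of the capacity-utilization estimate: reviews done
# divided by self-reported maximum load. Figures are invented, not survey data.
reviews_done = [4, 10, 2, 7, 5]  # reviews performed in the last 3 years
max_load = [6, 12, 4, 8, 6]      # self-reported maximum review loads

utilization = sum(reviews_done) / sum(max_load)
print(f"aggregate utilization: {utilization:.0%}")  # 78% for these numbers
```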


2012, Vol 65 (1), pp. 47-52
Author(s): Mikael Fogelholm, Saara Leppinen, Anssi Auvinen, Jani Raitanen, Anu Nuutinen, ...

2010, Vol 450, pp. 581-584
Author(s): David W.C. Ashworth

Students working on semester-long projects in international teams, such as the European Project Semester programme at the Ingeniørhøjskolen i København – University College (IHK), face many challenges, not the least of which is communication between different cultures. The supervisor plays a key role in supporting a project team and monitoring its effectiveness. One of the key tools employed is a self and peer review assessment, undertaken twice by each team member during the semester. The assessment considers the quantity and quality of the contribution made by each team member and their participation in teamworking activities. The supervisor uses the assessment to monitor teamworking and to give constructive feedback and advice where needed. Responses from the self and peer review assessments were compared, and the findings are presented. Limited results over a 3-year period were analysed and compared with Autumn 2009 semester results, and conclusions are drawn.
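As an illustration of how self and peer assessments might be compared in practice, the sketch below flags large self-peer gaps for a supervisor to follow up; the categories and scores are invented, not data from the programme.

```python
# Hypothetical sketch of a self-vs-peer assessment comparison for one team
# member; categories and scores (1-5 scale) are invented for illustration.
self_scores = {"quantity": 4, "quality": 4, "participation": 5}
peer_scores = {"quantity": 3, "quality": 4, "participation": 3}  # peer means

gaps = {c: self_scores[c] - peer_scores[c] for c in self_scores}
flagged = [c for c, gap in gaps.items() if abs(gap) >= 2]
print(gaps)     # {'quantity': 1, 'quality': 0, 'participation': 2}
print(flagged)  # ['participation'] -> worth a supervisor follow-up
```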


1991, Vol 14 (1), pp. 119-135
Author(s): Domenic V. Cicchetti

Abstract: The reliability of peer review of scientific documents and the evaluative criteria scientists use to judge the work of their peers are critically reexamined, with special attention to the consistently low levels of reliability that have been reported. Referees of grant proposals agree much more about what is unworthy of support than about what does have scientific value. In the case of manuscript submissions, this seems to depend on whether a discipline (or subfield) is general and diffuse (e.g., cross-disciplinary physics, general fields of medicine, cultural anthropology, social psychology) or specific and focused (e.g., nuclear physics, medical specialty areas, physical anthropology, and behavioral neuroscience). In the former, there is also much more agreement on rejection than acceptance; in the latter, both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggest that reviewers and editors agree more on acceptance than on rejection. Several suggestions are made for improving the reliability and quality of peer review. Further research is needed, especially in the physical sciences.
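Referee agreement of the kind reexamined here is typically summarized with chance-corrected statistics such as Cohen's kappa. A minimal sketch, using invented accept/reject recommendations from two hypothetical referees:

```python
# Minimal sketch of chance-corrected agreement between two referees using
# Cohen's kappa; the accept/reject recommendations below are invented.
from sklearn.metrics import cohen_kappa_score

referee_a = ["reject", "reject", "accept", "reject", "accept", "reject"]
referee_b = ["reject", "accept", "accept", "reject", "reject", "reject"]

kappa = cohen_kappa_score(referee_a, referee_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```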


2021, pp. 1-52
Author(s): Junwen Luo, Thomas Feliciani, Martin Reinhart, Judith Hartstein, Vineeth Das, ...

Abstract: Using a novel combination of methods and datasets from two national funding agency contexts, this study explores whether review sentiment can be used as a reliable proxy for understanding peer reviewer opinions. We measure reviewer opinions via their review sentiments, both on specific review subjects and on proposals' overall funding worthiness, with three different methods: manual content analysis and two dictionary-based sentiment analysis algorithms (TextBlob and VADER). The reliability of review sentiment as a proxy for reviewer opinions is assessed through its correlation with review scores, proposal rankings, and funding decisions. We find in our samples that (1) review sentiments correlate positively with review scores or rankings, and the correlation is stronger for manually coded than for algorithmic results; (2) manual and algorithmic results are overall correlated across different funding programmes, review sections, languages, and agencies, but the correlations are not strong; and (3) manually coded review sentiments can quite accurately predict whether proposals are funded, whereas the two algorithms predict funding success with moderate accuracy. Results suggest that manual analysis of review sentiments can provide a reliable proxy of grant reviewer opinions, whereas the two sentiment analysis algorithms are useful only in some specific situations.
Peer review record: https://publons.com/publon/10.1162/qss_a_00156
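A minimal sketch of the dictionary-based scoring the study describes, using TextBlob and VADER on invented review excerpts, with a rank correlation against invented review scores standing in for the reliability check:

```python
# Minimal sketch of dictionary-based sentiment scoring of review text with
# TextBlob and VADER, as the study describes; the excerpts and review scores
# are invented. Requires: pip install textblob vaderSentiment scipy
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import spearmanr

reviews = [
    "The aims are compelling and the approach is rigorous.",
    "The proposal is poorly organized and the methods are underpowered.",
    "Solid preliminary data, though the timeline is ambitious.",
]
scores = [2, 7, 4]  # invented review scores (lower = better, NIH-style)

blob_polarity = [TextBlob(r).sentiment.polarity for r in reviews]
vader = SentimentIntensityAnalyzer()
vader_compound = [vader.polarity_scores(r)["compound"] for r in reviews]

# A negative rank correlation is expected here: friendlier text should go
# with better (numerically lower) review scores.
rho, p = spearmanr(vader_compound, scores)
print(blob_polarity, vader_compound, f"spearman rho={rho:.2f}")
```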

