Peer review of grant applications

The Lancet, 1998, Vol 352 (9133), pp. 1063-1064. Author(s): Michael Swift

2006, Vol 54 (1), pp. 13-19. Author(s): Theodore A. Kotchen, Teresa Lindquist, Anita Miller Sostek, Raymond Hoffmann, Karl Malik, ...

The Lancet, 1997, Vol 349 (9044), p. 63. Author(s): HA Waldron

2018, Vol 38 (2), pp. 216-229. Author(s): Stephen Gallo, Lisa Thompson, Karen Schmaling, Scott Glisson

2014. Author(s): Kevin Boyack, Mei-Ching Chen, George Chacko

The National Institutes of Health (NIH) is the largest source of funding for biomedical research in the world. This funding is allocated largely through a competitive grants process. Each year the Center for Scientific Review (CSR) at NIH manages the evaluation, by peer review, of more than 55,000 grant applications. A relevant management question is how this scientific evaluation system, supported by finite resources, can be continuously evaluated and improved for maximal benefit to the scientific community and the taxpaying public. Toward this purpose, we have created the first system-level description of peer review at CSR by applying text analysis, bibliometric, and graph visualization techniques to administrative records. We identify otherwise latent relationships across scientific clusters, which in turn suggest opportunities for structural reorganization of the system based on expert evaluation. Such studies support the creation of monitoring tools and provide transparency and knowledge to stakeholders.
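One simple way to surface "latent relationships across scientific clusters" of the kind the abstract describes can be sketched as follows. This is a toy illustration with made-up section names and terms, not CSR's actual method or data: it builds a similarity graph between review groups by comparing their term profiles, so that strongly overlapping pairs stand out as candidates for expert evaluation.

```python
# Hypothetical sketch: compare term profiles of review groups with Jaccard
# similarity and report each pairwise edge of the resulting similarity graph.
# Section names and terms below are invented for illustration only.

def jaccard(a, b):
    """Jaccard similarity of two term collections: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical study sections and salient terms from their applications.
sections = {
    "Neuro-A": ["synapse", "plasticity", "imaging", "cortex"],
    "Neuro-B": ["synapse", "plasticity", "behavior", "cortex"],
    "Immuno":  ["cytokine", "T-cell", "antigen", "imaging"],
}

# Print every edge; a high-similarity pair (here Neuro-A / Neuro-B) would be
# flagged for possible reorganization or shared referral.
names = sorted(sections)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        print(u, v, round(jaccard(sections[u], sections[v]), 2))
```

On this toy data the two neuroscience sections share three of five distinct terms (similarity 0.6), while the immunology section overlaps with them barely or not at all.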


BMJ Open, 2020, Vol 10 (8), e035058. Author(s): Anna Severin, Joao Martins, Rachel Heyard, François Delavy, Anne Jorstad, ...

Objectives: To examine whether the gender of applicants and peer reviewers and other factors influence peer review of grant proposals submitted to a national funding agency.
Setting: Swiss National Science Foundation (SNSF).
Design: Cross-sectional analysis of peer review reports submitted from 2009 to 2016, using linear mixed effects regression models adjusted for research topic and the applicant's age, nationality, affiliation and calendar period.
Participants: External peer reviewers.
Primary outcome measure: Overall score on a scale from 1 (worst) to 6 (best).
Results: Analyses included 38 250 reports on 12 294 grant applications from medicine, architecture, biology, chemistry, economics, engineering, geology, history, linguistics, mathematics, physics, psychology and sociology, submitted by 26 829 unique peer reviewers. In univariable analysis, male applicants received more favourable evaluation scores than female applicants (+0.18 points; 95% CI 0.14 to 0.23), and male reviewers awarded higher scores than female reviewers (+0.11; 95% CI 0.08 to 0.15). Applicant-nominated reviewers awarded higher scores than reviewers nominated by the SNSF (+0.53; 95% CI 0.50 to 0.56), and reviewers from outside Switzerland awarded more favourable scores than reviewers affiliated with Swiss institutions (+0.53; 95% CI 0.49 to 0.56). In multivariable analysis, differences between male and female applicants were attenuated (+0.08; 95% CI 0.04 to 0.13), whereas results changed little for source of nomination and affiliation of reviewers. The gender difference increased after September 2011, when new evaluation forms were introduced (p=0.033 from test of interaction).
Conclusions: Peer review of grant applications at the SNSF might be prone to biases stemming from different applicant and reviewer characteristics. The SNSF abandoned the nomination of peer reviewers by applicants. The new form introduced in 2011 may inadvertently have given more emphasis to the applicant's track record. We encourage other funders to conduct similar studies to improve the evidence base for rational and fair research funding.
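The attenuation the abstract reports (a +0.18 unadjusted gender gap shrinking to +0.08 after adjustment) is the classic signature of confounding. The study itself used linear mixed effects models; the following toy sketch, with entirely made-up data and plain OLS for simplicity, only shows the mechanism: when one group clusters in a higher-scoring research topic, the unadjusted group difference absorbs the topic effect, and adjusting for topic attenuates it.

```python
# Toy confounding demo (invented data, not SNSF's model or records).
# Score = 4.0 + 1.0*topic_a + 0.1*male, with male applicants clustered
# in the higher-scoring topic. Plain OLS via the normal equations.

def ols(X, y):
    """Ordinary least squares: solve X'X b = X'y by Gauss-Jordan elimination."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
           for r in range(p)]
    xty = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    aug = [xtx[r] + [xty[r]] for r in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(aug[r][c]))  # partial pivoting
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [v / aug[c][c] for v in aug[c]]
        for r in range(p):
            if r != c:
                f = aug[r][c]
                aug[r] = [rv - f * cv for rv, cv in zip(aug[r], aug[c])]
    return [aug[r][p] for r in range(p)]

# (male, topic_a) pairs: males mostly in topic A, females mostly in topic B.
rows = [(1, 1)] * 8 + [(1, 0)] * 2 + [(0, 1)] * 2 + [(0, 0)] * 8
scores = [4.0 + 1.0 * t + 0.1 * m for m, t in rows]

# Univariable model (score ~ male): the gap absorbs the topic confounding.
b_uni = ols([[1.0, m] for m, _ in rows], scores)
# Multivariable model (score ~ male + topic): the gap attenuates to 0.1.
b_adj = ols([[1.0, m, t] for m, t in rows], scores)

print(round(b_uni[1], 3))  # unadjusted gender gap: 0.7
print(round(b_adj[1], 3))  # adjusted gender gap: 0.1
```

With a binary predictor, the univariable OLS slope is just the difference in group means, which is why the unadjusted gap (0.7) equals the raw male-female mean difference in this toy data.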


2018, Vol 115 (12), pp. 2952-2957. Author(s): Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, ...

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
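A standard way to quantify the kind of reviewer (dis)agreement this study examines is an intraclass correlation. The sketch below is not the authors' analysis code: it computes ICC(1) from a one-way ANOVA decomposition on two invented rating tables, one where reviewers rank applications alike and one where the score depends mainly on which reviewer the application was assigned to.

```python
# Hypothetical ICC(1) sketch on made-up rating tables (not the study's data).

def icc1(ratings):
    """ICC(1): ratings is a list of applications, each with k reviewer scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    # Between-application and within-application mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented scores (1 = best on NIH's 1-9 scale), two reviewers per application.
consistent = [[1, 1], [3, 3], [5, 5]]   # reviewers order applications alike
scrambled  = [[1, 5], [5, 1], [1, 5]]   # ordering depends on the reviewer

print(icc1(consistent))  # 1.0: all rating variance lies between applications
print(icc1(scrambled))   # -1.0: no variance between applications at all
```

An ICC near 1 means the application, not the reviewer, drives the score; the study's finding of no agreement corresponds to an ICC near (or below) zero.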

