Measuring bias, burden and conservatism in research funding processes

F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 851 ◽  
Author(s):  
Susan Guthrie ◽  
Daniela Rodriguez Rincon ◽  
Gordon McInroy ◽  
Becky Ioppolo ◽  
Salil Gunashekar

Background: Grant funding allocation is a complex process that in most cases relies on peer review. A recent study identified a number of challenges associated with the use of peer review in the evaluation of grant proposals. Three important issues identified were bias, burden, and conservatism, and the work concluded that further experimentation and measurement are needed to assess the performance of funding processes. Methods: We conducted a review of international practice in the evaluation and improvement of grant funding processes in relation to bias, burden and conservatism, based on a rapid evidence assessment and interviews with research funding agencies. Results: The evidence gathered suggests that funders' efforts so far to measure these characteristics systematically have been limited. However, there are some examples of measures and approaches that could be developed and more widely applied. Conclusions: The majority of the literature focuses primarily on the application and assessment process, whereas burden, bias and conservatism can emerge as challenges at many other stages in the development and implementation of a grant funding scheme. In response, we set out a wider conceptualisation of the ways in which these challenges can emerge across the funding process.

2007 ◽  
Vol 2 (1) ◽  
pp. 23 ◽  
Author(s):  
Lynn L. Langille ◽  
Theresa Mackenzie

Purpose - Difficulty in securing research funding has been cited as one barrier to the involvement of more librarians and information professionals in conducting original research. This article seeks to support the work of librarians who wish to secure research funding by describing some key approaches to the creation of successful grant applications. Approach - The authors draw on more than 15 years' experience in supporting the development of successful research grant proposals. Twelve grant-writing best practices, or ‘key approaches’, are described, and a planning timeline is suggested. Conclusions - Use of these best practices can assist researchers in creating successful research grant proposals and can also help streamline the research process once it is underway. It is important to recognize the competitive nature of research grant competitions, to obtain feedback from an internal review panel, and to use feedback from funding agencies to strengthen future grant applications.


2018 ◽  
Author(s):  
Stephen A Gallo ◽  
Lisa A Thompson ◽  
Karen B Schmaling ◽  
Scott R Glisson

Scientific peer reviewers play an integral role in the grant selection process, yet very little has been reported on the levels of participation or the motivations of scientists to take part in peer review. AIBS developed a comprehensive peer review survey that examined the motivations and levels of participation of grant reviewers. The survey was disseminated to 13,091 scientists in AIBS's proprietary database. Of the 874 respondents, 76% indicated they had reviewed grant applications in the last 3 years; however, the number of reviews was unevenly distributed across this sample. Higher review loads were associated with respondents who had submitted more grant proposals over this time period, some of whom were likely to be study section members for large funding agencies. The most prevalent reason to participate in a review was to give back to the scientific community (especially among frequent grant submitters), and the most common reason to decline an invitation to review was lack of time. Interestingly, few suggested that expectation from the funding agency was a motivation to review. Most felt that review participation positively influenced their careers through improving grantsmanship and exposure to new scientific ideas. Of those who reviewed, respondents reported dedicating 2-5% of their total annual work time to grant review and, based on their self-reported maximum review loads, it is estimated they are participating at 56%-89% of their capacity, which may have important implications regarding the sustainability of the system. Overall, it is clear that participation in peer review is uneven and in some cases near capacity, and more needs to be done to create new motivations and incentives to increase the future pool of reviewers.
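The capacity estimate quoted above can be illustrated with a small, hypothetical calculation (none of the values below come from the survey data): utilisation is simply the ratio of reviews actually performed to the self-reported maximum review load.

```python
# Minimal sketch with invented numbers, illustrating the capacity estimate
# described in the abstract above; these are not the survey's figures.
actual_reviews = [3, 6, 2, 10, 4]   # reviews completed per respondent (hypothetical)
max_reviews = [5, 8, 4, 12, 6]      # self-reported maximum loads (hypothetical)

utilisation = sum(actual_reviews) / sum(max_reviews)
print(f"Aggregate capacity utilisation: {utilisation:.0%}")  # ~71% with these numbers
```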


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Sarah Shailes

Funding agencies use many different criteria and peer review strategies to assess grant proposals.


2019 ◽  
Author(s):  
Stephen A. Gallo ◽  
Karen B. Schmaling ◽  
Lisa A. Thompson ◽  
Scott R. Glisson

Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to bring a range of expertise and perspectives to funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated. Methods: Here we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality and facilitation of panel discussion from their last peer review experience. Results: Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. Respondents strongly acknowledged the importance of the chair in facilitating the discussion so that it appropriately informs scoring and in limiting the influence of potential sources of bias on scoring; nevertheless, nearly a third of respondents did not find that the chair of their most recent panel had performed these roles effectively. Conclusions: Improving chair training in the management of discussion, and creating review procedures informed by the science of leadership and team communication, would likely improve review processes and the reliability of proposal review.


Physics Today ◽  
2006 ◽  
Vol 59 (7) ◽  
pp. 24-24
Author(s):  
Toni Feder

1982 ◽  
Vol 26 (3) ◽  
pp. 279-291 ◽  
Author(s):  
Stuart Macdonald ◽  
Tom Mandeville ◽  
Don Lamberton

This paper is based on a research report published at the University of Queensland in November 1980, which emanated from research commissioned by the University's Research Committee and carried out by the authors. The study was concerned with the problem of distributing the resources available for research and concluded that these resources were not being used efficiently at the University of Queensland. Part of the study considered attempts to increase efficiency by funding those research projects which seemed to possess the most merit. Such a policy is becoming more common in Australian universities, and this is understandable during a period of financial stringency. However, the policy seems to ignore the substantial costs associated with applying for merit grants, and to assume that any scheme funding the most deserving research automatically improves the efficiency of research funding. That is not necessarily so. Most research funding in Australian universities is provided in the form of staff salaries. When staff time is occupied by the merit application and assessment process, it is not available for research. Consequently there is a cost to research, a cost that is not widely appreciated and one which may well exceed the benefits of ill-considered merit schemes.
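The paper's central point, that application and assessment time is itself a research cost, can be made concrete with a back-of-the-envelope calculation. The figures below are entirely invented and serve only to illustrate how the staff-time cost of a merit scheme could exceed the funds it distributes.

```python
# Hypothetical illustration of the hidden-cost argument above; every number is invented.
applications = 200            # proposals submitted to an internal merit scheme
hours_per_application = 40    # staff time to prepare and assess each proposal
salary_per_hour = 50          # fully loaded hourly staff cost (currency units)
funds_allocated = 300_000     # total pool distributed by the scheme

application_cost = applications * hours_per_application * salary_per_hour  # 400,000
print(f"Staff-time cost of the scheme: {application_cost:,}")
print(f"Funds allocated by the scheme: {funds_allocated:,}")
# With these invented figures, the scheme consumes more salaried research time
# than the funding it distributes, which is the paper's cautionary point.
```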


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Mohammad Reza Mahmoudi ◽  
Marzieh Rahmati ◽  
Zulkefli Mansor ◽  
Amirhosein Mosavi ◽  
Shahab S. Band

The productivity of researchers and the impact of the work they do are a preoccupation of universities, research funding agencies, and sometimes even researchers themselves. The h-index (h) is the most popular of the various metrics used to measure these activities. This research presents a practical approach to modelling the h-index based on the total number of citations (NC) and the duration since the publication of the first article (D1). To determine the effect of each factor (NC and D1) on h, we applied simple nonlinear regressions. The results indicated that both NC and D1 had a significant effect on h (p < 0.001). The coefficients of determination for the equations estimating the h-index were 93.4% and 39.8%, respectively, which verified that the model based on NC had the better fit. Then, to capture the simultaneous effects of NC and D1 on h, multiple nonlinear regression was applied. The results indicated that NC and D1 had a significant effect on h (p < 0.001), and the coefficient of determination for this equation was 93.6%. Finally, to model and estimate the h-index as a function of NC and D1, multiple nonlinear quartile regression was used. The goodness of the fitted model was also assessed.
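As a purely illustrative sketch (not the authors' code or data), fitting nonlinear models of h from NC and D1 might look as follows, assuming a power-law functional form and using invented example records; the variable names NC, D1 and h mirror the abstract.

```python
# Minimal sketch of simple and multiple nonlinear regression for the h-index,
# assuming h ~ a * NC**b and h ~ a * NC**b * D1**c. Example data are made up.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical researcher records: total citations, years since first article, h-index
NC = np.array([120, 450, 1500, 3200, 80, 950, 2100, 5600], dtype=float)
D1 = np.array([6, 10, 15, 22, 4, 12, 18, 25], dtype=float)
h = np.array([5, 11, 21, 30, 4, 16, 25, 40], dtype=float)

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Simple nonlinear regression: h as a function of NC only.
def model_nc(nc, a, b):
    return a * nc ** b

params_nc, _ = curve_fit(model_nc, NC, h, p0=[1.0, 0.5])
print("h ~ a*NC^b:", params_nc,
      "R^2 =", r_squared(h, model_nc(NC, *params_nc)))

# Multiple nonlinear regression: h as a function of NC and D1 jointly.
def model_nc_d1(x, a, b, c):
    nc, d1 = x
    return a * nc ** b * d1 ** c

params_both, _ = curve_fit(model_nc_d1, (NC, D1), h, p0=[1.0, 0.5, 0.1])
print("h ~ a*NC^b*D1^c:", params_both,
      "R^2 =", r_squared(h, model_nc_d1((NC, D1), *params_both)))
```

The power-law form is only one plausible choice here; the comparison of the two fits mirrors the abstract's approach of contrasting single-predictor and joint models via their coefficients of determination.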


2019 ◽  
Author(s):  
Melanie J Hopkins

“Electronic publishing” can mean a variety of things, but for the dissemination of scientific results there are two major categories: 1) materials that have not gone through peer review, such as community-database entries, presentations from conferences, and manuscripts posted on preprint servers; and 2) materials that have gone through peer review and are subsequently posted online. In the latter case, the process of peer review is usually managed by a body of editors associated with a journal. If a manuscript is published by such a journal, the reader can be assured that it went through the peer-review process successfully. In the last decade or so, journals have started to abandon printed issues of peer-reviewed articles and are now publishing exclusively online; there has also been a proliferation of new online-only journals. Concurrently, there has been a shift towards open-access publishing, which, while making scientific studies more broadly available, has also transferred the financial burden from the reader or subscriber to the authors and funding agencies. Lastly, there has been a shift in how manuscripts on preprint servers are viewed, and it is increasingly common in many scientific fields for authors to post a finalized manuscript to a preprint server prior to submission to a journal. This talk describes the “Peer Community In” (PCI) Project, a non-profit organization established in response to these major shifts in scientific publishing. The PCI Project comprises communities of researchers working in different fields (including paleontology) who peer review and recommend research articles publicly available on preprint servers. The goal is to promote rigorous scientific study by providing an alternative to traditional avenues for peer-reviewed publishing.

