The peer review process for awarding funds to international science research consortia: a qualitative developmental evaluation

F1000Research ◽  
2018 ◽  
Vol 6 ◽  
pp. 1808 ◽  
Author(s):  
Stefanie Gregorius ◽  
Laura Dean ◽  
Donald C Cole ◽  
Imelda Bates

Background: Evaluating applications for multi-national, multi-disciplinary, dual-purpose research consortia is highly complex. There has been little research on the peer review process for evaluating grant applications and almost none on how applications for multi-national consortia are reviewed. Overseas development investments are increasingly being channelled into international science consortia to generate high-quality research while simultaneously strengthening multi-disciplinary research capacity. We need a better understanding of how such decisions are made and their effectiveness. Methods: An award-making institution planned to fund 10 UK-Africa research consortia. Over two annual rounds, 34 out of 78 eligible applications were shortlisted and reviewed by at least five external reviewers before final selections were made by a face-to-face panel. We used an innovative approach involving structured, overt observations of award-making panel meetings and semi-structured interviews with panel members to explore how assessment criteria concerning research quality and capacity strengthening were applied during the peer review process. Data were coded and analysed using pre-designed matrices which incorporated categories relating to the assessment criteria. Results: In general the process was rigorous and well-managed. However, lack of clarity about differential weighting of criteria and variations in the panel's understanding of research capacity strengthening resulted in some inconsistencies in use of the assessment criteria. Using the same panel for both rounds had advantages, in that during the second round consensus was achieved more quickly and the panel had increased focus on development aspects. Conclusion: Grant assessment panels for such complex research applications need to have topic- and context-specific expertise. They must also understand research capacity issues and have a flexible but equitable and transparent approach.
This study has developed and tested an approach for evaluating the operation of such panels and has generated lessons that can promote coherence and transparency among grant-makers and ultimately make the award-making process more effective.


BMJ Open ◽  
2018 ◽  
Vol 8 (10) ◽  
pp. e020568 ◽  
Author(s):  
Ketevan Glonti ◽  
Darko Hren

Introduction: Despite dealing with scientific output and potentially having an impact on the quality of research published, the manuscript peer-review process itself has at times been criticised for being 'unscientific'. Research indicates that there are social and subjective dimensions of the peer-review process that contribute to this perception, including how key stakeholders—namely authors, editors and peer reviewers—communicate. In particular, it has been suggested that the expected roles and tasks of stakeholders need to be more clearly defined and communicated if the manuscript review process is to be improved. Disentangling current communication practices, and outlining the specific roles and tasks of the main actors, might be a first step towards establishing the design of interventions that counterbalance social influences on the peer-review process. The purpose of this article is to present a methodological design for a qualitative study exploring the communication practices within the manuscript review process of biomedical journals from the journal editors' point of view. Methods and analysis: Semi-structured interviews will be carried out with editors of biomedical journals between October 2017 and February 2018. A heterogeneous sample of participants representing a wide range of biomedical journals will be sought through purposive maximum variation sampling, drawing from a professional network of contacts, publishers, conference participants and snowballing. Interviews will be thematically analysed following the method outlined by Braun and Clarke. The qualitative data analysis software NVivo V.11 will be used to aid data management and analysis. Ethics and dissemination: This research project was evaluated and approved by the University of Split, Medical School Ethics Committee (2181-198-03-04-17-0029) in May 2017. Findings will be disseminated through a publication in a peer-reviewed journal and presentations during conferences.


2015 ◽  
Vol 96 (2) ◽  
pp. 191-201 ◽  
Author(s):  
Matthew S. Mayernik ◽  
Sarah Callaghan ◽  
Roland Leigh ◽  
Jonathan Tedds ◽  
Steven Worley

Abstract Peer review holds a central place within the scientific communication system. Traditionally, research quality has been assessed by peer review of journal articles, conference proceedings, and books. There is strong support for the peer review process within the academic community, with scholars contributing peer reviews with little formal reward. Reviewing is seen as a contribution to the community as well as an opportunity to polish and refine understanding of the cutting edge of research. This paper discusses the applicability of the peer review process for assessing and ensuring the quality of datasets. Establishing the quality of datasets is a multifaceted task that encompasses many automated and manual processes. Adding research data into the publication and peer review queues will increase the stress on the scientific publishing system, but if done with forethought will also increase the trustworthiness and value of individual datasets, strengthen the findings based on cited datasets, and increase the transparency and traceability of data and publications. This paper discusses issues related to data peer review—in particular, the peer review processes, needs, and challenges related to the following scenarios: 1) data analyzed in traditional scientific articles, 2) data articles published in traditional scientific journals, 3) data submitted to open access data repositories, and 4) datasets published via articles in data journals.


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Robert McNair ◽  
Hai Anh Le Phuong ◽  
Levente Cseri ◽  
Gyorgy Szekely

With the number of publications at an all-time high, academic peer review is imperative to ensure high-quality research content. The wider involvement of postgraduate, early-career researchers (ECRs) has been proposed on several platforms to address the unsustainability of the peer review process caused by a lack of peer reviewers. A survey involving 1203 academics and ECRs in ten countries revealed their attitudes towards the involvement of ECRs in the peer review process. The trends and motives were identified, with emphasis on peer review being an oft-neglected tool in the skill development of ECRs. In light of the survey results, the transferable skills that ECRs acquire from performing peer reviews at a crucial stage in their career development are systematically explored. The findings call for further engagement of ECRs in the peer review process under supervisory mentoring.


2008 ◽  
Author(s):  
Kenya Malcolm ◽  
Allison Groenendyk ◽  
Mary Cwik ◽  
Alisa Beyer

2018 ◽  
Author(s):  
Cody Fullerton

For years, the gold standard in academic publishing has been the peer-review process, and for the most part, peer review remains a safeguard against authors publishing intentionally biased, misleading, and inaccurate information. Its purpose is to hold researchers accountable to the publishing standards of their field, including proper methodology, accurate literature reviews, etc. This presentation will establish the core tenets of peer review, discuss whether certain types of publications should qualify as such, offer possible solutions, and discuss how this affects a librarian's reference interactions.

