Research Evaluation
Latest Publications

Total documents: 968 (last five years: 123)
H-index: 46 (last five years: 3)

Published by Oxford University Press
ISSN: 1471-5449, 0958-2029

2021
Author(s): Marco Seeber, Jef Vlegels, Elwin Reimink, Ana Marušić, David G Pina

Abstract We have limited understanding of why reviewers tend to disagree strongly when scoring the same research proposal. Thus far, research exploring disagreement has focused on the characteristics of the proposal or the applicants, while ignoring the characteristics of the reviewers themselves. This article addresses this gap by exploring which reviewer characteristics most affect disagreement among reviewers. We present hypotheses regarding the effect of a reviewer's level of experience in evaluating research proposals for a specific granting scheme, that is, scheme reviewing experience. We test our hypotheses by studying two of the most important research funding programmes in the European Union from 2014 to 2018: 52,488 proposals evaluated under three funding schemes of the Horizon 2020 Marie Sklodowska-Curie Actions (MSCA), and 1,939 proposals evaluated under the European Cooperation in Science and Technology Actions. We find that reviewing experience from previous calls of a specific scheme significantly reduces disagreement, whereas experience of evaluating proposals in other schemes (that is, general reviewing experience) has no effect. Moreover, in the MSCA Individual Fellowships, we observe an inverted-U relationship between the number of proposals a reviewer evaluates in a given call and disagreement, with a marked decrease in disagreement above 13 evaluated proposals. Our results indicate that reviewing experience in a specific scheme improves reliability, curbing unwarranted disagreement by fine-tuning reviewers' evaluations.


2021
Author(s): Juan Aparicio, Dorys Yaneth Rodríguez, Jon Mikel Zabala-Iturriagagoitia

Abstract This article aims to provide a systemic instrument for evaluating the functioning of higher education systems. Although systemic instruments have had a strong impact on the management of public policy systems in fields such as health and innovation, the application of this type of instrument to higher education has not been widely discussed. Herein lies the main gap that we want to close. The ultimate purpose of the evaluation instrument introduced here is thus to provide information for decision-makers, so that they can identify strengths and weaknesses in the functioning of their respective higher education systems from a systemic perspective. To achieve this goal, we apply the methodological guidelines of an integrative literature review. An integrative review was chosen because it guides the extraction of quantitative evidence from the literature and its classification, with the purpose of integrating the results into an analytical framework. The resulting analytical framework is what we have labelled the systemic evaluation instrument. The article makes three contributions to the literature. First, it identifies the different types of higher education institutions considered in the literature and the scales at which higher education systems are analysed. Second, it identifies the capacities and functions that the literature examines as necessary for higher education institutions and systems to fulfil their missions. Third, it presents a systemic evaluation framework for higher education institutions and systems. The article concludes with a discussion of the opportunities and challenges associated with implementing such a systemic framework for policymaking.


2021
Author(s): Serge P J M Horbach

Abstract The global Covid-19 pandemic has had a considerable impact on the scientific enterprise, including scholarly publication and peer review practices. Several studies have assessed these impacts, showing among other things that medical journals have strongly accelerated their review processes for Covid-19 related content. This has raised questions and concerns regarding the quality of the review process and the standards to which manuscripts are held for publication. To address these questions, this study assesses qualitative differences in review reports and editorial decision letters for three groups of articles: Covid-19 related articles, articles not related to Covid-19 published during the 2020 pandemic, and articles published before the pandemic. It employs the open peer review model at the British Medical Journal and eLife to study the content of review reports, editorial decisions, author responses, and open reader comments. It finds no clear differences between the review processes of non-Covid-19 articles published during and before the pandemic. However, it does find notable differences between Covid-19 and non-Covid-19 related articles, including fewer requests for additional experiments, more cooperative comments, and different suggestions for addressing overly strong claims. Overall, the findings suggest that both reviewers and journal editors implicitly and explicitly apply different quality criteria to Covid-19 related manuscripts, thereby transforming science's main evaluation mechanism for the underlying studies and potentially affecting their public dissemination.

