Peer review research assessment: a sensitivity analysis of performance rankings to the share of research product evaluated

2010 ◽  
Vol 85 (3) ◽  
pp. 705-720 ◽  
Author(s):  
Giovanni Abramo ◽  
Ciriaco Andrea D’Angelo ◽  
Fulvio Viel

2017 ◽  
Vol 16 (6) ◽  
pp. 820-842 ◽  
Author(s):  
Marcelo Marques ◽  
Justin JW Powell ◽  
Mike Zapp ◽  
Gert Biesta

Research evaluation systems in many countries aim to improve the quality of higher education. Among the first such systems, the UK’s Research Assessment Exercise (RAE), dating from 1986, is now the Research Excellence Framework (REF). Highly institutionalised, it has made research more accountable. While numerous studies describe the system’s effects at different levels, this longitudinal analysis examines the gradual institutionalisation and (un)intended consequences of the system from 1986 to 2014. First, we historically analyse the RAE/REF’s rationale, formalisation, standardisation, and transparency, framing it as a strong research evaluation system. Second, we turn to the multidisciplinary field of education, analysing the submission behaviour (staff, outputs, funding) of departments of education over time. We find a decline in the number of academic staff whose research was submitted for peer review assessment; the rise of the research article as the preferred publication format; the rise of quantitative analysis; and a high and stable concentration of funding among a small number of departments. Policy instruments elicit varied responses: such reactivity is demonstrated by (1) the increasing selectivity in the number of staff whose publications were submitted for peer review, a form of reverse engineering, and (2) the rise of the research article as the preferred output, a self-fulfilling prophecy. The funding concentration is a largely intended consequence that exacerbates disparities between departments of education. These findings emphasise how research assessment impacts the structural organisation and cognitive development of educational research in the UK.


2016 ◽  
Vol 108 (1) ◽  
pp. 349-353 ◽  
Author(s):  
Graziella Bertocchi ◽  
Alfonso Gambardella ◽  
Tullio Jappelli ◽  
Carmela Anna Nappi ◽  
Franco Peracchi

10.2196/26749 ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. e26749
Author(s):  
Simon B Goldberg ◽  
Daniel M Bolt ◽  
Richard J Davidson

Background: Missing data are common in mobile health (mHealth) research, yet there has been little systematic investigation of how missingness is handled statistically in mHealth randomized controlled trials (RCTs). Although some missing data patterns (ie, missing at random [MAR]) may be adequately addressed using modern missing data methods such as multiple imputation and maximum likelihood techniques, these methods do not address bias when data are missing not at random (MNAR). It is typically not possible to determine whether missing data are MAR. However, higher attrition in active (ie, intervention) than in passive (ie, waitlist or no treatment) conditions in mHealth RCTs raises a strong likelihood of MNAR, for example, if active participants who benefit less from the intervention are more likely to drop out.

Objective: This study aims to systematically evaluate differential attrition and the methods used for handling missingness in a sample of mHealth RCTs comparing active and passive control conditions. We also aim to illustrate a modern model-based sensitivity analysis and a simpler fixed-value replacement approach that can be used to evaluate the influence of MNAR.

Methods: We reanalyzed attrition rates and predictors of differential attrition in a sample of 36 mHealth RCTs drawn from a recent meta-analysis of smartphone-based mental health interventions. We systematically evaluated the design features related to missingness and its handling. Data from a recent mHealth RCT were used to illustrate 2 sensitivity analysis approaches (a pattern-mixture model and a fixed-value replacement approach).

Results: Attrition in active conditions was, on average, roughly twice that of passive controls. Differential attrition was higher in larger studies and was associated with the use of MAR-based multiple imputation or maximum likelihood methods. Half of the studies (18/36, 50%) used these modern missing data techniques. None of the 36 mHealth RCTs reviewed conducted a sensitivity analysis to evaluate the possible consequences of data that are MNAR. Pattern-mixture model and fixed-value replacement sensitivity analysis approaches were introduced. Results from a recent mHealth RCT were shown to be robust to missing data reflecting worse outcomes in missing versus nonmissing scores in some but not all scenarios. A review of such scenarios helps to qualify the observation of significant treatment effects.

Conclusions: MNAR data due to differential attrition are likely in mHealth RCTs that use passive controls. Sensitivity analyses are recommended to allow researchers to assess the potential impact of MNAR on trial results.
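The two sensitivity analyses named in this abstract can be illustrated with a short, hypothetical sketch: missing active-arm scores are imputed either with a single pessimistic fixed value or with the observed mean shifted by a delta, and the treatment comparison is re-run across a grid of values to check whether the conclusion survives increasingly unfavorable MNAR assumptions. All data, grids, and the dropout mechanism below are simulated assumptions for illustration, not the authors' trial data or exact procedure.

```python
# Hypothetical sketch of two MNAR sensitivity analyses for a two-arm trial:
# (1) fixed-value replacement and (2) a single-imputation simplification of a
# delta-adjusted pattern-mixture model. Data and grids are simulated assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100

# Post-treatment symptom scores (lower = better); dropout in the active arm is
# made more likely for participants who improved least (an MNAR mechanism).
active = rng.normal(loc=8.0, scale=3.0, size=n)
control = rng.normal(loc=10.0, scale=3.0, size=n)
drop_prob = 1.0 / (1.0 + np.exp(-(active - active.mean())))
active_obs = np.where(rng.uniform(size=n) < 0.4 * drop_prob, np.nan, active)
control_obs = np.where(rng.uniform(size=n) < 0.15, np.nan, control)
control_filled = np.where(np.isnan(control_obs), np.nanmean(control_obs), control_obs)

def test_effect(active_filled):
    """Two-sample t-test of active vs control after filling missing scores."""
    t, p = stats.ttest_ind(active_filled, control_filled)
    return active_filled.mean() - control_filled.mean(), p

# (1) Fixed-value replacement: impute every missing active-arm score with one
# pessimistic value and sweep that value across a grid.
print("fixed-value replacement")
for fill in np.arange(8.0, 16.0, 2.0):
    diff, p = test_effect(np.where(np.isnan(active_obs), fill, active_obs))
    print(f"  fill={fill:5.1f}  mean diff={diff:6.2f}  p={p:.3f}")

# (2) Delta adjustment: impute missing active-arm scores with the observed
# active-arm mean shifted by delta (larger delta = worse assumed outcomes).
print("delta-adjusted pattern-mixture (single imputation)")
for delta in np.arange(0.0, 8.0, 2.0):
    fill = np.nanmean(active_obs) + delta
    diff, p = test_effect(np.where(np.isnan(active_obs), fill, active_obs))
    print(f"  delta={delta:4.1f}  mean diff={diff:6.2f}  p={p:.3f}")
```

A full pattern-mixture analysis would use multiple imputation rather than a single fill value, but the grid-sweep logic is the same: the treatment effect is re-estimated under progressively more pessimistic assumptions about the missing scores, and the scenario at which significance is lost indicates how sensitive the trial's conclusion is to MNAR dropout.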


Author(s):  
Ken Peach

This chapter focuses on the review process, the writing of proposals and the evaluation of science. Science is now usually funded through a proposal to a funding agency; if the proposal satisfies peer review and sufficient resources are available, it is funded. Peer review is at the heart of academic life and is used to assess research proposals, progress, publications and institutions. Peer review processes are discussed and, in light of this discussion, so is the art of proposal writing. The particular features of preparing fellowship proposals and of preparing for an institutional review are described. In addition, several of the methods used for evaluating and ranking research and research institutions are reviewed, including the Research Assessment Exercise and the Research Excellence Framework.

