Clinical trial metadata: defining and extracting metadata on the design, conduct, results and costs of 125 randomised clinical trials funded by the National Institute for Health Research Health Technology Assessment programme

2015, Vol 19 (11), pp. 1-138
Author(s): James Raftery, Amanda Young, Louise Stanton, Ruairidh Milne, Andrew Cook, ...

Background: By 2011, the Health Technology Assessment (HTA) programme had published the results of over 100 trials, with another 220 in progress. The aim of the project was to develop and pilot ‘metadata’ on clinical trials funded by the HTA programme.

Objectives: To develop and pilot questions describing clinical trials funded by the HTA programme in terms of how they meet the needs of the NHS with scientifically robust studies. The objectives were to develop relevant classification systems and definitions for use in answering relevant questions and to assess their utility.

Data sources: Published monographs and internal HTA documents.

Review methods: A database was developed, populated using retrospective data and used to answer questions under six prespecified themes. Questions were screened for feasibility in terms of data availability and/or ease of extraction. Answers were assessed by the authors in terms of completeness, success of the classification system used and resources required. Each question was scored to be retained, amended or dropped.

Results: One hundred and twenty-five randomised trials were included in the database from 109 monographs. Neither the International Standard Randomised Controlled Trial Number nor the term ‘randomised trial’ in the title proved a reliable way of identifying randomised trials. Only limited data were available on how the trials aimed to meet the needs of the NHS. Most trials were shown to follow their protocols, but updates were often necessary as hardly any trials recruited as planned. Details were often lacking on planned statistical analyses, but we did not have access to the relevant statistical plans. Almost all the trials reported on cost-effectiveness, often in terms of both the primary outcome and quality-adjusted life-years. The cost of trials was shown to depend on the number of centres and the duration of the trial. Of the 78 questions explored, 61 were well answered: 33 fully, and 28 would require amendment were the analysis updated. The other 17 could not be answered with readily available data.

Limitations: The study was limited by being confined to 125 randomised trials by one funder.

Conclusions: Metadata on randomised controlled trials can be expanded to include aspects of design, performance, results and costs. The HTA programme should continue and extend the work reported here.

Funding: The National Institute for Health Research HTA programme.

Author(s): Fay Chinnery, Gemma Bashevoy, Amanda Blatch-Jones, Lisa Douet, Sarah Puddicombe, ...

INTRODUCTION: This study compared the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme portfolio of research with the United Kingdom (UK) burden of disease, as measured by Disability-Adjusted Life Years (DALYs).

METHODS: Design: Cross-sectional study. Setting: The HTA Programme cohort included all funded applications (n = 363) received by the HTA Programme during the period 1 April 2011 to 31 March 2016. The sample contained primary research and evidence syntheses; all purely methodological studies were excluded, since these are not comparable to the other study types. Main Outcome Measure: The proportion of spend for each of the twenty-one Health Research Classification System (HRCS) health categories was compared with the burden of disease in the UK, calculated using 2015 DALY data from the Institute for Health Metrics and Evaluation (IHME) Global Health Data Exchange (GHDx).

RESULTS: The funded HTA Programme projects totalled about GBP 397 million of research spend, which broadly reflected the UK DALY burden. Overall, there was less than a 5 percent difference between the actual and predicted programme spend based on the burden of disease in the UK in most instances (seventeen of the twenty-one HRCS health categories). The largest categories of apportioned spend were Cancer (accounting for 12.1 percent of the portfolio) and Mental Health (11.8 percent of the portfolio), the latter compared with a 9.8 percent share of the UK burden of disease. The most notable deviations from DALYs, where spend was lower than disease burden, were in the Cancer, Cardiovascular and Musculoskeletal categories, which may reflect the importance of other funding, notably from charities.

CONCLUSIONS: The HTA Programme spend broadly aligns with the burden of disease as measured using DALYs. Discrepancies were expected owing to the programme remit and its approach to commissioning research to address market failure, particularly in areas that are not already well supported by research charities or industry. Regular review of DALY data during research prioritisation and commissioning allows the HTA Programme to identify and address shortfalls in disease areas and to balance its portfolio.


2011, Vol 27 (4), pp. 384-390
Author(s): Nicola Ring, Ruth Jepson, Karen Ritchie

Objectives: Synthesizing qualitative research is an important means of ensuring the needs, preferences, and experiences of patients are taken into account by service providers and policy makers, but the range of methods available can appear confusing. This study presents the methods for synthesizing qualitative research most used in health research to date and, specifically, those with a potential role in health technology assessment.

Methods: To identify reviews conducted using the eight main methods for synthesizing qualitative studies, nine electronic databases were searched using key terms including meta-ethnography and synthesis. A summary table groups the identified reviews by their use of the eight methods, highlighting the methods used most generally and specifically in relation to health technology assessment topics.

Results: Although there is debate about how best to identify and quality appraise qualitative research for synthesis, 107 reviews were identified using one of the eight main methods. Four methods (meta-ethnography, meta-study, meta-summary, and thematic synthesis) have been most widely used and have a role within health technology assessment. Meta-ethnography is the leading method for synthesizing qualitative health research. Thematic synthesis is also useful for integrating qualitative and quantitative findings. Four other methods (critical interpretive synthesis, grounded theory synthesis, meta-interpretation, and cross-case analysis) have been under-used in health research, and their potential in health technology assessments is currently under-developed.

Conclusions: Synthesizing individual qualitative studies has become increasingly common in recent years. Although this is still an emerging research discipline, such an approach is one means of promoting the patient-centeredness of health technology assessments.


Author(s):  
Henry S. Richardson

Current thinking about the methodology of health technology assessment (HTA) seems to be dominated by two fundamental tensions: (1) between maintaining a tight focus on quality-adjusted life-years and broadening its concern to a wider range of factors, and (2) between thinking of the evaluative dimensions that matter as objectively important factors or as ones that are ultimately of merely subjective importance. In this study, I will argue that health is a tremendously important all-purpose means to enjoying basic human capabilities, but a mere means, and not an end. The ends to which health is a means are manifold, requiring all those engaged in policy making to exercise intelligence in a continuing effort to identify them and to think through how they interrelate. Retreating to the subjective here would be at odds with the basic idea of HTA, which is to focus on certain objectively describable dimensions of what matters about health and to collect empirical evidence rigorously bearing on what produces improvements along those dimensions. To proceed intelligently in doing HTA, it is important to stay open to reframing and refashioning the ends we take to apply to that arena. The only way for that to happen, as an exercise of public, democratic policy making, is for the difficult value questions that arise when ends clash not to be buried in subjective preference information, but to be front and centre in the analysis.


2016, Vol 20 (76), pp. 1-254
Author(s): James Raftery, Steve Hanney, Trish Greenhalgh, Matthew Glover, Amanda Blatch-Jones

Background: This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review.

Objectives: (1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme.

Data sources: We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014.

Review methods: This narrative systematic literature review comprised an update, extension and analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, from August 2014 through to March 2015.

Results: The literature on impact assessment has expanded considerably. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal-type approaches included constructionist, realist, critical and performative. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers.

Discussion: The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence’s remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities’ research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF, but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established.

Limitations: There were limits to how far we could address the challenges that faced us as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus on the HTA programme alone.

Conclusions: Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines.

Funding: The National Institute for Health Research HTA programme.

