Nature and reporting characteristics of UK health technology assessment systematic reviews

2018 ◽  
Vol 18 (1) ◽  
Author(s):  
Christopher Carroll ◽  
Eva Kaltenthaler

2019 ◽  
Vol 35 (S1) ◽  
pp. 49-50 ◽ 
Author(s):  
Miriam Luhnen ◽  
Barbara Prediger ◽  
Edmund A.M. Neugebauer ◽  
Tim Mathes

Introduction: When making decisions in health care, it is essential to consider economic evidence about an intervention. The objective of this study was to analyze the methods applied in systematic reviews of economic evaluations in Health Technology Assessment (HTA) and to identify common challenges.
Methods: We manually searched the webpages of HTA organizations and included HTA reports published since 2015. Prerequisites for inclusion were the conduct of a systematic review of economic evaluations in at least one electronic database and publication in English, German, French, or Spanish. Methodological features were extracted into standardized tables. We calculated descriptive statistics (e.g., median, range) to describe the applied methods. Data were synthesized in a structured narrative way.
Results: Eighty-three reports were included in the analysis. We identified inexplicable heterogeneity, particularly concerning the literature search strategy, data extraction, and the assessment of quality and applicability. Furthermore, process steps were often missing or reported in a nontransparent way. The use of a standardized data extraction form was indicated in one-third of reports (32 percent). Fifty-four percent of authors systematically appraised the included studies. In 10 percent of reports, the applicability of included studies was assessed. Involvement of two reviewers was rarely reported for study selection (43 percent), data extraction (28 percent), and quality assessment (39 percent).
Conclusions: The methods applied in systematic reviews of economic evaluations in HTA, and the quality of their reporting, are very heterogeneous. Efforts toward detailed, standardized guidance for the preparation of systematic reviews of economic evaluations seem necessary. A general harmonization and improvement of the applied methodology would increase their value for decision makers.
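The tabulate-and-summarize workflow described under Methods can be sketched in a few lines; a minimal illustration assuming a hypothetical extraction table with one row per HTA report (the column names and values are invented, not taken from the study):

```python
import pandas as pd

# Hypothetical extraction table: one row per HTA report, one boolean
# column per methodological feature (names and values are illustrative).
reports = pd.DataFrame({
    "used_extraction_form": [True, False, False],
    "quality_appraised": [True, True, False],
    "dual_study_selection": [False, True, False],
    "n_databases_searched": [4, 2, 7],
})

# Share of reports exhibiting each feature, like the abstract's percentages.
feature_shares = reports.select_dtypes(bool).mean().mul(100).round(1)
print(feature_shares)

# Descriptive statistics (median, range) for numeric features.
n_db = reports["n_databases_searched"]
print(f"Databases searched: median {n_db.median():.0f}, "
      f"range {n_db.min()}-{n_db.max()}")
```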


2016 ◽  
Vol 20 (76) ◽  
pp. 1-254 ◽  
Author(s):  
James Raftery ◽  
Steve Hanney ◽  
Trish Greenhalgh ◽  
Matthew Glover ◽  
Amanda Blatch-Jones

Background: This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review.
Objectives: (1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme.
Data sources: We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014.
Review methods: This narrative systematic literature review comprised an update, an extension, and an analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, from August 2014 through to March 2015.
Results: The literature on impact assessment has expanded considerably. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal-type approaches included constructionist, realist, critical and performative ones. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers.
Discussion: The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence's remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities' research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established.
Limitations: There were limits to how far we could address the challenges that arose as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus on the HTA programme alone.
Conclusions: Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines.
Funding: The National Institute for Health Research HTA programme.
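The review's caution about monetised returns can be made concrete with a small worked example; all figures below are invented for illustration and are not drawn from the report:

```python
# Illustrative monetisation-of-impact calculation (all figures invented).
# Monetised return = (value of health gain x share attributable to the
# research programme) / programme spend. The attribution share is the
# assumption the review flags as doing most of the work.
health_gain_value = 500e6   # monetised value of observed health gains (GBP)
programme_spend = 100e6     # research programme cost (GBP)

for attribution in (0.05, 0.25, 0.50):
    roi = health_gain_value * attribution / programme_spend
    print(f"attribution {attribution:.0%}: return {roi:.2f} per GBP spent")
```

Varying the attribution share from 5 percent to 50 percent swings the computed return tenfold, which is why the review treats monetised estimates as heavily assumption-dependent.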


2018 ◽  
Vol 34 (6) ◽  
pp. 547-554 ◽  
Author(s):  
Mick Arber ◽  
Julie Glanville ◽  
Jaana Isojarvi ◽  
Erin Baragula ◽  
Mary Edwards ◽  
...  

Objectives: This study investigated which databases, and which combinations of databases, should be used to identify economic evaluations (EEs) to inform systematic reviews. It also investigated the characteristics of studies not identified by database searches and evaluated the success of MEDLINE search strategies used within typical reviews in retrieving EEs indexed in MEDLINE.
Methods: A quasi-gold standard (QGS) set of EEs was collected from reviews of EEs. The number of QGS records found in nine databases was calculated, and the most efficient combination of databases was determined. The number and characteristics of QGS records not retrieved from the databases were collected. Reproducible MEDLINE strategies from the reviews were rerun to calculate each strategy's sensitivity and precision in finding QGS records.
Results: The QGS comprised 351 records. Across all databases, 337/351 (96 percent) QGS records were identified. Embase yielded the most records (314; 89 percent). Four databases were needed to retrieve all 337 records: Embase + Health Technology Assessment database + (MEDLINE or PubMed) + Scopus. Four percent (14/351) of records could not be found in any database. Twenty-nine of forty-one (71 percent) reviews reported a reproducible MEDLINE strategy. Ten of twenty-nine (34.5 percent) strategies missed at least one QGS record in MEDLINE. Across all twenty-nine MEDLINE searches, 25/143 records were missed (17.5 percent). Mean sensitivity was 89 percent and mean precision was 1.6 percent.
Conclusions: Searching beyond key databases for published EEs may be inefficient, provided the search strategies in those key databases are adequately sensitive. Additional search approaches should be used to identify unpublished evidence (grey literature).
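The sensitivity and precision measures used for the rerun MEDLINE strategies follow directly from their definitions; a minimal sketch with invented record identifiers (the 89 percent / 1.6 percent output merely mirrors the reported means):

```python
# Sensitivity/precision of a search strategy against a quasi-gold
# standard (QGS), as described in the Methods. Identifiers are
# illustrative, not the study's actual records.

def evaluate_strategy(retrieved: set[str], qgs_in_database: set[str]) -> tuple[float, float]:
    """Return (sensitivity, precision) for one rerun search strategy.

    sensitivity = QGS records retrieved / QGS records indexed in the database
    precision   = QGS records retrieved / all records retrieved
    """
    hits = retrieved & qgs_in_database
    sensitivity = len(hits) / len(qgs_in_database)
    precision = len(hits) / len(retrieved)
    return sensitivity, precision

# Example: a strategy retrieving 500 records, 8 of which belong to the
# 9 QGS records indexed in MEDLINE for this review topic.
qgs = {f"qgs{i}" for i in range(9)}
retrieved = {f"qgs{i}" for i in range(8)} | {f"noise{i}" for i in range(492)}
sens, prec = evaluate_strategy(retrieved, qgs)
print(f"sensitivity {sens:.0%}, precision {prec:.1%}")
```

Determining the "most efficient combination of databases" is, in the same spirit, a small set-cover problem over the QGS records each database indexes.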


Trials ◽  
2015 ◽  
Vol 16 (S2) ◽  
Author(s):  
Sheetal Bhurke ◽  
Andrew Cook ◽  
Anna Tallant ◽  
Amanda Young ◽  
Elaine Williams ◽  
...  

2018 ◽  
Vol 34 (S1) ◽  
pp. 65-65 ◽ 
Author(s):  
Maria Eugenia Esandi ◽  
Iñaki Gutiérrez-Ibarluzea ◽  
Nora Ibargoyen-Roteta ◽  
Brian Godman

Introduction: A health technology has no or low added value when it is harmful and/or delivers limited health gain relative to its cost, representing an inefficient allocation of health resources. A joint effort by the Health Technology Assessment International (HTAi) interest group (IG) on disinvestment and early awareness, the IG on ethics, the EuroScan network, and the International Network of Agencies for Health Technology Assessment (INAHTA) aims to design a toolkit to aid organizations and individuals considering disinvestment activities. We synthesized state-of-the-art methods for identifying candidate technologies for disinvestment and propose a framework for executing this task.
Methods: We searched for systematic reviews on disinvestment and compared the methods used for identifying potential candidates. A descriptive analysis was performed, including the sources of evidence used and the methods for selection and filtration.
Results: Ten systematic reviews were retrieved, and the methods of 29 disinvestment initiatives were compared. A new framework for identifying potential candidates was proposed. It comprises seven basic approaches based on the wide definition of evidence provided by Lomas et al., 11 triggers for disinvestment adapted from Elshaug's proposal, and 13 methods for applying these triggers, grouped into embedded and ad hoc methods.
Conclusions: Identification methods have been described in the literature and tested in different contexts. Context is crucial in determining the 'not to do' practices, as they are described in different sources.
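One way to picture how triggers are applied to identify candidates is a simple predicate filter; the triggers below are invented stand-ins, not the framework's actual eleven:

```python
from dataclasses import dataclass

# Illustrative trigger-based identification of disinvestment candidates.
# The trigger predicates are hypothetical; the framework's 11 triggers
# (adapted from Elshaug) are not reproduced here.

@dataclass
class Technology:
    name: str
    incremental_cost: float      # cost vs. best alternative
    incremental_benefit: float   # health gain vs. best alternative
    safety_signal: bool          # post-market harm signal reported

TRIGGERS = [
    ("harm", lambda t: t.safety_signal),
    ("no added benefit", lambda t: t.incremental_benefit <= 0),
    ("cost without gain",
     lambda t: t.incremental_cost > 0 and t.incremental_benefit <= 0),
]

def flag_candidates(technologies):
    """Return (technology, fired_triggers) pairs for review, not verdicts."""
    flagged = []
    for tech in technologies:
        fired = [name for name, pred in TRIGGERS if pred(tech)]
        if fired:
            flagged.append((tech, fired))
    return flagged

candidates = flag_candidates([
    Technology("device A", incremental_cost=1200.0,
               incremental_benefit=0.0, safety_signal=False),
    Technology("drug B", incremental_cost=-50.0,
               incremental_benefit=0.1, safety_signal=False),
])
for tech, fired in candidates:
    print(tech.name, "->", fired)
```

Triggers only flag candidates for deliberation; as the Conclusions note, context determines whether a flagged practice is actually a 'not to do'.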


2020 ◽  
Author(s):  
Käthe Goossen ◽  
Simone Hess ◽  
Carole Lunny ◽  
Dawid Pieper

Background: When conducting an Overview of Reviews on a health-related topic, it is unclear which combination of bibliographic databases authors should use to search for systematic reviews (SRs). Our goal was to determine which databases included the most SRs and to identify an optimal database combination for searching for SRs.
Methods: A set of 86 Overviews of Reviews with 1219 included SRs was extracted from a previous study. Inclusion of the SRs was assessed in MEDLINE, CINAHL, Embase, Epistemonikos, PsycINFO, and TRIP. The mean inclusion rate (percentage of included SRs) and the corresponding 95% confidence interval were calculated for each database individually, as well as for combinations of MEDLINE with each other database and reference checking.
Results: Inclusion of SRs was higher in MEDLINE than in any other single database (mean inclusion rate 89.7%; 95% confidence interval [89.0–90.3%]). Combined with reference checking, this value increased to 93.7% [93.2–94.2%]. The best combination of two databases plus reference checking consisted of MEDLINE and Epistemonikos (99.2% [99.0–99.3%]). Stratification by Health Technology Assessment reports (97.7% [96.5–98.9%]) vs. Cochrane Overviews (100.0%) vs. non-Cochrane Overviews (99.3% [99.1–99.4%]) showed that inclusion was only slightly lower for Health Technology Assessment reports. However, MEDLINE, Epistemonikos, and reference checking remained the best combination. Among the 10/1219 SRs not identified by this combination, five were published as websites rather than in journals, two were included in CINAHL and Embase, and one was included in the database ERIC.
Conclusions: MEDLINE and Epistemonikos, complemented by reference checking of included studies, is the best database combination for identifying systematic reviews on health-related topics.
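A minimal sketch of the inclusion-rate calculation described under Methods, assuming per-Overview coverage proportions and a normal-approximation confidence interval (one plausible reading of the abstract, not the study's confirmed procedure):

```python
import statistics
from math import sqrt

# Hypothetical per-Overview inclusion rates for one database combination:
# the share of each Overview's included SRs found in, e.g.,
# MEDLINE + Epistemonikos + reference checking. Values are invented.
rates = [1.0, 0.98, 1.0, 0.99, 0.97, 1.0, 1.0, 0.985]

mean = statistics.mean(rates)
sd = statistics.stdev(rates)
# Normal-approximation 95% CI for the mean across Overviews.
half_width = 1.96 * sd / sqrt(len(rates))
print(f"mean inclusion rate {mean:.1%} "
      f"[{mean - half_width:.1%}-{mean + half_width:.1%}]")
```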


Author(s):  
Amber Watt ◽  
Alun Cameron ◽  
Lana Sturm ◽  
Timothy Lathlean ◽  
Wendy Babidge ◽  
...  

In the article entitled “Rapid reviews versus full systematic reviews: An inventory of current methods and practice in health technology assessment,” by Watt et al. in volume 24, number 2 (Spring 2008) of International Journal of Technology Assessment in Health Care, the affiliation of Stephen Blamey is incorrectly listed as Department of Health & Ageing. Dr. Blamey is the current Chair of the Medical Services Advisory Committee (MSAC). MSAC is an independent scientific committee comprising individuals with expertise in clinical medicine, health economics, and consumer matters. The Department of Health & Ageing administers funding and operations for MSAC. However, members of MSAC act independently of the Department. As Chair of MSAC, Dr. Blamey can be contacted through the Department. Dr. Blamey is not affiliated with the Department of Health & Ageing, and his contribution to the above-mentioned article does not reflect its policy. Dr. Blamey wishes to apologize for this misunderstanding.

