OP128 Improving Literature Searching For Evidence On Health Apps: The National Institute For Health And Care Excellence (NICE) MEDLINE And Embase (Ovid) Health Apps Search Filters

Author(s):  
Lynda Ayiku ◽  
Sarah Glover

Introduction Literature searching for evidence on apps in bibliographic databases is challenging because apps are often described with inconsistent terminology. Information specialists from the United Kingdom's National Institute for Health and Care Excellence (NICE) have developed validated search filters for reliably retrieving evidence about apps from MEDLINE and Embase (Ovid). Methods Medical informatics journals were hand-searched to create a 'gold standard' set of app references, which was divided into two sets. The development set provided the search terms for the filters. The filters were validated by calculating their recall against the validation set; target recall was >90 percent. A case study was then conducted to compare the number-needed-to-read (NNR) of the filters with previous non-validated MEDLINE and Embase app search strategies used for the 'MIB214 myCOPD app' NICE topic. NNR is the number of references screened to find each relevant reference. Results The MEDLINE and Embase filters achieved 98.6 percent and 98.5 percent recall against the validation set, respectively. In the case study they achieved 100 percent recall, reducing NNR from 348 to 147 in MEDLINE and from 456 to 271 in Embase. Conclusions The novel NICE health apps search filters retrieve evidence on apps from MEDLINE and Embase effectively, and more efficiently than the previous non-validated search strategies used at NICE.
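NNR as defined above is simply the reciprocal of precision: the total number of references screened divided by the number of relevant references found. A minimal sketch of the calculation, where the record counts are hypothetical (the abstract reports only the resulting ratios):

```python
def nnr(n_screened: int, n_relevant: int) -> float:
    """Number-needed-to-read: references screened per relevant reference.

    Equivalent to 1 / precision, where precision = n_relevant / n_screened.
    """
    return n_screened / n_relevant

# Hypothetical counts: if the old MEDLINE strategy retrieved 3,480 records
# of which 10 were relevant, and the filter cut retrieval to 1,470 records
# containing the same 10 relevant ones, NNR drops from 348 to 147.
print(nnr(3480, 10))  # 348.0
print(nnr(1470, 10))  # 147.0
```

A lower NNR means less screening effort per relevant reference, which is the efficiency gain the case study reports.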

Author(s):  
Lynda Ayiku ◽  
Thomas Hudson ◽  
Sarah Glover ◽  
Nicola Walsh ◽  
Rachel Adams ◽  
...  

Abstract Objectives Health apps are software programs that are designed to prevent, diagnose, monitor, or manage conditions. Inconsistent terminology for apps is used in research literature and bibliographic database subject headings, so it can be challenging to retrieve evidence about them in literature searches. Information specialists at the United Kingdom's National Institute for Health and Care Excellence (NICE) have developed novel validated search filters to retrieve evidence about apps from MEDLINE and Embase (Ovid). Methods A selection of medical informatics journals was hand searched to identify a “gold standard” (GS) set of references about apps. The GS set was divided into a development set and a validation set. The filters’ search terms were derived from and tested against the development set. An external development set containing app references from published NICE products was also used to inform the development of the filters. The filters were then validated using the validation set; target recall was >90 percent. The filters’ overall recall, specificity, and precision were calculated using all the references identified from the hand search. Results Both filters achieved 98.6 percent recall against their validation sets. Overall, the MEDLINE filter had 98.8 percent recall, 71.3 percent specificity, and 22.6 percent precision. The Embase filter had 98.6 percent recall, 74.9 percent specificity, and 24.5 percent precision. Conclusions The NICE health apps search filters retrieve evidence about apps from MEDLINE and Embase with high recall. Information professionals, researchers, and clinicians can apply them in literature searches to retrieve evidence about these interventions.
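The three reported metrics follow from the standard confusion-matrix definitions over the hand-search results. A sketch with invented counts (the abstract does not report the underlying cells; these were chosen only to roughly reproduce the MEDLINE filter's reported figures):

```python
def recall(tp: int, fn: int) -> float:
    """Proportion of relevant (gold standard) references the filter retrieves."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of irrelevant references the filter correctly excludes."""
    return tn / (tn + fp)

def precision(tp: int, fp: int) -> float:
    """Proportion of retrieved references that are relevant."""
    return tp / (tp + fp)

# Invented illustrative counts: 247 relevant references retrieved (tp),
# 3 relevant missed (fn), 846 irrelevant retrieved (fp),
# 2,100 irrelevant correctly excluded (tn).
tp, fn, fp, tn = 247, 3, 846, 2100
print(round(recall(tp, fn), 3))       # 0.988
print(round(specificity(tn, fp), 3))  # 0.713
print(round(precision(tp, fp), 3))    # 0.226
```

High recall with modest precision is typical of filters built for evidence synthesis, where missing a relevant record is costlier than screening extra ones.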


Author(s):  
Martin Hensher ◽  
Paul Cooper ◽  
Sithara Wanni Arachchige Dona ◽  
Mary Rose Angeles ◽  
Dieu Nguyen ◽  
...  

Abstract Objective The study sought to review the different assessment items that have been used within existing health app evaluation frameworks aimed at individual, clinician, or organizational users, and to analyze the scoring and evaluation methods used in these frameworks. Materials and Methods We searched multiple bibliographic databases and conducted backward searches of reference lists, using search terms that were synonyms of “health apps,” “evaluation,” and “frameworks.” The review covered publications from 2011 to April 2020. Studies on health app evaluation frameworks, and studies that elaborated on the scaling and scoring mechanisms applied in such frameworks, were included. Results Ten common domains were identified across general health app evaluation frameworks. A list of 430 assessment criteria was compiled across 97 identified studies. The most frequently used scaling mechanism was a 5-point Likert scale. Most studies adopted summary statistics to generate a total score for each app, the most popular approach being the calculation of a mean or average score. Other frameworks used no scaling or scoring mechanism and instead adopted criteria-based, pictorial, or descriptive approaches, or a “threshold” filter. Discussion There is wide variance in the approaches to evaluating health apps within published frameworks, and this variance leads to ongoing uncertainty about how to evaluate them. Conclusions A new evaluation framework is needed that can integrate the full range of evaluative criteria within one structure and provide summative guidance on health app rating, to support individual app users, clinicians, and health organizations in choosing or recommending the best health app.


2019 ◽  
Vol 14 (1) ◽  
pp. 65-67
Author(s):  
Ann Glusker

A Review of: Golder, S., Wright, K., & Loke, Y.K. (2018). The development of search filters for adverse effects of surgical interventions in MEDLINE and Embase. Health Information and Libraries Journal, 35(2), 121-129. https://doi.org/10.1111/hir.12213 Abstract Objective – “To develop and validate search filters for MEDLINE and Embase for the adverse effects of surgical interventions” (p.121). Design – From a universe of systematic reviews, the authors created “an unselected cohort…where relevant articles are not chosen because of the presence of adverse effects terms” (p.123). The studies referenced in the cohort reviews were extracted to create an overall citation set. From this, three equal-sized sets of studies were created by random selection, and used for: development of a filter (identifying search terms); evaluation of the filter (testing how well it worked); and validation of the filter (assessing how well it retrieved relevant studies). Setting – Systematic reviews of adverse effects from the Database of Abstracts of Reviews of Effects (DARE), published in 2014. Subjects – 358 studies derived from the references of 19 systematic reviews (352 available in MEDLINE, 348 available in Embase). Methods – Word and phrase frequency analysis was performed on the development set of articles to identify a list of terms, starting with the term creating the highest recall from titles and abstracts of articles, and continuing until adding new search terms produced no more new records recalled. The search strategy thus developed was then tested on the evaluation set of articles. In this case, using the strategy recalled all of the articles which could be obtained using generic search terms; however, adding specific search terms (such as the MeSH term “surgical site infection”) improved recall. Finally, the strategy incorporating both generic and specific search terms for adverse effects was used on the validation set of articles. 
Search strategies used are included in the article, as is a list, in the discussion section, of MeSH and Embase indexing terms specific to or suggestive of adverse effects. Main Results – “In each case the addition of specific adverse effects terms could have improved the recall of the searches” (p. 127). This was true for all six cases (the development, evaluation, and validation study sets for each of MEDLINE and Embase) in which specific terms were added to searches using generic terms and the recall percentages were compared. Conclusion – While no filter can deliver 100% of items in a given standard set of studies on adverse effects (since title and abstract fields may not contain any indication of relevance to the topic), adding specific adverse effects terms to generic ones while developing filters is shown to improve recall for surgery-related adverse effects (as it does for drug-related adverse effects). The use of filters requires user engagement and critical analysis; at the same time, deploying well-constructed filters can have many benefits, including helping users, especially clinicians, get a search started; managing a large and unwieldy set of retrieved citations; and suggesting new search strategies.


2019 ◽  
Vol 76 (Suppl 1) ◽  
pp. A84.3-A84
Author(s):  
Stefania Curti ◽  
Stefano Mattioli

Objectives To identify efficient PubMed search filters for the study of outdoor air pollution determinants of disease. Methods We listed Medical Subject Headings (MeSH) and non-MeSH terms that seemed pertinent to exposure to outdoor air pollutants as a determinant of disease. The proportion of potentially pertinent articles retrieved by each term was estimated. We then formulated two filters: one 'more specific' and one 'more sensitive'. Their performance was compared against a gold standard of systematic reviews on associations between diseases and outdoor air pollution. For both filters, we calculated the number (of abstracts) needed to read (NNR) to identify one potentially pertinent article, exploring three diseases potentially associated with outdoor air pollution. Results The combination of terms that yielded a proportion of potentially pertinent articles ≥40 percent was used to formulate the 'more specific' filter. The 'more sensitive' filter was formulated from the combination of all search terms under study. Compared with the gold standard, the 'more specific' filter had the highest specificity (67.4 percent, with a sensitivity of 82.5 percent) and the 'more sensitive' filter had the highest sensitivity (98.5 percent, with a specificity of 47.9 percent). The NNR to find one potentially pertinent article was 1.9 for the 'more specific' filter and 3.3 for the 'more sensitive' filter. Conclusions The proposed search filters support the investigation of environmental determinants of medical conditions. They are published in: Curti S, et al. PubMed search filters for the study of putative outdoor air pollution determinants of disease. BMJ Open. 2016;6(12):e013092.


2020 ◽  
Vol 108 (4) ◽  
Author(s):  
Julie Glanville ◽  
Eleanor Kotas ◽  
Robin Featherstone ◽  
Gordon Dooley

Objective: The Cochrane Handbook of Systematic Reviews contains search filters to find randomized controlled trials (RCTs) in Ovid MEDLINE: one maximizing sensitivity and another balancing sensitivity and precision. These filters were originally published in 1994 and were adapted and updated in 2008. To determine the performance of these filters, the authors tested them and thirty-six other MEDLINE filters against a large new gold standard set of relevant records. Methods: We identified a gold standard set of RCT reports published in 2016 from the Cochrane CENTRAL database of controlled clinical trials. We retrieved the records in Ovid MEDLINE and combined these with each RCT filter. We calculated their sensitivity, relative precision, and f-scores. Results: The gold standard comprised 27,617 records. MEDLINE searches were run on July 16, 2019. The most sensitive RCT filter was Duggan et al. (sensitivity=0.99). The Cochrane sensitivity-maximizing RCT filter had a sensitivity of 0.96 but was more precise than Duggan et al. (0.14 compared to 0.04 for Duggan). The most precise RCT filters had 0.97 relative precision and 0.83 sensitivity. Conclusions: The Cochrane Ovid MEDLINE sensitivity-maximizing RCT filter can continue to be used by Cochrane reviewers and to populate CENTRAL, as it has very high sensitivity and slightly better precision relative to more sensitive filters. The results of this study, which used a very large gold standard to compare the performance of all known RCT filters, allow searchers to make better-informed decisions about which filters to use for their work.
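The f-score mentioned above combines sensitivity and precision into a single figure. A sketch, assuming the standard F1 harmonic mean and using the filter figures quoted in the abstract (the study itself uses relative precision, so these are illustrative values only):

```python
def f_score(sensitivity: float, precision: float) -> float:
    """F1 score: harmonic mean of sensitivity (recall) and precision."""
    return 2 * sensitivity * precision / (sensitivity + precision)

# Cochrane sensitivity-maximizing filter: sensitivity 0.96, precision 0.14.
print(round(f_score(0.96, 0.14), 3))  # 0.244
# Most precise filters: sensitivity 0.83, precision 0.97.
print(round(f_score(0.83, 0.97), 3))  # 0.895
```

The harmonic mean penalizes imbalance, which is why a high-sensitivity, low-precision filter scores much lower than a filter where the two are closer together.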


2015 ◽  
Vol 103 (1) ◽  
pp. 22-30 ◽  
Author(s):  
John J. Frazier ◽  
Corey D. Stein ◽  
Eugene Tseytlin ◽  
Tanja Bekhuis

Author(s):  
Julie Glanville ◽  
Eleanor Kotas ◽  
Robin Featherstone ◽  
Gordon Dooley

Introduction The Cochrane Handbook of Systematic Reviews contains two search filters to find randomized controlled trials (RCTs) in Ovid MEDLINE: a sensitivity-maximizing RCT filter and a sensitivity- and precision-maximizing RCT filter. The RCT search strategies were originally published in 1994 and have been adapted and updated, most recently in 2008. To determine whether the Cochrane filters are still performing adequately to inform Cochrane reviews, we tested the performance of the Cochrane filters and 36 other MEDLINE filters against a large new gold standard set of relevant records. Methods We identified a gold standard set of RCT reports published in 2016 from the Cochrane CENTRAL database of controlled clinical trials and retrieved the records in Ovid MEDLINE using their PubMed identifiers. Each RCT filter was run in MEDLINE and combined with the gold standard set of records to determine its sensitivity, precision, and f-score. Results The gold standard comprised 27,617 records, and the searches were run on 16 July 2019. The most sensitive RCT filter was Duggan (sensitivity 0.99). The Cochrane sensitivity-maximizing RCT filter had a sensitivity of 0.96 but was more precise than Duggan (0.14 compared with 0.04 for Duggan). The most precise RCT filters were Chow, Glanville/Lefebvre, Royle/Waugh, and Dumbrique (precision 0.97, sensitivity 0.83). The most precise Cochrane filter was the sensitivity- and precision-maximizing RCT filter. Conclusions The Cochrane MEDLINE sensitivity-maximizing RCT filter can continue to be used by Cochrane reviewers and CENTRAL compilers, as it has very high sensitivity and more acceptable precision than many higher-sensitivity filters. Slightly more sensitive filters are available, but with lower precision than the Cochrane sensitivity-maximizing RCT filter. These other filters may be preferred when combined with a subject search, where record numbers may be more manageable than when searching the whole of MEDLINE.


Author(s):  
Ravit Alfandari ◽  
Brian J Taylor

Abstract Skills of the ‘information age’ need to be applied to social work. Conceptual and practical aspects of using online bibliographic databases to identify research were explored using multi-professional decision-making in child protection as a case study. Five databases (Social Science Citation Index, Scopus, Medline, Social Work Abstracts and Cochrane Central Register of Controlled Trials) were searched for relevant studies, retrieving 6,934 records of which fifty-eight studies were identified as relevant. The usefulness of specific search terms and the process of learning from the terminology of previous searches are illustrated, as well as the value of software to manage retrieved studies. Scopus had the highest sensitivity (retrieving the highest number of relevant articles) and retrieved the most articles not retrieved by any other database (exclusiveness). All databases had low precision on this topic, despite extensive efforts in selecting search terms. Cumulative knowledge about search strategies and empirical comparison of database utility helps to increase the efficiency of systematic literature searching. Such endeavours encourage and support professionals to use the best available evidence to inform practice and policy.


2018 ◽  
Vol 7 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Patric Gibbons ◽  
Edwin D. Boudreaux ◽  
Brianna L. Haskins

2019 ◽  
Vol 33 (4) ◽  
pp. 470-474 ◽  
Author(s):  
Judith AC Rietjens ◽  
Wichor M Bramer ◽  
Eric CT Geijteman ◽  
Agnes van der Heide ◽  
Wendy H Oldenmenger

Background: Healthcare professionals and researchers in the field of palliative care often have difficulty finding relevant articles in online databases. Standardized search filters may help improve the efficiency and quality of such searches, but previously developed filters showed only moderate performance. Aim: To develop and validate a specific search filter and a sensitive search filter for the field of palliative care. Design: We used a novel, objective method for search filter development. First, we created a gold standard set, which was split into three groups: a term identification set, a filter development set, and a filter validation set. After creating the filters in PubMed, we translated them into search filters for Ovid MEDLINE, Embase, CINAHL, PsycINFO, and the Cochrane Library. We calculated the specificity, sensitivity, and precision of both filters. Results: The specific filter had a specificity of 97.4%, a sensitivity of 93.7%, and a precision of 45%. The sensitive filter had a sensitivity of 99.6%, a specificity of 92.5%, and a precision of 5%. Conclusion: Our search filters can support literature searches in the field of palliative care. Our specific filter retrieves 93.7% of relevant articles, while 45% of the retrieved articles are relevant; this filter can be used to find answers to questions when time is limited. Our sensitive filter finds 99.6% of all relevant articles and may, for instance, help in conducting systematic reviews. Both filters perform better than previously developed search filters in the field of palliative care.

