Locating the Best Available Research

Author(s):  
John C. Norcross ◽  
Thomas P. Hogan ◽  
Gerald P. Koocher ◽  
Lauren A. Maggio

This chapter discusses the steps EBP clinicians should take to find evidence that addresses their clinical questions: first searching background information resources, which provide overviews of topics, and then moving to filtered information resources, which provide access to time-saving, synthesized information. To help clinicians navigate these resources, the chapter summarizes basic search concepts that apply across resources, such as Boolean operators, truncation, wild cards, and limits. The chapter describes key background information sources, such as eMedicine, textbooks, and Wikipedia. It then discusses key filtered information sources, including the Cochrane Database of Systematic Reviews, BMJ Clinical Evidence, and several evidence-based journals. The chapter provides tailored tips for optimal searching within each resource introduced.
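The basic search concepts summarized above can be sketched concretely. The Python snippet below is an illustrative sketch only: the helper function and the example terms are hypothetical and not taken from the chapter. It shows the usual pattern of OR-ing synonyms within a concept, AND-ing the concept groups together, broadening terms with truncation (*), and narrowing results with a date limit (PubMed-style `[dp]` field tag):

```python
# Hypothetical sketch of basic search concepts:
# Boolean operators (AND/OR), truncation (*), and a date limit.

def build_query(synonyms_a, synonyms_b, year_from=None):
    """Combine two concept groups: synonyms within a group are
    OR'ed together; the two groups are then AND'ed."""
    group_a = " OR ".join(synonyms_a)
    group_b = " OR ".join(synonyms_b)
    query = f"({group_a}) AND ({group_b})"
    if year_from is not None:
        # A limit narrows results, here by publication date.
        query += f" AND {year_from}:3000[dp]"
    return query

# Truncation: therap* matches therapy, therapies, therapeutic, ...
q = build_query(["depression", "depressive disorder*"],
                ["cognitive behavioral therap*", "CBT"])
print(q)
# → (depression OR depressive disorder*) AND (cognitive behavioral therap* OR CBT)
```

The same two-step pattern (broaden with OR, then narrow with AND and limits) underlies most of the resource-specific tips the chapter offers.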

2008 ◽  
Vol 3 (2) ◽  
pp. 3 ◽  
Author(s):  
Alison Farrell

Objective – This project sought to identify the five most frequently used evidence-based bedside information tools in Canadian health libraries, to examine librarians’ attitudes towards these tools, and to test the tools’ comprehensiveness. Methods – The author developed a definition of evidence-based bedside information tools and a list of resources that fit this definition. Participants were respondents to a survey distributed via the CANMEDLIB electronic mail list. The survey asked library staff which evidence-based bedside information tools were used most frequently. Clinical questions were used to measure the comprehensiveness of each resource and the levels of evidence it provided for each question. Results – Survey respondents reported that the five most used evidence-based bedside information tools in their libraries were UpToDate, BMJ Clinical Evidence, First Consult, Bandolier and ACP Pier. Librarians were generally satisfied with the ease of use, efficiency and informative nature of these resources. The resource assessment determined that not all of these tools are comprehensive, either in their ability to answer clinical questions or in their inclusion of levels of evidence. UpToDate provided information for the greatest number of clinical questions, but it supplied a level of evidence only 7% of the time. ACP Pier provided information for only 50% of the clinical questions, but it supplied levels of evidence for all of them. Conclusion – UpToDate and BMJ Clinical Evidence were both rated as easy to use and informative. However, neither product generally includes levels of evidence, so it would be prudent for the practitioner to critically appraise information from these sources before using it in a patient care setting.
ACP Pier eliminates the critical appraisal stage, reducing the time it takes to go from forming a clinical question to implementing the answer, but survey respondents did not rate it as highly in terms of usability. There remains a need for user-friendly, comprehensive resources that provide evidence summaries relying on levels of evidence to support their conclusions.


2021 ◽  
Author(s):  
Carole Lunny ◽  
Sai Surabi Thirugnanasampanthar ◽  
Salman Kanji ◽  
Nicola Ferri ◽  
Dawid Pieper ◽  
...  

Abstract Introduction: The rapid growth of systematic reviews (SRs) presents notable challenges for decision-makers seeking to answer clinical questions. Overviews of systematic reviews aim to address these challenges by summarising the results of SRs and making sense of potentially discrepant SR results and conclusions. In 1997, Jadad created an algorithm to assess discordance in results across SRs on the same topic. Since this tool pre-dates the advent of overviews, it has been applied inconsistently in this context. Our study aims to (a) replicate assessments done in a sample of overviews using the Jadad algorithm to determine whether the same SR would have been chosen, (b) evaluate the Jadad algorithm in terms of utility, efficiency, and comprehensiveness, and (c) describe how overviews address discordance in results across multiple SRs. Methods and Analysis: We will use a database of 1218 overviews (2000-2020), created from a bibliometric study, as the basis of our search for overviews assessing discordance. This bibliometric study searched MEDLINE (Ovid), Epistemonikos, and the Cochrane Database of Systematic Reviews for overviews. We will include any overview using the Jadad (1997) algorithm or another method to assess discordance. The first 30 overviews screened at the full-text stage by two independent reviewers will be included. We will replicate the Jadad assessments reported in these overviews, compare our outcomes qualitatively, and evaluate the differences between our Jadad assessment of discordance and the overviews’ assessments. Ethics and Dissemination: No ethics approval was required, as no human subjects were involved. In addition to publishing in an open-access journal, we will disseminate evidence summaries through formal and informal conferences, academic websites, and social media platforms. This is the first study to comprehensively evaluate and replicate Jadad algorithm assessments of discordance in SRs.


2008 ◽  
Vol 3 (1) ◽  
pp. 78
Author(s):  
Martha Ingrid Preddie

A review of: McKibbon, K. Ann, and Douglas B. Fridsma. “Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs.” Journal of the American Medical Informatics Association 13.6 (2006): 653-9. Objective – To determine if electronic information resources selected by primary care physicians improve their ability to answer simulated clinical questions. Design – An observational study utilizing hour-long interviews and think-aloud protocols. Setting – The offices and clinics of primary care physicians in Canada and the United States. Subjects – 25 primary care physicians of whom 4 were women, 17 were from Canada, 22 were family physicians, and 24 were board certified. Methods – Participants provided responses to 23 multiple-choice questions. Each physician then chose two questions and looked for the answers utilizing information resources of their own choice. The search processes, chosen resources and search times were noted. These were analyzed along with data on the accuracy of the answers and certainties related to the answer to each clinical question prior to the search. Main results – Twenty-three physicians sought answers to 46 simulated clinical questions. Utilizing only electronic information resources, physicians spent a mean of 13.0 (SD 5.5) minutes searching for answers to the questions, an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes to answer the second question. On average, 1.8 resources were utilized per question. Resources that summarized information, such as the Cochrane Database of Systematic Reviews, UpToDate and Clinical Evidence, were favored 39.2% of the time, MEDLINE (Ovid and PubMed) 35.7%, and Internet resources including Google 22.6%. Almost 50% of the search and retrieval strategies were keyword-based, while MeSH, subheadings and limiting were used less frequently. 
On average, physicians answered 10 of 23 (43.5%) questions accurately before searching. For questions that were searched using clinician-selected electronic resources, 18 (39.1%) of the 46 answers were accurate before searching, while 19 (41.3%) were accurate after searching. The net difference of one correct answer arose because the answers to 5 (10.9%) questions changed from correct to incorrect, while the answers to 6 (13.0%) questions changed from incorrect to correct. The ability to provide correct answers differed among the various resources. Google and Cochrane provided the correct answers about 50% of the time, while PubMed, Ovid MEDLINE, UpToDate, Ovid Evidence Based Medicine Reviews and InfoPOEMs were more likely to be associated with incorrect answers. Physicians also seemed unable to determine when they needed to search for information in order to make an accurate decision. Conclusion – Clinician-selected electronic information resources did not guarantee accuracy in the answers provided to simulated clinical questions. At times the use of these resources caused physicians to change self-determined correct answers to incorrect ones. The authors state that this was possibly due to factors such as poor choice of resources, ineffective search strategies, time constraints and automation bias. Library and information practitioners have an important role to play in identifying and advocating for appropriate information resources to be integrated into the electronic medical record systems provided by health care institutions to ensure evidence based health care delivery.


PLoS ONE ◽  
2019 ◽  
Vol 14 (12) ◽  
pp. e0226305
Author(s):  
David A. Groneberg ◽  
Stefan Rolle ◽  
Michael H. K. Bendels ◽  
Doris Klingelhöfer ◽  
Norman Schöffel ◽  
...  

Author(s):  
Sarah L Turvey ◽  
Nasir Hussain ◽  
Laura Banfield ◽  
Mohit Bhandari

Introduction: As evidence-based medicine is increasingly adopted in medical and surgical practice, effective processing and interpretation of the medical literature are imperative. Databases presenting the contents of the medical literature have been developed; however, their efficacy merits investigation. The objective of this study was to quantify surgical and orthopaedic content within five evidence-based medicine resources: DynaMed, Clinical Evidence, UpToDate, PIER, and First Consult. Methods: We abstracted surgical and orthopaedic content from UpToDate, DynaMed, PIER, First Consult, and Clinical Evidence. We defined surgical content as that which involved surgical interventions. We classified surgical content by specialty and, for orthopaedics, by subspecialty. The amount of surgical content, as measured by the number of relevant reviews, was compared with the total number of reviews in each database. Likewise, the amount of orthopaedic content was compared with the total number of reviews and the total number of surgical reviews in each database. Results: Across all databases, containing a total of 13,268 reviews, we identified an average of 18% surgical content. Specifically, First Consult and PIER contained 28% surgical content as a percentage of total database content, DynaMed contained 14%, Clinical Evidence 11%, and UpToDate only 9.5%. Overall, general surgery, pediatrics, and oncology were the most common specialty areas across all databases. Discussion: Our findings suggest that the limited surgical content within these broad-scope resources poses difficulties for physicians and surgeons seeking answers to complex clinical questions, particularly within the field of orthopaedics. This study therefore demonstrates the potential need for, and benefit of, surgery-specific or even specialty-specific tools.


2002 ◽  
Vol 3 (3) ◽  
pp. 10-26 ◽  
Author(s):  
Jane L. Forrest ◽  
Syrene A. Miller

Abstract The purpose of this article is to introduce evidence-based concepts and demonstrate how to find valid evidence to answer clinical questions. Evidence-based decision making (EBDM) requires understanding new concepts and developing new skills, including how to ask good clinical questions, conduct a computerized search, critically appraise the evidence, apply the results in clinical practice, and evaluate the process. This approach recognizes that clinicians can never be completely current with all conditions, medications, materials, or available products. Thus, EBDM provides a mechanism for addressing these gaps in knowledge in order to provide the best care possible. In Part 1, a case scenario demonstrates the application of the skills involved in structuring a clinical question and conducting an online search using PubMed. Practice tips are provided along with online resources related to the evidence-based process. Citation Forrest JL, Miller SA. Evidence-Based Decision Making in Action: Part 1 - Finding the Best Clinical Evidence. J Contemp Dent Pract 2002 August;(3)3: 010-026.
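The first skill the article names, structuring a clinical question, is commonly taught with the PICO framework (Patient/problem, Intervention, Comparison, Outcome). The sketch below is illustrative only: the class and the dental example are hypothetical and are not drawn from the article's case scenario.

```python
from dataclasses import dataclass

@dataclass
class ClinicalQuestion:
    """PICO components of a well-built clinical question."""
    patient: str       # P: patient, population, or problem
    intervention: str  # I: intervention being considered
    comparison: str    # C: comparison or alternative
    outcome: str       # O: outcome of interest

    def as_search_terms(self):
        # The P and I components usually drive the database search;
        # C and O help narrow results when the retrieved set is large.
        return [self.patient, self.intervention]

q = ClinicalQuestion(
    patient="adults with chronic periodontitis",
    intervention="locally delivered antimicrobials",
    comparison="scaling and root planing alone",
    outcome="probing depth reduction",
)
print(" AND ".join(q.as_search_terms()))
```

Decomposing the question this way makes the subsequent computerized search nearly mechanical: each PICO component maps onto a concept group in the search strategy.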


2021 ◽  
Author(s):  
Carole Lunny ◽  
Sai Surabi Thirugnanasampanthar ◽  
Salman Kanji ◽  
Nicola Ferri ◽  
Dawid Pieper ◽  
...  

Abstract Introduction: The rapid growth of systematic reviews (SRs) presents notable challenges for decision-makers seeking to answer clinical questions. In 1997, Jadad created an algorithm to assess discordance in results across SRs on the same question. Our study aims to (a) replicate assessments done in a sample of studies using the Jadad algorithm to determine whether the same SR would have been chosen, (b) evaluate the Jadad algorithm in terms of utility, efficiency, and comprehensiveness, and (c) describe how authors address discordance in results across multiple SRs. Methods and Analysis: We will use a database of 1218 overviews (2000-2020), created from a bibliometric study, as the basis of our search for studies assessing discordance (called Discordant Reviews). This bibliometric study searched MEDLINE (Ovid), Epistemonikos, and the Cochrane Database of Systematic Reviews for overviews. We will include any study using the Jadad (1997) algorithm or another method to assess discordance. The first 30 studies screened at the full-text stage by two independent reviewers will be included. We will replicate the authors’ Jadad assessments, compare our outcomes qualitatively, and evaluate the differences between our Jadad assessment of discordance and the authors’ assessments. Ethics and Dissemination: No ethics approval was required, as no human subjects were involved. In addition to publishing in an open-access journal, we will disseminate evidence summaries through formal and informal conferences, academic websites, and social media platforms. This is the first study to comprehensively evaluate and replicate Jadad algorithm assessments of discordance across multiple SRs.


2020 ◽  
Author(s):  
Simon Schwab ◽  
Kreiliger Giuachin ◽  
Leonhard Held

Publication bias is a persistent problem in meta-analyses for evidence-based medicine. As a consequence, small studies with large treatment effects are more likely to be reported than studies with null results, which produces funnel-plot asymmetry. Here, we investigated treatment effects from 57,186 studies published from 1922 to 2019, comprising 99,129 meta-analyses overall and 5,557 large meta-analyses from the Cochrane Database of Systematic Reviews. Altogether, 19% (95% CI 18% to 20%) of the meta-analyses demonstrated evidence of asymmetry, but only 3.9% (95% CI 3.4% to 4.4%) showed evidence of publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in less evidence of efficacy overall, and treatment effects in some medical specialties or published in prestigious journals were more likely to be statistically significant. These results suggest that asymmetry arising from exaggerated effects in small studies causes greater concern than publication bias.
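The asymmetry assessment described above is often operationalized with Egger's regression test, which regresses standardized treatment effects on precision; an intercept far from zero signals small-study effects. The sketch below is a minimal illustration of that standard formulation on simulated data, not the authors' actual analysis pipeline.

```python
import numpy as np
from scipy import stats

def egger_test(effects, standard_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regress standardized effects (effect / SE) on precision (1 / SE);
    an intercept significantly different from zero suggests
    small-study effects such as publication bias."""
    y = np.asarray(effects) / np.asarray(standard_errors)
    x = 1.0 / np.asarray(standard_errors)
    X = np.column_stack([np.ones_like(x), x])      # [intercept, slope] design
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    n, k = X.shape
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)               # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)          # OLS covariance matrix
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2.0 * stats.t.sf(abs(t_stat), n - k) # two-sided intercept test
    return beta[0], p_value

# Simulated asymmetric meta-analysis: the smaller (high-SE) studies
# report systematically larger effects, so the intercept is inflated.
ses = np.linspace(0.1, 1.0, 10)
effects = 0.2 + ses + np.where(np.arange(10) % 2 == 0, 0.005, -0.005)
intercept, p = egger_test(effects, ses)
print(f"intercept={intercept:.2f}, p={p:.3g}")  # large positive intercept, tiny p
```

In this construction the true effect is 0.2 and the bias term grows with the standard error, so the regression recovers an intercept near 1 with a very small p-value, which is exactly the pattern a biased funnel plot produces.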


Criminologie ◽  
2009 ◽  
Vol 42 (1) ◽  
pp. 143-183 ◽  
Author(s):  
Denis Lafortune ◽  
Dominique Meilleur ◽  
Brigitte Blanchard

Abstract In scales rating the scientific quality of research, randomized controlled trials (RCTs) sit at the top of the list. In terms of credibility, within the evidence-based practice (EBP) movement, their results take priority over all others. Cochrane reviews, which generally address the effectiveness of medical interventions, also cover criminological interventions. To our knowledge, no study has yet examined the conclusions the Cochrane Collaboration has reached about this type of intervention. In this article, the content of the electronic journal Cochrane Database of Systematic Reviews was analyzed for the period from 2000 to 2008. The results show that 33 Cochrane reviews dealt with criminological interventions. Because they privilege RCTs, these reviews retained on average only 2% of all the studies published in the various fields of intervention. This finding raises questions about the suitability of the Cochrane method for evaluating the effectiveness of more social interventions. The issues discussed concern the representativeness of the settings where interventions are implemented, the co-occurrence and complexity of the problems to be addressed, the contributions and limits of intervention "protocols," and the risks of delay, or even paralysis, in implementing innovative approaches.


2020 ◽  
Vol 79 (Suppl 1) ◽  
pp. 812.2-812
Author(s):  
J. Uson Jaeger ◽  
E. Naredo ◽  
S. C. Rodriguez-García ◽  
R. Castellanos-Moreira ◽  
T. O’neill ◽  
...  

Background: Intra-articular therapies (IAT) are widely used in clinical practice to treat patients with rheumatic and musculoskeletal diseases (RMDs). Many factors influence their efficacy and safety, and there is wide variation in the way IATs are delivered by health professionals. Evidence-based recommendations are the right way forward in an attempt to standardise these procedures. Objectives: To establish evidence-based recommendations to guide health professionals using IAT in adult patients with peripheral arthropathies. Methods: At a first face-to-face meeting, the results of an overview of systematic reviews were presented to the multidisciplinary task force, with members from 8 countries. The aim, scope and outline of the task force were also established at this meeting. Thirty-two clinical questions, ranked for priority (relevance for practice plus feasibility), drove the systematic reviews performed by two fellows. In addition, two surveys addressed to physicians, health professionals and patients throughout Europe were agreed upon to acquire more background information. At the second face-to-face meeting, the evidence for each research question was discussed, and each recommendation was shaped and voted on in a first Delphi round. Level of agreement was scored numerically from 0 to 10 (0 completely disagree, 10 completely agree). All panellists voted anonymously using the sli.do app. Agreement needed to be greater than 80% for inclusion in a second Delphi round, which also allowed reformulation of statements. Finally, a third Delphi round was sent to the task force. The level of evidence was assigned to each recommendation according to the EULAR SOP for establishing recommendations. Results: The recommendations focus on practical aspects of daily practice to guide health professionals before, during and after IAT in adult patients with peripheral arthropathies.
Five overarching principles were established, together with 11 recommendations that address the following issues: (1) patient information; (2) procedure and setting; (3) accuracy issues; (4) routine and special antiseptic care; (5) safety issues and precautions to be addressed in special populations; (6) efficacy and safety of repeated joint injections; (7) the usage of local anaesthetics; and (8) aftercare. The document includes the supporting evidence and results from the surveys, along with the level of evidence and agreement for each recommendation. Conclusion: We have developed the first evidence- and expert-opinion-based recommendations to guide health professionals using IAT. Acknowledgments: EULAR Taskforce grant CL109. Disclosure of Interests: Jacqueline Uson Jaeger: None declared, Esperanza Naredo: None declared, Sebastian C Rodriguez-García Speakers bureau: Novartis Farmaceutica, S.A., Merck Sharp & Dohme España, S.A., Sanofi Aventis, UCB Pharma, Raul Castellanos-Moreira: None declared, Terence O’Neill: None declared, Hemant Pandit Grant/research support from: Glaxo Smith Kline (GSK) for work on Diclofenac Gel, Speakers bureau: Bristol Myers Squibb for teaching their employees about hip and knee replacement, Michael Doherty Grant/research support from: AstraZeneca funded the Nottingham Sons of Gout study, Consultant of: Advisory boards on gout for Grunenthal and Mallinckrodt, Mikael Boesen Consultant of: AbbVie, AstraZeneca, Eli Lilly, Esaote, Glenmark, Novartis, Pfizer, UCB, Paid instructor for: IAG, Image Analysis Group, AbbVie, Eli Lilly, AstraZeneca, Esaote, Glenmark, Novartis, Pfizer, UCB (scientific advisor), Speakers bureau: Eli Lilly, Esaote, Novartis, Pfizer, UCB, Ingrid Möller: None declared, Valentina Vardanyan: None declared, Jenny de la Torre-Aboki: None declared, Lene Terslev: None declared, Francis Berenbaum Grant/research support from: TRB Chemedica (through institution), MSD (through institution), Pfizer (through institution), Consultant of: Novartis, MSD, Pfizer, Lilly, UCB, Abbvie, Roche, Servier, Sanofi-Aventis, Flexion Therapeutics, Expanscience, GSK, Biogen, Nordic, Sandoz, Regeneron, Gilead, Bone Therapeutics, Regulaxis, Peptinov, 4P Pharma, Paid instructor for: Sandoz, Speakers bureau: Novartis, MSD, Pfizer, Lilly, UCB, Abbvie, Roche, Servier, Sanofi-Aventis, Flexion Therapeutics, Expanscience, GSK, Biogen, Nordic, Sandoz, Regeneron, Gilead, Maria Antonietta D’Agostino Consultant of: AbbVie, BMS, Novartis, and Roche, Speakers bureau: AbbVie, BMS, Novartis, and Roche, Willm Uwe Kampen: None declared, Elena Nikiphorou: None declared, Irene Pitsillidou: None declared, Loreto Carmona Grant/research support from: Novartis Farmaceutica, SA, Pfizer, S.L.U., Merck Sharp & Dohme España, S.A., Roche Farma, S.A, Sanofi Aventis, AbbVie Spain, S.L.U., and Laboratorios Gebro Pharma, SA (all through institution)

