The Utilization of Wireless Handheld Computers with MEDLINE is an Effective Mechanism for Answering Clinical Questions at the Point of Care

2008 ◽  
Vol 3 (3) ◽  
pp. 64
Author(s):  
Martha Ingrid Preddie

A Review of: Hauser, Susan E., Dina Demner-Fushman, Joshua L. Jacobs, Susanne M. Humphrey, Glenn Ford, and George R. Thoma. “Using Wireless Handheld Computers to Seek Information at the Point of Care: An Evaluation by Clinicians.” Journal of the American Medical Informatics Association 14.6 (Nov./Dec. 2007): 807-15. Abstract Objective – To assess the effectiveness of wireless handheld computers (HHCs) for information retrieval in clinical environments and the role of MEDLINE in answering clinical questions at the point of care. Design – A prospective single-cohort study. Setting – Teaching rounds in the intensive care units and general medicine wards of two hospitals associated with a university’s school of medicine in the United States. Subjects – Five internal medicine residents with training in evidence-based practice. Methods – While accompanying medical teams on teaching rounds for approximately four consecutive weeks, each resident used MD on Tap (an application for handheld computers) on a Treo™ 650 PDA/cell phone to find answers, in real time, to questions raised by members of the medical teams. Using a special version of MD on Tap, each resident initialized a UserID. Serving as evaluators, the residents described and categorized clinical scenarios and identified questions. They also formulated search terms, searched MEDLINE, and identified citations they judged useful for answering the questions. An intermediate server collected details of all MEDLINE search query transactions, including system response time, the user (based on UserID), citations selected for viewing, the saving of citations to HHC memory, and use of the LinkOut and Notes features. In addition, evaluators submitted daily summaries. These summaries included information on the scenarios, clinical questions, evidence-based medicine (EBM) category, the team member who was the source of the question, the PubMed Identifiers (PMIDs) of relevant citations, and comments. At the end of the data collection period, each evaluator submitted a summary report consisting of a qualitative and quantitative evaluation of their experience using MEDLINE via the handheld device to find relevant evidence-based information at the point of care. The report also covered the usefulness of MD on Tap features, along with suggestions for additional features. Data analysis encompassed matching the text of daily summaries to transaction records in order to identify sessions (each containing a scenario, a clinical question, one or more search queries, citation fetches, and selected PMIDs). A senior medical librarian/expert indexer reviewed all the citations selected by evaluators and graded each citation as A (useful for answering the question), B (provided a partial answer), or C (not useful for answering the question). Only those graded A were regarded as “relevant.” For the purpose of analysis, a session was deemed successful “if at least one of the citations selected by the evaluator as relevant was also classified as Relevant” (810) by the expert indexer. Similarly, an individual query was successful “if at least one of the citations among the results of the query was Relevant, that citation was viewed by the evaluator during rounds, and it addressed the clinical question as recorded in the daily summary” (810).
Various relationships were analyzed, including the characteristics of clinical questions vis-à-vis successful sessions, search strategies in relation to successful queries, and the association between MD on Tap features and successful queries. SAS/SUDAAN version 9.1 was used for statistical analysis. Main Results – Evaluators answered 68% (246 of 363) of the clinical questions during rounding sessions. They identified 478 “relevant” citations, an average of 1.9 per successful session and 1.3 for each successful question. Session lengths averaged 3 minutes and 41 seconds. Characteristics of the evaluator (training, interest, experience, and expertise) were a significant predictor of a session’s success. The significant determinants of query success were “the number of search terms that could be mapped to Medical Subject Headings (MeSH)” (812), the number of citations found for a query, and the use of MD on Tap’s auto-spellcheck feature. Narrative comments from the evaluators indicated that using MEDLINE on a HHC at the point of care contributed positively to the practice of evidence-based medicine. Conclusion – Wireless handheld computers are useful for retrieving information in clinical environments. The use of several MeSH terms in a query facilitates the retrieval of MEDLINE citations that answer clinical questions. The MD on Tap program is a valuable interface to MEDLINE at the point of care.
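The review does not describe MD on Tap's internal query pipeline, but the kind of MeSH-mapped MEDLINE search it logs can be reproduced against NCBI's public E-utilities API. The sketch below is illustrative only: the search terms and result limit are hypothetical examples, not drawn from the study.

```python
# Minimal sketch of a MeSH-based MEDLINE query via NCBI E-utilities.
# Illustrative only: MD on Tap's actual implementation is not described
# in the review; the example terms below are hypothetical.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_medline(mesh_terms, retmax=10):
    """Return PMIDs of citations indexed under all of the given MeSH terms."""
    term = " AND ".join(f"{t}[MeSH Terms]" for t in mesh_terms)
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    response = requests.get(ESEARCH, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Hypothetical point-of-care question: corticosteroids for sepsis.
print(search_medline(["sepsis", "adrenal cortex hormones"]))
```

Restricting terms to the `[MeSH Terms]` field mirrors the study's finding that queries mapping to more MeSH headings were more likely to succeed.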

2021 ◽  
Vol 22 (3) ◽  
pp. 18-22
Author(s):  
Jamie Saragossi

BMJ Best Practice is an evidence-based point-of-care tool that supports clinical decisions by providing clinicians with the latest, highest-quality research available. The target audience for this resource is anyone delivering health care services. Currently, BMJ Best Practice is available as an institutional subscription in the United States. The resource includes clinical summaries based on the latest evidence, drug information, clinical calculators, evidence-based tool kits, and patient leaflets. The content goes through a rigorous editorial process by expert reviewers who are required to disclose any financial conflicts. This process can, however, be relatively time-consuming, so updates that do not pose an immediate risk to patient care may take anywhere from one to three months to be reflected in the clinical topic overviews. Overall, the tools and content provided on the platform are reliable and easy to navigate for the end user.


2020 ◽  
Vol 108 (2) ◽  
Author(s):  
Joey Nicholson ◽  
Adina Kalet ◽  
Cees Van der Vleuten ◽  
Anique De Bruin

Objective: Evidence-based medicine practices of medical students in clinical scenarios are not well understood. Optimal foraging theory (OFT) is one framework that could be useful in breaking apart information-seeking patterns to determine the effectiveness and efficiency of different methods of information seeking. The aims of this study were to use OFT to determine the number and type of resources used in information seeking when medical students answer a clinical question, to describe common information-seeking patterns, and to identify patterns associated with higher-quality answers to a clinical question. Methods: Medical students were observed via screen recordings while they sought evidence related to a clinical question and provided a written response describing what they would do for that patient based on the evidence they found. Results: Half (51%) of study participants used only one source before answering the clinical question. While participants were able to navigate point-of-care tools and search engines successfully and efficiently, searching PubMed was not favored, and only half (48%) of PubMed searches were successful. There were no associations between information-seeking patterns and the quality of answers to the clinical question. Conclusion: Clinically experienced medical students most frequently relied on point-of-care tools alone or in combination with PubMed to answer a clinical question. OFT can be used as a framework to understand the information-seeking practices of medical students in clinical scenarios. This has implications for both teaching and assessment of evidence-based medicine in medical students.


2008 ◽  
Vol 3 (1) ◽  
pp. 78
Author(s):  
Martha Ingrid Preddie

A review of: McKibbon, K. Ann, and Douglas B. Fridsma. “Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs.” Journal of the American Medical Informatics Association 13.6 (2006): 653-9. Objective – To determine if electronic information resources selected by primary care physicians improve their ability to answer simulated clinical questions. Design – An observational study utilizing hour-long interviews and think-aloud protocols. Setting – The offices and clinics of primary care physicians in Canada and the United States. Subjects – 25 primary care physicians, of whom 4 were women, 17 were from Canada, 22 were family physicians, and 24 were board certified. Methods – Participants provided responses to 23 multiple-choice questions. Each physician then chose two questions and looked for the answers using information resources of their own choice. The search processes, chosen resources, and search times were noted. These were analyzed along with data on the accuracy of the answers and the physicians’ certainty about the answer to each clinical question prior to the search. Main Results – Twenty-three physicians sought answers to 46 simulated clinical questions. Using only electronic information resources, physicians spent a mean of 13.0 (SD 5.5) minutes searching for answers to the questions: an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes for the second. On average, 1.8 resources were used per question. Resources that summarize information, such as the Cochrane Database of Systematic Reviews, UpToDate, and Clinical Evidence, were favored 39.2% of the time; MEDLINE (Ovid and PubMed) 35.7%; and Internet resources including Google 22.6%. Almost 50% of the search and retrieval strategies were keyword-based, while MeSH, subheadings, and limits were used less frequently. On average, before searching, physicians answered 10 of 23 (43.5%) questions accurately. For the questions searched using clinician-selected electronic resources, 18 of the 46 answers (39.1%) were accurate before searching, while 19 (41.3%) were accurate after searching. The net gain of one correct answer was due to the answers to 5 questions (10.9%) changing from correct to incorrect, while the answers to 6 questions (13.0%) changed from incorrect to correct. The ability to provide correct answers differed among the various resources: Google and Cochrane provided the correct answers about 50% of the time, while PubMed, Ovid MEDLINE, UpToDate, Ovid Evidence Based Medicine Reviews, and InfoPOEMs were more likely to be associated with incorrect answers. Physicians also seemed unable to determine when they needed to search for information in order to make an accurate decision. Conclusion – Clinician-selected electronic information resources did not guarantee accurate answers to simulated clinical questions. At times the use of these resources caused physicians to change self-determined correct answers to incorrect ones; the authors attribute this to factors such as poor choice of resources, ineffective search strategies, time constraints, and automation bias. Library and information practitioners have an important role to play in identifying and advocating for appropriate information resources to be integrated into the electronic medical record systems provided by health care institutions to ensure evidence-based health care delivery.


Author(s):  
John C. Norcross ◽  
Thomas P. Hogan ◽  
Gerald P. Koocher ◽  
Lauren A. Maggio

This chapter provides a guide to the first core skill of evidence-based practice (EBP): formulating a specific, answerable question. This skill lies at the heart of accessing the best available research. To practice EBP, clinicians must first form an answerable clinical question; otherwise they are likely to incur frustration and waste time once they embark on their literature search. The chapter introduces several types of questions, including background and foreground questions. It also provides step-by-step instructions for formulating clinical questions using the PICO format, which prompts clinicians to identify the patient, intervention, comparison, and outcomes relevant to the case. It concludes with a discussion of how to ensure that questions reflect the patient’s preferences and how to prioritize questions.
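The chapter presents PICO in prose; as a toy illustration of how the format decomposes a clinical question into four explicit fields, consider the following sketch. The field values are our own hypothetical example, not taken from the chapter.

```python
# Toy illustration of the PICO question format; example values are
# hypothetical and not drawn from the chapter itself.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    patient: str       # P: who or what population is the question about?
    intervention: str  # I: what intervention is being considered?
    comparison: str    # C: compared with what alternative?
    outcome: str       # O: what outcome matters to the patient?

    def as_sentence(self) -> str:
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparison}, improve {self.outcome}?")

q = PicoQuestion(
    patient="adults with major depression",
    intervention="cognitive behavioral therapy plus medication",
    comparison="antidepressant medication alone",
    outcome="remission at 6 months",
)
print(q.as_sentence())
```

Making each component explicit in this way is what turns a vague background question into a searchable foreground question.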


CJEM ◽  
2012 ◽  
Vol 14 (01) ◽  
pp. 31-35 ◽  
Author(s):  
Andrew Worster ◽  
R. Brian Haynes

ABSTRACT Emergency physicians often need point-of-care access to current, valid information to guide patient management. Most emergency physicians do not work in a hospital with a computerized decision support system that prompts and provides them with information to answer their clinical questions. Searching for answers to clinical questions online, especially those related to diagnosis and treatment, can be challenging, in part because determining the validity and clinical applicability of the results of individual studies is beyond the time constraints of most emergency physicians. This article describes currently available point-of-care sources of evidence-based information to answer clinical questions and provides the access information for each.


1998 ◽  
Vol 22 (7) ◽  
pp. 442-445
Author(s):  
Marc Lester ◽  
James P. Warner ◽  
Robert Blizard

Prompted by a clinical question, an article on prognosis in anorexia nervosa was appraised using evidence-based guidelines. Although problems with the validity and generalisability of the study were identified, this article yielded useful information. We conclude that it is not possible to address all clinical questions using the evidence-based framework.


2019 ◽  
pp. 1-9 ◽  
Author(s):  
Michael V. Sherer ◽  
Diana Lin ◽  
Kartikeya Puri ◽  
Neil Panjwani ◽  
Zhigang Zhang ◽  
...  

PURPOSE Variation in contouring quality by radiation oncologists is common and can have significant clinical consequences. Image-based guidelines can improve contour accuracy but are underused. We sought to develop a free, online, easily accessible contouring resource that allows users to scroll through cases with 3-dimensional images and access relevant evidence-based contouring information. MATERIALS AND METHODS eContour (http://econtour.org) was developed using modern Web technologies, primarily HTML5, Python, and JavaScript, to display JPEGs generated from DICOM files from real patient cases. The viewer has standard tools for image manipulation as well as toggling of contours, overlaid images, and radiation dose distributions. Brief written content references published guidelines for contour delineation. Mixpanel software was used to collect Web page usage statistics. RESULTS In the first 3 years of operation (March 8, 2016 to March 7, 2019), a total of 13,391 users from 128 countries registered on the Web site, including 2,358 physicians from the United States. High-frequency users were more likely to be physicians (P < .001) and from the United States (P < .001). In one 6-month period, there were 68,642 individual case page views, with head and neck the most commonly viewed disease site (32%). Users who accessed a head-and-neck case were more likely to be high-frequency users, and 67% of repeat users accessed the same case more than once. CONCLUSION The large, diverse user base and steady growth in Web site traffic over the first 3 years of eContour demonstrate its strong potential to address the unmet need for dissemination and use of evidence-based contouring information at the point of care.
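The abstract notes that eContour serves JPEGs generated from DICOM files; the project's actual conversion code is not published in the paper. A minimal sketch of that kind of preprocessing step, assuming the pydicom and Pillow libraries and a single-frame CT slice with a placeholder file path, might look like this:

```python
# Hedged sketch: convert one single-frame DICOM slice to an 8-bit JPEG,
# roughly the DICOM-to-JPEG step the eContour abstract describes.
# The file paths are placeholders; eContour's real pipeline is not public.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("slice_001.dcm")        # placeholder input path
pixels = ds.pixel_array.astype(np.float32)   # raw stored pixel values

# Rescale the slice's full dynamic range to 0-255 for JPEG display.
lo, hi = pixels.min(), pixels.max()
scaled = ((pixels - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)

Image.fromarray(scaled).save("slice_001.jpg", quality=90)
```

Pre-rendering slices as JPEGs in this way lets a plain HTML5/JavaScript viewer scroll through a case without shipping raw DICOM data to the browser.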


PRiMER ◽  
2018 ◽  
Vol 2 ◽  
Author(s):  
Thomas W. Hahn ◽  
Caitlin D'Agata ◽  
Jennifer Edgoose ◽  
Jennifer Mastrocola ◽  
Larissa Zakletskaia ◽  
...  

Introduction: Inpatient training and evidence-based medicine (EBM) curricula are fundamental components of medical education. Teaching EBM And Clinical topics in the Hospital (TEACH) Cards is an inpatient curricular tool developed to help guide efficient, discussion-based teaching sessions. TEACH Cards aims to increase frequency of inpatient teaching, improve exposure to the breadth of inpatient topics, advance EBM skills, and improve efficiency in answering clinical questions. Methods: TEACH Cards is a set of 25 topic-based cards, each addressing an adult inpatient medicine topic by asking background questions and encouraging learners to write and answer foreground questions. Residents and faculty from a family medicine residency rotating on an adult inpatient medicine service during the 6-month study period were invited to complete a prerotation survey, use the TEACH Cards, and then complete a postrotation survey. Results: Out of 54 potential participants, 35% completed both the pre- and postrotation surveys. Respondents used TEACH Cards on average three times per week, reporting significantly stronger agreement that they were both learning (P=0.034) and teaching (P=0.006) core inpatient topics. Respondents reported greater confidence in using EBM resources (P=0.006) and significantly shorter time to find an evidence-based answer to a clinical question (pretest median=6-10 minutes vs posttest median=2-5 minutes, P=0.002). Conclusion: Use of TEACH Cards increased self-reported exposure to the breadth of core inpatient topics, confidence with EBM skills, and efficiency in finding answers to clinical questions.  


1998 ◽  
Vol 22 (11) ◽  
pp. 698-701
Author(s):  
Apu Chakraborty ◽  
James P. Warner ◽  
Robert Blizard

Aims and method – Prompted by a clinical question, we critically appraised a meta-analysis of neuroimaging in our evidence-based journal club. Results – The results of the meta-analysis suggested differences in ventricular size and sulcal width between controls and people with schizophrenia and mood disorders. However, we were unable to answer the question that prompted this exercise. Clinical implications – Although the evidence-based medicine approach facilitates appraisal of complex articles, some clinical questions are not yet answerable.


2013 ◽  
Vol 14 (4) ◽  
pp. 95-101 ◽  
Author(s):  
Robert Kraemer ◽  
Allison Coltisor ◽  
Meesha Kalra ◽  
Megan Martinez ◽  
Bailey Savage ◽  
...  

English language learning (ELL) children suspected of having specific language impairment (SLI) should not be assessed using the same methods as monolingual English-speaking children born and raised in the United States. In an effort to reduce over- and under-identification of ELL children as SLI, speech-language pathologists (SLPs) must employ nonbiased assessment practices. This article presents several evidence-based, nonstandardized assessment practices that SLPs can implement in place of standardized tools. As the number of ELL children SLPs encounter increases, so does the need for well-trained and knowledgeable SLPs. The goal of the authors is to present several well-established, evidence-based methods for assessing ELL children suspected of SLI.

