A qualitative framework-based evaluation of radiology clinical decision support initiatives: eliciting key factors to physician adoption in implementation

JAMIA Open ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 187-196 ◽  
Author(s):  
Laura Haak Marcial ◽  
Douglas S Johnston ◽  
Michael R Shapiro ◽  
Sara R Jacobs ◽  
Barry Blumenfeld ◽  
...  

Abstract Objectives To illustrate key contextual factors that may affect clinical decision support (CDS) adoption and, ultimately, success. Materials and Methods We conducted a qualitative evaluation of 2 similar radiology CDS innovations for near-term endpoints affecting adoption and present the findings using an evaluation framework. We identified key contextual factors between these 2 innovations and determined important adoption differences between them. Results Degree of electronic health record integration, approach to education and training, key drivers of adoption, and tailoring of the CDS to the clinical context were handled differently between the 2 innovations, contributing to variation in their relative degrees of adoption and use. Attention to these factors affected both near- and later-term measures of success (eg, patient outcomes). Discussion CDS adoption is a well-studied early-term measure of CDS success that directly impacts outcomes. Adoption requires attention throughout the design phases of an intervention, especially to the key factors that directly affect it, including how implementation across multiple sites and systems complicates adoption, that prior experience with CDS matters, and that practice guidelines invariably require tailoring to the clinical context. Conclusion With better planning for the capture of early-term measures of successful CDS implementation, especially adoption, critical adjustments may be made to ensure that the CDS is implemented effectively and succeeds.

2012 ◽  
Vol 03 (03) ◽  
pp. 309-317 ◽  
Author(s):  
V. Anand ◽  
S. M. Downs ◽  
A.E. Carroll

Summary Introduction: The identification of key factors influencing responses to prompts and reminders within a computer decision support system (CDSS) has not been widely studied. The aim of this study was to evaluate why clinicians routinely answer certain prompts while ignoring others. Methods: We utilized data collected from a CDSS developed by our research group, the Child Health Improvement through Computer Automation (CHICA) system. The main outcome of interest was whether a clinician responded to a prompt. Results: This study found that, as expected, some clinics and physicians were more likely to address prompts than others. However, we also found that clinicians are more likely to address prompts for younger patients and when the prompts address more serious issues. The most striking finding was that the position of a prompt was a significant predictor of the likelihood of its being addressed, even after controlling for other factors. Prompts at the top of the page were significantly more likely to be answered than those at the bottom. Conclusions: This study detailed a number of factors associated with physicians following clinical decision support prompts. This information could be instrumental in designing better interventions and more successful clinical decision support systems in the future.
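The abstract does not report the exact model specification, but an outcome of this kind (whether a prompt was addressed) is typically analysed with a logistic regression. The sketch below is only illustrative, assuming a hypothetical export file chica_prompts.csv with columns addressed, position, patient_age, severity, clinic, and physician; none of these names come from the paper.

```python
# Illustrative sketch only: the study does not publish its analysis code,
# and the file name and column names below are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per prompt shown during a visit; `addressed` is 1 if the
# clinician responded to the prompt and 0 if it was ignored.
prompts = pd.read_csv("chica_prompts.csv")

# Logistic regression of prompt response on page position, patient age,
# and issue severity, with clinic and physician entered as categorical
# controls (the abstract reports that position remained a significant
# predictor after controlling for other factors).
fit = smf.logit(
    "addressed ~ C(position) + patient_age + C(severity)"
    " + C(clinic) + C(physician)",
    data=prompts,
).fit()

print(fit.summary())
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs for the odds ratios
```

In practice, clinic and physician would often be modelled as random effects (a mixed-effects logistic regression) rather than as fixed categorical controls; the fixed-effects form above is simply the shortest way to show the idea.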


10.2196/25929 ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. e25929
Author(s):  
Mengting Ji ◽  
Georgi Z Genchev ◽  
Hengye Huang ◽  
Ting Xu ◽  
Hui Lu ◽  
...  

Background Clinical decision support systems are designed to utilize medical data, knowledge, and analysis engines and to generate patient-specific assessments or recommendations to health professionals in order to assist decision making. Artificial intelligence–enabled clinical decision support systems aid the decision-making process through an intelligent component. Well-defined evaluation methods are essential to ensure the seamless integration and contribution of these systems to clinical practice. Objective The purpose of this study was to develop and validate a measurement instrument and test the interrelationships of evaluation variables for an artificial intelligence–enabled clinical decision support system evaluation framework. Methods An artificial intelligence–enabled clinical decision support system evaluation framework consisting of 6 variables was developed. A Delphi process was conducted to develop the measurement instrument items. Cognitive interviews and pretesting were performed to refine the questions. Web-based survey response data were analyzed to remove irrelevant questions from the measurement instrument, to test dimensional structure, and to assess reliability and validity. The interrelationships of relevant variables were tested and verified using path analysis, and a 28-item measurement instrument was developed. Measurement instrument survey responses were collected from 156 respondents. Results The Cronbach α of the measurement instrument was 0.963, and its content validity was 0.943. Values of average variance extracted ranged from 0.582 to 0.756, and values of the heterotrait-monotrait ratio ranged from 0.376 to 0.896. The final model had a good fit (χ²₂₆=36.984; P=.08; comparative fit index 0.991; goodness-of-fit index 0.957; root mean square error of approximation 0.052; standardized root mean square residual 0.028). Variables in the final model accounted for 89% of the variance in the user acceptance dimension. Conclusions User acceptance is the central dimension of artificial intelligence–enabled clinical decision support system success. Acceptance was directly influenced by perceived ease of use, information quality, service quality, and perceived benefit. Acceptance was also indirectly influenced by system quality and information quality through perceived ease of use. User acceptance and perceived benefit were interrelated.
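The instrument items and response data are not public, so the following is only a sketch of how the reported internal-consistency statistic (Cronbach α) could be computed per construct from a survey export. The file name, the item codes, and the grouping of the 28 items into the framework's six variables are assumptions made for illustration.

```python
# Illustrative sketch only: item codes and their grouping into constructs
# are hypothetical; they are not taken from the published instrument.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items (columns) of one construct:
    alpha = k/(k-1) * (1 - sum of item variances / variance of item total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = pd.read_csv("ai_cdss_survey.csv")  # one column per Likert item

# Hypothetical mapping of the 28 items onto the framework's 6 variables.
constructs = {
    "system_quality": ["sq1", "sq2", "sq3", "sq4"],
    "information_quality": ["iq1", "iq2", "iq3", "iq4", "iq5"],
    "service_quality": ["svq1", "svq2", "svq3", "svq4"],
    "perceived_ease_of_use": ["peou1", "peou2", "peou3", "peou4", "peou5"],
    "perceived_benefit": ["pb1", "pb2", "pb3", "pb4", "pb5"],
    "user_acceptance": ["ua1", "ua2", "ua3", "ua4", "ua5"],
}

for name, cols in constructs.items():
    print(f"{name}: alpha = {cronbach_alpha(responses[cols]):.3f}")
```

The convergent and discriminant validity statistics (average variance extracted, heterotrait-monotrait ratio) and the path-model fit indices reported above (CFI, GFI, RMSEA, SRMR) would normally be obtained from a structural equation modelling package such as lavaan in R or semopy in Python rather than computed by hand.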


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0250946
Author(s):  
Mark Jeffries ◽  
Nde-Eshimuni Salema ◽  
Libby Laing ◽  
Azwa Shamsuddin ◽  
Aziz Sheikh ◽  
...  

Background The quality and safety of prescribing in general practice are important. Clinical decision support (CDS) systems can present alerts to health professionals at the point of prescribing in order to identify patients at risk of potentially hazardous prescribing. Such computerised alerts are known to improve the safety of prescribing in hospitals, but their implementation and sustained use in general practice are less well understood. We aimed to understand the factors that influenced the successful implementation and sustained use in primary care of a CDS system. Methods Participants were purposively recruited from Clinical Commissioning Groups (CCGs) and general practices in the North West and East Midlands regions of England and from the CDS developers. We conducted face-to-face and telephone-based semi-structured qualitative interviews with staff stakeholders. A selection of participants was interviewed longitudinally to explore sustainability further, 1–2 years after implementation of the CDS system. The analysis, informed by Normalisation Process Theory (NPT), was thematic, iterative, and conducted alongside data collection. Results Thirty-nine interviews were conducted, either individually or in groups, with 33 stakeholders, including 11 follow-up interviews. Eight themes were interpreted in alignment with the four NPT constructs: Coherence (The purpose of the CDS: enhancing medication safety and improving cost effectiveness; Relationship of users to the technology; Engagement and communication between different stakeholders); Cognitive Participation (Management of the profile of alerts); Collective Action (Prescribing in general practice, patient and population characteristics, and engagement with patients; Knowledge); and Reflexive Monitoring (Sustaining the use of the CDS through maintenance and customisation; Learning and behaviour change). Participants saw that the CDS could have a role in enhancing medication safety and the quality of care. Engagement through communication and support for local primary care providers and management leaders was considered important for successful implementation. Management of prescribing alert profiles for general practices was a dynamic process evolving over time. At regional management levels, work was required to adapt and modify the system to optimise its use in practice and fulfil local priorities. Contextual factors, including patient and population characteristics, could affect the decision-making processes of prescribers, influencing their responses to alerts. The CDS could operate as a knowledge base, allowing prescribers access to evidence-based information that they otherwise would not have. Conclusions This qualitative evaluation utilised NPT to understand the implementation, use, and sustainability of a widely deployed CDS system offering prescribing alerts in general practice. The system was understood as having a role in medication safety by providing relevant patient-specific information to prescribers in a timely manner. Engagement between stakeholders was considered important in ensuring that prescribers continued to utilise the system's functionality. Sustained implementation might be enhanced by careful profile management of the suite of alerts in the system. Our findings suggest that the use and sustainability of the CDS were related to prescribers' perceptions of the relevance of alerts. A shared understanding of the purpose of the CDS between CCGs and general practices, particularly in balancing cost-saving and safety messages, could be beneficial.
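The paper does not describe the product's rule logic, so the sketch below is purely illustrative: it expresses one generic, PINCER-style hazardous-prescribing indicator (an oral NSAID prescribed to an older patient without gastroprotection) as the kind of simple rule such a CDS might evaluate before raising an alert at the point of prescribing. All drug lists, thresholds, and names are hypothetical.

```python
# Purely illustrative sketch (Python 3.10+): the evaluated CDS product's
# actual rules are not described in the paper; this rule, its drug lists,
# and the age threshold are generic examples, not the vendor's logic.
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    current_drugs: set[str] = field(default_factory=set)

NSAIDS = {"ibuprofen", "naproxen", "diclofenac"}
GASTROPROTECTION = {"omeprazole", "lansoprazole", "esomeprazole"}

def nsaid_without_gastroprotection(patient: Patient, new_drug: str) -> str | None:
    """Return an alert message if an oral NSAID is being prescribed to an
    older patient who has no co-prescribed proton pump inhibitor."""
    if (
        new_drug in NSAIDS
        and patient.age >= 65
        and not (patient.current_drugs & GASTROPROTECTION)
    ):
        return (
            "NSAID prescribed to a patient aged 65+ without gastroprotection: "
            "consider co-prescribing a PPI or choosing an alternative."
        )
    return None

# Example: prescribing naproxen to a 72-year-old taking only amlodipine.
alert = nsaid_without_gastroprotection(
    Patient(age=72, current_drugs={"amlodipine"}), "naproxen"
)
if alert:
    print(alert)
```

In the study's terms, managing the "profile of alerts" would correspond to choosing which rules of this kind are switched on for a given practice and how their thresholds are tuned over time.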


2013 ◽  
Vol 46 (2) ◽  
pp. 52
Author(s):  
Christopher Notte ◽  
Neil Skolnik
