Research Trends in Artificial Intelligence Applications in Human Factors Health Care: Mapping Review (Preprint)

2021
Author(s):  
Onur Asan
Avishek Choudhury

BACKGROUND Despite advancements in artificial intelligence (AI) to develop prediction and classification models, little research has been devoted to real-world translation with a user-centered design approach. AI development studies in the health care context have often ignored two critical factors, ecological validity and human cognition, creating challenges at the interface with clinicians and the clinical environment. OBJECTIVE The aim of this literature review was to investigate the contributions made by major human factors communities to health care AI applications. This review also discusses emerging research gaps and provides future research directions to facilitate a safer and more user-centered integration of AI into the clinical workflow. METHODS We performed an extensive mapping review to capture all relevant articles published within the last 10 years in the major human factors journals and conference proceedings listed in the “Human Factors and Ergonomics” category of the Scopus Master List. In each published volume, we searched for studies reporting qualitative or quantitative findings in the context of AI in health care. Studies were discussed based on key human factors principles such as workload, usability, trust in technology, perception, and user-centered design. RESULTS Forty-eight articles were included in the final review. Most of the studies emphasized user perception, the usability of AI-based devices or technologies, cognitive workload, and users’ trust in AI. The review revealed a nascent but growing body of literature focusing on augmenting health care with AI; however, little effort has been made to ensure ecological validity with user-centered design approaches. Moreover, few studies compared their AI models against a standard measure (n=5 against clinical/baseline standards, n=5 against clinicians). CONCLUSIONS Human factors researchers should be actively involved in AI design and implementation, as well as in dynamic assessments of AI systems’ effects on interaction, workflow, and patient outcomes. An AI system is part of a greater sociotechnical system. Investigators with human factors and ergonomics expertise are essential when defining the dynamic interaction of AI within each element, process, and result of the work system.
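The categorization step described in the Methods, tagging each included article with the human factors principles it addresses and tallying the totals, can be pictured with a minimal sketch. The article records and theme assignments below are hypothetical; only the theme labels come from the abstract.

```python
# Minimal sketch (hypothetical data): tally reviewed articles by the human
# factors themes named in the review (workload, usability, trust, perception,
# user-centered design). Records and theme assignments are illustrative only.
from collections import Counter

articles = [
    {"id": 1, "themes": ["usability", "trust in technology"]},
    {"id": 2, "themes": ["workload"]},
    {"id": 3, "themes": ["perception", "user-centered design"]},
    # ... remaining included articles (the review includes 48 in total)
]

theme_counts = Counter(theme for article in articles for theme in article["themes"])
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```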

Author(s):  
Neville Moray

Claims are frequently made that what the discipline of human factors and ergonomics needs is better and more detailed databases that designers can use as “look-up” tables to specify the properties of human beings. Several of these already exist, but they seem not to be satisfactory. The experience of teaching user-centered design has convinced the author that the problem lies not in the absence of appropriate data tables for designers, but in the nature of the systems we design. Unlike many other engineering disciplines, human factors is extremely sensitive to context. The result is that there are no context-free laws in applied psychology, and hence the value of databases and tables is restricted to certain fairly basic ergonomic problems. Moreover, it is not merely in small details that laws fail to apply; hence the title of this paper. Increasingly, the nature of advanced systems renders such databases of little value unless we can develop equivalent databases that describe context, not merely the properties of humans.


2021
Author(s):  
Jeonghwan Hwang
Taeheon Lee
Honggu Lee
Seonjeong Byun

BACKGROUND Despite the unprecedented performance of deep learning algorithms in clinical domains, full review of algorithmic predictions by human experts remains mandatory. Under these circumstances, artificial intelligence (AI) models are primarily designed as clinical decision support systems (CDSSs). However, from the perspective of clinical practitioners, the lack of clinical interpretability and user-centered interfaces blocks the adoption of these AI systems in practice. OBJECTIVE The aim of this study was to develop an AI-based CDSS for assisting polysomnographic technicians in reviewing AI-predicted sleep staging results. This study proposed and evaluated a CDSS that provides clinically sound explanations for AI predictions in a user-centered fashion. METHODS User needs for the system were identified during interviews with polysomnographic technicians, and user observation sessions were conducted to understand the practitioners’ workflow during sleep scoring. An iterative design process was performed to ensure easy integration of the tool into clinical workflows. We then evaluated the system with polysomnographic technicians, measuring the improvement in sleep staging accuracy after adopting the tool and qualitatively assessing how the participants perceived and used it. RESULTS The user study revealed that technicians want explanations tied to the key electroencephalogram (EEG) patterns used for sleep staging when assessing the correctness of AI predictions, so that they can judge whether the AI model properly locates and uses those patterns during prediction. Based on this, information in the AI model closely related to sleep EEG patterns was formulated and visualized during the iterative design process, and a different visualization strategy was developed for each pattern based on how technicians interpret EEG recordings containing that pattern in their workflows. Overall, tool evaluation results from the nine polysomnographic technicians were positive. Quantitatively, technicians achieved better classification performance after reviewing the AI-generated predictions with the proposed system: classification accuracy measured with macro-F1 scores improved from 60.20 to 62.71. Qualitatively, participants reported that the information provided by the tool supported them effectively, and they were able to develop notable adoption strategies for it. CONCLUSIONS Our findings indicate that formulating clinical explanations for automated predictions from the information in the AI model, through a user-centered design process, is an effective strategy for developing a CDSS for sleep staging.
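The quantitative result above rests on macro-averaged F1 across sleep stages. A minimal sketch of that computation is shown below; the five-stage label set (Wake, N1, N2, N3, REM) and the epoch sequences are assumptions for illustration, not data from the study.

```python
# Minimal sketch of the evaluation metric reported above: macro-averaged F1
# over a standard five-stage sleep label set. The label sequences are invented;
# the study's own data are not reproduced here.
from sklearn.metrics import f1_score

STAGES = ["Wake", "N1", "N2", "N3", "REM"]  # assumed stage set

# Hypothetical per-epoch labels: reference scoring vs. technician-reviewed output
y_true = ["Wake", "N1", "N2", "N2", "N3", "REM", "N2", "Wake"]
y_pred = ["Wake", "N2", "N2", "N2", "N3", "REM", "N1", "Wake"]

macro_f1 = f1_score(y_true, y_pred, labels=STAGES, average="macro", zero_division=0)
print(f"macro-F1: {macro_f1 * 100:.2f}")  # the abstract reports scores on a 0-100 scale
```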


10.2196/25148
2021
Vol 10 (3)
pp. e25148
Author(s):  
Ahmed Umar Otokiti
Catherine K Craven
Avniel Shetreat-Klein
Stacey Cohen
Bruce Darrow

Background Up to 60% of health care providers experience one or more symptoms of burnout. Perceived clinician burden resulting in burnout arises from factors such as electronic health record (EHR) usability, or lack thereof, perceived loss of autonomy, and documentation burden leading to less clinical time with patients. Burnout can have detrimental effects on health care quality and contributes to increased medical errors, decreased patient satisfaction, substance use, workforce attrition, and suicide. Objective This project aims to improve the user-centered design of the EHR by obtaining direct input from clinicians about deficiencies. Fixing identified deficiencies via user-centered design has the potential to improve usability and thereby increase satisfaction by reducing EHR-induced burnout. Methods Quantitative and qualitative data will be obtained from clinician EHR users. Input will be received through a form built in a REDCap database via a link embedded in the home page of the EHR. The REDCap data will be analyzed along 2 main dimensions, based on the nature of the input: what section of the EHR is affected and what is required to fix the issue(s). Identified issues will be escalated to the stakeholders responsible for rectifying them. Data analysis, project evaluation, and lessons learned from the evaluation will be incorporated in a Plan-Do-Study-Act (PDSA) manner every 4-6 weeks. Results The pilot phase of the study began in October 2020 in the Gastroenterology Division at Mount Sinai Hospital, New York City, NY, which includes 39 physicians and 15 nurses. The pilot is expected to run over a 4- to 6-month period, and the results of the REDCap data analysis will be reported within 1 month of completing the pilot phase. We will analyze the nature of the requests received and the impact of rectified issues on clinician EHR users. We expect the results to reveal which sections of the EHR have the most deficiencies while also highlighting workflow difficulties. The perceived impact of the project on provider engagement, patient safety, and workflow efficiency will also be captured by an evaluation survey and other qualitative methods where possible. Conclusions The project aims to improve the user-centered design of the EHR by soliciting direct input from clinician EHR users. The ultimate goal is to improve efficiency and reduce EHR inefficiencies, with the possibility of improving staff engagement and lessening EHR-induced clinician burnout. Our project implementation includes using informatics expertise to achieve the desired state of a learning health system, as recommended by the National Academy of Medicine, as we facilitate feedback loops and rapid cycles of improvement. International Registered Report Identifier (IRRID) PRR1-10.2196/25148
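The planned analysis groups clinician input by affected EHR section and by what is required to fix each issue. A minimal sketch of that grouping over already-exported REDCap records is shown below; the field names and example values are hypothetical, not the project's actual data dictionary.

```python
# Minimal sketch (hypothetical field names): summarize clinician feedback
# records exported from the REDCap form by affected EHR section and by the
# type of fix required, mirroring the two analysis dimensions described above.
from collections import defaultdict

records = [
    {"ehr_section": "orders", "fix_type": "configuration change"},
    {"ehr_section": "notes", "fix_type": "training"},
    {"ehr_section": "orders", "fix_type": "vendor ticket"},
    # ... additional exported records
]

summary = defaultdict(lambda: defaultdict(int))
for record in records:
    summary[record["ehr_section"]][record["fix_type"]] += 1

for section, fixes in summary.items():
    for fix_type, count in fixes.items():
        print(f"{section}: {fix_type} ({count})")
```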


2018
Vol 25 (6)
pp. 557-562
Author(s):  
Tyson Schwab
John Langell

Background. The rapid adoption of smartphones and software applications (apps) has become prevalent worldwide, making these technologies nearly universally available. Low-cost mobile health (M-health) platforms are being rapidly adopted in both developed and emerging markets and have transformed the health care delivery landscape. Human factors optimization is critical to the safe and sustainable adoption of M-health solutions. The overall goal of engaging human factors requirements in the software app design process is to decrease patient safety risks while increasing usability and productivity for the end user. Methods. An extensive review of the literature was conducted using PubMed and Google search engines to identify best approaches to M-health software design based on human factors and user-centered design to optimize the usability, safety, and efficacy of M-health apps. Extracted data were used to create a health care app development algorithm. Results. A best-practice algorithm for the design of mobile apps for global health care, based on the extracted data, was developed. The approach is based on an iterative 4-stage process that incorporates human factors and user-centered design processes. This process helps optimize the development of safe and effective mobile apps for use in global health care delivery and disease prevention. Conclusion. Mobile technologies designed for developing regions offer a potential solution for providing effective, low-cost health care. Applying human factors design principles to global health care app development helps ensure the delivery of safe and effective technologies tailored to the end users' requirements.
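The iterative 4-stage process can be pictured as a design loop that repeats until end-user requirements are met. The sketch below is illustrative only; the stage names and stopping criterion are assumptions, since the paper defines its own stages.

```python
# Illustrative sketch of an iterative, staged app design loop of the kind the
# algorithm describes. Stage names and the stopping test are assumptions.
from typing import Callable, List

def iterative_app_design(
    stages: List[Callable[[dict], dict]],
    requirements_met: Callable[[dict], bool],
    max_iterations: int = 5,
) -> dict:
    design = {}
    for _ in range(max_iterations):
        for stage in stages:          # run the staged process in order
            design = stage(design)
        if requirements_met(design):  # stop once end-user requirements are satisfied
            break
    return design

# Hypothetical stage functions standing in for the paper's four stages
def needs_assessment(d): return {**d, "requirements": "captured"}
def risk_analysis(d): return {**d, "risks": "assessed"}
def prototype(d): return {**d, "prototype": "built"}
def usability_test(d): return {**d, "tested": True}

final_design = iterative_app_design(
    [needs_assessment, risk_analysis, prototype, usability_test],
    lambda d: d.get("tested", False),
)
print(final_design)
```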


Author(s):  
Kermit G. Davis
Christopher R. Reid
David D. Rempel
Delia Treaster

Author(s):  
Steven M. Belz

Success in the marketplace doesn't happen by accident but through the application of human factors/ergonomics user-centered design principles.


1992
Vol 36 (11)
pp. 872-873
Author(s):  
Susan M. Dray
Debra S. George

This paper describes the results of focus groups conducted with I/S professionals and business users to identify “best practices” for the design of distributed systems. Many of these practices are “obvious” to a human factors professional, but the value of this effort was to help others identify them from their own experience.

