Patient Perception of Plain-Language Medical Notes Generated Using Artificial Intelligence Software: Pilot Mixed-Methods Study (Preprint)

2019 ◽  
Author(s):  
Sandeep Bala ◽  
Angela Keniston ◽  
Marisha Burden

BACKGROUND Clinicians’ time with patients has become increasingly limited due to regulatory burden, documentation and billing, administrative responsibilities, and market forces. These factors limit clinicians’ time to deliver thorough explanations to patients. OpenNotes began as a research initiative exploring whether sharing medical notes with patients could help them understand their health care. Providing patients access to their medical notes has been shown to have many benefits, including improved patient satisfaction and clinical outcomes. OpenNotes has since evolved into a national movement that helps clinicians share notes with patients. However, a significant barrier to the widespread adoption of OpenNotes has been clinicians’ concern that it may require additional time to correct patient confusion over medical language. Recent advances in artificial intelligence (AI) technology may help resolve this concern by converting medical notes to plain language with minimal time required of clinicians. OBJECTIVE This pilot study assesses patient comprehension and perceived benefits, concerns, and insights regarding an AI-simplified note through comprehension questions and guided interviews. METHODS Synthea, a synthetic patient generator, was used to generate a standardized medical-language patient note, which was then simplified using AI software. A multiple-choice comprehension assessment questionnaire was drafted with physician input. Study participants were recruited from inpatients at the University of Colorado Hospital. Participants were randomly assigned to be tested for their comprehension of either the standardized medical-language version or the AI-generated plain-language version of the patient note. Following this, participants reviewed the opposite version of the note and participated in a guided interview. 
A Student t test was performed to assess for differences in comprehension assessment scores between the plain-language and medical-language note groups. Multivariate modeling was performed to assess the impact of demographic variables on comprehension. Interview responses were thematically analyzed. RESULTS Twenty patients agreed to participate. The mean number of comprehension assessment questions answered correctly was higher in the plain-language group than in the medical-language group; however, the Student t test was underpowered to determine whether this difference was significant. Age, ethnicity, and health literacy were found by multivariate modeling to have a significant impact on comprehension scores. Thematic analysis of guided interviews highlighted patients’ perceived benefits, concerns, and suggestions regarding such notes. Major themes of benefits were that simplified plain-language notes may (1) be more useable than unsimplified medical-language notes, (2) improve the patient-clinician relationship, and (3) empower patients through an enhanced understanding of their health care. CONCLUSIONS AI software may translate medical notes into plain-language notes that are perceived as beneficial by patients. Limitations included sample size, inpatient-only setting, and possible confounding factors. Larger studies are needed to assess comprehension. Insight from patient responses to guided interviews can guide the future study and development of this technology.
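The group comparison described above can be sketched in a few lines. The scores below are hypothetical stand-ins (the study's raw data are not reproduced in this abstract), and the `student_t` helper is an illustrative pooled-variance implementation, not the study's actual analysis code.

```python
from statistics import mean, variance

# Hypothetical comprehension scores (questions answered correctly);
# invented for illustration, not taken from the study.
plain = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7]
medical = [6, 5, 7, 6, 5, 8, 6, 5, 7, 6]

def student_t(a, b):
    """Two-sample Student t statistic (equal-variance, pooled form)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

print(round(student_t(plain, medical), 2))
```

With only ten scores per group, a sizable t statistic can still fail to reach significance once the small sample is accounted for, which is the underpowering issue the authors report.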

10.2196/16670 ◽  
2020 ◽  
Vol 4 (6) ◽  
pp. e16670


2019 ◽  
Vol 28 (01) ◽  
pp. 041-046 ◽  
Author(s):  
Harshana Liyanage ◽  
Siaw-Teng Liaw ◽  
Jitendra Jonnagaddala ◽  
Richard Schreiber ◽  
Craig Kuziemsky ◽  
...  

Background: Artificial intelligence (AI) is heralded as an approach that might augment or substitute for the limited processing power of the human brain of primary health care (PHC) professionals. However, there are concerns that AI-mediated decisions may be hard to validate and challenge, or may result in rogue decisions. Objective: To form consensus about perceptions, issues, and challenges of AI in primary care. Method: A three-round Delphi study was conducted. Round 1 explored experts’ viewpoints on AI in PHC (n=20). Round 2 rated the appropriateness of statements arising from round 1 (n=12). The third round was an online panel discussion of findings (n=8) with members of both the International Medical Informatics Association and the European Federation of Medical Informatics Primary Health Care Informatics Working Groups. Results: PHC and informatics experts reported that AI has potential to improve managerial and clinical decisions and processes, and that this would be facilitated by common data standards. The respondents did not agree that AI applications should learn and adapt to clinician preferences or behaviour, and they did not agree on the extent of AI's potential for harm to patients. It was more difficult to assess the impact of AI-based applications on continuity and coordination of care. Conclusion: While the use of AI in medicine should enhance health care delivery, we need to ensure meticulous design and evaluation of AI applications. The primary care informatics community needs to be proactive and to guide the ethical and rigorous development of AI applications so that they will be safe and effective.


Author(s):  
Shakir Karim ◽  
Nitirajsingh Sandu ◽  
Ergun Gide

Artificial intelligence (AI) is one of the biggest emerging movements and promises in today’s technology world. In contrast to natural (human or animal) intelligence, AI, also called machine intelligence, is intelligence demonstrated by machines; it aims to mimic human intelligence by obtaining and applying knowledge and skills. It promises substantial involvement in, vast changes to, modernization of, and integration with people’s daily lives, and it helps people make prompt, appropriate decisions in real time. This paper provides an analysis of the health industry and health care system in Australia relevant to the consequences created by AI. The paper primarily used a secondary research analysis method to provide a wide-ranging investigation of the positive and negative health-related consequences of AI, the architects of those consequences, and those affected by them. The secondary resources comprised journal articles, reports, academic conference proceedings, media articles, corporation-based documents, blogs, and other appropriate sources. The study found that AI provides useful insights into the Australian health care system: it is steadily reducing the system’s costs and improving patients’ overall outcomes. AI can not only improve relations between the public and health enterprises but also make life better by increasing efficiency and modernization. However, beyond technology maturity, there are still many challenges to overcome before Australian health care can fully leverage the potential of AI, ethics being one of the most critical.   Keywords: Artificial Intelligence (AI), Health Industry, Health Care System, Australian Healthcare


2020 ◽  
Author(s):  
Soaad Q. Hossain

With the rise of artificial intelligence (AI) and its application within industries, there is no doubt that someday AI will be one of the key players in medical diagnoses, assessments, and treatments. With the involvement of AI in health care and medicine come concerns pertaining to its application, more specifically its impact on both patients and medical professionals. To further expand on the discussion, using ethics of care, the literature, and a systematic review, we address the impact of allowing AI to guide clinicians with medical procedures and decisions. We then argue that allowing AI to guide clinicians in this way can hinder patient-clinician relationships, concluding with a discussion on the future of patient care and how ethics of care can be used to investigate issues within AI in medicine.


2020 ◽  
Vol 10 (1_suppl) ◽  
pp. 99S-103S
Author(s):  
Michelle S. Lee ◽  
Matthew M. Grabowski ◽  
Ghaith Habboub ◽  
Thomas E. Mroz

As exponential expansion of computing capacity converges with unsustainable health care spending, a hopeful opportunity has emerged: the use of artificial intelligence to enhance health care quality and safety. These computer-based algorithms can perform extremely complex mathematical operations of classification or regression on immense amounts of data to detect intricate and potentially previously unknown patterns in that data, with the end result of creating predictive models that can be utilized in clinical practice. Such models are designed to distinguish relevant from irrelevant data regarding a particular patient; choose appropriate perioperative care, intervention, or surgery; predict cost of care and reimbursement; and predict future outcomes on a variety of anchored measures. If and when one is brought to fruition, an artificial intelligence platform could serve as the first legitimate clinical decision-making tool in spine care, delivering on the value equation while serving as a source for improving physician performance and promoting appropriate, efficient care in this era of financial uncertainty in health care.
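A minimal sketch of the kind of classification model the abstract describes, using a k-nearest-neighbour vote over synthetic patient records. The features, labels, and dataset are entirely hypothetical; no such data or algorithm choice appears in the abstract.

```python
from math import dist

# Hypothetical training records: (age, comorbidity count) -> outcome,
# where 1 = postoperative complication and 0 = none. Illustrative only.
train = [
    ((45, 0), 0), ((52, 1), 0), ((60, 1), 0),
    ((68, 3), 1), ((72, 2), 1), ((80, 4), 1),
]

def predict(features, k=3):
    """Classify by majority vote among the k nearest training records."""
    nearest = sorted(train, key=lambda rec: dist(rec[0], features))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

print(predict((70, 3)))  # a 70-year-old with 3 comorbidities
```

Real clinical predictive models would of course use far richer features, much larger datasets, and rigorous validation; the point here is only the shape of the classification task: labelled records in, a learned decision rule out.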


2021 ◽  
Vol 3 ◽  
Author(s):  
Richard Ribón Fletcher ◽  
Audace Nakeshimana ◽  
Olusubomi Olubeko

In low- and middle-income countries (LMICs), machine learning (ML) and artificial intelligence (AI) offer attractive solutions to address the shortage of health care resources and improve the capacity of the local health care infrastructure. However, AI and ML should also be used cautiously, due to potential issues of fairness and algorithmic bias that may arise if not applied properly. Furthermore, populations in LMICs can be particularly vulnerable to bias and unfairness in AI algorithms, due to a lack of technical capacity, existing social bias against minority groups, and a lack of legal protections. In order to address the need for better guidance within the context of global health, we describe three basic criteria (Appropriateness, Bias, and Fairness) that can be used to help evaluate the use of machine learning and AI systems: 1) APPROPRIATENESS is the process of deciding how the algorithm should be used in the local context, and properly matching the machine learning model to the target population; 2) BIAS is a systematic tendency in a model to favor one demographic group versus another, which can be mitigated but can lead to unfairness; and 3) FAIRNESS involves examining the impact on various demographic groups and choosing one of several mathematical definitions of group fairness that will adequately satisfy the desired set of legal, cultural, and ethical requirements. Finally, we illustrate how these principles can be applied using a case study of machine learning applied to the diagnosis and screening of pulmonary disease in Pune, India. We hope that these methods and principles can help guide researchers and organizations working in global health who are considering the use of machine learning and artificial intelligence.
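One of the mathematical definitions of group fairness the abstract refers to, demographic parity, can be checked with a one-line computation. The predictions and group split below are hypothetical, not drawn from the Pune case study.

```python
# Hypothetical model outputs for two demographic groups
# (1 = flagged as likely pulmonary disease). Illustrative only.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]
group_b = [1, 0, 0, 1, 0, 0, 0, 0]

def positive_rate(preds):
    """Fraction of a group receiving the positive prediction."""
    return sum(preds) / len(preds)

# Demographic parity difference: one common group-fairness metric.
# Values near 0 indicate the model flags both groups at similar rates.
parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(parity_gap)
```

Other definitions (equalized odds, predictive parity) compare error rates rather than raw positive rates, and the abstract's point stands: which definition "adequately" satisfies local legal and cultural requirements is a context-dependent choice, not a purely technical one.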

