May We Support Your Decision?

1996 ◽  
Vol 1 (3) ◽  
pp. 175-178 ◽  
Author(s):  
Colin Gordon

Expert systems to support medical decision-making have so far achieved few successes. Current technical developments, however, may overcome some of the limitations. Although there are several theoretical currents in medical artificial intelligence, there are signs of them converging. Meanwhile, decision support systems, which set themselves more modest goals than replicating or improving on clinicians' expertise, have come into routine use in places where an adequate electronic patient record exists. They may also be finding a wider role, assisting in the implementation of clinical practice guidelines. There is, however, still much uncertainty about the kinds of decision support that doctors and other health care professionals are likely to want or accept.

2020 ◽  
Vol 46 (7) ◽  
pp. 478-481 ◽  
Author(s):  
Joshua James Hatherley

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.



2020 ◽  
Vol 46 (2) ◽  
Author(s):  
Mélanie Bourassa Forcier ◽  
Lara Khoury ◽  
Nathalie Vézina

This paper explores Canadian liability concerns flowing from the integration of artificial intelligence (AI) as a tool assisting physicians in their medical decision-making. It argues that the current Canadian legal framework is sufficient, in most cases, to allow developers and users of AI technology to assess each stakeholder's responsibility should the technology cause harm.


10.2196/26611 ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. e26611
Author(s):  
Thomas Ploug ◽  
Anna Sundby ◽  
Thomas B Moeslund ◽  
Søren Holm

Background Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public’s interests in such features of AI. Objective This study elicited the public’s preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. Methods We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents’ views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using a fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. Results Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents considered the physician having final responsibility for treatment decisions the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents’ trust in health and technology, and respondents’ fears and hopes regarding AI, do not play a significant role in the majority of cases. Conclusions The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on the use of AI systems in health care should give priority to systems with these features and ensure that patients are provided with information about them.
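The attribute weights reported in this abstract (46.8%, 27.3%, 14.8%) are relative importances of the kind produced by choice-based conjoint analysis: each attribute's share of the total range of estimated part-worth utilities. A minimal sketch of that arithmetic, using hypothetical part-worth values chosen for illustration only (these are not the study's actual estimates):

```python
# Sketch: deriving relative attribute importance in conjoint analysis.
# An attribute's importance is the range of its part-worth utilities
# divided by the sum of the ranges across all attributes.

def relative_importance(part_worths):
    """part_worths: {attribute: {level: utility}} -> {attribute: weight}"""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

# Hypothetical part-worths for three of the study's six attributes.
part_worths = {
    "final responsibility": {"physician": 1.2, "AI system": -1.2},
    "explainability":       {"full": 0.7, "none": -0.7},
    "discrimination test":  {"tested": 0.4, "untested": -0.4},
}

weights = relative_importance(part_worths)
for attr, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {w:.1%}")
```

The larger the spread between an attribute's best and worst levels, the more that attribute drove respondents' choices; the weights by construction sum to 100%.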


1995 ◽  
Vol 23 (1) ◽  
pp. 82-87 ◽  
Author(s):  
Paul K. Longmore

In discussions of medical decision making as it applies to people with disabilities, a major obstacle stands in the way: the perceptions and values of disabled people (particularly disability rights advocates and disabled social scientists) and of many nondisabled people (particularly health care professionals, ethicists, and health policy analysts), regarding virtually the whole range of current health and medical-ethical issues (treatment decision making, health care access and health care rationing, medical cost containment, and assisted suicide), seem frequently to conflict with one another. This divergence in part grows out of the sense, common among people with disabilities, that their interactions with “the helping professions,” medical and social service professionals, are adversarial. But those differences of opinion also stem more basically from a clash of fundamental values. This paper addresses, in historical perspective, the ways in which the status of persons with disabilities as a stigmatized minority group affects medical decision making. It also examines the efforts of disability rights activists to prevent discrimination against persons with disabilities in current medical culture. Finally, it raises questions about how the rights of people with disabilities will fare as new care standards are developed and implemented.


1994 ◽  
Vol 9 (2) ◽  
pp. 58-63 ◽  
Author(s):  
Gilbert M. Goldman ◽  
Thyyar M. Ravindranath

Critical care decision-making involves principles common to all medical decision-making. However, critical care is a remarkably distinctive form of clinical practice, and therefore it may be useful to distinguish those elements particularly important or unique to ICU decision-making. The peculiar contextuality of critical care decision-making may be the best example of these elements. If so, attempts to improve our understanding of ICU decision-making may benefit from a formal analysis of its remarkable contextual nature. Four key elements of the context of critical care decisions can be identified: (1) costs, (2) time constraints, (3) the uncertain status of much clinical data, and (4) the continually changing environment of the ICU setting. These 4 elements comprise the context for the practice of clinical judgment in the ICU. The fact that intensivists are severely constrained by the context of each case has important ramifications both for practice and for retrospective review. During retrospective review, the contextual nature of ICU judgment may be unfairly neglected by ignoring one or more of the key elements. Such neglect can be avoided if intensivists demand empathetic evaluation from reviewers.


Author(s):  
Eelco Draaisma ◽  
Lauren A. Maggio ◽  
Jolita Bekhof ◽  
A. Debbie C. Jaarsma ◽  
Paul L. P. Brand

Introduction Although evidence-based medicine (EBM) teaching activities may improve short-term EBM knowledge and skills, they have little long-term impact on learners’ EBM attitudes and behaviour. This study examined the effects of learning EBM through stand-alone workshops or various forms of deliberate EBM practice. Methods We assessed EBM attitudes and behaviour with the evidence-based practice inventory questionnaire in paediatric health care professionals who had only participated in a stand-alone EBM workshop (controls), participants with a completed PhD in clinical research (PhDs), those who had completed part of their paediatric residency at a department (Isala Hospital) that systematically implemented EBM in its clinical and teaching activities (former Isala residents), and a reference group of paediatric professionals currently employed at Isala’s paediatric department (current Isala participants). Results Compared to controls (n = 16), current Isala participants (n = 13) reported more positive EBM attitudes (p < 0.01), gave more priority to using EBM in decision making (p = 0.001) and reported more EBM behaviour (p = 0.007). PhDs (n = 20) gave more priority to using EBM in medical decision making (p < 0.001) and reported more EBM behaviour than controls (p = 0.016). Discussion Health care professionals exposed to deliberate practice of EBM, either in the daily routines of their department or by completing a PhD in clinical research, view EBM as more useful and are more likely to use it in decision making than their peers who only followed a standard EBM workshop. These findings support the use of deliberate practice as the basis for postgraduate EBM educational activities.


2020 ◽  
Vol 176 ◽  
pp. 1703-1712
Author(s):  
Georgy Lebedev ◽  
Eduard Fartushnyi ◽  
Igor Fartushnyi ◽  
Igor Shaderkin ◽  
Herman Klimenko ◽  
...  
