Transparency

Author(s):  
Nicholas Diakopoulos

This chapter describes algorithmic decision-making (ADM) systems. ADM systems are tools that leverage an algorithmic process to arrive at some form of decision such as a scoring, ranking, classification, or association that may then drive further system action and behavior. Such systems could be said to exhibit artificial intelligence (AI) insofar as they contribute to decision-making tasks that might normally be undertaken by humans. However, it is important to underscore that ADM systems must be understood as composites of nonhuman actors woven together with human actors such as designers, data-creators, maintainers, and operators into complex sociotechnical assemblages. If the end goal is accountability, then transparency must serve to help locate the various positions of human agency and responsibility in these large and complex sociotechnical assemblages. Ultimately, it is people who must be held accountable for the behavior of algorithmic systems. The chapter then highlights what is needed to realistically implement algorithmic transparency in terms of what is disclosed and how and to whom transparency information is disclosed.

Author(s):  
Ekaterina Jussupow ◽  
Kai Spohrer ◽  
Armin Heinzl ◽  
Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians with and without clinical experience as well as experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than without advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.


2018 ◽  
Vol 13 (1) ◽  
pp. 123-127 ◽  
Author(s):  
David Kerr ◽  
David C. Klonoff

In the future, artificial intelligence (AI) will have the potential to improve outcomes in diabetes care. With the creation of new sensors for physiological monitoring and the introduction of smart insulin pens, novel data relationships based on personal phenotypic and genotypic information will lead to the selection of tailored, effective therapies that will transform health care. However, decision-making processes based exclusively on quantitative metrics that ignore qualitative factors could create a quantitative fallacy. Inputs into AI-based therapeutic decision-making processes that are difficult to quantify include empathy, compassion, experience, and unconscious bias. Failure to consider these “softer” variables could lead to important errors. In other words, that which is not quantified about human health and behavior is still part of the calculus for determining therapeutic interventions.


2020 ◽  
Author(s):  
Avishek Choudhury

Objective: The potential benefits of artificial intelligence based decision support systems (AI-DSS) are well documented from a theoretical perspective and perceived by researchers, but there is a lack of evidence showing their influence on routine clinical practice and how they are perceived by care providers, since the effectiveness of AI systems depends on data quality, implementation, and interpretation. The purpose of this literature review is to analyze the effectiveness of AI-DSS in clinical settings and understand their influence on clinicians’ decision-making outcomes. Materials and Methods: This review protocol follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines. Literature will be identified using a multi-database search strategy developed in consultation with a librarian. The proposed screening process consists of a title and abstract scan, followed by a full-text review by two reviewers to determine the eligibility of articles. Studies outlining the application of an AI-based decision support system in a clinical setting and its impact on clinicians’ decision making will be included. A tabular synthesis of the general study details will be provided, as well as a narrative synthesis of the extracted data, organised into themes. Studies solely reporting AI accuracy, but not implemented in a clinical setting to measure influence on clinical decision making, were excluded from further review. Results: We identified 8 eligible studies that implemented AI-DSS in a clinical setting to facilitate decisions concerning prostate cancer, post-traumatic stress disorder, cardiac ailments, back pain, and others. Five (62.5%) of the 8 studies reported a positive outcome of AI-DSS. Conclusion: The systematic review indicated that AI-enabled decision support systems, when implemented in a clinical setting and used by clinicians, might not ensure enhanced decision making. However, there are very limited studies to confirm the claim that AI-based decision support systems can improve clinicians’ decision-making abilities.


2020 ◽  
Vol 48 (7) ◽  
pp. 1-12
Author(s):  
Ran Xiong ◽  
Ping Wei

Confucian culture has had a deep-rooted influence on Chinese thinking and behavior for more than 2,000 years. With a manually created Confucian culture database and the 2017 China floating population survey, we empirically tested the relationship between Confucian culture and individual entrepreneurial choice among China's floating population. After using the presence and number of Confucian schools and temples, and of chaste women, as instrumental variables to counteract problems of endogeneity, we found that Confucian culture played a significant role in promoting individuals' entrepreneurial decision making among China's floating population. The results showed that individuals from areas of China strongly influenced by Confucian culture were more likely than those from other areas to choose entrepreneurship as their occupation. Our findings reveal cultural factors that affect individual entrepreneurial behavior, and also illustrate the positive role of Confucianism, as a representative of traditional Chinese culture, in the 21st century.
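To make the identification strategy concrete, below is a minimal two-stage least squares (2SLS) sketch of the instrumental-variable approach the abstract describes. The variable names, the synthetic data, and the linear probability model in the second stage are all illustrative assumptions; they are not the authors' actual survey fields or estimator.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data standing in for survey respondents (sketch only).
rng = np.random.default_rng(0)
n = 1000
n_confucian_schools = rng.poisson(3, size=n).astype(float)  # instrument
confucian_influence = 0.5 * n_confucian_schools + rng.normal(size=n)  # endogenous regressor
controls = rng.normal(size=(n, 2))  # e.g., age, education (hypothetical)
entrepreneur = (
    0.3 * confucian_influence + controls @ np.array([0.1, -0.2]) + rng.normal(size=n) > 0
).astype(float)  # binary occupational choice

# Stage 1: regress the endogenous regressor on the instrument and controls.
X1 = sm.add_constant(np.column_stack([n_confucian_schools, controls]))
stage1 = sm.OLS(confucian_influence, X1).fit()
influence_hat = stage1.fittedvalues

# Stage 2: regress the outcome on the fitted values and controls
# (a linear probability model, kept linear for simplicity).
X2 = sm.add_constant(np.column_stack([influence_hat, controls]))
stage2 = sm.OLS(entrepreneur, X2).fit()
print(stage2.params)  # coefficient on influence_hat is the 2SLS estimate
```

In practice one would use a packaged IV estimator (for example, IV2SLS from the linearmodels library), since standard errors from a manually run second stage are not valid.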


This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another. AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge—far from completely understood—that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. Despite these differences, however, the rapid development of AI technology today has led to a growing number of ethical issues in a multitude of fields, ranging from disciplines as far-reaching as international human rights law to issues as intimate as personal identity and sexuality. In fact, the number and variety of topics in this volume illustrate the breadth and diversity of content, and the at times exasperatingly vague boundaries, of “AI Ethics” as a domain of inquiry. Within this discourse, the book points to the capacity of sociotechnical systems that utilize data-driven algorithms to classify, to make decisions, and to control complex systems. Given the wide-reaching and often intimate impact these AI systems have on daily human lives, this volume attempts to address the increasingly complicated relations between humanity and artificial intelligence. It considers not only how humanity must conduct itself toward AI but also how AI must behave toward humanity.


2020 ◽  
Vol 34 (10) ◽  
pp. 13849-13850
Author(s):  
Donghyeon Lee ◽  
Man-Je Kim ◽  
Chang Wook Ahn

In the real-time strategy (RTS) game StarCraft II, players need to anticipate the consequences of combat decisions before making them. We propose a combat outcome predictor that utilizes terrain information as well as squad information. To train the model, we generated a StarCraft II combat dataset by simulating diverse and large-scale combat situations. The overall accuracy of our model was 89.7%. Our predictor can be integrated into an artificial intelligence agent for RTS games as a short-term decision-making module.
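As a rough illustration of this kind of model, here is a minimal sketch of a combat outcome classifier over concatenated squad and terrain features. The feature layout, network size, and synthetic dataset are assumptions for illustration only; the abstract does not specify the authors' architecture, and a real dataset would come from simulated StarCraft II combats rather than random draws.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical feature layout (sketch only): unit-composition counts for each
# side plus a few terrain descriptors (e.g., high-ground flag, choke width).
rng = np.random.default_rng(42)
n_samples, n_squad, n_terrain = 5000, 20, 4
X = rng.normal(size=(n_samples, 2 * n_squad + n_terrain))

# Synthetic label "side A wins": stronger side-A squad beats side B, with
# terrain tipping close fights. A real label would come from combat simulation.
y = (
    X[:, :n_squad].sum(axis=1)
    - X[:, n_squad:2 * n_squad].sum(axis=1)
    + 0.5 * X[:, -n_terrain:].sum(axis=1)
    > 0
).astype(int)

# Train a small feed-forward classifier and report held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

An RTS agent could query such a predictor before committing to an engagement, retreating when the predicted win probability is low.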

