Explanations in Artificial Intelligence Decision Making

Author(s):  
Norman G. Vinson ◽  
Heather Molyneaux ◽  
Joel D. Martin

The opacity of AI systems' decision making has led to calls to modify these systems so they can explain their decisions. This chapter discusses what such explanations should address, and what form they should take, to meet the concerns that have been raised and to prove satisfactory to users. More specifically, the chapter briefly reviews the typical forms of AI decision making currently used to make real-world decisions affecting people's lives. Drawing on concerns about AI decision making expressed in the literature and the media, it then derives principles that these systems should respect and corresponding requirements for their explanations. A mapping between those explanation requirements and the types of explanations generated by AI decision-making systems reveals the strengths and shortcomings of those explanations.
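As a concrete illustration of one common explanation type discussed in this literature, the following is a minimal sketch of a feature-attribution explanation for a single decision of a linear model. The loan-style feature names and the synthetic data are hypothetical, chosen purely for illustration; this is not the chapter's own method.

```python
# Minimal sketch: a feature-attribution explanation for one decision.
# Feature names and data are hypothetical illustration values.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

applicant = X[0]
decision = model.predict(applicant.reshape(1, -1))[0]

# For a linear model, coefficient * feature value is each feature's
# additive contribution to the decision score -- one simple, faithful
# local explanation of the kind surveyed above.
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
print("decision:", "approve" if decision == 1 else "deny")
```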

2021 ◽  
pp. 3-23
Author(s):  
Stuart Russell

Following the analysis given by Alan Turing in 1951, one must expect that AI capabilities will eventually exceed those of humans across a wide range of real-world decision-making scenarios. Should this be a cause for concern, as Turing, Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real: we have to work out how to design AI systems that are far more powerful than ourselves while ensuring that they never have power over us. I believe the technical aspects of this problem are solvable. Whereas the standard model of AI proposes to build machines that optimize known, exogenously specified objectives, a preferable approach would be to build machines that are of provable benefit to humans. I introduce assistance games as a formal class of problems whose solution, under certain assumptions, has the desired property.
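As a rough toy illustration of the idea, not Russell's formalism, the sketch below shows an assistance-game-style choice: the machine holds a belief over the human's true objective and defers (asks) when the expected value of querying exceeds that of acting on its best guess. All names, payoffs, and the query cost are hypothetical.

```python
# Toy sketch: a machine uncertain about the human's objective chooses
# between acting on its best guess and deferring to the human.
# Payoffs and query cost are hypothetical illustration values.
import numpy as np

# Possible human objectives (theta) and the machine's belief over them.
thetas = ["prefers_A", "prefers_B"]
belief = np.array([0.6, 0.4])

# Payoff to the human of each machine action under each objective.
#                 prefers_A  prefers_B
payoff = {"do_A": np.array([+1.0, -1.0]),
          "do_B": np.array([-1.0, +1.0])}
query_cost = 0.1  # asking the human takes time but reveals theta

# Expected payoff of acting now on the current belief.
act_values = {a: float(belief @ v) for a, v in payoff.items()}
best_act = max(act_values, key=act_values.get)

# Expected payoff of asking first: the machine learns theta, then
# picks the best action for it (worth +1 in this toy setup).
ask_value = 1.0 - query_cost

if ask_value > act_values[best_act]:
    print(f"defer to human (ask), expected value {ask_value:.2f}")
else:
    print(f"{best_act}, expected value {act_values[best_act]:.2f}")
```

With the belief above, acting is worth only 0.2 in expectation while asking is worth 0.9, so the machine defers; the incentive to defer comes entirely from its uncertainty about the objective.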


2021 ◽  
Vol 11 (23) ◽  
pp. 11187
Author(s):  
Xadya van Bruxvoort ◽  
Maurice van Keulen

In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., we view the algorithm, together with the organization's infrastructure, rules, and procedures, as a single system to be designed. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during design to identify relevant concerns. The framework has been validated by applying it to real-world fraud detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by the courts. The latter is an algorithm in development. In both cases, the framework proved effective in identifying all ethical risks. For SyRI, all concerns found in the media were also identified by the framework, mainly focused on transparency of the entire socio-technical system. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and communication to and with the public, providing a more thorough overview than the risks raised in the media.
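To make the framework's use during design concrete, a team could encode the five principles as a checklist walked through for each component of the socio-technical system. The sketch below is a hypothetical encoding; the concern prompts are illustrative and are not the authors' actual instrument.

```python
# Illustrative sketch: the five ethical principles as a design-review
# checklist. Prompts are hypothetical, not the paper's instrument.
PRINCIPLES = {
    "beneficence":     "Does the system demonstrably benefit those it affects?",
    "non-maleficence": "What harms (e.g., false fraud flags) can it cause, and to whom?",
    "autonomy":        "Can affected citizens contest or opt out of a decision?",
    "justice":         "Are error rates balanced across demographic groups?",
    "explicability":   "Is the whole socio-technical system transparent, not just the model?",
}

def review(component: str, answers: dict[str, str]) -> list[str]:
    """Return the principles left unaddressed for a system component."""
    return [p for p in PRINCIPLES if not answers.get(p)]

# Example: reviewing the data-sharing infrastructure around the algorithm.
gaps = review("data_sharing", {"beneficence": "reduces fraud losses"})
for p in gaps:
    print(f"unaddressed [{p}]: {PRINCIPLES[p]}")
```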


In this chapter, the concept of a reasoning community is introduced. The overarching motivation is to understand reasoning within groups in real world settings so that technologies can be designed to better support the process. Four phases of the process of reasoning by a community are discerned: engagement of participants, individual reasoning, group coalescing, and, ultimately, group decision making. A reasoning community is contrasted with communities of practice and juxtaposed against concepts in related endeavours including computer supported collaborative work, decision science, and artificial intelligence.


2020 ◽  
Vol 133 (17) ◽  
pp. 2020-2026
Author(s):  
Cheng-Xu Li ◽  
Wen-Min Fei ◽  
Chang-Bing Shen ◽  
Zi-Yi Wang ◽  
Yan Jing ◽  
...  

2021 ◽  
pp. 19-24
Author(s):  
Stuart Russell

Abstract: A long tradition in philosophy and economics equates intelligence with the ability to act rationally—that is, to choose actions that can be expected to achieve one’s objectives. This framework is so pervasive within AI that it would be reasonable to call it the standard model. A great deal of progress on reasoning, planning, and decision-making, as well as perception and learning, has occurred within the standard model. Unfortunately, the standard model is unworkable as a foundation for further progress because it is seldom possible to specify objectives completely and correctly in the real world. The chapter proposes a new model for AI development in which the machine’s uncertainty about the true objective leads to qualitatively new modes of behavior that are more robust, controllable, and deferential to humans.
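A toy numeric sketch, in the spirit of the off-switch analyses associated with this line of work, shows how uncertainty about the objective produces the deferential behavior described above. All numbers here are hypothetical illustration values.

```python
# Toy sketch: uncertainty about the true objective induces deference.
# U is the (unknown) true utility to the human of the machine's
# proposed action; the machine only has a belief over U.
import numpy as np

def machine_choice(u_samples: np.ndarray) -> str:
    """Choose among: act now, defer to the human, or switch off.

    Deferring lets the human allow the action iff its true utility
    is positive, so its value is E[max(U, 0)]; acting is worth E[U];
    switching off is worth 0.
    """
    values = {
        "act": u_samples.mean(),
        "defer to human": np.maximum(u_samples, 0).mean(),
        "switch off": 0.0,
    }
    return max(values, key=values.get)

rng = np.random.default_rng(1)
# Certain machine (standard model): point belief U = 0.5. Deferring
# adds nothing, so ties break toward acting -> "act".
print(machine_choice(np.array([0.5])))
# Uncertain machine: broad belief over U. Since E[max(U, 0)] >= E[U],
# with real uncertainty the human's veto has value -> "defer to human".
print(machine_choice(rng.normal(0.5, 2.0, size=10_000)))
```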


Author(s):  
Kushagra Singh Sisodiya

Abstract: The Industrial Revolution 4.0 has flooded the virtual world with data, including Internet of Things (IoT) data, mobile data, cybersecurity data, business data, social network data, and health data. To analyse this data efficiently and to build efficient, streamlined applications on it, expertise in artificial intelligence, specifically machine learning (ML), is required. The field draws on a variety of machine learning methods: supervised, unsupervised, semi-supervised, and reinforcement learning. Additionally, deep learning, a subset of the broader family of machine learning techniques, can effectively analyse vast amounts of data. Machine learning is a broad term encompassing a number of methods used to extract information from data. These methods may allow the rapid translation of massive real-world datasets into applications that assist patients and providers in making decisions. The objective of this literature review was to find observational studies that used machine learning on secondary data to enhance patient-provider decision making. Keywords: Machine Learning, Real World, Patient, Population, Artificial Intelligence
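As a small illustration of two of the learning paradigms named in this abstract, supervised and unsupervised learning, here is a minimal scikit-learn sketch on synthetic data; the dataset and parameter choices are arbitrary.

```python
# Minimal sketch of two learning paradigms on synthetic data:
# supervised classification and unsupervised clustering.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: learn from labeled examples, evaluate on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("supervised accuracy:", clf.score(X_te, y_te))

# Unsupervised: discover structure without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in range(3)])
```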


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mariem Gandouz ◽  
Hajo Holzmann ◽  
Dominik Heider

Abstract: Machine learning and artificial intelligence have entered biomedical decision making for diagnostics, prognostics, and therapy recommendations. However, these methods must be interpreted with care because of the severe consequences for patients. In contrast to human decision makers, computational models typically render a decision even when their confidence is low. Machine learning with abstention better reflects human decision making by introducing a reject option for samples with low confidence. The abstention intervals are typically symmetric intervals around the decision boundary. In the current study, we use asymmetric abstention intervals, which we demonstrate to be better suited for biomedical data, which is typically highly imbalanced. We evaluate symmetric and asymmetric abstention on three real-world biomedical datasets and show that both approaches can significantly improve classification performance. However, asymmetric abstention rejects as many or fewer samples than symmetric abstention and should thus be used for imbalanced data.
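A minimal sketch of the idea, assuming a probabilistic classifier: a symmetric reject option abstains when the predicted positive-class probability lies in [0.5 − δ, 0.5 + δ], while an asymmetric one uses a different width on each side of the boundary. The threshold values below are illustrative, not the paper's fitted ones.

```python
# Sketch of classification with a reject option: symmetric vs.
# asymmetric abstention intervals around the 0.5 decision boundary.
# Threshold values are illustrative, not the paper's fitted ones.
import numpy as np

def predict_with_abstention(p_pos: np.ndarray, lower: float, upper: float):
    """Return +1 / -1 predictions, with 0 meaning 'abstain'.

    Abstains when the positive-class probability falls strictly inside
    (lower, upper); asymmetric intervals have lower != 1 - upper.
    """
    out = np.where(p_pos >= upper, 1, -1)
    out[(p_pos > lower) & (p_pos < upper)] = 0
    return out

p = np.array([0.05, 0.40, 0.48, 0.55, 0.70, 0.95])

# Symmetric interval around 0.5 (delta = 0.15): three abstentions.
print(predict_with_abstention(p, 0.35, 0.65))   # [-1  0  0  0  1  1]

# Asymmetric interval, narrower on the majority (negative) side:
# as many or fewer rejections, matching the behavior described above.
print(predict_with_abstention(p, 0.45, 0.60))   # [-1 -1  0  0  1  1]
```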

