Explainable Artificial Intelligence: What Do You Need to Know?

Author(s):  
Sam Hepenstal ◽  
David McNeish

In domains that involve high-risk and high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. We report on research findings to propose that explanations should be tailored, depending upon the role of the human interacting with the system and the individual system components, to reflect different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture the complexity of needs. Thus, designing explainable AI systems involves careful consideration of context, and within that the nature of both the human and AI components.

2017 ◽  
Vol 23 (1) ◽  
pp. 21-26 ◽  
Author(s):  
Vineet Sahu

Corruption in public life needs to be examined in greater detail, not only as an individual lapse but also as a feature of the collective that either does or does not put pressure on the individual to lapse. This paper takes a methodologically holistic perspective, going beyond the methodologically individualistic perspective, in understanding corruption. The claim is that the locus of responsibility cannot be restricted to the individual alone while the collective (if there be such an entity) is left scot-free. This claim is premised on the conception that an individual’s act which deviates from expected and established norms cannot be faulted only at the level of the individual; careful consideration is needed to assess the role of the collective in precipitating the lapse(s) in the individual’s actions. This paper argues for sharing the liability of corruption in public life between the legally responsible individual as agent and the cultural milieu in which the agent operates. At a foundational level, this paper calls for a reconceptualization of individual agency and decision making, from being isolated and discrete to being construed by the collective of which the individual agent is a part.


2020 ◽  
Author(s):  
Yasmeen Alufaisan ◽  
Laura Ranee Marusich ◽  
Jonathan Z Bakdash ◽  
Yan Zhou ◽  
Murat Kantarcioglu

Explainable AI provides insights to users into the why of model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has typically focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but there is no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
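As a rough sketch of the kind of comparison reported here, the snippet below computes human decision accuracy separately for the three conditions (no AI, AI prediction only, AI prediction with explanation). The trial records are illustrative placeholders only, not the study's data or design.

```python
# Sketch: compare human decision accuracy across three study conditions.
# Trial records below are illustrative placeholders, not the study data.
from collections import defaultdict

# Each trial: (condition, whether the participant's decision was correct)
trials = [
    ("control", True), ("control", False), ("control", True),
    ("ai_prediction", True), ("ai_prediction", True), ("ai_prediction", False),
    ("ai_with_explanation", True), ("ai_with_explanation", True),
    ("ai_with_explanation", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for condition, is_correct in trials:
    total[condition] += 1
    correct[condition] += int(is_correct)

for condition in ("control", "ai_prediction", "ai_with_explanation"):
    accuracy = correct[condition] / total[condition]
    print(f"{condition}: accuracy = {accuracy:.2f} (n = {total[condition]})")
```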


SAGE Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 215824402092703
Author(s):  
Andriani Kusumawati ◽  
Sari Listyorini ◽  
Suharyono ◽  
Edy Yulianto

Religiosity covers all aspects of human life values. Consumer decision-making in purchasing Muslim products needs to involve religiosity. Muslim fashion is increasingly popular and has become a promising business for fashion entrepreneurs in Indonesia. This creates a dilemma for consumers, as Muslim fashion users, over whether to conform to religious sharia or to follow the trend. The purpose of this article is to identify the role of religiosity as a factor affecting Muslim consumers’ intention to revisit Muslim fashion stores. This research involved 243 Muslim consumers of several Muslim fashion stores. The results showed that the religiosity of Muslim consumers had a direct effect on patronage intention toward Muslim fashion stores and an indirect effect through customer satisfaction. The findings point to managerial implications for Muslim fashion entrepreneurs in relation to consumer religiosity and the marketing of Indonesian Muslim fashion products.
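To make the direct/indirect distinction concrete, the sketch below estimates a direct effect and a mediated (indirect) effect with ordinary least squares using the product-of-coefficients approach. The variable values are synthetic placeholders; the abstract does not specify the study's actual model or estimator, so this is an assumption-laden illustration only.

```python
# Sketch: direct vs. indirect (mediated) effect via product of coefficients.
# All data are synthetic placeholders; the study's actual estimator is unknown.
import numpy as np

rng = np.random.default_rng(0)
n = 243  # sample size mentioned in the abstract

religiosity = rng.normal(size=n)
satisfaction = 0.5 * religiosity + rng.normal(scale=0.8, size=n)
patronage = 0.3 * religiosity + 0.6 * satisfaction + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *xs):
    """Return OLS slopes of y on the given predictors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]

a = ols_slopes(satisfaction, religiosity)[0]                    # religiosity -> satisfaction
c_prime, b = ols_slopes(patronage, religiosity, satisfaction)   # direct effect, satisfaction -> patronage

print(f"direct effect:   {c_prime:.3f}")
print(f"indirect effect: {a * b:.3f}")
```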


Author(s):  
Syahrizal Dwi Putra ◽  
M Bahrul Ulum ◽  
Diah Aryani

An expert system, which is part of artificial intelligence, is a computer system able to imitate the reasoning of an expert with particular expertise. An expert system in the form of software can replace the role of a human expert in decision-making, based on the symptoms provided, up to a certain level of certainty. This study addresses a problem many women experience: not realizing that they have uterine myomas. Many women are unaware that the symptoms they already feel are symptoms of uterine myomas in their bodies. It is therefore important for women to be able to diagnose themselves so that they can seek treatment as quickly as possible. In this study, the expert first provides the expert CF values. The user/respondent then assesses her own condition with the user CF values. Finally, the values obtained from these two factors are processed using the certainty factor formula. Users must answer all questions posed by the system according to their current condition. Once all the questions have been answered, the system displays the result, identifying whether or not the user is suffering from uterine myoma. The expert system with the certainty factor method was tested with a patient who entered the symptoms she experienced and obtained a confidence level of 98.70% for uterine myomas/fibroids. These results indicate that an expert system with the certainty factor method can be used to assist in diagnosing uterine myomas as early as possible.
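As a rough illustration of how such a diagnosis score might be computed, the sketch below multiplies expert and user certainty factors per symptom and then aggregates them with the standard MYCIN-style combination rule. The symptom names and CF values are hypothetical; the abstract does not list the actual rule base.

```python
# Minimal sketch of the certainty factor (CF) method described above.
# Symptom names and CF values are hypothetical, not the study's rule base.

def combine_cf(cf_old: float, cf_new: float) -> float:
    """Combine two positive certainty factors (MYCIN-style rule)."""
    return cf_old + cf_new * (1 - cf_old)

# CF assigned by the expert to each symptom as evidence of uterine myoma.
expert_cf = {
    "pelvic_pain": 0.8,
    "heavy_menstrual_bleeding": 0.9,
    "frequent_urination": 0.6,
}

# CF reported by the user for how strongly she experiences each symptom.
user_cf = {
    "pelvic_pain": 0.6,
    "heavy_menstrual_bleeding": 1.0,
    "frequent_urination": 0.4,
}

# Per-symptom CF is the product of expert and user values; the overall
# confidence is the running combination over all symptoms.
overall = 0.0
for symptom, cf_expert in expert_cf.items():
    cf_symptom = cf_expert * user_cf[symptom]
    overall = combine_cf(overall, cf_symptom)

print(f"Confidence in uterine myoma: {overall * 100:.2f}%")
```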


Author(s):  
Ekaterina Jussupow ◽  
Kai Spohrer ◽  
Armin Heinzl ◽  
Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians with and without clinical experience as well as experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than without advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.


Organization ◽  
2019 ◽  
Vol 26 (5) ◽  
pp. 655-672 ◽  
Author(s):  
Verena Bader ◽  
Stephan Kaiser

Artificial intelligence can provide organizations with prescriptive options for decision-making. Based on the notions of algorithmic decision-making and user involvement, we assess the role of artificial intelligence in workplace decisions. Using a case study on the implementation and use of cognitive software in a telecommunications company, we address how actors can become distanced from or remain involved in decision-making. Our results show that humans are increasingly detached from decision-making spatially as well as temporally and in terms of rational distancing and cognitive displacement. At the same time, they remain attached to decision-making because of accidental and infrastructural proximity, imposed engagement, and affective adhesion. When human and algorithmic intelligence become unbalanced in regard to humans’ attachment to decision-making, three performative effects result: deferred decisions, workarounds, and (data) manipulations. We conceptualize the user interface that presents decisions to humans as a mediator between human detachment and attachment and, thus, between algorithmic and humans’ decisions. These findings contrast the traditional view of automated media as diminishing user involvement and have useful implications for research on artificial intelligence and algorithmic decision-making in organizations.


Risks ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 137
Author(s):  
Alex Gramegna ◽  
Paolo Giudici

We propose an Explainable AI model that can be employed to explain why a customer buys or abandons a non-life insurance coverage. The method applies similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification model. Our proposed method can be embedded into a technology-based insurance service (Insurtech), making it possible to understand, in real time, the factors that most contribute to customers’ decisions and thereby gain proactive insights into their needs. We validate our model with an empirical analysis of data on purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results of the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
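A minimal sketch of this kind of pipeline is given below, assuming the xgboost, shap, and scikit-learn libraries and a synthetic dataset in place of the insurance micro-policy data. The clustering algorithm (KMeans), number of clusters, and model settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch: cluster customers by the Shapley values of an XGBoost classifier.
# Synthetic data stands in for the insurance micro-policy dataset; cluster
# count and model parameters are illustrative assumptions.
import shap
import xgboost as xgb
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Stand-in for customer features and a buy/churn label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Fit the predictive classifier.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, random_state=0)
model.fit(X, y)

# Shapley values: each customer's per-feature contribution to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Group customers whose predictions are driven by similar factors.
clusters = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(shap_values)
print(clusters[:20])
```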


1979 ◽  
Vol 7 (3) ◽  
pp. 259-281 ◽  
Author(s):  
Robert W. Gilmer ◽  
Daniel C. Morgan

This article assesses wealth-neutral grants within the traditional framework of fiscal federalism. Discussions of the concept of fiscal equality, or District Power Equalization (DPE), have centered largely on local control and have defined equity as a problem of comparing local jurisdictions. The individual resident and the state government lie on either side of the locality in terms of collective decision-making, yet the perspective of neither side has been adequately considered in past studies. These grants can cause substantial redefinitions of revenue responsibilities among various levels of government; they do far less than is commonly assumed to provide horizontal equity; and they do not relieve problems of location bias. We find that none of these problems, individually or collectively, constitutes an indictment of these grants, but their careful consideration offers a more balanced view of DPE than any yet offered.

