Initial evidence for biased decision-making despite human-centered AI explanations

2021 ◽  
Author(s):  
Nicolas Scharowski ◽  
Florian Brühlmann

In explainable artificial intelligence (XAI) research, explainability is widely regarded as crucial for user trust in artificial intelligence (AI). However, empirical investigations of this assumption are still lacking. There are several proposals for how explainability might be achieved, and what ramifications explanations actually have for humans remains an ongoing debate. In our work in progress, we explored two post-hoc explanation approaches presented in natural language as a means for explainable AI. We examined the effects of human-centered explanations on trust behavior in a financial decision-making experiment (N = 387), captured by weight of advice (WOA). Results showed that AI explanations led to higher trust behavior if participants were advised to decrease an initial price estimate. However, explanations had no effect if the AI recommended increasing the initial price estimate. We argue that these differences in trust behavior may be caused by cognitive biases and heuristics that people retain in their decision-making processes involving AI. So far, XAI has primarily focused on biased data and prejudice due to incorrect assumptions in the machine learning process. The implications of the potential biases and heuristics that humans exhibit when presented with an explanation by an AI have received little attention in the current XAI debate. Both researchers and practitioners need to be aware of such human biases and heuristics in order to develop truly human-centered AI.
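The trust measure named above, weight of advice (WOA), is conventionally computed as the fraction of the distance from a judge's initial estimate toward the advisor's recommendation that the revised estimate covers. The abstract does not state the authors' exact formulation, so the sketch below uses the standard one:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Standard weight-of-advice measure.

    WOA = 0 means the advice was ignored (final == initial);
    WOA = 1 means the advice was fully adopted (final == advice).
    """
    if advice == initial:
        raise ValueError("WOA is undefined when the advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# A participant estimates a price of 100, the AI advises 80,
# and the participant revises to 90: half of the advice was taken.
print(weight_of_advice(initial=100, advice=80, final=90))  # 0.5
```

In the experiment described above, higher WOA after an explanation would indicate higher behavioral trust in the AI's recommendation.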

2020 ◽  
pp. 089443932098012
Author(s):  
Teresa M. Harrison ◽  
Luis Felipe Luna-Reyes

While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them, to show how they are evolving in the current AI context, and to identify the challenges this poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government that provide for trust in AI, and we suggest lines of research to support them.


2006 ◽  
Vol 130 (5) ◽  
pp. 613-616 ◽  
Author(s):  
Roger E. McLendon

Abstract Context.—A significant difficulty that pathologists encounter in arriving at a correct diagnosis relates to how information from various sources is processed and assimilated in context. Objective.—These issues are addressed by the science of cognitive psychology. Although cognitive biases are the focus of a number of studies on medical decision making, few if any focus on the visual sciences. Data Sources.—A recent publication by Richards Heuer Jr, The Psychology of Intelligence Analysis, directly addresses many of the cognitive biases faced by neuropathologists and anatomic pathologists in general. These biases include visual anticipation, first impression, and established mindsets, and they subconsciously influence our critical decision-making processes. Conclusions.—The book points out that, while biases are an inherent property of cognition, the influence of such biases can be recognized and their effects blunted.


Author(s):  
Orhan Kaya ◽  
Halil Ceylan ◽  
Sunghwan Kim ◽  
Danny Waid ◽  
Brian P. Moore

In their pavement management decision-making processes, U.S. state highway agencies are required by the Moving Ahead for Progress in the 21st Century (MAP-21) federal transportation legislation to develop performance-based approaches. One such approach for facilitating pavement management decision making is the use of remaining service life (RSL) models. In this study, a detailed step-by-step methodology for the development of pavement performance and RSL prediction models for flexible and composite (asphalt concrete [AC] over jointed plain concrete pavement [JPCP]) pavement systems in Iowa is described. To develop such RSL models, pavement performance models based on statistics and artificial intelligence (AI) techniques were first developed. While statistically defined pavement performance models were found to be accurate in predicting pavement performance at the project level, AI-based pavement performance models were found to be successful in predicting pavement performance at the network level. Network-level pavement performance models using both statistics- and AI-based approaches were also developed to evaluate the relative success of the two model types for network-level pavement performance modeling. As part of this study, in the development of pavement RSL prediction models, automation tools for future pavement performance predictions were developed and used along with the threshold limits for various pavement performance indicators specified by the Federal Highway Administration. These RSL models will help engineers in decision-making processes at both network and project levels and across different types of pavement management business decisions.
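The RSL construction this abstract describes, projecting a performance indicator forward in time until it crosses an agency threshold, can be sketched generically. The deterioration model and all numbers below are illustrative assumptions, not the authors' fitted models or FHWA values:

```python
def remaining_service_life(predict_condition, current_age, threshold, max_horizon=50):
    """Estimate remaining service life (RSL) in years.

    Counts the years until a predicted performance indicator (e.g., roughness)
    first reaches `threshold`. `predict_condition(age)` can be any previously
    fitted performance model: statistical or AI-based.
    """
    for years_ahead in range(max_horizon + 1):
        if predict_condition(current_age + years_ahead) >= threshold:
            return years_ahead
    return max_horizon  # indicator stays below the threshold over the horizon

# Hypothetical linear deterioration: roughness starts at 60 and grows
# by 5 units per year of pavement age.
roughness_model = lambda age: 60 + 5 * age

# A 10-year-old section with a failure threshold of 170 has 12 years left.
print(remaining_service_life(roughness_model, current_age=10, threshold=170))  # 12
```

In practice the prediction function would be the project-level statistical model or the network-level AI model described above, with thresholds taken from the applicable performance specifications.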


2021 ◽  
Vol 59 (2) ◽  
pp. 123-140
Author(s):  
Milena Galetin ◽  
Anica Milovanović

Considering the possibility of using artificial intelligence to resolve legal disputes is becoming increasingly popular. The authors examine whether software analysis can be applied to resolve a specific issue in investment disputes, namely determining the law applicable to the substance of the dispute, and highlight the application of artificial intelligence in the area of law, especially in predicting the outcome of a dispute. The starting point is a sample of 50 arbitral awards and the results of previously conducted research. It has been confirmed that software analysis can be useful in decision-making processes, but not to the extent that arbitrators could rely on it exclusively. On the other hand, developing an algorithm that would predict the applicable law for different legal issues would require a much larger sample. We also believe that the existence of different legal and factual circumstances in each case, as well as the personality of the arbitrator and arbitral/judicial discretion, are limitations on the application of artificial intelligence in this area.


2022 ◽  
Vol 20 (1) ◽  
pp. 1-20
Author(s):  
Sakhhi Chhabra

In this exploratory study, the main aim was to answer the question: why do people disclose information when they are concerned about their privacy? The reasons that provide a plausible explanation for the privacy paradox have remained conjectural. From the analysis of eighteen in-depth interviews using grounded theory, themes were conceptualized. We found rational and irrational explanations, in terms of cognitive biases and heuristics, that account for the privacy paradox among mobile users. We identified reasons in the context of mobile computing that were not emphasized earlier in the privacy paradox literature, such as the peanut effect, fear of missing out (FoMO), learned helplessness, and neophiliac personality. These results add to the privacy paradox discourse and carry implications for smartphone users, helping them make privacy-related decisions more consciously rather than disclosing information inconsiderately. The results would also help marketers and policymakers design nudges and choice architectures that account for the hurdles in privacy decision making.


Author(s):  
M.P.L. Perera

Adaptive e-learning aims to close the gap between pupil and educator by addressing the needs and skills of individual learners. Artificial intelligence strategies that have the potential to simulate human decision-making processes are therefore important to adaptive e-learning. This paper explores four artificial intelligence techniques, fuzzy logic, neural networks, Bayesian networks, and genetic algorithms, highlighting their contributions to the notion of adaptability in the sense of adaptive e-learning. The use of artificial neural networks to resolve problems in current adaptive e-learning frameworks has been established.


Author(s):  
Eva Thelisson

The research problem being investigated in this article is how to develop governance mechanisms and collective decision-making processes at a global level for Artificial Intelligence systems (AI) and Autonomous systems (AS), which would enhance confidence in AI and AS.

