Does Explainable Artificial Intelligence Improve Human Decision-Making?

Author(s):  
Yasmeen Alufaisan ◽  
Laura Ranee Marusich ◽  
Jonathan Z Bakdash ◽  
Yan Zhou ◽  
Murat Kantarcioglu

Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has typically focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
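To make the design concrete, the sketch below simulates the comparison with made-up data: decision accuracy is computed per condition, and a logistic regression checks whether AI correctness, with or without an explanation, predicts human correctness. The data frame, column names, simulated effects, and model are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Hypothetical trial-level results (one row per human decision); the column
# names and simulated effects are assumptions, not the study's data.
trials = pd.DataFrame({
    "condition": rng.choice(["ai_only", "ai_explained"], size=n),
    "ai_correct": rng.integers(0, 2, size=n),
})
# Simulate humans who mostly follow the AI, so AI correctness drives accuracy.
p_correct = np.where(trials["ai_correct"] == 1, 0.75, 0.45)
trials["human_correct"] = rng.binomial(1, p_correct)

# Decision accuracy by condition: does an explanation add anything
# beyond the bare prediction?
print(trials.groupby("condition")["human_correct"].mean())

# Logistic regression: is AI correctness the dominant predictor of human
# correctness, and does the explanation condition moderate that effect?
model = smf.logit("human_correct ~ ai_correct * condition", data=trials).fit()
print(model.summary())
```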

2021 ◽  
Vol 3 (3) ◽  
pp. 740-770
Author(s):  
Samanta Knapič ◽  
Avleen Malhi ◽  
Rohit Saluja ◽  
Kary Främling

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
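A minimal sketch of how one of the three post hoc methods, LIME, produces a visual explanation for an image prediction is shown below. The random frame and dummy predict_fn are stand-ins for the VCE images and the trained CNN used in the study; SHAP or CIU would be substituted at the same point as alternative explanation generators.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-ins so the sketch runs end to end: a random RGB "capsule frame" and a
# dummy two-class probability function. In the paper these would be a real VCE
# image and the trained CNN's predict function.
rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))

def predict_fn(images):
    # Toy classifier: probability of "abnormal" grows with mean red intensity.
    p = images[..., 0].mean(axis=(1, 2))
    return np.column_stack([1 - p, p])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,            # image to explain
    predict_fn,       # black-box prediction function (the CNN in the paper)
    top_labels=1,     # explain only the top predicted class
    hide_color=0,
    num_samples=500,  # perturbed samples used to fit the local surrogate
)

# Keep the superpixels that most support the top prediction and draw their
# boundaries; this overlay is the visual explanation shown to users.
label = explanation.top_labels[0]
temp, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)
```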


Author(s):  
Sam Hepenstal ◽  
David McNeish

In domains that require high-risk, high-consequence decision-making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper, we examine what it means to provide explainable AI. We report on research findings to propose that explanations should be tailored to the role of the human interacting with the system and to the individual system components, reflecting their different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture the complexity of these needs. Designing explainable AI systems therefore involves careful consideration of context, and within that the nature of both the human and AI components.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.
Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.
Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.
Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.
Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.
Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.
Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
Chris Reed

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.


2022 ◽  
pp. 231-246
Author(s):  
Swati Bansal ◽  
Monica Agarwal ◽  
Deepak Bansal ◽  
Santhi Narayanan

Artificial intelligence is already present in all facets of work life. Its integration into human resources is a necessary process with far-reaching benefits. It has its challenges, but to survive in the current Industry 4.0 environment and prepare for the future Industry 5.0, organisations must embed AI in their HR systems. AI can benefit every HR function, from talent acquisition and onboarding through to off-boarding. Its importance only increases given the needs and career aspirations of Generations Y and Z entering the workforce. Although employees have apprehensions about privacy and job losses, AI, if implemented effectively, is the present and the future. AI will not cost people their jobs; instead, it will require HR professionals to upgrade their skills and spend more of their time in strategic roles. In the end, it is HR that will make the final decisions based on the information obtained from AI tools. A proper mix of human decision-making skills and AI will give organisations the right direction in which to move forward.


Risks ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 137
Author(s):  
Alex Gramegna ◽  
Paolo Giudici

We propose an Explainable AI model that can be employed to explain why a customer buys or abandons a non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technology-based insurance service (Insurtech), making it possible to understand, in real time, the factors that most contribute to customers’ decisions and thereby gain proactive insights into their needs. We prove the validity of our model with an empirical analysis conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results from the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
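A minimal sketch of the described pipeline, with synthetic data standing in for the micro-policy records: fit an XGBoost classifier, compute per-customer Shapley values, then cluster customers in Shapley-value space so that each segment shares a similar set of decision drivers.

```python
import xgboost as xgb
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Synthetic stand-in for the micro-policy purchase data (features plus a
# buy / not-buy label); the study uses customer-level Insurtech records.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           random_state=0)

# Step 1: a highly accurate gradient-boosting classifier.
model = xgb.XGBClassifier(n_estimators=300, max_depth=4).fit(X, y)

# Step 2: Shapley values give each customer a per-feature contribution
# toward the predicted propensity to buy (or to churn).
shap_values = shap.TreeExplainer(model).shap_values(X)

# Step 3: similarity clustering in Shapley-value space groups customers by
# why the model scores them as it does, not just by their raw features.
segments = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(shap_values)
```

Each cluster can then be summarised by its average Shapley profile to describe, in real time, which factors make those customers likely to buy or to churn.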


2021 ◽  
Vol 4 ◽  
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years as a response to the need for increased transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations for XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 snowball sampled) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations in the studies are presented, particularly a lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


Author(s):  
M.P.L. Perera

Adaptive e-learning aims to fill the gap between the pupil and the educator by addressing the needs and skills of individual learners. Artificial intelligence techniques that can simulate human decision-making processes are therefore central to adaptive e-learning. This paper explores four such techniques, Fuzzy Logic, Neural Networks, Bayesian Networks and Genetic Algorithms, highlighting their contributions to the notion of adaptability in the sense of adaptive e-learning. The implementation of Artificial Neural Networks to resolve problems in current adaptive e-learning frameworks has been established.
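As a hedged illustration of one of the techniques discussed (the features, labels, and network size are assumptions, not taken from the paper), a small neural network can map a learner's recent performance to a recommended difficulty step:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical learner records: [fraction of recent answers correct,
# average response time (normalised), hints requested (normalised)].
rng = np.random.default_rng(0)
X = rng.random((500, 3))

# Toy labelling rule: strong, fast learners get harder content (2),
# struggling learners get easier content (0), the rest stay at level (1).
score = X[:, 0] - 0.5 * X[:, 1] - 0.5 * X[:, 2]
y = np.digitize(score, bins=[-0.1, 0.4])   # 0 = easier, 1 = same, 2 = harder

# A small neural network standing in for the adaptation engine.
adapter = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)

# Recommend the next difficulty step for a new learner profile.
next_step = adapter.predict([[0.9, 0.2, 0.1]])   # likely "harder"
```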

