A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems

2021 ◽  
Vol 11 (3-4) ◽  
pp. 1-45
Author(s):  
Sina Mohseni ◽  
Niloofar Zarei ◽  
Eric D. Ragan

The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and for consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, we thoroughly reviewed XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, and present a categorization of XAI design goals and evaluation methods. Our categorization maps design goals for different XAI user groups to their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.

Author(s):  
Krzysztof Fiok ◽  
Farzad V Farahani ◽  
Waldemar Karwowski ◽  
Tareq Ahram

Researchers and software users benefit from the rapid growth of artificial intelligence (AI) to an unprecedented extent in various domains where automated intelligent action is required. However, as they continue to engage with AI, they also begin to understand the limitations and risks associated with ceding control and decision-making to not-always-transparent artificial computer agents. Understanding "what is happening in the black box" becomes feasible with explainable AI (XAI) methods designed to mitigate these risks and introduce trust into human-AI interactions. Our study reviews the essential capabilities, limitations, and desiderata of XAI tools developed over recent years and reviews the history of XAI and AI in education (AIED). We present different approaches to AI and XAI from the viewpoint of researchers focused on AIED in comparison with researchers focused on AI and machine learning (ML). We conclude that both groups of interest desire increased efforts to obtain improved XAI tools; however, the groups identify different target user groups and expectations regarding XAI features and provide different examples of possible achievements. We summarize these viewpoints and provide guidelines for scientists looking to incorporate XAI into their own work.


2021 ◽  
pp. 164-184
Author(s):  
Saiph Savage ◽  
Carlos Toxtli ◽  
Eber Betanzos-Torres

The artificial intelligence (AI) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Part of this work focuses on labelling data for machine learning models or having workers complete tasks that AI alone cannot do. These workers are usually known as 'crowd workers': part of a large distributed crowd that works jointly (but separately) on the tasks. Because they are often invisible to end users, crowd workers are frequently paid below minimum wage and have limited career growth. In this chapter, we draw upon the field of human-computer interaction to provide research methods for studying and empowering crowd workers. We present our Computational Worker Leagues, which enable workers to work towards their desired professional goals and also supply quantitative information about crowdsourcing markets. This chapter demonstrates the benefits of this approach and highlights important factors to consider when researching the experiences of crowd workers.


2002 ◽  
Vol 3 (1) ◽  
pp. 28-31 ◽  
Author(s):  
Francisco Azuaje

Research on biological data integration has traditionally focused on the development of systems for the maintenance and interconnection of databases. In the next few years, public and private biotechnology organisations will expand their actions to promote the creation of a post-genome semantic web. It has commonly been accepted that artificial intelligence and data mining techniques may support the interpretation of huge amounts of integrated data. At the same time, these research disciplines are contributing to the creation of content markup languages and sophisticated programs able to exploit the constraints and preferences of user domains. This paper discusses a number of issues concerning intelligent systems for the integration of bioinformatic resources.


2021 ◽  
Vol 4 ◽  
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years in response to the need for greater transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches to and limitations of XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 identified by snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the studies are presented, particularly the lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


2020 ◽  
Vol 8 ◽  
pp. 61-72
Author(s):  
Kara Combs ◽  
Mary Fendley ◽  
Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their "black-box" nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human factors perspective. This paper presents a repeatable human factors heuristic analysis for XAI, with a demonstration on four decision tree classifier algorithms.
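The abstract above singles out decision trees as a case where a human-interpretable decision process can be extracted directly from the model. As a minimal sketch of that idea (not the paper's heuristic analysis; the dataset and depth limit are illustrative choices), scikit-learn's `export_text` renders a fitted tree's splits as an indented if/else rule list:

```python
# Minimal sketch: extracting a human-readable rule set from a decision tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# export_text prints the learned splits as nested threshold rules, which is
# the kind of directly inspectable decision process the abstract contrasts
# with "black-box" models.
rules = export_text(clf, feature_names=list(iris.feature_names))
print(rules)
```

Rule lists like this are a common starting point for human factors evaluation, since each prediction can be traced through a short chain of threshold comparisons.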


Author(s):  
Людмила Васильевна Массель

The article analyzes a number of publications on this topic and summarizes the results of discussions at the conference "Knowledge, Ontology, Theory" (Novosibirsk, November 8-12, 2021) and the round table "Artificial Intelligence in the Energy Sector" held at ISEM SB RAS (December 22, 2021). The concepts considered include artificial general intelligence (AGI), strong and narrow AI, explainable AI, and trustworthy AI. The reasons for the "hype" around machine learning, as well as its shortcomings, are analyzed, and cloud computing technologies are compared with edge computing. The concept of a "smart" digital twin, integrating mathematical, informational, and ontological models with AI technologies, is defined. Finally, the ethical risks of AI and the prospects for applying AI methods and technologies in the energy sector are considered.


Author(s):  
Fernando Luís-Ferreira ◽  
João Sarraipa ◽  
Jorge Calado ◽  
Joana Andrade ◽  
Daniel Rodrigues ◽  
...  

Abstract Artificial Intelligence is driving a revolution in the most diverse domains of computational services and user interaction. Data collected in large quantities is becoming useful for feeding intelligent systems that analyse, learn, and provide insights to decision support systems. Machine learning and algorithms are essential for extracting features and reasoning over collected data so that it becomes useful and preventive, exposing discoveries and augmenting knowledge about systems and processes. Human-driven applications, such as those related to physiological assessment and user experience, are possible especially in the health domain, particularly in supporting patients and the community. The work described here covers different aspects in which Artificial Intelligence can help citizens and encompasses a series of devices and services that were developed and tested for the benefit of a particular group of citizens. The target population is people living with some form of dementia, but the proposed solutions are also applicable to other elderly citizens or even children who need assistance and protection from risks.


The human brain is an extraordinary machine. Its ability to process information and adapt to circumstances by reprogramming itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. This has given rise to machine learning, intelligent systems, and robotics. Robots and AI might still seem the preserve of blockbuster science fiction movies and documentaries, but there is no doubt the world is changing. This chapter explores the origins, attitudes, and perceptions of robotics and the multiple types of robots that exist today. Perhaps most importantly, it focuses on ethical and societal concerns over the question: Are we heading for a brave new world, or a science fiction horror show where AI and robots displace or, perhaps more worryingly, replace humans?


Author(s):  
S. Matthew Liao

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


2021 ◽  
Vol 8 (2) ◽  
pp. 1-2
Author(s):  
Julkar Nine

Vision-based systems have become an integral part of autonomous driving. The autonomous-driving industry has made large progress in environment perception as a result of improvements to vision-based systems. As the industry moves up the ladder of automation, safety features are coming more and more into focus. Different safety measures have to be taken into consideration for different driving situations. One of the major requirements for the highest level of autonomy is the ability to understand both internal and external situations. Most research on vision-based systems focuses on image processing and artificial intelligence techniques such as machine learning and deep learning. Because the current generation of technology is that of the "connected world", there is no longer a lack of data. With the introduction of the Internet of Things, most of these connected devices are able to share and transfer data. Vision-based techniques depend heavily on such vision data.

