Automated Reasoning for Explainable Artificial Intelligence

10.29007/4b7h ◽  
2018 ◽  
Author(s):  
Maria Paola Bonacina

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This short paper is a non-technical position statement that aims at prompting a discussion of the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligence. We suggest that the emergence of the new paradigm of XAI, which stands for eXplainable Artificial Intelligence, is an opportunity for rethinking these relationships, and that XAI may offer a grand challenge for future research on automated reasoning.

2021 ◽  
Vol 3 (3) ◽  
pp. 615-661
Author(s):  
Giulia Vilone ◽  
Luca Longo

Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably across users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI across fields and by new regulations.
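As a rough illustration of how an output-format dimension can guide method selection, the sketch below represents a toy taxonomy as a Python mapping and filters it by the desired explanation format. The categories and method names are illustrative placeholders, not the authors' actual taxonomy.

```python
# A minimal sketch (not the review's actual classification system) of a
# hierarchical taxonomy of XAI methods keyed by output format, queried by the
# format a practitioner needs. All entries here are illustrative assumptions.

XAI_TAXONOMY = {
    "numerical": ["feature-importance scores", "saliency values"],
    "rule-based": ["decision rules", "anchors"],
    "textual": ["natural-language rationales"],
    "visual": ["saliency maps", "partial-dependence plots"],
    "mixed": ["counterfactual examples with supporting plots"],
}

def methods_for_format(output_format: str) -> list[str]:
    """Return candidate explanation types for a desired output format."""
    return XAI_TAXONOMY.get(output_format, [])

if __name__ == "__main__":
    # e.g. an audit setting that requires rule-based explanations
    print(methods_for_format("rule-based"))
```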


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
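To make the idea of "experimenting with machines" concrete, here is a minimal sketch of treating a trained model like an experimental participant: present two stimulus conditions that differ only in one manipulated factor and compare the responses. The toy model and stimuli are assumptions for illustration, not the authors' tutorial.

```python
# A minimal sketch: query a black-box model under controlled stimulus
# conditions and compare its responses, analogous to a behavioural experiment.
# The "model" and stimulus distributions below are placeholders.

import numpy as np

def run_condition(model_predict, stimuli: np.ndarray) -> np.ndarray:
    """Collect the model's responses to one stimulus condition."""
    return model_predict(stimuli)

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical model: responds to the mean intensity of a stimulus.
    model_predict = lambda x: (x.mean(axis=1) > 0).astype(int)

    # Two conditions differing only in the manipulated factor (mean shift).
    control = rng.normal(loc=0.0, size=(100, 16))
    treatment = rng.normal(loc=0.5, size=(100, 16))

    r_control = run_condition(model_predict, control)
    r_treatment = run_condition(model_predict, treatment)

    # The difference in response rates estimates the effect of the manipulation.
    print(r_control.mean(), r_treatment.mean())
```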


2021 ◽  
Vol 12 (1) ◽  
pp. 101-112
Author(s):  
Kishore Sugali ◽  
Chris Sprunger ◽  
Venkata N Inukollu

The history of Artificial Intelligence and Machine Learning dates back to the 1950s. In recent years, applications that implement AI and ML technology have grown in popularity. As with traditional development, software testing is a critical component of an efficient AI/ML application. However, the development methodology used in AI/ML varies significantly from traditional development, and these variations give rise to numerous software testing challenges. This paper aims to identify and explain some of the biggest challenges that software testers face in dealing with AI/ML applications. This study has key implications for future research: each of the challenges outlined in this paper is ideal for further investigation and has great potential to shed light on more productive software testing strategies and methodologies that can be applied to AI/ML applications.


2021 ◽  
Vol 4 ◽  
Author(s):  
Mustafa Y. Topaloglu ◽  
Elisabeth M. Morrell ◽  
Suraj Rajendran ◽  
Umit Topaloglu

Artificial Intelligence and its subdomain, Machine Learning (ML), have shown the potential to make an unprecedented impact in healthcare. Federated Learning (FL) has been introduced to alleviate some of the limitations of ML, particularly the capability to train on larger datasets for improved performance, which is usually cumbersome for inter-institutional collaboration due to existing patient protection laws and regulations. Moreover, FL may also play a crucial role in circumventing ML’s exigent bias problem by accessing underrepresented groups’ data spanning geographically distributed locations. In this paper, we have discussed three FL challenges, namely: privacy of the model exchange, ethical perspectives, and legal considerations. Lastly, we have proposed a model that could aid in assessing data contributions of an FL implementation. In light of the expediency and adaptability of using the Sørensen–Dice Coefficient over the more limited (e.g., horizontal FL) and computationally expensive Shapley Values, we sought to demonstrate a new paradigm that, we hope, will become invaluable for sharing any profit and responsibilities that may accompany an FL endeavor.
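For readers unfamiliar with the measure, the following sketch computes the Sørensen–Dice coefficient between two sets. How the sets are derived from each site's data, and how the scores feed into profit and responsibility sharing, are design choices of the proposed model and are only assumed here; the hospital diagnosis codes are hypothetical.

```python
# A minimal sketch of the Sørensen–Dice coefficient, the building block the
# authors favour over Shapley values for weighing data contributions in
# federated learning. The example inputs are hypothetical.

def dice_coefficient(a: set, b: set) -> float:
    """Sørensen–Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

if __name__ == "__main__":
    # Hypothetical diagnosis codes observed at two participating hospitals.
    site_a = {"I10", "E11", "J45", "N18"}
    site_b = {"I10", "E11", "C34"}
    print(f"Dice similarity: {dice_coefficient(site_a, site_b):.2f}")
```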


Membranes ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 672
Author(s):  
Md. Ashrafuzzaman

Ion channels are linked to important cellular processes. For more than half a century, we have been learning various structural and functional aspects of ion channels using biological, physiological, biochemical, and biophysical principles and techniques. Recently, bioinformaticians and biophysicists with the necessary expertise and interest in computer science techniques, including versatile algorithms, have started covering a multitude of physiological aspects, especially the evolution, mutations, and genomics of functional channels and channel subunits. In these focused research areas, artificial intelligence (AI), machine learning (ML), and deep learning (DL) algorithms and associated models have become very popular. Drawing on available articles and information, this review provides an introduction to this novel research trend. Ion channels are usually understood in terms of structural and functional perspectives, gating mechanisms, transport properties, channel protein mutations, etc. Focused research on ion channels and related findings over many decades has accumulated a huge amount of data, which may be utilized in a specialized scientific manner to rapidly draw conclusions about specific aspects of channels. AI, ML, and DL techniques and models may serve as helpful tools. This review aims at explaining the ways we may use bioinformatics techniques and thus draw a clearer picture of ion channel features.


2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, are not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential and obstacles of managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion regarding the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organisational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


Author(s):  
Jessica Taylor ◽  
Eliezer Yudkowsky ◽  
Patrick LaVictoire ◽  
Andrew Critch

This chapter surveys eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.
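One common way the "not have many side effects" idea is formalised in the wider literature, used here purely as a hedged illustration rather than the chapter's own proposal, is to subtract from the task reward a penalty proportional to how far the reached state deviates from a baseline state.

```python
# A minimal sketch of an impact-penalised objective: R'(s) = R(s) - lambda * d(s, s_baseline).
# The environment, baseline, distance measure, and penalty weight are
# illustrative assumptions, not the chapter's proposal.

import numpy as np

def impact_penalised_reward(task_reward: float,
                            state: np.ndarray,
                            baseline_state: np.ndarray,
                            lam: float = 0.1) -> float:
    """Task reward minus a side-effect penalty (L1 deviation from baseline)."""
    side_effect = np.abs(state - baseline_state).sum()
    return task_reward - lam * side_effect

if __name__ == "__main__":
    baseline = np.array([1.0, 1.0, 1.0])   # world state if the agent had done nothing
    outcome = np.array([1.0, 0.0, 1.0])    # goal achieved, but one feature disturbed
    print(impact_penalised_reward(task_reward=1.0, state=outcome, baseline_state=baseline))
```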


Author(s):  
Marco Muselli

One of the most relevant problems in artificial intelligence is allowing a synthetic device to perform inductive reasoning, i.e. to infer a set of rules consistent with a collection of data pertaining to a given real world problem. A variety of approaches, arising in different research areas such as statistics, machine learning, neural networks, etc., have been proposed during the last 50 years to deal with the problem of realizing inductive reasoning.
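As a concrete, if simplistic, instance of inferring rules consistent with data, the sketch below induces a small decision tree and prints the resulting rule set; the choice of learner and dataset is an assumption for illustration only, standing in for the many statistical, machine learning, and neural approaches mentioned above.

```python
# A minimal sketch of inductive reasoning in the sense described above:
# infer a set of human-readable rules consistent with a collection of data.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Fit a shallow tree so that the induced rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the induced rule set: conditions on input features that are
# (approximately) consistent with the observed data.
print(export_text(tree, feature_names=load_iris().feature_names))
```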

