Explainable Artificial Intelligence
Recently Published Documents


TOTAL DOCUMENTS: 324 (FIVE YEARS: 318)

H-INDEX: 12 (FIVE YEARS: 10)

2022 ◽  
Vol 17 (1) ◽  
pp. 16-33
Author(s):  
Mehrin Kiani ◽  
Javier Andreu-Perez ◽  
Hani Hagras ◽  
Silvia Rigato ◽  
Maria Laura Filippetti

Author(s):  
Joaquín Borrego-Díaz ◽  
Juan Galán Páez

Abstract
Alongside the particular need to explain the behavior of black-box artificial intelligence (AI) systems, there is a general need to explain the behavior of any type of AI-based system (explainable AI, XAI), or of any complex system that integrates this kind of technology, given the weight of its economic, political, or industrial impact. The unstoppable development of AI-based applications in sensitive areas has led to what could be seen, from a formal and philosophical point of view, as a kind of crisis in the foundations, which makes it necessary both to provide models of the fundamentals of explainability and to discuss the advantages and disadvantages of different proposals. The need for foundations is also linked to the permanent challenge that the notion of explainability represents in the Philosophy of Science. The paper aims to elaborate a general theoretical framework for discussing the foundational characteristics of explaining, as well as how solutions (events) would be justified (explained). The approach, epistemological in nature, is based on a phenomenological reconstruction of complex systems (which encompasses complex AI-based systems). The formalized perspective is close to ideas from argumentation and from induction (as learning). The soundness and limitations of the approach are assessed from the Knowledge Representation and Reasoning paradigm and, in particular, from a Computational Logic point of view. With regard to the latter, the proposal is intertwined with several related notions of explanation coming from the Philosophy of Science.
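As a concrete, purely illustrative reading of the Computational Logic angle mentioned in the abstract: logic-based accounts often model the explanation of an observed event as abduction, i.e., finding a minimal set of hypotheses that, together with background rules, entails the observation. The Python sketch below is a toy version of that general idea only; the rules, the abducibles, and the minimal-set criterion are this example's assumptions, not the authors' formalism.

# Toy illustration (not the paper's formalism): one common computational-logic
# reading of "explanation" is abduction -- find a minimal set of hypotheses
# that, together with background rules, entails the observed event.
from itertools import combinations

RULES = {                       # hypothetical background theory: head <- body
    "alarm": {"intrusion"},
    "intrusion": {"door_open", "night"},
}
ABDUCIBLES = ["door_open", "night", "power_cut"]   # candidate assumptions

def entails(facts, goal):
    """Forward-chain RULES from `facts` and test whether `goal` is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES.items():
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def explain(observation):
    """Return the smallest sets of abducibles that entail the observation."""
    for k in range(1, len(ABDUCIBLES) + 1):
        hits = [set(c) for c in combinations(ABDUCIBLES, k) if entails(c, observation)]
        if hits:
            return hits
    return []

print(explain("alarm"))   # -> [{'door_open', 'night'}] (set order may vary)

Under this toy reading, the "justification" of an event is exactly the hypothesis set returned: it is minimal, and it suffices to derive the observation from the background theory.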


Author(s):  
Lucas M. Thimoteo ◽  
Marley M. Vellasco ◽  
Jorge Amaral ◽  
Karla Figueiredo ◽  
Cátia Lie Yokoyama ◽  
...  

2022 ◽  
Vol 71 (2) ◽  
pp. 3853-3867
Author(s):  
Anwer Mustafa Hilal ◽  
Imène ISSAOUI ◽  
Marwa Obayya ◽  
Fahd N. Al-Wesabi ◽  
Nadhem NEMRI ◽  
...  

2022 ◽  
Author(s):  
Babak Abedin ◽  
Mathias Klier ◽  
Christian Meske ◽  
Fethi Rabhi

2022 ◽  
pp. 146-164
Author(s):  
Duygu Bagci Das ◽  
Derya Birant

Explainable artificial intelligence (XAI) is a concept that has emerged and become popular in recent years, and the interpretability of machine learning (ML) models has likewise been drawing attention. Human activity classification (HAC) systems, however, still lack interpretable approaches. In this study, an approach called eXplainable HAC (XHAC) was proposed, in which data exploration, model structure explanation, and prediction explanation of the ML classifiers for HAC are examined to improve the explainability of the HAC model's components, such as sensor types and their locations. For this purpose, various internet of things (IoT) sensors were considered individually, including the accelerometer, gyroscope, and magnetometer, and the locations of these sensors (i.e., ankle, arm, and chest) were also taken into account. The important features were explored, and the effect of the window size on classification performance was investigated. According to the obtained results, the proposed approach makes HAC processes more explainable compared to black-box ML techniques.
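The pipeline the abstract describes (windowing per-sensor IoT streams, training a classifier, then ranking features by sensor type and location) can be sketched roughly as follows. Everything in this snippet, including the synthetic data, the window size, and the use of a random forest with impurity-based feature importances, is an assumption for illustration, not the authors' implementation.

# Illustrative sketch only: the XHAC pipeline is not reproduced here, so this
# toy example uses synthetic data and scikit-learn to show the general idea of
# windowing IoT sensor streams and inspecting per-sensor feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

SENSORS = ["accelerometer", "gyroscope", "magnetometer"]  # sensor types from the abstract
LOCATIONS = ["ankle", "arm", "chest"]                      # placements from the abstract
WINDOW = 128                                               # hypothetical window size (samples)

def window_features(stream, window=WINDOW):
    """Split a 1-D signal into windows and compute simple per-window statistics."""
    n = len(stream) // window
    w = stream[: n * window].reshape(n, window)
    return np.column_stack([w.mean(axis=1), w.std(axis=1), np.abs(w).max(axis=1)])

# Synthetic streams: one channel per (sensor, location) pair, two activity classes.
n_windows = 200
labels = rng.integers(0, 2, n_windows)
X_parts, names = [], []
for sensor in SENSORS:
    for loc in LOCATIONS:
        # Class-dependent amplitude so the classifier has a real signal to find.
        stream = rng.normal(0, 1 + labels.repeat(WINDOW), n_windows * WINDOW)
        X_parts.append(window_features(stream))
        names += [f"{sensor}_{loc}_{stat}" for stat in ("mean", "std", "absmax")]
X = np.hstack(X_parts)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# "Model structure explanation": rank features to see which sensor/location
# combinations the classifier actually relies on.
ranked = sorted(zip(names, clf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")

Grouping the ranked importances by their sensor and location prefixes gives the kind of explanation the abstract points at: it shows which modalities and placements drive the predictions, rather than treating the classifier as a black box.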

