XMAP: eXplainable mapping analytical process

Author(s):  
Su Nguyen ◽  
Binh Tran

Abstract As the number of artificial intelligence (AI) applications increases rapidly and more people are affected by AI's decisions, there is a real need for novel AI systems that can deliver both accuracy and explanations. To address this need, this paper proposes a new approach called eXplainable Mapping Analytical Process (XMAP). Unlike existing work in explainable AI, XMAP is highly modularised, and the interpretability of each step can be easily obtained and visualised. A number of core algorithms are developed in XMAP to capture the distributions and topological structures of data, define contexts that emerge from the data, and build effective representations for classification tasks. The experiments show that XMAP can provide useful and interpretable insights across analytical steps. For the binary classification task, its predictive performance is highly competitive with advanced machine learning algorithms in the literature. On some large datasets, XMAP can even outperform black-box algorithms without losing its interpretability.
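
As a rough illustration of the modular, inspect-every-step design the abstract describes, the Python sketch below chains named stages and keeps every intermediate output for inspection and plotting. The InspectablePipeline class, the Gaussian-mixture "context" stage, and the data are hypothetical stand-ins, not XMAP's actual algorithms.

```python
# A minimal sketch, assuming scikit-learn; stage names and APIs are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

class InspectablePipeline:
    """Chain of named stages; every intermediate result is kept for inspection."""
    def __init__(self, stages):
        self.stages = stages          # list of (name, callable) pairs
        self.intermediates = {}       # name -> output, for explanation/plots

    def fit_transform(self, X):
        out = X
        for name, stage in self.stages:
            out = stage(out)
            self.intermediates[name] = out
        return out

# Stand-in for XMAP's steps: a mixture model to capture the data distribution
# and define "contexts", whose posterior probabilities then serve as an
# interpretable representation for a transparent final classifier.
gmm = GaussianMixture(n_components=5, random_state=0)
pipe = InspectablePipeline([("contexts", lambda X: gmm.fit(X).predict_proba(X))])

X = np.random.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)
Z = pipe.fit_transform(X)                 # context-membership features
clf = LogisticRegression().fit(Z, y)      # each step above remains inspectable
```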

2020 ◽  
Vol 8 ◽  
pp. 61-72
Author(s):  
Kara Combs ◽  
Mary Fendley ◽  
Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches that extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human-factors perspective. This paper presents a repeatable human-factors heuristic analysis for XAI, with a demonstration on four decision tree classifier algorithms.
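
To make the object of study concrete, here is a minimal sketch (assuming scikit-learn and its built-in iris data; the paper's four algorithms and datasets are not specified here) that trains one decision tree classifier and exports the human-readable rule trace a human-factors heuristic analysis would evaluate.

```python
# Minimal sketch: extract the human-readable decision process of a tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the if/then rule trace that a human would inspect.
print(export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                       "petal_len", "petal_wid"]))
```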


2021 ◽  
Vol 11 (2) ◽  
pp. 158-163
Author(s):  
Leixin Shi ◽  
Hongji Xu ◽  
Beibei Zhang ◽  
Xiaojie Sun ◽  
Juan Li ◽  
...  

Human Activity Recognition (HAR) is one of the main research fields in pattern recognition. In recent years, machine learning and deep learning have played important roles in Artificial Intelligence (AI) and have proven very successful in HAR classification tasks. However, mainstream frameworks have two drawbacks: 1) all inputs are processed with the same parameters, which can cause the framework to incorrectly assign an unrealistic label to an object; 2) the frameworks lack generality across application scenarios. In this paper, an adaptive multi-state pipe framework based on Set Pair Analysis (SPA) is presented, in which pipes are divided into three types: the main pipe, sub-pipes, and the fusion pipe. In the main pipe, the input of a classification task is preprocessed by SPA to obtain the Membership Belief Matrix (MBM). Sub-pipes then process the shunted inputs according to their membership beliefs, and the results are finally merged in the fusion pipe. To test the performance of the proposed framework, we search for the configuration that yields optimal performance and evaluate the effectiveness of the new approach on the popular benchmark dataset WISDM. Experimental results demonstrate that the proposed framework achieves good performance, with a test error of 1.4%.
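
The main-pipe / sub-pipe / fusion-pipe idea can be sketched as a routing scheme. In the sketch below the belief computation is a crude placeholder rather than Set Pair Analysis, and the sub-pipes are stubs; it only illustrates how a Membership Belief Matrix could shunt samples to specialised models whose outputs are then merged.

```python
# Hypothetical sketch of MBM-based routing; not the authors' implementation.
import numpy as np

def main_pipe(X):
    # Placeholder for SPA preprocessing: one belief per sample per sub-pipe.
    beliefs = np.abs(X).mean(axis=1, keepdims=True)
    return np.hstack([beliefs, 1.0 - beliefs])          # MBM, shape (n, 2)

def sub_pipe_a(x): return 0    # stand-ins for specialised classifiers
def sub_pipe_b(x): return 1

def fusion_pipe(X):
    mbm = main_pipe(X)
    routes = mbm.argmax(axis=1)                         # shunt by belief
    return [sub_pipe_a(x) if r == 0 else sub_pipe_b(x)  # merge the results
            for x, r in zip(X, routes)]

preds = fusion_pipe(np.random.rand(8, 3))
```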


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dennis Wagner ◽  
Dominik Heider ◽  
Georges Hattab

Abstract Predicting if a set of mushrooms is edible or not corresponds to the task of classifying them into two groups, edible or poisonous, on the basis of a classification rule. To support this binary task, we have collected the largest and most comprehensive attribute-based data available. In this work, we detail the creation, curation and simulation of a data set for binary classification. Thanks to natural language processing, the primary data are based on a text book for mushroom identification and contain 173 species from 23 families. The secondary data comprise simulated or hypothetical entries that are structurally comparable to the 1987 data and serve as pilot data for classification tasks. We evaluated different machine learning algorithms, namely naive Bayes, logistic regression, linear discriminant analysis (LDA), and random forests (RF). We found that RF provided the best results, with a five-fold cross-validation accuracy and F2-score of 1.0 (μ = 1, σ = 0). The results of our pilot are conclusive and indicate that our data are not linearly separable, unlike the 1987 data, which showed good results using a linear decision boundary with LDA. Our data set contains 23 families and is the largest available. We further provide a fully reproducible workflow and release the data under the FAIR principles.
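
The reported evaluation protocol (random forest, five-fold cross-validation, F2-score) can be outlined with scikit-learn. In this sketch the feature matrix and labels are random placeholders standing in for the published mushroom attributes, so the scores are meaningless; only the protocol matches the abstract.

```python
# Sketch of the evaluation protocol; data are placeholders, not the real set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 20))                   # placeholder mushroom attributes
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # placeholder edible/poisonous labels

f2 = make_scorer(fbeta_score, beta=2)       # F2 weights recall over precision
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, cv=5, scoring=f2)
print(scores.mean(), scores.std())          # the paper reports mu=1, sigma=0
```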


Author(s):  
Gagan Bansal ◽  
Besmira Nushi ◽  
Ece Kamar ◽  
Daniel S. Weld ◽  
Walter S. Lasecki ◽  
...  

AI systems are being deployed to support human decision making in high-stakes domains such as healthcare and criminal justice. In many cases, the human and the AI form a team, in which the human makes decisions after reviewing the AI’s inferences. A successful partnership requires that the human develop insights into the performance of the AI system, including its failures. We study the influence of updates to an AI system in this setting. While updates can increase the AI’s predictive performance, they may also lead to behavioral changes that are at odds with the user’s prior experience of, and confidence in, the AI’s inferences. We show that updates that increase AI performance may actually hurt team performance. We introduce the notion of the compatibility of an AI update with prior user experience and present methods for studying the role of compatibility in human-AI teams. Empirical results on three high-stakes classification tasks show that current machine learning algorithms do not produce compatible updates. We propose a re-training objective that improves the compatibility of an update by penalizing new errors. The objective gives full control over the performance/compatibility tradeoff across different datasets, enabling more compatible yet accurate updates.
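
A minimal NumPy sketch of the re-training idea, under the assumption that "penalizing new errors" means upweighting the loss on examples the previous model classified correctly; the penalty form and the weight lam are illustrative, not the authors' exact objective.

```python
# Illustrative compatibility-aware loss; form and weighting are assumptions.
import numpy as np

def compatibility_loss(y, p2, pred1, lam=1.0):
    """y: true binary labels; p2: updated model's predicted probabilities;
    pred1: previous model's hard predictions. Cross-entropy plus an extra
    penalty on samples the old model got right, discouraging new errors."""
    eps = 1e-9
    ce = -(y * np.log(p2 + eps) + (1 - y) * np.log(1 - p2 + eps))
    old_correct = (pred1 == y).astype(float)      # 1 where the old model was right
    return np.mean(ce + lam * old_correct * ce)   # lam trades accuracy vs. compat

# Sweeping lam traces the performance/compatibility tradeoff described above.
```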


Society ◽  
2021 ◽  
Author(s):  
Karen Elliott ◽  
Rob Price ◽  
Patricia Shaw ◽  
Tasos Spiliotopoulos ◽  
Magdalene Ng ◽  
...  

Abstract In the digital era, we witness the increasing use of artificial intelligence (AI) to solve problems while improving productivity and efficiency. Yet costs are inevitably involved in delegating power to algorithmically based systems, some of whose workings are opaque and unobservable and are thus termed the “black box”. Central to understanding the “black box” is acknowledging that the algorithm is not mendaciously undertaking this action; it is simply exploiting the recombination afforded by machine learning algorithms at scale. Yet an algorithm of arbitrary precision can easily reconstruct sensitive characteristics and make life-changing decisions, particularly in financial services (credit scoring, risk assessment, etc.), and it can be difficult to establish whether this was done in a fair manner reflecting the values of society. If we permit AI to make life-changing decisions, what are the opportunity costs, data trade-offs, and implications for social, economic, technical, legal, and environmental systems? We find that over 160 ethical AI principles exist, urging organisations to act responsibly to avoid causing digital societal harms. This maelstrom of guidance, none of which is compulsory, serves to confuse rather than guide. We need to think carefully about how we implement these algorithms and delegate decisions and data usage in the absence of human oversight and AI governance. The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism for demystifying governance complexity and establishing an equitable digital society.


Author(s):  
Alja Videtič Paska ◽  
Katarina Kouter

In psychiatry, compared with other medical fields, there is a pressing need to identify biological markers that would complement the current clinical interview, enable more objective and faster diagnosis, and allow accurate monitoring of treatment response and remission. Current technological developments enable the analysis of various biological markers at high-throughput scale and reasonable cost, and ‘omics’ studies are therefore entering psychiatric research. However, big data demand a whole new set of skills in data processing before clinically useful information can be extracted. So far, classical approaches to data analysis have contributed little to the identification of biomarkers in psychiatry, but these extensive data could be taken to a higher level if artificial intelligence, in the form of machine learning algorithms, were applied. Few studies on machine learning in psychiatry have been published, but this handful already shows the potential to build a screening portfolio of biomarkers for different psychopathologies, including suicide.


2019 ◽  
Vol 11 (3) ◽  
pp. 29-37
Author(s):  
Mateusz Kot ◽  
Grzegorz Leszczyński

Abstract This study focuses on the development of a specific type of Intelligent Agent: Business Virtual Assistants (BVA). The paper aims to identify the scope of collaboration between users and providers in the process of agent development and to define the impact that user interpretations of a BVA agent have on this collaboration. The study conceptualises the collaboration between providers and users in the BVA development process, drawing on the concepts of collaborative innovation development and sensemaking. The empirical part presents preliminary exploratory in-depth interviews conducted with CEOs of BVA providers, analysed using the scheme offered by Miles and Huberman (1994). The main results show the scope of the collaboration between BVA users and providers in the BVA development process. User engagement is crucial in the development of BVA agents, since these agents rely on machine learning algorithms. User interpretation through sensemaking influences the process, as users’ attitudes guide their behaviour. Beyond that, users have to adjust to this new kind of market entity and learn how to use it in line with savoir-vivre rules. This paper suggests the need for a new approach to the collaborative development of innovation when Artificial Intelligence is involved.


2019 ◽  
Vol 19 (01) ◽  
pp. 2-13 ◽  
Author(s):  
Ronald Yu ◽  
Gabriele Spina Alì

Abstract The artificial intelligence revolution is happening and is going to drastically re-shape legal research in both the private sector and academia. AI research tools present several advantages over traditional research methods. They allow the analysis and review of large datasets (‘Big Data’) and can identify patterns that are imperceptible to human researchers. However, the wonders of AI legal research are not without perils. Because of their complexity, AI systems can escape the control and understanding of their operators and programmers. Therefore, especially when run by researchers with an insufficient IT background, computational AI research may skew analyses or result in flawed research. On these premises, the main goals of this paper, written by Ronald Yu and Gabriele Spina Alì, are to analyse some of the factors that can jeopardize the reliability of AI-assisted legal research and to review some of the solutions that can mitigate this situation.


Metabolomics ◽  
2019 ◽  
Vol 15 (12) ◽  
Author(s):  
Kevin M. Mendez ◽  
Stacey N. Reinke ◽  
David I. Broadhurst

Abstract
Introduction: Metabolomics is increasingly being used in the clinical setting for disease diagnosis, prognosis and risk prediction. Machine learning algorithms are particularly important in the construction of multivariate metabolite prediction models. Historically, partial least squares (PLS) regression has been the gold standard for binary classification. Nonlinear machine learning methods such as random forests (RF), kernel support vector machines (SVM) and artificial neural networks (ANN) may be better suited to modelling possible nonlinear metabolite covariance, and may thus provide better predictive models.
Objectives: We hypothesise that for binary classification using metabolomics data, nonlinear machine learning methods will provide superior generalised predictive ability when compared to linear alternatives, in particular the current gold standard, PLS discriminant analysis.
Methods: We compared the general predictive performance of eight archetypal machine learning algorithms across ten publicly available clinical metabolomics data sets. The algorithms were implemented in the Python programming language. All code and results have been made publicly available as Jupyter notebooks.
Results: There was only a marginal improvement in predictive ability for SVM and ANN over PLS across all data sets. RF performance was comparatively poor. Out-of-bag bootstrap confidence intervals provided a measure of uncertainty of model prediction, showing that the quality of the metabolomics data was a bigger influence on generalised performance than model choice.
Conclusion: The size of the data set, and the choice of performance metric, had a greater influence on generalised predictive performance than the choice of machine learning algorithm.
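
The comparison can be outlined in Python with scikit-learn. Here PLS-DA is approximated as PLS regression on the binary label with a 0.5 decision threshold, and the data are synthetic placeholders rather than the ten clinical metabolomics sets, so only the shape of the comparison is shown.

```python
# Sketch of the linear-vs-nonlinear comparison; data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 60))                 # placeholder metabolite matrix
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # placeholder case/control labels

# Nonlinear candidate: kernel SVM, scored by 5-fold cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("SVM CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())

# Linear baseline: PLS-DA as thresholded PLS regression on the 0/1 label.
pls = PLSRegression(n_components=2).fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)
print("PLS-DA training accuracy:", (pred == y).mean())
```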


Author(s):  
Krzysztof Fiok ◽  
Farzad V Farahani ◽  
Waldemar Karwowski ◽  
Tareq Ahram

Researchers and software users benefit from the rapid growth of artificial intelligence (AI) to an unprecedented extent in various domains where automated intelligent action is required. However, as they continue to engage with AI, they also begin to understand the limitations and risks of ceding control and decision-making to artificial computer agents that are not always transparent. Understanding “what is happening in the black box” becomes feasible with explainable AI (XAI) methods, which are designed to mitigate these risks and introduce trust into human-AI interactions. Our study reviews the essential capabilities, limitations, and desiderata of XAI tools developed over recent years, and traces the history of XAI and of AI in education (AIED). We present different approaches to AI and XAI from the viewpoint of researchers focused on AIED, in comparison with researchers focused on AI and machine learning (ML). We conclude that both groups desire increased efforts to obtain improved XAI tools; however, they formulate different target user groups and expectations regarding XAI features, and provide different examples of possible achievements. We summarize these viewpoints and provide guidelines for scientists looking to incorporate XAI into their own work.

