Definitions, methods, and applications in interpretable machine learning

2019 ◽ Vol 116 (44) ◽ pp. 22071-22080 ◽ Author(s): W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu

Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.
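
To make the model-based versus post hoc distinction concrete, here is a minimal scikit-learn sketch (ours, not from the paper): a Lasso model whose sparsity is itself the interpretation, next to a random forest interpreted post hoc via permutation importance. The synthetic data and all parameter choices are illustrative.

```python
# Sketch of the PDR framework's two categories of interpretation methods:
# model-based (sparsity) and post hoc (permutation importance).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model-based: sparsity makes the fitted model itself the interpretation.
sparse = Lasso(alpha=1.0).fit(X_train, y_train)
print("nonzero Lasso coefficients:", np.flatnonzero(sparse.coef_))

# Post hoc: fit a black-box model first, then interpret it afterwards.
forest = RandomForestRegressor(random_state=0).fit(X_train, y_train)
imp = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
print("top features by permutation importance:",
      np.argsort(imp.importances_mean)[::-1][:3])
```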

2019 ◽ Vol 333 ◽ pp. 273-283 ◽ Author(s): Yawen Li, Liu Yang, Bohan Yang, Ning Wang, Tian Wu

2020 ◽ Vol 7 (4) ◽ pp. 212-219 ◽ Author(s): Aixia Guo, Michael Pasque, Francis Loh, Douglas L. Mann, Philip R. O. Payne

Abstract
Purpose of Review: One in five people will develop heart failure (HF), and 50% of HF patients die within 5 years. Prediction of HF diagnosis, readmission, and mortality is essential for developing personalized prevention and treatment plans. This review summarizes recent findings and approaches of machine learning models for HF diagnostic and outcome prediction using electronic health record (EHR) data.
Recent Findings: A set of machine learning models has been developed for HF diagnostic and outcome prediction using diverse variables derived from EHR data, including demographic, medical-note, laboratory, and image data, and has achieved expert-comparable prediction results.
Summary: Machine learning models can facilitate the identification of HF patients, as well as accurate, patient-specific assessment of their risk for readmission and mortality. Additionally, novel machine learning techniques for integrating diverse data and for improving model predictive accuracy on imbalanced data sets are critical to the further development of these promising modeling methodologies.
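
As an illustration of the kind of pipeline the review surveys, here is a minimal sketch of outcome prediction on imbalanced data. The EHR-style features, the synthetic labels, and the choice of class_weight="balanced" are our assumptions, not the review's methodology.

```python
# Sketch: predicting an HF outcome (e.g., 30-day readmission) from tabular
# EHR-derived features, with a simple remedy for class imbalance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Hypothetical EHR-derived variables: age, ejection fraction, BNP, prior admissions.
X = np.column_stack([
    rng.normal(70, 10, n),       # age (years)
    rng.normal(40, 12, n),       # ejection fraction (%)
    rng.lognormal(6, 1, n),      # BNP (pg/mL)
    rng.poisson(1.0, n),         # prior admissions
])
y = rng.binomial(1, 0.15, n)     # ~15% positive rate: an imbalanced outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(class_weight="balanced", max_iter=1000))
model.fit(X_train, y_train)
# These synthetic labels carry no signal, so the AUROC here will sit near
# chance; with real EHR data the same pipeline yields a meaningful score.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUROC: {auc:.3f}")
```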


As Artificial Intelligence penetrates all aspects of human life, more and more questions about ethical practices and fair use arise, which has motivated the research community to look inside Artificial Intelligence/Machine Learning models and develop methods to interpret them. Interpretability not only helps with these ethical questions but also provides insight into how a machine learning model works, which is crucial for building trust and understanding how a model makes its decisions. Furthermore, in many machine learning applications, interpretability is the primary value the model offers. In practice, however, many developers select models based on accuracy alone and disregard the model's level of interpretability, which can be problematic because the predictions of many high-accuracy models are not easily explained. In this paper, we introduce the concepts of machine learning model interpretability and interpretable machine learning, and survey the methods used for interpretation and explanation.
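
As a minimal illustration of this accuracy-versus-interpretability tension (our sketch, not from the paper), the following compares a glass-box logistic regression with a black-box gradient-boosting model on a standard scikit-learn dataset; the scores alone hide that only the former yields a directly inspectable explanation.

```python
# Sketch: selecting on accuracy alone ignores that only one of these models
# comes with a built-in explanation of its predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", glass_box),
                    ("gradient boosting", black_box)]:
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")

# The glass-box model yields an explanation for free: signed coefficients.
coefs = glass_box.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: abs(t[1]),
             reverse=True)[:3]
print("most influential features:", top)
```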


Author(s): Terazima Maeda

Nowadays, there are a large number of machine learning models that can be applied in various areas. However, different research targets are usually sensitive to the type of model. For a specific prediction target, the predictive accuracy of a machine learning model always depends on the data features, the data size, and the intrinsic relationship between inputs and outputs. Therefore, for a specific data set and a fixed prediction task, how to rationally compare the predictive accuracy of different machine learning models is an important question. In this brief note, we show how the performances of different machine learning models should be compared, using some typical examples.
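
One standard way to make such a comparison rational, sketched below under our own assumptions (the note does not prescribe a specific protocol), is to evaluate every candidate model with the same cross-validation folds on the same data, so that score differences reflect the models rather than the splits.

```python
# Sketch: a like-for-like comparison of several models on one fixed data set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)  # identical folds for all models

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM (RBF)": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    # Report the mean and spread: a single split can be misleading.
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```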


2019 ◽ Vol 1 (8) ◽ pp. 1900045 ◽ Author(s): Paulius Mikulskis, Morgan R. Alexander, David Alan Winkler

2018 ◽ Vol 66 (4) ◽ pp. 283-290 ◽ Author(s): Johannes Brinkrolf, Barbara Hammer

Abstract
Classification by means of machine learning models constitutes a relevant technology in process automation and predictive maintenance. However, common techniques such as deep networks or random forests suffer from their black-box characteristics and possible adversarial examples. In this contribution, we give an overview of a popular alternative technology from machine learning, namely modern variants of learning vector quantization (LVQ), which, due to their combined discriminative and generative nature, incorporate interpretability and the possibility of explicit reject options for irregular samples. We give an explicit bound on the minimum change required to alter the classification in LVQ networks with a reject option, and we demonstrate the efficiency of reject options in two examples.
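
As a rough illustration of the idea (not the authors' implementation or their bound), the following numpy sketch classifies by nearest prototype and rejects a sample whose relative distance margin between the two best classes falls below an assumed threshold theta.

```python
# Sketch: nearest-prototype (LVQ-style) classification with a reject option.
# A sample is rejected when the two closest prototypes of different classes
# are nearly equidistant, i.e. the sample sits too close to the boundary.
import numpy as np

def lvq_predict_with_reject(x, prototypes, labels, theta=0.1):
    """Classify x by its nearest prototype; return None (reject) if the
    relative distance margin between the two best classes is below theta."""
    d = np.linalg.norm(prototypes - x, axis=1)
    best = np.argmin(d)
    # Distance to the closest prototype of any *other* class.
    other = d[labels != labels[best]].min()
    margin = (other - d[best]) / (other + d[best])  # lies in [0, 1]
    return labels[best] if margin >= theta else None

# Two prototypes per class in 2-D (illustrative values).
prototypes = np.array([[0.0, 0.0], [1.0, 0.2], [3.0, 3.0], [4.0, 2.8]])
labels = np.array([0, 0, 1, 1])

print(lvq_predict_with_reject(np.array([0.2, 0.1]), prototypes, labels))  # -> 0
print(lvq_predict_with_reject(np.array([1.9, 1.6]), prototypes, labels))  # -> None (rejected)
```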

