Machine learning for clinician-interpretable algorithm development: pressure injury safety surveillance using explainable AI

Author(s):  
Jonathan Dimas ◽  
Andrew Wilson ◽  
Jenny Alderden ◽  
Sergey Krikov ◽  
Amanda Shields ◽  
...  

2020 ◽  
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2021 ◽  
Author(s):  
Jean-Jacques Ohana ◽  
Steve Ohana ◽  
Eric Benhamou ◽  
David Saltiel ◽  
Beatrice Guez

10.2196/25704 ◽  
2020 ◽  
Author(s):  
Mengyao Jiang ◽  
Yuxia Ma ◽  
Siyi Guo ◽  
Liuqi Jin ◽  
Lin Lv ◽  
...  

Author(s):  
Gaël Aglin ◽  
Siegfried Nijssen ◽  
Pierre Schaus

Decision Trees (DTs) are widely used Machine Learning (ML) models with a broad range of applications. Interest in these models has increased even further in the context of Explainable AI (XAI), as decision trees of limited depth are highly interpretable models. However, traditional algorithms for learning DTs are heuristic in nature; they may produce trees of suboptimal quality under depth constraints. We introduce PyDL8.5, a Python library to infer depth-constrained Optimal Decision Trees (ODTs). PyDL8.5 provides an interface for DL8.5, an efficient algorithm for inferring depth-constrained ODTs. The library provides an easy-to-use, scikit-learn compatible interface. It can be used not only for classification tasks, but also for regression, clustering, and other tasks. We introduce an interface that allows users to easily implement these other learning tasks, and we provide a number of examples of how to use this library.
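
The following is a minimal usage sketch of the scikit-learn-style interface described in the abstract. The class name DL85Classifier and the max_depth parameter follow the PyDL8.5 paper, but the import path (pydl85 vs. dl85) varies between released versions, and the binarization step reflects the assumption that, like DL8.5, the library expects binary features; treat this as an illustration rather than the definitive API.

# Sketch: fitting a depth-constrained optimal decision tree with PyDL8.5's
# scikit-learn-compatible interface. Assumptions: the package exposes
# DL85Classifier (import path may be `pydl85` or `dl85` depending on the
# installed version) and expects binary input features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from pydl85 import DL85Classifier  # or: from dl85 import DL85Classifier

X, y = load_breast_cancer(return_X_y=True)
# Binarize each feature at its median so the solver can search over binary splits.
X_bin = (X > np.median(X, axis=0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X_bin, y, test_size=0.3, random_state=0
)

# Depth 3 keeps the tree interpretable; the solver guarantees optimality
# under that depth constraint, unlike greedy heuristics such as CART.
clf = DL85Classifier(max_depth=3)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))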


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Katharina Weitz

Abstract: Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts such as trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.

