Towards Explainable AI: Design and Development for Explanation of Machine Learning Predictions for a Patient Readmittance Medical Application

Author(s):  
Sofia Meacham ◽  
Georgia Isaac ◽  
Detlef Nauck ◽  
Botond Virginas

2020 ◽ 
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2021 ◽  
Author(s):  
Jean-Jacques Ohana ◽  
Steve Ohana ◽  
Eric Benhamou ◽  
David Saltiel ◽  
Beatrice Guez

Predictive modelling is a mathematical technique that uses statistics for prediction. Owing to the rapid growth of data in cloud systems, data mining plays a significant role: it extracts knowledge from huge data sources and is attracting increasing attention in medical applications, specifically for analysing both known and unknown patterns to support effective medical diagnosis, treatment, management, prognosis, monitoring, and screening. However, historical medical data may be noisy, missing, inconsistent, imbalanced, and high-dimensional. Such data issues lead to severe bias in predictive modelling and degrade the performance of data mining approaches. Various pre-processing and machine learning methods and models, such as supervised learning, unsupervised learning, and reinforcement learning, have been proposed in the recent literature. Hence, the present research reviews and analyses the various models, algorithms, and machine learning techniques for clinical predictive modelling, with the aim of obtaining high-performance results from the diverse medical data of patients with multiple diseases.
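Two of the data problems named above, missing values and class imbalance, can be sketched in a few lines. This is a minimal illustration with made-up toy records, not any specific method from the reviewed literature:

```python
import random
from statistics import mean

random.seed(0)

# Toy patient records: (feature, label); None marks a missing value.
records = [(5.0, 0), (None, 0), (6.5, 0), (7.0, 0), (8.0, 1), (None, 1)]

# Mean imputation: replace missing feature values with the observed mean.
observed = [x for x, _ in records if x is not None]
fill = mean(observed)
imputed = [(x if x is not None else fill, y) for x, y in records]

# Random oversampling: duplicate minority-class rows until classes balance.
majority = [r for r in imputed if r[1] == 0]
minority = [r for r in imputed if r[1] == 1]
while len(minority) < len(majority):
    minority.append(random.choice(minority))
balanced = majority + minority
```

After these two steps the dataset has no missing values and an equal number of rows per class, which removes one source of the bias the abstract warns about.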


Author(s):  
Azamat Yeshmukhametov ◽  
Koichi Koganezawa ◽  
Zholdas Buribayev ◽  
Yedilkhan Amirgaliyev ◽  
Yoshio Yamamoto

Designing and developing an agricultural robot is always challenging, because the robot is intended to work in an unstructured environment while remaining safe for the surrounding plants. Traditional robots therefore cannot meet the high demands of modern challenges, such as working in confined and unstructured workspaces. To address these issues, we developed a new wire-driven discrete continuum robot arm for tomato harvesting, with a flexible backbone structure for working in confined and extremely constrained spaces. We also optimised the tomato detaching process with a newly designed gripper that has a passive stem-cutting function, and developed ripe-tomato recognition using machine learning. This paper describes the proposed continuum robot structure, the gripper design, and the development of the tomato recognition system.
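The abstract does not specify the recognition model used. As a conceptual illustration of colour-based ripeness classification, the following nearest-centroid sketch assigns a label from the mean RGB colour of an image patch; the centroids and sample colours are hypothetical values, not data from the paper:

```python
# Nearest-centroid ripeness classifier on mean RGB colour.
# Centroids are illustrative assumptions: ripe tomatoes skew red,
# unripe ones skew green.
CENTROIDS = {
    "ripe":   (200.0, 40.0, 30.0),
    "unripe": (80.0, 170.0, 60.0),
}

def classify(mean_rgb):
    """Return the label of the centroid nearest (squared distance) to mean_rgb."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda label: sqdist(CENTROIDS[label], mean_rgb))

print(classify((190.0, 60.0, 40.0)))   # reddish patch -> ripe
print(classify((90.0, 160.0, 70.0)))   # greenish patch -> unripe
```

A learned model would replace the hand-set centroids with values fitted to labelled images, but the decision rule has the same shape.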


Author(s):  
Gaël Aglin ◽  
Siegfried Nijssen ◽  
Pierre Schaus

Decision Trees (DTs) are widely used Machine Learning (ML) models with a broad range of applications. Interest in these models has increased even further in the context of Explainable AI (XAI), as decision trees of limited depth are highly interpretable. However, traditional algorithms for learning DTs are heuristic in nature; they may produce trees of suboptimal quality under depth constraints. We introduce PyDL8.5, a Python library to infer depth-constrained Optimal Decision Trees (ODTs). PyDL8.5 provides an easy-to-use, scikit-learn compatible interface for DL8.5, an efficient algorithm for inferring depth-constrained ODTs. It can be used not only for classification tasks, but also for regression, clustering, and other tasks, and we introduce an interface that allows users to easily implement such other learning tasks. We provide a number of examples of how to use the library.
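To see what "optimal" means under a depth constraint, consider the simplest case: a depth-1 tree (a stump) on binary features. Exhaustively scoring every possible split guarantees the minimum training error, whereas a greedy heuristic might not. This is a conceptual sketch, not the DL8.5 algorithm or the PyDL8.5 API:

```python
# Toy binary dataset: each row is (features, label).
data = [
    ((0, 0, 1), 0),
    ((0, 1, 1), 0),
    ((1, 0, 0), 1),
    ((1, 1, 0), 1),
    ((1, 1, 1), 0),
]

def stump_errors(feature):
    """Training errors of the best depth-1 tree splitting on `feature`."""
    errors = 0
    for value in (0, 1):
        branch = [y for x, y in data if x[feature] == value]
        if branch:
            # Predict the majority label in this branch; the rest are errors.
            errors += len(branch) - max(branch.count(0), branch.count(1))
    return errors

# Exhaustive search over all features: guaranteed-optimal depth-1 tree.
n_features = len(data[0][0])
best = min(range(n_features), key=stump_errors)
print(best, stump_errors(best))
```

DL8.5 scales this idea to deeper trees by branch-and-bound search with caching, rather than naive enumeration.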


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Katharina Weitz

Human-Centered AI is a widely requested goal for AI applications. To reach this goal, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no machine learning experience perceive them. Psychological concepts such as trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results from applications in education, healthcare, and industry suggest that one XAI does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.


2021 ◽  
pp. 713-720
Author(s):  
Oleksandr Nakonechnyi ◽  
Vasyl Martsenyuk ◽  
Aleksandra Klos-Witkowska ◽  
Diana Zhehestovska
