Three-dimensional modelling of alteration zones based on geochemical exploration data: An interpretable machine-learning approach via generalized additive models

2020, Vol 123, pp. 104781
Author(s): Jin Chen, Xiancheng Mao, Hao Deng, Zhankun Liu, Qi Wang

Electronics, 2021, Vol 10 (18), pp. 2208
Author(s): Maria Anna Ferlin, Michał Grochowski, Arkadiusz Kwasigroch, Agnieszka Mikołajczyk, Edyta Szurowska, ...

Machine learning-based systems are gaining interest in medicine, mostly in medical imaging and diagnosis. In this paper, we address the problem of automatic cerebral microbleed (CMB) detection in magnetic resonance images. The task is challenging because true CMBs are difficult to distinguish from their mimics; solving it, however, would streamline radiologists' work. To deal with this inherently three-dimensional problem, we propose a machine-learning approach based on a 2D Faster R-CNN network. We aimed for a reliable system, i.e., one with balanced sensitivity and precision. We therefore analysed, among other factors, the impact of how the training data are provided to the system, their pre-processing, the choice of model and its structure, and the regularisation methods used. Furthermore, we carefully analysed the network's predictions and proposed an algorithm for post-processing them. The proposed approach achieved high precision (89.74%), sensitivity (92.62%), and F1 score (90.84%). The paper presents the main challenges of automatic cerebral microbleed detection, a thorough analysis of the problem, and the developed system. The conducted research may contribute significantly to automatic medical diagnosis.
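The abstract gives no code, but its two core ingredients, a 2D Faster R-CNN detector applied to MR slices and detection metrics that balance precision and sensitivity, can be sketched with standard tooling. The snippet below is a minimal, hypothetical illustration using torchvision's detection API, not the authors' implementation; the dataset, training loop, candidate matching, and prediction post-processing are assumed or omitted.

```python
# Minimal sketch (not the authors' code): a 2-class Faster R-CNN detector for
# candidate lesion detection on 2D MR slices, plus the detection metrics named
# in the abstract. Uses the torchvision detection API; inputs are placeholders.
import torch
import torchvision

# Faster R-CNN with a ResNet-50 FPN backbone; classes: 0 = background, 1 = CMB.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

# Inference on a batch of 2D slices (3 channels because the default backbone
# transform expects RGB-like input; a single MR slice could be replicated).
slices = [torch.rand(3, 256, 256)]            # hypothetical input slice
with torch.no_grad():
    detections = model(slices)                # list of dicts: boxes, labels, scores

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, sensitivity (recall) and F1 from matched detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"precision": precision, "sensitivity": sensitivity, "f1": f1}
```

How detections are matched to ground-truth CMBs (the tp/fp/fn counts above) and how slice-level predictions are merged back into 3D is exactly the post-processing the paper analyses; the sketch leaves those steps abstract.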


2019
Author(s): Fred Hohman, Arjun Srinivasan, Steven M. Drucker

While machine learning (ML) continues to find success on problems previously thought to be hard, interpreting and exploring ML models remains challenging. Recent work has shown that visualizations are a powerful tool to aid debugging, analyzing, and interpreting ML models. However, depending on the complexity of the model (e.g., the number of features), interpreting these visualizations can be difficult and may require additional expertise. Alternatively, textual descriptions, or verbalizations, can be a simple yet effective way to communicate or summarize key aspects of a model, such as the overall trend in a model's predictions or comparisons between pairs of data instances. With the potential benefits of visualizations and verbalizations in mind, we explore how the two can be combined to aid ML interpretability. Specifically, we present a prototype system, TeleGam, that demonstrates how visualizations and verbalizations can collectively support interactive exploration of ML models, for example, generalized additive models (GAMs). We describe TeleGam's interface and the underlying heuristics used to generate the verbalizations. We conclude by discussing how TeleGam can serve as a platform for future studies on understanding user expectations and designing novel interfaces for interpretable ML.
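The idea of pairing GAM visualizations with generated text can be illustrated with a small sketch. The snippet below is a hypothetical example, not TeleGam's actual heuristics: it fits a GAM with the pygam library on synthetic data and prints a one-sentence trend summary per feature, analogous to a verbalization of the feature's shape-function plot. The feature names and the trend heuristic are illustrative assumptions.

```python
# Minimal sketch (not TeleGam itself): fit a GAM and emit simple textual
# "verbalizations" of each feature's learned shape function.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = 3 * X[:, 0] - np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, 500)

feature_names = ["feature_a", "feature_b"]          # hypothetical names
gam = LinearGAM(s(0) + s(1)).fit(X, y)

for i, name in enumerate(feature_names):
    grid = gam.generate_X_grid(term=i)
    shape = gam.partial_dependence(term=i, X=grid)   # learned shape function
    # Crude verbalization heuristic: overall direction and contribution range.
    direction = "increases" if shape[-1] > shape[0] else "decreases"
    spread = shape.max() - shape.min()
    print(f"As {name} grows, the predicted value generally {direction} "
          f"(contribution range of about {spread:.2f}).")
```

In a TeleGam-style interface these sentences would sit alongside the corresponding shape-function plots, so a reader can cross-check each summary against the visualization it describes.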

