Improving Medical Image Decision-Making by Leveraging Metacognitive Processes and Representational Similarity

Author(s):  
Eeshan Hasan ◽  
Quentin Eichbaum ◽  
Adam C. Seegmiller ◽  
Charles Stratton ◽  
Jennifer S. Trueblood
Medical Care ◽  
2013 ◽  
Vol 51 (7) ◽  
pp. 628-632 ◽  
Author(s):  
Ronald W. Gimbel ◽  
Paul Fontelo ◽  
Mark B. Stephens ◽  
Cara H. Olsen ◽  
Christopher Bunt ◽  
...  

Cognition ◽  
2021 ◽  
Vol 212 ◽  
pp. 104713
Author(s):  
Jennifer S. Trueblood ◽  
Quentin Eichbaum ◽  
Adam C. Seegmiller ◽  
Charles Stratton ◽  
Payton O'Daniels ◽  
...  

Author(s):  
David A. Washburn ◽  
Lauren A. Baker ◽  
Pamela R. Raby ◽  
J. David Smith

Studies of decision making reveal individual differences not just in perception, memory, categorization, and other cognitive skills that support judgment, but also in the metacognitive processes that monitor confidence and uncertainty. The present report describes two experiments that used a psychophysical uncertainty task to assess these individual differences, their relation to differences in personality and temperament, and the possibility of improving how optimally people respond to their own uncertainty.


2021 ◽  
Author(s):  
Eeshan Hasan ◽  
Quentin Eichbaum ◽  
Adam C. Seegmiller ◽  
Charles Stratton ◽  
Jennifer S. Trueblood

Improving the accuracy of medical image interpretation is critical to improving the diagnosis of many diseases. Using both novices (undergraduates) and experts (medical professionals), we investigated methods for improving the accuracy of a single decision maker and of a group of decision makers by aggregating repeated decisions in different ways. Participants made classification decisions (cancerous versus non-cancerous) and confidence judgments on a series of cell images, viewing and classifying each image twice. We first examined whether it is possible to improve individual-level performance using the maximum confidence slating algorithm (Koriat, 2012b), which leverages metacognitive ability by taking the more confident of the two responses to an image as the ‘final response’. We find that maximum confidence slating improves individual classification accuracy for both novices and experts. Building on these results, we show that aggregation algorithms based on confidence weighting scale to larger groups of participants, dramatically improving diagnostic accuracy, with the performance of groups of novices reaching that of individual experts. In sum, we find that repeated decision making and confidence weighting can be valuable ways to improve accuracy in medical image decision-making and that these techniques can be used in conjunction with each other.
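
As a concrete illustration, the sketch below shows the two aggregation schemes described above in Python: maximum confidence slating within a single decision maker, and confidence-weighted voting across a group. It is a minimal sketch, not the authors' code; the data structures and function names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code) of the two
# aggregation schemes described in the abstract above.

from collections import defaultdict

def max_confidence_slating(first, second):
    """Maximum confidence slating: of two repeated decisions on the
    same image, keep the one made with higher confidence.

    Each argument is a (label, confidence) tuple, e.g. ('cancerous', 0.8);
    ties default to the first response.
    """
    return first if first[1] >= second[1] else second

def confidence_weighted_vote(responses):
    """Confidence-weighted group aggregation: sum each label's
    confidence across participants and return the heaviest label.
    """
    weights = defaultdict(float)
    for label, confidence in responses:
        weights[label] += confidence
    return max(weights, key=weights.get)

# Example: two raters each classify the same image twice; slating picks
# each rater's final response, then the group vote combines them.
rater_a = max_confidence_slating(('cancerous', 0.9), ('non-cancerous', 0.6))
rater_b = max_confidence_slating(('non-cancerous', 0.7), ('non-cancerous', 0.8))
print(confidence_weighted_vote([rater_a, rater_b]))  # 'cancerous' (0.9 > 0.8)
```

Note how confidence does double duty here: it arbitrates between a participant's own repeated responses and then weights that participant's contribution to the group decision, which is why the two techniques compose naturally.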


2021 ◽  
Vol 3 (3) ◽  
pp. 740-770
Author(s):  
Samanta Knapič ◽  
Avleen Malhi ◽  
Rohit Saluja ◽  
Kary Främling

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on the explanations provided by LIME, SHAP, and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and reported their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20), each receiving a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, the CIU method performed better than both LIME and SHAP in terms of improving support for human decision-making and in being more transparent, and thus understandable, to users. Additionally, CIU generated explanations more rapidly than LIME and SHAP. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. Accordingly, we present three explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
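
For readers who want to see what such a pipeline looks like in code, below is a hedged Python sketch of generating LIME and SHAP visual explanations for a CNN image classifier. It is not the authors' implementation: the model and images are untrained, randomly generated stand-ins for the VCE data, and CIU is omitted because its Python packaging (py-ciu) varies across versions.

```python
# Hedged sketch of post hoc visual explanations for a CNN image
# classifier with LIME and SHAP, in the spirit of the study. The model
# and images below are stand-ins; the VCE data set and the authors'
# exact pipeline are not reproduced.

import numpy as np
import shap
import tensorflow as tf
from lime import lime_image

# Stand-ins: an untrained two-class CNN and random "images".
model = tf.keras.applications.MobileNetV2(weights=None, classes=2)
images = np.random.rand(60, 224, 224, 3).astype(np.float32)

# --- LIME: perturb superpixels and fit a local surrogate model ---
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    images[0].astype("double"),
    model.predict,      # classifier fn: batch of images -> class probabilities
    top_labels=2,
    hide_color=0,
    num_samples=1000,   # number of perturbed samples around the image
)
lime_img, lime_mask = lime_exp.get_image_and_mask(
    lime_exp.top_labels[0], positive_only=True, num_features=5
)

# --- SHAP: attribute the prediction to pixels via expected gradients ---
# (shap's TensorFlow support varies somewhat across versions)
background = images[:50]             # reference sample defining the baseline
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(images[50:55])
shap.image_plot(shap_values, images[50:55])   # heatmaps over the inputs
```

The design difference matters for interpretation: LIME explains a single prediction by fitting an interpretable surrogate to perturbed copies of the image, while SHAP attributes the prediction to input pixels against a background reference, which is why it needs the extra `background` sample.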


Author(s):  
Jennifer S. Trueblood ◽  
William R. Holmes ◽  
Adam C. Seegmiller ◽  
Jonathan Douds ◽  
Margaret Compton ◽  
...  
