How do visual explanations foster end users' appropriate trust in machine learning?

Author(s): Fumeng Yang, Zhuanyi Huang, Jean Scholtz, Dustin L. Arendt
2021
Author(s): Brendon R Lutnick, David Manthey, Jan U Becker, Brandon Ginley, Katharina Moos, ...

Image-based machine learning tools hold great promise for clinical applications in nephropathology and kidney research. However, the ideal end-users of these computational tools (e.g., pathologists and biological scientists) often face prohibitive challenges in using them to their full potential, including a lack of technical expertise, suboptimal user interfaces, and limited computational power. We have developed Histo-Cloud, a tool for segmentation of whole slide images (WSIs) with an easy-to-use graphical user interface. This tool runs a state-of-the-art convolutional neural network (CNN) for segmentation of WSIs in the cloud and allows the extraction of features from segmented regions for further analysis. By segmenting glomeruli, interstitial fibrosis and tubular atrophy, and vascular structures from renal and non-renal WSIs, we demonstrate scalability, best practices for transfer learning, and the effects of dataset variability. Finally, we demonstrate an application for animal model research, analyzing glomerular features in murine models of aging, diabetic nephropathy, and HIV-associated nephropathy. The ability to access this tool over the internet will facilitate widespread use by computational non-experts. Histo-Cloud is open source and adaptable for segmentation of any histological structure regardless of stain. Histo-Cloud will greatly accelerate and facilitate the generation of datasets for machine learning in the analysis of kidney histology, empowering computationally novice end-users to conduct deep feature analysis of tissue slides.
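For readers unfamiliar with how such tools process slides under the hood, the following is a minimal, illustrative sketch of tile-based WSI segmentation with a pretrained CNN. It is not Histo-Cloud's implementation; the model choice (DeepLabV3), file name, and tile size are assumptions introduced for illustration only.

```python
# Minimal sketch: tile a whole slide image and run a segmentation CNN on each tile.
# Assumes the openslide and torch/torchvision packages; the pretrained DeepLabV3
# weights are a placeholder, not a model trained on histological structures.
import numpy as np
import openslide
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

TILE = 512  # tile edge length in pixels at the chosen resolution level

model = deeplabv3_resnet50(weights="DEFAULT").eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

slide = openslide.OpenSlide("example_slide.svs")  # hypothetical file name
width, height = slide.dimensions
# Note: real WSIs are very large; in practice one works at a lower-resolution
# level or writes the mask to disk incrementally rather than holding it in RAM.
mask = np.zeros((height, width), dtype=np.uint8)

with torch.no_grad():
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            tile = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
            logits = model(to_tensor(tile).unsqueeze(0))["out"]
            mask[y:y + TILE, x:x + TILE] = logits.argmax(dim=1).squeeze().numpy()
```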


2014, Vol 40 (3), pp. 307-323
Author(s): Alex Groce, Todd Kulesza, Chaoqiang Zhang, Shalini Shamasunder, Margaret Burnett, ...

2021, Vol 0 (0)
Author(s): Katharina Weitz

Abstract: Human-Centered AI is a widely requested goal for AI applications. To reach this goal, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.


2021, Vol 26 (1), pp. 22-30
Author(s): Oksana Ņikiforova, Vitaly Zabiniako, Jurijs Kornienko, Madara Gasparoviča-Asīte, Amanda Siliņa

Abstract: Improving IS (Information System) end-user experience is one of the most important tasks in analysing end-user behaviour and in evaluating and identifying its improvement potential. However, the application of Machine Learning methods to improving UX (User Experience) usability and efficiency is not widely researched. In the context of usability analysis, information about the behaviour of end-users could serve as input, while in the output data the focus should be on non-trivial or difficult, attention-grabbing events and scenarios. The goal of this paper is to identify which data can potentially serve as input for Machine Learning methods (and, accordingly, graph theory, transformation methods, etc.) and to define the dependency between these data and the desired output, which can help apply Machine Learning and graph algorithms to user activity records.
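As an illustration of the input/output pairing described above, the sketch below turns hypothetical user activity records into a weighted transition graph and flags rarely taken transitions as candidate attention-grabbing events. The field names, rarity threshold, and use of networkx are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: user activity records -> weighted screen-transition graph,
# with rarely used transitions flagged as candidate usability problems.
from collections import Counter
import networkx as nx

activity_log = [  # hypothetical records: (user, from_screen, to_screen)
    ("u1", "login", "dashboard"), ("u1", "dashboard", "report"),
    ("u2", "login", "dashboard"), ("u2", "dashboard", "settings"),
    ("u3", "login", "help"),      ("u3", "help", "login"),
]

# Count how often each screen-to-screen transition occurs across all users.
transitions = Counter((src, dst) for _, src, dst in activity_log)

# Build a directed graph whose edge weights are transition frequencies.
graph = nx.DiGraph()
for (src, dst), count in transitions.items():
    graph.add_edge(src, dst, weight=count)

# Flag non-trivial, attention-grabbing events: transitions taken only once.
rare = [(s, d) for (s, d), c in transitions.items() if c <= 1]
print("Rare transitions worth reviewing:", rare)
```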


2019, Vol 7, pp. 29
Author(s): Emma M.A.L. Beauxis-Aussalet, Joost Van Doorn, Lynda Hardman

Classifiers are applied in many domains where classification errors have significant implications. However, end-users may not always understand the errors and their impact, as error visualizations are typically designed for experts and for improving classifiers. We discuss the specific needs of classifiers' end-users, and a simplified visualization designed to address them. We evaluate this design with users from three levels of expertise, and compare it with ROC curves and confusion matrices. We identify key difficulties with understanding the classification errors, and how visualizations addressed or aggravated them. The main issues concerned confusions of the actual and predicted classes (e.g., confusion of False Positives and False Negatives). The machine learning terminology, the complexity of ROC curves, and the symmetry of confusion matrices aggravated these confusions. The end-user-oriented visualization reduced the difficulties by using several visual features to clarify the actual and predicted classes, and more tangible metrics and representations. Our results contribute to supporting end-users' understanding of classification errors and to informed decisions when choosing or tuning classifiers.
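The following minimal sketch illustrates the kind of reframing the authors advocate: computing a binary confusion matrix and restating its cells in class-explicit, tangible terms rather than bare FP/FN terminology. The data and wording are invented for illustration and are not the paper's visualization.

```python
# Minimal sketch: restate confusion-matrix cells in plain, class-explicit terms.
from sklearn.metrics import confusion_matrix

actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # 1 = relevant item, 0 = irrelevant
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

# For binary labels, ravel() yields the cells in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()

# Spell out actual vs. predicted classes instead of bare FP/FN terminology.
print(f"Items flagged as relevant that are actually irrelevant: {fp}")
print(f"Relevant items the classifier missed:                   {fn}")
print(f"Of {tp + fp} flagged items, {tp} are truly relevant "
      f"({100 * tp / (tp + fp):.0f}%).")
```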


2021, Vol 5 (12), pp. 78
Author(s): Hebitz C. H. Lau, Jeffrey C. F. Ho

This study presents a co-design project that invites participants with little or no background in artificial intelligence (AI) and machine learning (ML) to design their ideal virtual assistants (VAs) for everyday use. VAs are designed and function differently when integrated into people’s daily lives (e.g., voice-controlled VAs are designed to blend in based on their natural qualities). To further understand users’ ideas of their ideal VA designs, participants were invited to generate designs of personal VAs. However, end users may have unrealistic expectations of future technologies. Therefore, design fiction was adopted as a method of guiding the participants’ image of the future and carefully managing their realistic, as well as unrealistic, expectations of future technologies. The results suggest the need for a human–AI relationship based on controls with various dimensions (e.g., vocalness degree and autonomy level) instead of specific features. The design insights are discussed in detail. Additionally, the co-design process offers insights into how users can participate in AI/ML designs.


2020, Vol 3 (1)
Author(s): Mark P. Sendak, Michael Gao, Nathan Brajer, Suresh Balu

Author(s): Nguyen Tung Lam

Attacks that target end-users through phishing URLs are very dangerous nowadays. With this technique, attackers can steal user data or take control of the system. Therefore, detecting phishing URLs early is essential. In this paper, we propose a method to detect phishing URLs based on supervised learning algorithms and abnormal behaviors extracted from URLs. Finally, based on the research results, we build a framework for detecting phishing URLs for end-users. The novelty and advantage of our proposed method are that the abnormal behaviors are extracted from URLs that are monitored and collected directly from attack campaigns, instead of using inefficient old datasets.
Keywords: phishing URLs; detecting phishing URLs; abnormal behaviors of phishing URLs; machine learning
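The general pipeline described here, extracting features from URLs and training a supervised classifier, can be sketched as follows. The lexical features, example URLs, labels, and random-forest model are illustrative assumptions; they are not the behavioral features or the algorithm used by the author.

```python
# Minimal sketch: supervised phishing-URL detection from simple URL-derived features.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    """Return a small vector of lexical features for one URL (illustrative only)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                               # overall URL length
        url.count("-"),                         # hyphens often pad deceptive hostnames
        host.count("."),                        # subdomain depth
        float(any(c.isdigit() for c in host)),  # digits in the hostname
        float(parsed.scheme != "https"),        # missing HTTPS
    ]

train_urls = [  # hypothetical labeled examples: 1 = phishing, 0 = benign
    ("http://secure-login.paypa1.example.com/verify", 1),
    ("https://www.wikipedia.org/wiki/Phishing", 0),
    ("http://update-account-info.example.net/bank", 1),
    ("https://github.com/openssl/openssl", 0),
]

X = [url_features(u) for u, _ in train_urls]
y = [label for _, label in train_urls]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new, unseen URL (1 = predicted phishing).
print(clf.predict([url_features("http://login-verify.example.org/acct")]))
```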


2021
Author(s): Fumeng Yang

We investigated the effects of example-based explanations for a machine learning classifier on end users' appropriate trust. We explored the effects of spatial layout and visual representation in an in-person user study with 33 participants. We measured participants' appropriate trust in the classifier, quantified the effects of different spatial layouts and visual representations, and observed changes in users' trust over time. The results show that each explanation improved users' trust in the classifier, and the combination of explanation, human, and classification algorithm yielded much better decisions than the human and classification algorithm separately. Yet these visual explanations led to different levels of trust and may cause inappropriate trust if an explanation is difficult to understand. Visual representation and performance feedback strongly affect users' trust, and spatial layout shows a moderate effect. Our results do not support the idea that individual differences (e.g., propensity to trust) affect users' trust in the classifier. This work advances the state of the art in trustable machine learning and informs the design and appropriate use of automated systems.

