Explainable Recommendation
Recently Published Documents


TOTAL DOCUMENTS: 55 (FIVE YEARS 47)

H-INDEX: 8 (FIVE YEARS 4)

Author(s): Ziyu Lyu, Yue Wu, Junjie Lai, Min Yang, Chengming Li, ...

2021, Vol 2 (4), pp. 434-447
Author(s): Shunsuke Kido, Ryuji Sakamoto, Masayoshi Aritsugi

There are many reviews on the Internet, and existing explainable recommendation techniques exploit them. However, how reviews should be used has not yet been adequately addressed. This paper proposes a new method for exploiting reviews in explainable recommendation generation. Our method makes use not only of the reviews a user has written but also of those the user has referred to. We adopt two state-of-the-art explainable recommendation approaches and show how to apply our method to each of them. Moreover, our method handles reviews that do not come with detailed utilization data, so the proposal can be applied to different explainable recommendation approaches even when such data are unavailable, as demonstrated with the two adopted approaches. An evaluation using Amazon reviews shows that both explainable recommendation approaches are improved. To our knowledge, this is the first attempt to exploit reviews that are written or referred to by users in generating explainable recommendations, and, notably, it does not assume that reviews provide detailed utilization data.
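The abstract does not describe an implementation, so the following is only a minimal sketch of the core idea: building a per-user text profile from both the reviews the user wrote and the reviews the user referred to, which could then be fed to a review-based explainable recommender in place of written reviews alone. The function name build_user_profiles, the data layout, and the reading of "referred to" as something like marking a review helpful are illustrative assumptions, not taken from the paper.

    from collections import defaultdict

    def build_user_profiles(written, referred):
        """Combine reviews a user wrote with reviews the user referred to
        into one text profile per user.

        written  : list of (user_id, review_text) pairs
        referred : list of (user_id, review_text) pairs
        (Both the data layout and the source labels are assumptions
        made for this sketch, not the paper's format.)
        """
        profiles = defaultdict(list)
        for user_id, text in written:
            profiles[user_id].append(("written", text))
        for user_id, text in referred:
            profiles[user_id].append(("referred", text))
        return profiles

    # Toy usage: one user with one written and one referred-to review.
    written = [("u1", "Great battery life, but the screen scratches easily.")]
    referred = [("u1", "The camera is the best in this price range.")]

    profiles = build_user_profiles(written, referred)
    for source, text in profiles["u1"]:
        print(source, ":", text)

The combined profile keeps the source label so a downstream recommender could, for example, weight written and referred-to reviews differently when generating explanations.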


2021
Author(s): Juntao Tan, Shuyuan Xu, Yingqiang Ge, Yunqi Li, Xu Chen, ...

2021
Author(s): Mohammad Naiseh, Dena Al-Thani, Nan Jiang, Raian Ali

Human-AI collaborative decision-making tools are increasingly being applied in critical domains such as healthcare. However, these tools are often seen as closed and non-transparent by human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally have positive connotations, studies have shown that users' interaction and engagement with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how explanation interaction design can help trust calibration. Our research method comprised two main phases. We first conducted a think-aloud study with 16 participants to reveal the main trust calibration errors related to explainability in human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that support trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and support for training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.

