How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair

2021 ◽  
Vol 61 ◽  
pp. 101595
Author(s):  
Taenyun Kim ◽  
Hayeon Song

After an intelligent agent makes an error, trust repair can be attempted to regain lost trust. Beyond the repair strategy itself, individuals' underlying perception of the malleability of machines--their implicit theory--can also influence the agent's trust repair process. In this study, we investigated the influence of implicit theories of machines on an intelligent agent's apology after a trust violation. A 2 (implicit theory: Incremental vs. Entity) X 2 (apology attribution: Internal vs. External) between-subjects experiment simulating stock market investment was conducted online (N = 150). Participants were given a situation in which they had to make investment decisions based on the recommendations of an artificial intelligence agent. We created an investment game consisting of 40 investment opportunities to observe the process of trust development, trust violation, and trust repair. The results show that, after the trust violation, trust was damaged less severely in the Incremental than in the Entity implicit theory condition, and in the External rather than the Internal attribution apology condition. However, trust recovered most in the Entity-External condition. Both theoretical and practical implications are discussed.


2020 ◽  
Author(s):  
Taenyun Kim ◽  
Hayeon Song

Trust is essential to individuals' perception, behavior, and evaluation of intelligent agents. Indeed, it is the primary motive for people to accept new technology, so it is crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust and how the effectiveness of the apology differs when the agent is human-like rather than machine-like, based on two seemingly competing frameworks: the CASA (Computers-Are-Social-Actors) paradigm and automation bias. A 2 (agent: Human-like vs. Machine-like) X 2 (apology attribution: Internal vs. External) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices with the help of an artificial intelligence agent's advice. To trace the trajectory of initial trust building, trust violation, and trust repair, we designed an investment game consisting of 5 rounds of 8 investment choices (40 choices in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. However, the opposite pattern was observed among participants with machine-like agents: the external attribution condition showed better trust repair than the internal one. Both theoretical and practical implications are discussed.


2021 ◽  
Vol 35 (2) ◽  
Author(s):  
E. S. Kox ◽  
J. H. Kerstholt ◽  
T. F. Hueting ◽  
P. W. de Vries

The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement, and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust, and since repairing damaged trust proves more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies from an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data for this study were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. Afterwards, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both, or neither). Our results indicated that expressing regret was crucial for effective trust repair. After trust declined due to the agent's violation, trust recovered significantly only when an expression of regret was included in the apology. This effect was stronger when an explanation was added: the intelligent agent was most effective at rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.


2005 ◽  
Author(s):  
Peter H. Kim ◽  
Kurt T. Dirks ◽  
Cecily D. Cooper ◽  
Donald L. Ferrin

2011 ◽  
Author(s):  
Stephen D. R. Bennett ◽  
A. Nicole Burnett ◽  
Paul D. Siakaluk ◽  
Penny M. Pexman
