Exploring Verbal Uncanny Valley Effects with Vague Language in Computer Speech

Author(s): Leigh Clark, Abdulmalik Ofemile, Benjamin R. Cowan
2020, Vol 2020, p. 1346
Author(s): Juran Kim, Seungmook Kang, Joonheui Bae
2020
Author(s): Christopher Welker, David France, Alice Henty, Thalia Wheatley

Advances in artificial intelligence (AI) enable the creation of videos in which a person appears to say or do things they did not. The impact of these so-called “deepfakes” hinges on their perceived realness. Here we tested different versions of deepfake faces for Welcome to Chechnya, a documentary that used face swaps to protect the privacy of Chechen torture survivors who were persecuted because of their sexual orientation. AI face swaps that replaced an entire face with another were perceived as more human-like and less unsettling than partial face swaps that left the survivors’ original eyes unaltered. The full-face swap was deemed the least unsettling, even in comparison with the original (unaltered) face. When rendered in full, AI face swaps can appear human and avoid the aversive viewer responses associated with the uncanny valley.


2014
Author(s): Jessy Rose Goodman

Author(s): Jiayuan Dong, Emily Lawson, Jack Olsen, Myounghoon Jeon

Driving agents can provide an effective solution for improving drivers’ trust in, and managing interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions, and their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that the embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to traditional gender stereotypes, the uncanny valley, and participant gender. This study can contribute to the design of in-vehicle agents for autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.


2021, Vol 5 (3), p. 13
Author(s): Heting Wang, Vidya Gaddy, James Ross Beveridge, Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduced an avatar named Diana, who expresses a higher level of emotional intelligence. To adapt to users’ varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and task completion time was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Questionnaire results were not statistically different across conditions. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana’s face as natural; four mentioned uncomfortable feelings caused by the uncanny valley effect.


2019, Vol 30 (1), pp. 1-12
Author(s): Seo Young Kim, Bernd H. Schmitt, Nadia M. Thalmann

2015, Vol 24 (1), pp. 1-23
Author(s): Himalaya Patel, Karl F. MacDorman

Just as physical appearance affects social influence in human communication, it may also affect the processing of advice conveyed through avatars, computer-animated characters, and other human-like interfaces. Although the most persuasive computer interfaces are often the most human-like, they have been predicted to incur the greatest risk of falling into the uncanny valley, the loss of empathy attributed to characters that appear eerily human. Previous studies compared interfaces on the left side of the uncanny valley, namely, those with low human likeness. To examine interfaces with higher human realism, a between-groups factorial experiment was conducted through the internet with 426 midwestern U.S. undergraduates. This experiment presented a hypothetical ethical dilemma followed by the advice of an authority figure. The authority was manipulated in three ways: depiction (digitally recorded or computer animated), motion quality (smooth or jerky), and advice (disclose or refrain from disclosing sensitive information). Of these, only the advice changed opinion about the ethical dilemma, even though the animated depiction was significantly eerier than the human depiction. These results indicate that compliance with an authority persists even when using an uncannily realistic computer-animated double.

