Perception of robots in Kenya’s infosphere: Tools or colleagues?

2021 ◽  
pp. 37-56
Author(s):  
Tom Kwanya
2021 ◽  
Author(s):  
Sangmin Kim ◽  
Sukyung Seok ◽  
Jongsuk Choi ◽  
Yoonseob Lim ◽  
Sonya S. Kwak

2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Madelene Alanenpää ◽  
Ginevra Castellano

Over the past few years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results reveal that gaze aversion in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions reveal that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Anna Henschel ◽  
Hannah Bargel ◽  
Emily S. Cross

As robots begin to receive citizenship, are treated as beloved pets, and are given a place at Japanese family tables, it is becoming clear that these machines are taking on increasingly social roles. While human-robot interaction research relies heavily on self-report measures for assessing people's perception of robots, there is a distinct lack of robust cognitive and behavioural measures for gauging the scope and limits of social motivation towards artificial agents. Here we adapted Conty and colleagues' (2010) social version of the classic Stroop paradigm, in which we showed four kinds of distractor images above incongruent and neutral words: human faces, robot faces, object faces (for example, a cloud with facial features) and flowers (control). We predicted that social stimuli, like human faces, would be extremely salient and draw attention away from the to-be-processed words. A repeated-measures ANOVA indicated that the task worked (the Stroop effect was observed), and a distractor-dependent enhancement of Stroop interference emerged. Planned contrasts indicated that human faces in particular, when presented above incongruent words, significantly slowed participants' reaction times. To investigate this small effect further, we conducted a second experiment (N=51) with a larger stimulus set. While the main effect of the incongruent condition slowing down participants' reaction times replicated, we did not observe an interaction effect of the social distractors (human faces) drawing more attention than the other distractor types. We question the suitability of this task as a robust measure of social motivation and discuss our findings in light of recent conflicting results in the social attentional capture literature.


2013 ◽  
Vol 10 (2) ◽  
pp. 365-379 ◽  
Author(s):  
Yuanlong Yu ◽  
Jason Gu ◽  
George K. I. Mann ◽  
Raymond G. Gosine

Author(s):  
Rebecca Butler ◽  
Zoe Pruitt ◽  
Eva Wiese

As social robots are increasingly introduced into our everyday lives, an emphasis on improving human-robot interaction (HRI), particularly through increased mind perception, is necessary. Substantial research demonstrates that manipulations of a robot's physical appearance or behavior increase mind perception, yet little has been done to examine the effects of the social environment. This study aims to identify the impact of social context on mind perception by comparing mind perception ratings assigned to robots viewed in a human context with those assigned to robots viewed in a robot context. Participants were assigned to one of the two contexts, in which they viewed images of 5 control robots alongside either 15 humans or 15 robots and answered questions measuring the degree to which they ascribed mind to the agents. A t-test comparing the overall average mind ratings of the control robots between contexts showed a significant difference, with the robots in the robot context receiving a higher average rating than those in the human context. This result demonstrates a need to consider the social context in which the HRI will take place when designing for the best interaction. Considering that most robots in the foreseeable future will be viewed in a human context, this result also calls for additional research on ways to further increase mind perception to counteract the negative effect of the most likely social environment.


AI & Society ◽  
2021 ◽  
Author(s):  
Caroline L. van Straten ◽  
Jochen Peter ◽  
Rinaldo Kühne ◽  
Alex Barco

It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz (i.e., teleoperation) set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children's perception of, and relationship formation with, a robot. To address this research gap, we conducted an experimental study with a 2 × 2 (teleoperation: overt/covert; self-description: yes/no) between-subjects design in which 168 children aged 7–10 interacted with a Nao robot once. Transparency about the teleoperation procedure decreased children's perceptions of the robot's autonomy and anthropomorphism. Self-description reduced the degree to which children perceived the robot as being similar to themselves. Transparent teleoperation and self-description affected neither children's perceptions of the robot's animacy and social presence nor their closeness to and trust in the robot.


Author(s):  
Reza Etemad-Sajadi ◽  
Antonin Soussan ◽  
Théo Schöpfer

The goal of this research is to focus on the ethical issues linked to the interaction between humans and robots in a service delivery context. Through this user study, we examine how ethics influence users' intention to use a robot in a frontline service context and observe the importance of each ethical attribute for users' intention to use the robot in the future. To achieve this goal, we incorporated a video showing Pepper, the robot, in action; respondents then answered questions about their perception of robots based on the video. Based on a final sample of 341 respondents, we used structural equation modeling (SEM) to test our hypotheses. The results show that the most important ethical issue is Replacement and its implications for labor. When we look at the impact of the ethical issues on intention to use, the variables with the greatest impact are Social Cues, Trust, and Safety.

