Perception of Robots
Recently Published Documents

TOTAL DOCUMENTS: 26 (five years: 17)
H-INDEX: 4 (five years: 1)

Author(s): Reza Etemad-Sajadi, Antonin Soussan, Théo Schöpfer

Abstract: The goal of this research is to examine the ethical issues linked to the interaction between humans and robots in a service delivery context. Through a user study, we investigate how ethics influence users' intention to use a robot in a frontline service context, and observe the importance of each ethical attribute for users' intention to use the robot in the future. To achieve this goal, we showed respondents a video of Pepper, the robot, in action; respondents then answered questions about their perception of robots based on the video. Based on a final sample of 341 respondents, we used structural equation modeling (SEM) to test our hypotheses. The results show that the most important ethical issue is Replacement and its implications for labor. When we look at the impact of the ethical issues on the intention to use, the variables with the greatest impact are Social Cues, Trust, and Safety.
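A full SEM analysis like the one described would normally use a dedicated package (e.g. semopy in Python or lavaan in R). As a minimal stdlib sketch of one building block only, the snippet below computes a Pearson correlation between two rating variables; for a single standardized predictor, the path coefficient reduces to this correlation. The variable names and ratings are invented for illustration, not taken from the study.

```python
# Minimal sketch: Pearson correlation between two observed score lists.
# For one standardized predictor, the SEM path coefficient equals r.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point ratings: perceived safety vs. intention to use.
safety    = [5, 4, 4, 3, 2, 5, 4, 3]
intention = [5, 5, 4, 3, 2, 4, 4, 2]
print(round(pearson_r(safety, intention), 3))
```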


2021
Author(s): Sarah Mandl, Maximilian Bretschneider, Stefanie Meyer, Dagmar Gesmann-Nuissl, Frank Asbrock, ...

New bionic technologies and robots are becoming increasingly common in workspaces and private spheres. It is thus crucial to understand concerns regarding their use in social and legal terms and the qualities they should possess to be accepted as ‘co-workers’. Previous research in these areas used the Stereotype Content Model (SCM) to investigate, for example, attributions of warmth and competence towards people who use bionic prostheses, cyborgs, and robots. In the present study, we propose to differentiate the Warmth dimension into the dimensions Sociability and Morality to gain deeper insight into how people with or without bionic prostheses are perceived. In addition, we extend our research to the perception of robots, such as industrial, social, or android robots. Since legal aspects need to be considered if robots are expected to be ‘co-workers’, we also evaluated current perceptions of robots in terms of legal questions. We conducted two studies in which participants rated visual stimuli of individuals with or without disabilities and low- or high-tech prostheses, as well as robots of different levels of Anthropomorphism (Study 1), or only robots of different levels of Anthropomorphism (Study 2), in terms of Competence, Sociability, and Morality, and, for Study 2, Legal Personality and Decision-Making Authority. We also controlled for participants’ personality. Results showed that attributions of Competence and Morality varied as a function of the technical sophistication of the prostheses. For robots, Competence attributions were negatively related to Anthropomorphism. Sociability, Morality, Legal Personality, and Decision-Making Authority varied as functions of Anthropomorphism. Overall, this study contributes to technological design that aims at ensuring high acceptance and minimal undesirable side effects, both with regard to the application of bionic instruments and robotics. Additionally, it offers first insights into whether more anthropomorphized robots will need to be treated differently in legal practice.


Author(s): Joan Torrent-Sellens, Ana Isabel Jiménez-Zarco, Francesc Saigí-Rubió

(1) Background: The goal of the paper was to establish the factors that influence how people feel about having a medical operation performed on them by a robot. (2) Methods: Data were obtained from a 2017 Flash Eurobarometer (number 460) of the European Commission, covering 27,901 citizens aged 15 years and over in the 28 countries of the European Union. Logistic regression (odds ratios, OR) was used to model the predictors of trust in robot-assisted surgery from motivational factors, with experience and sociodemographic measures as independent variables. (3) Results: The results indicate that, as experience of using robots increases, the predictive coefficients related to information, attitude, and perception of robots become more negative. Furthermore, sociodemographic variables played an important predictive role. The effect of experience on trust in robots for surgical interventions was greater among men, people between 40 and 54 years old, and those with higher educational levels. (4) Conclusions: The results show that trust in robots goes beyond rational decision-making, since the final decision about whether a robot should perform a complex procedure like a surgical intervention depends almost exclusively on the patient’s wishes.
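Fitting the full logistic regression on the Eurobarometer microdata would normally use a statistics package (e.g. statsmodels' Logit, exponentiating coefficients to get odds ratios). As a minimal stdlib illustration of what a reported odds ratio means, the sketch below computes the OR for a single binary predictor from a 2×2 contingency table; the counts are invented, not the survey's figures.

```python
# Odds ratio from a 2x2 table: OR = (a/b) / (c/d), i.e. the odds of the
# outcome in the exposed group divided by the odds in the unexposed group.
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR for a binary predictor from 2x2 contingency counts."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts: trusting robot surgery among men vs. women.
print(odds_ratio(120, 80, 90, 110))
```

An OR above 1 here would mean the odds of trusting robot-assisted surgery are higher in the first group, matching the direction of the sex effect reported above.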


2021
Author(s): Nicolas Spatola, Serena Marchesi, Agnieszka Wykowska

Anthropomorphism describes the tendency to ascribe to nonhuman agents characteristics and capacities such as cognitions, intentions, or emotions. Due to the increased interest in social robotics, anthropomorphism has become a core concept of human-robot interaction (HRI) studies. However, the wide use of this concept has resulted in interchangeable definitions and a lack of integrative approaches. In the present study, we propose a framework of anthropomorphism encompassing three levels of integration: cultural (i.e., animism beliefs), individual (i.e., mentalization, spiritualization, and humanization tendencies), and attributional (i.e., cognition, emotion, and intention attributions). We also acknowledge the Westernized bias of the current view of anthropomorphism and develop a cross-cultural approach. In two studies, participants from different cultures completed tasks and questionnaires assessing their animism beliefs and their individual tendencies to imbue robots with mental properties (mentalization) and spirit (spiritualization), and to consider them as more or less human (humanization). We also evaluated their attributions of mental anthropomorphic characteristics to robots (cognition, emotion, intention). Our results demonstrate, in both experiments, that the three-level model reliably explains the collected data and that culture modulates the point at which cultural beliefs are integrated at the individual level. In addition, in Experiment 2, the analyses show a more anthropocentric view of the mind among Western than among East-Asian participants. As such, Western perception of robots depends more on humanization, while mentalization is the core of the East-Asian participants' model. We further discuss these results in relation to the anthropomorphism literature and argue for the use of an integrative cross-cultural model in HRI research.


2021
Author(s): Sangmin Kim, Sukyung Seok, Jongsuk Choi, Yoonseob Lim, Sonya S. Kwak

AI & Society
2021
Author(s): Caroline L. van Straten, Jochen Peter, Rinaldo Kühne, Alex Barco

Abstract: It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz (i.e., teleoperation) set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with, a robot. As an initial step toward addressing this research gap, we conducted an experimental study with a 2 × 2 (teleoperation: overt/covert; self-description: yes/no) between-subjects design in which 168 children aged 7–10 interacted with a Nao robot once. Transparency about the teleoperation procedure decreased children’s perceptions of the robot’s autonomy and anthropomorphism. Self-description reduced the degree to which children perceived the robot as being similar to themselves. Transparent teleoperation and self-description affected neither children’s perceptions of the robot’s animacy and social presence nor their closeness to and trust in the robot.
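In a 2 × 2 between-subjects design like the one above, each child falls into exactly one condition cell, and the first descriptive step of the analysis is a table of cell means per dependent measure. A minimal sketch with invented ratings (not the study's data):

```python
# Compute per-cell means for a 2x2 between-subjects design.
from collections import defaultdict
from statistics import mean

def cell_means(rows):
    """rows: list of (teleoperation, self_description, score) tuples.
    Returns a dict mapping each (teleoperation, self_description) cell
    to the mean score of the participants in that cell."""
    cells = defaultdict(list)
    for teleop, selfdesc, score in rows:
        cells[(teleop, selfdesc)].append(score)
    return {cell: mean(scores) for cell, scores in cells.items()}

# Hypothetical perceived-autonomy ratings (1-5 scale).
data = [("overt", "yes", 2.0), ("overt", "no", 2.5),
        ("covert", "yes", 4.0), ("covert", "no", 4.5)]
print(cell_means(data)[("covert", "yes")])  # 4.0
```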


2021
Vol 8
Author(s): Giulia Perugia, Maike Paetzel-Prüsmann, Madelene Alanenpää, Ginevra Castellano

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results disclose that aversion of gaze in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
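The gaze measures described above can be proxied by the share of total fixation time each target receives (robot, object of shared attention, elsewhere). The sketch below assumes a simple fixation log of (target, duration) pairs; the data shapes and labels are illustrative assumptions, not the authors' pipeline.

```python
# Aggregate fixation durations per gaze target and normalize to proportions.
from collections import defaultdict

def gaze_proportions(fixations):
    """fixations: list of (target_label, duration_ms) tuples.
    Returns each target's share of total fixation time."""
    totals = defaultdict(float)
    for target, duration in fixations:
        totals[target] += duration
    grand = sum(totals.values())
    return {t: d / grand for t, d in totals.items()}

# Hypothetical fixation log from one joint-task session.
log = [("task_object", 420), ("robot", 180), ("task_object", 300),
       ("elsewhere", 100)]
props = gaze_proportions(log)
print(props["task_object"])  # 720/1000 = 0.72
```

Per the finding above, a higher proportion on the object of shared attention (rather than on the robot) would be the more meaningful engagement signal in the joint task.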


PLoS ONE
2021
Vol 16 (2), pp. e0247364
Author(s): Fangkai Yang, Yuan Gao, Ruiyang Ma, Sahba Zojaji, Ginevra Castellano, ...

The analysis and simulation of the interactions that occur in group situations are important when humans and artificial agents, physical or virtual, must coordinate when inhabiting similar spaces or even collaborate, as in the case of human-robot teams. Artificial systems should adapt to the natural interfaces of humans rather than the other way around. Such systems should be sensitive to human behaviors, which are often social in nature, and account for human capabilities when planning their own behaviors. A limiting factor relates to our understanding of how humans behave with respect to each other and with artificial embodiments, such as robots. To this end, we present CongreG8 (pronounced ‘con-gre-gate’), a novel dataset containing the full-body motions of free-standing conversational groups of three humans and a newcomer that approaches the groups with the intent of joining them. The aim has been to collect an accurate and detailed set of positioning, orienting, and full-body behaviors when a newcomer approaches and joins a small group. The dataset contains trials from human and robot newcomers. Additionally, it includes questionnaires about the personality of participants (BFI-10), their perception of robots (Godspeed), and custom human/robot interaction questions. An overview and analysis of the dataset is also provided, which suggests that human groups are more likely to alter their configuration to accommodate a human newcomer than a robot newcomer. We conclude by providing three use cases that the dataset has already been applied to in the domains of behavior detection and generation in real and virtual environments. A sample of the CongreG8 dataset is available at https://zenodo.org/record/4537811.


2021
Vol 7 (1)
Author(s): Anna Henschel, Hannah Bargel, Emily S. Cross

As robots begin to receive citizenship, are treated as beloved pets, and are given a place at Japanese family tables, it is becoming clear that these machines are taking on increasingly social roles. While human-robot interaction research relies heavily on self-report measures for assessing people’s perception of robots, there is a distinct lack of robust cognitive and behavioural measures to gauge the scope and limits of social motivation towards artificial agents. Here we adapted Conty and colleagues’ (2010) social version of the classic Stroop paradigm, in which we showed four kinds of distractor images above incongruent and neutral words: human faces, robot faces, object faces (for example, a cloud with facial features), and flowers (control). We predicted that social stimuli, like human faces, would be extremely salient and draw attention away from the to-be-processed words. A repeated-measures ANOVA indicated that the task worked (the Stroop effect was observed), and a distractor-dependent enhancement of Stroop interference emerged. Planned contrasts indicated that specifically human faces presented above incongruent words significantly slowed participants’ reaction times. To investigate this small effect further, we conducted a second experiment (N = 51) with a larger stimulus set. While the main effect of the incongruent condition slowing down participants’ reaction times replicated, we did not observe an interaction effect of the social distractors (human faces) drawing more attention than the other distractor types. We question the suitability of this task as a robust measure of social motivation and discuss our findings in light of recent conflicting results in the social attentional capture literature.
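The Stroop interference measure underlying this design is the mean reaction time in the incongruent condition minus the neutral condition, computed separately per distractor type; the full analysis would then feed these into a repeated-measures ANOVA. A stdlib sketch with invented reaction times (milliseconds), not the experiment's data:

```python
# Stroop interference per distractor: mean incongruent RT - mean neutral RT.
from statistics import mean

def stroop_interference(rts):
    """rts: dict mapping (distractor, condition) -> list of RTs in ms.
    Returns the interference effect (ms) for each distractor type."""
    distractors = {d for d, _ in rts}
    return {d: mean(rts[(d, "incongruent")]) - mean(rts[(d, "neutral")])
            for d in distractors}

rts = {
    ("human_face", "incongruent"): [690, 710, 700],
    ("human_face", "neutral"):     [620, 640, 630],
    ("flower", "incongruent"):     [660, 670, 650],
    ("flower", "neutral"):         [630, 640, 620],
}
effects = stroop_interference(rts)
print(effects["human_face"], effects["flower"])  # 70 30
```

A larger interference value for human-face distractors than for the flower control would correspond to the distractor-dependent enhancement reported in the first experiment.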

