social robots
Recently Published Documents


TOTAL DOCUMENTS: 923 (FIVE YEARS: 506)

H-INDEX: 34 (FIVE YEARS: 11)

2022, Vol 11 (1), pp. 1-39
Author(s): Minja Axelsson, Raquel Oliveira, Mattia Racca, Ville Kyrki

Design teams for social robots are often multidisciplinary, because developing such complex technology requires broad knowledge from different scientific domains. However, tools to facilitate multidisciplinary collaboration are scarce. We introduce a framework for the participatory design of social robots and a corresponding canvas tool. The canvases can be applied at different stages of the design process to facilitate collaboration between experts from different fields and to incorporate prospective users of the robot into the design process. We investigate the usability of the proposed canvases in two social robot design case studies: a robot that played games online with teenage users and a librarian robot that guided users at a public library. Participants’ feedback indicates that the canvases (1) provide structure, clarity, and a clear process for the design; (2) encourage designers and users to share their viewpoints and progress toward a shared one; and (3) provide an educational and enjoyable design experience for the teams.


Author(s): Aike C. Horstmann, Nicole C. Krämer

Since social robots are rapidly advancing and thus increasingly entering people’s everyday environments, interactions with robots are also progressing. For these interactions to be designed and executed successfully, this study draws on insights from attribution theory to explore the circumstances under which people attribute responsibility for a robot’s actions to the robot itself. In an experimental online study with a 2 × 2 × 2 between-subjects design (N = 394), participants read a vignette describing the social robot Pepper either as an assistant or as a competitor, and were told that the feedback it gave during a subsequent quiz, which was either positive or negative, was either generated autonomously by the robot or pre-programmed by its programmers. Results showed that feedback believed to be autonomous leads to more agency, responsibility, and competence being attributed to the robot than feedback believed to be pre-programmed. Moreover, the more agency is ascribed to the robot, the better the evaluation of its sociability and of the interaction with it. However, only the valence of the feedback affects the evaluation of the robot’s sociability and the interaction with it directly, which points to the occurrence of a fundamental attribution error.


Electronics, 2022, Vol 11 (2), pp. 212
Author(s): Fernando Alonso Martín, José Carlos Castillo, María Malfáz, Álvaro Castro-González

Social robots are intended to coexist with humans and engage in relationships that lead to a better quality of life [...]


2022, Vol 8
Author(s): Autumn Edwards, Chad Edwards

Increasingly, people interact with embodied machine communicators and are challenged to understand their natures and behaviors. The Fundamental Attribution Error (FAE, sometimes referred to as the correspondence bias) is the tendency for individuals to over-emphasize personality-based or dispositional explanations for other people’s behavior while under-emphasizing situational explanations. This effect has been thoroughly examined with humans, but do people make the same causal inferences when interpreting the actions of a robot? Compared to people, social robots are less autonomous and agentic because their behavior is wholly determined by humans in the loop, programming, and design choices. Nonetheless, people do assign robots agency, intentionality, personality, and blame. Results of an experiment showed that participants made correspondent inferences when evaluating both human and robot speakers, attributing their behavior to underlying attitudes even when it was clearly coerced. However, they committed a stronger correspondence bias in the case of the robot, an effect driven by the greater dispositional culpability assigned to robots committing unpopular behavior, and they were more confident in their attitudinal judgments of robots than of humans. Results demonstrated some differences in the global impressions of humans and robots based on behavior valence and choice. Judges formed more generous impressions of the robot agent when its unpopular behavior was coerced rather than chosen, a tendency not displayed when forming impressions of the human agent. Implications of attributing robot behavior to disposition, or conflating robot actors with their actions, are addressed.


2022, Vol 8
Author(s): Oliver Santiago Quick

This paper discusses the ethical nature of empathetic and sympathetic engagement with social robots, ultimately arguing that an entity engaged with through empathy or sympathy is engaged with as an “experiencing Other” and is as such due at least “minimal” moral consideration. Additionally, it is argued that extant HRI research often fails to recognize the complexity of empathy and sympathy, such that the two concepts are frequently treated as synonymous. The arguments for these claims proceed in two steps. First, it is argued that there are at least three understandings of empathy, such that particular care is needed when researching “empathy” in human-robot interactions. The phenomenological approach to empathy, perhaps the least utilized of the three discussed understandings, is the approach with the most direct implications for moral standing. Furthermore, because “empathy” and “sympathy” are often conflated, a novel account of sympathy that makes clear the difference between the two concepts is presented, and the importance of these distinctions is argued for. In the second step, the phenomenological insights presented earlier regarding the nature of empathy are applied to the problem of robot moral standing, to argue that empathetic and sympathetic engagement with an entity constitute an ethical engagement with it. The paper concludes by offering several potential research questions that result from the phenomenological analysis of empathy in human-robot interactions.


2022, Vol 132, pp. 01017
Author(s): Sangjip Ha, Eun-ju Yi, In-jin Yoo, Do-Hyung Park

This study applies eye tracking to the appearance of a robot, one of the trends in social robot design research. We propose a research model covering the entire path from consumers’ gaze responses to their perceived beliefs and, further, their attitudes toward social robots. Specifically, the eye-tracking indicators used in this study are Fixation, First Visit, Total Viewed Stay Time, and Number of Revisits, and the Areas of Interest are defined over the face, eyes, lips, and full body of a social robot. In the first relationship, we examine which elements of the social robot’s design the consumer’s gaze dwells on, and how the gaze on each element affects consumer beliefs. The consumer beliefs considered are the social robot’s emotional expression, humanness, and facial prominence. Second, we explore whether consumer attitudes can form through two major channels: one path in which the beliefs formed through gaze influence attitude, and another in which the gaze response directly influences attitude. This study makes a theoretical contribution by analyzing the path of consumer attitude formation from multiple angles, linking gaze-tracking responses with consumer perception. It is also expected to make a practical contribution by suggesting specific design insights that can serve as a reference for designing social robots.
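The four gaze indicators named above (Fixation, First Visit, Total Viewed Stay Time, Number of Revisits) can all be derived from a time-ordered fixation log. The sketch below is a minimal illustration of that derivation, not the study’s actual analysis pipeline; the record format, field names, and AOI labels are assumptions for the example.

```python
from collections import defaultdict

def aoi_metrics(fixations):
    """Compute common Area-of-Interest gaze indicators from a fixation log.

    `fixations` is a time-ordered list of (timestamp_ms, duration_ms, aoi)
    tuples, where `aoi` is a label such as "face", "eyes", "lips", or
    "full_body". Returns, per AOI: fixation count, first-visit time,
    total dwell (stay) time, and number of revisits (returns to an AOI
    after looking elsewhere).
    """
    metrics = defaultdict(lambda: {"fixations": 0, "first_visit_ms": None,
                                   "dwell_ms": 0, "revisits": 0})
    prev_aoi = None
    for t, dur, aoi in fixations:
        m = metrics[aoi]
        m["fixations"] += 1
        m["dwell_ms"] += dur
        if m["first_visit_ms"] is None:
            m["first_visit_ms"] = t          # first visit to this AOI
        elif aoi != prev_aoi:
            m["revisits"] += 1               # returned after leaving the AOI
        prev_aoi = aoi
    return dict(metrics)
```

For example, a log in which the gaze lands on the face, moves to the eyes, and returns to the face yields two fixations, one revisit, and the summed dwell time for the "face" AOI.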


2022, pp. 800-820
Author(s): Vassilis G. Kaburlasos, Eleni Vrochidou

The use of robots as educational learning tools is quite extensive worldwide, yet it remains limited in special education. In particular, the use of robots in special education is viewed with skepticism, since robots are frequently believed to be expensive and of limited capability. This may change with the advent of social robots, which can serve in special education as affordable tools for delivering sophisticated stimuli to children with learning difficulties, including those arising from preexisting conditions. Pilot studies occasionally demonstrate the effectiveness of social robots in specific domains. This chapter overviews the engagement of social robots in special education, including the authors' preliminary work in this field; moreover, it discusses their proposal for potential future extensions involving more autonomous (i.e., intelligent) social robots as well as feedback from human brain signals.


2022
Author(s): Sarajane Marques Peres, Shih-Chia Huang, Patrick Hung

2021
Author(s): Sunil Srivatsav Samsani

The evolution of social robots has accelerated with the advent of recent artificial intelligence techniques. Alongside humans, social robots play active roles in various household and industrial applications. However, the safety of humans becomes a significant concern when robots navigate in a complex and crowded environment. In the literature, the safety of humans in relation to social robots has been addressed by various methods; however, most of these methods compromise the time efficiency of the robot. For robots, safety and time-efficiency are two contrasting objectives, where one tends to dominate the other. To strike a balance between them, a multi-reward formulation in the reinforcement learning framework is proposed, which improves both the safety and the time-efficiency of the robot. The multi-reward formulation includes both positive and negative rewards, which encourage and punish the robot, respectively. The proposed reward formulation is tested on state-of-the-art methods of multi-agent navigation. In addition, an ablation study is performed to evaluate the importance of the individual rewards. Experimental results indicate that the proposed approach balances the safety and the time-efficiency of the robot while navigating in a crowded environment.
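A multi-reward formulation of this kind combines a positive term that rewards progress toward the goal with negative terms that punish proximity to humans and elapsed time. The sketch below is a minimal illustration of that structure under assumed names, weights, and thresholds; it is not the paper’s actual reward function.

```python
import math

def multi_reward(robot_pos, goal_pos, prev_goal_dist, human_positions,
                 collision_radius=0.3, discomfort_radius=0.5,
                 w_progress=1.0, w_discomfort=0.5, time_penalty=0.01):
    """Toy per-step multi-reward balancing safety and time-efficiency.

    Returns (reward, done). The positive term encourages progress toward
    the goal; the negative terms punish elapsed time, close proximity to
    humans, and collisions (which also terminate the episode).
    """
    goal_dist = math.dist(robot_pos, goal_pos)
    # Positive reward: distance closed toward the goal this step.
    reward = w_progress * (prev_goal_dist - goal_dist)
    # Negative reward: a small time penalty pushes for efficiency.
    reward -= time_penalty
    for h in human_positions:
        d = math.dist(robot_pos, h)
        if d < collision_radius:
            # Hard safety violation: large penalty, end the episode.
            return reward - 1.0, True
        if d < discomfort_radius:
            # Soft safety margin: penalty grows as the robot gets closer.
            reward -= w_discomfort * (discomfort_radius - d)
    return reward, False
```

Tuning the weights shifts the balance: raising `w_discomfort` or `collision_radius` favors safety, while raising `w_progress` or `time_penalty` favors time-efficiency, which is the trade-off the ablation study in the abstract probes reward by reward.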



