social robot
Recently Published Documents

TOTAL DOCUMENTS: 642 (last five years: 340)
H-INDEX: 29 (last five years: 7)

2022 · Vol 11 (1) · pp. 1-39
Author(s): Minja Axelsson, Raquel Oliveira, Mattia Racca, Ville Kyrki

Design teams of social robots are often multidisciplinary, because developing such complex technology requires broad knowledge from different scientific domains. However, tools to facilitate multidisciplinary collaboration are scarce. We introduce a framework for the participatory design of social robots and a corresponding canvas tool. The canvases can be applied in different parts of the design process to facilitate collaboration between experts from different fields, as well as to incorporate prospective users of the robot into the design process. We investigate the usability of the proposed canvases with two social robot design case studies: a robot that played games online with teenage users and a librarian robot that guided users at a public library. We observe through participants’ feedback that the canvases have the advantages of (1) giving the design structure, clarity, and a clear process; (2) encouraging designers and users to share their viewpoints and progress toward a shared one; and (3) providing an educational and enjoyable design experience for the teams.


2022 · Vol 127 · pp. 107041
Author(s): Xin Qin, Chen Chen, Kai Chi Yam, Limei Cao, Wanlu Li, ...

Author(s): Aike C. Horstmann, Nicole C. Krämer

Since social robots are rapidly advancing and thus increasingly entering people’s everyday environments, interactions with robots are also progressing. For these interactions to be designed and executed successfully, this study draws on insights from attribution theory to explore the circumstances under which people attribute responsibility for a robot’s actions to the robot itself. In an experimental online study with a 2 × 2 × 2 between-subjects design (N = 394), participants read a vignette describing the social robot Pepper either as an assistant or as a competitor, and were told that its feedback during a subsequently executed quiz, which was either positive or negative, was generated autonomously by the robot or pre-programmed by programmers. Results showed that feedback believed to be autonomous leads to more agency, responsibility, and competence being attributed to the robot than feedback believed to be pre-programmed. Moreover, the more agency is ascribed to the robot, the better the evaluation of its sociability and of the interaction with it. However, only the valence of the feedback affects the evaluation of the robot’s sociability and the interaction with it directly, which points to the occurrence of a fundamental attribution error.
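As a rough illustration of how such a 2 × 2 × 2 between-subjects design could be analysed, the sketch below fits a three-way factorial ANOVA followed by a simple regression of sociability on ascribed agency. The file name and column names (role, valence, autonomy, agency, sociability) are hypothetical placeholders, not the authors’ analysis script.

```python
# Minimal sketch of analysing a 2 x 2 x 2 between-subjects design like the
# one described above. The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("vignette_study.csv")  # one row per participant

# Three-way factorial ANOVA: robot role x feedback valence x believed autonomy
model = ols("agency ~ C(role) * C(valence) * C(autonomy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Follow-up regression: does ascribed agency predict evaluated sociability?
print(ols("sociability ~ agency", data=df).fit().summary())
```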


Sensors · 2022 · Vol 22 (2) · pp. 621
Author(s): Chris Lytridis, Vassilis G. Kaburlasos, Christos Bazinas, George A. Papakostas, George Sidiropoulos, ...

Recent years have witnessed the proliferation of social robots in various domains including special education. However, specialized tools to assess their effect on human behavior, as well as to holistically design social robot applications, are often missing. In response, this work presents novel tools for analysis of human behavior data regarding robot-assisted special education. The objectives include, first, an understanding of human behavior in response to an array of robot actions and, second, an improved intervention design based on suitable mathematical instruments. To achieve these objectives, Lattice Computing (LC) models in conjunction with machine learning techniques have been employed to construct a representation of a child’s behavioral state. Using data collected during real-world robot-assisted interventions with children diagnosed with Autism Spectrum Disorder (ASD) and the aforementioned behavioral state representation, time series of behavioral states were constructed. The paper then investigates the causal relationship between specific robot actions and the observed child behavioral states in order to determine how the different interaction modalities of the social robot affected the child’s behavior.
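The paper’s Lattice Computing machinery is specific to its data, but the final step, relating a robot-action signal to a behavioural-state time series, can be illustrated with a generic time-series causality check. The sketch below uses a Granger-causality test as a stand-in, which is not necessarily the instrument the authors used, and assumes hypothetical column names (robot_action, engagement).

```python
# A minimal sketch, not the authors' method: test whether a robot-action
# signal helps predict a child's behavioural-state signal over time.
# The CSV layout and column names are assumptions for illustration.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

ts = pd.read_csv("session_timeseries.csv")  # one row per time step of a session

# grangercausalitytests expects a two-column array and tests whether the
# second column Granger-causes the first; the result is a dict keyed by lag.
results = grangercausalitytests(ts[["engagement", "robot_action"]].values, maxlag=4)
```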


2022 · Vol 8
Author(s): Anastasia K. Ostrowski, Jenny Fu, Vasiliki Zygouras, Hae Won Park, Cynthia Breazeal

As voice-user interfaces (VUIs), such as smart speakers like Amazon Alexa or social robots like Jibo, enter multi-user environments like our homes, it is critical to understand how group members perceive and interact with these devices. VUIs engage socially with users, leveraging multi-modal cues including speech, graphics, expressive sounds, and movement. The combination of these cues can affect how users perceive and interact with these devices. Through a set of three elicitation studies, we explore family interactions (N = 34 families, 92 participants, ages 4–69) with three commercially available VUIs with varying levels of social embodiment. The motivation for these studies arose when researchers noticed that families interacted differently with the three agents while familiarizing themselves with them; we therefore investigated this trend further in three subsequent studies designed as a conceptual replication. Each study included three activities to examine participants’ interactions with and perceptions of the three VUIs: an agent exploration activity, a perceived personality activity, and a user experience ranking activity. Consistently across the studies, participants interacted significantly more with the agent with a higher degree of social embodiment, i.e., a social robot such as Jibo, and perceived that agent as more trustworthy, more emotionally engaging, and offering more companionship. There were some nuances in interaction and perception across brands and types of smart speakers, i.e., Google Home versus Amazon Echo, or Amazon Show versus Amazon Echo Spot, between the studies. In the last study, a behavioral analysis of interactions between family members and with the VUIs revealed that participants interacted more with the social robot and also interacted more with their family members around those interactions. This paper explores these findings and elaborates on how they can direct future VUI development for group settings, especially familial ones.


2022 · Vol 132 · pp. 01017
Author(s): Sangjip Ha, Eun-ju Yi, In-jin Yoo, Do-Hyung Park

This study uses eye tracking to examine the appearance of a robot, one of the current themes in social robot design research. We propose a research model covering the full path from consumers’ gaze responses to their perceived beliefs and, further, their attitudes toward social robots. Specifically, the eye-tracking indicators used in this study are Fixation, First Visit, Total Viewed Stay Time, and Number of Revisits, and the Areas of Interest are the face, eyes, lips, and full body of a social robot. In the first relationship, we examine which elements of the social robot’s design the consumer’s gaze dwells on, and how the gaze on each element affects consumer beliefs; the beliefs considered are the social robot’s emotional expression, humanness, and facial prominence. Second, we explore whether consumer attitudes can form through two major channels: one path in which the beliefs formed through gaze influence attitude, and another in which the gaze response directly influences attitude. The study makes a theoretical contribution by analysing the path of consumer attitude formation from multiple angles, linking gaze-tracking responses to consumer perception. It is also expected to make a practical contribution by suggesting specific design insights that can serve as a reference for designing social robots.
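For readers unfamiliar with the four gaze indicators, the sketch below shows how they could be computed per Area of Interest from fixation-level eye-tracking exports. The input columns (aoi, start_ms, duration_ms) are assumptions for illustration, not the authors’ data format.

```python
# A minimal sketch of deriving the four gaze indicators used in the study
# (fixation count, first visit time, total dwell time, number of revisits)
# per Area of Interest. Input columns are assumed, not the authors' format.
import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values("start_ms")

# A "visit" is a run of consecutive fixations on the same AOI.
fix["visit_id"] = (fix["aoi"] != fix["aoi"].shift()).cumsum()

per_aoi = fix.groupby("aoi").agg(
    fixation_count=("aoi", "size"),
    first_visit_ms=("start_ms", "min"),
    total_dwell_ms=("duration_ms", "sum"),
)
per_aoi["revisits"] = fix.groupby("aoi")["visit_id"].nunique() - 1
print(per_aoi)  # one row each for face, eyes, lips, full body
```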


2022 · Vol 64 · pp. 102813
Author(s): Áurea Subero-Navarro, Jorge Pelegrín-Borondo, Eva Reinares-Lara, Cristina Olarte-Pascual

Symmetry · 2021 · Vol 14 (1) · pp. 30
Author(s): Qinglang Guo, Haiyong Xie, Yangyang Li, Wen Ma, Chao Zhang

Online social media ecosystems are increasingly polluted by fake information and by fake content posted by malicious users, and this has caused considerable harm. Social robot (bot) detection typically relies on supervised classification over hand-crafted features; however, such methods raise user-privacy concerns and ignore hidden feature information, and semi-supervised algorithms and graph features remain under-used. In this work, we symmetrically combine BERT and a Graph Convolutional Network (GCN) and propose BGSRD, a novel model that combines large-scale pre-training with transductive learning for social robot detection. BGSRD constructs a heterogeneous graph over the dataset and represents Twitter data as nodes using BERT representations. The text graph convolutional network learns over a single corpus-level text graph, built mainly from word co-occurrence and document-word relationships. The BERT and GCN modules can be trained jointly in BGSRD to get the best of both: labeled training data and unlabeled test data spread label influence through graph convolution, while the model benefits from large-scale pre-training on massive raw data and transductive learning of a joint representation. Experiments show that BGSRD performs well across a wide range of social robot detection datasets.
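The abstract’s core idea, interpolating a BERT classification head with a GCN head that operates on BERT node features, can be sketched as follows. This is a simplified stand-in rather than the published BGSRD implementation: in BGSRD the graph is the heterogeneous word-document graph described above, whereas here a generic dense adjacency matrix, a two-layer GCN, and the interpolation weight lam are assumptions for illustration.

```python
# A simplified stand-in for the BERT + GCN combination described above, not
# the published BGSRD code. The dense adjacency matrix, the two GCN layers,
# and the interpolation weight `lam` are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel


class DenseGCNLayer(nn.Module):
    """One graph convolution: H' = D^-1/2 (A + I) D^-1/2 H W."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return self.linear(a_norm @ h)


class BertGcnDetector(nn.Module):
    """Scores each node (account/document) with a BERT head and a GCN head."""

    def __init__(self, model_name="bert-base-uncased", hidden=256, lam=0.7):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        dim = self.bert.config.hidden_size
        self.bert_head = nn.Linear(dim, 2)   # bot vs. human logits from BERT alone
        self.gcn1 = DenseGCNLayer(dim, hidden)
        self.gcn2 = DenseGCNLayer(hidden, 2)
        self.lam = lam                       # weight of the GCN head vs. the BERT head

    def forward(self, input_ids, attention_mask, adj):
        # BERT [CLS] vectors serve as node features for the graph convolution.
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        gcn_logits = self.gcn2(torch.relu(self.gcn1(cls, adj)), adj)
        return self.lam * gcn_logits + (1 - self.lam) * self.bert_head(cls)
```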

