human partner
Recently Published Documents


TOTAL DOCUMENTS: 61 (five years: 27)
H-INDEX: 12 (five years: 1)

PLoS ONE, 2021, Vol 16 (12), pp. e0261790
Author(s): Giulia Cimarelli, Julia Schindlbauer, Teresa Pegger, Verena Wesian, Zsófia Virányi

Domestic dogs display behavioural patterns towards their owners that fulfil the four criteria of attachment. As such, they use their owners as a secure base, exploring the environment and manipulating objects more when accompanied by their owners than when alone. Although there are some indications that owners serve as a better secure base than other human beings, the evidence regarding a strong owner-stranger differentiation in an object-manipulation context is not straightforward. In the present study, we conducted two experiments in which pet dogs were tested in an object-manipulation task in the presence of the owner and of a stranger, varying how the human partner behaved (i.e. remaining silent or encouraging the dog; Experiment 1), and when alone (Experiment 2). Further, to gain better insight into the mechanisms behind a potential owner-stranger differentiation, we investigated the effect of dogs' previous life history (i.e. having lived in a shelter or having lived in the same household since puppyhood). Overall, we found that strangers do not provide a secure base effect and that former shelter dogs show a stronger owner-stranger differentiation than other family dogs. As former shelter dogs show more behavioural signs correlated with anxiety towards the novel environment and the stranger, we conclude that having been re-homed does not necessarily affect the likelihood of forming a secure bond with the new owner, but might have an impact on how dogs interact with novel stimuli, including unfamiliar humans. These results confirm the owner's unique role in providing security to their dogs and have practical implications for bond formation in pet dogs with a past in a shelter.


2021
Author(s): Kyveli Kompatsiari, Francesco Bossi, Agnieszka Wykowska

Eye contact established by a human partner has been shown to affect various cognitive processes of the receiver. However, little is known about humans' responses to eye contact established by a humanoid robot. Here, we aimed at examining humans' oscillatory brain response to eye contact with a humanoid robot. Eye contact (or lack thereof) was embedded in a gaze cueing task and preceded the phase of gaze-related attentional orienting. In addition to examining the effect of eye contact on the recipient, we also tested its impact on gaze cueing effects. Results showed that participants rated eye contact as more engaging and responded with higher desynchronization of alpha-band activity in left fronto-central and central electrode clusters when the robot established eye contact with them, compared to the no-eye-contact condition. However, eye contact did not modulate gaze cueing effects. The results are interpreted in terms of the functional roles attributed to central alpha rhythms (potentially also interpretable as mu rhythm), including joint attention and engagement in social interaction.
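The alpha-band desynchronization reported here is conventionally quantified as event-related desynchronization (ERD): the percentage change of band power in a post-stimulus window relative to a pre-stimulus baseline, with negative values indicating desynchronization. The abstract does not specify the authors' analysis pipeline; the following is a minimal illustrative sketch in plain Python, using a naive DFT band-power estimate (real EEG analyses use dedicated toolboxes), with all signals, sampling rate, and amplitudes invented for the example.

```python
import math

def band_power(signal, fs, f_lo=8.0, f_hi=12.0):
    """Naive DFT estimate of mean power in the [f_lo, f_hi] Hz band."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / count if count else 0.0

def erd_percent(baseline, event, fs):
    """ERD% = (event - baseline) / baseline * 100; negative => desynchronization."""
    p_base = band_power(baseline, fs)
    p_event = band_power(event, fs)
    return (p_event - p_base) / p_base * 100.0

# Toy example: a 10 Hz oscillation whose amplitude halves after stimulus onset,
# so band power drops to a quarter and ERD is about -75%.
fs = 250
t = [i / fs for i in range(fs)]  # two 1-second windows
baseline = [math.sin(2 * math.pi * 10 * x) for x in t]
event = [0.5 * math.sin(2 * math.pi * 10 * x) for x in t]
print(round(erd_percent(baseline, event, fs), 1))
```

Because power scales with the square of amplitude, halving the oscillation amplitude yields an ERD of roughly -75%, which is the sign pattern the study reports for the eye-contact condition.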


2021, Vol 15
Author(s): Omar Eldardeer, Jonas Gonzalez-Billandon, Lukas Grasse, Matthew Tata, Francesco Rea

One of the fundamental prerequisites for effective collaboration between interactive partners is the mutual sharing of attentional focus on the same perceptual events, referred to as joint attention. Its defining elements have been widely pinpointed in the psychological, cognitive, and social sciences, and the field of human-robot interaction has likewise identified joint attention as a fundamental prerequisite for proficient human-robot collaboration. However, joint attention between robots and human partners is often encoded in pre-fixed robot behaviours that do not fully address the dynamics of interactive scenarios. We provide autonomous attentional behaviour for robots, based on multi-sensory perception, that robustly relocates the focus of attention onto the same targets the human partner attends to. Further, we investigated how such joint attention between a human and a robot partner improved with a new biologically inspired, memory-based attention component. We assessed the model with the humanoid robot iCub performing a joint task with a human partner in a real-world unstructured scenario. The model showed robust performance in capturing the stimulation, making a localisation decision in the right time frame, and then executing the right action. We then compared the attention performance of the robot against human performance when stimulated from the same source across different modalities (audio-visual and audio-only). The comparison showed that the model behaves with temporal dynamics compatible with those of humans, providing an effective solution for memory-based joint attention in real-world unstructured environments. Further, we analyzed the localisation performance (reaction time and accuracy): the robot performed better in the audio-visual condition than in the audio-only condition. Its performance in the audio-visual condition was broadly comparable with that of the human participants, whereas it was less efficient in audio-only localisation. After a detailed analysis of the internal components of the architecture, we conclude that the differences in performance are due to ego-noise, which significantly affects audio-only localisation performance.
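The abstract does not detail how the iCub model combines modalities, but one standard way to see why an audio-visual condition should outperform audio alone is precision-weighted cue fusion: each modality contributes a direction estimate weighted by its reliability, so adding a precise visual cue to a noisy, ego-noise-degraded audio cue shrinks the fused uncertainty. The sketch below is purely illustrative; the angles and noise levels are invented and are not taken from the authors' architecture.

```python
def fuse_estimates(estimates):
    """Precision-weighted fusion of (angle_deg, std_deg) cues.

    Each cue is weighted by its inverse variance; the fused standard
    deviation is never larger than the best single cue's.
    """
    weights = [1.0 / (s * s) for _, s in estimates]
    total = sum(weights)
    angle = sum(w * a for w, (a, _) in zip(weights, estimates)) / total
    std = (1.0 / total) ** 0.5
    return angle, std

audio = (32.0, 15.0)   # sound-source angle: noisy (e.g. degraded by ego-noise)
vision = (28.0, 3.0)   # visual angle: much more precise

av_angle, av_std = fuse_estimates([audio, vision])   # audio-visual condition
a_angle, a_std = fuse_estimates([audio])             # audio-only condition
print(round(av_angle, 1), round(av_std, 2))
print(round(a_angle, 1), round(a_std, 2))
```

Under this toy model the audio-visual estimate sits close to the reliable visual cue with a much smaller uncertainty than the audio-only estimate, mirroring the qualitative accuracy gap the study reports between the two conditions.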


Author(s): Dario Pasquali, Jonas Gonzalez-Billandon, Alexander Mois Aroyo, Giulio Sandini, Alessandra Sciutti, ...

Robots destined for tasks such as teaching or caregiving have to build a long-lasting social rapport with their human partners. This requires, on the robot's side, the capability to assess whether the partner is trustworthy: a robot should be able to tell whether someone is lying, while preserving the pleasantness of the social interaction. We present an approach to promptly detect lies based on pupil dilation, an intrinsic marker of the lie-associated cognitive load, that can be applied in an ecological human-robot interaction autonomously led by a robot. We demonstrated the validity of the approach in an experiment in which the iCub humanoid robot engages the human partner by playing the role of a magician in a card game and detects the partner's deceptive behavior in real time. On top of that, we show how the robot can leverage the knowledge gained about each partner's deceptive behavior to better detect that individual's subsequent lies. We also explore whether machine learning models could improve lie-detection performance both for known individuals over multiple interactions with the same partner (within-participants) and for novel partners (between-participants). The proposed setup, interaction, and models enable iCub to understand when its partners are lying, which is a fundamental skill for evaluating their trustworthiness and hence improving social human-robot interaction.
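The abstract names pupil dilation as the marker of lie-associated cognitive load but does not publish the detection rule. A minimal illustrative baseline, assuming per-subject truthful trials are available for calibration (in the spirit of the per-partner adaptation the authors describe), is a z-score threshold on trial-mean pupil diameter. Everything below, including the traces and the threshold of 2 standard deviations, is hypothetical and not the authors' pipeline.

```python
from statistics import mean, stdev

def detect_lie(trial_pupil, baseline_trials, z_thresh=2.0):
    """Flag a trial as deceptive when its mean pupil diameter exceeds the
    subject's truthful baseline by more than z_thresh standard deviations
    (pupil dilation as a proxy for lie-associated cognitive load)."""
    base_means = [mean(t) for t in baseline_trials]
    mu, sigma = mean(base_means), stdev(base_means)
    z = (mean(trial_pupil) - mu) / sigma
    return z > z_thresh, z

# Hypothetical pupil-diameter traces (mm) from one subject's truthful trials
truthful = [[3.1, 3.0, 3.2], [3.0, 3.1, 3.0], [3.2, 3.1, 3.1], [3.0, 3.0, 3.1]]

is_lie, z = detect_lie([3.9, 4.0, 3.8], truthful)  # markedly dilated trial
print(is_lie)
```

Recalibrating `truthful` per partner is one simple way to realize the within-participants adaptation mentioned in the abstract: the more truthful trials the robot has seen from an individual, the tighter that individual's baseline becomes.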


PLoS ONE, 2021, Vol 16 (7), pp. e0253277
Author(s): Jim McGetrick, Lisa Poncet, Marietta Amann, Johannes Schullern-Schrattenhofen, Leona Fux, ...

Domestic dogs have been shown to reciprocate help received from conspecifics in food-giving tasks. However, it is not yet known whether dogs also reciprocate help received from humans. Here, we investigated whether dogs reciprocate the receipt of food from humans. In an experience phase, subjects encountered a helpful human who provided them with food by activating a food dispenser, and an unhelpful human who did not provide them with food. Subjects later had the opportunity to return food to each human type, in a test phase, via the same mechanism. In addition, a free interaction session was conducted in which the subject was free to interact with its owner and with whichever human partner it had encountered on that day. Two studies were carried out, which differed in the complexity of the experience phase and the time lag between the experience phase and test phase. Subjects did not reciprocate the receipt of food in either study. Furthermore, no difference was observed in the duration subjects spent in proximity to, or the latency to approach, the two human partners. Although our results suggest that dogs do not reciprocate help received from humans, they also suggest that the dogs did not recognize the cooperative or uncooperative act of the humans during the experience phase. It is plausible that aspects of the experimental design hindered the emergence of any potential reciprocity. However, it is also possible that dogs are simply not prosocial towards humans in food-giving contexts.


2021, Vol 11 (1)
Author(s): Joshua Zonca, Anna Folsø, Alessandra Sciutti

Indirect reciprocity is a pervasive social norm that promotes human cooperation: helping someone establishes a good reputation, increasing the probability of receiving help from others. Here we hypothesize that indirect reciprocity regulates not only cooperative behavior but also the exchange of opinions within a social group. In a novel interactive perceptual task (Experiment 1), we show that participants relied more on the judgments of an alleged human partner when a second alleged peer had been endorsing the participants' own opinions. In doing so, participants did not take into account the reliability of their partners' judgments and did not maximize behavioral accuracy or monetary reward. This effect declined when participants did not expect future interactions with their partners, suggesting the emergence of downstream mechanisms of reciprocity linked to the management of reputation. Importantly, all these effects disappeared when participants knew that the partners' responses were computer-generated (Experiment 2). Our results suggest that, within a social group, individuals may weigh others' opinions through indirect reciprocity, highlighting the emergence of normative distortions in the process of information transmission among humans.


2021, Vol 5
Author(s): Nils F. Tolksdorf, Camilla E. Crawshaw, Katharina J. Rohlfing

Social robots have emerged as a new digital technology that is increasingly being implemented in the educational landscape. While social robots could be deployed to assist young children with their learning in a variety of different ways, the typical approach in educational practices is to supplement the learning process rather than to replace the human caregiver, e.g., the teacher, parent, educator or therapist. When functioning in the role of an educational assistant, social robots will likely constitute a part of a triadic interaction with the child and the human caregiver. Surprisingly, there is little research that systematically investigates the role of the caregiver by examining the ways in which children involve or check in with them during their interaction with another partner—a phenomenon that is known as social referencing. In the present study, we investigated social referencing in the context of a dyadic child–robot interaction. Over the course of four sessions within our longitudinal language-learning study, we observed how 20 pre-school children aged 4–5 years checked in with their accompanying caregivers who were not actively involved in the language-learning procedure. The children participating in the study were randomly assigned to either an interaction with a social robot or a human partner. Our results revealed that all children across both conditions utilized social referencing behaviors to address their caregiver. However, we found that the children who interacted with the social robot did so significantly more frequently in each of the four sessions than those who interacted with the human partner. Further analyses showed that no significant change in their behavior over the course of the sessions could be observed. Findings are discussed with regard to the caregiver's role during children's interactions with social robots and the implications for future interaction design.


2020, Vol 4 (Supplement_1), pp. 929-929
Author(s): Anna Ueda, Hideyuki Takahashi

This study explores the effectiveness and efficacy of using robots in clinical settings to facilitate Life Review, a process in which subjects retrospectively analyze major life events with a conversation partner in order to find meaning and synthesize a narrative. In this experiment, Life Review was conducted with five elderly subjects and two types of partner: a human and a robot. The partners used a set of trigger questions to review past events with their subjects. Two sequences of Life Review, each comprising four sessions, were completed: four sessions involved a human partner and four involved a robot partner. The recorded conversations were transcribed, and the participants' utterances with the two partners were compared and analyzed qualitatively. This preliminary study was the first attempt to explore the benefits of conducting Life Review with robotic conversation partners. The results showed distinct differences between the human and robotic partners. Specifically, in sessions with a human partner, subjects showed a stronger awareness of the generational gap between themselves and their partner than in sessions with the robotic partner, whereas sessions with a robotic partner included more universally transmissive values. The outcome suggests that Life Review with robots can potentially offer elderly patients greater safety and comfort in telling their unique life narratives. The use of robotic partners in Life Review opens a promising and novel research area for improving and re-imagining mental health access and outcomes for patients.

