Socializing Sensorimotor Contingencies

2021 · Vol 15
Author(s):  
Annika Lübbert ◽  
Florian Göschl ◽  
Hanna Krause ◽  
Till R. Schneider ◽  
Alexander Maye ◽  
...  

The aim of this review is to highlight the idea of grounding social cognition in sensorimotor interactions shared across agents. We discuss an action-oriented account that emerges from a broader interpretation of the concept of sensorimotor contingencies. We suggest that dynamic informational and sensorimotor coupling across agents can mediate the deployment of action-effect contingencies in social contexts. We propose this concept of socializing sensorimotor contingencies (socSMCs) as a shared framework of analysis for processes within and across brains and bodies, and their physical and social environments. In doing so, we integrate insights from different fields, including neuroscience, psychology, and research on human–robot interaction. We review studies on dynamic embodied interaction and highlight empirical findings that suggest an important role of sensorimotor and informational entrainment in social contexts. Furthermore, we discuss links to closely related concepts, such as enactivism and models of coordination dynamics, among others, and clarify differences from approaches that focus on mentalizing and high-level cognitive representations. Moreover, we consider conceptual implications of rethinking cognition as social sensorimotor coupling. The insight that social cognitive phenomena like joint attention, mutual trust, or empathy rely heavily on the informational and sensorimotor coupling between agents may provide novel remedies for people with disturbed social cognition and for situations of disturbed social interaction. Furthermore, our proposal has potential applications in the field of human–robot interaction, where socSMCs principles might lead to more natural and intuitive interfaces for human users.
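The entrainment phenomena highlighted here are often idealized in coordination-dynamics research as coupled phase oscillators. The sketch below is our own illustration of that idea, not a model from the review; the frequencies, coupling gain, and function names are all assumed for the example.

```python
# Illustrative sketch only: two mutually coupled Kuramoto phase oscillators,
# a standard idealization in coordination dynamics, showing how coupling
# between two agents' movement rhythms can produce entrainment.
import numpy as np

def simulate_entrainment(w1=1.0, w2=1.3, k=0.8, dt=0.01, steps=5000):
    """Euler-integrate two coupled phase oscillators; return relative phase."""
    theta1, theta2 = 0.0, np.pi / 2          # arbitrary initial phases
    rel_phase = np.empty(steps)
    for t in range(steps):
        # Each agent keeps its own preferred frequency (w1, w2) but is
        # pulled toward the other's phase; k scales the coupling strength.
        d1 = w1 + k * np.sin(theta2 - theta1)
        d2 = w2 + k * np.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
        rel_phase[t] = np.angle(np.exp(1j * (theta2 - theta1)))  # wrap to (-pi, pi]
    return rel_phase

rel = simulate_entrainment()
# With coupling (k) exceeding half the frequency detuning, the relative
# phase settles to a near-constant value (phase locking); with k = 0 it drifts.
print(f"relative-phase std over last 1000 steps: {rel[-1000:].std():.4f}")
```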

Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

Abstract: For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and are equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition, we find that further factors like gaze, voice, facial expressions, etc. can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.
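One common way to quantify the synchronisation such studies report is to locate the peak of the cross-correlation between the two partners' movement signals. The snippet below is a minimal sketch of that analysis on synthetic data; the signal names, sampling rate, and delay are our assumptions, not values from the reviewed work.

```python
# Minimal sketch: estimate how far the robot's motion trails the human's
# by locating the peak of the normalized cross-correlation (synthetic data).
import numpy as np

def robot_delay(human_vel, robot_vel, dt):
    """Return the lag (s) at which the robot's signal best matches the
    human's; positive means the robot trails the human."""
    h = (human_vel - human_vel.mean()) / human_vel.std()
    r = (robot_vel - robot_vel.mean()) / robot_vel.std()
    xcorr = np.correlate(r, h, mode="full")     # c[k] = sum_n r[n+k] * h[n]
    lags = np.arange(-len(h) + 1, len(h))
    return lags[np.argmax(xcorr)] * dt

# Synthetic example: a ~2 Hz shaking motion that the robot tracks 50 ms late.
dt = 0.01
t = np.arange(0, 3, dt)
human = np.sin(2 * np.pi * 2 * t)
robot = np.sin(2 * np.pi * 2 * (t - 0.05))
print(f"estimated delay: {robot_delay(human, robot, dt):.2f} s")  # ~0.05
```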


2020 · Vol 43 (6) · pp. 373-384
Author(s):  
Anna Henschel ◽  
Ruud Hortensius ◽  
Emily S. Cross

Complexity · 2019 · Vol 2019 · pp. 1-16
Author(s):  
Maurice Lamb ◽  
Patrick Nalepka ◽  
Rachel W. Kallen ◽  
Tamara Lorenz ◽  
Steven J. Harrison ◽  
...  

Interactive or collaborative pick-and-place tasks occur during all kinds of daily activities, for example, when two or more individuals pass plates, glasses, and utensils back and forth between each other when setting a dinner table or loading a dishwasher together. In the near future, participation in these collaborative pick-and-place tasks could also include robotic assistants. However, for human-machine and human-robot interactions, interactive pick-and-place tasks present a unique set of challenges. A key challenge is that high-level task-representational algorithms and preplanned action or motor programs quickly become intractable, even for simple interaction scenarios. Here we address this challenge by introducing a bioinspired behavioral dynamic model of free-flowing cooperative pick-and-place behaviors based on low-dimensional dynamical movement primitives and nonlinear action selection functions. Further, we demonstrate that this model can be successfully implemented as an artificial-agent control architecture to produce effective and robust human-like behavior during human-agent interactions. Participants were unable to explicitly detect whether they were working with an artificial (model-controlled) agent or another human co-actor, further illustrating the potential effectiveness of the proposed modeling approach for developing systems of robust real/embodied human-robot interaction more generally.
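To make the two ingredients of such a model concrete, here is a minimal one-dimensional sketch under our own simplifying assumptions: a damped point-attractor movement primitive that draws the hand toward the current goal, plus a simple nonlinear action-selection rule that switches the goal between the pick-up and put-down locations. All names, gains, and locations are illustrative, not the authors' parameters.

```python
# Minimal 1-D sketch of the modeling idea: a low-dimensional movement
# primitive (critically damped point attractor) plus a nonlinear
# action-selection function that toggles between pick and place goals.
B, K = 12.0, 36.0        # damping/stiffness; B**2 == 4*K -> critically damped
PICK, PLACE = 0.0, 1.0   # positions of the pick-up and put-down locations
REACH_EPS = 0.02         # distance at which a location counts as reached

def primitive_step(x, v, goal, dt=0.01):
    """One Euler step of the movement primitive x'' = -B*x' - K*(x - goal)."""
    a = -B * v - K * (x - goal)
    return x + v * dt, v + a * dt

def select_action(x, carrying):
    """Head to PICK when empty-handed, PLACE when carrying an object;
    toggle the carrying state whenever the current target is reached."""
    target = PLACE if carrying else PICK
    if abs(x - target) < REACH_EPS:
        carrying = not carrying      # grasp or release, then switch goals
    return target, carrying

x, v, carrying = 0.5, 0.0, False
trace = []
for _ in range(3000):                # 30 s of simulated time
    goal, carrying = select_action(x, carrying)
    x, v = primitive_step(x, v, goal)
    trace.append(x)
# The simulated hand now cycles between PICK and PLACE in smooth reaches.
```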


2021
Author(s):  
Stefano Dalla Gasperina ◽  
Valeria Longatelli ◽  
Francesco Braghin ◽  
Alessandra Laura Giulia Pedrocchi ◽  
Marta Gandolla

Abstract

Background: Appropriate training modalities for post-stroke upper-limb rehabilitation are key features for effective recovery after the acute event. This work presents a novel human-robot cooperative control framework that promotes compliant motion and renders different high-level human-robot interaction rehabilitation modalities under a unified low-level control scheme.

Methods: The presented control law is based on a loadcell-based impedance controller provided with positive-feedback compensation terms for disturbance rejection and dynamics compensation. We developed an elbow flexion-extension experimental setup and conducted experiments to evaluate the controller's performance. Seven high-level modalities, characterized by different levels of (i) impedance-based corrective assistance, (ii) weight-counterbalance assistance, and (iii) resistance, were defined and tested with 14 healthy volunteers.

Results: The unified controller demonstrated its suitability to promote good transparency and to render compliant and high-impedance behavior at the joint. Surface electromyography results showed different muscular activation patterns across the rehabilitation modalities. The results suggested avoiding weight-counterbalance assistance, since it could induce different motor relearning compared with purely impedance-based corrective strategies.

Conclusion: We showed that the proposed control framework can implement different physical human-robot interaction modalities and promote the assist-as-needed paradigm, helping the user to accomplish the task while maintaining physiological muscular activation patterns. Future work involves extending the framework to robots with multiple degrees of freedom and investigating an adaptive control law that lets the controller learn and adapt in a therapist-like manner.
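For concreteness, a sketch of what such a control law can look like is given below. It is our reconstruction of the general scheme (impedance term, weight counterbalance, and load-cell-based compensation), not the authors' implementation; every gain, parameter, and variable name is assumed.

```python
# Sketch of a loadcell-based impedance controller with compensation terms.
# Assumed parameters throughout; not the authors' gains or limb model.
import numpy as np

K_IMP = 15.0                 # virtual stiffness [Nm/rad]
D_IMP = 1.5                  # virtual damping [Nm*s/rad]
M, L, G = 1.2, 0.15, 9.81    # limb mass [kg], center-of-mass lever [m], gravity

def motor_torque(q, dq, q_ref, dq_ref, tau_loadcell,
                 assistance=1.0, counterbalance=1.0, comp_gain=0.3):
    """Unified low-level law: the high-level modality is selected by
    scaling the assistance, counterbalance, and compensation gains.

    q, dq         : measured joint angle [rad] and velocity [rad/s]
    q_ref, dq_ref : reference trajectory
    tau_loadcell  : interaction torque measured at the load cell [Nm]
    """
    # (i) Impedance-based corrective assistance toward the reference;
    # a resistive modality could instead add a velocity-opposing term.
    tau_imp = assistance * (K_IMP * (q_ref - q) + D_IMP * (dq_ref - dq))
    # (ii) Weight counterbalance: static gravity compensation of the limb.
    tau_grav = counterbalance * M * G * L * np.cos(q)
    # Positive feedback on the measured interaction torque lowers the
    # apparent friction/inertia (gain < 1 to keep a stability margin).
    tau_comp = comp_gain * tau_loadcell
    return tau_imp + tau_grav + tau_comp

# Example: transparent mode (no assistance, no counterbalance).
tau = motor_torque(q=0.4, dq=0.1, q_ref=0.4, dq_ref=0.1,
                   tau_loadcell=0.8, assistance=0.0, counterbalance=0.0)
```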


2019 · Vol 374 (1771) · pp. 20180433
Author(s):  
Emily C. Collins

This opinion paper discusses how human–robot interaction (HRI) methodologies can be robustly developed by drawing on insights from fields outside of HRI that explore human–other interactions. The paper presents a framework that draws parallels between HRIs, and human–human, human–animal and human–object interaction literature, by considering the morphology and use of a robot to aid the development of robust HRI methodologies. The paper then briefly presents some novel empirical work as proof of concept to exemplify how the framework can help researchers define the mechanism of effect taking place within specific HRIs. The empirical work draws on known mechanisms of effect in animal-assisted therapy, and behavioural observations of touch patterns and their relation to individual differences in caring and attachment styles, and details how this trans-disciplinary approach to HRI methodology development was used to explore how an interaction with an animal-like robot was impacting a user. In doing so, this opinion piece outlines how useful objective, psychological measures of social cognition can be for deepening our understanding of HRI, and developing richer HRI methodologies, which take us away from questions that simply ask ‘Is this a good robot?’, and closer towards questions that ask ‘What mechanism of effect is occurring here, through which effective HRI is being performed?’ This paper further proposes that in using trans-disciplinary methodologies, experimental HRI can also be used to study human social cognition in and of itself. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.


Author(s):  
Andrew Best ◽  
Samantha F. Warta ◽  
Katelynn A. Kapalo ◽  
Stephen M. Fiore

Using research in social cognition as a foundation, we studied rapid versus reflective mental state attributions and the degree to which machine learning classifiers can be trained to make such judgments. We observed differences in response times between conditions, but did not find significant differences in the accuracy of mental state attributions. We additionally demonstrate how to train machine classifiers to identify mental states. We discuss advantages of using an interdisciplinary approach to understand and improve human-robot interaction and to further the development of social cognition in artificial intelligence.
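As a sketch of what the classifier-training step can look like (the paper does not specify its pipeline; the features, labels, and model choice below are placeholders):

```python
# Hypothetical sketch: train and cross-validate a classifier that predicts
# attributed mental states from per-trial behavioural features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # placeholder features (e.g., response
                                     # times, gaze/position cues per trial)
y = rng.integers(0, 2, size=200)     # placeholder 0/1 mental-state labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on noise
```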

