An innovative high-level human-robot interaction for disabled persons

Author(s):  
P. Nilas ◽  
P. Rani ◽  
N. Sarkar


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Maurice Lamb ◽  
Patrick Nalepka ◽  
Rachel W. Kallen ◽  
Tamara Lorenz ◽  
Steven J. Harrison ◽  
...  

Interactive or collaborative pick-and-place tasks occur during all kinds of daily activities, for example, when two or more individuals pass plates, glasses, and utensils back and forth when setting a dinner table or loading a dishwasher together. In the near future, these collaborative pick-and-place tasks could also include robotic assistants. However, for human-machine and human-robot interactions, interactive pick-and-place tasks present a unique set of challenges. A key challenge is that high-level task-representational algorithms and preplanned action or motor programs quickly become intractable, even for simple interaction scenarios. Here we address this challenge by introducing a bioinspired behavioral dynamic model of free-flowing cooperative pick-and-place behaviors based on low-dimensional dynamical movement primitives and nonlinear action-selection functions. Further, we demonstrate that this model can be successfully implemented as an artificial-agent control architecture to produce effective and robust human-like behavior during human-agent interactions. Participants were unable to explicitly detect whether they were working with an artificial (model-controlled) agent or another human co-actor, further illustrating the potential of the proposed modeling approach for developing robust, embodied human-robot interaction systems more generally.
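As a rough illustration of the two ingredients this model combines, the sketch below pairs a critically damped point-attractor movement primitive with a sigmoidal action-selection function that decides whether the agent or its co-actor should take an object. The gains and the selection rule are assumed forms for illustration, not the authors' actual equations.

```python
import numpy as np

# Minimal sketch, under assumed forms:
# (1) a low-dimensional movement primitive, modeled here as a critically
#     damped point attractor pulling the hand toward the selected goal, and
# (2) a nonlinear (sigmoidal) action-selection function deciding whether
#     to pick up an object or leave it to the co-actor.
# Gains and the selection rule are illustrative, not the authors' values.

def attractor_step(pos, vel, goal, dt=0.01, k=25.0, b=10.0):
    """One Euler step of a critically damped point attractor toward `goal`."""
    acc = k * (goal - pos) - b * vel
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

def select_action(dist_self, dist_other, beta=8.0):
    """Sigmoidal selection: act when the object is relatively closer to me."""
    drive = dist_other - dist_self             # > 0 favors acting myself
    p_act = 1.0 / (1.0 + np.exp(-beta * drive))
    return p_act > 0.5

# Example: decide on, then reach for, an object 0.4 m away (co-actor 0.7 m away).
pos, vel, obj = np.array([0.0]), np.array([0.0]), np.array([0.4])
if select_action(dist_self=0.4, dist_other=0.7):
    for _ in range(200):
        pos, vel = attractor_step(pos, vel, obj)
print(f"final hand position: {pos[0]:.3f} m")
```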


2021 ◽  
Author(s):  
Stefano Dalla Gasperina ◽  
Valeria Longatelli ◽  
Francesco Braghin ◽  
Alessandra Laura Giulia Pedrocchi ◽  
Marta Gandolla

Background: Appropriate training modalities for post-stroke upper-limb rehabilitation are key to effective recovery after the acute event. This work presents a novel human-robot cooperative control framework that promotes compliant motion and renders different high-level human-robot interaction rehabilitation modalities under a unified low-level control scheme.

Methods: The presented control law is based on a load-cell-based impedance controller augmented with positive-feedback compensation terms for disturbance rejection and dynamics compensation. We developed an elbow flexion-extension experimental setup and conducted experiments to evaluate the controller's performance. Seven high-level modalities, characterized by different levels of (i) impedance-based corrective assistance, (ii) weight counterbalance assistance, and (iii) resistance, were defined and tested with 14 healthy volunteers.

Results: The unified controller proved suitable for promoting good transparency and for rendering both compliant and high-impedance behavior at the joint. Surface electromyography showed different muscular activation patterns across the rehabilitation modalities. The results suggest avoiding weight counterbalance assistance, since it could induce motor relearning that differs from purely impedance-based corrective strategies.

Conclusion: We showed that the proposed control framework can implement different physical human-robot interaction modalities and promote the assist-as-needed paradigm, helping the user accomplish the task while maintaining physiological muscular activation patterns. Future work includes the extension to robots with multiple degrees of freedom and the investigation of an adaptive control law that lets the controller learn and adapt in a therapist-like manner.
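A minimal single-joint sketch of how one unified low-level law might render several high-level modalities is given below. The gains, the point-mass gravity model, the mode table, and the feedback gain `kf` are all illustrative assumptions, not the paper's actual parameters or control law.

```python
import numpy as np

# Minimal 1-DOF sketch of a load-cell-based impedance controller with
# feed-forward and feedback compensation terms. All constants and the
# mode definitions are assumptions for illustration only.

def impedance_torque(q, dq, q_ref, dq_ref, K, D):
    """Impedance term: virtual spring-damper pulling toward the reference."""
    return K * (q_ref - q) + D * (dq_ref - dq)

def gravity_comp(q, m=1.5, l=0.30, g=9.81, alpha=1.0):
    """Weight-counterbalance term for a forearm modeled as a point mass.
    alpha in [0, 1] scales how much of the limb weight the robot carries."""
    return alpha * m * g * l * np.cos(q)

def control_step(q, dq, q_ref, dq_ref, tau_meas, mode):
    """Render one high-level modality through the same low-level law.
    `tau_meas` is the interaction torque estimated from the load cell."""
    modes = {
        "transparent":    dict(K=0.0,  D=0.5, alpha=0.0, kf=0.3),
        "assistive":      dict(K=20.0, D=2.0, alpha=0.0, kf=0.0),
        "resistive":      dict(K=0.0,  D=4.0, alpha=0.0, kf=-0.3),
        "counterbalance": dict(K=0.0,  D=0.5, alpha=1.0, kf=0.0),
    }
    p = modes[mode]
    tau = impedance_torque(q, dq, q_ref, dq_ref, p["K"], p["D"])
    tau += gravity_comp(q, alpha=p["alpha"])
    # Positive feedback on the measured interaction torque amplifies the
    # user's effort (transparency); a negative gain resists it.
    tau += p["kf"] * tau_meas
    return tau

tau_cmd = control_step(q=0.5, dq=0.1, q_ref=0.6, dq_ref=0.0,
                       tau_meas=0.8, mode="assistive")
print(f"commanded torque: {tau_cmd:.2f} Nm")
```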


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which provides limited situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a more natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then transformed into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is employed to transmit the NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot through three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less cluttered regions, the operator can instead specify a path or even a target point, which the robot then follows or navigates to autonomously; these two modes rely more on the robot's autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be well integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture, and military operations.
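The sketch below illustrates the NDT step in a hypothetical form: points are binned into voxels, each voxel keeps a mean and covariance, and the covariance eigendecomposition yields the ellipsoid drawn in VR. The cell size, minimum point count, and regularization are assumed values, not those of the paper's pipeline.

```python
import numpy as np

# Sketch: turning a dense point cloud into a 3-D NDT map and deriving
# the ellipsoid parameters used for VR rendering. Constants are assumed.

def build_ndt(points, cell=0.5):
    """Group points into voxels; keep a per-voxel mean and covariance."""
    keys = np.floor(points / cell).astype(int)
    cells = {}
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    ndt = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:                          # too few for a stable covariance
            continue
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)    # regularize degenerate cells
        ndt[k] = (mu, cov)
    return ndt

def ellipsoid_params(mu, cov, scale=2.0):
    """Center, semi-axes, and orientation of the ellipsoid for one NDT cell."""
    vals, vecs = np.linalg.eigh(cov)
    radii = scale * np.sqrt(vals)    # ~2-sigma extents along principal axes
    return mu, radii, vecs

# Incremental transmission: only cells created or updated since the last
# frame need to be sent over the wireless link to the remote VR station.
pts = np.random.rand(2000, 3) * 5.0              # stand-in for one LiDAR scan
ndt = build_ndt(pts)
mu, radii, R = ellipsoid_params(*next(iter(ndt.values())))
print(f"{len(ndt)} cells; first ellipsoid semi-axes: {radii}")
```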


2013 ◽  
pp. 281-301
Author(s):  
Mohan Sridharan

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are increasingly deployed to interact with humans in our homes and offices, robots need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of: (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants, and merge human feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and fully evaluated on mobile robots and in simulated domains using vision as the primary source of information in conjunction with range data and simplistic verbal inputs. Furthermore, a strategy is outlined to integrate these components to achieve robust human-robot interaction in real-world application domains.
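As a toy illustration of component (c), the sketch below merges a sparse human feedback signal into a tabular Q-learning update via a weighted sum. The merge rule and all constants are assumptions for illustration, not the chapter's actual algorithm.

```python
import numpy as np

# Sketch of augmented reinforcement learning: limited high-level human
# feedback is merged with the environmental reward before a standard
# TD update. The weighted-sum merge and all constants are assumed.

def augmented_q_update(Q, s, a, s_next, env_reward, human_feedback,
                       w_h=0.5, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step; `human_feedback` is +1/-1, or 0 on the
    many steps where the non-expert participant gives no input."""
    r = env_reward + w_h * human_feedback
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Example: a tiny 4-state, 2-action task where the human occasionally
# signals approval (+1) for moving toward the goal state.
Q = np.zeros((4, 2))
Q = augmented_q_update(Q, s=0, a=1, s_next=1,
                       env_reward=0.0, human_feedback=+1)
print(Q)
```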




2021 ◽  
Vol 15 ◽  
Author(s):  
Annika Lübbert ◽  
Florian Göschl ◽  
Hanna Krause ◽  
Till R. Schneider ◽  
Alexander Maye ◽  
...  

The aim of this review is to highlight the idea of grounding social cognition in sensorimotor interactions shared across agents. We discuss an action-oriented account that emerges from a broader interpretation of the concept of sensorimotor contingencies. We suggest that dynamic informational and sensorimotor coupling across agents can mediate the deployment of action-effect contingencies in social contexts. We propose this concept of socializing sensorimotor contingencies (socSMCs) as a shared framework of analysis for processes within and across brains and bodies, and their physical and social environments. In doing so, we integrate insights from different fields, including neuroscience, psychology, and research on human–robot interaction. We review studies on dynamic embodied interaction and highlight empirical findings that suggest an important role of sensorimotor and informational entrainment in social contexts. Furthermore, we discuss links to closely related concepts, such as enactivism, models of coordination dynamics and others, and clarify differences to approaches that focus on mentalizing and high-level cognitive representations. Moreover, we consider conceptual implications of rethinking cognition as social sensorimotor coupling. The insight that social cognitive phenomena like joint attention, mutual trust or empathy rely heavily on the informational and sensorimotor coupling between agents may provide novel remedies for people with disturbed social cognition and for situations of disturbed social interaction. Furthermore, our proposal has potential applications in the field of human–robot interaction where socSMCs principles might lead to more natural and intuitive interfaces for human users.
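As a toy illustration of the entrainment phenomena the review highlights, the sketch below simulates two Kuramoto-style oscillators, one per agent: once the coupling strength exceeds the frequency detuning, their phases lock. The model and its parameters are illustrative stand-ins, not drawn from the reviewed studies.

```python
import numpy as np

# Toy model of inter-agent entrainment: two coupled phase oscillators
# with slightly different natural frequencies w1, w2. They phase-lock
# when the sensorimotor coupling K exceeds half the detuning |w2 - w1|.
# All values are illustrative.

def simulate(K, w1=1.00, w2=1.15, dt=0.01, steps=5000):
    th1, th2 = 0.0, np.pi / 2
    for _ in range(steps):
        th1 += dt * (w1 + K * np.sin(th2 - th1))
        th2 += dt * (w2 + K * np.sin(th1 - th2))
    # Final phase difference, wrapped to [0, pi]
    return abs(np.angle(np.exp(1j * (th1 - th2))))

for K in (0.0, 0.05, 0.5):
    print(f"K={K:.2f}  phase difference ~ {simulate(K):.2f} rad")
# With K=0.5 the phase gap settles near a small constant (entrainment);
# with weak or no coupling the phases drift apart indefinitely.
```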

