Enhanced Transparency for Physical Human-Robot Interaction Using Human Hand Impedance Compensation

2018 ◽  
Vol 23 (6) ◽  
pp. 2662-2670
Author(s):  
Kyeong Ha Lee ◽  
Seung Guk Baek ◽  
Hyuk Jin Lee ◽  
Hyouk Ryeol Choi ◽  
Hyungpil Moon ◽  
...  
Author(s):  
Adhau P ◽  
Kadwane S. G ◽  
Shital Telrandhe ◽  
Rajguru V. S ◽  
...  

Human–robot interaction has long been a topic of research owing to its importance in assisting humanity. Robust human-interacting robots commanded by electromyogram (EMG) signals have recently been investigated. This article presents a system in which signals recorded directly from the human body are used to control a small robotic arm. The various gestures are captured by placing electrodes (sensors) on the human hand and are identified by a neural network trained on the recorded signals. The arm is then controlled offline by driving the motors of the robotic arm.
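The pipeline the abstract describes (EMG window in, gesture class out) can be sketched as follows. This is a minimal illustration, not the authors' system: the time-domain features and the nearest-centroid classifier standing in for the trained neural network are assumptions, and the "EMG" signals are synthetic noise at two amplitude levels.

```python
import numpy as np

def emg_features(window):
    """Simple time-domain features from one EMG window:
    root-mean-square amplitude and mean absolute value."""
    window = np.asarray(window, dtype=float)
    return np.array([np.sqrt(np.mean(window ** 2)), np.mean(np.abs(window))])

class GestureClassifier:
    """Nearest-centroid classifier, a stand-in for the trained network."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Synthetic training data: low-amplitude "rest" vs high-amplitude "grip".
rng = np.random.default_rng(0)
rest = [emg_features(0.05 * rng.standard_normal(200)) for _ in range(20)]
grip = [emg_features(1.0 * rng.standard_normal(200)) for _ in range(20)]
clf = GestureClassifier().fit(rest + grip, ["rest"] * 20 + ["grip"] * 20)
```

A real system would extract richer features (zero crossings, waveform length, frequency-domain statistics) and train a proper network, but the classify-from-window structure is the same.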


Author(s):  
Feifei Bian ◽  
Danmei Ren ◽  
Ruifeng Li ◽  
Peidong Liang

Purpose
The purpose of this paper is to eliminate the instability that may occur when a human stiffens his arms during physical human–robot interaction, by estimating the human hand stiffness and presenting a modified vibration index.

Design/methodology/approach
Human hand stiffness is first estimated in real time as a prior indicator of instability, by capturing the arm configuration and modeling the level of muscle co-contraction in the human's arms. A time-domain vibration index based on the interaction force is then modified to reduce the delay in instability detection. Instability is confirmed when the vibration index exceeds a given threshold, and the virtual damping coefficient in the admittance controller is adjusted accordingly to ensure stable physical human–robot interaction.

Findings
By estimating the human hand stiffness and modifying the vibration index, the instability that may occur in stiff environments is detected and eliminated, and the detection delay is reduced. The experimental results demonstrate a significant improvement in stabilizing the system when the human operator stiffens his arms.

Originality/value
The originality lies in estimating the human hand stiffness online as a prior indicator of instability, by capturing the arm configuration and modeling the level of muscle co-contraction in the human's arms, and in modifying the vibration index to reduce the time delay of instability detection.
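The stabilization mechanism the abstract describes (raise the virtual damping of the admittance controller once a force-based vibration index crosses a threshold) can be sketched in one dimension. The index form, thresholds, gains, and the simulated oscillation below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def vibration_index(forces, window=10):
    """Hypothetical time-domain vibration index: mean absolute
    sample-to-sample change of the interaction force over a window."""
    recent = np.asarray(forces[-window:], dtype=float)
    return float(np.mean(np.abs(np.diff(recent)))) if len(recent) > 1 else 0.0

def admittance_step(f_ext, v, damping, mass=2.0, dt=0.01):
    """One step of a 1-DOF admittance law M*dv/dt + D*v = f_ext (forward Euler)."""
    return v + (f_ext - damping * v) / mass * dt

# Damping is raised when the vibration index crosses a threshold.
D_LOW, D_HIGH, THRESHOLD = 5.0, 40.0, 2.0
forces, v, damping = [], 0.0, D_LOW
for k in range(200):
    f = 10.0 * np.sin(0.5 * k * 0.01)   # slow guidance force
    if k > 100:
        f += 8.0 * (-1) ** k            # oscillation from a stiffened arm
    forces.append(f)
    if vibration_index(forces) > THRESHOLD:
        damping = D_HIGH                # stabilise by adding virtual damping
    v = admittance_step(f, v, damping)
```

The paper's contribution over this naive scheme is that the stiffness estimate acts as a *prior* indicator, so damping can be raised before the oscillation fully develops.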


2020 ◽  
Vol 1 ◽  
pp. 1027-1036
Author(s):  
A. Orabona ◽  
A. Palazzi ◽  
S. Graziosi ◽  
F. Ferrise ◽  
M. Bordegoni

Abstract: The recent interest in human-robot interaction calls for the development of new gripping solutions beyond those already available and widely used. One of the most advanced solutions in nature is the human hand, and several research contributions try to replicate its functionality. Advances in manufacturing technologies and design tools are opening new possibilities in the design of such solutions. The paper reports the results of the design of an underactuated artificial robotic hand, designed by exploiting the benefits offered by additive manufacturing technologies.


Robotica ◽  
2014 ◽  
Vol 33 (2) ◽  
pp. 314-331 ◽  
Author(s):  
B. V. Adorno ◽  
A. P. L. Bó ◽  
P. Fraisse

Summary: This paper presents a novel approach for the description of physical human-robot interaction (pHRI) tasks that involve two-arm coordination, and where tasks are described by the relative pose between the human hand and the robot hand. We develop a unified kinematic model that takes into account the human-robot system from a holistic point of view, and we also propose a kinematic control strategy for pHRI that comprises different levels of shared autonomy. Since the kinematic model takes into account the complete human-robot interaction system and the kinematic control law is closed loop at the interaction level, the kinematic constraints of the task are enforced during its execution. Experiments are performed in order to validate the proposed approach, including a particular case where the robot controls the human arm by means of functional electrical stimulation (FES), which may potentially provide useful solutions for the interaction between assistant robots and impaired individuals (e.g., quadriplegics and hemiplegics).
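The closed-loop kinematic control idea (drive a task-space error, here the relative pose between the hands, to zero through an inverse of the task Jacobian) can be sketched generically. The damped-least-squares inverse, the gains, and the toy constant-Jacobian example are illustrative assumptions, not the paper's two-arm cooperative model:

```python
import numpy as np

def kinematic_control_step(q, task_error, jacobian, gain=1.0, dt=0.01, damping=1e-3):
    """One closed-loop kinematic control step: map the task-space error
    to joint velocities via a damped-least-squares Jacobian inverse."""
    J = jacobian(q)
    # Damped least-squares avoids blow-up near singular configurations.
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))
    q_dot = J_pinv @ (gain * task_error)
    return q + q_dot * dt

# Toy 2-joint example with a constant "Jacobian" (purely illustrative):
# drive the scalar task value A @ q toward the target 1.0.
A = np.array([[1.0, 0.5]])
q = np.zeros(2)
for _ in range(1000):
    error = np.array([1.0]) - A @ q
    q = kinematic_control_step(q, error, lambda _: A, gain=5.0)
```

Because the loop is closed on the task error itself, the task constraint is enforced throughout the motion rather than only at the endpoints, which is the property the paper exploits at the interaction level.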


2021 ◽  
Vol 11 (12) ◽  
pp. 5651
Author(s):  
Yu Wang ◽  
Yuanyuan Yang ◽  
Baoliang Zhao ◽  
Xiaozhi Qi ◽  
Ying Hu ◽  
...  

In order to achieve effective physical human–robot interaction, human dynamic characteristics need to be considered in admittance control. This paper proposes a variable admittance control method for physical human–robot interaction based on trajectory prediction of human hand motion. By predicting the moving direction of the robot end tool under human guidance, the admittance control parameters are adjusted to reduce the interaction force. The end-tool trajectory of the robot under human guidance is used to train a long short-term memory (LSTM) neural network offline, generating trajectory predictors. The trajectory predictors are then used in variable admittance control to predict the trajectory and movement direction of the robot end tool in real time, and the variable admittance controller adjusts the damping matrix to reduce the damping value along the moving direction. Experimental results show that, with the constant admittance method as a benchmark, the proposed method reduces the interaction force by 23%, the trajectory error by 51%, and the operating jerk by at least 21%, which demonstrates improved accuracy and compliance of operation.
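The damping-matrix adjustment the abstract describes can be sketched with a simple projector construction: low damping along the predicted motion direction, high damping orthogonal to it. The LSTM predictor itself is omitted, and the damping values are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def variable_damping(direction, d_low=5.0, d_high=25.0):
    """Damping matrix that is low along the predicted motion direction
    and high in the orthogonal directions."""
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    P = np.outer(u, u)                  # projector onto the predicted direction
    return d_low * P + d_high * (np.eye(len(u)) - P)

# Predicted motion along the x-axis: motion in x sees d_low,
# motion in y or z sees d_high.
D = variable_damping([1.0, 0.0, 0.0])
```

The operator then feels little resistance when pushing where the predictor expects motion, while drift in other directions remains well damped, which is the source of the reported force and trajectory-error reductions.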


Author(s):  
Muhammet Fatih Aslan ◽  
Akif Durdu ◽  
Kadir Sabancı ◽  
Kemal Erdogan

Purpose
In this study, a human activity with a finite, specific ordering of steps is modeled with a finite state machine, and an application for human–robot interaction is realized. A robot arm that performs specific movements was designed. The purpose of this paper is to create a language associated with a complex task, which is then used by the robot, which knows the language, to teach the task to individuals.

Design/methodology/approach
Although the complex task is known by the robot, it is not known by the human. When the application starts, the robot continuously checks the specific task performed by the human. To carry out this check, the human hand is tracked using image processing techniques and a particle filter (PF) based on the Bayesian tracking method. To determine the complex task performed by the human, the task is divided into a series of sub-tasks, and a pushdown automaton that uses a context-free grammar language structure is developed to identify the sequence of sub-tasks. Depending on the correctness of the sequence of sub-tasks performed by the human, the robot produces different outputs.

Findings
The application was carried out with 15 individuals. In total, 11 out of the 15 individuals completed the complex task correctly by following the different outputs.

Originality/value
This type of study is suitable for applications that improve human learning and enable people to learn quickly. The risky tasks of a person working on a production or assembly line can also be monitored by robots through such applications.
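The sub-task sequence check the abstract describes can be sketched with a stack, which is exactly what separates a pushdown automaton from a plain finite state machine. The grammar below is a hypothetical stand-in for the paper's task language: every "pick" must eventually be matched by a "place", with "move" allowed anywhere (S -> 'pick' S 'place' S | 'move' S | epsilon):

```python
def accepts(subtasks):
    """Pushdown check of a sub-task sequence against a hypothetical
    context-free task grammar: 'pick'/'place' must nest like brackets."""
    stack = []
    for t in subtasks:
        if t == "pick":
            stack.append(t)           # open a pick/place pair
        elif t == "place":
            if not stack:
                return False          # 'place' with no pending 'pick'
            stack.pop()
        elif t != "move":
            return False              # unknown sub-task
    return not stack                  # all picks matched
```

On an accepted sequence the robot would signal success; on a rejected prefix it can produce a corrective output immediately, since the violation is detected as soon as it occurs.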

