Deep Reinforcement Learning for Robotic Hand Manipulation

Author(s):  
Muhammed Saeed ◽  
Mohammed Nagdi ◽  
Benjamin Rosman ◽  
Hiba H. S. M. Ali

Author(s):  
Edwin Valarezo Añazco ◽  
Patricio Rivera Lopez ◽  
Nahyeon Park ◽  
Jiheon Oh ◽  
Gahyeon Ryu ◽  
...  

Author(s):  
Mingfang Liu ◽  
Zhirui Zhao ◽  
Wei Zhang ◽  
Lina Hao

The humanoid robotic hand actuated by shape memory alloy (SMA) is an emerging technology. SMA has a wide range of potential applications in many different fields, from industrial assembly to biomedicine, owing to its high power-to-weight ratio, low driving voltages, and noiseless operation. However, the nonlinearities of SMA and the complex dynamic models of SMA-based robotic hands make such hands difficult to control. In this paper, a humanoid SMA-based robotic hand composed of five fingers is presented with the ability of adaptive grasping. Reinforcement learning, as a model-free control strategy, can search for optimal control of systems with nonlinearities and uncertainty. Therefore, an adaptive SA-Q-learning (ASA-Q-learning) controller is proposed to control the humanoid robotic finger. The performance of the ASA-Q-learning controller is compared with SA-Q-learning and PID controllers through experiments. The results show that the ASA-Q-learning controller controls the humanoid SMA-based robotic hand effectively, with a faster convergence rate and higher control precision than the SA-Q-learning and PID controllers, and is feasible for implementation in a model-free system.
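The abstract does not give the ASA-Q-learning update rule, so the sketch below only illustrates the general idea under stated assumptions: tabular Q-learning with Boltzmann exploration, where a simulated-annealing temperature cools geometrically and is reheated in proportion to the current tracking error to mimic the "adaptive" behaviour. The state/action sizes, gains, and the `anneal` rule are hypothetical, not values from the paper.

```python
import numpy as np

# Assumed discretization of the finger control problem:
# states = binned joint angles, actions = binned drive commands.
N_STATES, N_ACTIONS = 50, 7
ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def boltzmann_action(state, temperature):
    """Sample an action from the Boltzmann (softmax) distribution over Q."""
    prefs = Q[state] / max(temperature, 1e-6)
    prefs -= prefs.max()             # shift for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return rng.choice(N_ACTIONS, p=probs)

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, action] += ALPHA * (td_target - Q[state, action])

def anneal(temperature, tracking_error, decay=0.98, gain=0.5):
    """Assumed adaptive schedule: cool geometrically, but reheat in
    proportion to the current tracking error (hypothetical rule)."""
    return decay * temperature + gain * abs(tracking_error)

# Toy usage: one fictitious transition with a made-up reward.
T = 1.0
a = boltzmann_action(state=3, temperature=T)
q_update(state=3, action=a, reward=-0.2, next_state=4)
T = anneal(T, tracking_error=0.2)
```

The intended effect of such a schedule is that a large tracking error restores exploration, while sustained good tracking lets the temperature fall toward greedy exploitation.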


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5301
Author(s):  
Patricio Rivera ◽  
Edwin Valarezo Añazco ◽  
Tae-Seong Kim

Anthropomorphic robotic hands are designed to attain dexterous movements and flexibility much like human hands. Achieving human-like object manipulation remains a challenge, especially due to the control complexity of an anthropomorphic robotic hand with a high degree of freedom. In this work, we propose a deep reinforcement learning (DRL) approach that trains a policy in a synergy space for natural grasping and relocation of variously shaped objects using an anthropomorphic robotic hand. The synergy space is created using a continuous normalizing flow network with point clouds of haptic areas, representing natural hand poses obtained from human grasping demonstrations. The DRL policy accesses the synergistic representation and derives natural hand poses through a deep regressor for object grasping and relocation tasks. Our proposed synergy-based DRL achieves an average success rate of 88.38% on the object manipulation tasks, whereas standard DRL without the synergy space achieves only 50.66%. Qualitative results show that the proposed synergy-based DRL policy produces human-like finger placements over the surface of each object, including an apple, banana, flashlight, camera, lightbulb, and hammer.
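As a rough illustration of the control flow described above (not the authors' implementation), the following sketch has a policy act in a low-dimensional synergy space, with a deep regressor decoding each latent action into a full joint configuration. `SYNERGY_DIM`, `N_JOINTS`, `OBS_DIM`, and the network shapes are assumptions; the paper's synergy space comes from a continuous normalizing flow trained on human demonstrations, which is omitted here.

```python
import torch
import torch.nn as nn

SYNERGY_DIM = 5    # assumed size of the learned synergy space
N_JOINTS = 24      # assumed DoF of the anthropomorphic hand
OBS_DIM = 64       # assumed observation size (object pose, hand state)

class SynergyRegressor(nn.Module):
    """Decodes a synergy latent vector into full joint angles."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SYNERGY_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_JOINTS), nn.Tanh(),  # normalized joint angles
        )

    def forward(self, z):
        return self.net(z)

class LatentPolicy(nn.Module):
    """DRL policy that outputs actions in the synergy space
    instead of commanding every joint directly."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, SYNERGY_DIM),
        )

    def forward(self, obs):
        return self.net(obs)

# One control step: observation -> synergy action -> joint command.
policy, regressor = LatentPolicy(), SynergyRegressor()
obs = torch.randn(1, OBS_DIM)      # placeholder observation
z = policy(obs)                    # low-dimensional synergy action
joint_targets = regressor(z)       # full 24-DoF hand pose
print(joint_targets.shape)         # torch.Size([1, 24])
```

Acting in the synergy space shrinks the action dimensionality the agent must explore, which is the key reason the abstract gives for the synergy-based policy outperforming direct joint-space control.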


Decision ◽  
2016 ◽  
Vol 3 (2) ◽  
pp. 115-131 ◽  
Author(s):  
Helen Steingroever ◽  
Ruud Wetzels ◽  
Eric-Jan Wagenmakers
