Real-time Robotic Arm Control Embedded System using Hand Gestures

2021 ◽  
Vol 19 (11) ◽  
pp. 45-53
Author(s):  
Chung-Geun Kim ◽  
Eun-Su Kim ◽  
Jae-Wook Shin ◽  
Bum-Yong Park
2018 ◽  
Vol 10 (1) ◽  
pp. 35-40 ◽  
Author(s):  
Saad Abdullah ◽  
Muhammad A. Khan ◽  
Mauro Serpelloni ◽  
Emilio Sardini ◽  
...  

2016 ◽  
Vol 2016.26 (0) ◽  
pp. 3103
Author(s):  
Teruaki ITO ◽  
Yuki KAWAKAMI ◽  
Tomio WATANABE

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Chern-Sheng Lin ◽  
Pei-Chi Chen ◽  
Yu-Ching Pan ◽  
Che-Ming Chang ◽  
Kuo-Liang Huang

This study focused on using the Kinect depth sensor to track two-handed gestures and control a robotic arm in real time. The control system consists of a microprocessor, a color camera, the depth sensor, and the robotic arm. The Kinect depth sensor captured images of the human body, extracted the body skeleton, and obtained the relevant joint information. This information was used to identify the gestures of the user's left hand and left palm. The left-hand gesture served as an input command, while the right-hand gesture was used to teach the robotic arm movements by imitation. From the depth sensor, real-time images of the human body and the depth information of each joint were collected and converted into the corresponding positions of the robotic arm. Combining forward kinematics, inverse kinematics, and the Denavit-Hartenberg (D-H) link parameters, the gesture information of the right hand was computed and converted, via a coordinate transformation, into the angle of each motor of the robotic arm. When the color camera did not detect the left palm, the user could simply use the right hand to control the motion of the robotic arm in real time. When the left palm was detected with 5 fingertips identified, recording of the right hand's real-time imitation movement began; when 0 fingertips were identified, recording stopped; and when 2 fingertips were identified, the user could both control the robotic arm in real time and replay the recorded actions.
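The fingertip-driven mode switching (5/0/2 fingertips) and the conversion of a tracked hand position into motor angles can be sketched compactly. The Python sketch below is not the authors' code: the 2-link planar arm, link lengths, and all function and class names are assumptions standing in for the paper's actual D-H chain and Kinect pipeline.

```python
import numpy as np

# Illustrative link lengths (metres); the paper's actual D-H parameters are
# not given, so a 2-link planar arm stands in for the real kinematic chain.
L1, L2 = 0.30, 0.25

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one Denavit-Hartenberg link (forward kinematics)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def inverse_kinematics(x, y):
    """Closed-form IK for the 2-link stand-in: hand position -> joint angles."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)          # guard against skeleton-tracking noise
    theta2 = np.arccos(c2)               # elbow-down solution
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2),
                                           L1 + L2 * np.cos(theta2))
    return theta1, theta2

IDLE, RECORDING, PLAYBACK = range(3)

class GestureController:
    """Mode switching driven by the left-palm fingertip count, as described
    above: 5 fingertips -> start recording, 0 -> stop, 2 -> control + replay."""

    def __init__(self):
        self.mode = IDLE
        self.recorded = []               # recorded joint-angle targets

    def update(self, fingertip_count, right_hand_xy):
        # The right hand always steers the arm; fingertips only change the mode.
        theta = inverse_kinematics(*right_hand_xy)
        if fingertip_count == 5:
            self.mode, self.recorded = RECORDING, []
        elif fingertip_count == 0 and self.mode == RECORDING:
            self.mode = IDLE
        elif fingertip_count == 2:
            self.mode = PLAYBACK         # recorded actions may also be replayed
        if self.mode == RECORDING:
            self.recorded.append(theta)
        return theta                     # angles sent to the arm's motors
```

In this reading, the fingertip count acts purely as a state machine input while the right-hand position stream is continuously converted to joint targets, which matches the abstract's separation of command (left hand) from teaching (right hand).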


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Qiang Gao ◽  
Lixiang Dou ◽  
Abdelkader Nasreddine Belkacem ◽  
Chao Chen

A novel hybrid brain-computer interface (BCI) based on the electroencephalogram (EEG) signal, consisting of a motor imagery- (MI-) based online interactive brain-controlled switch, a "teeth clenching" state detector, and a steady-state visual evoked potential- (SSVEP-) based BCI, was proposed to provide multidimensional BCI control. The MI-based BCI served as a single-pole double-throw brain switch (SPDTBS). By combining the SPDTBS with a 4-class SSVEP-based BCI, the movement of a robotic arm was controlled in three-dimensional (3D) space. In addition, the muscle artifact (EMG) produced by teeth clenching and recorded in the EEG signal was detected and employed as an interrupter that reinitializes the state of the SPDTBS. A real-time writing task was implemented to verify the reliability of the proposed noninvasive hybrid EEG-EMG-BCI. Eight subjects participated in this study and succeeded in manipulating a robotic arm in 3D space to write English letters. The mean decoding accuracy of the writing task was 0.93 ± 0.03. Four subjects met the optimal criterion for writing the word "HI", i.e., the minimum number of robotic arm movements (15 steps); the other subjects needed 2 to 4 additional steps to finish the task. These results suggest that the proposed hybrid noninvasive EEG-EMG-BCI is robust and efficient for real-time multidimensional robotic arm control.
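The hybrid scheme can be read as a small control loop: the MI switch selects the operating mode, the 4-class SSVEP decoder issues directional commands, and the EMG detector acts as a reset. The Python sketch below is a minimal illustration under stated assumptions: the decoder objects and their method names (imagery_detected, clench_detected, classify) are assumed interfaces, and the mapping of the SPDTBS poles and the four SSVEP classes onto 3D directions is likewise an assumption, not the authors' published mapping.

```python
# Assumed mapping of the two SPDTBS poles onto movement planes, and of the
# four SSVEP stimulus classes onto unit steps within each plane.
PLANE_XY = {0: (+1, 0, 0), 1: (-1, 0, 0), 2: (0, +1, 0), 3: (0, -1, 0)}
PLANE_Z = {0: (0, 0, +1), 1: (0, 0, -1), 2: (0, 0, 0), 3: (0, 0, 0)}

class HybridBCI:
    """SPDTBS sketch: the MI brain switch toggles which plane the 4-class
    SSVEP commands act in; the teeth-clenching EMG artifact resets the switch."""

    def __init__(self, mi_decoder, ssvep_decoder, emg_detector):
        self.mi = mi_decoder             # motor-imagery classifier (assumed API)
        self.ssvep = ssvep_decoder       # 4-class SSVEP classifier (assumed API)
        self.emg = emg_detector          # teeth-clench detector (assumed API)
        self.plane = PLANE_XY            # current pole of the brain switch

    def step(self, eeg_window):
        # Interrupter: a detected teeth clench reinitializes the switch state.
        if self.emg.clench_detected(eeg_window):
            self.plane = PLANE_XY
            return (0, 0, 0)
        # Detected motor imagery throws the switch to its other pole.
        if self.mi.imagery_detected(eeg_window):
            self.plane = PLANE_Z if self.plane is PLANE_XY else PLANE_XY
        # The SSVEP class (0..3, one per stimulus frequency) picks the step.
        target = self.ssvep.classify(eeg_window)
        return self.plane[target]        # one unit step of the arm in 3D
```

Dependency injection of the three decoders keeps the loop agnostic to how MI, SSVEP, and EMG detection are actually implemented, which is the essential structure the abstract describes.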

