Deep Convolutional Neural Network for Decoding EMG for Human Computer Interaction

Author(s):  
Qi Wang ◽  
Xianping Wang

Author(s):  
K. Martin Sagayam ◽  
A. Diana Andrushia ◽  
Ahona Ghosh ◽  
Omer Deperlioglu ◽  
Ahmed A. Elngar

With the recent growth of computer applications built around human–computer interaction (HCI), such as augmented reality (AR) and the Internet of Things (IoT), hand gesture recognition has become a very active research area in computer vision. Body language is a vital means of communication between people, whether to reinforce a spoken message or as a complete message on its own. Automatic hand gesture recognition systems can therefore enrich human–computer interaction, and many approaches to hand gesture recognition have been designed. However, most of these methods involve hybrid processes such as image pre-processing, segmentation, and classification. This paper describes how to create a hand gesture model easily and quickly with a well-tuned deep convolutional neural network. Experiments were performed on the Cambridge Hand Gesture data set to illustrate the success and efficiency of the convolutional neural network. Averaged over 20 runs, the accuracy was 96.66%, with sensitivity and specificity of 85% and 98.12%, respectively. These results were compared with existing works on the same dataset and were found to exceed those of the hybrid methods.
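The abstract reports accuracy, sensitivity, and specificity averaged over 20 runs. As a hedged illustration (not the authors' code), these three metrics can be derived from a multi-class confusion matrix by one-vs-rest macro-averaging; the helper name and the example matrix below are assumptions for demonstration only:

```python
def metrics_from_confusion(cm):
    """Accuracy plus macro-averaged sensitivity and specificity from a
    square multi-class confusion matrix (rows = true, cols = predicted)."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total
    sens, spec = [], []
    for c in range(n):                        # one-vs-rest per class
        tp = cm[c][c]
        fn = sum(cm[c]) - tp                  # class-c samples predicted elsewhere
        fp = sum(cm[r][c] for r in range(n)) - tp
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
    return accuracy, sum(sens) / n, sum(spec) / n

# Hypothetical 3-class example (not data from the paper):
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
acc, sensitivity, specificity = metrics_from_confusion(cm)
# acc ≈ 0.8667, sensitivity ≈ 0.8667, specificity ≈ 0.9333
```

Macro-averaging treats every gesture class equally regardless of how many samples it has, which is one common convention for reporting sensitivity and specificity on multi-class recognition tasks.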


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.

