Functional MRI based simulations of ECoG grid configurations for optimal measurement of spatially distributed hand-gesture information

2020 ◽  
Author(s):  
Max van den Boom ◽  
Kai J. Miller ◽  
Nick F. Ramsey ◽  
Dora Hermes

Abstract: In electrocorticography (ECoG), the physical characteristics of the electrode grid determine which aspects of the neurophysiology are measured. In particular cases, the ECoG grid may be tailored to capture specific features, such as in the development and use of brain-computer interfaces (BCIs). Neural representations of hand movement are increasingly used to control ECoG-based BCIs. However, it remains unclear which grid configurations are optimal for capturing the dynamics of hand-gesture information. Here, we investigate how the design and surgical placement of grids affect the usability of ECoG measurements. High-resolution 7T functional MRI was used as a proxy for neural activity in ten healthy participants to simulate various grid configurations, and the performance of each configuration was evaluated for decoding hand gestures. The grid configurations varied in number of electrodes, inter-electrode distance, and electrode size. Optimal decoding of hand gestures occurred in configurations with a larger number of densely packed, large electrodes, up to a grid of ~5×5 electrodes. When restricting grid placement to a highly informative region of primary sensorimotor cortex, the optimal parameters converged to about 3×3 electrodes, an inter-electrode distance of 8 mm, and an electrode radius of 3 mm (performing at ~70% 3-class classification accuracy). Our approach might be used to identify the most informative region, find the optimal grid configuration, and assist in positioning the grid to achieve high BCI performance for decoding hand gestures prior to surgical implantation.
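The simulation idea above, treating each electrode as a disc that spatially averages the underlying activity, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function name `simulate_ecog_grid`, the flattened 2-D activity map, and the pixel-to-millimeter scaling are all assumptions made for the example.

```python
import numpy as np

def simulate_ecog_grid(activity, origin, n_rows, n_cols,
                       pitch_mm, radius_mm, mm_per_px=1.0):
    """Simulate an ECoG grid over a 2-D activity map (e.g. a flattened
    fMRI cortical surface): each electrode signal is the mean activity
    within a disc of the given radius, on a regular grid of the given pitch."""
    h, w = activity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    signals = np.empty((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            # Electrode center in pixel coordinates.
            cy = origin[0] + i * pitch_mm / mm_per_px
            cx = origin[1] + j * pitch_mm / mm_per_px
            # Boolean mask of pixels inside the electrode disc.
            disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_mm / mm_per_px) ** 2
            signals[i, j] = activity[disc].mean()
    return signals
```

Sweeping `n_rows`/`n_cols`, `pitch_mm`, and `radius_mm` over such a simulation reproduces the kind of parameter search the study performs over electrode count, inter-electrode distance, and electrode size.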

Author(s):  
Srinivas K ◽  
Manoj Kumar Rajagopal

The aim is to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Hand movements made by one or both hands are captured by gesture capture devices under proper illumination conditions. The captured gestures are processed for occlusions and close finger interactions to identify the correct gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms such as HMM to detect only the intended gesture. Classified gestures are then compared for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
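Of the algorithms listed, dynamic time warping is the most compact to illustrate: it aligns two gesture trajectories that differ in speed. A minimal sketch in plain Python (the function name and the absolute-difference cost are choices made for this example):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the minimum cumulative cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A gesture performed more slowly, [1, 2, 2, 3] against [1, 2, 3], still scores a distance of 0, which is why DTW suits variable-speed dynamic gestures.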


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most other people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A–Z and 10 for the digits 0–9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models, finding that a CNN on depth images gave the most accurate performance. All these results were obtained on a PYNQ Z2 board.
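The class inventory and the 80/20 split described above can be sketched as follows; the helper names and the seeded shuffle are assumptions made for this example, not the paper's code.

```python
import random

# Hypothetical label set: 26 letters 'A'-'Z' plus digits '0'-'9' -> 36 classes.
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + [str(d) for d in range(10)]

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle sample identifiers and split them 80/20 into train and test."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

Applied to 46,339 image identifiers, this yields 37,071 training and 9,268 test samples.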


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Shahzad Ahmed ◽  
Dingyang Wang ◽  
Junyoung Park ◽  
Sung Ho Cho

Abstract: In the past few decades, deep learning algorithms have become more prevalent for signal detection and classification. To design machine learning algorithms, however, an adequate dataset is required. Motivated by the existence of several open-source camera-based hand gesture datasets, this descriptor presents UWB-Gestures, the first public dataset of twelve dynamic hand gestures acquired with ultra-wideband (UWB) impulse radars. The dataset contains a total of 9,600 samples gathered from eight different human volunteers. UWB-Gestures eliminates the need to employ UWB radar hardware to train and test the algorithm. Additionally, the dataset can provide a competitive environment for the research community to compare the accuracy of different hand gesture recognition (HGR) algorithms, enabling the provision of reproducible research results in the field of HGR through UWB radars. Three radars were placed at three different locations to acquire the data, and the respective data were saved independently for flexibility.


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In the recognition of electromyogram-based hand gestures, recognition accuracy may be degraded in practical applications for various reasons, such as electrode positioning bias and differences between subjects. Beyond these, the change in electromyogram signals due to different arm postures, even for identical hand gestures, is also an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses the signals of the accelerometer and electromyogram simultaneously to recognize hand gestures correctly across various arm postures. For the recognition of hand gestures, the electromyogram signals are statistically modeled conditioned on the arm postures. In the experiments, we compared cases that took arm posture into account with cases that disregarded it. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the method proposed in this study achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. In this study, accelerometer and electromyogram signals were used simultaneously, which compensated for the effect of different arm postures on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
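One way to read "statistically modeled conditioned on the arm postures" is a posture-conditioned likelihood model: the accelerometer identifies the current arm posture, and the EMG features are then scored against posture-specific distributions. The sketch below uses independent Gaussians per feature; the function names and the model layout are assumptions for illustration, not the authors' exact formulation.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(emg_features, posture, models):
    """Pick the gesture whose posture-specific Gaussian model best explains
    the EMG features. models[gesture][posture] holds (mean, var) per feature."""
    best, best_ll = None, float("-inf")
    for gesture, by_posture in models.items():
        params = by_posture[posture]
        ll = sum(gaussian_loglik(x, m, v)
                 for x, (m, v) in zip(emg_features, params))
        if ll > best_ll:
            best, best_ll = gesture, ll
    return best
```

Because each gesture carries separate parameters per posture, a posture-induced shift in the EMG features no longer pulls the decision toward the wrong gesture.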


2020 ◽  
Vol 2 (1) ◽  
pp. 60-73
Author(s):  
Rahmiy Kurniasary ◽  
Ismail Sukardi ◽  
Ahmad Syarifuddin

The hand gesture method requires high memorization ability; some students are not active and focused in synchronizing the pronunciation of lafadz verses with hand gestures when learning to memorize and interpret the Qur'an. The purpose of this study was to examine the application of the hand gesture method in learning to memorize and interpret the Qur'an among grade X students at Madrasah Aliyah Negeri 1 Prabumulih. The research method used is descriptive qualitative analysis, with a descriptive qualitative approach and data collection through observation, interviews, documentation, and triangulation. The data were analyzed qualitatively in three stages: data reduction, data presentation, and drawing conclusions. The results are as follows. First, in applying the hand sign method in class X.IPA3, the Al-Qur'an Hadith teacher explains the material and gives examples of verses to be memorized and interpreted using hand gestures in instructional videos shown on the projector; students then apply the hand gesture method to the verse that has been taught. Second, supporting factors for the method include internal factors, namely the students' level of willingness and ability to memorize, and external factors, namely the use of media, teacher skills, and a pleasant learning atmosphere. Third, the inhibiting factors are the time required by each student, the students' level of willingness, skill in making hand gestures, and synchronization between the pronunciation of lafadz and the hand movements.


2020 ◽  
pp. 1-15
Author(s):  
Anna Bishop ◽  
Erica A. Cartmill

Abstract: Classic Maya (A.D. 250–900) art is filled with expressive figures in a variety of highly stylized poses and postures. These poses are so specific that they appear to be intentionally communicative, yet their meanings remain elusive. A few studies have scratched the surface of this issue, suggesting that a correlation exists between body language and social roles in Maya art. The present study examines whether one type of body language (hand gestures) in Classic Maya art represents and reflects elements of social structure. This analysis uses a coding approach derived from studies of hand gesture in conversation to apply an interactional approach to a static medium, thereby broadening the methods used to analyze gesture in ancient art. Statistics are used to evaluate patterns of gesture use in palace scenes across 289 figures on 94 different vases, with results indicating that the form and angling of gestures are related to social hierarchy. Furthermore, this study considers not just the individual status of each figure, but the interaction between figures. The results not only shed light on how gesture was depicted in Maya art, but also demonstrate how figural representation reflects social structure.


2020 ◽  
Vol 6 (8) ◽  
pp. 73 ◽  
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two. This paper focuses on a review of the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques that deal with the similarity and difference points, technique of hand segmentation used, classification algorithms and drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Guoliang Chen ◽  
Kaikai Ge

In this paper, a fusion method based on multiple features and a hidden Markov model (HMM) is proposed for recognizing the dynamic hand gestures corresponding to an operator's instructions in robot teleoperation. First, a valid dynamic hand gesture is segmented from continuously acquired data according to the velocity of the moving hand. Second, a feature set is introduced for dynamic hand gesture expression, which includes four sorts of features: palm posture, bending angle, the opening angle of the fingers, and gesture trajectory. Finally, HMM classifiers based on these features are built, and a weighted calculation model that fuses the probabilities of the four sorts of features is presented. The proposed method is evaluated by recognizing dynamic hand gestures acquired with a Leap Motion (LM) controller, reaching recognition rates of about 90.63% on the LM-Gesture3D dataset created for this paper and 93.3% on the Letter-gesture dataset, respectively.
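The weighted fusion step, combining the per-feature HMM scores into one decision, can be sketched as below. Working in log-probabilities and the dictionary layout are choices made for this example; the per-feature HMMs themselves are assumed to be trained elsewhere.

```python
def fuse_scores(loglik_by_feature, weights):
    """Weighted sum of per-feature HMM log-likelihoods for one gesture class."""
    return sum(weights[f] * loglik_by_feature[f] for f in weights)

def recognize(scores, weights):
    """scores[gesture][feature] -> log-likelihood produced by that feature's
    HMM; return the gesture with the highest fused score."""
    return max(scores, key=lambda g: fuse_scores(scores[g], weights))
```

In the paper's setting the weights would span the four feature sorts (palm posture, bending angle, opening angle, trajectory); the two-feature table below is just a toy instance.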


2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a region-based convolutional neural network focused on the recognition and localization of hand gestures, in this case two types of gesture, open and closed hand, in order to recognize such gestures against dynamic backgrounds. The neural network is trained and validated, achieving 99.4% validation accuracy in gesture recognition and 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through recognition times, its behavior on trained and untrained gestures, and complex backgrounds.
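RoI localization quality is conventionally judged by the intersection-over-union between a predicted and a ground-truth box; the abstract does not state its exact criterion, so the sketch below is simply the standard IoU computation.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Overlapping rectangle (empty if the boxes are disjoint).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a threshold such as 0.5.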


SCITECH Nepal ◽  
2019 ◽  
Vol 14 (1) ◽  
pp. 22-29
Author(s):  
Sanish Manandhar ◽  
Sushana Bajracharya ◽  
Sanjeev Karki ◽  
Ashish Kumar Jha

The main purpose of this paper is to present a system that converts a given sign used by a disabled person into its appropriate textual, audio, and pictorial form, using components such as an Arduino Mega, flex sensors, and an accelerometer, so that the sign can be understood by a common person. A wearable glove controller is designed with flex sensors attached to each finger, which allows the system to sense finger movements, and a GY-61 accelerometer, which is used to sense the hand movement of the disabled person. The wearable input glove controller sends the collected input signals to the system for processing. The system uses a Random Forest algorithm to predict the correct output, with an accuracy of 85% on the current training model.
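The glove's feature layout, five flex-sensor readings plus the three GY-61 axes, can be sketched as below. For a self-contained example a nearest-centroid rule stands in for the paper's Random Forest; the function names and the centroid table are hypothetical.

```python
def make_feature_vector(flex, accel):
    """Concatenate five flex-sensor readings with (x, y, z) acceleration."""
    assert len(flex) == 5 and len(accel) == 3
    return list(flex) + list(accel)

def nearest_centroid(vec, centroids):
    """Stand-in classifier: pick the sign whose mean feature vector is
    closest in squared Euclidean distance (the paper uses a Random Forest)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda sign: dist2(vec, centroids[sign]))
```

In the real system the 8-element vector would be streamed from the Arduino and the classifier trained on labeled glove recordings.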

