Using Nonlinear Dynamics of EEG Signals to Decode Hand Movement Directions Under Bimanual Movement

Author(s):  
Jiarong Wang ◽  
Luzheng Bi ◽  
Weijie Fei

Abstract Background: Decoding hand movement parameters from electroencephalogram (EEG) signals can provide intuitive control for brain-computer interfaces (BCIs). However, most existing studies of EEG-based hand movement decoding focus on single-hand movement. Since bimanual movement is common in human augmentation systems, we investigate the neural signatures and the decoding of primary hand movement direction from EEG signals under an opposite hand movement. Methods: The decoding model used an echo state network (ESN) to extract nonlinear dynamics parameters of movement-related cortical potentials (MRCPs) as decoding features and linear discriminant analysis as the classifier. Results: Significant differences in MRCPs were found between movement conditions with and without an opposite hand movement. Furthermore, using the ESN-based models, decoding accuracies reached 86.03 ± 7.32% and 88.45 ± 6.16% under the conditions without and with the opposite hand movement, respectively. Conclusions: These findings show that the proposed method performs well in decoding primary hand movement directions under conditions with and without an opposite hand movement. This study may open a new avenue for decoding hand movement parameters from EEG signals and lay a foundation for the future development of BCI-based human augmentation systems.
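The abstract names the pipeline but not its implementation details. The following is a minimal, illustrative sketch of an ESN-based feature extractor feeding an LDA classifier; the epoch shape, reservoir size, and the use of the time-averaged reservoir state as the feature vector are assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): a fixed random ESN reservoir
# is driven by a single-trial MRCP epoch, the reservoir states are summarized
# into a feature vector, and LDA classifies the movement direction.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_channels, n_reservoir = 32, 100

# Fixed random reservoir, shared across all trials so features are comparable.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_channels))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def esn_features(epoch):
    """epoch: (n_samples, n_channels) low-frequency MRCP segment."""
    x = np.zeros(n_reservoir)
    states = []
    for sample in epoch:
        x = np.tanh(W_in @ sample + W @ x)   # nonlinear reservoir update
        states.append(x)
    return np.mean(states, axis=0)           # time-averaged state as the feature vector

# Hypothetical data: 120 trials, 1-s epochs at 256 Hz, two movement directions.
epochs = rng.standard_normal((120, 256, n_channels))
labels = rng.integers(0, 2, 120)
X = np.array([esn_features(ep) for ep in epochs])
clf = LinearDiscriminantAnalysis().fit(X[:100], labels[:100])
print("held-out accuracy:", clf.score(X[100:], labels[100:]))
```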

2003 ◽  
Vol 89 (2) ◽  
pp. 1136-1142 ◽  
Author(s):  
Yoram Ben-Shaul ◽  
Eran Stark ◽  
Itay Asher ◽  
Rotem Drori ◽  
Zoltan Nadasdy ◽  
...  

Although previous studies have shown that activity of neurons in the motor cortex is related to various movement parameters, including the direction of movement, the spatial pattern by which these parameters are represented is still unresolved. The current work was designed to study the pattern of representation of the preferred direction (PD) of hand movement over the cortical surface. By studying pairwise PD differences, and by applying a novel implementation of the circular variance during preparation and movement periods in the context of a center-out task, we demonstrate a nonrandom distribution of PDs over the premotor and motor cortical surface of two monkeys. Our analysis shows that, whereas PDs of units recorded by nonadjacent electrodes are not more similar than expected by chance, PDs of units recorded by adjacent electrodes are. PDs of units recorded by a single electrode display the greatest similarity. Comparison of PD distributions during preparation and movement reveals that PDs of nearby units tend to be more similar during the preparation period. However, even for pairs of units recorded by a single electrode, the mean PD difference is typically large (45° and 75° during preparation and movement, respectively), so that a strictly modular representation of hand movement direction over the cortical surface is not supported by our data.
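The analysis rests on two quantities: the pairwise difference between preferred directions (PDs) and the circular variance of a set of PDs. Below is a minimal sketch of their textbook forms; the paper's "novel implementation" of circular variance may differ, and the example PDs are hypothetical.

```python
# Sketch (not the authors' code) of pairwise PD differences and circular variance.
import numpy as np

def pd_difference_deg(pd1_deg, pd2_deg):
    """Smallest absolute angular difference between two PDs, in [0, 180] degrees."""
    d = np.abs(pd1_deg - pd2_deg) % 360.0
    return np.minimum(d, 360.0 - d)

def circular_variance(pds_deg):
    """1 - |mean resultant vector|; 0 = identical PDs, 1 = uniformly spread PDs."""
    angles = np.deg2rad(np.asarray(pds_deg))
    R = np.abs(np.mean(np.exp(1j * angles)))
    return 1.0 - R

# Example: PDs of units recorded on the same hypothetical electrode.
same_electrode_pds = [10.0, 40.0, 55.0]
print(pd_difference_deg(10.0, 55.0))          # 45.0 degrees
print(circular_variance(same_electrode_pds))  # small value -> similar tuning
```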


Author(s):  
Lochi Yu ◽  
Cristian Ureña

Since the first recordings of brain electrical activity more than 100 years ago, remarkable contributions have been made to understanding brain functionality and its interaction with the environment. Regardless of the nature of the brain-computer interface (BCI), a world of opportunities has been opened not only for people with severe disabilities but also for those pursuing innovative human interfaces. A deeper understanding of EEG signals, along with refined recording technologies, is helping to improve the performance of EEG-based BCIs. Better processing and feature extraction methods, such as Independent Component Analysis (ICA) and the Wavelet Transform (WT), are giving promising results that need to be explored. Different types of classifiers, and combinations of them, have been used in EEG BCIs. Linear, neural, and nonlinear Bayesian classifiers have been the most widely used, providing accuracies ranging between 60% and 90%. Some, like Support Vector Machine (SVM) classifiers, demand more computational resources but generalize well. Linear Discriminant Analysis (LDA) classifiers generalize poorly but require few computational resources, making them suitable for some real-time BCIs. Better classifiers must be developed to tackle the large pattern variability across subjects by using every available resource, method, and technology.
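A minimal sketch of the pipeline this survey describes: ICA for source separation, wavelet decomposition for feature extraction, and a comparison of LDA and SVM classifiers. The channel count, wavelet choice ('db4', 4 levels), sub-band energy features, and the toy data are assumptions for illustration only.

```python
# Illustrative ICA + wavelet-energy features, compared across LDA and SVM.
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 80, 8, 512
trials = rng.standard_normal((n_trials, n_channels, n_samples))  # toy EEG trials
labels = rng.integers(0, 2, n_trials)

def wavelet_features(trial):
    """Log energy of each wavelet sub-band, per ICA component."""
    ica = FastICA(n_components=trial.shape[0], random_state=0)
    sources = ica.fit_transform(trial.T).T            # (components, samples)
    feats = []
    for comp in sources:
        coeffs = pywt.wavedec(comp, "db4", level=4)   # approximation + detail bands
        feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
    return np.array(feats)

X = np.array([wavelet_features(t) for t in trials])
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, labels, cv=5).mean())
```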


2018 ◽  
Vol 7 (2) ◽  
pp. 279-285
Author(s):  
Sandy Akbar Dewangga ◽  
Handayani Tjandrasa ◽  
Darlis Herumurti

Brain-computer interfaces have been explored for years with the intent of using human thoughts to control mechanical systems. By capturing signals directly from the human brain through electroencephalography (EEG), human thoughts can be translated into motion commands for a robot. This paper presents a prototype for an EEG-based brain-actuated robot control system using mental commands. In this study, the Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) methods were combined to establish the best model. A dataset containing features of EEG signals was obtained non-invasively from the subject using an Emotiv EPOC headset. The best model was then used by the brain-computer interface (BCI) to classify the EEG signals into motion commands controlling the robot directly. The classification gave an average accuracy of 69.06%.
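One plausible reading of "LDA and SVM were combined to establish the best model" is cross-validated selection between the two candidates; the sketch below follows that reading and is not the authors' confirmed procedure. The feature matrix, class count, and command mapping are hypothetical stand-ins for features exported from the Emotiv EPOC recordings.

```python
# Cross-validated selection between LDA and SVM, then refit of the winner.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 14))   # e.g. one feature per EPOC channel (assumption)
y = rng.integers(0, 4, 200)          # four motion commands (assumption)

candidates = {"LDA": LinearDiscriminantAnalysis(),
              "SVM": SVC(kernel="rbf", C=1.0)}
scores = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)   # refit the winner on all data
print(scores, "->", best_name)

# At run time the BCI would map best_model.predict(new_features) to robot commands,
# e.g. {0: "forward", 1: "backward", 2: "left", 3: "right"} (hypothetical mapping).
```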


2021 ◽  
pp. 1-16
Author(s):  
Wenbo Huang ◽  
Changyuan Wang ◽  
Hongbo Jia

Traditional intention inference methods rely solely on EEG, eye movement, or tactile feedback, and their recognition rates are low. To improve the accuracy of pilot intention recognition, this paper proposes a human-computer interaction intention inference method that fuses EEG, eye movement, and tactile feedback. First, EEG signals are collected near the frontal lobe from eight channels (AF7, F7, FT7, T7, AF8, F8, FT8, and T8) and features are extracted. Second, the signal data are preprocessed by baseline removal, normalization, and least-squares noise reduction. Third, a support vector machine (SVM) is applied to carry out multiple binary classifications of the eye movement direction. Finally, 8-direction recognition of the eye movement direction is realized through data fusion. Experimental results show that the classification accuracy of the proposed method reaches 75.77%, 76.7%, 83.38%, 83.64%, 60.49%, 60.93%, 66.03%, and 64.49% for the eight directions, respectively. Compared with traditional methods, the proposed algorithm achieves higher classification accuracy with a simpler realization process. The feasibility and effectiveness of using EEG signals to identify eye movement directions for intention recognition are further verified.
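A minimal sketch of decomposing the 8-direction eye-movement problem into multiple binary SVMs whose outputs are fused by voting. The exact pairwise scheme and fusion rule used in the paper are not specified in the abstract, so a one-vs-one decomposition with majority voting, plus toy features, is assumed here.

```python
# One-vs-one SVM decomposition for 8 eye-movement directions, fused by voting.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical features from the eight frontal/temporal channels (AF7, F7, FT7, T7, AF8, F8, FT8, T8).
X = rng.standard_normal((400, 8))
y = rng.integers(0, 8, 400)          # 8 eye-movement directions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = OneVsOneClassifier(SVC(kernel="rbf", C=1.0))   # 28 binary SVMs, votes fused per trial
model.fit(X_tr, y_tr)

print("per-direction accuracy:")
for d in range(8):
    mask = y_te == d
    if mask.any():
        print(d, (model.predict(X_te[mask]) == d).mean())
```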


2021 ◽  
Vol 4 (3) ◽  
pp. 23-29
Author(s):  
Areej H. Al-Anbary ◽  
Salih M. Al-Qaraawi

Recently, machine learning algorithms have been widely used in the field of electroencephalography (EEG)-based brain-computer interfaces (BCIs). In this paper, a sign language software model based on EEG brain signals was implemented to help speechless persons communicate their thoughts to others. The preprocessing stage for the EEG signals applied the Principal Component Analysis (PCA) algorithm to extract the important features and reduce data redundancy. A model for classifying ten classes of EEG signals, including facial expression (FE) and some motor execution (ME) processes, was designed. A deep learning classifier consisting of a neural network with three hidden layers was used in this work. Data sets from four different subjects were collected using a 14-channel Emotiv EPOC+ device. A classification accuracy of 95.75% was obtained for the collected samples. An optimization process was performed on the predicted class with the aid of the user, and the sign class is then mapped to the specified sentence through a predesigned lookup table.
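A minimal sketch (not the authors' network) of the described pipeline: PCA on features derived from the 14-channel Emotiv EPOC+ recordings, followed by a neural network with three hidden layers classifying the ten FE/ME classes. The layer sizes, number of retained components, per-channel feature count, and toy data are assumptions.

```python
# PCA dimensionality reduction feeding a three-hidden-layer MLP classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 14 * 16))   # e.g. 16 features per channel (assumption)
y = rng.integers(0, 10, 500)              # 10 FE/ME classes

model = make_pipeline(
    PCA(n_components=30),                               # feature extraction / de-redundancy
    MLPClassifier(hidden_layer_sizes=(128, 64, 32),     # three hidden layers
                  max_iter=500, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```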


2008 ◽  
Vol 26 (7) ◽  
pp. 655-663 ◽  
Author(s):  
Stefan Mark Rueckriegel ◽  
Friederike Blankenburg ◽  
Roland Burghardt ◽  
Stefan Ehrlich ◽  
Günter Henze ◽  
...  


