Recognition of EEG Signals from Imagined Vowels Using Deep Learning Methods

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6503
Author(s):  
Luis Carlos Sarmiento ◽  
Sergio Villamizar ◽  
Omar López ◽  
Ana Claros Collazos ◽  
Jhon Sarmiento ◽  
...  

The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCI) that seeks communication between areas of the cerebral cortex related to language and devices or machines. However, the complexity of this brain process makes the analysis and classification of this type of signal a relevant topic of research. The goals of this study were: to develop a new algorithm based on Deep Learning (DL), referred to as CNNeeg1-1, to recognize EEG signals in imagined vowel tasks; to create an imagined speech database with 50 subjects specialized in imagined vowels from the Spanish language (/a/,/e/,/i/,/o/,/u/); and to contrast the performance of the CNNeeg1-1 algorithm with the DL Shallow CNN and EEGNet benchmark algorithms using an open access database (BD1) and the newly developed database (BD2). In this study, a mixed analysis of variance (ANOVA) was conducted to assess the intra-subject and inter-subject training of the proposed algorithms. The results show that for intra-subject training analysis, the best performance among the Shallow CNN, EEGNet, and CNNeeg1-1 methods in classifying imagined vowels (/a/,/e/,/i/,/o/,/u/) was exhibited by CNNeeg1-1, with an accuracy of 65.62% for the BD1 database and 85.66% for the BD2 database.
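
The intra-subject vs. inter-subject distinction above boils down to how train/test splits are formed. The following sketch illustrates both split strategies under assumed data shapes (a dict mapping subject IDs to trial lists); it is a generic illustration, not the paper's actual evaluation code:

```python
import random

def intra_subject_splits(trials, test_fraction=0.2, seed=0):
    """Split each subject's trials into train/test sets independently,
    so a model is trained and evaluated on data from the same subject
    (intra-subject evaluation)."""
    rng = random.Random(seed)
    splits = {}
    for subject, subject_trials in trials.items():
        shuffled = subject_trials[:]
        rng.shuffle(shuffled)
        n_test = max(1, int(len(shuffled) * test_fraction))
        splits[subject] = (shuffled[n_test:], shuffled[:n_test])
    return splits

def inter_subject_split(trials, test_subjects):
    """Hold out whole subjects: train on everyone else, test on unseen
    subjects, measuring cross-subject generalization."""
    train = [t for s, ts in trials.items() if s not in test_subjects for t in ts]
    test = [t for s, ts in trials.items() if s in test_subjects for t in ts]
    return train, test
```

Comparing accuracies obtained under both regimes across subjects is what motivates a mixed ANOVA as used in the study.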

2021 ◽  
Vol 11 (11) ◽  
pp. 4922
Author(s):  
Tengfei Ma ◽  
Wentian Chen ◽  
Xin Li ◽  
Yuting Xia ◽  
Xinhua Zhu ◽  
...  

To explore whether the brain contains pattern differences in the rock–paper–scissors (RPS) imagery task, this paper attempts to classify this task using fNIRS and deep learning. In this study, we designed an RPS task with a total duration of 25 min and 40 s, and recruited 22 volunteers for the experiment. We used the fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activities of these participants in the RPS task. The time series classification (TSC) algorithm was introduced into the time-domain fNIRS signal classification. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. The CNN-based TSC method is suitable for the classification of fNIRS signals in RPS motor imagery tasks, and may open new application directions for the development of brain–computer interfaces (BCI).
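
A TSC pipeline like the one described first cuts the continuous recording into per-trial windows before any classifier sees the data. The sketch below shows that segmentation step; the sampling rate and trial/rest timings are illustrative assumptions, not the paper's actual paradigm:

```python
def segment_trials(signal, fs, trial_s, rest_s, n_trials):
    """Cut a continuous single-channel recording (sampled at fs Hz) into
    fixed-length trial windows, skipping a rest period between trials.
    Returns a list of equal-length trial segments."""
    trial_len = int(trial_s * fs)
    stride = int((trial_s + rest_s) * fs)  # trial plus rest, in samples
    trials = []
    for i in range(n_trials):
        start = i * stride
        segment = signal[start:start + trial_len]
        if len(segment) < trial_len:  # recording ended early
            break
        trials.append(segment)
    return trials
```

Each returned segment (or a stack of them across channels) would then be fed to the time-series classifier.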


2021 ◽  
Author(s):  
Tao Wu ◽  
Xiangzeng Kong ◽  
Yiwen Wang ◽  
Xue Yang ◽  
Jingxuan Liu ◽  
...  

2021 ◽  
Author(s):  
Ana Siravenha ◽  
Walisson Gomes ◽  
Renan Tourinho ◽  
Sergio Viademonte ◽  
Bruno Gomes

Classification of electroencephalography (EEG) signals is a complex task: EEG is a non-stationary time process with a low signal-to-noise ratio. Among the many methods used for EEG classification, those based on Deep Learning (DL) have been relatively successful in providing high classification accuracies. In the present study we aimed to classify resting-state EEGs measured from workers of a mining complex. Just after the EEG was collected, the workers underwent training in a 4D virtual reality simulator that emulates iron ore excavation, from which parameters related to their performance were analyzed by the technical staff, who classified the workers into four groups based on their productivity. Two convolutional neural networks (ConvNets) were then used to classify the workers' EEG based on the same productivity labels provided by the technical staff. The neural data was used in three configurations in order to evaluate the amount of data required for a high-accuracy classification. In isolation, channel T5 achieved 83% accuracy, the subtraction of channels P3 and Pz achieved 99%, and using all channels simultaneously reached 99.40%. This study provides results that add to the recent literature showing that even simple DL architectures are able to handle complex time series such as the EEG. In addition, it pinpoints an application in industry with vast possibilities of expansion.
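
The "subtraction of channels P3 and Pz" configuration amounts to deriving a single bipolar-style channel from two electrodes before classification. A minimal sketch of that derivation, assuming the EEG is stored as a dict of equal-length per-channel sample lists (the data layout is an assumption for illustration):

```python
def channel_difference(eeg, ch_a, ch_b):
    """Derive a bipolar-style channel by subtracting one electrode's
    time series from another's, sample by sample (e.g. P3 - Pz).
    `eeg` maps channel names to equal-length lists of samples."""
    a, b = eeg[ch_a], eeg[ch_b]
    return [x - y for x, y in zip(a, b)]
```

The resulting single time series would replace the two raw channels as input to the ConvNet.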


2020 ◽  
pp. 1-1
Author(s):  
Leila Farsi ◽  
Siuly Siuly ◽  
Enamul Kabir ◽  
Hua Wang

Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2854 ◽  
Author(s):  
Kwon-Woo Ha ◽  
Jin-Woo Jeong

Various convolutional neural network (CNN)-based approaches have been recently proposed to improve the performance of motor imagery-based brain-computer interfaces (BCIs). However, the classification accuracy of CNNs is compromised when target data are distorted. Specifically for motor imagery electroencephalogram (EEG), the measured signals, even from the same person, are not consistent and can be significantly distorted. To overcome these limitations, we propose to apply a capsule network (CapsNet) for learning various properties of EEG signals, thereby achieving better and more robust performance than previous CNN methods. The proposed CapsNet-based framework classifies the two-class motor imagery, namely right-hand and left-hand movements. The motor imagery EEG signals are first transformed into 2D images using the short-time Fourier transform (STFT) algorithm and then used for training and testing the capsule network. The performance of the proposed framework was evaluated on the BCI competition IV 2b dataset. The proposed framework outperformed state-of-the-art CNN-based methods and various conventional machine learning approaches. The experimental results demonstrate the feasibility of the proposed approach for classification of motor imagery EEG signals.
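
The STFT step described above turns a 1D EEG trace into a 2D time-frequency image by taking windowed spectra. The sketch below uses a naive DFT to stay dependency-free; real pipelines would use an FFT library and a taper such as a Hann window, and the window/hop sizes here are illustrative assumptions:

```python
import cmath

def stft_magnitude(signal, win_len, hop):
    """Short-time Fourier transform magnitudes: slide a window over the
    signal, take the DFT of each frame, and stack the magnitude spectra
    into a 2D array (list of frames, each a list of frequency bins)."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len]
        spectrum = []
        for k in range(win_len // 2 + 1):  # non-negative frequencies only
            coeff = sum(x * cmath.exp(-2j * cmath.pi * k * n / win_len)
                        for n, x in enumerate(frame))
            spectrum.append(abs(coeff))
        frames.append(spectrum)
    return frames
```

Stacked per-channel, these magnitude maps form the 2D "images" that the capsule network consumes.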


Author(s):  
Robinson Jiménez-Moreno ◽  
Javier Orlando Pinzón-Arenas ◽  
César Giovany Pachón-Suescún

This article presents work oriented to assistive robotics, in which a scenario is established for a robot to place a tool in the hand of a user who has verbally requested it by name. For this, three convolutional neural networks were trained: one for recognition of a group of tools (scalpel, screwdriver, and scissors), which obtained an accuracy of 98% in identifying the tools established for the application; one for speech recognition, trained with the names of the tools in Spanish, whose validation accuracy reached 97.5% in the recognition of the words; and another for recognition of the user's hand, considering the classification of two gestures, open and closed hand, where a 96.25% accuracy was achieved. With those networks, real-time tests were performed, with the delivery of each tool achieving 100% accuracy, i.e., the robot was able to correctly identify what the user requested, recognize each tool, and deliver the one needed when the user opened their hand, taking an average of 45 seconds to execute the application.
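
The three networks chain into a simple decision pipeline: the speech network names the requested tool, the tool network confirms it is in view, and the hand network gates delivery on an open hand. A minimal control-flow sketch, where the classifier callables are stand-ins for the trained CNNs (all names and return strings are assumptions for illustration):

```python
def delivery_pipeline(audio, tool_image, hand_image,
                      recognize_speech, classify_tool, classify_hand):
    """Sketch of the three-network pipeline. Each classifier argument is
    a callable standing in for one of the trained CNNs; returns a status
    string describing the robot's next action."""
    requested = recognize_speech(audio)            # e.g. "scalpel"
    if classify_tool(tool_image) != requested:     # wrong tool in view
        return "tool not found"
    if classify_hand(hand_image) != "open":        # wait for open hand
        return "waiting for open hand"
    return "deliver " + requested
```

In the real system this loop would repeat over camera frames until the open-hand condition is met.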


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7083
Author(s):  
Agnieszka Wosiak ◽  
Aleksandra Dura

Based on the growing interest in encephalography to enhance human–computer interaction (HCI) and develop brain–computer interfaces (BCIs) for control and monitoring applications, efficient information retrieval from EEG sensors is of great importance. It is difficult due to noise from internal and external artifacts and physiological interferences. The enhancement of EEG-based emotion recognition processes can be achieved by selecting the features that should be taken into account in further analysis. Therefore, the automatic feature selection of EEG signals is an important research area. We propose a multistep hybrid approach incorporating the Reversed Correlation Algorithm for automated selection of frequency band and electrode combinations. Our method is simple to use and significantly reduces the number of sensors to only three channels. The proposed method has been verified by experiments performed on the DEAP dataset. The obtained effects have been evaluated regarding the accuracy of recognizing two emotion dimensions, valence and arousal. In comparison to other research studies, our method achieved classification results that were 4.20–8.44% greater. Moreover, it can be perceived as a universal EEG signal classification technique, as it belongs to unsupervised methods.
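
Channel-reduction approaches of this kind rank candidate frequency band and electrode features by how strongly each relates to the target label, then keep only the top few. The sketch below shows a generic correlation-based filter of that shape; it is an illustration of the idea, not the paper's Reversed Correlation Algorithm itself, and the channel names are assumptions:

```python
def rank_channels_by_correlation(features, labels, top_k=3):
    """Rank channel features by the absolute Pearson correlation of each
    feature vector with the labels, keeping the top_k channel names.
    `features` maps channel names to per-trial feature lists."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        if sx == 0 or sy == 0:
            return 0.0
        return cov / (sx * sy)
    scores = {ch: abs(pearson(vals, labels)) for ch, vals in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Reducing to three ranked channels, as the paper reports, then shrinks both the sensor setup and the classifier's input dimensionality.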

