A Single-Trial P300 Detector Based on Symbolized EEG and Autoencoded-(1D)CNN to Improve ITR Performance in BCIs

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3961
Author(s):  
Daniela De Venuto ◽  
Giovanni Mezzina

In this paper, we propose a breakthrough single-trial P300 detector that maximizes the information transfer rate (ITR) of the brain–computer interface (BCI) while maintaining high recognition accuracy. The architecture, designed to improve the portability of the algorithm, demonstrated full implementability on a dedicated embedded platform. The proposed P300 detector combines a novel pre-processing stage based on EEG signal symbolization with an autoencoded convolutional neural network (CNN). The proposed system acquires data from only six EEG channels and treats them with a low-complexity preprocessing stage comprising baseline correction, winsorizing, and symbolization. The symbolized EEG signals are then sent to an autoencoder model to emphasize the temporal features that are meaningful for the following CNN stage. The latter consists of a seven-layer CNN, including a 1D convolutional layer and three dense ones. Two datasets were analyzed to assess the algorithm's performance: one from the P300 speller application in the BCI Competition III data and one self-collected during a prototype car driving experiment. Experimental results on the P300 speller dataset showed that the proposed method achieves an average ITR (over two subjects) of 16.83 bits/min, outperforming the state of the art for this parameter by +5.75 bits/min. Jointly with the speed increase, the recognition performance reached 51.78 ± 6.24% in terms of the harmonic mean of precision and recall (F1-score). The same method applied to the prototype car driving data led to an ITR of ~33 bits/min with an F1-score of 70.00% in a single-trial P300 detection context, allowing fluid use of the BCI for driving purposes. The realized network was validated on an STM32L4 microcontroller target for complexity and implementation assessment. The implementation showed an overall resource occupation of 5.57% of the total available ROM and ~3% of the available RAM, requiring less than 3.5 ms to provide the classification outcome.
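As a minimal sketch of the ideas above, the snippet below implements the named preprocessing steps (baseline correction, winsorizing, symbolization) together with the standard Wolpaw ITR formula that underlies the bits/min figures. The percentile bounds, symbol alphabet size, and window layout are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def preprocess_epoch(epoch, baseline_samples=50, pct=(5, 95), n_symbols=4):
    """epoch: (channels, samples) single-trial EEG window. Parameters are assumed."""
    # Baseline correction: subtract the mean of the pre-stimulus interval.
    epoch = epoch - epoch[:, :baseline_samples].mean(axis=1, keepdims=True)
    # Winsorizing: clip amplitudes to the chosen percentiles to tame artifacts.
    lo, hi = np.percentile(epoch, pct, axis=1, keepdims=True)
    epoch = np.clip(epoch, lo, hi)
    # Symbolization: quantize each sample into a small discrete alphabet.
    edges = np.linspace(epoch.min(), epoch.max(), n_symbols + 1)[1:-1]
    return np.digitize(epoch, edges)  # integer symbols in [0, n_symbols - 1]

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Standard Wolpaw information transfer rate in bits/min."""
    p, n = accuracy, n_classes
    bits = np.log2(n)
    if 0 < p < 1:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds
```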

2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Ernest Nlandu Kamavuako ◽  
Mads Jochumsen ◽  
Imran Khan Niazi ◽  
Kim Dremstrup

Detection of movement intention from the movement-related cortical potential (MRCP) derived from electroencephalogram (EEG) signals has been shown to be important, in combination with assistive devices, for effective neurofeedback in rehabilitation. In this study, we compare time- and frequency-domain features for detecting movement intention from EEG signals prior to movement execution. Data were recorded from 24 able-bodied subjects, 12 performing real movements and 12 performing imaginary movements. Furthermore, six stroke patients with lower limb paresis were included. Temporal and spectral features were investigated in combination with linear discriminant analysis and compared with template matching. The results showed that spectral features were best suited for differentiating between movement intention and noise across the different tasks. With spectral features, the ensemble average across tasks (error = 3.4 ± 0.8%, sensitivity = 97.2 ± 0.9%, specificity = 97 ± 1%) was significantly better (P < 0.01) than with temporal features (error = 15 ± 1.4%, sensitivity = 85 ± 1.3%, specificity = 84 ± 2%). The proposed approach (error = 3.4 ± 0.8%) also significantly outperformed template matching (error = 26.9 ± 2.3%) (P < 0.001). These results imply that frequency information is important for detecting movement intention, which is promising for applying this approach to provide patient-driven real-time neurofeedback.
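A minimal sketch of the spectral-feature-plus-LDA detection scheme compared above, assuming scipy and scikit-learn; the sampling rate, window length, and band edges are illustrative, not the study's exact settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 500  # Hz, assumed sampling rate

def spectral_features(window, bands=((0.1, 4), (4, 8), (8, 13), (13, 30))):
    """Band-power features from one EEG window of shape (channels, samples)."""
    freqs, psd = welch(window, fs=FS, nperseg=min(256, window.shape[-1]))
    return np.concatenate([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands
    ])

def train_detector(X_windows, y):
    """X_windows: EEG windows; y: 1 = movement intention, 0 = noise/idle."""
    X = np.stack([spectral_features(w) for w in X_windows])
    return LinearDiscriminantAnalysis().fit(X, y)
```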


2021 ◽  
Author(s):  
Huimin Li ◽  
Ying Zeng ◽  
Xiyu Song ◽  
Li Tong ◽  
Jun Shu ◽  
...  

2018 ◽  
Vol 7 (2.24) ◽  
pp. 159
Author(s):  
Durga Prasad K ◽  
Manjunathachari K ◽  
Giri Prasad M.N

This paper focuses on image retrieval using a sketch-based image retrieval (SBIR) system. Its low-complexity model for image representation makes SBIR an attractive choice for next-generation applications in low-resource environments. The SBIR approach uses a geometrical region representation to describe features and uses them for recognition; in the SBIR model, the represented features define the image. To improve SBIR recognition performance, this paper proposes a new invariant model termed "orientation feature transformed modeling". The approach enhances the invariance property and improves retrieval performance in the transformed domain. The experimental results illustrate the significance of invariant orientation feature representation in SBIR over conventional models.
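The abstract does not give implementation details, so the following is only a hypothetical sketch of one common way to build rotation-tolerant orientation features for SBIR: a gradient-orientation histogram made rotation-invariant by aligning it to its dominant bin. OpenCV (cv2) is assumed; the bin count and distance metric are illustrative choices.

```python
import cv2
import numpy as np

def orientation_descriptor(gray, n_bins=36):
    """Global edge-orientation histogram, rotated to its dominant peak."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 360), weights=mag)
    hist = np.roll(hist, -int(hist.argmax()))  # align dominant orientation
    return hist / (hist.sum() + 1e-9)          # L1-normalize for scale tolerance

def rank(query_desc, db_descs):
    """Rank database images by histogram distance to the sketch query."""
    d = [np.linalg.norm(query_desc - x, ord=1) for x in db_descs]
    return np.argsort(d)
```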


2017 ◽  
Author(s):  
G. Quiroz

One of the most interesting brain–machine interface (BMI) applications is the control of assistive devices for the rehabilitation of neuromotor pathologies. This means that assistive devices (prostheses, orthoses, or exoskeletons) are able to detect the user's motion intention through the acquisition and interpretation of electroencephalographic (EEG) signals. Such interpretation is based on time, frequency, or spatial features of the EEG signals. For this reason, this paper proposes a coherence-based EEG study during locomotion that, together with graph theory, establishes the spatio-temporal parameters characteristic of this study. The results show that, along with the temporal features of the signal, it is possible to find spatial patterns for classifying the motion tasks of interest. In this manner, connectivity analysis together with graphs provides reliable information about the spatio-temporal characteristics of the neural activity, showing a dynamic pattern in the connectivity during locomotion tasks.
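A minimal sketch, under assumed parameters, of the coherence-plus-graph analysis outlined above: pairwise magnitude-squared coherence between EEG channels, averaged over a band of interest and thresholded into a connectivity graph. scipy and networkx are assumed; the sampling rate, band, and threshold are illustrative.

```python
import numpy as np
import networkx as nx
from scipy.signal import coherence
from itertools import combinations

FS = 250  # Hz, assumed sampling rate

def connectivity_graph(eeg, band=(8, 13), threshold=0.6):
    """eeg: (channels, samples). Returns a graph of strongly coherent channel pairs."""
    g = nx.Graph()
    g.add_nodes_from(range(eeg.shape[0]))
    for i, j in combinations(range(eeg.shape[0]), 2):
        f, cxy = coherence(eeg[i], eeg[j], fs=FS, nperseg=256)
        c = cxy[(f >= band[0]) & (f <= band[1])].mean()  # mean band coherence
        if c >= threshold:
            g.add_edge(i, j, weight=c)
    return g

# Graph-theoretic descriptors then follow, e.g.:
# deg = dict(g.degree()); cc = nx.clustering(g, weight="weight")
```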


2017 ◽  
Author(s):  
Saleh Alzahrani ◽  
Charles W Anderson

Objective: The P300 signal is an electroencephalography (EEG) positive deflection observed 300 ms to 600 ms after an infrequent, but expected, stimulus is presented to a subject. The aim of this study was to investigate the capability of the Emotiv EPOC+ headset to capture and record the P300 wave. Moreover, the effects of using different matrix sizes, flash durations, and colors were studied. Methods: Participants attended to one cell of either a 6x6 or a 3x3 matrix while the rows and columns flashed randomly at different durations (100 ms or 175 ms). The EEG signals were sent wirelessly to the OpenViBE software, which was used to run the P300 speller. Results: The results provide evidence of the capability of the Emotiv EPOC+ headset to detect P300 signals from two channels, O1 and O2. In addition, as the matrix size increases, the P300 amplitude increases. The results also show that a longer flash duration resulted in a larger P300 amplitude. Also, the effect of using a colored matrix was clear on the O2 channel. Furthermore, the results show that participants reached accuracies above 70% after three to four training sessions. Conclusion: The results confirmed the capability of the Emotiv EPOC+ headset for detecting P300 signals. In addition, matrix size, flash duration, and color can affect P300 speller performance. Significance: Such an affordable and portable headset could be used to control P300-based BCIs or other BCI systems, especially for out-of-the-lab applications.
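A hypothetical sketch of how the P300 amplitude comparison above could be computed offline: epoch the EEG around each flash, average the target epochs, and measure the peak in the 300-600 ms window. The sampling rate, channel layout, and window bounds are assumptions, not the study's recorded settings.

```python
import numpy as np

FS = 128  # Hz, typical Emotiv EPOC+ sampling rate (assumed)

def epochs(eeg, onsets, pre=0.1, post=0.7):
    """eeg: (channels, samples); onsets: flash sample indices.
    Returns (trials, channels, samples) epochs spanning [-pre, post] s."""
    a, b = int(pre * FS), int(post * FS)
    return np.stack([eeg[:, t - a: t + b] for t in onsets])

def p300_amplitude(target_epochs, window=(0.3, 0.6), pre=0.1):
    """Peak of the target-average ERP inside the 300-600 ms post-stimulus window."""
    erp = target_epochs.mean(axis=0)          # average over trials
    a = int((pre + window[0]) * FS)           # map post-stimulus time to index
    b = int((pre + window[1]) * FS)
    return erp[:, a:b].max(axis=1)            # per-channel peak amplitude
```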


Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1208 ◽  
Author(s):  
Kang Yue ◽  
Danli Wang

Visual fatigue evaluation plays an important role in applications such as virtual reality, since visual fatigue symptoms can seriously affect the user experience. Existing visual fatigue evaluation methods require hand-crafted features for classification and conduct feature extraction and classification separately. In this paper, we conduct a designed experiment to collect electroencephalogram (EEG) signals at various visual fatigue levels, and we present a multi-scale convolutional neural network (CNN) architecture named MorletInceptionNet that detects visual fatigue from EEG input by exploiting the spatial-temporal structure of multichannel EEG signals. MorletInceptionNet adopts a joint space-time-frequency feature extraction scheme in which Morlet-wavelet-like kernels are used for time-frequency raw feature extraction and an inception architecture is then used to extract multi-scale temporal features. The multi-scale temporal features are concatenated and fed to the fully connected layer for visual fatigue classification. In the experimental evaluation, we compare our method with five state-of-the-art methods; the results demonstrate that our model achieves the best overall performance on two widely used evaluation metrics, i.e., classification accuracy and kappa value. Furthermore, we use input-perturbation network-prediction correlation maps to analyze in depth why the proposed method outperforms the others. The results suggest that our model is sensitive to perturbation of the β (14–30 Hz) and γ (30–40 Hz) bands, and that their spatial patterns correlate highly with those of the corresponding power spectral densities, which are traditionally used as evaluation features. This finding provides evidence for the hypothesis that the proposed model can learn joint time-frequency-space features to distinguish fatigue levels automatically.
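A simplified PyTorch sketch of the idea behind the architecture described above, not the authors' implementation: fixed Morlet-like temporal kernels provide the time-frequency stage, followed by parallel inception-style convolutions at several temporal scales. All layer sizes, kernel frequencies, and the Gaussian width are illustrative assumptions.

```python
import torch
import torch.nn as nn

def morlet_kernel(freq, fs=200, width=0.5):
    """Cosine carrier under a Gaussian envelope: a Morlet-like 1D kernel."""
    t = torch.arange(-width, width, 1.0 / fs)
    return torch.cos(2 * torch.pi * freq * t) * torch.exp(-t**2 / (2 * 0.1**2))

class MorletInceptionSketch(nn.Module):
    def __init__(self, n_channels=32, freqs=(6, 10, 20, 35), n_classes=3):
        super().__init__()
        kernels = torch.stack([morlet_kernel(f) for f in freqs])       # (F, T)
        conv = nn.Conv1d(1, len(freqs), kernels.shape[1],
                         padding="same", bias=False)
        conv.weight.data = kernels.unsqueeze(1)     # initialize with Morlet kernels
        self.tf = conv                              # time-frequency stage
        self.branches = nn.ModuleList([             # inception-style multi-scale stage
            nn.Conv1d(len(freqs) * n_channels, 16, k, padding="same")
            for k in (8, 16, 32)
        ])
        self.head = nn.Linear(16 * 3, n_classes)    # fully connected classifier

    def forward(self, x):                           # x: (batch, channels, samples)
        b, c, s = x.shape
        z = self.tf(x.reshape(b * c, 1, s)).reshape(b, -1, s)
        z = torch.cat([br(z) for br in self.branches], dim=1).relu()
        return self.head(z.mean(dim=-1))            # global average pool + classify
```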

