Riemannian classification of single-trial surface EEG and sources during checkerboard and navigational images in humans

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262417
Cédric Simar ◽  
Robin Petit ◽  
Nichita Bozga ◽  
Axelle Leroy ◽  
Ana-Maria Cebolla ◽  

Objective Different visual stimuli are classically used to trigger visual evoked potentials comprising well-defined components linked to the content of the displayed image. These evoked components result from averaging ongoing EEG signals, in which both additive and oscillatory mechanisms contribute to the component morphology. Event-related potentials therefore often reflect a mixed situation (power variation and phase-locking), making basic and clinical interpretations difficult. Moreover, the grand-average methodology produces artificial constructs that do not reflect individual peculiarities. This has motivated new approaches based on single-trial analysis, as recently used in the brain-computer interface field. Approach We hypothesize that EEG signals may include specific information about the visual features of the displayed image and that such distinctive traits can be identified by state-of-the-art classification algorithms based on Riemannian geometry. The same classification algorithms are also applied to the dipole sources estimated by sLORETA. Main results and significance We show that our classification pipeline can effectively discriminate between the display of different visual items (checkerboard versus 3D navigational image) in single EEG trials across multiple subjects. The present methodology reaches a single-trial classification accuracy of about 84% and 93% for inter-subject and intra-subject classification, respectively, using surface EEG. Interestingly, we note that the classification algorithms trained on sLORETA source estimates fail to generalize across subjects (63%), which may be due to either the average head model used by sLORETA or the subsequent spatial filtering failing to extract discriminative information, but they reach an intra-subject classification accuracy of 82%.
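Riemannian pipelines of this kind classify single trials through their spatial covariance matrices. A minimal NumPy/SciPy sketch of the idea follows, using a minimum-distance-to-mean (MDM) classifier in the log-Euclidean metric; the paper's exact pipeline (e.g. the affine-invariant metric or tangent-space classifiers, as in pyriemann) may differ:

```python
import numpy as np
from scipy.linalg import logm, expm

def covariances(trials, reg=1e-6):
    """trials: (n_trials, n_channels, n_samples) -> regularized SPD covariances."""
    n, c, _ = trials.shape
    covs = np.empty((n, c, c))
    for i, x in enumerate(trials):
        covs[i] = np.cov(x) + reg * np.eye(c)
    return covs

def log_euclidean_mean(covs):
    """Mean in the log-Euclidean metric: expm of the average matrix log."""
    return expm(np.mean([logm(c) for c in covs], axis=0))

def log_euclidean_dist(a, b):
    return np.linalg.norm(logm(a) - logm(b), 'fro')

class MDM:
    """Minimum-Distance-to-Mean classifier on SPD covariance matrices."""
    def fit(self, trials, labels):
        covs = covariances(trials)
        self.classes_ = np.unique(labels)
        self.means_ = {k: log_euclidean_mean(covs[labels == k])
                       for k in self.classes_}
        return self
    def predict(self, trials):
        covs = covariances(trials)
        return np.array([min(self.classes_,
                             key=lambda k: log_euclidean_dist(c, self.means_[k]))
                         for c in covs])
```

Each class is summarized by a geometric mean covariance, and a trial is assigned to the class whose mean is nearest in the chosen metric; this is the standard baseline on which tangent-space methods build.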

Praveen K. Parashiva ◽  
Vinod A Prasad

Abstract When the outcome of an event is not as expected, the cognitive state that monitors performance elicits a time-locked brain response termed an Error-Related Potential (ErrP). Objective – In existing work, ErrP has not been recorded when there is a dissociation between an object and its description. The objective of this work is to propose a Serial Visual Presentation (SVP) experimental paradigm to record ErrP when an image and its label are dissociated. Additionally, this work aims to propose a novel method for detecting ErrP on a single-trial basis. Method – The method involves the design of an SVP paradigm in which labeled images from six categories (bike, car, flower, fruit, cat, and dog) are presented serially. A text (visual) label or an audio clip describing the image in one word accompanies each image. ErrP is then detected on a single-trial basis using novel electrode-averaged features. Results – The ErrP data recorded from 11 subjects show characteristics consistent with the existing ErrP literature. Single-trial ErrP detection is carried out using the novel feature extraction method on the two labeling types separately. The best average classification accuracies achieved are 69.09±4.70% and 63.33±4.56% for audio and visual labeling of the image, respectively. The proposed feature extraction method achieves higher classification accuracy than two existing feature extraction methods. Significance – This work can serve as a Brain-Computer Interface (BCI) system for quantitative evaluation and treatment of mild cognitive impairment, and it can also find non-clinical BCI applications such as image annotation.
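The abstract does not define the "electrode-averaged features" precisely; one plausible reading is to average each epoch across electrodes and then take mean amplitudes in consecutive time windows. A NumPy sketch under that assumption:

```python
import numpy as np

def electrode_averaged_features(epochs, n_windows=10):
    """Collapse epochs of shape (n_trials, n_channels, n_samples) into
    (n_trials, n_windows) features: average over electrodes, then take the
    mean amplitude in each of n_windows consecutive time windows.
    This is an illustrative reading, not the paper's exact definition."""
    avg = epochs.mean(axis=1)                       # (n_trials, n_samples)
    windows = np.array_split(avg, n_windows, axis=1)
    return np.stack([w.mean(axis=1) for w in windows], axis=1)
```

In practice these features would be fed to a standard classifier (e.g. LDA or SVM) for the single-trial ErrP-versus-correct decision; averaging across electrodes keeps the feature dimension low, which suits the small trial counts typical of ErrP experiments.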

2020 ◽  
Vol 10 (10) ◽  
pp. 726
Rupesh Kumar Chikara ◽  
Li-Wei Ko

The stop signal task has been used to quantify human inhibitory control. Inter-subject and intra-subject variability were investigated during the inhibition of human responses in a realistic environmental scenario. In the present study, we used a battleground scenario in which a sniper-scope picture was the background, a target picture was the go signal, and a nontarget picture was the stop signal. The task instructions were to respond to the target image and to inhibit the response if a nontarget image appeared. This scenario produced a threatening situation and enabled evaluation of how a subject's response inhibition manifests in a realistic setting. In this study, 32-channel electroencephalography (EEG) signals were collected from 20 participants during successful stop (response inhibition) and failed stop (response) trials. These EEG signals were used to predict two possible outcomes: successful stop or failed stop. Inter-subject variability (between subjects) and intra-subject variability (within subjects) affect the performance of participants in the classification system. The EEG signals of successful stop versus failed stop trials were classified using quadratic discriminant analysis (QDA) and linear discriminant analysis (LDA) (parametric), and the K-nearest neighbor classifier (KNNC) and a Parzen density-based classifier (PARZEN) (nonparametric), under inter- and intra-subject variability. EEG activity was found to increase during response inhibition in the frontal cortex (F3 and F4), presupplementary motor area (C3 and C4), parietal lobe (P3 and P4), and occipital lobe (O1 and O2). Therefore, the power spectral density (PSD) of the EEG signals (1–50 Hz) at the F3, F4, C3, C4, P3, P4, O1, and O2 electrodes was measured in successful stop and failed stop trials and used as the feature input for the classifiers.
Our proposed method shows an intra-subject classification accuracy of 97.61% for subject 15 with the QDA classifier at C3 (left motor cortex) and an overall inter-subject classification accuracy of 71.66% ± 9.81% with the KNNC classifier at F3 (left frontal lobe). These results demonstrate how inter-subject and intra-subject variability affects the performance of the classification system. These findings can be used to better understand the psychopathology of attention deficit hyperactivity disorder (ADHD), obsessive-compulsive disorder (OCD), schizophrenia, and suicidality.
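The PSD feature step can be sketched compactly with Welch's method; here a 1-nearest-neighbour rule stands in for the paper's KNNC/Parzen classifiers, and the sampling rate and window length are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def psd_band_features(trials, fs=250.0, fmin=1.0, fmax=50.0):
    """Mean Welch PSD in the fmin-fmax band, per trial and channel.
    trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)."""
    f, pxx = welch(trials, fs=fs, nperseg=min(256, trials.shape[-1]), axis=-1)
    band = (f >= fmin) & (f <= fmax)
    return pxx[..., band].mean(axis=-1)

def nn1_predict(train_X, train_y, test_X):
    """1-nearest-neighbour classifier, a simple stand-in for KNNC/Parzen."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    return train_y[d.argmin(axis=1)]
```

Each trial is reduced to one band-power value per electrode (F3, C3, etc. in the study), and classification then compares these low-dimensional feature vectors across successful-stop and failed-stop trials.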

2021 ◽  
Vol 11 (11) ◽  
pp. 1424
Yuhong Zhang ◽  
Yuan Liao ◽  
Yudi Zhang ◽  
Liya Huang

In order to avoid erroneous braking responses when vehicle drivers face a stressful setting, a K-order propagation number algorithm–Feature selection–Classification System (KFCS) is developed in this paper to detect emergency braking intentions in simulated driving scenarios using electroencephalography (EEG) signals. Two approaches are employed in KFCS, one to extract EEG features and one to improve classification performance: the former is the K-Order Propagation Number Algorithm, a novel approach that calculates node importance from the perspective of brain networks; the latter uses a set of feature extraction algorithms with adjusted thresholds. Working with data collected from seven subjects, the highest single-trial classification accuracy reaches over 90%, with an overall accuracy of 83%. Furthermore, this paper investigates the mechanisms of brain activity under the two scenarios by using topography at the sensor-data level. The results suggest that the active regions in the two states differ, which warrants further exploration in future investigations.
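The abstract does not spell out the K-order propagation number; one plausible reading is that a node's k-order propagation number counts how many distinct nodes it can reach within k hops of the (thresholded, binarized) brain connectivity graph. A NumPy sketch under that assumption, which the paper's exact definition may well refine:

```python
import numpy as np

def k_order_propagation_number(adj, k):
    """For each node of a binary adjacency matrix, count how many distinct
    other nodes are reachable within k hops. This is one plausible reading
    of a 'k-order propagation number' as a node-importance measure; the
    paper's exact definition may differ."""
    n = len(adj)
    a = (np.asarray(adj) != 0).astype(int)
    reach = np.eye(n, dtype=int)          # each node reaches itself at hop 0
    for _ in range(k):
        reach = ((reach + reach @ a) > 0).astype(int)
    return reach.sum(axis=1) - 1          # exclude the node itself
```

Under this reading, EEG channels (graph nodes) with larger k-order propagation numbers are more central to the network and would be ranked higher during feature selection.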

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3961
Daniela De Venuto ◽  
Giovanni Mezzina

In this paper, we propose a breakthrough single-trial P300 detector that maximizes the information transfer rate (ITR) of the brain–computer interface (BCI) while maintaining high recognition accuracy. The architecture, designed to improve the portability of the algorithm, demonstrated full implementability on a dedicated embedded platform. The proposed P300 detector combines a novel pre-processing stage, based on symbolization of the EEG signals, with an autoencoded convolutional neural network (CNN). The system acquires data from only six EEG channels and treats them with a low-complexity pre-processing stage comprising baseline correction, winsorizing, and symbolization. The symbolized EEG signals are then sent to an autoencoder model to emphasize the temporal features that are meaningful for the following CNN stage. The latter is a seven-layer CNN, including a 1D convolutional layer and three dense layers. Two datasets were analyzed to assess the algorithm's performance: one from the P300 speller application in the BCI Competition III data, and one self-collected during a prototype car driving experiment. Experimental results on the P300 speller dataset show that the proposed method achieves an average ITR (over two subjects) of 16.83 bits/min, outperforming the state of the art for this parameter by +5.75 bits/min. Alongside the speed increase, the method achieves an F1-score (the harmonic mean of precision and recall) of 51.78 ± 6.24%. The same method applied to prototype car driving led to an ITR of ~33 bits/min with an F1-score of 70.00% in a single-trial P300 detection context, allowing fluid use of the BCI for driving purposes. The realized network was validated on an STM32L4 microcontroller target for complexity and implementation assessment.
The implementation occupies 5.57% of the available ROM and ~3% of the available RAM, and it requires less than 3.5 ms to provide the classification outcome.
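The three pre-processing steps (baseline correction, winsorizing, symbolization) can be sketched as follows; the baseline window, winsorizing percentiles, and alphabet size here are assumptions for illustration, not the paper's values:

```python
import numpy as np

def preprocess_symbolize(x, baseline_samples=50, pct=5, n_symbols=8):
    """Sketch of a low-complexity pre-processing chain for one trial
    x of shape (n_channels, n_samples): baseline correction, winsorizing,
    then per-channel quantization of amplitudes into integer symbols."""
    # 1) baseline correction: subtract each channel's pre-stimulus mean
    x = x - x[:, :baseline_samples].mean(axis=1, keepdims=True)
    # 2) winsorizing: clip extreme amplitudes to per-channel percentiles
    lo, hi = np.percentile(x, [pct, 100 - pct], axis=1, keepdims=True)
    x = np.clip(x, lo, hi)
    # 3) symbolization: map each sample to one of n_symbols amplitude bins
    symbols = np.empty(x.shape, dtype=int)
    for ch in range(x.shape[0]):
        edges = np.linspace(x[ch].min(), x[ch].max(), n_symbols + 1)
        symbols[ch] = np.clip(np.digitize(x[ch], edges) - 1, 0, n_symbols - 1)
    return symbols
```

Quantizing to a small symbol alphabet discards fine amplitude detail but keeps the waveform's shape, which is cheap to compute on a microcontroller and gives the downstream autoencoder/CNN a compact, noise-tolerant input.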

2019 ◽  
Vol 9 (11) ◽  
pp. 326 ◽  
Hong Zeng ◽  
Zhenhua Wu ◽  
Jiaming Zhang ◽  
Chen Yang ◽  
Hua Zhang ◽  

Deep learning (DL) methods are used increasingly widely, for example in the fields of speech and image recognition. However, designing an appropriate DL model to accurately and efficiently classify electroencephalogram (EEG) signals remains a challenge, mainly because EEG signals differ significantly between subjects and vary over time within a single subject, and are non-stationary and highly random, with a low signal-to-noise ratio. SincNet is an efficient classifier for speaker recognition, but it has some drawbacks when applied to EEG signal classification. In this paper, we improve on it and propose a SincNet-based classifier, SincNet-R, which consists of three convolutional layers and three deep neural network (DNN) layers. We then use SincNet-R to test classification accuracy and robustness on emotional EEG signals. Comparison with the original SincNet model and other traditional classifiers, such as CNN, LSTM, and SVM, shows that our proposed SincNet-R model has higher classification accuracy and better algorithmic robustness.
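The distinctive idea SincNet-style models build on is that each first-layer filter is a band-pass kernel constructed from two windowed sinc functions, so only the two cutoff frequencies are learnable per filter. A NumPy sketch of that kernel construction (not the paper's code, and without the learning loop):

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, fs, kernel_size=129):
    """Band-pass FIR kernel built as the difference of two windowed sinc
    low-pass filters, parameterized only by its cutoffs f_low and f_high
    (the values a SincNet-style layer would learn by backpropagation)."""
    t = (np.arange(kernel_size) - kernel_size // 2) / fs
    # ideal low-pass impulse response with cutoff fc: 2*fc*sinc(2*fc*t)
    lowpass = lambda fc: 2 * fc * np.sinc(2 * fc * t)
    h = (lowpass(f_high) - lowpass(f_low)) * np.hamming(kernel_size)
    return h / np.abs(h).sum()  # crude amplitude normalization
```

Constraining the first layer to such parameterized band-pass shapes drastically reduces parameters versus a free convolution, which is attractive for EEG, where the discriminative information often lies in known frequency bands.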

2007 ◽  
Vol 28 (7) ◽  
pp. 602-613 ◽  
Christian-G. Bénar ◽  
Daniele Schön ◽  
Stephan Grimault ◽  
Bruno Nazarian ◽  
Boris Burle ◽  

2021 ◽  
Ahmet Batuhan Polat ◽  
Ozgun Akcay ◽  
Fusun Balik Sanli

Obtaining high accuracy in land cover classification is a non-trivial problem in geosciences for monitoring urban and rural areas. In this study, different classification algorithms were tested with different types of data, and the effects of seasonal changes on these algorithms, together with an evaluation of the data used, were investigated. In addition, the study reveals the effect of increasing the number of classification training samples on classification accuracy. Sentinel-1 Synthetic Aperture Radar (SAR) images and Sentinel-2 multispectral optical images were used as datasets. An object-based approach was used for the classification of various fused image combinations. The classification algorithms Support Vector Machines (SVM), Random Forest (RF), and K-Nearest Neighbors (kNN) were used for this process. In addition, the Normalized Difference Vegetation Index (NDVI) was examined separately to determine its exact contribution to classification accuracy. The overall accuracies were then compared by classifying the fused data generated by combining optical and SAR images. Increasing the number of training samples was found to improve classification accuracy. Moreover, object-based classification from single SAR imagery produced the lowest classification accuracy among the dataset combinations used in this study. Finally, NDVI data were shown not to increase classification accuracy in the winter season, as the trees shed their leaves due to climate conditions.
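NDVI itself is a simple band ratio; a minimal sketch, where the mapping of Sentinel-2 bands (B8 as near-infrared, B4 as red) is standard but the array layout is an assumption:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    For Sentinel-2 reflectances, NIR is band B8 and red is band B4.
    Values near +1 indicate dense green vegetation; values near 0 indicate
    bare soil or leafless canopies, which is why NDVI contributes little
    to classification in winter scenes."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0
```

In a pipeline like the one described, the NDVI raster would be stacked as an extra feature band alongside the optical and SAR layers before object-based classification with SVM, RF, or kNN.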
