Emergency Braking Intention Detection System Based on K-Order Propagation Number Algorithm: A Network Perspective

2021, Vol 11 (11), pp. 1424
Author(s): Yuhong Zhang, Yuan Liao, Yudi Zhang, Liya Huang

In order to avoid erroneous braking responses when vehicle drivers face a stressful setting, a K-order propagation number algorithm–Feature selection–Classification System (KFCS) is developed in this paper to detect emergency braking intentions in simulated driving scenarios using electroencephalography (EEG) signals. KFCS employs two approaches to extract EEG features and improve classification performance: the former is the K-order propagation number algorithm, a novel approach that calculates node importance from a brain-network perspective; the latter applies a set of feature extraction algorithms with adjustable thresholds. Working with data collected from seven subjects, the highest single-trial classification accuracy exceeds 90%, with an overall accuracy of 83%. Furthermore, this paper attempts to investigate the mechanisms of brain activation under the two scenarios using a topography technique at the sensor-data level. The results suggest that the active regions differ between the two states, which leaves room for future investigation.
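
A minimal sketch of the general idea, under the assumption that the k-order propagation number of a channel counts how many other channels it can reach within k hops in a thresholded functional-connectivity graph; the function names and the correlation threshold below are illustrative choices, not the paper's implementation:

```python
# Illustrative sketch (not the authors' exact KFCS implementation): build a
# functional-connectivity graph from channel-wise correlations, then score
# channel importance by how many nodes each channel can reach within k hops.
import numpy as np

def connectivity_graph(eeg, threshold=0.6):
    """eeg: (n_channels, n_samples). Returns a binary adjacency matrix."""
    corr = np.abs(np.corrcoef(eeg))          # channel-by-channel correlation
    adj = (corr >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                 # no self-loops
    return adj

def k_order_propagation_number(adj, k=3):
    """For each node, count the distinct nodes reachable within k hops."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)
    walks = np.eye(n, dtype=int)
    for _ in range(k):
        walks = walks @ adj                  # walks of the next length
        reach |= walks > 0
    return reach.sum(axis=1) - 1             # exclude the node itself

# Example: rank the channels of a toy 8-channel recording by importance
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))
scores = k_order_propagation_number(connectivity_graph(eeg), k=3)
print(np.argsort(scores)[::-1])              # channels ordered by importance
```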

2021, Vol 38 (6), pp. 1689-1698
Author(s): Suat Toraman, Ömer Osman Dursun

Human emotion recognition from electroencephalographic (EEG) signals with machine learning methods has become a highly interesting subject for researchers. Although it is simple to define emotions that are expressed physically, such as through speech, facial expressions, and gestures, it is more difficult to define psychological emotions that are expressed internally. The most important stimuli for revealing inner emotions are aural and visual. In this study, EEG signals elicited by both aural and visual stimuli were examined, and emotions were evaluated with both binary and multi-class emotion recognition models. A general emotion recognition model was proposed for non-subject-based classification, and, unlike in previous studies, subject-based testing was performed for the first time in the literature. Capsule Networks, a recent neural network model, were developed for binary and multi-class emotion recognition. In the proposed method, a novel fusion strategy was introduced for binary-class emotion recognition, and the model was tested on the GAMEEMO dataset. Binary-class emotion recognition achieved a classification accuracy about 10% higher than that reported in other studies in the literature. Based on these findings, we suggest that the proposed method will bring a different perspective to emotion recognition.
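
The Capsule Network and fusion strategy themselves are not reproduced here; the sketch below only shows how a binary EEG emotion classifier of this kind is typically wired up, using a small 1D CNN as a stand-in and assumed input shapes (14 channels, 512 samples) rather than the GAMEEMO specifics:

```python
# Simplified stand-in for illustration only: a tiny 1D CNN performing binary
# emotion classification on fixed-length EEG segments (not a Capsule Network).
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=14, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyEEGNet()
dummy = torch.randn(8, 14, 512)                # batch of 8 EEG segments
print(model(dummy).shape)                      # -> torch.Size([8, 2])
```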


2021, Vol 15
Author(s): Fangfang Long, Shanguang Zhao, Xin Wei, Siew-Cheok Ng, Xiaoli Ni, ...

In this study, EEG features of different emotions were extracted from both multi-channel recordings and forehead channels alone. EEG signals from 26 subjects were collected using an emotional video elicitation method. The results show that the frequency-band energy ratio and differential entropy can effectively classify positive and negative emotions, with the best results obtained using an SVM classifier. When only the forehead channels are used, the highest classification accuracy reaches 66%; when data from all channels are used, the highest accuracy of the model reaches 82%. After channel selection, the best model of this study is obtained, with an accuracy of more than 86%.
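
A minimal sketch of the two feature types named above, band energy ratio and differential entropy (using the standard Gaussian form 0.5·log(2πe·σ²)), feeding an SVM; band limits, sampling rate, and classifier settings are generic assumptions rather than the study's exact configuration:

```python
# Sketch: per-band energy ratio and differential entropy (DE) features + SVM.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(eeg, fs=250):
    """eeg: (n_channels, n_samples) -> per-band energy ratio and DE."""
    total_energy = np.sum(eeg ** 2, axis=1)
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, eeg, axis=1)
        feats.append(np.sum(x ** 2, axis=1) / total_energy)                # energy ratio
        feats.append(0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=1)))   # DE (Gaussian)
    return np.concatenate(feats)

# Toy usage: build a feature matrix from several trials and fit an SVM
rng = np.random.default_rng(1)
X = np.array([band_features(rng.standard_normal((32, 1000))) for _ in range(20)])
y = rng.integers(0, 2, size=20)                # positive vs. negative labels
clf = SVC(kernel="rbf").fit(X, y)
```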


PLoS ONE, 2022, Vol 17 (1), pp. e0262417
Author(s): Cédric Simar, Robin Petit, Nichita Bozga, Axelle Leroy, Ana-Maria Cebolla, ...

Objective: Different visual stimuli are classically used to trigger visual evoked potentials comprising well-defined components linked to the content of the displayed image. These evoked components result from averaging ongoing EEG signals, in which additive and oscillatory mechanisms contribute to the component morphology. Event-related potentials often arise from a mixed situation (power variation and phase-locking), making basic and clinical interpretation difficult. Moreover, the grand-average methodology produces artificial constructs that do not reflect individual peculiarities. This has motivated new approaches based on single-trial analysis, as recently used in the brain-computer interface field. Approach: We hypothesize that EEG signals may include specific information about the visual features of the displayed image and that such distinctive traits can be identified by state-of-the-art classification algorithms based on Riemannian geometry. The same classification algorithms are also applied to the dipole sources estimated by sLORETA. Main results and significance: We show that our classification pipeline can effectively discriminate between the display of different visual items (checkerboard versus 3D navigational image) in single EEG trials across multiple subjects. The present methodology reaches single-trial classification accuracies of about 84% and 93% for inter-subject and intra-subject classification, respectively, using surface EEG. Interestingly, the classification algorithms trained on sLORETA source estimates fail to generalize across subjects (63%), which may be due to either the average head model used by sLORETA or the subsequent spatial filtering failing to extract discriminative information, but they reach an intra-subject classification accuracy of 82%.
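
A hedged sketch of a Riemannian single-trial pipeline of the kind described, built with pyriemann and scikit-learn (not necessarily the authors' exact implementation): per-trial spatial covariance matrices are mapped to the tangent space and classified linearly; the data shapes and labels are toy placeholders:

```python
# Sketch of a Riemannian-geometry single-trial EEG classification pipeline.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy data: 40 trials, 32 channels, 256 samples, two visual conditions
rng = np.random.default_rng(2)
X = rng.standard_normal((40, 32, 256))
y = np.repeat([0, 1], 20)                      # checkerboard vs. 3D-image labels

pipeline = make_pipeline(
    Covariances(estimator="oas"),              # per-trial covariance (SPD matrix)
    TangentSpace(),                            # map SPD matrices to a vector space
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```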


2021, Vol 15
Author(s): Siqi Cai, Peiwen Li, Enze Su, Longhan Xie

Humans show a remarkable perceptual ability to select the speech stream of interest among multiple competing speakers. Previous studies have demonstrated that auditory attention detection (AAD) can infer which speaker is attended by analyzing a listener's electroencephalography (EEG) activity. However, previous AAD approaches perform poorly on short signal segments, so more advanced decoding strategies are needed to realize robust real-time AAD. In this study, we propose a novel approach, cross-modal attention-based AAD (CMAA), to exploit the discriminative features and the correlation between audio and EEG signals. With this mechanism, we aim to dynamically adapt the interactions and fuse cross-modal information by directly attending to audio and EEG features, thereby detecting the auditory attention activities manifested in brain signals. We also validate the CMAA model through data visualization and comprehensive experiments on a publicly available database. Experiments show that CMAA achieves accuracies of 82.8%, 86.4%, and 87.6% for 1-, 2-, and 5-s decision windows under anechoic conditions, respectively; for a 2-s decision window, it achieves an average of 84.1% under real-world reverberant conditions. The proposed CMAA network not only achieves better performance than the conventional linear model but also outperforms state-of-the-art non-linear approaches. These results and the data visualization suggest that the CMAA model can dynamically adapt the interactions and fuse cross-modal information by directly attending to audio and EEG features, thereby improving AAD performance.
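
A minimal sketch of the cross-modal attention idea, not the published CMAA architecture: EEG features act as queries attending to audio-envelope features through a standard multi-head attention layer, and the fused representation is pooled over the decision window; all dimensions below are assumptions:

```python
# Sketch of cross-modal attention between EEG and audio features for AAD.
import torch
import torch.nn as nn

class CrossModalAttentionAAD(nn.Module):
    def __init__(self, eeg_dim=64, audio_dim=1, d_model=64, n_heads=4):
        super().__init__()
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decision = nn.Linear(d_model, 2)   # attended speaker 1 vs. speaker 2

    def forward(self, eeg, audio):
        # eeg: (batch, time, eeg_dim); audio: (batch, time, audio_dim)
        q = self.eeg_proj(eeg)
        kv = self.audio_proj(audio)
        fused, _ = self.attn(q, kv, kv)         # EEG queries attend to audio
        return self.decision(fused.mean(dim=1)) # pool over the decision window

model = CrossModalAttentionAAD()
logits = model(torch.randn(4, 128, 64), torch.randn(4, 128, 1))
print(logits.shape)                             # -> torch.Size([4, 2])
```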


2018, Vol 32 (08), pp. 1850086
Author(s): Yang Liu, Jiang Wang, Lihui Cai, Yingyuan Chen, Yingmei Qin

As a pattern of cross-frequency coupling (CFC), phase–amplitude coupling (PAC) describes the interaction between the phase and amplitude of distinct frequency bands of the same signal and has been shown to be closely related to the brain's cognitive and memory activities. This work uses PAC and a support vector machine (SVM) classifier to identify epileptic seizures from electroencephalogram (EEG) data. Entropy-based modulation index (MI) matrices are used to express the strength of PAC, from which features are extracted as the input for the classifier. On the Bonn database, which contains five datasets of EEG segments obtained from healthy volunteers and epileptic subjects, a 100% classification accuracy is achieved for distinguishing ictal EEG from healthy data, and an accuracy of 97.67% is reached in classifying ictal from inter-ictal EEG signals. On the CHB–MIT database, a collection of continuous scalp EEG recordings from epileptic patients, a 97.50% classification accuracy is obtained, and a rising trend in the MI value is found 6 s before seizure onset. The classification performance in this work is effective, and PAC can be considered a useful tool for detecting and predicting epileptic seizures and providing a reference for clinical diagnosis.
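
A sketch of an entropy-based modulation index of the kind the abstract refers to (Tort-style MI): low-frequency phase and high-frequency amplitude are obtained via the Hilbert transform, the amplitude is binned by phase, and the normalized divergence of that distribution from uniform gives the MI; the band choices are illustrative, not the paper's exact settings:

```python
# Sketch of an entropy-based phase-amplitude coupling modulation index (MI).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80), n_bins=18):
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()               # amplitude distribution over phase
    kl = np.sum(p * np.log(p * n_bins))         # KL divergence from uniform
    return kl / np.log(n_bins)                  # normalized to [0, 1]

# Toy coupled signal: 50 Hz amplitude modulated by a 6 Hz phase
fs = 256
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + (1 + np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 50 * t)
print(modulation_index(x, fs))                  # clearly nonzero MI
```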


2021, Vol 15
Author(s): Feifei Qi, Wenlong Wang, Xiaofeng Xie, Zhenghui Gu, Zhu Liang Yu, ...

Achieving high classification performance is challenging due to the non-stationarity and low signal-to-noise ratio (SNR) of EEG signals. Spatial filtering is commonly used to improve the SNR, yet individual differences in the underlying temporal or spectral information are often ignored. This paper investigates motor imagery signals via orthogonal wavelet decomposition, by which the raw signals are decomposed into multiple unrelated sub-band components. Furthermore, channel-wise spectral filtering, implemented by weighting the sub-band components, is performed jointly with spatial filtering to improve the discriminability of EEG signals, with an l2-norm regularization term embedded in the objective function to address over-fitting. Finally, sparse Bayesian learning with a Gaussian prior is applied to the extracted power features, yielding a relevance vector machine (RVM) classifier. The classification performance of SEOWADE is significantly better than that of several competing algorithms (CSP, FBCSP, CSSP, CSSSP, and shallow ConvNet). Moreover, the scalp weight maps of the spatial filters optimized by SEOWADE are more neurophysiologically meaningful. In summary, these results demonstrate the effectiveness of SEOWADE in extracting relevant spatio-temporal information for single-trial EEG classification.
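
A sketch of the orthogonal wavelet decomposition step only, using PyWavelets; the joint spectral/spatial weighting optimization and the sparse Bayesian RVM classifier of SEOWADE are not reproduced, and the wavelet and decomposition level are assumptions:

```python
# Sketch: split each EEG channel into wavelet sub-band components whose
# log-power can then serve as weighted spectral features.
import numpy as np
import pywt

def subband_components(eeg, wavelet="db4", level=4):
    """eeg: (n_channels, n_samples) -> (n_subbands, n_channels, n_samples)."""
    comps = []
    for ch in eeg:
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        bands = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            bands.append(pywt.waverec(kept, wavelet)[: len(ch)])
        comps.append(bands)
    return np.transpose(np.array(comps), (1, 0, 2))

eeg = np.random.default_rng(3).standard_normal((22, 1000))
bands = subband_components(eeg)
log_power = np.log(np.var(bands, axis=2))       # (n_subbands, n_channels) features
print(bands.shape, log_power.shape)
```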


Author(s): Yuting Wang, Shujian Wang, Ming Xu

This paper puts forward a new method of landscape recognition and evaluation using aerial video and EEG technology. In this study, seven typical landscape types (forest, wetland, grassland, desert, water, farmland, and city) were selected. Different electroencephalogram (EEG) signals were generated by the different inner experiences and feelings of people watching video stimuli of these landscape types. EEG features were extracted to obtain the mean amplitude spectrum (MAS), power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU) in the five frequency bands of delta, theta, alpha, beta, and gamma. Based on these features, four classifiers, namely the back-propagation (BP) neural network, k-nearest neighbor (KNN), random forest (RF), and support vector machine (SVM), were used to classify the landscape types. The results showed that the SVM and RF classifiers had the highest landscape recognition accuracy, reaching 98.24% and 96.72%, respectively. Among the six features selected, the frequency-domain features (MAS, PSD, and DE) yielded higher classification accuracy than the spatial-domain features (DASM, RASM, and DCAU). Across frequency bands, the average classification accuracy over all subjects was 98.24% in the gamma band, 94.62% in the beta band, and 97.29% for the full band. This study identifies and classifies landscape perception based on multi-channel EEG signals, providing a new idea and method for quantifying human perception.
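
A minimal sketch of one of the feature/classifier pairs described, Welch PSD band power per channel classified with RF and SVM; band limits, sampling rate, and classifier settings are generic choices, not the study's exact configuration:

```python
# Sketch: Welch PSD band-power features per channel, classified with RF and SVM.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def psd_band_features(eeg, fs=250):
    """eeg: (n_channels, n_samples) -> mean PSD per band per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=1)
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                           for lo, hi in BANDS.values()])

# Toy usage: 70 trials, 32 channels, seven landscape classes (10 trials each)
rng = np.random.default_rng(4)
X = np.array([psd_band_features(rng.standard_normal((32, 2500))) for _ in range(70)])
y = np.repeat(np.arange(7), 10)
for clf in (RandomForestClassifier(n_estimators=200), SVC(kernel="rbf")):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```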


2017, Vol 2017, pp. 1-12
Author(s): Rensong Liu, Zhiwen Zhang, Feng Duan, Xin Zhou, Zixuan Meng

Motor imagery (MI) electroencephalogram (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the MI states that can be classified are limited, and their classification accuracy is low because of the nonlinear and nonstationary characteristics of the signals. This study proposes a novel MI pattern recognition system based on a combination of algorithms for classifying MI EEG signals. For electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to isolate the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used to remove EOG artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the k-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four dissimilar states, namely imagined movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy is 92.5%, and the average classification accuracy is 87%. The proposed method significantly improves the identification rate of minority-class samples and the overall classification performance.
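
A sketch of a regularized CSP along the lines described, with shrinkage of the class covariances toward the identity standing in for the paper's generic-learning regularization (which pools covariances from other subjects and is not reproduced here):

```python
# Sketch of regularized CSP: shrink class covariances, solve the generalized
# eigenproblem, keep filters from both ends, and compute log-variance features.
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    """trials: (n_trials, n_channels, n_samples) -> normalized mean covariance."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def regularized_csp(trials_a, trials_b, shrinkage=0.1, n_filters=6):
    n_ch = trials_a.shape[1]
    def shrink(C):
        return (1 - shrinkage) * C + shrinkage * (np.trace(C) / n_ch) * np.eye(n_ch)
    Ca, Cb = shrink(class_covariance(trials_a)), shrink(class_covariance(trials_b))
    eigvals, eigvecs = eigh(Ca, Ca + Cb)          # generalized eigendecomposition
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    # keep spatial filters from both ends of the eigenvalue spectrum
    idx = np.concatenate([np.arange(n_filters // 2),
                          np.arange(n_ch - n_filters // 2, n_ch)])
    return eigvecs[:, idx].T                      # (n_filters, n_channels)

def csp_features(trials, W):
    z = np.einsum("fc,ncs->nfs", W, trials)       # spatially filtered trials
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 22, 500))            # class 1 trials (toy data)
B = rng.standard_normal((30, 22, 500))            # class 2 trials
W = regularized_csp(A, B)
print(csp_features(A, W).shape)                   # -> (30, 6)
```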


2020
Author(s): Nalika Ulapane, Karthick Thiyagarajan, Sarath Kodagoda

Classification has become a vital task in modern machine learning and artificial intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider a given supervised learning classification task that has to be performed using continuous-valued features. It is assumed that an optimal subset of features has already been selected, so no further feature reduction or feature addition is to be carried out. We then attempt to improve classification performance by passing the given feature set through a transformation that produces a new feature set, which we have named the “Binary Spectrum”. Via a case study on Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation’s potential for broader usage.
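
Since the paper's Binary Spectrum transform is not specified here, the binary_expand function below is a purely hypothetical placeholder; the sketch only illustrates how such a feature transformation can be slotted ahead of an SVM in a scikit-learn pipeline and compared against the untransformed baseline:

```python
# Illustration only: a placeholder feature transformation (NOT the paper's
# Binary Spectrum) dropped into an sklearn pipeline ahead of an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def binary_expand(X, levels=(0.25, 0.5, 0.75)):
    # Threshold each continuous feature at several quantile levels to obtain
    # binary indicator features (thresholds recomputed per call, for simplicity).
    thresholds = np.quantile(X, levels, axis=0)          # (n_levels, n_features)
    return np.concatenate([(X > t).astype(float) for t in thresholds], axis=1)

rng = np.random.default_rng(6)
X = rng.standard_normal((120, 8))                        # continuous-valued features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)            # toy labels

baseline = make_pipeline(StandardScaler(), SVC())
expanded = make_pipeline(FunctionTransformer(binary_expand), SVC())
print(cross_val_score(baseline, X, y, cv=5).mean(),
      cross_val_score(expanded, X, y, cv=5).mean())
```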


Sensors, 2021, Vol 21 (12), pp. 3961
Author(s): Daniela De Venuto, Giovanni Mezzina

In this paper, we propose a breakthrough single-trial P300 detector that maximizes the information transfer rate (ITR) of the brain–computer interface (BCI) while maintaining high recognition accuracy. The architecture, designed to improve the portability of the algorithm, demonstrated full implementability on a dedicated embedded platform. The proposed P300 detector is based on the combination of a novel pre-processing stage based on EEG signal symbolization and an autoencoded convolutional neural network (CNN). The proposed system acquires data from only six EEG channels and treats them with a low-complexity preprocessing stage including baseline correction, winsorizing, and symbolization. The symbolized EEG signals are then sent to an autoencoder model to emphasize those temporal features that can be meaningful for the following CNN stage. The latter consists of a seven-layer CNN, including a 1D convolutional layer and three dense layers. Two datasets were analyzed to assess the algorithm's performance: one from the P300 speller application in the BCI Competition III data and one from data self-collected during a fluid prototype car driving experiment. Experimental results on the P300 speller dataset showed that the proposed method achieves an average ITR (over two subjects) of 16.83 bits/min, outperforming the state of the art for this parameter by +5.75 bits/min. Alongside the speed increase, the recognition performance reached an F1-score (harmonic mean of precision and recall) of 51.78 ± 6.24%. The same method applied to prototype car driving led to an ITR of ~33 bits/min with an F1-score of 70.00% in a single-trial P300 detection context, allowing fluid use of the BCI for driving purposes. The realized network was validated on an STM32L4 microcontroller target for complexity and implementation assessment. The implementation showed an overall resource occupation of 5.57% of the total available ROM and ~3% of the available RAM, and it requires less than 3.5 ms to provide the classification outcome.
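
A small helper for the ITR figures quoted above, using the standard Wolpaw formula, where N is the number of selectable symbols, p the classification accuracy, and t_sec the time per selection; the example numbers are illustrative, not the paper's operating point:

```python
# Information transfer rate (ITR) in bits/min via the standard Wolpaw formula.
import math

def itr_bits_per_min(n_classes, p, t_sec):
    if p <= 1.0 / n_classes:
        return 0.0
    if p >= 1.0:
        return math.log2(n_classes) * 60.0 / t_sec
    bits = (math.log2(n_classes) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / t_sec

# e.g. a 36-symbol P300 speller, 90% single-trial accuracy, one selection per 10 s
print(round(itr_bits_per_min(36, 0.90, 10.0), 2))
```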

