Classification of emotions from EEG signals using time-order representation based on the S-transform and convolutional neural network

2020 ◽ Vol 56 (25) ◽ pp. 1359-1361
Author(s): S.K. Khare, A. Nishad, A. Upadhyay, V. Bajaj
2021 ◽ Vol 11 (21) ◽ pp. 9948
Author(s): Amira Echtioui, Ayoub Mlaouah, Wassim Zouch, Mohamed Ghorbel, Chokri Mhiri, ...

Recently, Electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because it became possible to use these signals to encode a person's intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Therefore, classifying the motor imagery tasks in these signals is important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification with an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for the classification of MI-EEG signals and show that it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
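The abstract does not spell out the CNN1 architecture, but the general recipe for CNNs on raw multi-channel EEG (a temporal convolution followed by a channel-spanning spatial convolution) is well established. The sketch below is a minimal, hypothetical PyTorch model for BCI Competition IV-2a style trials (22 channels, 1000 samples at 250 Hz, four classes); all layer sizes are illustrative assumptions, not the authors' design.

```python
# A minimal sketch (NOT the authors' exact CNN1): a shallow ConvNet for
# 4-class MI classification on BCI Competition IV-2a style inputs
# (22 channels x 1000 time samples per trial). Layer sizes are assumptions.
import torch
import torch.nn as nn

class MICNN(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution: learns frequency-band-like filters
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            # spatial convolution: mixes information across electrodes
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened size from a dummy trial
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = MICNN()
logits = model(torch.randn(8, 1, 22, 1000))  # a mini-batch of 8 dummy trials
print(logits.shape)                          # torch.Size([8, 4])
```

The temporal convolution acts like a learned band-pass filter bank, while the channel-spanning spatial convolution plays a role loosely analogous to spatial filters such as CSP.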


2020 ◽ Vol 40 (5) ◽ pp. 663-672
Author(s): Nijisha Shajil, Sasikala Mohan, Poonguzhali Srinivasan, Janani Arivudaiyanambi, Arunnagiri Arasappan Murrugesan

2021
Author(s): Navneet Tibrewal, Nikki Leeuwis, Maryam Alimardani

Motor Imagery (MI) is a mental process by which an individual rehearses body movements without actually performing physical actions. Motor Imagery Brain-Computer Interfaces (MI-BCIs) are AI-driven systems that capture the brain activity patterns associated with this mental process and convert them into commands for external devices. Traditionally, MI-BCIs operate on Machine Learning (ML) algorithms, which require extensive signal processing and feature engineering to extract changes in sensorimotor rhythms (SMR). In recent years, however, Deep Learning (DL) models have gained popularity for EEG classification, as they provide automatic extraction of spatio-temporal features from the signals. In this study, EEG signals from 54 subjects who performed an MI task of left- or right-hand grasp were employed to compare the performance of two MI-BCI classifiers: an ML approach versus a DL approach. In the ML approach, Common Spatial Patterns (CSP) was used for feature extraction, and a Linear Discriminant Analysis (LDA) model was then employed for binary classification of the MI task. In the DL approach, a Convolutional Neural Network (CNN) model was constructed on the raw EEG signals. The mean classification accuracies achieved by the CNN and CSP+LDA models were 69.42% and 52.56%, respectively. Further analysis showed that the DL approach improved the classification accuracy for all subjects, with gains ranging from 2.37% to 28.28%, and that the improvement was significantly stronger for low performers. Our findings show promise for the employment of DL models in future MI-BCI systems, particularly for BCI-inefficient users who are unable to produce the desired sensorimotor patterns for conventional ML approaches.
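The CSP+LDA baseline described here maps directly onto standard tooling. Below is a minimal sketch using MNE-Python's CSP transformer and scikit-learn's LDA; the array shapes and random data are placeholders for real band-pass-filtered MI epochs, not the study's actual dataset.

```python
# A minimal sketch of the ML baseline described above: CSP for feature
# extraction + LDA for binary classification. X and y are stand-ins:
# X holds band-pass-filtered epochs (n_trials, n_channels, n_samples),
# y the left/right-hand grasp labels.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X = np.random.randn(100, 62, 1000)  # placeholder for real MI epochs
y = np.random.randint(0, 2, 100)    # 0 = left grasp, 1 = right grasp

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # log-variance of 4 CSP components
    ("lda", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2%}")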


2020 ◽ Vol 2020 (4) ◽ pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale that serves as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
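As a rough illustration of the transfer-learning step (not the authors' exact training setup), the sketch below loads an ImageNet-pre-trained GoogLeNet from torchvision, freezes the backbone, and swaps in a new classification head for the spotlight classes; the class count here is a made-up placeholder.

```python
# A minimal transfer-learning sketch in the spirit of the approach above:
# take an ImageNet-pre-trained GoogLeNet and retrain only a new final
# layer on spotlight images. The class count (10) is a hypothetical
# placeholder, not the paper's actual number of beam-angle classes.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 10  # hypothetical number of beam-angle classes
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

for p in model.parameters():  # freeze the pre-trained backbone
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# a training loop over a spotlight-image DataLoader would go here

model.eval()
x = torch.randn(4, 3, 224, 224)  # dummy batch of spotlight images
print(model(x).shape)            # torch.Size([4, 10])
```

Freezing the backbone and retraining only the head is the simplest form of transfer learning and suits a small, specialized image collection; unfreezing deeper layers for fine-tuning is a common next step when more data is available.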

