Low Complexity Neural Networks to Classify EEG Signals Associated to Emotional Stimuli

Author(s):  
Adrian Rodriguez Aguiñaga ◽  
Miguel Angel Lopez Ramirez
Author(s):  
А.Д. Обухов ◽  
М.Н. Краснянский ◽  
М.С. Николюкин

We consider the problem of choosing optimal interface parameters in information systems in order to personalize the interface to the user's preferences and the capabilities of their equipment. Currently, this problem is addressed with algorithmic support and statistical processing of user preferences, which does not provide sufficient flexibility or accuracy. We therefore propose a method for adapting interface parameters based on the analysis and processing of user information with neural networks. The scientific novelty of the method lies in automating data collection, data analysis, and interface configuration through the use and integration of neural networks in the information system. A practical implementation of the proposed method in Python is described. An expert assessment of the interface adaptability of a test information system after the introduction of the developed method showed its promise and effectiveness. The developed method achieves better accuracy and lower software implementation complexity than the classical algorithmic approach. The results obtained can be used to automate the selection of interface components for various information systems. Further research will focus on developing and integrating the method within a framework for information system adaptation.
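As an illustration of the idea described above, the following minimal Python sketch (not the authors' implementation) maps user/device features to a recommended interface preset with a small neural network; the feature set, the layout-preset targets, and the use of scikit-learn's MLPClassifier are all assumptions made here for demonstration.

# Hypothetical sketch: recommending an interface layout preset from
# user/device features with a small neural network (assumed features/targets).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed toy features: [screen_width_px, screen_height_px, font_scale_pref, age_group]
X = np.array([
    [1920, 1080, 1.0, 2],
    [1366,  768, 1.2, 3],
    [ 375,  812, 1.4, 1],
    [2560, 1440, 0.9, 2],
])
# Assumed target: index of a predefined interface layout preset
y = np.array([0, 1, 2, 0])

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# Recommend a layout preset for a new user/device profile
print(model.predict([[1440, 900, 1.1, 2]]))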


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3961
Author(s):  
Daniela De Venuto ◽  
Giovanni Mezzina

In this paper, we propose a single-trial P300 detector that maximizes the information transfer rate (ITR) of the brain–computer interface (BCI) while maintaining high recognition accuracy. The architecture, designed to improve the portability of the algorithm, has been fully implemented on a dedicated embedded platform. The proposed P300 detector combines a novel pre-processing stage based on EEG signal symbolization with an autoencoded convolutional neural network (CNN). The system acquires data from only six EEG channels and treats them with a low-complexity preprocessing stage comprising baseline correction, winsorizing, and symbolization. The symbolized EEG signals are then fed to an autoencoder model to emphasize the temporal features that are meaningful for the following CNN stage. The latter consists of a seven-layer CNN, including a 1D convolutional layer and three dense layers. Two datasets were analyzed to assess the algorithm's performance: one from a P300 speller application in the BCI Competition III data and one self-collected during a prototype car driving experiment. Experimental results on the P300 speller dataset show that the proposed method achieves an average ITR (over two subjects) of 16.83 bits/min, outperforming the state of the art for this metric by +5.75 bits/min. Alongside the speed increase, the recognition performance remains strong, with a harmonic mean of precision and recall (F1-score) of 51.78 ± 6.24%. The same method applied to the prototype car driving data led to an ITR of ~33 bits/min with an F1-score of 70.00% in a single-trial P300 detection context, allowing fluid use of the BCI for driving purposes. The realized network has been validated on an STM32L4 microcontroller target for complexity and implementation assessment. The implementation occupies 5.57% of the available ROM and ~3% of the available RAM, and requires less than 3.5 ms to provide the classification outcome.
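For illustration only, the sketch below approximates the described low-complexity preprocessing stage (baseline correction, winsorizing, symbolization) in NumPy; the baseline window, clipping percentiles, and three-symbol alphabet are assumed values rather than the authors' settings, and the autoencoder and CNN stages are omitted.

# Illustrative sketch of baseline correction, winsorizing, and symbolization
# for one EEG epoch; all parameter values here are assumptions.
import numpy as np

def preprocess_epoch(epoch, baseline_samples=50, clip_pct=5, n_symbols=3):
    """epoch: array of shape (n_channels, n_samples), e.g. 6 EEG channels."""
    # Baseline correction: subtract the mean of the pre-stimulus interval
    corrected = epoch - epoch[:, :baseline_samples].mean(axis=1, keepdims=True)
    # Winsorizing: clip extreme amplitudes to the chosen percentiles
    lo = np.percentile(corrected, clip_pct, axis=1, keepdims=True)
    hi = np.percentile(corrected, 100 - clip_pct, axis=1, keepdims=True)
    clipped = np.clip(corrected, lo, hi)
    # Symbolization: quantize each channel into a small discrete alphabet
    bins = np.linspace(clipped.min(), clipped.max(), n_symbols + 1)[1:-1]
    return np.digitize(clipped, bins)  # integer symbols in {0, ..., n_symbols - 1}

symbols = preprocess_epoch(np.random.randn(6, 256))
print(symbols.shape, symbols.min(), symbols.max())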


Author(s):  
Slim Yacoub ◽  
Ines Ben Abdelaziz ◽  
Mohamed Ali Cherni ◽  
Badreddine Mandhouj ◽  
Mounir Sayadi ◽  
...  

Author(s):  
Badreddine Mandhouj ◽  
Sami Bouzaiane ◽  
Mohamed Ali Cherni ◽  
Ines Ben Abdelaziz ◽  
Slim Yacoub ◽  
...  

1997 ◽  
Vol 9 (1) ◽  
pp. 1-42 ◽  
Author(s):  
Sepp Hochreiter ◽  
Jürgen Schmidhuber

We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a “flat” minimum of the error function: a large connected region in weight space where the error remains approximately constant. An MDL-based Bayesian argument suggests that flat minima correspond to “simple” networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a “good” weight prior. Instead, we have a prior over input-output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second-order derivatives, it has backpropagation's order of complexity. It automatically and effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and “optimal brain surgeon/optimal brain damage.”
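The toy NumPy sketch below illustrates the intuition of a flat minimum by estimating how much the error of a simple linear model changes under small random weight perturbations; it is a crude sharpness proxy under assumed parameters, not the Hessian-based flat-minimum search algorithm described in the abstract.

# Crude flatness/sharpness proxy: average increase in error under small
# random weight perturbations (all data and parameters are synthetic).
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def sharpness(w, X, y, radius=1e-2, n_trials=100):
    base = mse(w, X, y)
    deltas = []
    for _ in range(n_trials):
        eps = rng.normal(scale=radius, size=w.shape)
        deltas.append(mse(w + eps, X, y) - base)
    return np.mean(deltas)  # small value -> error surface is locally "flat"

X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
w_fit = np.linalg.lstsq(X, y, rcond=None)[0]
print("sharpness near the fitted solution:", sharpness(w_fit, X, y))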

