Robots controlled by neural networks trained based on brain signals

Author(s):
Genci Capi, Toshihide Takahashi, Kazunori Urushiyama, Shigenori Kawahara

2021
Author(s):
David A. Tovar, Tijl Grootswagers, James Jun, Oakyoon Cha, Randolph Blake, ...

Humans are able to recognize objects under a variety of noisy conditions, so models of the human visual system must account for how this feat is accomplished. In this study, we investigated how image perturbations, specifically reducing images to their low spatial frequency (LSF) components, affected correspondence between convolutional neural networks (CNNs) and brain signals recorded using magnetoencephalography (MEG). Using the high temporal resolution of MEG, we found that CNN-Brain correspondence for deeper and more complex layers across CNN architectures emerged earlier for LSF images than for their unfiltered broadband counterparts. The early emergence of LSF components is consistent with the coarse-to-fine theoretical framework for visual image processing, but surprisingly shows that LSF signals from images are more prominent when high spatial frequencies are removed. In addition, we decomposed MEG signals into oscillatory components and found correspondence varied based on frequency bands, painting a full picture of how CNN-Brain correspondence varies with time, frequency, and MEG sensor locations. Finally, we varied image properties of CNN training sets, and found marked changes in CNN processing dynamics and correspondence to brain activity. In sum, we show that image perturbations affect CNN-Brain correspondence in unexpected ways, as well as provide a rich methodological framework for assessing CNN-Brain correspondence across space, time, and frequency.
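The LSF images described above are typically produced by low-pass filtering in the spatial frequency domain. A minimal sketch of that operation, assuming an ideal circular low-pass mask and an illustrative cutoff of 8 cycles per image (the study's actual filtering parameters are not given here):

```python
import numpy as np

def low_spatial_frequency(image, cutoff_cycles=8):
    """Keep only spatial frequencies below `cutoff_cycles` (cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w            # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cycles        # ideal circular low-pass mask
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Filtering a random "image" leaves only its coarse structure
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
lsf = low_spatial_frequency(img, cutoff_cycles=8)
print(lsf.shape)  # (64, 64)
```

In practice a smooth (e.g. Gaussian or Butterworth) mask is often preferred over the ideal mask to avoid ringing artifacts; the ideal mask is used here only for brevity.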


Author(s):
Leonardo Ojeda, Roberto Vega, Luis Eduardo Falcon, Gildardo Sanchez-Ante, Humberto Sossa, ...

2018
Author(s):
Ramiro Gatti, Yanina Atum, Luciano Schiaffino, Mads Jochumsen, José Biurrun Manresa

Abstract
Building accurate movement decoding models from brain signals is crucial for many biomedical applications. Decoding specific movement features, such as speed and force, may provide additional useful information at the expense of increasing the complexity of the decoding problem. Recent attempts to predict movement speed and force from the electroencephalogram (EEG) achieved classification accuracy levels not better than chance, stressing the demand for more accurate prediction strategies. Thus, the aim of this study was to improve the prediction accuracy of hand movement speed and force from single-trial EEG signals recorded from healthy volunteers. A strategy based on convolutional neural networks (ConvNets) was tested, since it has previously shown good performance in the classification of EEG signals. ConvNets achieved an overall accuracy of 84% in the classification of two different levels of speed and force (4-class classification) from single-trial EEG. These results represent a substantial improvement over previously reported results, suggesting that hand movement speed and force can be accurately predicted from single-trial EEG.
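The first layer of a ConvNet applied to raw EEG performs temporal convolution over each channel. A minimal numpy sketch of that core operation, with synthetic data and hypothetical dimensions (the paper's actual architecture and epoch length are not specified here):

```python
import numpy as np

# Hypothetical setup: 8 EEG channels, 500 samples per single-trial epoch,
# and a bank of 16 temporal kernels of length 25 (values chosen for illustration).
rng = np.random.default_rng(1)
epoch = rng.standard_normal((8, 500))      # one single-trial EEG epoch
kernels = rng.standard_normal((16, 25))    # first-layer temporal filters

# 'valid' temporal convolution of every kernel with every channel: the
# feature maps a ConvNet's first layer would produce from raw EEG.
features = np.array([
    [np.convolve(epoch[ch], k, mode="valid") for ch in range(epoch.shape[0])]
    for k in kernels
])
print(features.shape)  # (16, 8, 476)
```

In a trained network these kernels are learned from data rather than random, and further layers pool and combine the feature maps before a final 4-class output.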


Emotions are important for humans both in the workplace and in daily life. They help us communicate with others, make decisions, and understand one another. Emotion recognition not only helps in addressing mental illness but is also important in various applications such as brain-computer interfaces, medical care, and entertainment. This paper mainly deals with how emotions are classified from EEG signals using SVM (Support Vector Machine) and DNN (Deep Neural Network) classifiers, applying the most appropriate algorithm to detect a person's emotional state and play the corresponding song in the playlist. Brain signals can be collected using EEG (electroencephalography) devices.
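EEG emotion classifiers such as the SVM and DNN mentioned above are commonly fed band-power features. A minimal sketch of extracting those features, assuming a 128 Hz sampling rate, a 4-second window, and the standard theta/alpha/beta/gamma band definitions (none of which are specified by this paper):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` within frequency `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    return power[(freqs >= lo) & (freqs < hi)].mean()

# Standard EEG frequency bands often used as classifier features
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

fs = 128                                  # sampling rate in Hz (assumed)
t = np.arange(fs * 4) / fs                # 4-second analysis window
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz (alpha-band) signal
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
print(max(features, key=features.get))    # alpha
```

The resulting per-band (and per-channel, in multichannel recordings) feature vector is what would be passed to an SVM or DNN for emotion classification.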

