decoding accuracy
Recently Published Documents

TOTAL DOCUMENTS: 120 (five years: 57)
H-INDEX: 18 (five years: 2)
2022
Author(s): Philip Kennedy, A. Ganesh, A.J. Cervantes

Abstract: The motivation of someone who is locked-in, that is, paralyzed and mute, is to find relief for their loss of function. The data presented in this report are part of an attempt to restore one of those lost functions, namely speech. An essential feature of developing a speech prosthetic is optimal decoding of the patterns of neural signals recorded during silent or covert speech, that is, speaking 'inside the head' with no audible output because the articulators are paralyzed. The aim of this paper is to illustrate the importance of both fast- and slow-firing single units recorded from an individual with locked-in syndrome and from an intact participant speaking silently. Long-duration electrodes were implanted in the motor speech cortex of the locked-in participant for up to 13 years. The data herein provide evidence that slow-firing single units are essential for optimal decoding accuracy. Additional evidence indicates that slow-firing single units could still be conditioned in the locked-in participant five years after implantation, further supporting their role in decoding.


Author(s): Xiaowei Che, Yuanjie Zheng, Xin Chen, Sutao Song, Shouxin Li

Color plays an important role in object recognition and visual working memory (VWM). Decoding color VWM in the human brain helps to elucidate the mechanisms of visual cognitive processing and to evaluate memory ability. Recently, several studies showed that color could be decoded from scalp electroencephalogram (EEG) signals during the encoding stage of VWM, which processes visible information with strong neural coding. Whether color can be decoded from other VWM processing stages, especially the maintaining stage, which processes invisible information, is still unknown. Here, we constructed an EEG color graph convolutional network model (ECo-GCN) to decode colors during different VWM stages. Based on graph convolutional networks, ECo-GCN considers the graph structure of EEG signals and may be more efficient for color decoding. We found that (1) decoding accuracies for colors during the encoding, early, and late maintaining stages were 81.58%, 79.36%, and 77.06%, respectively, exceeding that during the pre-stimuli stage (67.34%), and (2) the decoding accuracy during the maintaining stage could predict participants' memory performance. The results suggest that EEG signals during the maintaining stage may be more sensitive than behavioral measurements for predicting human VWM performance, and that ECo-GCN provides an effective approach to exploring human cognitive function.
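The core idea behind a GCN-style decoder such as ECo-GCN is that EEG channels form a graph and each layer propagates features between neighboring channels. A minimal sketch of one symmetric-normalized propagation step follows; the adjacency, feature dimensions, and the two-channel example are illustrative assumptions, not the paper's actual architecture.

```python
# One graph-convolution propagation step, D^{-1/2} (A + I) D^{-1/2} X,
# where graph nodes stand for EEG channels (illustrative sketch only;
# ECo-GCN's real adjacency and layer stack are not reproduced here).
import math

def gcn_propagate(adj, feats):
    """Propagate per-channel features over the channel graph."""
    n = len(adj)
    # add self-loops so each channel keeps part of its own signal
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # symmetric degree normalization
    a_norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    f = len(feats[0])
    # matrix product A_norm @ X
    return [[sum(a_norm[i][k] * feats[k][j] for k in range(n))
             for j in range(f)] for i in range(n)]

# Two electrodes treated as neighbors: after normalization, each output
# feature becomes the mean of the two channels' input features.
out = gcn_propagate([[0, 1], [1, 0]], [[2.0], [4.0]])
```

A learned weight matrix and nonlinearity would normally follow each propagation step; they are omitted to keep the graph operation itself visible.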


Sensors
2021
Vol. 21 (22), pp. 7713
Author(s): Zengyu Qing, Zongxing Lu, Yingjie Cai, Jing Wang

The surface electromyography (sEMG) signal contains information about the movement intention generated by the human brain, and it is the most intuitive and common solution for controlling robots, orthoses, prostheses and rehabilitation equipment. In recent years, gesture decoding based on sEMG signals has received considerable research attention. In this paper, the effects of muscle fatigue, forearm angle and acquisition time on the accuracy of gesture decoding were investigated. Taking 11 static gestures as samples, four specific muscles (i.e., the superficial flexor digitorum (SFD), flexor carpi ulnaris (FCU), extensor carpi radialis longus (ECRL) and finger extensors (FE)) were selected for sampling sEMG signals. Root Mean Square (RMS), Waveform Length (WL), Zero Crossing (ZC) and Slope Sign Change (SSC) were chosen as signal features; Linear Discriminant Analysis (LDA) and a Probabilistic Neural Network (PNN) were used to construct classification models, and the decoding accuracies of the models were then obtained under the different influencing factors. The experimental results showed that the decoding accuracy of the classification model decreased by an average of 7%, 10%, and 13% under muscle fatigue, changes in forearm angle and acquisition time, respectively. Furthermore, acquisition time had the biggest impact on decoding accuracy, with a maximum reduction of nearly 20%.
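The four time-domain features named in the abstract have standard textbook definitions, sketched below over a single analysis window of sEMG samples. The window contents and zero thresholds are illustrative choices, not the paper's parameters.

```python
# Standard time-domain sEMG features: RMS, WL, ZC, SSC
# (illustrative implementations; window length and thresholds are
# assumptions, not the values used in the study).
import math

def rms(x):                      # Root Mean Square of the window
    return math.sqrt(sum(v * v for v in x) / len(x))

def wl(x):                       # Waveform Length: cumulative amplitude change
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def zc(x, thr=0.0):              # Zero Crossings above an amplitude threshold
    return sum(1 for i in range(len(x) - 1)
               if x[i] * x[i + 1] < 0 and abs(x[i] - x[i + 1]) > thr)

def ssc(x, thr=0.0):             # Slope Sign Changes above a threshold
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > thr)

window = [1.0, -1.0, 1.0, -1.0]  # toy 4-sample sEMG window
features = [rms(window), wl(window), zc(window), ssc(window)]
```

In a pipeline like the one described, these four values per muscle channel would be concatenated into the feature vector fed to the LDA or PNN classifier.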


2021
Vol. 12 (1)
Author(s): Weilun Sun, Ilseob Choi, Stoyan Stoyanov, Oleg Senkov, Evgeni Ponimaskin, …

Abstract: The retrosplenial cortex (RSC) has diverse functional inputs and is engaged by various sensory, spatial, and associative learning tasks. We examined how multiple functional aspects are integrated at the single-cell level in the RSC and how the encoding of task-related parameters changes across learning. Using a visuospatial context discrimination paradigm and two-photon calcium imaging in behaving mice, we found that a large proportion of dysgranular RSC neurons encoded multiple task-related dimensions while forming context-value associations across learning. During reversal learning, which requires increased cognitive flexibility, the proportion of multidimensional encoding neurons increased, and these neurons showed higher decoding accuracy for behaviorally relevant context-value associations. Chemogenetic inactivation of the RSC led to decreased behavioral context discrimination during learning phases in which context-value associations were formed, while recall of previously formed associations remained intact. RSC inactivation also resulted in a persistent positive behavioral bias in valuing contexts, indicating a role for the RSC in context-value updating.


2021
pp. 1-30
Author(s): Wei Liu, Nils Kohn, Guillén Fernández

Abstract: Flexible behavior requires switching between different task conditions. Such task switching is known to carry costs in the form of slowed reaction times, reduced accuracy, or both. The neural correlates of task switching have usually been studied by requiring participants to switch between distinct task conditions that recruit different brain networks. Here, we investigated the transition of neural states underlying switching between two opposite memory-related processes (i.e., memory retrieval and memory suppression) in a memory task. We investigated 26 healthy participants who performed a think/no-think task while in the fMRI scanner. Behaviorally, we show that it was more difficult for participants to suppress unwanted memories when a no-think trial was preceded by a think trial rather than by another no-think trial. Neurally, we demonstrate that think–no-think switches were associated with an increase in control-related and a decrease in memory-related brain activity. Neural representations of task condition, assessed by decoding accuracy, were weaker immediately after task switching than after nonswitch transitions, suggesting a switch-induced delay in the neural transition toward the required task condition. This suggestion is corroborated by an association between condition-specific representational strength and condition-specific performance on switch trials. Taken together, we provide neural evidence from a time-resolved decoding approach for the notion that carryover of the previous task set's activation is associated with the switching cost, leading to less successful memory suppression.
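Time-resolved decoding of the kind described trains and tests a classifier separately at each time point, producing a decoding-accuracy time course. A toy sketch is below, using a nearest-centroid classifier scored on its own training data for brevity; the study's actual classifier, features, and cross-validation scheme are not reproduced, and the trial values are invented.

```python
# Time-resolved decoding sketch: one accuracy value per time point
# (toy nearest-centroid classifier, training-set accuracy only; a real
# analysis would cross-validate and use multivariate fMRI patterns).

def nearest_centroid_accuracy(samples, labels):
    """Fit per-condition centroids, then score each sample against them."""
    cents = {}
    for lab in set(labels):
        vals = [s for s, l in zip(samples, labels) if l == lab]
        cents[lab] = sum(vals) / len(vals)
    correct = sum(1 for s, l in zip(samples, labels)
                  if min(cents, key=lambda c: abs(s - cents[c])) == l)
    return correct / len(samples)

labels = ["think", "think", "no-think", "no-think"]
# features_over_time[t] holds one scalar feature per trial at time point t
features_over_time = [
    [0.0, 0.1, 0.0, 0.1],   # early: conditions overlap -> chance level
    [0.9, 1.0, 0.0, 0.1],   # later: conditions separate -> high accuracy
]
accuracy_course = [nearest_centroid_accuracy(f, labels)
                   for f in features_over_time]
```

The resulting accuracy course is what lets one compare representational strength immediately after a switch versus after a nonswitch transition.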


2021
Author(s): Steven M. Peterson, Rajesh P. N. Rao, Bingni W. Brunton

Abstract: Recent advances in neural decoding have accelerated the development of brain-computer interfaces aimed at assisting users with everyday tasks such as speaking, walking, and manipulating objects. However, current approaches for training neural decoders commonly require large quantities of labeled data, which can be laborious or infeasible to obtain in real-world settings. One intriguing alternative uses self-supervised models that share self-generated pseudo-labels between two data streams; such models have shown exceptional performance on unlabeled audio and video data, but it remains unclear how well they extend to neural decoding. Here, we learn neural decoders without labels by leveraging multiple simultaneously recorded data streams, including neural, kinematic, and physiological signals. Specifically, we apply cross-modal, self-supervised deep clustering to decode movements from brain recordings; these decoders are compared to supervised and unimodal, self-supervised models. We find that sharing pseudo-labels between two data streams during training substantially increases decoding performance compared to unimodal, self-supervised models, with accuracies approaching those of supervised decoders trained on labeled data. Next, we develop decoders trained on three modalities that match or slightly exceed the performance of supervised models, achieving state-of-the-art neural decoding accuracy. Cross-modal decoding is a flexible, promising approach for robust, adaptive neural decoding in real-world applications without any labels.
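The cross-modal pseudo-labeling idea can be sketched in a few lines: cluster one recorded stream to produce pseudo-labels, then train the decoder for the other stream on those labels. The toy below uses a median split as the "clustering" and centroid fitting as the "decoder"; the paper's deep-clustering models are far richer, and all names and values here are invented.

```python
# Cross-modal pseudo-labeling sketch: labels derived from one stream
# (e.g., kinematics) train a decoder on the simultaneously recorded
# neural stream, with no ground-truth labels anywhere.

def pseudo_labels(stream):
    """Cluster a 1-D stream into two groups by a median split."""
    srt = sorted(stream)
    med = srt[len(srt) // 2]
    return [1 if v >= med else 0 for v in stream]

def fit_centroids(stream, labels):
    """Train a toy decoder: one centroid per pseudo-label."""
    return {lab: sum(v for v, l in zip(stream, labels) if l == lab) /
                 labels.count(lab) for lab in set(labels)}

kinematics = [0.1, 0.2, 0.9, 1.0]       # unlabeled movement stream
neural     = [10.0, 11.0, 30.0, 31.0]   # simultaneous neural stream

labels = pseudo_labels(kinematics)       # pseudo-labels from modality A
decoder = fit_centroids(neural, labels)  # decoder for modality B, no labels
```

In the full method the two streams also exchange pseudo-labels iteratively during training rather than in a single pass.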


Author(s): Yuxi Shi, Gowrishankar Ganesh, Hideyuki Ando, Yasuharu Koike, Eiichi Yoshida, …

A significant problem in brain–computer interface (BCI) research is decoding: obtaining the required information from very weak, noisy electroencephalography (EEG) signals and extracting as much information as possible from limited data. Traditional intention decoding methods, which obtain information from induced or spontaneous brain activity, have shortcomings in terms of performance, computational expense and usage burden. Here, a new methodology called prediction error decoding was used for motor imagery (MI) detection and compared with direct intention decoding. Galvanic vestibular stimulation (GVS) was used to induce subliminal sensory feedback between the forehead and mastoids without any usage burden. Prediction errors were generated between the GVS-induced sensory feedback and the MI direction. The corresponding prediction error decoding of the front/back MI task was validated. A test decoding accuracy of 77.83–78.86% (median) was achieved during GVS for every 100 ms interval. A nonzero weight parameter-based channel screening (WPS) method was proposed to select channels individually and in common during GVS. When the WPS common-selected mode was compared with the WPS individual-selected mode and a classical channel selection method based on correlation coefficients (CCS), satisfactory decoding performance was observed for the selected channels. The results indicate the positive impact of measuring common specific channels for the BCI.
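The WPS idea, as described, keeps only the channels whose decoder weights are nonzero, either per participant or as the set common to all participants. A minimal sketch under those assumptions follows; the channel names, weights, and tolerance are illustrative, and the paper's decoder and weight estimation are not reproduced.

```python
# Sketch of nonzero-weight channel screening (WPS): individual mode keeps
# a participant's nonzero-weight channels; common mode intersects the
# selections across participants (illustrative data and tolerance).

def wps_individual(weights, eps=1e-9):
    """Channels with a nonzero decoder weight for one participant."""
    return {ch for ch, w in weights.items() if abs(w) > eps}

def wps_common(all_weights, eps=1e-9):
    """Channels selected for every participant (common-selected mode)."""
    selections = [wps_individual(w, eps) for w in all_weights]
    common = selections[0]
    for s in selections[1:]:
        common &= s
    return common

subj1 = {"Fz": 0.8, "Cz": 0.0, "Pz": -0.3}   # hypothetical weight maps
subj2 = {"Fz": 0.5, "Cz": 0.2, "Pz": 0.0}
common = wps_common([subj1, subj2])
```

The common mode trades per-subject fit for a channel set that transfers across users, which is the comparison the abstract reports against CCS.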


2021
Vol. 12
Author(s): Ivine Kuruvila, Jan Muncke, Eghart Fischer, Ulrich Hoppe

The human brain performs remarkably well at segregating a particular speaker from interfering ones in a multispeaker scenario. We can quantitatively evaluate this segregation capability by modeling the relationship between the speech signals present in an auditory scene and the listener's cortical signals measured using electroencephalography (EEG). This has opened up avenues to integrate neuro-feedback into hearing aids, where the device can infer the user's attention and enhance the attended speaker. Commonly used algorithms for inferring auditory attention are based on linear systems theory, in which cues such as speech envelopes are mapped onto the EEG signals. Here, we present a joint convolutional neural network (CNN)–long short-term memory (LSTM) model to infer auditory attention. Our joint CNN-LSTM model takes the EEG signals and the spectrograms of the multiple speakers as inputs and classifies the attention to one of the speakers. We evaluated the reliability of our network using three different datasets comprising 61 subjects in total, where each subject undertook a dual-speaker experiment. The three datasets corresponded to speech stimuli presented in three different languages, namely German, Danish, and Dutch. Using the proposed joint CNN-LSTM model, we obtained a median decoding accuracy of 77.2% at a trial duration of 3 s. Furthermore, we evaluated how much sparsity the model can tolerate by means of magnitude pruning and found that up to 50% sparsity was tolerated without substantial loss of decoding accuracy.
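Magnitude pruning, used above to test sparsity tolerance, zeroes the smallest-magnitude fraction of a model's weights. A minimal sketch on a flat weight list follows; the CNN-LSTM itself is not reproduced, and the weight values are invented.

```python
# Magnitude pruning sketch: zero out the smallest `sparsity` fraction of
# weights by absolute value (applied here to a flat list; in practice it
# is applied tensor-by-tensor across the trained network).

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| entries zeroed."""
    n_prune = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.1, 0.05, -0.8, 0.2, -0.3]
pruned = magnitude_prune(w, 0.5)   # 50% sparsity, the level tested above
```

After pruning, accuracy is re-measured on held-out trials; tolerating 50% sparsity suggests the decoder could run on resource-constrained hearing-aid hardware.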

