neural decoding
Recently Published Documents


TOTAL DOCUMENTS: 224 (five years: 96)
H-INDEX: 21 (five years: 6)

2022 ◽ Vol 12 (1) ◽ Author(s): Thirza Dado, Yağmur Güçlütürk, Luca Ambrogioni, Gabriëlle Ras, Sander Bosch, ...

Abstract: Neural decoding can be conceptualized as the problem of mapping brain responses back to sensory stimuli via a feature space. We introduce (i) a novel experimental paradigm that uses well-controlled yet highly naturalistic stimuli with a priori known feature representations and (ii) an implementation thereof for HYPerrealistic reconstruction of PERception (HYPER) of faces from brain recordings. To this end, we embrace the use of generative adversarial networks (GANs) at the earliest step of our neural decoding pipeline by acquiring fMRI data as participants perceive face images synthesized by the generator network of a GAN. We show that the latent vectors used for generation effectively capture the same defining stimulus properties as the fMRI measurements. These latents (conditioned on the GAN) therefore serve as the intermediate feature representations underlying the perceived images; predicting them from brain responses and feeding them back through the generator regenerates the originally perceived stimuli, yielding the most accurate reconstructions of perception to date.
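As a rough illustration of the pipeline this abstract describes, the sketch below maps fMRI responses to GAN latents with a ridge regression and then pushes the predicted latents back through a generator. All shapes, the regularization, and the `generator` stand-in are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a HYPER-style decoding pipeline.
# Assumptions (not from the paper): data shapes, ridge regularization,
# and a `generator` callable standing in for a pretrained GAN generator.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: fMRI responses (trials x voxels) and the known
# GAN latent vector (trials x latent_dim) used to synthesize each stimulus.
n_train, n_test, n_voxels, latent_dim = 1000, 50, 4096, 512
X_train = rng.standard_normal((n_train, n_voxels))
Z_train = rng.standard_normal((n_train, latent_dim))
X_test = rng.standard_normal((n_test, n_voxels))

# Because the stimuli were generated *from* known latents, decoding reduces
# to a regression problem from brain responses back to the latent space.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, Z_train)
Z_pred = decoder.predict(X_test)        # predicted latents, one per test trial

# Reconstruction: push predicted latents back through the generator.
def generator(z):
    """Stand-in for G(z); a real pipeline would call the trained GAN here."""
    return np.tanh(z @ rng.standard_normal((latent_dim, 64 * 64)))

reconstructions = generator(Z_pred)     # (n_test, 64*64) hypothetical images
print(reconstructions.shape)
```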


2021 ◽ Vol 12 (1) ◽ Author(s): Qianli Yang, Edgar Walker, R. James Cotton, Andreas S. Tolias, Xaq Pitkow

Abstract: Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.
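To make the predicted relationship concrete, here is a minimal, self-contained sketch (not the authors' code) that computes choice correlations for quadratic statistics of simulated population activity and checks that they track the statistics' informative weights. The simulation and all parameters are assumptions for illustration only.

```python
# A minimal sketch: "choice correlations" for nonlinear (here, quadratic)
# statistics of simulated population activity. The theory's prediction is
# that, under optimal decoding, statistics carrying more information should
# correlate more strongly with the animal's choice.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 5000, 20

# Simulated activity r and a binary choice driven by a quadratic readout,
# mimicking a nonlinear population code (all parameters are illustrative).
r = rng.standard_normal((n_trials, n_neurons))
W = rng.standard_normal((n_neurons, n_neurons))
W = (W + W.T) / 2                           # symmetric quadratic kernel
decision_var = np.einsum('ti,ij,tj->t', r, W, r)
choice = (decision_var + rng.standard_normal(n_trials) > 0).astype(float)

# Nonlinear statistics: all pairwise products r_i * r_j (upper triangle).
iu = np.triu_indices(n_neurons)
stats = (r[:, :, None] * r[:, None, :])[:, iu[0], iu[1]]

# Choice correlation of each statistic: corr(stat_k, choice) across trials.
stats_z = (stats - stats.mean(0)) / stats.std(0)
choice_z = (choice - choice.mean()) / choice.std()
choice_corr = stats_z.T @ choice_z / n_trials

# Sanity check on the optimal-decoding intuition: choice correlations
# should track each statistic's weight in the decision variable (W's entries).
print(np.corrcoef(choice_corr, W[iu])[0, 1])
```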


Smart Health ◽ 2021 ◽ pp. 100229 ◽ Author(s): Huining Li, Huan Chen, Chenhan Xu, Anarghya Das, Xingyu Chen, ...

Author(s): Ishita Basu, Ali Yousefi, Britni Crocker, Rina Zelmann, Angelique C. Paulk, ...

2021 ◽ Vol 168 ◽ pp. S180 ◽ Author(s): Jing Ding, Zhiyuan Cao, Yi Kang, Shiyu Zhao, Jiacai Zhang

Sensors ◽ 2021 ◽ Vol 21 (19) ◽ pp. 6372 ◽ Author(s): Shih-Hung Yang, Jyun-We Huang, Chun-Jui Huang, Po-Hsiung Chiu, Hsin-Yi Lai, ...

Intracortical brain–computer interfaces (iBCIs) translate neural activity into control commands, thereby allowing paralyzed persons to control devices with their brain signals. Recurrent neural networks (RNNs) are widely used as neural decoders because they can learn neural response dynamics from continuous neural activity. Nevertheless, input neural activity that is excessively long or short may degrade an RNN's decoding performance. Building on the temporal attention module, which exploits relations among features over time, we propose a temporal attention-aware timestep selection (TTS) method that makes the salience of each timestep in the input neural activity interpretable. Furthermore, TTS determines the appropriate input length for accurate neural decoding. Experimental results show that the proposed TTS efficiently selects 28 essential timesteps for RNN-based neural decoders, outperforming state-of-the-art neural decoders on two nonhuman primate datasets (R² = 0.76 ± 0.05 for monkey Indy and CC = 0.91 ± 0.01 for monkey N). It also reduces the computation time for offline training (by 5–12%) and online prediction (by 16–18%). When the attention mechanism in TTS is visualized, preparatory neural activity is consecutively highlighted during arm movement, and the most recent neural activity is highlighted during the resting state. Selecting only a few essential timesteps for an RNN-based neural decoder thus provides sufficient decoding performance while requiring only a short computation time.
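A hedged sketch of the timestep-selection idea follows: score every timestep with a small attention module, keep only the most salient ones, and decode from those with an RNN. The layer sizes, the top-k selection rule, and the GRU decoder are illustrative choices, not the paper's exact architecture.

```python
# Illustrative temporal-attention timestep selection (TTS) decoder sketch.
import torch
import torch.nn as nn

class TTSDecoder(nn.Module):
    def __init__(self, n_channels, hidden, n_keep, out_dim):
        super().__init__()
        self.score = nn.Linear(n_channels, 1)      # per-timestep salience score
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, out_dim)  # e.g., 2-D cursor velocity
        self.n_keep = n_keep

    def forward(self, x):                          # x: (batch, time, channels)
        attn = torch.softmax(self.score(x).squeeze(-1), dim=1)  # (batch, time)
        # Keep the n_keep most salient timesteps, restored to temporal order.
        idx = attn.topk(self.n_keep, dim=1).indices.sort(dim=1).values
        x_sel = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        _, h = self.rnn(x_sel)
        return self.readout(h[-1]), attn           # prediction + salience map

# Illustrative usage: 100 binned timesteps of 96-channel activity,
# reduced to 28 essential timesteps as in the reported experiments.
model = TTSDecoder(n_channels=96, hidden=64, n_keep=28, out_dim=2)
y, attn = model(torch.randn(8, 100, 96))
print(y.shape, attn.shape)      # torch.Size([8, 2]) torch.Size([8, 100])
```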


Sensors ◽ 2021 ◽ Vol 21 (18) ◽ pp. 6235 ◽ Author(s): Chun-Hsien Hsu, Ya-Ning Wu

Neural decoding is useful for exploring when and where the brain encodes information. Higher classification accuracy means that an analysis is more likely to extract useful information from noise. In this paper, we present the application of a nonlinear, nonstationary signal decomposition technique, the empirical mode decomposition (EMD), to MEG data. We discuss the fundamental concepts and the importance of nonlinear methods for analyzing brainwave signals, and demonstrate the procedure on an open-source MEG dataset from a facial recognition task. The improved clarity of the data allowed subsequent decoding analyses to capture distinguishing features between conditions that were formerly overlooked in the existing literature, while raising interesting questions concerning hemispheric dominance in the encoding of facial and identity information.
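The sketch below shows one way such a preprocessing step could look, decomposing a single simulated MEG channel into intrinsic mode functions with the PyEMD package (a library choice the paper does not prescribe) and deriving a simple per-mode feature.

```python
# A minimal sketch of applying empirical mode decomposition (EMD) to one
# MEG channel before decoding; PyEMD is used here as one possible
# implementation, and the signal itself is synthetic.
import numpy as np
from PyEMD import EMD

fs = 1000.0                                   # illustrative sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

# Hypothetical single-channel trace: slow + fast oscillation + noise.
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 40 * t)
          + 0.2 * np.random.default_rng(2).standard_normal(t.size))

# EMD splits the nonstationary signal into intrinsic mode functions (IMFs),
# from fastest to slowest; each IMF (or a subset) can then feed a classifier
# in place of the raw trace.
imfs = EMD().emd(signal, t)
print(imfs.shape)                             # (n_imfs, n_samples)

# Example feature: per-IMF power, one value per mode per trial/channel.
power = (imfs ** 2).mean(axis=1)
```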


2021 ◽ Author(s): Steven M. Peterson, Rajesh P. N. Rao, Bingni W. Brunton

Abstract: Recent advances in neural decoding have accelerated the development of brain-computer interfaces aimed at assisting users with everyday tasks such as speaking, walking, and manipulating objects. However, current approaches for training neural decoders commonly require large quantities of labeled data, which can be laborious or infeasible to obtain in real-world settings. One intriguing alternative uses self-supervised models that share self-generated pseudo-labels between two data streams; such models have shown exceptional performance on unlabeled audio and video data, but it remains unclear how well they extend to neural decoding. Here, we learn neural decoders without labels by leveraging multiple simultaneously recorded data streams, including neural, kinematic, and physiological signals. Specifically, we apply cross-modal, self-supervised deep clustering to decode movements from brain recordings; these decoders are compared to supervised and unimodal, self-supervised models. We find that sharing pseudo-labels between two data streams during training substantially increases decoding performance compared to unimodal, self-supervised models, with accuracies approaching those of supervised decoders trained on labeled data. Next, we develop decoders trained on three modalities that match or slightly exceed the performance of supervised models, achieving state-of-the-art neural decoding accuracy. Cross-modal decoding is a flexible, promising approach for robust, adaptive neural decoding in real-world applications without any labels.
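The following sketch (not the authors' implementation) illustrates the core idea of cross-modal pseudo-labeling in its simplest form: cluster one unlabeled stream and train a decoder for the other stream against those cluster assignments. The data shapes and the use of k-means and logistic regression are assumptions.

```python
# Illustrative cross-modal pseudo-labeling: cluster one unlabeled data
# stream and use the cluster assignments as training targets for a decoder
# on the other, simultaneously recorded stream.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials = 600

# Two simultaneously recorded, unlabeled streams sharing latent structure:
# neural features and kinematic features of the same trials.
latent = rng.integers(0, 4, n_trials)                   # hidden movement type
neural = rng.standard_normal((n_trials, 64)) + latent[:, None]
kinematic = rng.standard_normal((n_trials, 6)) + latent[:, None]

# Step 1: self-generate pseudo-labels by clustering the kinematic stream.
pseudo = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(kinematic)

# Step 2: train the *neural* decoder against the kinematic pseudo-labels.
# In the full cross-modal scheme this alternates between streams and the
# feature extractors themselves are learned; a single pass suffices here.
decoder = LogisticRegression(max_iter=1000).fit(neural, pseudo)

# Sanity check: decoded clusters should align with the hidden structure
# (up to label permutation), without ever using true movement labels.
print((decoder.predict(neural) == pseudo).mean())
```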

