neural data
Recently Published Documents


TOTAL DOCUMENTS: 373 (FIVE YEARS: 161)

H-INDEX: 27 (FIVE YEARS: 5)

eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Baohua Zhou ◽  
Zifan Li ◽  
Sunnie Kim ◽  
John Lafferty ◽  
Damon A Clark

Animals have evolved sophisticated visual circuits to solve a vital inference problem: detecting whether or not a visual signal corresponds to an object on a collision course. Such events are detected by specific circuits sensitive to visual looming, or objects increasing in size. Various computational models have been developed for these circuits, but how the collision-detection inference problem itself shapes the computational structures of these circuits remains unknown. Here, inspired by the distinctive structures of LPLC2 neurons in the visual system of Drosophila, we build anatomically constrained shallow neural network models and train them to identify visual signals that correspond to impending collisions. Surprisingly, the optimization arrives at two distinct, opposing solutions, only one of which matches the actual dendritic weighting of LPLC2 neurons. Both solutions can solve the inference problem with high accuracy when the population size is large enough. The LPLC2-like solution reproduces experimentally observed LPLC2 neuron responses for many stimuli, and reproduces the canonical tuning of loom-sensitive neurons, even though the models are never trained on neural data. Thus, LPLC2 neuron properties and tuning are predicted by optimizing an anatomically constrained neural network to detect impending collisions. More generally, these results illustrate how optimizing inference tasks that are important for an animal's perceptual goals can reveal and explain computational properties of specific sensory neurons.
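
To make the modeling approach concrete, here is a minimal sketch of an anatomically constrained shallow network for loom detection, assuming PyTorch; the four-branch structure loosely mirrors the four dendritic quadrants of LPLC2 neurons, and all names, sizes, and the training signal are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LoomDetector(nn.Module):
    """One LPLC2-like unit: four dendritic branches, each weighting one
    direction-selective motion field before a rectifying nonlinearity."""

    def __init__(self, field_size=12):
        super().__init__()
        # One spatial weight map per cardinal motion direction (four branches).
        self.dendrites = nn.Parameter(torch.rand(4, field_size, field_size))

    def forward(self, flow):
        # flow: (batch, 4, field_size, field_size) direction-selective inputs
        drive = (self.dendrites * flow).sum(dim=(2, 3))  # per-branch drive
        return torch.relu(drive).sum(dim=1)              # rectify, then pool

model = LoomDetector()
flow = torch.rand(8, 4, 12, 12)           # stand-in motion stimuli
prob = torch.sigmoid(model(flow))         # collision probability per stimulus
loss = nn.BCELoss()(prob, torch.ones(8))  # hit/miss labels would go here
loss.backward()
```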


2021 ◽  
Vol 15 ◽  
Author(s):  
Suguru Wakita ◽  
Taiki Orima ◽  
Isamu Motoyoshi

Recent advances in brain decoding have made it possible to classify image categories based on neural activity. Increasing numbers of studies have further attempted to reconstruct the image itself. However, because images of objects and scenes inherently involve spatial layout information, the reconstruction usually requires retinotopically organized neural data with high spatial resolution, such as fMRI signals. In contrast, spatial layout does not matter in the perception of “texture,” which is known to be represented as spatially global image statistics in the visual cortex. This property of “texture” enables us to reconstruct the perceived image from EEG signals, which have a low spatial resolution. Here, we propose an MVAE-based approach for reconstructing texture images from visual evoked potentials measured from observers viewing natural textures such as the textures of various surfaces and object ensembles. This approach allowed us to reconstruct images that perceptually resemble the original textures with a photographic appearance. The present approach can be used as a method for decoding the highly detailed “impression” of sensory stimuli from brain activity.
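
A hedged sketch of the core idea, taking MVAE to denote a multimodal variational autoencoder with a shared latent space for images and EEG; the layer sizes, the 512-dimensional EEG feature vector, and all names below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

latent = 64
img_enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
                        nn.Linear(256, 2 * latent))
eeg_enc = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                        nn.Linear(256, 2 * latent))
img_dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                        nn.Linear(256, 64 * 64))

def sample(stats):
    # Reparameterized draw from the Gaussian the encoder outputs.
    mu, logvar = stats.chunk(2, dim=-1)
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

img = torch.rand(8, 1, 64, 64)  # texture images (placeholder)
eeg = torch.rand(8, 512)        # EEG features (placeholder)
# Training would push both encoders toward one shared latent space, with
# reconstruction losses plus KL terms (omitted here for brevity).
recon_from_img = img_dec(sample(img_enc(img)))
recon_from_eeg = img_dec(sample(eeg_enc(eeg)))  # test-time path: EEG -> image
```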


2021 ◽  
pp. 221-224
Author(s):  
Oshin Vartanian

Environmental psychology is concerned with understanding the impact of the environment—built and natural—on the mind. Neuroscience of architecture can contribute to this aim by elucidating the workings of the brain in relation to features of our physical environment. Toward that end, Vartanian et al. (2013) examined the impact of contour on aesthetic judgments and approach-avoidance decisions while viewing images of room interiors in the functional magnetic resonance imaging (fMRI) scanner. Participants found curvilinear rooms more beautiful than rectilinear rooms, and viewing curvilinear rooms in that context activated the anterior cingulate cortex—a region involved in processing emotion. That observation, coupled with the finding that pleasantness accounted for the majority of the variance in beauty judgments, supports the idea that our preference for curvilinear design is driven by affect. This study represents an example of how neural data can reveal mechanisms that underlie our aesthetic preferences in the domain of architecture.


Author(s):  
Nathan J Hall ◽  
David J Herzfeld ◽  
Stephen G Lisberger

We evaluate existing spike sorters and present a new one that resolves many sorting challenges. The new sorter, called "full binary pursuit" or FBP, comprises multiple steps. First, it thresholds and clusters to identify the waveforms of all unique neurons in the recording. Second, it uses greedy binary pursuit to optimally assign all the spike events in the original voltages to separable neurons. Third, it resolves spike events that are described more accurately as the superposition of spikes from two other neurons. Fourth, it resolves situations where the recorded neurons drift in amplitude or across electrode contacts during a long recording session. Comparison with other sorters on ground-truth datasets reveals many of the failure modes of spike sorting. We examine overall spike-sorter performance on these ground-truth datasets and suggest post-sorting analyses that can improve the veracity of neural analyses by minimizing the intrusion of failure modes into the analysis and interpretation of neural data. Our analysis reveals the tradeoff between the number of channels a sorter can process, the speed of sorting, and some of the failure modes of spike sorting. FBP works best on data from 32 channels or fewer. It trades speed and channel count for the avoidance of specific failure modes that matter for some use cases. We conclude that all the spike sorting algorithms studied have advantages and shortcomings, and that the appropriate use of a spike sorter requires a detailed assessment of the data being sorted and the experimental goals of the analyses.
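
The greedy binary-pursuit step can be illustrated with a short, hypothetical NumPy sketch: repeatedly subtract the template/time pair that most reduces the residual energy, stopping when no pair clears a threshold. This shows the general technique only, not the FBP implementation.

```python
import numpy as np

def binary_pursuit(voltage, templates, threshold):
    """Greedily add the template/time pair that most reduces residual
    energy, stopping when no pair improves the fit by more than threshold."""
    residual = voltage.copy()
    spikes = []
    while True:
        best = (None, None, threshold)
        for k, tpl in enumerate(templates):
            # Reduction in squared error from subtracting tpl at each lag:
            # 2 * (residual . tpl) - ||tpl||^2
            gain = 2 * np.correlate(residual, tpl, mode="valid") - tpl @ tpl
            t = int(np.argmax(gain))
            if gain[t] > best[2]:
                best = (k, t, gain[t])
        if best[0] is None:
            return spikes, residual
        k, t, _ = best
        residual[t:t + len(templates[k])] -= templates[k]
        spikes.append((k, t))

templates = [np.hanning(20), -0.7 * np.hanning(20)]
voltage = np.zeros(1000)
voltage[100:120] += templates[0]  # one embedded spike
spikes, _ = binary_pursuit(voltage, templates, threshold=0.5)
print(spikes)                     # [(0, 100)]
```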


2021 ◽  
Author(s):  
Sophia Shatek ◽  
Amanda K Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson

The ability to perceive moving objects is crucial for survival and threat identification. The association between the ability to move and being alive is learned early in childhood, yet not all moving objects are alive. Natural, non-agentive movement (e.g., clouds, fire) causes confusion in children and adults under time pressure. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. Most prior work has used only moving stimuli that are also animate, so it is difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistakenly judged to be alive, and were often classified as not moving. Movement explained significant variance in the neural data, during both a classification task and passive viewing. These results show that capacity for movement is an important dimension in the structure of human visual object representations.
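
One common way to ask when a stimulus dimension such as capacity for movement is reflected in EEG is time-resolved decoding: train a classifier at each timepoint and track when accuracy rises above chance. A minimal sketch with scikit-learn and random placeholder data (not the study's recordings or its exact analysis pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 64, 50
eeg = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG
moving = rng.integers(0, 2, n_trials)                       # 1 = moving object

# One classifier per timepoint: above-chance accuracy at time t means the
# EEG pattern at t carries information about capacity for movement.
accuracy = [
    cross_val_score(LinearDiscriminantAnalysis(),
                    eeg[:, :, t], moving, cv=5).mean()
    for t in range(n_times)
]
print(max(accuracy))  # ~0.5 here, since the placeholder data carry no signal
```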


2021 ◽  
Author(s):  
Manuel Beiran ◽  
Nicolas Meirhaeghe ◽  
Hansem Sohn ◽  
Mehrdad Jazayeri ◽  
Srdjan Ostojic

Biological brains possess an unparalleled ability to generalize adaptive behavioral responses from only a few examples. How neural processes enable this capacity to extrapolate is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity. Here we tested this hypothesis in the framework of flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform and enabled smooth extrapolation to inputs well beyond the training range. Reverse-engineering and theoretical analyses demonstrated that this parametric control of extrapolation relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural data from behaving monkeys confirmed the geometric and dynamical signatures of this mechanism.
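
A toy illustration of the hypothesized mechanism, assuming NumPy: a rank-one recurrent network whose activity is confined to a low-dimensional subspace, with a tonic input that parametrically shifts the dynamics. This is a generic low-rank-network sketch, not the trained networks analyzed in the paper.

```python
import numpy as np

N, dt, steps = 200, 0.01, 500
rng = np.random.default_rng(0)
m = rng.standard_normal(N)     # output direction of rank-one connectivity
n = rng.standard_normal(N)     # input-selection direction
w_in = rng.standard_normal(N)  # direction along which the tonic input acts

def simulate(tonic):
    x = np.zeros(N)
    for _ in range(steps):
        rec = m * (n @ np.tanh(x)) / N       # rank-one recurrence
        x += dt * (-x + rec + tonic * w_in)  # leak + recurrence + tonic drive
    return n @ np.tanh(x) / N                # activity along the latent axis

# Varying the tonic input parametrically shifts the network's state along a
# one-dimensional manifold, including for amplitudes outside any hypothetical
# training range (the extrapolation regime).
for amp in [0.5, 1.0, 2.0, 4.0]:
    print(amp, simulate(amp))
```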


2021 ◽  
Author(s):  
Yicong Zheng ◽  
Xiaonan L. Liu ◽  
Satoru Nishiyama ◽  
Charan Ranganath ◽  
Randall C. O'Reilly

The hippocampus plays a critical role in the rapid learning of new episodic memories. Many computational models propose that the hippocampus is an autoassociator that relies on Hebbian learning (i.e., "cells that fire together, wire together"). However, Hebbian learning is computationally suboptimal as it modifies weights unnecessarily beyond what is actually needed to achieve effective retrieval, causing more interference and resulting in a lower learning capacity. Our previous computational models have utilized a powerful, biologically plausible form of error-driven learning in hippocampal CA1 and entorhinal cortex (EC) (functioning as a sparse autoencoder) by contrasting local activity states at different phases in the theta cycle. Based on specific neural data and a recent abstract computational model, we propose a new model called Theremin (Total Hippocampal ERror MINimization) that extends error-driven learning to area CA3, the mnemonic heart of the hippocampal system. In the model, CA3 responds to the monosynaptic EC input before the disynaptic EC input that arrives through the dentate gyrus (DG), giving rise to a temporal difference between these two activation states that drives error-driven learning in the EC->CA3 and CA3<->CA3 projections. In effect, DG serves as a teacher to CA3, correcting its patterns into more pattern-separated ones, thereby reducing interference. Results showed that Theremin, compared with our original model, had significantly greater capacity and learning speed. The model makes several novel predictions that can be tested in future studies.
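
The temporal-difference flavor of the proposed learning rule can be sketched in a few lines of NumPy: the late, DG-corrected CA3 state serves as the teaching signal for the weights that produced the early, EC-driven state. Everything below (sizes, sparsity levels, the random stand-in for the DG-driven pattern) is illustrative, not the Theremin code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ec, n_ca3, lr = 100, 50, 0.1
W_ec_ca3 = 0.1 * rng.standard_normal((n_ca3, n_ec))  # EC->CA3 weights

def k_winners(x, k=5):
    # Crude k-winners-take-all: sparse, pattern-separated activity.
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

ec = (rng.random(n_ec) < 0.1).astype(float)      # EC input pattern
early = k_winners(W_ec_ca3 @ ec)                 # CA3 from monosynaptic EC drive
late = k_winners(rng.standard_normal(n_ca3), 3)  # stand-in for DG-corrected state

# Delta-rule update: move the early state toward the late one, changing
# weights only as much as effective retrieval actually requires.
W_ec_ca3 += lr * np.outer(late - early, ec)
```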


2021 ◽  
Author(s):  
Kamila M Jozwik ◽  
Tim C Kietzmann ◽  
Radoslaw M Cichy ◽  
Nikolaus Kriegeskorte ◽  
Marieke Mur

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. Here, we address this issue by asking which representational features are currently unaccounted for in neural time-series data, estimated for multiple areas of the human ventral stream via source-reconstructed magnetoencephalography (MEG) data. In particular, we focus on the ability of visuo-semantic models, consisting of human-generated labels of higher-level object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual transition in the importance of visuo-semantic features from early to higher-level areas along the ventral stream. While early visual areas are better explained by DNN features, higher-level cortical dynamics are best accounted for by visuo-semantic models. These results suggest that current DNNs fail to fully capture the visuo-semantic features represented in higher-level human visual cortex, and they point toward a path to more accurate models of ventral-stream computations.
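
The variance-partitioning logic (how much variance visuo-semantic features explain beyond DNN features) can be sketched with synthetic data and scikit-learn; the feature matrices below are random placeholders, and a real analysis would use cross-validated rather than in-sample R².

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_images = 120
dnn = rng.standard_normal((n_images, 40))       # DNN-layer features
semantic = rng.standard_normal((n_images, 10))  # human-generated labels
# Synthetic "neural" response that depends on both feature sets.
neural = dnn @ rng.standard_normal(40) + semantic @ rng.standard_normal(10)

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_dnn = r2(dnn, neural)                          # DNN features alone
r2_full = r2(np.hstack([dnn, semantic]), neural)  # DNN + visuo-semantic
print("unique visuo-semantic variance:", r2_full - r2_dnn)
```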

