stimulus space
Recently Published Documents

TOTAL DOCUMENTS: 42 (five years: 12)
H-INDEX: 10 (five years: 1)

2021 · Vol 15
Author(s): Srinivas Ravishankar, Mariya Toneva, Leila Wehbe

A pervasive challenge in brain imaging is the presence of noise that hinders investigation of underlying neural processes, with Magnetoencephalography (MEG) in particular having very low Signal-to-Noise Ratio (SNR). The established strategy to increase MEG's SNR involves averaging multiple repetitions of data corresponding to the same stimulus. However, stimulus repetition can be undesirable, because underlying neural activity has been shown to change across trials, and repeating stimuli limits the breadth of the stimulus space experienced by subjects. In particular, the rising popularity of naturalistic studies with a single viewing of a movie or story necessitates the discovery of new approaches to increase SNR. We introduce a simple framework to reduce noise in single-trial MEG data by leveraging correlations in neural responses across subjects as they experience the same stimulus. We demonstrate its use in a naturalistic reading comprehension task with 8 subjects, with MEG data collected while they read the same story a single time. We find that our procedure results in data with reduced noise and allows for better discovery of neural phenomena. As proof-of-concept, we show that the N400m's correlation with word surprisal, an established finding in the literature, is far more clearly observed in the denoised data than in the original data. The denoised data also shows higher decoding and encoding accuracy than the original data, indicating that the neural signals associated with reading are either preserved or enhanced after the denoising procedure.
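The principle behind cross-subject denoising can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual procedure: a shared stimulus-driven signal is recovered by a leave-one-out average over subjects, since sensor noise that is independent across subjects cancels while stimulus-locked activity does not. The signal/noise magnitudes and subject count are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

T = 200  # time points in the (simulated) single-trial recording
signal = [random.gauss(0, 1) for _ in range(T)]  # shared stimulus-driven signal

# Each subject's recording = shared signal + independent sensor noise
subjects = [[s + random.gauss(0, 2) for s in signal] for _ in range(8)]

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# "Denoise" by averaging the other subjects' responses at each time point:
# independent noise shrinks by ~1/sqrt(n) while the shared component survives.
others_mean = [statistics.mean(s[t] for s in subjects[1:]) for t in range(T)]

raw_corr = corr(subjects[0], signal)        # single noisy subject vs. truth
denoised_corr = corr(others_mean, signal)   # leave-one-out average vs. truth
```

With 7 averaged subjects and noise twice as strong as signal, the averaged trace tracks the underlying signal substantially better than any single recording.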


2021 · Vol 21 (9) · pp. 2027
Author(s): Gaeun Son, Dirk B. Walther, Michael L. Mack

PLoS Biology · 2021 · Vol 19 (9) · pp. e3001393
Author(s): Jai Y. Yu, Loren M. Frank

The receptive field of a neuron describes the regions of a stimulus space where the neuron is consistently active. Sparse spiking outside of the receptive field is often considered to be noise, rather than a reflection of information processing. Whether this characterization is accurate remains unclear. We therefore contrasted the sparse, temporally isolated spiking of hippocampal CA1 place cells to the consistent, temporally adjacent spiking seen within their spatial receptive fields (“place fields”). We found that isolated spikes, which occur during locomotion, are strongly phase coupled to hippocampal theta oscillations and transiently express coherent nonlocal spatial representations. Further, prefrontal cortical activity is coordinated with and can predict the occurrence of future isolated spiking events. Rather than local noise within the hippocampus, sparse, isolated place cell spiking reflects a coordinated cortical–hippocampal process consistent with the generation of nonlocal scenario representations during active navigation.


2021
Author(s): Joshua Peterson, Stefan Uddenberg, Tom Griffiths, Alexander Todorov, Jordan W Suchow

The diversity in appearance of human faces and their naturalistic viewing conditions give rise to an expansive stimulus space over which humans perceive numerous psychological traits (e.g., perceived trustworthiness). Current scientific models characterize only a few of these traits, and over only a tiny fraction of possible faces. Here we show that generative image models from machine learning, combined with over 1 million human judgments, can capture more than 30 traits over a near-infinite set of face stimuli. This makes it possible to seamlessly infer and manipulate the psychological traits of arbitrary face photograph inputs and to generate infinite synthetic photorealistic face stimuli along those dimensions. The predictive accuracy of our model approaches human inter-rater reliability, which our simulations suggest would not have been possible with previous datasets having fewer faces, fewer trait ratings, or using low-dimensional feature representations.


2021
Author(s): Isabella Destefano, Timothy F. Brady, Edward Vul

“Similarity” is often thought to dictate memory errors. For example, in visual memory, memory judgements of lures are related to their psychophysical similarity to targets: an approximately exponential function in stimulus space (Schurgin et al. 2020). However, similarity is ill-defined for more complex stimuli, and memory errors seem to depend on all the remembered items, not just pairwise similarity. Such effects can be captured by a model that views similarity as a byproduct of Bayesian generalization (Tenenbaum & Griffiths, 2001). Here we ask whether the propensity of people to generalize from a set to an item predicts memory errors to that item. We use the “number game” generalization task to collect human judgements about set membership for symbolic numbers and show that memory errors for numbers are consistent with these generalization judgements rather than pairwise similarity. These results suggest that generalization propensity, rather than “similarity”, drives memory errors.
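The contrast between pairwise similarity and Bayesian generalization can be sketched with a toy version of the "number game". The hypothesis set, the exponential scale parameter, and the use of the size principle as a likelihood are illustrative assumptions, not the authors' exact model: given the set {2, 4, 8, 16}, generalization strongly favors 32 (another power of 2), while pairwise exponential similarity in numeric distance favors 17 (numerically near 16).

```python
import math

# Hypothesis spaces over 1..100 (a tiny subset of the number-game hypotheses)
domain = range(1, 101)
hypotheses = {
    "even":        {n for n in domain if n % 2 == 0},
    "powers_of_2": {n for n in domain if (n & (n - 1)) == 0},
    "mult_of_4":   {n for n in domain if n % 4 == 0},
    "all":         set(domain),
}

def p_generalize(observed, query):
    """P(query in concept | observed) under the size principle:
    likelihood of h is (1/|h|)^n for hypotheses consistent with the data."""
    num = den = 0.0
    for h in hypotheses.values():
        if all(x in h for x in observed):
            lik = (1.0 / len(h)) ** len(observed)
            den += lik
            if query in h:
                num += lik
    return num / den

def exp_similarity(observed, query, scale=4.0):
    """Mean pairwise exponential similarity in raw numeric distance."""
    return sum(math.exp(-abs(x - query) / scale) for x in observed) / len(observed)

obs = [2, 4, 8, 16]
# Bayesian generalization ranks 32 far above 17;
# pairwise exponential similarity ranks them the other way around.
```

The two accounts thus make opposite predictions for which lure (17 vs. 32) should attract errors, which is the kind of dissociation the memory experiment exploits.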


2020 · Vol 32 (12) · pp. 2342-2355
Author(s): Meng-Huan Wu, David Kleinschmidt, Lauren Emberson, Donias Doko, Shimon Edelman, ...

The human brain is able to learn difficult categorization tasks, even ones that have linearly inseparable boundaries; however, it is currently unknown how it achieves this computational feat. We investigated this by training participants on an animal categorization task with a linearly inseparable prototype structure in a morph shape space. Participants underwent fMRI scans before and after 4 days of behavioral training. Widespread representational changes were found throughout the brain, including an untangling of the categories' neural patterns that made them more linearly separable after behavioral training. These neural changes were task dependent, as they were only observed while participants were performing the categorization task, not during passive viewing. Moreover, they were found to occur in frontal and parietal areas, rather than ventral temporal cortices, suggesting that they reflected attentional and decisional reweighting, rather than changes in object recognition templates. These results illustrate how the brain can flexibly transform neural representational space to solve computationally challenging tasks.
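The computational point about linear inseparability and "untangling" can be illustrated with a toy XOR-style category structure and a perceptron. This is purely a sketch of the concept, not the study's morph-space stimuli: in the raw 2-D feature space a linear classifier cannot separate the categories, but adding one conjunctive feature makes them linearly separable.

```python
def perceptron(data, epochs=200):
    """Train a perceptron; return (weights, converged).

    converged is True iff some epoch classifies every point correctly,
    which can only happen when the data are linearly separable."""
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != y:
                errors += 1
                for i in range(dim):
                    w[i] += (y - pred) * x[i]
        if errors == 0:
            return w, True
    return w, False

# XOR-like category structure: linearly inseparable in the raw space
# (features: x1, x2, and a constant bias term)
raw = [((x1, x2, 1.0), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

# Adding a conjunctive feature (x1 * x2) "untangles" the two categories,
# analogous to a representational change that makes them linearly separable.
expanded = [((x1, x2, x1 * x2, 1.0), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
```

The perceptron convergence theorem guarantees the expanded representation is learned in a bounded number of mistakes, while no weight vector can ever classify the raw XOR structure.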


2020
Author(s): Arthur Prat-Carrabin, Michael Woodford

Human subjects differentially weight different stimuli in averaging tasks. This has been interpreted as reflecting biased stimulus encoding, but an alternative hypothesis is that stimuli are encoded with noise, then optimally decoded. Moreover, with efficient coding, the amount of noise should vary across stimulus space, and depend on the statistics of stimuli. We investigate these predictions through a task in which participants are asked to compare the averages of two series of numbers, each sampled from a prior distribution that differs across blocks of trials. We show that subjects encode numbers with both a bias and a noise that depend on the number. Infrequently occurring numbers are encoded with more noise. A maximum-likelihood decoding model captures subjects’ behaviour and indicates efficient coding. Finally, our model predicts a relation between the bias and variability of estimates, thus providing a statistically-founded, parsimonious derivation of Wei and Stocker’s “law of human perception”.
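A schematic of the encode-with-frequency-dependent-noise / maximum-likelihood-decode idea can be sketched as follows. The Gaussian prior shape and the particular mapping from prior density to encoding noise are hypothetical choices for illustration, not the authors' fitted model:

```python
import math
import random

random.seed(1)

# Prior over two-digit numbers: frequent in the middle, rare at the edges
numbers = list(range(10, 100))
_weights = [math.exp(-((n - 55) / 15) ** 2) for n in numbers]
_z = sum(_weights)
prior = [w / _z for w in _weights]

def noise_sd(n):
    """Efficient-coding assumption (illustrative): rarer numbers are
    encoded with more noise, so sd falls with prior density."""
    p = prior[numbers.index(n)]
    return 0.05 / (p + 1e-4)

def encode(n):
    """Noisy internal measurement of the presented number."""
    return random.gauss(n, noise_sd(n))

def ml_decode(m):
    """Maximum-likelihood decoding of a measurement over the discrete grid,
    accounting for the stimulus-dependent noise."""
    def loglik(n):
        sd = noise_sd(n)
        return -math.log(sd) - ((m - n) ** 2) / (2 * sd ** 2)
    return max(numbers, key=loglik)
```

Because the noise level enters the likelihood, the decoder is pulled toward frequently occurring (precisely encoded) numbers, which is how stimulus-dependent noise plus optimal decoding can mimic a biased percept.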


Perception · 2019 · Vol 49 (1) · pp. 3-20
Author(s): Kei Kanari, Hirohiko Kaneko

We examined whether lightness is determined based on the experience of the relationship between a scene’s illumination and its spatial structure in actual environments. For this purpose, we measured some characteristics of scene structure and the illuminance in actual scenes and found some correlations between them. In the psychophysical experiments, a random-dot stereogram consisting of dots with uniform distribution was used to eliminate the effects of local luminance and texture contrasts. Participants matched the lightness of a presented target patch in the stimulus space to that of a comparison patch by adjusting the latter’s luminance. Results showed that the matched luminance tended to increase when the target patch was interpreted as receiving weak illumination in some conditions. These results suggest that the visual system can probably infer a scene’s illumination from a spatial structure without luminance distribution information under an illumination–spatial structure relation.


Science · 2019 · Vol 364 (6447) · pp. 1275-1279
Author(s): Anupam K. Garg, Peichao Li, Mohammad S. Rashid, Edward M. Callaway

Previous studies support the textbook model that shape and color are extracted by distinct neurons in primate primary visual cortex (V1). However, rigorous testing of this model requires sampling a larger stimulus space than previously possible. We used stable GCaMP6f expression and two-photon calcium imaging to probe a very large spatial and chromatic visual stimulus space and map functional microarchitecture of thousands of neurons with single-cell resolution. Notable proportions of V1 neurons strongly preferred equiluminant color over achromatic stimuli and were also orientation selective, indicating that orientation and color in V1 are mutually processed by overlapping circuits. Single neurons could precisely and unambiguously code for both color and orientation. Further analyses revealed systematic spatial relationships between color tuning, orientation selectivity, and cytochrome oxidase histology.

