Attentional Modulation of Visual Spatial Integration: Psychophysical Evidence Supported by Population Coding Modeling

2019 ◽  
Vol 31 (9) ◽  
pp. 1329-1342
Author(s):  
Alessandro Grillini ◽  
Remco J. Renken ◽  
Frans W. Cornelissen

Two prominent strategies that the human visual system uses to reduce incoming information are spatial integration and selective attention. Whereas spatial integration summarizes and combines information over the visual field, selective attention can single it out for scrutiny. The way in which these well-known mechanisms—with rather opposing effects—interact remains largely unknown. To address this, we had observers perform a gaze-contingent search task that nudged them to deploy either spatial or feature-based attention to maximize performance. We found that, depending on the type of attention employed, visual spatial integration strength changed either in a strong and localized or a more modest and global manner compared with a baseline condition. Population code modeling revealed that a single mechanism can account for both observations: Attention acts beyond the neuronal encoding stage to tune the spatial integration weights of neural populations. Our study shows how attention and integration interact to optimize the information flow through the brain.

2017 ◽  
Author(s):  
Joel Zylberberg

Abstract
To study sensory representations, neuroscientists record neural activities while presenting different stimuli to the animal. From these data, we identify neurons whose activities depend systematically on each aspect of the stimulus. These neurons are said to be “tuned” to that stimulus feature. It is typically assumed that these tuned neurons represent the stimulus feature in their firing, whereas any “untuned” neurons do not contribute to its representation. Recent experimental work questioned this assumption, showing that in some circumstances, neurons that are untuned to a particular stimulus feature can contribute to its representation. These findings suggest that, by ignoring untuned neurons, our understanding of population coding might be incomplete. At the same time, several key questions remain unanswered: Are the impacts of untuned neurons on population coding due to weak tuning that is nevertheless below the threshold the experimenters set for calling neurons tuned (vs. untuned)? Do these effects hold for different population sizes and/or correlation structures? And could neural circuit function ever benefit from having some untuned neurons vs. having all neurons be tuned to the stimulus? Using theoretical calculations and analyses of in vivo neural data, I answer those questions by: a) showing how, in the presence of correlated variability, untuned neurons can enhance sensory information coding, for a variety of population sizes and correlation structures; b) demonstrating that this effect does not rely on weak tuning; and c) identifying conditions under which the neural code can be made more informative by replacing some of the tuned neurons with untuned ones. These conditions specify when there is a functional benefit to having untuned neurons.

Author Summary
In the visual system, most neurons’ firing rates are tuned to various aspects of the stimulus (motion, contrast, etc.). For each stimulus feature, however, some neurons appear to be untuned: their firing rates do not depend on that stimulus feature. Previous work on information coding in neural populations ignored untuned neurons, assuming that only the neurons tuned to a given stimulus feature contribute to its encoding. Recent experimental work questioned this assumption, showing that neurons with no apparent tuning can sometimes contribute to information coding. However, key questions remain unanswered. First, how do the untuned neurons contribute to information coding, and could this effect rely on those neurons having weak tuning that was overlooked? Second, does the function of a neural circuit ever benefit from having some neurons untuned? Or should every neuron be tuned (even weakly) to every stimulus feature? Here, I use mathematical calculations and analyses of data from the mouse visual cortex to answer those questions. First, I show how (and why) correlations between neurons enable the untuned neurons to contribute to information coding. Second, I show that neural populations can often do a better job of encoding a given stimulus feature when some of the neurons are untuned for that stimulus feature. Thus, it may be best for the brain to segregate its tuning, leaving some neurons untuned for each stimulus feature. Along with helping to explain how the brain processes external stimuli, this work has strong implications for attempts to decode brain signals to control brain-machine interfaces: better performance could be obtained if the activities of all neurons are decoded, as opposed to only those with strong tuning.
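The core effect described above can be illustrated with a minimal linear-Fisher-information sketch (a toy example constructed here, not the paper's analysis): when trial-to-trial noise is shared between neurons, a neuron whose mean rate carries no stimulus signal can still raise the population's information, because a decoder can use its activity to cancel the noise shared with a tuned neuron.

```python
import numpy as np

# Toy population: one tuned neuron (tuning slope 1) and one untuned
# neuron (tuning slope 0), with strongly correlated trial-to-trial noise.
fprime = np.array([1.0, 0.0])       # tuning-curve derivatives f'
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])      # noise covariance (correlation 0.9)

# Linear Fisher information: I = f'^T Sigma^{-1} f'
info_pair = fprime @ np.linalg.solve(Sigma, fprime)
info_tuned_alone = fprime[0] ** 2 / Sigma[0, 0]

print(round(info_tuned_alone, 3))   # 1.0
print(round(info_pair, 3))          # 5.263
```

With independent noise the untuned neuron would add nothing (its slope is zero), but here the pair carries over five times the information of the tuned neuron alone, because subtracting the untuned neuron's activity removes most of the shared noise.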


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Katrina R. Quinn ◽  
Lenka Seillier ◽  
Daniel A. Butts ◽  
Hendrikje Nienborg

Abstract
Feedback in the brain is thought to convey contextual information that underlies our flexibility to perform different tasks. Empirical and computational work on the visual system suggests this is achieved by targeting task-relevant neuronal subpopulations. We combine two tasks, each resulting in selective modulation by feedback, to test whether the feedback reflects the combination of both selectivities. We used a visual feature-discrimination task specified at one of two possible locations and uncoupled decision formation from the motor plans used to report it, while recording in macaque mid-level visual areas. Here we show that although the behavior is spatially selective, using only task-relevant information, modulation by decision-related feedback is spatially unselective. Population responses reveal similar stimulus-choice alignments irrespective of stimulus relevance. The results suggest a common mechanism across tasks, independent of the spatial selectivity these tasks demand. This may reflect biological constraints and facilitate generalization across tasks. Our findings also support a previously hypothesized link between feature-based attention and decision-related activity.


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.


2012 ◽  
Vol 2 (6) ◽  
pp. 241-254 ◽  
Author(s):  
Zerrin Atakan

Cannabis is a complex plant, with major compounds such as delta-9-tetrahydrocannabinol and cannabidiol, which have opposing effects. The discovery of its compounds has led to the further discovery of an important neurotransmitter system called the endocannabinoid system. This system is widely distributed in the brain and in the body, and is considered to be responsible for numerous significant functions. There has been a recent and consistent worldwide increase in cannabis potency, with increasing associated health concerns. A number of epidemiological research projects have shown links between dose-related cannabis use and an increased risk of developing an enduring psychotic illness. However, it is also known that not everyone who uses cannabis is affected adversely in the same way. What makes someone more susceptible to its negative effects is not yet known; however, there are some emerging vulnerability factors, ranging from certain genes to personality characteristics. In this article we first provide an overview of the biochemical basis of cannabis research by examining the different effects of the two main compounds of the plant and the endocannabinoid system, and then review the available information on possible factors explaining the variation of its effects across individuals.


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Rava Azeredo da Silveira ◽  
Fred Rieke

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlations, and we present a new approach to the issue. Throughout this review, we adopt a geometrical picture of how noise correlations impact the neural code.
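One standard illustration of why the fine structure of noise correlations matters (a generic textbook-style example, not taken from this review): for two neurons with identical tuning slopes, the same correlation magnitude can either hurt or help the linear Fisher information I = f'ᵀ Σ⁻¹ f', depending on its sign relative to the signal direction.

```python
import numpy as np

def linear_fisher(fprime, Sigma):
    """Linear Fisher information I = f'^T Sigma^{-1} f'."""
    return fprime @ np.linalg.solve(Sigma, fprime)

fprime = np.array([1.0, 1.0])  # identical tuning slopes (signal along the diagonal)

def cov(c):
    """Unit-variance noise covariance with correlation c."""
    return np.array([[1.0, c], [c, 1.0]])

info_indep = linear_fisher(fprime, cov(0.0))   # 2.0   (independent noise)
info_pos = linear_fisher(fprime, cov(0.5))     # 1.333 (noise along the signal: hurts)
info_neg = linear_fisher(fprime, cov(-0.5))    # 4.0   (noise orthogonal to it: helps)
print(info_pos, info_indep, info_neg)
```

Geometrically, positive correlation stretches the noise cloud along the signal direction, so stimulus changes are harder to distinguish from noise; negative correlation squeezes the noise out of the signal direction, so the same correlation strength improves the code.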


2020 ◽  
Author(s):  
Soma Nonaka ◽  
Kei Majima ◽  
Shuntaro C. Aoki ◽  
Yukiyasu Kamitani

Summary
Achievement of human-level image recognition by deep neural networks (DNNs) has spurred interest in whether and how DNNs are brain-like. Both DNNs and the visual cortex perform hierarchical processing, and correspondence has been shown between hierarchical visual areas and DNN layers in representing visual features. Here, we propose the brain hierarchy (BH) score as a metric to quantify the degree of hierarchical correspondence, based on the decoding of individual DNN unit activations from human brain activity. We find that BH scores for 29 pretrained DNNs with varying architectures are negatively correlated with image recognition performance, indicating that recently developed high-performance DNNs are not necessarily brain-like. Experimental manipulations of DNN models suggest that a relatively simple feedforward architecture with broad spatial integration is critical to brain-like hierarchy. Our method provides new ways of designing DNNs and understanding the brain in light of their representational homology.


Psihologija ◽  
2010 ◽  
Vol 43 (2) ◽  
pp. 155-165 ◽  
Author(s):  
Vanja Kovic ◽  
Kim Plunkett ◽  
Gert Westermann

In this paper we present an ERP study examining the underlying nature of the semantic representation of animate and inanimate objects. Time-locking ERP signatures to the onset of auditory stimuli, we found topological similarities in animate and inanimate object processing. Moreover, we found no difference between animates and inanimates in the N400 amplitude when mapping a more specific to a more general representation (visual to auditory stimuli). These studies provide further evidence for the theory of unitary semantic organization, but no support for the feature-based prediction of segregated conceptual organization. Further comparisons of animate vs. inanimate matches and within- vs. between-category mismatches revealed the following results: processing of animate matches elicited more positivity than processing of inanimate matches within the N400 time window; also, inanimate mismatches elicited a stronger N400 than did animate mismatches. Based on these findings, we argue that one possible explanation for the different and sometimes contradictory results in the literature regarding the processing and representation of animates and inanimates in the brain could lie in the variability of the items selected within each category, that is, in the homogeneity of the categories.


2006 ◽  
Vol 16 (10) ◽  
pp. 1045-1050
Author(s):  
Song Weiqun ◽  
Lou Yuejia ◽  
Chi Song ◽  
Ji Xunming ◽  
Ling Feng ◽  
...  

1989 ◽  
Vol 1 (1) ◽  
pp. 92-103 ◽  
Author(s):  
H. Taichi Wang ◽  
Bimal Mathur ◽  
Christof Koch

Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two steps. In the first stage, assumed to be located in primary visual cortex, local motion is measured while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
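The vector-sum readout described for the second (MT) stage can be sketched as a standard population-vector decoder (a generic illustration under an assumed cosine tuning curve, not the authors' implementation): each unit votes for its preferred direction with a weight given by its firing rate, and the angle of the summed vector recovers the stimulus direction.

```python
import numpy as np

# 16 direction-tuned units with evenly spaced preferred directions.
n = 16
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)

true_direction = np.deg2rad(70.0)
# Assumed cosine tuning with a baseline to keep rates non-negative.
rates = 1.0 + np.cos(preferred - true_direction)

# Population vector: sum of preferred-direction unit vectors, weighted by rate.
pop_vec = np.array([np.sum(rates * np.cos(preferred)),
                    np.sum(rates * np.sin(preferred))])
decoded = np.arctan2(pop_vec[1], pop_vec[0])

print(round(np.rad2deg(decoded), 1))  # 70.0
```

With evenly spaced preferred directions and cosine tuning, the baseline terms cancel and the population vector points exactly along the stimulus direction; the same scheme extends to velocity by letting the vector's length encode speed.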

