A new approach to solving the feature-binding problem in primate vision

2018 ◽  
Vol 8 (4) ◽  
pp. 20180021 ◽  
Author(s):  
James B. Isbister ◽  
Akihiro Eguchi ◽  
Nasir Ahmad ◽  
Juan M. Galeazzi ◽  
Mark J. Buckley ◽  
...  

We discuss a recently proposed approach to solve the classic feature-binding problem in primate vision that uses neural dynamics known to be present within the visual cortex. Broadly, the feature-binding problem in the visual context concerns not only how a hierarchy of features such as edges and objects within a scene are represented, but also the hierarchical relationships between these features at every spatial scale across the visual field. This is necessary for the visual brain to be able to make sense of its visuospatial world. Solving this problem is an important step towards the development of artificial general intelligence. In neural network simulation studies, it has been found that neurons encoding the binding relations between visual features, known as binding neurons, emerge during visual training when key properties of the visual cortex are incorporated into the models. These biological network properties include (i) bottom-up, lateral and top-down synaptic connections, (ii) spiking neuronal dynamics, (iii) spike timing-dependent plasticity, and (iv) a random distribution of axonal transmission delays (of the order of several milliseconds) in the propagation of spikes between neurons. After training the network on a set of visual stimuli, modelling studies have reported observing the gradual emergence of polychronization through successive layers of the network, in which subpopulations of neurons have learned to emit their spikes in regularly repeating spatio-temporal patterns in response to specific visual stimuli. Such a subpopulation of neurons is known as a polychronous neuronal group (PNG). Some neurons embedded within these PNGs receive convergent inputs from neurons representing lower- and higher-level visual features, and thus appear to encode the hierarchical binding relationship between features. Neural activity with this kind of spatio-temporal structure robustly emerges in the higher network layers even when neurons in the input layer represent visual stimuli with spike timings that are randomized according to a Poisson distribution. The resulting hierarchical representation of visual scenes in such models, including the representation of hierarchical binding relations between lower- and higher-level visual features, is consistent with the hierarchical phenomenology or subjective experience of primate vision and is distinct from approaches interested in segmenting a visual scene into a finite set of objects.
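
For readers who want to see how these ingredients fit together, the sketch below is a minimal, self-contained simulation in the spirit of the models described above, not the authors' implementation: layer sizes, time constants, weight ranges and learning rates are arbitrary assumptions. It shows Poisson input spikes propagating through random per-synapse axonal delays into a layer of leaky integrate-and-fire neurons whose feedforward weights are shaped by pair-based spike-timing-dependent plasticity; repeated over many training trials, this is the kind of setup in which regularly repeating spatio-temporal firing patterns (polychronous groups) have been reported to emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
N_PRE, N_POST = 100, 50                               # neurons in two successive layers
T = 500                                               # trial duration (ms), 1 ms steps
TAU_M = 20.0                                          # membrane time constant (ms)
V_THRESH = 1.0                                        # firing threshold (arbitrary units)
DELAYS = rng.integers(1, 10, size=(N_POST, N_PRE))    # random axonal delays, 1-9 ms
W = rng.uniform(0.0, 0.08, size=(N_POST, N_PRE))      # feedforward weights
A_PLUS, A_MINUS, TAU_STDP = 0.005, 0.0055, 20.0       # STDP parameters

def run_trial(pre_rate_hz=20.0):
    """Poisson input spikes propagate with per-synapse axonal delays into a layer of
    leaky integrate-and-fire neurons; weights are updated with pair-based STDP."""
    pre_spikes = rng.random((T, N_PRE)) < pre_rate_hz / 1000.0
    v = np.zeros(N_POST)
    last_pre = np.full(N_PRE, -np.inf)                # most recent presynaptic spike time
    last_post = np.full(N_POST, -np.inf)              # most recent postsynaptic spike time
    post_spike_times = [[] for _ in range(N_POST)]

    for t in range(T):
        # spikes emitted at time t - delay arrive now, one delay per synapse
        sent = t - DELAYS
        arrived = pre_spikes[np.clip(sent, 0, T - 1), np.arange(N_PRE)] & (sent >= 0)
        v += -v / TAU_M + (W * arrived).sum(axis=1)
        fired = v >= V_THRESH
        v[fired] = 0.0

        # pair-based STDP: potentiate pre-before-post, depress post-before-pre
        last_pre[pre_spikes[t]] = t
        for i in np.flatnonzero(fired):
            last_post[i] = t
            post_spike_times[i].append(t)
            W[i] += A_PLUS * np.exp(-(t - last_pre) / TAU_STDP)
        W[:, pre_spikes[t]] -= A_MINUS * np.exp(-(t - last_post) / TAU_STDP)[:, None]
        np.clip(W, 0.0, 0.15, out=W)
    return post_spike_times

spike_times = run_trial()
print(sum(len(s) for s in spike_times), "output spikes in one trial")
```

Assessing polychronization itself would require running many such trials on structured stimuli and searching the recorded spike times for recurring spatio-temporal motifs; the sketch only sets up the mechanics of delays, spiking and plasticity.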

2011 ◽  
Vol 12 (S1) ◽  
Author(s):  
Alberto Mazzoni ◽  
Christoph Kayser ◽  
Yusuke Murayama ◽  
Juan Martinez ◽  
Rodrigo Quian Quiroga ◽  
...  

Author(s):  
R. Oz ◽  
H. Edelman-Klapper ◽  
S. Nivinsky-Margalit ◽  
H. Slovin

Abstract: Intracortical microstimulation (ICMS) in the primary visual cortex (V1) can generate the visual perception of phosphenes and evoke saccades directed to the stimulated location in the retinotopic map. Although ICMS is widely used, little is known about the evoked spatio-temporal patterns of neural activity and their relation to neural responses evoked by visual stimuli or to saccade generation. To investigate this, we combined ICMS with voltage-sensitive dye imaging in V1 of behaving monkeys and measured neural activity at high spatial (meso-scale) and temporal resolution. Small visual stimuli and ICMS both evoked population activity spreading over a few millimetres that propagated to extrastriate areas, but the population responses evoked by ICMS showed faster dynamics and different spatial propagation patterns. Neural activity was higher in trials with saccades than in trials without saccades. In conclusion, our results uncover the spatio-temporal patterns evoked by ICMS and their relation to visual processing and saccade generation.
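
As a rough illustration of the kind of meso-scale quantity at stake, the following sketch (a generic analysis toy, not the authors' VSDI pipeline; the threshold fraction and pixel scale are assumptions) measures how far an imaged activation blob spreads over successive frames.

```python
import numpy as np

# Generic sketch of how the spatial spread of an imaged population response can be
# quantified frame by frame (illustrative only; threshold and pixel scale are assumed).
def spread_profile(frames, threshold_frac=0.5, mm_per_pixel=0.05):
    """frames: array (n_frames, ny, nx) of dF/F-like activation maps.
    Returns the activated area (mm^2) per frame, i.e. pixels above a fixed
    fraction of the overall peak response."""
    peak = frames.max()
    return (frames > threshold_frac * peak).sum(axis=(1, 2)) * mm_per_pixel ** 2

# toy data: a Gaussian activation blob whose width grows over time, mimicking the
# lateral spread of a population response across the cortical surface
ny = nx = 100
yy, xx = np.mgrid[0:ny, 0:nx]
frames = np.stack([np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2.0 * (5 + 2 * t) ** 2))
                   for t in range(10)])
print(spread_profile(frames).round(2))   # activated area expands from frame to frame
```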


2016 ◽  
Author(s):  
Dylan R Muir ◽  
Patricia Molina-Luna ◽  
Morgane M Roth ◽  
Fritjof Helmchen ◽  
Björn M Kampa

Abstract: Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a 'like-to-like' scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a 'feature binding' scheme). By comparing predictions from large-scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming 'feature binding' connectivity. In contrast to the 'like-to-like' scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with the broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.

Author summary: The brain is a highly complex structure, with abundant connectivity between nearby neurons in the neocortex, the outermost and evolutionarily most recent part of the brain. Although the network architecture of the neocortex can appear disordered, connections between neurons seem to follow certain rules. These rules most likely determine how information flows through the neural circuits of the brain, but the relationship between particular connectivity rules and the function of the cortical network is not known. We built models of visual cortex in the mouse, assuming distinct rules for connectivity, and examined how the various rules changed the way the models responded to visual stimuli. We also recorded responses to visual stimuli of populations of neurons in anaesthetised mice, and compared these responses with our model predictions. We found that connections in neocortex probably follow a connectivity rule that groups together neurons that differ in simple visual properties, to build more complex representations of visual stimuli. This finding is surprising because primary visual cortex is assumed to support mainly simple visual representations. We show that including specific rules for non-random connectivity in cortical models, and precisely measuring those rules in cortical tissue, is essential to understanding how information is processed by the brain.
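
To make the contrast between the two wiring rules concrete, the toy model below compares them in a small threshold-linear rate network. It is a simplification under stated assumptions (network size, tuning width, recurrent gain and the orthogonal-bin pairing are all mine), not the paper's large-scale model, but it reproduces the qualitative point: a 'feature binding' rule that pools differently tuned neurons into shared excitatory subnetworks amplifies a matched plaid well beyond the sum of its component-grating responses, whereas a 'like-to-like' rule amplifies each component largely independently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy threshold-linear rate model contrasting the two wiring rules described above.
# All sizes, gains and tuning widths are illustrative assumptions.
N, N_BINS = 240, 12
pref = rng.uniform(0.0, np.pi, N)                             # preferred orientations
bins = np.minimum((pref / np.pi * N_BINS).astype(int), N_BINS - 1)

def grating(theta, amp=2.0, kappa=0.2):
    """Orientation-tuned feedforward drive for a single grating at angle theta."""
    return amp * np.exp((np.cos(2.0 * (pref - theta)) - 1.0) / kappa)

def scale(W, gain=0.8):
    """Normalise recurrent weights so the fixed-point iteration below converges."""
    np.fill_diagonal(W, 0.0)
    return gain * W / np.abs(np.linalg.eigvals(W)).max()

# 'like-to-like': connect neurons within the same orientation bin
W_like = scale((bins[:, None] == bins[None, :]).astype(float))

# 'feature binding': merge each bin with the orthogonal bin into one excitatory
# subnetwork, so differently tuned neurons share recurrent amplification
subnet = bins % (N_BINS // 2)
W_bind = scale((subnet[:, None] == subnet[None, :]).astype(float))

def response(W, h, thresh=1.0, n_iter=300):
    """Threshold-linear recurrent steady state r = [h + W r - thresh]_+ ."""
    r = np.zeros(N)
    for _ in range(n_iter):
        r = np.maximum(h + W @ r - thresh, 0.0)
    return r

theta = 0.5 * np.pi / N_BINS                                  # centre of orientation bin 0
h1, h2 = grating(theta), grating(theta + np.pi / 2.0)         # orthogonal plaid components
for name, W in [("like-to-like", W_like), ("feature binding", W_bind)]:
    facil = response(W, h1 + h2).sum() / (response(W, h1) + response(W, h2)).sum()
    print(f"{name}: plaid facilitation index = {facil:.2f}")  # > 1 indicates facilitation
```

The facilitation arises here only from recurrent pooling across the two orientation channels combined with the response threshold; under the like-to-like wiring the plaid response stays close to the sum of the component responses.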


2018 ◽  
Author(s):  
Mina A. Khoei ◽  
Francesco Galluppi ◽  
Quentin Sabatier ◽  
Pierre Pouget ◽  
Benoit R Cottereau ◽  
...  

Although neural responses with millisecond precision have been reported in the retina, lateral geniculate nucleus and visual cortex of multiple species, the presence and role of such fine temporal structure are still debated at the cortical level, and the general belief remains that the early visual system encodes information at slower timescales. In this study, we used a new stimulation platform to generate visual stimuli that were very precisely encoded in time, and we characterized in human subjects the EEG responses to moving patterns that shared the same global motion but differed in their fine-scale spatio-temporal properties. In two experiments, we manipulated the information within temporal windows corresponding to the frame duration of conventional (1/60 s ≈ 16.7 ms, experiment 1) and more recent (1/120 s ≈ 8.3 ms, experiment 2) visual displays. Our results demonstrate that EEG responses to temporally dense and coherent trajectories are significantly stronger than those to control conditions without these properties. They extend previous results from our group showing that accurate temporal information (<10 ms) significantly improves perceptual judgments of spatial discrimination, digit recognition and speed sensitivity [Kime et al., 2016]. Altogether, our results suggest that instead of low-pass filtering the temporal information it receives from its thalamic afferents, the visual cortex may actually exploit its richness to improve visual perception.
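
As a back-of-the-envelope illustration of what these frame durations mean for a moving stimulus, the sketch below (my own toy calculation with an arbitrary 10 deg/s speed, unrelated to the study's stimulation platform) compares the average spatial error introduced when a continuous trajectory is updated at 60 Hz, 120 Hz, or with roughly millisecond precision.

```python
import numpy as np

# Back-of-the-envelope illustration (toy numbers, not the study's stimuli):
# a dot moving at 10 deg/s, displayed with position updates locked to the display frame.
SPEED = 10.0                                   # deg/s
t = np.arange(0.0, 1.0, 0.001)                 # 1 ms time base over one second
true_pos = SPEED * t                           # ideal continuous trajectory (deg)

def frame_locked(frame_rate_hz):
    """Trajectory actually shown when the position only updates once per frame."""
    return SPEED * (np.floor(t * frame_rate_hz) / frame_rate_hz)

for rate in (60, 120, 1000):
    err_arcmin = 60.0 * np.abs(true_pos - frame_locked(rate)).mean()
    print(f"{rate:4d} Hz updates: mean position error = {err_arcmin:.2f} arcmin")
```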


2008 ◽  
Vol 100 (3) ◽  
pp. 1523-1532 ◽  
Author(s):  
Pedro Maldonado ◽  
Cecilia Babul ◽  
Wolf Singer ◽  
Eugenio Rodriguez ◽  
Denise Berger ◽  
...  

When inspecting visual scenes, primates perform on average four saccadic eye movements per second, which implies that scene segmentation, feature binding, and identification of image components are accomplished in <200 ms. Thus, individual neurons can contribute only a small number of discharges to these complex computations, suggesting that information is encoded not only in the discharge rate but also in the timing of action potentials. While monkeys inspected natural scenes, we registered with multielectrodes in primary visual cortex the discharges of simultaneously recorded neurons. Relating these signals to eye movements revealed that discharge rates peaked around 90 ms after fixation onset and then decreased to near-baseline levels within 200 ms. Unitary event analysis revealed that, preceding this increase in firing, there was an episode of enhanced response synchronization during which discharges of spatially distributed cells coincided within 5-ms windows significantly more often than predicted by the discharge rates. This episode started 30 ms after fixation onset and ended by the time discharge rates had reached their maximum. When the animals scanned a blank screen, a small change in firing rate, but no excess synchronization, was observed. The short latency of the stimulation-related synchronization phenomena suggests a fast-acting mechanism for the coordination of spike timing that may contribute to the basic operations of scene segmentation.
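
The core logic of testing for excess synchrony beyond what firing rates predict can be illustrated with a toy coincidence count; the sketch below is my own simplified stand-in, not the unitary event method used in the study, and its rates, duration and injected shared spikes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified illustration of rate-corrected synchrony detection (toy version only):
# count near-coincident spikes between two neurons and compare with what independent
# trains of the same mean rates would produce.
T_MS, WIN_MS, RATE_HZ = 1000, 5, 30.0

def poisson_train(rate_hz, t_ms=T_MS):
    """Binary spike train with 1 ms bins."""
    return rng.random(t_ms) < rate_hz / 1000.0

def coincidences(a, b, win=WIN_MS):
    """Spikes in 'a' that have at least one spike in 'b' within +/- win ms."""
    ta, tb = np.flatnonzero(a), np.flatnonzero(b)
    if ta.size == 0 or tb.size == 0:
        return 0
    return int(np.sum(np.abs(ta[:, None] - tb[None, :]).min(axis=1) <= win))

# two trains sharing some common spikes, mimicking stimulus-locked excess synchrony
shared = poisson_train(10.0)
a, b = poisson_train(RATE_HZ) | shared, poisson_train(RATE_HZ) | shared
observed = coincidences(a, b)

# null distribution: independent Poisson trains with the same mean rates
null = np.array([coincidences(poisson_train(a.mean() * 1000.0),
                              poisson_train(b.mean() * 1000.0))
                 for _ in range(1000)])
print(f"observed {observed}, rate-predicted {null.mean():.1f}, "
      f"p = {(null >= observed).mean():.3f}")
```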


1991 ◽  
Vol 3 (2) ◽  
pp. 167-178 ◽  
Author(s):  
Thomas B. Schillen ◽  
Peter König

Recent theoretical and experimental work suggests a temporal structure of neuronal spike activity as a potential mechanism for solving the binding problem in the brain. In particular, recordings from cat visual cortex demonstrate the possibility that stimulus coherency is coded by synchronization of oscillatory neuronal responses. Coding by synchronized oscillatory activity has to avoid bulk synchronization within entire cortical areas. Recent experimental evidence indicates that incoherent stimuli can activate coherently oscillating assemblies of cells that are not synchronized among one another. In this paper we show that appropriately designed excitatory delay connections can support the desynchronization of two-dimensional layers of delayed nonlinear oscillators. Closely following experimental observations, we then present two examples of stimulus-dependent assembly formation in oscillatory layers that employ both synchronizing and desynchronizing delay connections: First, we demonstrate the segregation of oscillatory responses to two overlapping but incoherently moving stimuli. Second, we show that the coherence of movement and location of two stimulus bar segments can be coded by the correlation of oscillatory activity.
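
The role of transmission delays in switching between synchronised and desynchronised states can be illustrated with a pair of delay-coupled phase oscillators; the sketch below is an abstraction of the mechanism rather than the paper's oscillator model, and its frequency, coupling strength and delay values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: two excitatorily coupled phase oscillators with a transmission delay.
# A short delay pulls the pair into in-phase synchrony, while a delay near half the
# oscillation period favours anti-phase (i.e. desynchronised) locking.
F_HZ = 40.0                       # oscillation frequency (gamma range)
DT = 1e-4                         # integration step (s)
STEPS = 50_000                    # 5 s of simulated time
K = 10.0                          # coupling strength (rad/s)

def phase_difference(delay_s):
    d = int(round(delay_s / DT))
    phi = np.zeros((STEPS, 2))
    phi[0] = [0.0, 0.5]           # start with a small phase offset (rad)
    for t in range(1, STEPS):
        td = max(t - 1 - d, 0)    # index of the other oscillator's delayed phase
        for i in (0, 1):
            coupling = K * np.sin(phi[td, 1 - i] - phi[t - 1, i])
            phi[t, i] = phi[t - 1, i] + DT * (2.0 * np.pi * F_HZ + coupling)
    return abs(np.angle(np.exp(1j * (phi[-1, 0] - phi[-1, 1]))))

for delay_ms in (1.0, 12.5):      # 12.5 ms is half of the 25 ms gamma cycle
    dphi = phase_difference(delay_ms / 1000.0)
    print(f"delay {delay_ms:4.1f} ms -> final phase difference {dphi:.2f} rad")
```

A final phase difference near 0 rad indicates synchronisation, while a value near pi indicates the anti-phase regime that the delayed connections can exploit for desynchronisation.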


2020 ◽  
Author(s):  
Heonsoo Lee ◽  
Sean Tanabe ◽  
Shiyong Wang ◽  
Anthony G. Hudetz

Abstract

Introduction: Firing rate (FR) and population coupling (PC) are intrinsic properties of cortical neurons. Neurons with different FR and PC differ in their excitability to stimulation, tuning curves, and synaptic plasticity. Investigating how anesthesia affects neurons with different FR and PC is therefore important for understanding state-dependent information processing in neuronal circuits.

Methods: To test how anesthesia affects neurons with diverse PC and FR, we measured single-unit activities in deep layers of primary visual cortex at three levels of anesthesia with desflurane and in wakefulness. Based on PC and FR in wakefulness, neurons were classified into three distinct groups: high PC-high FR (HPHF), low PC-high FR (LPHF), and low PC-low FR (LPLF) neurons.

Results: Applying repeated light flashes as visual stimuli, HPHF neurons showed the strongest early response (FR at 20-150 ms post-stimulus) among the three groups, whereas the response of LPHF neurons persisted longest (up to 440 ms). Anesthesia profoundly altered PC and FR and affected the three neuron groups differently: (i) PC and FR became strongly correlated, suppressing population-independent spike activity; (ii) pairwise correlations of spikes between neurons could be predicted by a PC-based raster model, suggesting uniform neuron-to-neuron coupling; (iii) contrary to evoked-potential studies under anesthesia, the flash-induced early response of HPHF neurons was attenuated, and their spike timing was split and delayed; (iv) the late response (FR at 200-400 ms post-stimulus) was suppressed in both HPHF and LPHF neurons.

Conclusions: The anesthetic-induced association between PC and FR suggests reduced information content in the neural circuit. The altered response of HPHF neurons to visual stimuli suggests that anesthesia interferes with conscious sensory processing in primary sensory cortex.
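
For concreteness, the sketch below shows one straightforward way to compute firing rate and population coupling from binned spike counts and to form the three-way split described above; the synthetic data, bin size and median-split thresholds are my own assumptions rather than the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative computation of firing rate (FR) and population coupling (PC) from binned
# spike counts, followed by a three-way median split (all parameters are assumptions).
N_NEURONS, N_BINS, BIN_S = 60, 6000, 0.1            # e.g. 10 min of 100 ms bins
counts = rng.poisson(rng.uniform(0.1, 2.0, N_NEURONS)[:, None],
                     size=(N_NEURONS, N_BINS))      # toy spike counts

fr = counts.mean(axis=1) / BIN_S                    # firing rate in spikes/s

def population_coupling(counts):
    """Correlation of each neuron's count trace with the summed activity of the rest."""
    total = counts.sum(axis=0)
    return np.array([np.corrcoef(c, total - c)[0, 1] for c in counts])

pc = population_coupling(counts)
hi_fr, hi_pc = fr > np.median(fr), pc > np.median(pc)
groups = {"HPHF": hi_pc & hi_fr, "LPHF": ~hi_pc & hi_fr, "LPLF": ~hi_pc & ~hi_fr}
for name, mask in groups.items():
    print(f"{name}: {mask.sum()} neurons, mean FR {fr[mask].mean():.1f} Hz")
```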


Author(s):  
ALBERT L. ROTHENSTEIN ◽  
ANTONIO J. RODRÍGUEZ-SÁNCHEZ ◽  
EVGUENI SIMINE ◽  
JOHN K. TSOTSOS

We present a biologically plausible computational model for solving the visual feature binding problem, based on recent results regarding the time course and processing sequence in the primate visual system. The feature binding problem appears due to the distributed nature of visual processing in the primate brain, and the gradual loss of spatial information along the processing hierarchy. This paper puts forward the proposal that by using multiple passes of the visual processing hierarchy, both bottom-up and top-down, and using task information to tune the processing prior to each pass, we can explain the different recognition behaviors that primate vision exhibits. To accomplish this, four different kinds of binding processes are introduced and are tied directly to specific recognition tasks and their time course. The model relies on the reentrant connections so ubiquitous in the primate brain to recover spatial information, and thus allow features represented in different parts of the brain to be integrated in a unitary conscious percept. We show how different tasks and stimuli have different binding requirements, and present a unified framework within the Selective Tuning model of visual attention.
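
The multi-pass control flow described above can be summarised in a short schematic; the skeleton below is my own sketch with placeholder functions and data structures (nothing in it comes from the authors' Selective Tuning implementation), intended only to show where task-based tuning, the feedforward pass and the reentrant, location-recovering pass fit in.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Schematic control-flow sketch of a multi-pass recognition scheme (placeholder logic).

@dataclass
class Percept:
    label: str                                   # candidate identity from the feedforward pass
    evidence: float                              # task-dependent support for that identity
    location: Optional[Tuple[int, int]] = None   # recovered on the top-down pass

def tune_hierarchy(task):
    """Task information biases the hierarchy before each pass (placeholder)."""
    return {"bias": task.get("target")}

def bottom_up_pass(image, tuning):
    """Feedforward sweep; detailed spatial information is progressively lost (placeholder)."""
    return [Percept("candidate_A", 0.7), Percept("candidate_B", 0.4)]

def top_down_pass(image, percept):
    """Reentrant sweep tracing winning units back to their retinotopic origin,
    re-attaching spatial information to the selected features (placeholder)."""
    percept.location = (0, 0)
    return percept

def recognise(image, task, n_passes=2):
    """More passes are used for tasks with stronger binding and localisation demands."""
    tuning = tune_hierarchy(task)
    best = None
    for _ in range(n_passes):
        candidates = bottom_up_pass(image, tuning)
        best = top_down_pass(image, max(candidates, key=lambda p: p.evidence))
        tuning["bias"] = best.label              # re-tune the next pass around the winner
    return best

print(recognise(image=None, task={"target": "face"}))
```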

