A strategy for mapping biophysical to abstract neuronal network models applied to primary visual cortex

2021 ◽  
Vol 17 (8) ◽  
pp. e1009007
Author(s):  
Anton V. Chizhov ◽  
Lyle J. Graham

A fundamental challenge for the theoretical study of neuronal networks is to make the link between complex biophysical models based directly on experimental data and progressively simpler mathematical models that allow the derivation of general operating principles. We present a strategy that successively maps a relatively detailed biophysical population model, comprising conductance-based Hodgkin-Huxley type neuron models with connectivity rules derived from anatomical data, to various representations with fewer parameters, finishing with a firing rate network model that permits analysis. We apply this methodology to primary visual cortex of higher mammals, focusing on the functional property of stimulus orientation selectivity of receptive fields of individual neurons. The mapping produces compact expressions for the parameters of the abstract model that clearly identify the impact of specific electrophysiological and anatomical parameters on the analytical results, in particular as manifested by specific functional signatures of visual cortex, including input-output sharpening, conductance invariance, virtual rotation and the tilt aftereffect. Importantly, qualitative differences between model behaviours point out the consequences of various simplifications. The strategy may be applied to other neuronal systems with appropriate modifications.

2021 ◽  
Author(s):  
Anton V. Chizhov ◽  
Lyle J. Graham

Abstract
A fundamental challenge for the theoretical study of neuronal networks is to make the link between complex biophysical models based directly on experimental data and progressively simpler mathematical models that allow the derivation of general operating principles. We present a strategy that successively maps a relatively detailed biophysical population model, comprising conductance-based Hodgkin-Huxley type neuron models with connectivity rules derived from anatomical data, to various representations with fewer parameters, finishing with a firing rate network model that permits analysis. We apply this methodology to primary visual cortex of higher mammals, focusing on the functional property of stimulus orientation selectivity of receptive fields of individual neurons. The mapping produces compact expressions for the parameters of the abstract model that clearly identify the impact of specific electrophysiological and anatomical parameters on the analytical results, in particular as manifested by specific functional signatures of visual cortex, including input-output sharpening, conductance invariance, virtual rotation and the tilt aftereffect. Importantly, qualitative differences between model behaviours point out the consequences of various simplifications. The strategy may be applied to other neuronal systems with appropriate modifications.
Author summary
A hierarchy of theoretical approaches to study a neuronal network depends on a tradeoff between biological fidelity and mathematical tractability. Biophysically-detailed models consider cellular mechanisms and anatomically defined synaptic circuits, but are often too complex to reveal insights into fundamental principles. In contrast, increasingly abstract reduced models facilitate analytical insights. To better ground the latter to the underlying biology, we describe a systematic procedure to move across the model hierarchy that allows understanding how changes in biological parameters - physiological, pathophysiological, or because of new data - impact the behaviour of the network. We apply this approach to mammalian primary visual cortex, and examine how the different models in the hierarchy reproduce functional signatures of this area, in particular the tuning of neurons to the orientation of a visual stimulus. Our work provides a navigation of the complex parameter space of neural network models faithful to biology, as well as highlighting how simplifications made for mathematical convenience can fundamentally change their behaviour.
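The abstract endpoint of such a mapping can be illustrated with a minimal firing-rate "ring" network for orientation tuning, in which weakly tuned feedforward input is sharpened by cosine-shaped recurrent connectivity. This is a generic textbook-style sketch, not the authors' model: the parameters (j0, j2, eps) and the rectified-linear transfer function are illustrative assumptions.

```python
import numpy as np

def ring_model(theta_stim, n=64, j0=-2.0, j2=1.5, contrast=1.0, eps=0.2,
               dt_over_tau=0.1, steps=800):
    """Steady-state rates of a rectified-linear firing-rate ring network."""
    theta = np.linspace(0.0, np.pi, n, endpoint=False)   # preferred orientations
    # weakly orientation-tuned feedforward drive
    h = contrast * (1.0 + eps * np.cos(2.0 * (theta - theta_stim)))
    # recurrent kernel: uniform inhibition (j0) plus orientation-specific excitation (j2)
    J = (j0 + j2 * np.cos(2.0 * np.subtract.outer(theta, theta))) / n
    r = np.zeros(n)
    for _ in range(steps):                               # Euler relaxation to the fixed point
        r += dt_over_tau * (-r + np.maximum(h + J @ r, 0.0))
    return theta, r

theta, r = ring_model(theta_stim=np.pi / 3)
```

With weak input modulation (eps = 0.2), the tuned component is amplified by the recurrent excitation while the uniform inhibition suppresses the mean, so the output tuning peaks at the stimulus orientation and is much sharper than the input: the input-output sharpening signature discussed above.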


2019 ◽  
Vol 121 (6) ◽  
pp. 2202-2214 ◽  
Author(s):  
John P. McClure ◽  
Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. In the present study, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and the direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) or multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue’s orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or the direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3. Namely, visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than when presented alone.
NEW & NOTEWORTHY
The primary visual cortex (V1) receives direct inputs from the primary auditory cortex. Yet, the impact of sounds on visual processing in V1 remains controversial.
We show that the modulation by pure tones of V1 visual responses depends on the orientation selectivity, direction selectivity, and response amplitudes of V1 neurons. Hence, audiovisual stimuli recruit a population of V1 neurons better tuned to the orientation and direction of the visual stimulus than unimodal visual stimuli.


2000 ◽  
Vol 84 (4) ◽  
pp. 2048-2062 ◽  
Author(s):  
Mitesh K. Kapadia ◽  
Gerald Westheimer ◽  
Charles D. Gilbert

To examine the role of primary visual cortex in visuospatial integration, we studied the spatial arrangement of contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception. We found a spatial segregation of opposing contextual interactions. At the level of cortical neurons, excitatory interactions were located along the ends of receptive fields, while inhibitory interactions were strongest along the orthogonal axis. Parallel psychophysical studies in human observers showed opposing contextual interactions surrounding a target line with a similar spatial distribution. The results suggest that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.


1997 ◽  
Vol 9 (5) ◽  
pp. 959-970 ◽  
Author(s):  
Christian Piepenbrock ◽  
Helge Ritter ◽  
Klaus Obermayer

Correlation-based learning (CBL) has been suggested as the mechanism that underlies the development of simple-cell receptive fields in the primary visual cortex of cats, including orientation preference (OR) and ocular dominance (OD) (Linsker, 1986; Miller, Keller, & Stryker, 1989). CBL has been applied successfully to the development of OR and OD individually (Miller, Keller, & Stryker, 1989; Miller, 1994; Miyashita & Tanaka, 1991; Erwin, Obermayer, & Schulten, 1995), but the conditions for their joint development have not been studied (but see Erwin & Miller, 1995, for independent work on the same question) in contrast to competitive Hebbian models (Obermayer, Blasdel, & Schulten, 1992). In this article, we provide insight into why this has been the case: OR and OD decouple in symmetric CBL models, and a joint development of OR and OD is possible only in a parameter regime that depends on nonlinear mechanisms.


2005 ◽  
Vol 94 (1) ◽  
pp. 788-798 ◽  
Author(s):  
Valerio Mante ◽  
Matteo Carandini

A recent optical imaging study of primary visual cortex (V1) by Basole, White, and Fitzpatrick demonstrated that maps of preferred orientation depend on the choice of stimuli used to measure them. These authors measured population responses expressed as a function of the optimal orientation of long drifting bars. They then varied bar length, direction, and speed and found that stimuli of the same orientation can elicit different population responses and stimuli with different orientations can elicit similar population responses. We asked whether these results can be explained from known properties of V1 receptive fields. We implemented an “energy model” where a receptive field integrates stimulus energy over a region of three-dimensional frequency space. The population of receptive fields defines a volume of visibility, which covers all orientations and a plausible range of spatial and temporal frequencies. This energy model correctly predicts the population response to bars of different length, direction, and speed and explains the observations made with optical imaging. The model also readily explains a related phenomenon, the appearance of motion streaks for fast-moving dots. We conclude that the energy model can be applied to activation maps of V1 and predicts phenomena that may otherwise appear to be surprising. These results indicate that maps obtained with optical imaging reflect the layout of neurons selective for stimulus energy, not for isolated stimulus features such as orientation, direction, and speed.
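The building block of such an energy model can be sketched with a quadrature pair of Gabor filters: the sum of the squared even- and odd-phase responses is orientation-tuned but invariant to stimulus phase. This is a minimal spatial-only sketch (the study's model integrates energy over three-dimensional spatiotemporal frequency space); the filter sizes and frequencies below are illustrative assumptions.

```python
import numpy as np

def gabor(size, sf, theta, phase, sigma=8.0):
    """Gabor filter: isotropic Gaussian envelope times an oriented sinusoidal carrier."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    u = x * np.cos(theta) + y * np.sin(theta)            # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * sf * u + phase)

def energy(img, sf, theta, sigma=8.0):
    """Quadrature-pair energy: sum of squared even- and odd-phase filter responses."""
    even = gabor(img.shape[0], sf, theta, 0.0, sigma)
    odd = gabor(img.shape[0], sf, theta, np.pi / 2, sigma)
    return float((img * even).sum() ** 2 + (img * odd).sum() ** 2)

def grating(size, sf, theta, phase):
    """Full-field sinusoidal grating of given spatial frequency and orientation."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    return np.cos(2.0 * np.pi * sf * (x * np.cos(theta) + y * np.sin(theta)) + phase)
```

Probing `energy` with gratings shows the two defining properties: the response is essentially unchanged when the grating phase is shifted, and it collapses for the orthogonal orientation.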


2018 ◽  
Author(s):  
Adam P. Morris ◽  
Bart Krekelberg

Summary
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina – and propagated throughout the visual cortical hierarchy – is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies [1-4], we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal not only tracked the eye accurately during fixation, but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it took for new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world.
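The decoding step can be illustrated on synthetic data: if each neuron's firing rate is linearly modulated by eye position (a "gain field"), a least-squares decoder recovers gaze direction from the population response. All numbers below (50 neurons, gain and noise scales, degree ranges) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400
gaze = rng.uniform(-15.0, 15.0, size=(n_trials, 2))      # (azimuth, elevation), degrees
# each neuron's rate is linearly modulated by eye position (a "gain field") plus noise
gains = rng.normal(0.0, 0.1, size=(n_neurons, 2))
baseline = rng.uniform(5.0, 20.0, size=n_neurons)
rates = baseline + gaze @ gains.T + rng.normal(0.0, 0.3, size=(n_trials, n_neurons))

# fit a linear decoder (least squares with an intercept) on the first half of trials
X = np.hstack([rates, np.ones((n_trials, 1))])
coef, *_ = np.linalg.lstsq(X[:200], gaze[:200], rcond=None)
pred = X[200:] @ coef                                    # decode held-out trials
mae = np.abs(pred - gaze[200:]).mean()                   # mean absolute error, degrees
```

On held-out trials the decoded gaze tracks the true gaze to well under a degree here, mirroring the logic of training a decoder on fixation data and evaluating it on unseen conditions.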


2016 ◽  
Author(s):  
Inbal Ayzenshtat ◽  
Jesse Jackson ◽  
Rafael Yuste

Abstract
The response properties of neurons to sensory stimuli have been used to identify their receptive fields and functionally map sensory systems. In primary visual cortex, most neurons are selective to a particular orientation and spatial frequency of the visual stimulus. Using two-photon calcium imaging of neuronal populations from the primary visual cortex of mice, we have characterized the response properties of neurons to various orientations and spatial frequencies. Surprisingly, we found that the orientation selectivity of neurons actually depends on the spatial frequency of the stimulus. This dependence can be easily explained if one assumed spatially asymmetric Gabor-type receptive fields. We propose that receptive fields of neurons in layer 2/3 of visual cortex are indeed spatially asymmetric, and that this asymmetry could be used effectively by the visual system to encode natural scenes.
Significance Statement
In this manuscript we demonstrate that the orientation selectivity of neurons in primary visual cortex of mouse is highly dependent on the stimulus SF. This dependence is realized quantitatively in a decrease in the selectivity strength of cells in non-optimum SF, and more importantly, it is also evident qualitatively in a shift in the preferred orientation of cells in non-optimum SF. We show that a receptive-field model of a 2D asymmetric Gabor, rather than a symmetric one, can explain this surprising observation. Therefore, we propose that the receptive fields of neurons in layer 2/3 of mouse visual cortex are spatially asymmetric and this asymmetry could be used effectively by the visual system to encode natural scenes.
Highlights
– Orientation selectivity is dependent on spatial frequency.
– Asymmetric Gabor model can explain this dependence.
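A small numerical sketch shows why an asymmetric Gabor receptive field predicts the reported shift: when the elliptical Gaussian envelope is tilted relative to the carrier, the preferred orientation measured at a non-optimal spatial frequency moves away from the one measured at the optimal frequency, whereas an envelope aligned with the carrier produces no shift. The sizes, frequencies, and tilt below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def gabor_rf(size, f0, sig_par, sig_orth, tilt):
    """Gabor RF whose elliptical envelope is rotated by `tilt` relative to the carrier."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(tilt) + y * np.sin(tilt)             # envelope principal axes
    yr = -x * np.sin(tilt) + y * np.cos(tilt)
    envelope = np.exp(-(xr**2 / (2.0 * sig_par**2) + yr**2 / (2.0 * sig_orth**2)))
    return envelope * np.cos(2.0 * np.pi * f0 * x)       # carrier always along x

def preferred_orientation(rf, sf):
    """Grating orientation (degrees) giving the largest response amplitude at frequency sf."""
    ax = np.arange(rf.shape[0]) - rf.shape[0] // 2
    x, y = np.meshgrid(ax, ax)
    thetas = np.deg2rad(np.arange(-90, 90))
    amps = [abs(np.sum(rf * np.exp(-2j * np.pi * sf * (x * np.cos(t) + y * np.sin(t)))))
            for t in thetas]
    return float(np.rad2deg(thetas[int(np.argmax(amps))]))
```

Comparing the preferred orientation at the optimal frequency (here f0 = 0.1 cycles/pixel) with that at a lower, non-optimal frequency reproduces the qualitative effect: no shift for the aligned envelope, a clear shift for the tilted one.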


2021 ◽  
Author(s):  
Yulia Revina ◽  
Lucy S Petro ◽  
Cristina B Denk-Florea ◽  
Isa S Rao ◽  
Lars Muckli

The majority of synaptic inputs to the primary visual cortex (V1) are non-feedforward, instead originating from local recurrent and feedback connections. Animal electrophysiology experiments show that feedback signals originating from higher visual areas with larger receptive fields modulate the surround receptive fields of V1 neurons. Theories of cortical processing propose various roles for feedback and feedforward processing, but systematically investigating their independent contributions to cortical processing is challenging because feedback and feedforward processes coexist even in single neurons. Capitalising on the larger receptive fields of higher visual areas compared to V1, we used an occlusion paradigm that isolates top-down influences from feedforward processing. We utilised functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis methods in humans viewing natural scene images. We parametrically measured how the availability of contextual information determines the presence of detectable feedback information in non-stimulated V1, and how feedback information interacts with feedforward processing. We show that increasing the visibility of the contextual surround increases scene-specific feedback information, and that this contextual feedback enhances feedforward information. Our findings are in line with theories that cortical feedback signals transmit internal models of predicted inputs.


2021 ◽  
Vol 15 ◽  
Author(s):  
Tushar Chauhan ◽  
Timothée Masquelier ◽  
Benoit R. Cottereau

The early visual cortex is the site of crucial pre-processing for more complex, biologically relevant computations that drive perception and, ultimately, behaviour. This pre-processing is often studied under the assumption that neural populations are optimised for the most efficient (in terms of energy, information, spikes, etc.) representation of natural statistics. Normative models such as Independent Component Analysis (ICA) and Sparse Coding (SC) treat the phenomenon as a generative minimisation problem which early cortical populations are assumed to have evolved to solve. However, measurements in monkey and cat suggest that receptive fields (RFs) in the primary visual cortex are often noisy, blobby, and symmetrical, making them sub-optimal for operations such as edge-detection. We propose that this suboptimality occurs because the RFs do not emerge through a global minimisation of generative error, but through locally operating biological mechanisms such as spike-timing dependent plasticity (STDP). Using a network endowed with an abstract, rank-based STDP rule, we show that the shape and orientation tuning of the converged units are remarkably close to single-cell measurements in the macaque primary visual cortex. We quantify this similarity using physiological parameters (frequency-normalised spread vectors), information theoretic measures [Kullback–Leibler (KL) divergence and Gini index], as well as simulations of a typical electrophysiology experiment designed to estimate orientation tuning curves. Taken together, our results suggest that compared to purely generative schemes, process-based biophysical models may offer a better description of the suboptimality observed in the early visual cortex.
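The paper's learning rule is an abstract, rank-based STDP; as orientation, the generic pair-based STDP window below conveys the local, spike-timing-dependent character of such mechanisms. The amplitudes and time constants are conventional illustrative values, not those of the study.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post (dt >= 0) potentiates; post-before-pre (dt < 0) depresses,
    with exponentially decaying magnitude in both cases.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

Because each update depends only on the relative timing of one pre/post spike pair, the rule operates locally at the synapse, in contrast to the global error minimisation assumed by ICA- or SC-style normative models.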

