Orientation Tuning Properties of Simple Cells in Area V1 Derived from an Approximate Analysis of Nonlinear Neural Field Models

2001, Vol 13 (8), pp. 1721-1747. Author(s): Thomas Wennekers

We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian-shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, such as response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and are zero below it. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with a "difference of gaussians" connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models with results derived from the approximation method, we find that the approximation predicts the tuning behavior of the full model well. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates with increasing input is the semilinear function. For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x²) or concave (e.g., √x) regime of their rate function. In the first case, tuning gradually changes from input driven at low input strength (broad tuning that depends strongly on the input, with roughly linear growth of amplitudes with input strength) to recurrently driven at moderate input strength (sharp tuning, supralinear increase of amplitudes with input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can "memorize" tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.
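
As a minimal sketch of the kind of model analyzed above, the snippet below relaxes a one-layer ring model with a difference-of-gaussians recurrent kernel and a semilinear rate function to its steady state and reads off peak amplitude and tuning half-width. All coupling amplitudes, widths, the input shape, and the time constant are illustrative assumptions, not the article's values.

import numpy as np

# Minimal sketch of a one-layer ring model with a difference-of-gaussians
# recurrent kernel and a semilinear (threshold-linear) rate function.
# All parameter values below are illustrative assumptions, not the
# values used in the article.

N = 180                                       # one bin per degree of orientation
theta = np.linspace(-90.0, 90.0, N, endpoint=False)
dtheta = 180.0 / N

# circular distance between orientations and the difference-of-gaussians kernel
D = np.abs(theta[:, None] - theta[None, :])
D = np.minimum(D, 180.0 - D)
W = (0.04 * np.exp(-D**2 / (2 * 15.0**2))
     - 0.03 * np.exp(-D**2 / (2 * 40.0**2))) * dtheta

def f(x):
    """Semilinear rate function."""
    return np.maximum(x, 0.0)

def steady_state(c, sigma_in=25.0, tau=10.0, dt=0.5, steps=4000):
    """Relax tau * dr/dt = -r + f(h + W r) to a fixed point for input strength c."""
    h = c * np.exp(-theta**2 / (2 * sigma_in**2))   # tuned input centred at 0 deg
    r = np.zeros(N)
    for _ in range(steps):
        r += (dt / tau) * (-r + f(h + W @ r))
    return r

for c in (0.5, 1.0, 2.0):
    r = steady_state(c)
    hw = 0.5 * np.sum(r > 0.5 * r.max()) * dtheta   # half-width at half maximum
    print(f"input {c:3.1f}: peak rate {r.max():6.3f}, tuning half-width {hw:4.1f} deg")

Because the rate function here is threshold-linear with zero threshold, the printed half-width stays constant while the peak rate grows linearly with input strength, which is exactly the behaviour the abstract singles out for the semilinear case.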

2002, Vol 14 (8), pp. 1801-1825. Author(s): Thomas Wennekers

This article presents an approximation method to reduce the spatiotemporal behavior of localized activation peaks (also called “bumps”) in nonlinear neural field equations to a set of coupled ordinary differential equations (ODEs) for only the amplitudes and tuning widths of these peaks. This enables a simplified analysis of steady-state receptive fields and their stability, as well as spatiotemporal point spread functions and dynamic tuning properties. A lowest-order approximation for peak amplitudes alone shows that much of the well-studied behavior of small neural systems (e.g., the Wilson-Cowan oscillator) should carry over to localized solutions in neural fields. Full spatiotemporal response profiles can further be reconstructed from this low-dimensional approximation. The method is applied to two standard neural field models: a one-layer model with difference-of-gaussians connectivity kernel and a two-layer excitatory-inhibitory network. Similar models have been previously employed in numerical studies addressing orientation tuning of cortical simple cells. Explicit formulas for tuning properties, instabilities, and oscillation frequencies are given, and exemplary spatiotemporal response functions, reconstructed from the low-dimensional approximation, are compared with full network simulations.
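
A lowest-order amplitude sketch of the reduction described above, under the assumption that the two bump amplitudes obey Wilson-Cowan-type ODEs: in the full method the effective couplings come from overlap integrals of the gaussian kernels with the bump profiles, whereas here they are simply set to the classical Wilson-Cowan values for illustration.

import numpy as np

# Lowest-order sketch: the peak amplitudes of the excitatory and inhibitory
# bumps are treated as a two-variable Wilson-Cowan-type system. The effective
# couplings, gains, thresholds, and time constant below are the classical
# Wilson-Cowan values, used here only as an illustrative assumption.

c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0      # effective E->E, I->E, E->I, I->I couplings
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7   # sigmoid gains and thresholds
P, Q = 1.25, 0.0                            # tuned external drive to the E and I peaks
tau = 8.0                                   # time constant in ms (assumed)

def S(x, a, th):
    # logistic rate function shifted so that S(0) = 0
    return 1.0 / (1.0 + np.exp(-a * (x - th))) - 1.0 / (1.0 + np.exp(a * th))

def derivs(E, I):
    dE = (-E + (1.0 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1.0 - I) * S(c3 * E - c4 * I + Q, a_i, th_i)) / tau
    return dE, dI

dt, E, I, trace = 0.05, 0.1, 0.05, []
for _ in range(40000):                      # 2 s of simulated time
    dE, dI = derivs(E, I)
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

# crude frequency estimate from upward zero crossings of the detrended E amplitude
x = np.array(trace[len(trace) // 2:]) - np.mean(trace[len(trace) // 2:])
ups = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
if len(ups) > 2:
    print(f"E-peak amplitude oscillates at ~{1000.0 / (np.mean(np.diff(ups)) * dt):.0f} Hz")
else:
    print("E- and I-peak amplitudes settle to a fixed point for these couplings")

Explicit formulas for such oscillation frequencies are what the article derives; the crude zero-crossing estimate here only serves as a numerical check of the reduced amplitude system.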


2013, Vol 14 (2), pp. 997-1025. Author(s): Muhammad Yousaf, John Wyller, Tom Tetzlaff, Gaute T. Einevoll

2021, Vol 103 (3). Author(s): Conor L. Morrison, Priscilla E. Greenwood, Lawrence M. Ward

2018, Vol 115 (45), pp. 11619-11624. Author(s): Wei P. Dai, Douglas Zhou, David W. McLaughlin, David Cai

Recent experiments have shown that mouse primary visual cortex (V1) differs markedly from that of cat or monkey in its response properties: in particular, the contrast invariance of orientation selectivity (OS) seen in the firing rates of cat and monkey neurons is replaced in mouse by contrast-dependent sharpening of OS in excitatory neurons and broadening in inhibitory neurons. These differences indicate a different circuit design for mouse V1 than for cat or monkey. Here we develop a large-scale computational model of an effective input layer of mouse V1. Constrained by experimental data, the model successfully reproduces experimentally observed response properties, for example, distributions of firing rates, orientation tuning widths, and response modulations of simple and complex neurons, including the contrast dependence of orientation tuning curves. Analysis of the model shows that strong feedback inhibition and strong orientation-preferential cortical excitation to the excitatory population are the predominant mechanisms underlying the contrast-sharpening of OS in excitatory neurons, while the contrast-broadening of OS in inhibitory neurons results from strong but nonpreferential cortical excitation to these inhibitory neurons, with the resulting contrast-broadened inhibition producing a secondary enhancement of the contrast-sharpened OS of excitatory neurons. Finally, based on these mechanisms, we show that adjusting the detailed balance between the predominant mechanisms can lead to contrast invariance, providing insight for future studies of contrast dependence and invariance.
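
The contrast-dependent sharpening and broadening of OS described above can be quantified with standard selectivity measures; the sketch below computes half-width at half-maximum and the global OS index for two hypothetical tuning curves that are constructed, by assumption, to sharpen (excitatory-like) or broaden (inhibitory-like) with contrast. It illustrates the measures only and is not output of the model.

import numpy as np

# How contrast-dependent changes in orientation selectivity (OS) can be
# quantified: half-width at half-maximum (HWHM) and the global OS index
# |sum r(th) exp(2i th)| / sum r(th). The tuning curves are hypothetical
# stand-ins built to sharpen or broaden with contrast, not model output.

theta = np.deg2rad(np.arange(0.0, 180.0, 1.0))           # orientation in radians
pref = np.deg2rad(90.0)

def circ_gauss(theta, sigma_deg):
    d = np.angle(np.exp(2j * (theta - pref))) / 2.0       # wrapped orientation difference
    return np.exp(-np.rad2deg(d) ** 2 / (2.0 * sigma_deg ** 2))

def hwhm_deg(r):
    return 0.5 * np.sum(r > 0.5 * r.max())                # 1-deg bins -> half-width in deg

def osi(r, theta):
    return np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)

for label, (sig_lo, sig_hi, base_hi) in {
    "E-like (sharpens)": (35.0, 20.0, 0.0),
    "I-like (broadens)": (25.0, 25.0, 0.6),
}.items():
    r_lo = circ_gauss(theta, sig_lo)                      # low-contrast tuning curve
    r_hi = circ_gauss(theta, sig_hi) + base_hi            # high-contrast tuning curve
    print(f"{label}: HWHM {hwhm_deg(r_lo):.0f} -> {hwhm_deg(r_hi):.0f} deg, "
          f"OSI {osi(r_lo, theta):.2f} -> {osi(r_hi, theta):.2f}")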


2003, Vol 40 (3), pp. 721-740. Author(s): Henry W. Block, Yulin Li, Thomas H. Savits

In this paper we consider the initial and asymptotic behaviour of the failure rate function resulting from mixtures of subpopulations and the formation of coherent systems. In particular, it is shown that the failure rate of a mixture has the same limiting behaviour as the failure rate of the strongest subpopulation. A similar result holds for coherent systems, except that the role of the strongest subpopulation is played by the strongest min path set.
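
As a hedged numerical illustration of the limiting result (not taken from the paper), consider a mixture of two exponential subpopulations: the mixture failure rate starts at the weighted average of the subpopulation rates and converges to the rate of the strongest (longest-lived) subpopulation. The weights and rates below are arbitrary.

import numpy as np

# Mixture of two exponential subpopulations with rates lam1 < lam2 and weight p
# on the first: the mixture failure rate equals p*lam1 + (1-p)*lam2 at t = 0
# and tends to lam1 (the strongest subpopulation) as t grows.

p, lam1, lam2 = 0.3, 0.5, 2.0

def mixture_failure_rate(t):
    surv = p * np.exp(-lam1 * t) + (1 - p) * np.exp(-lam2 * t)
    dens = p * lam1 * np.exp(-lam1 * t) + (1 - p) * lam2 * np.exp(-lam2 * t)
    return dens / surv

for t in (0.0, 1.0, 5.0, 20.0):
    print(f"t = {t:5.1f}: failure rate = {mixture_failure_rate(t):.4f}")
print(f"initial value p*lam1 + (1-p)*lam2 = {p * lam1 + (1 - p) * lam2:.4f}, limit lam1 = {lam1}")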

