Dependency Reduction with Divisive Normalization: Justification and Effectiveness

2011 ◽  
Vol 23 (11) ◽  
pp. 2942-2973 ◽  
Author(s):  
Siwei Lyu

Efficient coding transforms that reduce or remove statistical dependencies in natural sensory signals are important for both biology and engineering. In recent years, divisive normalization (DN) has been advocated as a simple and effective nonlinear efficient coding transform. In this work, we first elaborate on the theoretical justification for DN as an efficient coding transform. Specifically, we use the multivariate t model to represent several important statistical properties of natural sensory signals and show that DN approximates the optimal transforms that eliminate statistical dependencies in the multivariate t model. Second, we show that several forms of DN used in the literature are equivalent in their effects as efficient coding transforms. Third, we provide a quantitative evaluation of the overall dependency reduction performance of DN for both the multivariate t models and natural sensory signals. Finally, we find that the DN transform increases statistical dependencies in the multivariate t model and in natural sensory signals when the input dimension is low. This implies that for DN to be an effective efficient coding transform, it must pool over a sufficiently large number of inputs.
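
For concreteness, a minimal numerical sketch of the kind of dependency reduction described above (not the paper's code; the pool size, degrees of freedom, semisaturation constant, and dependence measure are illustrative choices):

```python
# A minimal sketch (not the paper's code): apply a generic divisive
# normalization y_i = x_i / sqrt(c + sum_j x_j^2) to samples from a
# multivariate t distribution and compare a crude dependence measure
# before and after. Pool size d, d.o.f., and the constant c are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d, n, df, c = 16, 100_000, 3.0, 1.0

# Multivariate t: Gaussian coefficients scaled by a shared chi-square
# mixer, which induces the variance dependencies seen in natural signals.
g = rng.standard_normal((n, d))
mixer = np.sqrt(rng.chisquare(df, size=(n, 1)) / df)
x = g / mixer

# Divisive normalization: each coefficient divided by the pooled energy.
y = x / np.sqrt(c + np.sum(x**2, axis=1, keepdims=True))

def mean_abs_corr(z):
    # Average |correlation| between squared coefficients -- a proxy for
    # the variance dependencies that DN is meant to remove.
    r = np.corrcoef(z**2, rowvar=False)
    return np.abs(r[~np.eye(d, dtype=bool)]).mean()

print("before DN:", mean_abs_corr(x))
print("after DN: ", mean_abs_corr(y))
```

Because the shared scale mixer of the multivariate t is largely divided out by the pooled energy, the dependence between squared coefficients drops after DN; shrinking the pool size d probes the small-pool regime the abstract warns about.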

2016 ◽  
Vol 9 (1) ◽  
pp. 212-216 ◽  
Author(s):  
Zhenyu Yuan

In their focal article, Ree, Carretta, and Teachout (2015) based their definition of a dominant general factor (DGF) on two criteria: (a) a DGF should be the largest source of reliable variance, and (b) it should influence every variable measuring the construct. Although detailed attention has been paid to the statistical properties of a DGF, I believe another criterion of equal if not greater importance is the theoretical justification to expect a DGF in the measurement of a construct. In the following commentary, I highlight theory as an additional criterion for judging the meaningfulness and usefulness of DGFs, discuss the risks of creating a DGF without any theoretical guidance, and elaborate on the complexities surrounding job performance as a detailed example of why theory matters before a DGF is extracted from performance ratings.


Author(s):  
Alexander Gomez Villa ◽  
Marcelo Bertalmío ◽  
Jesus Malo

In this work we study the communication efficiency of a psychophysically tuned cascade of Wilson-Cowan and Divisive Normalization layers that simulates the retina-V1 pathway. This is the first analysis of Wilson-Cowan networks in terms of multivariate total correlation. The parameters of the cortical model were derived through the relation between the steady state of the Wilson-Cowan model and the Divisive Normalization model. Efficiency is analyzed in two ways. First, we provide an analytical expression for the reduction of the total correlation among the responses of a V1-like population after the Wilson-Cowan interaction. Second, we empirically study the efficiency with visual stimuli and statistical tools that were not available before: (1) a recent, radiometrically calibrated set of natural scenes, and (2) a recent technique that estimates the multivariate total correlation in bits from sets of visual responses using only univariate operations, thus giving better redundancy estimates. The theoretical and empirical results show that, although this cascade of layers was not optimized for statistical independence in any way, the redundancy between the responses is substantially reduced along the pathway. Specifically, we show that (1) the efficiency of a Wilson-Cowan network is similar to that of its equivalent Divisive Normalization, (2) while the initial layers (Von Kries adaptation and Weber-like brightness) contribute to univariate equalization, the larger contributions to the reduction in total correlation come from the nonlinear local contrast and the local oriented filters, and (3) psychophysically tuned models are more efficient in the more populated regions of the luminance-contrast plane. These results are an alternative confirmation of the Efficient Coding Hypothesis for Wilson-Cowan systems. From an applied perspective, they suggest that neural field models could be an option for image compression.
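
As a rough illustration of the redundancy measurements described above (a sketch only: it uses a Gaussian approximation to total correlation rather than the univariate estimator used in the paper, and the toy response model and DN constants are invented):

```python
# A rough sketch only: a Gaussian approximation to total correlation,
# T = -0.5 * log det(R) with R the correlation matrix, applied to toy
# "filter responses" before and after divisive normalization. This is
# not the estimator used in the paper, and the response model and DN
# constants below are invented for illustration.
import numpy as np

def gaussian_total_correlation(z):
    # Total correlation in nats, assuming jointly Gaussian responses.
    sign, logdet = np.linalg.slogdet(np.corrcoef(z, rowvar=False))
    return -0.5 * logdet

rng = np.random.default_rng(1)
n, d = 50_000, 8

# Toy V1-like responses: correlated filter outputs sharing a common
# multiplicative contrast fluctuation.
cov = 0.3 * np.ones((d, d)) + 0.7 * np.eye(d)
base = rng.standard_normal((n, d)) @ np.linalg.cholesky(cov).T
responses = base * np.exp(0.5 * rng.standard_normal((n, 1)))

# Divisive normalization with an illustrative semisaturation constant.
dn = responses / (1.0 + np.sqrt(np.mean(responses**2, axis=1, keepdims=True)))

print("total correlation before DN:", gaussian_total_correlation(responses))
print("total correlation after DN: ", gaussian_total_correlation(dn))
```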


2010 ◽  
Vol 22 (12) ◽  
pp. 3179-3206 ◽  
Author(s):  
Jesús Malo ◽  
Valero Laparra

The conventional approach in computational neuroscience in favor of the efficient coding hypothesis goes from image statistics to perception. It has been argued that the behavior of the early stages of biological visual processing (e.g., spatial frequency analyzers and their nonlinearities) may be obtained from image samples and the efficient coding hypothesis using no psychophysical or physiological information. In this work we address the same issue in the opposite direction: from perception to image statistics. We show that a psychophysically fitted image representation in V1 has appealing statistical properties, for example, approximate PDF factorization and substantial mutual information reduction, even though no statistical information was used to fit the V1 model. These results are complementary evidence in favor of the efficient coding hypothesis.
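
A small, generic check of this kind of claim (a sketch, not the paper's methodology; histogram binning is a crude MI estimator and the bin count is an arbitrary choice):

```python
# A generic check (not the paper's methodology): a plain histogram
# estimate of the mutual information between two response channels.
# If the joint PDF approximately factorizes, the estimate should be
# close to zero. The bin count is an arbitrary choice.
import numpy as np

def mutual_information_bits(a, b, bins=64):
    # MI in bits from a 2-D histogram of two 1-D response arrays.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

Comparing such pairwise estimates for linear filter responses and for their nonlinear V1-style counterparts gives a coarse view of the mutual information reduction reported here.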


Author(s):  
Сергій Олександрович Гнатюк ◽  
Нургуль Абадуллаєвна Сейлова ◽  
Юлія Ярославівна Поліщук ◽  
Олег Володимирович Заріцький

2013 ◽  
Vol 25 (11) ◽  
pp. 2904-2933
Author(s):  
Matthew Chalk ◽  
Iain Murray ◽  
Peggy Seriès

Attention causes diverse changes to visual neuron responses, including alterations in receptive field structure and firing rates. A common theoretical approach to investigating why sensory neurons behave as they do is based on the efficient coding hypothesis: that sensory processing is optimized toward the statistics of the received input. We extend this approach to account for the influence of task demands, hypothesizing that the brain learns a probabilistic model of both the sensory input and the reward received for performing different actions. Attention-dependent changes to neural responses reflect optimization of this internal model to deal with changes in the sensory environment (stimulus statistics) and behavioral demands (reward statistics). We use this framework to construct a simple model of visual processing that is able to replicate a number of attention-dependent changes to the responses of neurons in the midlevel visual cortices. The model is consistent with, and provides a normative explanation for, recent divisive normalization models of attention (Reynolds & Heeger, 2009).
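
A schematic rendering of the divisive-normalization account of attention referred to above, in the spirit of Reynolds and Heeger (2009); the pooling width, semisaturation constant, and stimuli below are illustrative values, not fitted parameters:

```python
# A schematic sketch in the spirit of the Reynolds & Heeger (2009)
# normalization model of attention: the stimulus drive is multiplied by
# an attention field and then divided by a pooled suppressive drive plus
# a semisaturation constant. Pooling width, sigma, and stimuli are
# illustrative values, not fitted parameters.
import numpy as np

def attention_normalization(drive, attention, pool_sigma=4.0, sigma=0.1):
    # drive, attention: 1-D arrays over receptive-field position.
    excitatory = attention * drive
    # Suppressive drive: Gaussian pooling of the attention-weighted drive.
    x = np.arange(drive.size)
    kernel = np.exp(-0.5 * ((x - x.mean()) / pool_sigma) ** 2)
    kernel /= kernel.sum()
    suppressive = np.convolve(excitatory, kernel, mode="same")
    return excitatory / (suppressive + sigma)

# Two equal-contrast stimuli; attending near position 30 yields a larger
# normalized response there than at the unattended stimulus at 70.
pos = np.arange(100)
drive = np.exp(-0.5 * ((pos - 30) / 3.0) ** 2) + np.exp(-0.5 * ((pos - 70) / 3.0) ** 2)
attention = 1.0 + 0.5 * np.exp(-0.5 * ((pos - 30) / 5.0) ** 2)
print(attention_normalization(drive, attention)[[30, 70]])
```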


Physica ◽  
1952 ◽  
Vol 18 (2) ◽  
pp. 1147-1150
Author(s):  
D MAEDER ◽  
V WINTERSTEIGER

2017 ◽  
Author(s):  
Francesca Serra ◽  
Andrea Spoto ◽  
Marta Ghisi ◽  
Giulio Vidotto
