TLUs, linear separability and vectors

2004 ◽ Vol 16 (9) ◽ pp. 1827-1850
Author(s): Fabian J. Theis

The goal of blind source separation (BSS) lies in recovering the original independent sources of a mixed random vector without knowing the mixing structure. A key ingredient for performing BSS successfully is to know the indeterminacies of the problem—that is, to know how the separating model relates to the original mixing model (separability). For linear BSS, Comon (1994) showed using the Darmois-Skitovitch theorem that the linear mixing matrix can be found except for permutation and scaling. In this work, a much simpler, direct proof for linear separability is given. The idea is based on the fact that a random vector is independent if and only if the Hessian of its logarithmic density (resp. characteristic function) is diagonal everywhere. This property is then exploited to propose a new algorithm for performing BSS. Furthermore, first ideas of how to generalize separability results based on Hessian diagonalization to more complicated nonlinear models are studied in the setting of postnonlinear BSS.
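The "only if" direction of that diagonality criterion can be checked in one line from the factorization of the density; a minimal sketch follows (the converse direction is the substantive part established in the paper):

```latex
% Independence means the density factorizes, so the log-density is additive
% and every mixed second derivative vanishes wherever p(x) > 0:
p(x) = \prod_i p_i(x_i)
\;\Longrightarrow\;
\log p(x) = \sum_i \log p_i(x_i)
\;\Longrightarrow\;
\frac{\partial^2 \log p(x)}{\partial x_j\,\partial x_k} = 0
\qquad (j \neq k),
% i.e. the Hessian of the logarithmic density is diagonal everywhere.
```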


2011 ◽ pp. 606-606
Author(s): Geoffrey I. Webb ◽ Claude Sammut ◽ Claudia Perlich ◽ Tamás Horváth ◽ Stefan Wrobel ◽ ...

Author(s): Emmanouil Froudarakis ◽ Uri Cohen ◽ Maria Diamantaki ◽ Edgar Y. Walker ◽ Jacob Reimer ◽ ...

Abstract: Despite variations in appearance, we robustly recognize objects. Neuronal populations responding to objects presented under varying conditions form object manifolds, and hierarchically organized visual areas are thought to untangle pixel intensities into linearly decodable object representations. However, the associated changes in the geometry of object manifolds along the cortex remain unknown. Using home-cage training, we showed that mice are capable of invariant object recognition. We simultaneously recorded the responses of thousands of neurons to measure the information about object identity available across the visual cortex and found that the lateral visual areas LM, LI, and AL carry more linearly decodable object-identity information than other visual areas. Applying the theory of linear separability of manifolds, we found that the increase in classification capacity is associated with a decrease in the dimension and radius of the object manifolds, identifying features of the population code that enable invariant object coding.
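As a rough illustration of what "linearly decodable object-identity information" means operationally, here is a minimal Python sketch of cross-validated linear decoding from a population response matrix; the synthetic data, variable names, and the choice of a linear SVM are placeholder assumptions, not the authors' actual analysis pipeline.

```python
# Sketch: estimating linearly decodable object-identity information from a
# population response matrix (trials x neurons). Illustrative only; the data
# below are random placeholders, so accuracy will be near chance.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_objects = 400, 500, 4

# Placeholder population responses and per-trial object labels.
responses = rng.normal(size=(n_trials, n_neurons))
labels = rng.integers(0, n_objects, size=n_trials)

# Cross-validated accuracy of a linear readout is one proxy for how much
# object identity is linearly decodable from the population.
decoder = LinearSVC(C=1.0, dual=False, max_iter=5000)
accuracy = cross_val_score(decoder, responses, labels, cv=5).mean()
print(f"linear decoding accuracy: {accuracy:.2f}")
```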


2012 ◽ Vol 39 (9) ◽ pp. 7796-7807
Author(s): David A. Elizondo ◽ Ralph Birkenhead ◽ Matias Gamez ◽ Noelia Garcia ◽ Esteban Alfaro

2005 ◽ Vol 17 (11) ◽ pp. 2337-2382
Author(s): Robert Legenstein ◽ Christian Naeger ◽ Wolfgang Maass

Spiking neurons are very flexible computational modules, which, depending on the values of their adjustable synaptic parameters, can implement an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm in which, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously to the well-known condition of learnability by perceptrons; here, however, the linear separability criterion has to be applied to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models of neurons and dynamic synapses and for more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data, where STDP modulates the initial release probability of dynamic synapses.
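For reference, the classical perceptron rule whose convergence theorem the letter takes as its point of comparison; this is a toy Python sketch on synthetic linearly separable data, and it does not model STDP, teacher forcing, or the correlation-matrix condition from the paper.

```python
# Classical perceptron learning rule on data that is linearly separable by
# construction; the convergence theorem guarantees a finite number of updates.
import numpy as np

rng = np.random.default_rng(1)
w_true = rng.normal(size=3)              # hidden separating weight vector
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)                  # labels in {-1, +1}

w = np.zeros(3)
for _ in range(100):                     # epochs
    errors = 0
    for x_i, y_i in zip(X, y):
        if y_i * (x_i @ w) <= 0:         # misclassified example
            w += y_i * x_i               # perceptron update
            errors += 1
    if errors == 0:                      # no mistakes in a full pass: done
        break
print("converged weights:", w)
```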


2017 ◽ Vol 54 (2) ◽ pp. 287-314
Author(s): Claudio Torres ◽ Pablo Pérez-Lantero ◽ Gilberto Gutiérrez
