Linear Separability
Recently Published Documents

TOTAL DOCUMENTS: 83 (five years: 8)
H-INDEX: 14 (five years: 1)

Entropy ◽ 2021 ◽ Vol 23 (3) ◽ pp. 305
Author(s): Marco Gherardi

Linear separability, a core concept in supervised machine learning, refers to whether the labels of a data set can be captured by the simplest possible machine: a linear classifier. In order to quantify linear separability beyond this single bit of information, one needs models of data structure that are parameterized by interpretable quantities and analytically tractable. Here, I address one class of models with these properties and show how a combinatorial method allows the computation, in a mean-field approximation, of two useful descriptors of linear separability, one of which is closely related to the popular concept of storage capacity. I motivate the need for multiple metrics by quantifying linear separability in a simple synthetic data set with controlled correlations between the points and their labels, as well as in the benchmark data set MNIST, where the capacity alone paints an incomplete picture. The analytical results indicate a high degree of “universality”, or robustness with respect to the microscopic parameters controlling data structure.
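
As a rough illustration of the basic notion discussed in this abstract (not code from the paper), the sketch below fits a linear classifier to randomly labelled synthetic points and checks whether the labels are realized exactly; varying the load P/N probes the storage-capacity regime, which for unstructured random data saturates at the classical value of 2. The sizes, the random-label setup, and the use of scikit-learn's LinearSVC are illustrative assumptions.

# Minimal sketch (assumptions noted above): test whether a labelled data set
# is linearly separable by fitting a linear classifier and checking for zero
# training error.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic data: P points in N dimensions with random +/-1 labels,
# the regime studied by storage-capacity analyses (load alpha = P/N).
N, P = 50, 80
X = rng.standard_normal((P, N))
y = rng.choice([-1, 1], size=P)

# A strongly regularization-free linear SVM: if it reaches 100% training
# accuracy, the labels can be realized by a linear classifier
# (up to solver tolerance).
clf = LinearSVC(C=1e6, max_iter=100_000)
clf.fit(X, y)
separable = clf.score(X, y) == 1.0
print(f"load alpha = P/N = {P/N:.2f}, linearly separable: {separable}")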


2020 ◽ Vol 20 (11) ◽ pp. 1244
Author(s): Simona Buetti ◽ Yujie Shao ◽ Zoe Jing Xu ◽ Alejandro Lleras

Author(s): Emmanouil Froudarakis ◽ Uri Cohen ◽ Maria Diamantaki ◽ Edgar Y. Walker ◽ Jacob Reimer ◽ ...

Abstract
Despite variations in appearance, we robustly recognize objects. Neuronal populations responding to objects presented under varying conditions form object manifolds, and hierarchically organized visual areas are thought to untangle pixel intensities into linearly decodable object representations. However, the associated changes in the geometry of object manifolds along the cortex remain unknown. Using home-cage training, we showed that mice are capable of invariant object recognition. We simultaneously recorded the responses of thousands of neurons to measure the information about object identity available across the visual cortex and found that the lateral visual areas LM, LI, and AL carry more linearly decodable object-identity information than other visual areas. We applied the theory of linear separability of manifolds and found that the increase in classification capacity is associated with a decrease in the dimension and radius of the object manifolds, identifying features of the population code that enable invariant object coding.
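
A minimal sketch of the kind of linear read-out this abstract describes (not the authors' code or data): population responses to two objects are simulated so that nuisance variation spreads each object's responses into a manifold, and a linear decoder trained on some conditions is tested on held-out conditions. The population size, number of conditions, and noise model are all assumptions made for illustration.

# Illustrative sketch only: linearly decoding object identity from simulated
# population responses, with nuisance variation forming an "object manifold"
# per object.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, n_conditions = 200, 40            # assumed population and condition counts

# Each object has a mean response pattern; varying viewing conditions displace
# responses along a few nuisance dimensions, forming the object manifold.
centres = rng.standard_normal((2, n_neurons))
nuisance_axes = rng.standard_normal((5, n_neurons))

X, y = [], []
for obj in (0, 1):
    coeffs = 0.5 * rng.standard_normal((n_conditions, 5))   # manifold extent ("radius")
    X.append(centres[obj] + coeffs @ nuisance_axes)
    y += [obj] * n_conditions
X, y = np.vstack(X), np.array(y)

# Train on half the conditions of each object, test on held-out conditions:
# above-chance accuracy indicates an invariant, linearly decodable identity code.
train = np.r_[0:20, 40:60]
test = np.r_[20:40, 60:80]
clf = LogisticRegression(max_iter=2000).fit(X[train], y[train])
print("held-out decoding accuracy:", clf.score(X[test], y[test]))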


2019
Author(s): N. Alex Cayco Gajic ◽ Séverine Durand ◽ Michael Buice ◽ Ramakrishnan Iyer ◽ Clay Reid ◽ ...

Summary
How neural populations represent sensory information, and how that representation is transformed from one brain area to another, are fundamental questions in neuroscience. The dorsolateral geniculate nucleus (dLGN) and primary visual cortex (V1) represent two distinct stages of early visual processing. Classic sparse-coding theories propose that V1 neurons represent local features of images. More recent theories have argued that the visual pathway transforms visual representations to become increasingly linearly separable. To test these ideas, we simultaneously recorded the spiking activity of mouse dLGN and V1 in vivo. We find strong evidence for both sparse-coding and linear-separability theories. Surprisingly, the correlations between neurons in V1 (but not dLGN) were shaped so as to be irrelevant for stimulus decoding, a feature which we show enables linear separability. Therefore, our results suggest that the dLGN-V1 transformation reshapes correlated variability in a manner that facilitates linear decoding while producing a sparse code.
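
The point about correlated variability can be illustrated with a toy simulation (an assumption-laden sketch, not the study's analysis): the same amount of shared variability is placed either along the stimulus-coding direction or orthogonal to it, and a cross-validated linear decoder is compared in the two cases. All axes, noise levels, and trial counts are invented for the example.

# Toy sketch: how the orientation of correlated (shared) variability relative
# to the signal direction affects linear decoding of two stimuli.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_neurons, n_trials = 50, 500
signal = np.zeros(n_neurons); signal[0] = 1.0        # stimulus-coding axis (assumed)
mu_a, mu_b = 0.5 * signal, -0.5 * signal             # mean responses to stimuli A and B

def decoding_accuracy(noise_axis, shared_sd=2.0, private_sd=0.3):
    # Shared variability along noise_axis plus independent noise per neuron.
    def trials(mu):
        shared = rng.standard_normal((n_trials, 1)) * shared_sd
        return mu + shared * noise_axis + private_sd * rng.standard_normal((n_trials, n_neurons))
    X = np.vstack([trials(mu_a), trials(mu_b)])
    y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
    return cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

# Shared variability aligned with the signal axis limits linear separability;
# the same variability along an orthogonal axis barely affects it.
orthogonal = np.zeros(n_neurons); orthogonal[1] = 1.0
print("noise along the signal axis   :", decoding_accuracy(signal))
print("noise orthogonal to the signal:", decoding_accuracy(orthogonal))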


2019 ◽ Vol 48 (3) ◽ pp. 335-347
Author(s): Kimery R. Levering ◽ Nolan Conaway ◽ Kenneth J. Kurtz

2017 ◽ Vol 54 (2) ◽ pp. 287-314
Author(s): Claudio Torres ◽ Pablo Pérez-Lantero ◽ Gilberto Gutiérrez
