Efficient Similarity-Preserving Unsupervised Learning using Modular Sparse Distributed Codes and Novelty-Contingent Noise

2020
Author(s):  
Rod Rinkus

There is increasing recognition in neuroscience that information is represented in the brain, e.g., in neocortex and hippocampus, in the form of sparse distributed codes (SDCs), a kind of cell assembly. Two essential questions are: (a) how are such codes formed on the basis of single trials, and (b) how is similarity preserved during learning, i.e., how do more similar inputs get mapped to more similar SDCs? I describe a novel Modular Sparse Distributed Code (MSDC) that provides simple, neurally plausible answers to both questions. An MSDC coding field (CF) consists of Q winner-take-all (WTA) competitive modules (CMs), each comprising K binary units (analogs of principal cells). The modular nature of the CF makes possible a single-trial, unsupervised learning algorithm that approximately preserves similarity and, crucially, runs in fixed time, i.e., the number of steps needed to store an item remains constant as the number of stored items grows. Further, once items are stored as MSDCs in superposition, such that their intersection structure reflects input similarity, both fixed-time best-match retrieval and fixed-time belief update (updating the probabilities of all stored items) become possible. The algorithm's core principle is simply to add noise, proportional to the novelty of the input, into the process of choosing a code, i.e., choosing a winner in each CM. This causes the expected intersection of the code for an input, X, with the code of each previously stored input, Y, to be proportional to the similarity of X and Y. Results demonstrating these capabilities for spatial patterns are given in the appendix.
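The core principle, noise injected into each CM's winner choice in proportion to input novelty, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the softmax-temperature mechanism, and all parameter values are assumptions.

```python
import numpy as np

def choose_msdc_code(match, base_temp=0.05, noise_scale=5.0, rng=None):
    """Sketch of MSDC code selection with novelty-contingent noise.

    match: (Q, K) array of per-unit match scores in [0, 1], one row per
    winner-take-all competitive module (CM). Global familiarity G is the
    mean of the per-CM best matches; novelty = 1 - G. The softmax
    temperature grows with novelty, so a familiar input reuses its stored
    winners (preserving code intersections), while a novel input draws
    nearly random winners, giving its code a small expected overlap with
    stored codes.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    G = match.max(axis=1).mean()                   # familiarity in [0, 1]
    temperature = base_temp + noise_scale * (1.0 - G)
    logits = match / temperature
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # one winner per CM; the Q winners jointly form the MSDC
    return np.array([rng.choice(match.shape[1], p=p) for p in probs])

# A perfectly familiar input (novelty 0): winners follow the stored code.
code = choose_msdc_code(np.eye(4))                 # Q = 4 CMs, K = 4 units
```

With a maximally novel input (all match scores zero), the same function draws each CM's winner nearly uniformly at random, which is what makes the expected code intersection track input similarity.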

2018
Vol 116 (1)
pp. 96-105
Author(s):
Lichao Chen
Sudhir Singh
Thomas Kailath
Vwani Roychowdhury

Despite significant recent progress, machine vision systems lag considerably behind their biological counterparts in performance, scalability, and robustness. A distinctive hallmark of the brain is its ability to automatically discover and model objects, at multiscale resolutions, from repeated exposures to unlabeled contextual data and then to be able to robustly detect the learned objects under various nonideal circumstances, such as partial occlusion and different view angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. This paper leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale, and shift invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) composed of parts, their different configurations and views, and their spatial relationships. Computationally, the object prototypes are represented as geometric associative networks using probabilistic constructs such as Markov random fields. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models far more efficiently than much of the recent computer vision literature. We also present efficient algorithms for detection and localization in new scenes of objects and their partial views.
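The idea of scoring a part configuration with a pairwise Markov random field can be illustrated with a toy energy function. This is a generic pairwise-MRF sketch, not the paper's model; the function name, edge layout, and all numbers are hypothetical.

```python
def mrf_energy(labels, unary, pairwise, edges):
    """Toy energy of a pairwise Markov random field over object parts.

    Nodes are parts, labels index candidate placements, unary terms score
    part appearance at a placement, and pairwise terms score the spatial
    relationship between linked parts. Lower energy means a more
    plausible part configuration.
    """
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(pairwise[i, j][labels[i]][labels[j]] for i, j in edges)
    return e

# Two parts, two candidate placements each, one spatial link between them.
unary = [[0.1, 0.9], [0.2, 0.8]]
pairwise = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}
energy = mrf_energy([0, 0], unary, pairwise, [(0, 1)])
```

Here the consistent configuration `[0, 0]` gets the lowest energy because both its appearance terms and its spatial-relationship term are small; detection then amounts to searching for the minimum-energy labeling.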


Author(s):  
Jiankun Chen
Xiaolan Qiu
Chuanzhao Han
Yirong Wu

Recent neuroscience findings show that neural information in the brain is encoded not only spatially. Spiking neural networks (SNNs) based on pulse-frequency coding play a very important role in processing brain-like signals, especially complex spatiotemporal information. In this paper, an unsupervised learning algorithm for bilayer feedforward spiking neural networks based on spike-timing-dependent plasticity (STDP) competition is proposed and applied to SAR image classification on the MSTAR dataset for the first time. The SNN learns autonomously from the input values without any labeled signal, and the overall classification accuracy on SAR targets reaches 80.8%. The experimental results show that the algorithm adopts synaptic neurons and a network structure with stronger biological plausibility and is able to classify targets in SAR images. Meanwhile, the feature-map extraction ability of the neurons is visualized through the generative property of the SNN, a promising attempt at applying brain-like neural networks to SAR image interpretation.
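The STDP rule underlying such learning can be sketched in its standard pair-based form. The constants below are illustrative defaults, not the paper's exact rule, and the function name is an assumption.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Simplified pair-based STDP weight update (illustrative constants).

    dt = t_post - t_pre in ms: a causal pairing (pre fires before post,
    dt > 0) potentiates the synapse, an anti-causal pairing depresses it,
    and the magnitude of the change decays exponentially with |dt|.
    Competition between output neurons arises when updates like this are
    combined with winner-take-all lateral inhibition.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)
    else:
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, 1.0))     # keep the weight bounded

w_up = stdp_update(0.5, dt=5.0)      # causal pairing: weight increases
w_down = stdp_update(0.5, dt=-5.0)   # anti-causal pairing: weight decreases
```

Because the rule depends only on local spike timing, no labels are needed, which is what makes the training in the abstract fully unsupervised.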


Author(s):  
M. Sato
Y. Ogawa
M. Sasaki
T. Matsuo

A virgin female of the noctuid moth, a member of the Noctuidae that feeds on Cucumis and related plants, performs calling at a fixed time each day, depending on day length. The photoreceptors that induce this calling are located around the neurosecretory cells (NSCs) in the central portion of the protocerebrum, and the female's biological clock is also thought to reside in the cerebral lobe. To elucidate calling behavior and the function of the biological clock, it is first necessary to clarify the basic structure of the brain. Observations of 12- or 30-day-old noctuid moths showed that their brains are bounded by an outer neural lamella (about 2.5 μm thick) of collagen fibrils and an inner layer of perineurium cells. Nerve cells surround the cerebral lobes, in which NSCs, mushroom bodies, and central nerve cells, among others, are observed. The NSCs are large cells (20 to 30 μm in diameter) located in the pars intercerebralis of the head and at the rear of the mushroom body (two each on the right and left). The cells were classified into two types: one having many free ribosomes 15 to 20 nm in diameter, and the other having granules 150 to 350 nm in diameter (Fig. 1).


2010
Vol 22 (12)
pp. 2979-3035
Author(s):
Stefan Klampfl
Wolfgang Maass

Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It is an open question how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states that lasts several hundred ms. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed-point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
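Linear SFA itself is compact enough to sketch: whiten the input, then keep the directions along which the whitened signal changes most slowly. This is a minimal textbook-style sketch under standard assumptions, not the authors' code.

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Minimal linear slow feature analysis.

    X is a (T, d) time series. Steps: center, whiten, then keep the
    eigenvectors of the covariance of the whitened temporal differences
    with the SMALLEST eigenvalues -- those directions vary most slowly.
    """
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = evecs / np.sqrt(evals + 1e-12)         # whitening matrix (d, d)
    Z = X @ W                                  # whitened signals
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    P = dvecs[:, :n_components]                # slowest directions first
    return Z @ P                               # extracted slow features

# A slow sine linearly mixed with a fast one: SFA recovers the slow source.
t = np.arange(2000)
S = np.stack([np.sin(2 * np.pi * t / 500),
              np.sin(2 * np.pi * t / 11)], axis=1)
X = S @ np.array([[1.0, 0.5], [0.3, 1.0]]).T   # linear mixture
slow = linear_sfa(X)[:, 0]
```

The recovered component matches the slow source up to sign and scale; the connection to the FLD arises, as the abstract notes, when the input statistics make class identity the slowest-varying aspect of the signal.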


2021
Vol 14 (11)
pp. 2445-2458
Author(s):
Valerio Cetorelli
Paolo Atzeni
Valter Crescenzi
Franco Milicchio

We introduce landmark grammars, a new family of context-free grammars aimed at describing the HTML source code of pages published by large, templated websites, and therefore at effectively tackling Web data extraction problems. Indeed, they address the inherent ambiguity of HTML, one of the main challenges of Web data extraction, which, despite over twenty years of research, has been largely neglected by the approaches presented in the literature. We then formalize the Smallest Extraction Problem (SEP), an optimization problem for finding the grammar of a family that best describes a set of pages while contextually extracting their data. Finally, we present an unsupervised learning algorithm to induce a landmark grammar from a set of pages sharing a common HTML template, and we present an automatic Web data extraction system. Experiments on consolidated benchmarks show that the approach can contribute substantially to improving the state of the art.


2003
Vol 13 (02)
pp. 111-118
Author(s):
Jairo Diniz Filho
Teresa B. Ludermir

Neuronal groups projecting widely in the brain have been experimentally associated with attention and mood changes. Those groups are known to exert a modulatory effect over other, larger groups. Brain functions, moreover, are often viewed as being performed by specialized modular systems. In this work, we propose an architecture of a modular nature to explore a particular decision process, and we show the importance of the modulatory effect of a special evaluation segment in that process.

