TurSOM: A paradigm bridging Turing's unorganized machines and self-organizing maps demonstrating dual self-organization

2011 ◽  
Vol 74 (17) ◽  
pp. 3125-3141
Author(s):  
Derek Beaton ◽  
Iren Valova ◽  
Daniel MacLean


2021 ◽  
Author(s):  
Noureddine Kermiche

Using data augmentation techniques, unsupervised representation learning methods extract features from data by training artificial neural networks to recognize that different views of an object are just different instances of the same object. We extend current unsupervised representation learning methods to networks that can self-organize data representations into two-dimensional (2D) maps. The proposed method combines ideas from Kohonen’s original self-organizing maps (SOM) and recent developments in unsupervised representation learning. A ResNet backbone with an added 2D Softmax output layer is used to organize the data representations. A new loss function with linear complexity is proposed to enforce the SOM requirements of winner-take-all (WTA) competition between neurons while explicitly avoiding collapse into trivial solutions. We show that enforcing the SOM topological neighborhood requirement can be achieved by a fixed radial convolution at the 2D output layer, without resorting to the actual radial activation functions that prevented the original SOM algorithm from being extended to modern neural network architectures. We demonstrate that, when combined with data augmentation techniques, self-organization is a simple emergent property of the 2D output layer, arising from neighborhood recruitment combined with WTA competition between neurons. The proposed methodology is demonstrated on the SVHN and CIFAR10 datasets. The proposed algorithm is the first end-to-end unsupervised learning method that combines data self-organization and visualization as integral parts of unsupervised representation learning.
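The abstract includes no code; the PyTorch sketch below is only one plausible reading of the described architecture, not the authors' implementation: a ResNet backbone feeding a 2D Softmax output layer whose activations are smoothed by a fixed (non-trainable) radial kernel. The grid size, kernel width and layer names are assumptions, and the proposed linear-complexity loss is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class SOMHead(nn.Module):
    """Illustrative 2D Softmax head with a fixed Gaussian neighborhood kernel."""

    def __init__(self, feat_dim=512, grid=16, sigma=1.0):
        super().__init__()
        self.grid = grid
        self.proj = nn.Linear(feat_dim, grid * grid)  # one logit per map unit
        # Fixed (non-trainable) radial kernel standing in for neighborhood recruitment
        k = 2 * int(3 * sigma) + 1
        ax = torch.arange(k, dtype=torch.float32) - k // 2
        yy, xx = torch.meshgrid(ax, ax, indexing="ij")
        kernel = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        self.register_buffer("kernel", (kernel / kernel.sum()).view(1, 1, k, k))

    def forward(self, x):
        logits = self.proj(x).view(-1, 1, self.grid, self.grid)
        p = F.softmax(logits.flatten(1), dim=1).view_as(logits)  # soft WTA over the map
        # Neighborhood smoothing: units near the winner are recruited by the fixed kernel
        p = F.conv2d(p, self.kernel, padding=self.kernel.shape[-1] // 2)
        return p.flatten(1)  # (batch, grid * grid) map activation


backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Identity()              # expose the 512-d feature vector
model = nn.Sequential(backbone, SOMHead())
maps = model(torch.randn(4, 3, 32, 32))  # augmented views of one image would share a map target
```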


MENDEL ◽  
2017 ◽  
Vol 23 (1) ◽  
pp. 111-118
Author(s):  
Muhammad Rafi ◽  
Muhammad Waqar ◽  
Hareem Ajaz ◽  
Umar Ayub ◽  
Muhammad Danish

Cluster analysis of textual documents is a common technique for better filtering, navigation, understanding and comprehension of large document collections. Document clustering is an autonomous method that separates a large heterogeneous document collection into smaller, more homogeneous sub-collections called clusters. The self-organizing map (SOM) is a type of artificial neural network (ANN) that can be used to perform autonomous self-organization of a high-dimensional feature space into low-dimensional projections called maps. It is considered a good method for clustering, as both require unsupervised processing. In this paper, we propose a multi-layer, multi-feature SOM to cluster documents. The paper implements a SOM with four layers, containing lexical terms, phrases and sequences in the bottom layers, respectively, and combining all of them at the top layer. The documents are processed to extract these features to feed the SOM. The internal weights and interconnections between these layers' features (neurons) settle automatically through iterations with a small learning rate to discover the actual clusters. We have performed an extensive set of experiments on standard text-mining datasets such as NEWS20, Reuters and WebKB with the evaluation measures F-measure and purity. The evaluation gives encouraging results and outperforms some of the existing approaches. We conclude that a SOM with multiple features (lexical terms, phrases and sequences) and multiple layers can be very effective in producing high-quality clusters on large document collections.
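As a point of reference for the SOM component, the sketch below shows a plain single-layer Kohonen update (winner selection, decaying learning rate and neighborhood) on TF-IDF-style document vectors. It is not the paper's multi-layer, multi-feature architecture; the grid size, decay schedule and feature dimensions are illustrative assumptions.

```python
import numpy as np


def train_som(docs, grid=(10, 10), iters=5000, lr0=0.1, sigma0=3.0, seed=0):
    """Plain single-layer Kohonen SOM; the paper's variant stacks several
    such maps (lexical terms, phrases, sequences) and combines them on top."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, docs.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = docs[rng.integers(len(docs))]
        # Best-matching unit (winner-take-all step)
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # Decaying learning rate and neighborhood radius
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * nbh * (x - weights)
    return weights


# Usage: rows are documents, columns are TF-IDF features of lexical terms
tfidf = np.random.rand(200, 300)  # stand-in for a real document-term matrix
som = train_som(tfidf)
```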


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1605 ◽  
Author(s):  
Lyes Khacef ◽  
Laurent Rodriguez ◽  
Benoît Miramond

Cortical plasticity is one of the main features that enable our ability to learn and adapt in our environment. Indeed, the cerebral cortex self-organizes through structural and synaptic plasticity mechanisms that are very likely at the basis of an extremely interesting characteristic of human brain development: multimodal association. In spite of the diversity of the sensory modalities, like sight, sound and touch, the brain arrives at the same concepts (convergence). Moreover, biological observations show that one modality can activate the internal representation of another modality when both are correlated (divergence). In this work, we propose the Reentrant Self-Organizing Map (ReSOM), a brain-inspired neural system based on the reentry theory using Self-Organizing Maps and Hebbian-like learning. We propose and compare different computational methods for unsupervised learning and inference, then quantify the gain of the ReSOM in a multimodal classification task. The divergence mechanism is used to label one modality based on the other, while the convergence mechanism is used to improve the overall accuracy of the system. We perform our experiments on a constructed written/spoken digits database and a Dynamic Vision Sensor (DVS)/electromyography (EMG) hand gestures database. The proposed model is implemented on a cellular neuromorphic architecture that enables distributed computing with local connectivity. We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system’s topology is not fixed by the user but learned through the system’s experience via self-organization.
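The following NumPy sketch is only a toy illustration of a Hebbian-like coupling between two already-trained maps, in the spirit of the convergence/divergence mechanisms described above. It is not the ReSOM implementation; the map sizes, activation vectors and learning rate are assumed for the example.

```python
import numpy as np


def hebbian_associate(acts_a, acts_b, lr=0.01):
    """Illustrative Hebbian-like coupling between two pre-trained maps:
    co-active units in map A and map B strengthen their lateral weight."""
    w = np.zeros((acts_a.shape[1], acts_b.shape[1]))
    for a, b in zip(acts_a, acts_b):   # paired multimodal samples
        w += lr * np.outer(a, b)       # Hebb: dw proportional to pre * post
    return w


def diverge(act_a, w):
    """Divergence: activity in one modality recalls a pattern in the other."""
    return act_a @ w


# Toy usage with random vectors standing in for SOM unit activities
rng = np.random.default_rng(0)
acts_visual = rng.random((500, 100))   # e.g. 10x10 written-digit map, flattened
acts_audio = rng.random((500, 64))     # e.g. 8x8 spoken-digit map, flattened
w_va = hebbian_associate(acts_visual, acts_audio)
recalled_audio = diverge(acts_visual[0], w_va)
```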


2015 ◽  
Vol 725-726 ◽  
pp. 1057-1062 ◽  
Author(s):  
Tatiana Simankina ◽  
Olga Popova

An algorithm for clustering based on neural network modeling using T. Kohonen's self-organizing maps is considered for the analysis of the housing stock. This analysis is required for planning the complex reproduction of housing and for developing regional major-repair programs. The mechanism of self-organization is presented. Clustering of a representative sample of the housing stock is performed; its result is 16 groups of objects with a high level of internal similarity. The main advantages of this approach for monitoring and analysis of the city housing stock are described.
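A minimal sketch of this kind of analysis, assuming the third-party minisom library (not named in the paper) and hypothetical housing-stock features: a 4x4 map yields the 16 clusters mentioned above, and each object is assigned to its best-matching unit.

```python
import numpy as np
from minisom import MiniSom  # third-party SOM library; pip install minisom

# Hypothetical housing-stock features: year built, floor area, wear level, storeys
stock = np.random.rand(1000, 4)  # stand-in for the real representative sample

som = MiniSom(4, 4, stock.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(stock)
som.train_random(stock, 10000)

# Each building is assigned to its best-matching unit; a 4x4 map gives 16 clusters
clusters = [som.winner(x) for x in stock]
```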


2019 ◽  
Vol 24 (1) ◽  
pp. 87-92 ◽  
Author(s):  
Yvette Reisinger ◽  
Mohamed M. Mostafa ◽  
John P. Hayes


Author(s):  
Sylvain Barthelemy ◽  
Pascal Devaux ◽  
Francois Faure ◽  
Matthieu Pautonnier
