Deep Clustering with Self-supervision using Pairwise Data Similarities

Author(s):  
Mohammadreza Sadeghi ◽  
Narges Armanfard

Deep clustering incorporates embedding into clustering to find a lower-dimensional space appropriate for clustering. In this paper, we propose a novel deep clustering framework with self-supervision using pairwise data similarities (DCSS). The proposed method consists of two successive phases. In the first phase, we form hypersphere-like groups of similar data points, i.e. one hypersphere per cluster, employing an autoencoder that is trained with cluster-specific losses; the hyperspheres are formed in the autoencoder’s latent space. In the second phase, we employ pairwise data similarities to create a K-dimensional space, where K is the number of clusters, that can accommodate more complex cluster distributions and hence provides more accurate clustering. The autoencoder’s latent space obtained in the first phase is used as the input of the second phase. The effectiveness of both phases is demonstrated on seven benchmark datasets through a rigorous set of experiments.
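A minimal sketch of how the first phase could look in PyTorch is given below; it is illustrative only, not the authors' implementation. The autoencoder carries one trainable center per cluster, and a weighted reconstruction-plus-centering loss pulls each sample towards its closest center so that hypersphere-like groups form in the latent space. The network sizes, the softmax-based similarity weights and the exponent m are assumptions.

```python
# Hedged sketch of DCSS phase 1 (not the authors' code): an autoencoder whose
# latent space is shaped by weighted reconstruction + centering losses so that
# each cluster forms a hypersphere-like group around a trainable center.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AE(nn.Module):
    def __init__(self, in_dim, latent_dim, n_clusters):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        # One trainable center per cluster in the latent space.
        self.centers = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def phase1_loss(model, x, m=2.0):
    z, x_hat = model(x)                                # latent codes, reconstructions
    d2 = torch.cdist(z, model.centers).pow(2)          # squared distances to each center
    w = F.softmax(-d2, dim=1).pow(m)                   # similarity-based weights (assumption)
    rec = (x - x_hat).pow(2).sum(dim=1, keepdim=True)  # per-sample reconstruction error
    # Weighted reconstruction + centering loss, summed over clusters.
    return (w * (rec + d2)).sum(dim=1).mean()

# Usage sketch: one optimization step on a random mini-batch.
model = AE(in_dim=784, latent_dim=10, n_clusters=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)
loss = phase1_loss(model, x)
loss.backward()
opt.step()
```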

2021 ◽  
Author(s):  
Mohammadreza Sadeghi ◽  
Narges Armanfard

Deep clustering incorporates embedding into clustering to find a lower-dimensional space appropriate for clustering. Most existing methods try to group similar data points by simultaneously minimizing clustering and reconstruction losses with an autoencoder (AE). However, they all ignore the useful information available in pairwise data relationships. In this paper, we propose a novel deep clustering framework with self-supervision using pairwise data similarities (DCSS). The proposed method consists of two successive phases. First, we propose a novel AE-based approach that aggregates similar data points near a common group center in the latent space of an AE. The AE's latent space is obtained by minimizing weighted reconstruction and centering losses of the data points, where the weights are defined based on the similarity of data points and group centers. In the second phase, we map the AE's latent space, using a fully connected network MNet, onto a K-dimensional space used to derive the final data cluster assignments, where K is the number of clusters. MNet is trained to strengthen (weaken) the similarity of similar (dissimilar) samples. Experimental results on multiple benchmark datasets demonstrate the effectiveness of DCSS for data clustering and as a general framework for boosting state-of-the-art clustering methods.
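The second phase can be sketched as follows (again illustrative, not the paper's code): MNet maps the frozen latent codes onto a K-dimensional softmax space, and a pairwise loss pushes the K-dimensional representations of similar pairs together and dissimilar pairs apart. The cosine-similarity thresholds used to label pairs and the binary cross-entropy form of the pair loss are assumptions.

```python
# Hedged sketch of DCSS phase 2 (illustrative, not the authors' implementation):
# MNet maps AE latent codes onto a K-dimensional softmax space and is trained on
# pairwise targets so that similar pairs become more similar and dissimilar
# pairs less similar in the K-dimensional space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNet(nn.Module):
    def __init__(self, latent_dim, n_clusters):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_clusters))

    def forward(self, z):
        return F.softmax(self.net(z), dim=1)   # soft cluster assignment, K-dimensional

def pairwise_loss(q, z, hi=0.9, lo=0.1):
    """Strengthen (weaken) the similarity of similar (dissimilar) latent pairs."""
    zn = F.normalize(z, dim=1)
    sim_latent = zn @ zn.t()                    # cosine similarity in the latent space
    sim_q = q @ q.t()                           # similarity in the K-dimensional space
    pos = sim_latent > hi                       # confidently similar pairs (assumed threshold)
    neg = sim_latent < lo                       # confidently dissimilar pairs
    target = pos.float()
    mask = (pos | neg).float()
    bce = F.binary_cross_entropy(sim_q.clamp(1e-6, 1 - 1e-6), target, reduction='none')
    return (mask * bce).sum() / mask.sum().clamp(min=1)

# Usage sketch with random latent codes standing in for the phase-1 AE output.
z = torch.randn(64, 10)
mnet = MNet(latent_dim=10, n_clusters=10)
q = mnet(z)
loss = pairwise_loss(q, z)
loss.backward()
cluster_ids = q.argmax(dim=1)                  # final cluster assignments
```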


2021 ◽  
Vol 11 (15) ◽  
pp. 6963
Author(s):  
Jan Y. K. Chan ◽  
Alex Po Leung ◽  
Yunbo Xie

Using random projection, a method to speed up both kernel k-means and centroid initialization with k-means++ is proposed. Motivated by upper error bounds, we approximate the kernel matrix and distances in a lower-dimensional space ℝ^d before kernel k-means clustering. With random projections, previous work on bounds for dot products and an improved bound for kernel methods are considered for kernel k-means. The complexities of kernel k-means with Lloyd's algorithm and of centroid initialization with k-means++ are known to be O(nkD) and Θ(nkD), respectively, with n being the number of data points, D the dimensionality of the input feature vectors and k the number of clusters. The proposed method reduces the computational complexity of the kernel computation of kernel k-means from O(n²D) to O(n²d) and the subsequent computation for k-means with Lloyd's algorithm and centroid initialization from O(nkD) to O(nkd). Our experiments demonstrate that the speed-up of the clustering method with reduced dimensionality d = 200 is 2 to 26 times, with very little performance degradation (less than one percent) in general.
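A compact NumPy sketch of the pipeline, under stated assumptions, is shown below: data are projected with a Gaussian random matrix to d dimensions, an RBF kernel matrix is computed on the projected data, and kernel k-means is run with Lloyd-style updates. The kernel choice, the gamma value and the random (rather than k-means++) initialization are illustrative simplifications.

```python
# Hedged sketch (not the paper's code): random projection to d dimensions before
# computing an approximate RBF kernel matrix, then standard kernel k-means.
import numpy as np

def random_project(X, d, rng):
    D = X.shape[1]
    R = rng.standard_normal((D, d)) / np.sqrt(d)   # JL-style projection matrix
    return X @ R                                   # n x d, approximately dot-product preserving

def rbf_kernel(X, gamma=0.1):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_kmeans(K, k, n_iter=50, rng=None):
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)            # random init (k-means++ in the paper)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = labels == c
            m = max(mask.sum(), 1)
            # ||phi(x_i) - mu_c||^2 = K_ii - 2/|C| sum_j K_ij + 1/|C|^2 sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Usage sketch: project 1000 points from D = 2048 down to d = 200, then cluster.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2048))
Xp = random_project(X, d=200, rng=rng)
labels = kernel_kmeans(rbf_kernel(Xp), k=10, rng=rng)
```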


2019 ◽  
Vol 43 (4) ◽  
pp. 653-660 ◽  
Author(s):  
M.V. Gashnikov

Adaptive multidimensional signal interpolators are developed. These interpolators account for the presence and direction of boundaries of flat signal regions in each local neighborhood by automatically selecting an interpolating function for each signal sample. The interpolating function is selected by a parameterized rule that is optimized in a lower-dimensional parameter space. The dimension reduction is performed using rank filtering of local differences in the neighborhood of each signal sample. The interpolating functions of the adaptive interpolators are written out for the general multidimensional, three-dimensional and two-dimensional cases. The use of the adaptive interpolators for compression of multidimensional signals is also considered. Results of an experimental study of the adaptive interpolators on real multidimensional signals of various types are presented.
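To make the idea concrete, the sketch below illustrates a simplified two-dimensional case: for each interior sample, local neighbor differences are reduced by a rank (median) filter and a simple rule picks the interpolation direction with the smaller difference. The specific rule, its threshold and the neighborhood are assumptions, not the optimized parameterized rule of the paper.

```python
# Hedged 2-D illustration of direction-adaptive interpolation (a toy under
# stated assumptions, not the paper's optimized parameterized rule).
import numpy as np

def adaptive_interpolate_2d(img):
    """Predict each interior sample from its neighbors along the smoother direction."""
    img = img.astype(float)
    out = img.copy()
    H, W = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            dh = abs(img[i, j - 1] - img[i, j + 1])           # horizontal difference
            dv = abs(img[i - 1, j] - img[i + 1, j])           # vertical difference
            dd1 = abs(img[i - 1, j - 1] - img[i + 1, j + 1])  # diagonal differences
            dd2 = abs(img[i - 1, j + 1] - img[i + 1, j - 1])
            ref = np.median([dh, dv, dd1, dd2])               # rank filtering of local differences
            # Simplified decision rule: interpolate along the direction whose
            # difference is smallest relative to the rank-filtered reference,
            # i.e. along (not across) a likely boundary.
            if dh <= min(dv, ref):
                out[i, j] = (img[i, j - 1] + img[i, j + 1]) / 2
            else:
                out[i, j] = (img[i - 1, j] + img[i + 1, j]) / 2
    return out

# Usage sketch: a synthetic image with a vertical edge is predicted sample by sample.
img = np.zeros((64, 64))
img[:, 32:] = 255.0
predicted = adaptive_interpolate_2d(img)
```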


2015 ◽  
Vol 7 (3) ◽  
pp. 275-279 ◽  
Author(s):  
Agnė Dzidolikaitė

The paper analyzes a global optimization problem. To solve this problem, a multidimensional scaling algorithm is combined with a genetic algorithm. Using multidimensional scaling, we search for projections of multidimensional data in a lower-dimensional space and try to preserve the dissimilarities of the analyzed set. Using genetic algorithms, we can obtain not just one local solution but a whole population of optimal points. Different optimal points give different images, and by examining several images of the multidimensional data an expert can notice certain properties of the data. In this paper, a genetic algorithm is applied to multidimensional scaling, glass data are visualized, and certain properties are observed.

A global optimization problem is analyzed. It is defined as the optimization of a nonlinear objective function of continuous variables over a feasible region. Various algorithms are applied for such optimization. Exact algorithms usually find an exact solution, but this can take a very long time; often one wants a good solution within an acceptable time. In such cases other, heuristic, algorithms (also called heuristics) can be used. One class of heuristics is genetic algorithms, which mimic the evolution occurring in living nature. The algorithms are built from evolutionary operators: inheritance, mutation, selection and recombination. Genetic algorithms can find sufficiently good solutions to problems for which no exact algorithms exist. Genetic algorithms are also applicable to data visualization by the multidimensional scaling method. Multidimensional scaling searches for projections of multidimensional data in a space of lower dimensionality while trying to preserve the similarities or dissimilarities of the analyzed set. Genetic algorithms yield not a single local solution but a whole population of optima, and different optima correspond to different images. Seeing several variants of the multidimensional data, an expert can discern more of the data's properties. In the paper, a genetic algorithm is adapted to multidimensional scaling; it is shown that the multidimensional scaling algorithm can be combined with a genetic algorithm and used to visualize multidimensional data.
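The combination can be sketched as follows (an illustrative toy implementation, not the paper's algorithm): each individual in the genetic algorithm is a candidate two-dimensional embedding, fitness is the MDS stress, and selection, recombination and mutation evolve the population; the whole final population is returned so that several alternative images can be inspected. Population size, mutation scale and the crossover scheme are assumptions.

```python
# Hedged sketch of combining multidimensional scaling with a genetic algorithm.
import numpy as np

def stress(Y, delta):
    """Raw MDS stress: squared mismatch between embedded and original distances."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    return np.sum((d - delta) ** 2) / 2

def ga_mds(delta, pop_size=40, generations=200, sigma=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    n = delta.shape[0]
    pop = rng.standard_normal((pop_size, n, 2))            # population of 2-D embeddings
    for _ in range(generations):
        fitness = np.array([stress(Y, delta) for Y in pop])
        order = np.argsort(fitness)                        # lower stress = fitter
        parents = pop[order[: pop_size // 2]]              # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random((n, 1)) < 0.5                # uniform crossover (recombination)
            child = np.where(mask, a, b)
            child += sigma * rng.standard_normal(child.shape)  # mutation
            children.append(child)
        pop = np.concatenate([parents, np.array(children)])
    fitness = np.array([stress(Y, delta) for Y in pop])
    return pop[np.argsort(fitness)]                        # whole population, best first

# Usage sketch on random 9-D data standing in for the glass data set.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 9))
delta = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
solutions = ga_mds(delta, rng=rng)   # several local optima -> several images to inspect
```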


2019 ◽  
Vol 218 (1) ◽  
pp. 45-56 ◽  
Author(s):  
C Nur Schuba ◽  
Jonathan P Schuba ◽  
Gary G Gray ◽  
Richard G Davy

SUMMARY We present a new approach to estimating 3-D seismic velocities along a target interface. The approach uses an artificial neural network trained with user-supplied geological and geophysical input features derived from both a 3-D seismic reflection volume and a 2-D wide-angle seismic profile acquired from the Galicia margin, offshore Spain. The S-reflector detachment fault was selected as the interface of interest. The neural network, in the form of a multilayer perceptron, was employed with an autoencoder and a regression layer. The autoencoder was trained using a set of input features from the 3-D reflection volume. This set of features included the reflection amplitude and instantaneous frequency at the interface of interest, time-thicknesses of overlying major layers, and ratios of major layer time-thicknesses to the total time-depth of the interface. The regression model was trained to estimate the seismic velocities of the crystalline basement and mantle from these features. The ‘true’ velocities were obtained from an independent full-waveform inversion along a 2-D wide-angle seismic profile contained within the 3-D data set. The autoencoder compressed the vector of inputs into a lower-dimensional space, and the regression layer was then trained in that lower-dimensional space to estimate velocities above and below the targeted interface. Fifty networks were trained with different initializations, of which 37 reached the minimum achievable error of 2 per cent. The low standard deviation (<300 m s⁻¹) between different networks and the low errors on velocity estimations demonstrate that the input features were sufficient to capture variations in the velocity above and below the targeted S-reflector. The regression model was then applied to the 3-D reflection volume, where velocities were predicted over an area of ∼400 km². This approach provides an alternative way to obtain velocities across a 3-D seismic survey for a deep non-reflective lithology (e.g. upper mantle), where conventional reflection velocity estimations can be unreliable.
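A minimal sketch of this two-stage architecture, under assumed layer sizes and feature counts, is given below: an MLP autoencoder is first trained to reconstruct the per-location feature vectors, and a regression layer is then trained on the compressed codes against the FWI-derived velocities before being applied across the 3-D volume. It is illustrative only, not the authors' workflow.

```python
# Hedged sketch: MLP autoencoder compresses the feature vector, then a regression
# layer trained on the codes predicts velocities above/below the target interface.
import torch
import torch.nn as nn

n_features, code_dim = 8, 3

encoder = nn.Sequential(nn.Linear(n_features, 16), nn.Tanh(), nn.Linear(16, code_dim))
decoder = nn.Sequential(nn.Linear(code_dim, 16), nn.Tanh(), nn.Linear(16, n_features))
regressor = nn.Linear(code_dim, 2)       # outputs: velocity above and below the interface

X = torch.randn(500, n_features)         # stand-in for features along the 2-D FWI profile
v_true = torch.randn(500, 2)             # stand-in for FWI-derived 'true' velocities

# Stage 1: train the autoencoder on reconstruction only.
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(200):
    opt_ae.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt_ae.step()

# Stage 2: train the regression layer in the compressed space.
opt_reg = torch.optim.Adam(regressor.parameters(), lr=1e-3)
for _ in range(200):
    opt_reg.zero_grad()
    loss = nn.functional.mse_loss(regressor(encoder(X).detach()), v_true)
    loss.backward()
    opt_reg.step()

# Apply to features extracted across the 3-D reflection volume.
X_volume = torch.randn(10000, n_features)
v_pred = regressor(encoder(X_volume))
```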


Author(s):  
Wen-Ji Zhou ◽  
Yang Yu ◽  
Min-Ling Zhang

In multi-label classification tasks, labels are commonly related to each other. It is well recognized that utilizing label relationships is essential to multi-label learning. One way to utilize label relationships is to map the labels to a lower-dimensional space of uncorrelated labels, where the relationships can be encoded in the mapping. Previous linear mapping methods commonly result in regression subproblems in the lower-dimensional label space. In this paper, we show that mapping to a low-dimensional multi-label regression problem can be worse than mapping to a classification problem, since regression requires a more complex model than classification. We then propose the binary linear compression (BILC) method, which results in a binary label space and thus leads to classification subproblems. Experiments on several multi-label datasets show that employing classification in the embedded space results in much simpler models than regression, leading to smaller structural risk. The proposed method is also shown to be superior to some state-of-the-art approaches.
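A hedged sketch of the general label-compression idea is given below; it is not the exact BILC construction. Labels are compressed with a random linear map followed by sign thresholding, one binary classifier is trained per compressed dimension, and a least-squares decoder maps the compressed predictions back to the original label space. The random projection, the sign threshold and the decoder are all assumptions.

```python
# Hedged sketch of binary label-space compression (illustrative, not the exact
# BILC method): compress labels to a binary space, solve classification
# subproblems there, then decode back to the original label space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_labels, n_compressed, n_features = 400, 20, 6, 50

X = rng.standard_normal((n, n_features))
Y = (rng.random((n, n_labels)) < 0.15).astype(float)       # stand-in multi-label matrix

P = rng.standard_normal((n_labels, n_compressed))           # compression matrix (assumption)
B = (Y @ P > 0).astype(int)                                 # binary compressed labels

# One binary classification subproblem per compressed dimension.
clfs = [LogisticRegression(max_iter=1000).fit(X, B[:, j]) for j in range(n_compressed)]
B_hat = np.column_stack([clf.predict(X) for clf in clfs])

# Least-squares decoding from compressed predictions back to the label space.
W, *_ = np.linalg.lstsq(B, Y, rcond=None)
Y_hat = (B_hat @ W > 0.5).astype(int)                        # recovered multi-label predictions
```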


2006 ◽  
Vol 12 (4) ◽  
pp. 289-294 ◽  
Author(s):  
Rasa Karbauskaitė ◽  
Virginijus Marcinkevičius ◽  
Gintautas Dzemyda

This paper deals with a method called the relational perspective map, which visualizes multidimensional data on a two-dimensional closed plane. It tries to preserve the distances between the multidimensional data points in the lower-dimensional space. The most important feature of the relational perspective map, however, is its ability to visualize data in a non-overlapping manner, so that it reveals small distances better than other known visualization methods. In this paper, the features of the method are explored experimentally and some disadvantages are identified. We propose a modification of the method that enables us to avoid them.
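A toy sketch of a relational-perspective-map-style layout is given below (not the original algorithm): points are placed on a closed two-dimensional surface, modeled as a unit square with wrap-around distances, and moved by gradient descent on a stress-like energy so that surface distances approximate the original dissimilarities. The energy function, learning rate and torus parameterization are assumptions.

```python
# Hedged toy sketch of a distance-preserving layout on a closed 2-D surface
# (unit square with wrap-around), in the spirit of the relational perspective map.
import numpy as np

def torus_diff(Y):
    """Pairwise coordinate differences with wrap-around on the unit square."""
    d = Y[:, None, :] - Y[None, :, :]
    return d - np.round(d)                       # shortest displacement on the torus

def rpm_like_layout(delta, n_iter=500, lr=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    n = delta.shape[0]
    Y = rng.random((n, 2))                        # start at random torus positions
    for _ in range(n_iter):
        diff = torus_diff(Y)
        dist = np.linalg.norm(diff, axis=2) + np.eye(n)   # avoid divide-by-zero on the diagonal
        # Stress gradient: pull/push each pair toward its target dissimilarity.
        coeff = (dist - delta) / dist
        np.fill_diagonal(coeff, 0.0)
        grad = (coeff[:, :, None] * diff).sum(axis=1)
        Y = (Y - lr * grad) % 1.0                 # stay on the closed surface
    return Y

# Usage sketch: lay out 30 random 5-D points on the closed surface.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5))
delta = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
delta /= delta.max()                              # rescale to the surface size (assumption)
Y = rpm_like_layout(delta, rng=rng)
```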

