Triplet Enhanced AutoEncoder: Model-free Discriminative Network Embedding

Author(s):  
Yao Yang ◽  
Haoran Chen ◽  
Junming Shao

Deep autoencoders are widely used in dimensionality reduction because of the expressive power of neural networks. They are therefore naturally suited to embedding tasks, which essentially compress high-dimensional information into a low-dimensional latent space. For network representation, autoencoder-based methods such as SDNE and DNGR have achieved results comparable with the state of the art. However, none of them leverages label information, so the resulting embeddings lack discriminative power. In this paper, we present Triplet Enhanced AutoEncoder (TEA), a new deep network embedding approach from the perspective of metric learning. Equipped with the triplet-loss constraint, the proposed approach not only captures the topological structure but also preserves discriminative information. Moreover, unlike existing discriminative embedding techniques, TEA is independent of any specific classifier; we call this the model-free property. Extensive empirical results on three public datasets (i.e., Cora, Citeseer and BlogCatalog) show that TEA is stable and achieves state-of-the-art performance compared with both supervised and unsupervised network embedding approaches at various percentages of labeled data. The source code can be obtained from https://github.com/yybeta/TEA.
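The core idea of combining a reconstruction objective with a triplet constraint can be sketched as follows. This is a minimal illustrative sketch, not the authors' TEA implementation; the function names and the weighting parameter `alpha` are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull same-label embeddings together,
    push different-label embeddings at least `margin` further apart."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

def tea_style_objective(x, x_recon, anchor, positive, negative,
                        alpha=1.0, margin=1.0):
    """Combined objective: autoencoder reconstruction error (preserves
    topology) plus a triplet term (preserves label discrimination)."""
    recon = np.mean((x - x_recon) ** 2)
    trip = np.mean(triplet_loss(anchor, positive, negative, margin))
    return recon + alpha * trip
```

Because the discriminative signal enters only through distances between embeddings, no classifier is baked into the objective, which is the sense in which such an approach can be called model-free.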

2021 ◽  
Author(s):  
Tal Ashuach ◽  
Mariano I Gabitto ◽  
Michael I Jordan ◽  
Nir Yosef

The ability to jointly profile the transcriptional and chromatin landscape of single cells has emerged as a powerful technique to identify cellular populations and shed light on their regulation of gene expression. Current computational methods analyze jointly profiled (paired) or individual (unpaired) data modalities, but do not offer a principled way to analyze paired and unpaired samples together. Here we present MultiVI, a probabilistic framework that leverages deep neural networks to jointly analyze scRNA, scATAC and multiomic (scRNA + scATAC) data. MultiVI creates an informative low-dimensional latent space that accurately reflects both the chromatin and the transcriptional properties of the cells even when one of the modalities is missing. MultiVI accounts for technical effects in both scRNA-seq and scATAC-seq data while correcting for batch effects in both modalities. We use public datasets to demonstrate that MultiVI is stable, easy to use, and outperforms current approaches for the joint analysis of paired and unpaired data. MultiVI is available as an open source package, implemented in the scvi-tools framework: https://docs.scvi-tools.org
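One way to see how a single model can train on both paired and unpaired cells is through per-cell modality masking in the loss. The sketch below is a conceptual illustration under that assumption, not the scvi-tools implementation (which uses variational likelihoods rather than squared error); all names are hypothetical.

```python
import numpy as np

def masked_joint_loss(recon_rna, x_rna, recon_atac, x_atac,
                      has_rna, has_atac):
    """Per-cell reconstruction loss that only counts the modalities
    actually measured for that cell. `has_rna` / `has_atac` are
    boolean masks with one entry per cell."""
    loss_rna = np.mean((recon_rna - x_rna) ** 2, axis=1)    # per cell
    loss_atac = np.mean((recon_atac - x_atac) ** 2, axis=1)
    total = has_rna * loss_rna + has_atac * loss_atac
    # average each cell's loss over its number of observed modalities
    denom = np.maximum(has_rna.astype(float) + has_atac.astype(float), 1.0)
    return np.mean(total / denom)
```

Cells with only one modality still contribute gradient signal to the shared latent space, which is why the latent representation can remain informative when a modality is missing.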


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 456
Author(s):  
Mariusz Kurowski ◽  
Andrzej Sroczyński ◽  
Georgis Bogdanis ◽  
Andrzej Czyżewski

This research addresses handwriting biometrics applications in e-Security and e-Health. An automated analysis method for authenticating the dynamic electronic representation of handwritten signatures was investigated. The developed algorithms are based on the dynamic analysis of electronically handwritten signatures employing neural networks. The signatures were acquired with the electronic pen designed and described in the paper. The triplet loss method was used to train a neural network suitable for writer-invariant signature verification. For each signature, the same neural network calculates a fixed-length latent space representation. A hand-corrected dataset containing 10,622 signatures was used to train and evaluate the proposed neural network. After training, the network was tested and evaluated against results found in the literature. Using the triplet loss algorithm to teach the neural network to generate embeddings has proven effective at clustering similar signatures together and separating them from the signatures of different writers.
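Once every signature maps to a fixed-length embedding, verification reduces to a distance or similarity test against an enrolled reference. A minimal sketch of that final step, assuming cosine similarity and an illustrative threshold (the paper does not specify either):

```python
import numpy as np

def verify_signature(emb_query, emb_reference, threshold=0.8):
    """Writer verification sketch: accept the query signature if the
    cosine similarity between its embedding and the enrolled reference
    embedding exceeds a threshold (threshold value is illustrative)."""
    sim = np.dot(emb_query, emb_reference) / (
        np.linalg.norm(emb_query) * np.linalg.norm(emb_reference))
    return sim >= threshold
```

The threshold trades off false acceptances against false rejections and would in practice be tuned on a held-out set of genuine and forged signatures.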


Author(s):  
Di Jin ◽  
Meng Ge ◽  
Liang Yang ◽  
Dongxiao He ◽  
Longbiao Wang ◽  
...  

Network embedding learns a low-dimensional representation of a network in order to capture its intrinsic features. It has been applied to many tasks, e.g., network community detection and user recommendation. One recent research direction for network embedding focuses on exploiting diverse information, including network topology and the semantic information on the nodes of a network. However, such diverse information has not been fully utilized or adequately integrated in existing methods, so the resulting network embeddings are far from satisfactory. In this paper, we develop a weight-free multi-component network embedding approach based on network reconstruction via a deep autoencoder. Three key components make our new approach effective: a unified graph representation of network topology and semantic information; enhancement of the graph representation using local network structure (i.e., pairwise relationships between nodes) by sampling with latent-space regularization; and integration of the diverse information, in graph form, in a deep autoencoder. Extensive experimental results on seven real-world networks demonstrate the superior performance of our method over nine state-of-the-art embedding methods.
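The first component, a unified graph representation of topology and node semantics, can be sketched as a weighted blend of a normalized adjacency matrix and an attribute-similarity matrix. This is an illustrative construction under that assumption, not the paper's exact formulation; `eta` and the function name are hypothetical.

```python
import numpy as np

def unified_representation(adj, attrs, eta=0.5):
    """Fuse topology and node semantics into one graph-style matrix:
    cosine similarity of node attributes blended with the
    row-normalized adjacency. `eta` weights the two sources."""
    # Row-normalize the adjacency so each row sums to 1
    # (rows of isolated nodes stay all-zero).
    deg = adj.sum(axis=1, keepdims=True)
    topo = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float),
                     where=deg > 0)
    # Attribute cosine similarity between every pair of nodes.
    norms = np.linalg.norm(attrs, axis=1, keepdims=True)
    unit = np.divide(attrs, norms, out=np.zeros_like(attrs, dtype=float),
                     where=norms > 0)
    sem = unit @ unit.T
    return eta * topo + (1 - eta) * sem
```

The fused matrix can then be fed to a deep autoencoder exactly like a plain adjacency matrix, which is what lets one reconstruction objective integrate both information sources.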


2021 ◽  
Vol 15 (4) ◽  
pp. 1-23
Author(s):  
Guojie Song ◽  
Yun Wang ◽  
Lun Du ◽  
Yi Li ◽  
Junshan Wang

Network embedding is a method of learning a low-dimensional vector representation of network vertices that preserves selected network properties. Previous studies mainly focus on preserving structural information of vertices at a particular scale, such as neighbor information or community information, but cannot preserve the hierarchical community structure, which would enable the network to be easily analyzed at various scales. Inspired by the hierarchical structure of galaxies, we propose the Galaxy Network Embedding (GNE) model, which formulates an optimization problem with spherical constraints to describe a network embedding that preserves the hierarchical community structure. More specifically, we present an approach for embedding communities onto a low-dimensional spherical surface whose center represents the parent community they belong to. Our experiments reveal that the representations from GNE preserve the hierarchical community structure and show advantages in several applications such as vertex multi-class classification, network visualization, and link prediction. The source code of GNE is available online.
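The geometric intuition — sub-communities living on a sphere centred at their parent's embedding — can be sketched directly. This toy placement is illustrative only (GNE solves a constrained optimization rather than sampling at random); the function name and signature are assumptions.

```python
import numpy as np

def embed_on_sphere(center, n_children, radius, seed=0):
    """Place `n_children` sub-community embeddings uniformly at random
    on the sphere of given radius centred at the parent community's
    embedding, illustrating GNE's spherical constraint."""
    rng = np.random.default_rng(seed)
    # Random directions: normalized Gaussian samples are uniform on the sphere.
    dirs = rng.standard_normal((n_children, center.shape[0]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return center + radius * dirs
```

Nesting this construction recursively — each child sphere centred on a point of its parent's sphere, with shrinking radius — is what encodes the community hierarchy into distances in the embedding space.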


2021 ◽  
Vol 13 (2) ◽  
pp. 51
Author(s):  
Lili Sun ◽  
Xueyan Liu ◽  
Min Zhao ◽  
Bo Yang

The variational graph autoencoder, which can encode the structural and attribute information of a graph into low-dimensional representations, has become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoders assume that the prior of the latent variables is the standard normal distribution, which encourages all nodes to gather around 0 and prevents the latent space from being fully utilized. Choosing a suitable prior without incorporating additional expert knowledge therefore remains a challenge. Given this, we propose a novel noninformative prior-based interpretable variational graph autoencoder (NPIVGAE). Specifically, we adopt the noninformative prior as the prior distribution of the latent variables. This prior enables the posterior distribution parameters to be learned almost entirely from the sample data. Furthermore, we regard each dimension of a latent variable as the probability that the node belongs to the corresponding block, thereby improving the interpretability of the model. The correlation within and between blocks is described by a block–block correlation matrix. We compare our model with state-of-the-art methods on three real datasets, verifying its effectiveness and superiority.
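The interpretability claim — latent dimensions read as block-membership probabilities, aggregated into a block–block matrix — can be illustrated with a small sketch. The softmax mapping below is an assumption for illustration, not necessarily the paper's exact parameterization; both function names are hypothetical.

```python
import numpy as np

def block_memberships(z):
    """Interpret each latent dimension as the probability that a node
    belongs to the corresponding block, via a row-wise softmax
    (illustrative mapping; rows sum to 1)."""
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def block_correlation(memberships, similarity):
    """Aggregate a node-node similarity matrix into block space using
    the soft memberships, yielding a block-block correlation matrix."""
    return memberships.T @ similarity @ memberships
```

Reading off the largest entry in each membership row gives a direct (soft) block assignment per node, which is the sense in which each latent dimension becomes interpretable.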


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 743
Author(s):  
Xi Liu ◽  
Shuhang Chen ◽  
Xiang Shen ◽  
Xiang Zhang ◽  
Yiwen Wang

Neural signal decoding is a critical technology in brain-machine interfaces (BMIs) for interpreting movement intention from the multi-neuron activity recorded from paralyzed patients. As a commonly used decoding algorithm, the Kalman filter is often applied to derive the movement states from high-dimensional neural firing observations. However, its performance is limited for noisy nonlinear neural systems with high-dimensional measurements. In this paper, we propose a nonlinear maximum correntropy information filter, aiming at better state estimation in the filtering process for a noisy high-dimensional measurement system. We reconstruct the measurement model between the high-dimensional measurements and the low-dimensional states using a neural network, and derive the state estimation using the correntropy criterion to cope with non-Gaussian noise and eliminate large initial uncertainty. Moreover, analyses of convergence and robustness are given. The effectiveness of the proposed algorithm is evaluated by applying it to multiple segments of neural spiking data from two rats to interpret the movement states while the subjects perform a two-lever discrimination task. Our results demonstrate better and more robust state estimation performance compared with other filters.
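The robustness of correntropy-based filtering comes from weighting residuals with a Gaussian kernel, so outlier innovations are exponentially downweighted rather than squared. The sketch below illustrates only that weighting idea, not the paper's full information filter; the names, the simplified scalar update, and the kernel bandwidth `sigma` are assumptions.

```python
import numpy as np

def correntropy_weight(residual, sigma=1.0):
    """Gaussian-kernel correntropy weight: residuals much larger than
    `sigma` receive exponentially small weight, which is what makes
    correntropy-based filters robust to non-Gaussian outliers."""
    return np.exp(-residual ** 2 / (2.0 * sigma ** 2))

def robust_update(state, gain, residual, sigma=1.0):
    """One illustrative state-correction step: the usual gain * residual
    correction, scaled by the correntropy weight of the innovation."""
    w = correntropy_weight(residual, sigma)
    return state + w * gain * residual
```

Under Gaussian noise with small residuals the weight stays near 1 and the update reduces to a standard Kalman-style correction, so robustness is gained without sacrificing the nominal case.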


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Weiwei Gu ◽  
Aditya Tandon ◽  
Yong-Yeol Ahn ◽  
Filippo Radicchi

Network embedding is a general-purpose machine learning technique that encodes network structure in vector spaces with tunable dimension. Choosing an appropriate embedding dimension – small enough to be efficient and large enough to be effective – is challenging but necessary to generate embeddings applicable to a multitude of tasks. Existing strategies for selecting the embedding dimension rely on performance maximization in downstream tasks. Here, we propose a principled method for choosing the dimension such that all structural information of a network is parsimoniously encoded. The method is validated on various embedding algorithms and a large corpus of real-world networks. The embedding dimensions selected by our method on real-world networks suggest that efficient encoding in low-dimensional spaces is usually possible.


2019 ◽  
Vol 178 ◽  
pp. 11-24 ◽  
Author(s):  
Davood Zabihzadeh ◽  
Reza Monsefi ◽  
Hadi Sadoghi Yazdi

2021 ◽  
pp. 115646
Author(s):  
Jianfang Chang ◽  
Na Dong ◽  
Donghui Li ◽  
Minghui Qin

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Stefano Recanatesi ◽  
Matthew Farrell ◽  
Guillaume Lajoie ◽  
Sophie Deneve ◽  
Mattia Rigotti ◽  
...  

Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that learning to predict observations about the world is a means of generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that the network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
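One of the quantities mentioned above, linear decodability of latent variables from network activity, has a standard form: the R² of the best least-squares linear readout. A minimal sketch under that assumption (the function name is hypothetical, and the paper's exact estimator may differ):

```python
import numpy as np

def linear_decodability(H, z):
    """R^2 of the best least-squares linear readout of latent
    variables `z` (samples x latents) from network activity `H`
    (samples x units), pooled over latent dimensions."""
    H1 = np.column_stack([H, np.ones(len(H))])  # append a bias column
    coef, *_ = np.linalg.lstsq(H1, z, rcond=None)
    pred = H1 @ coef
    ss_res = np.sum((z - pred) ** 2)
    ss_tot = np.sum((z - z.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A representation can have high intrinsic (nonlinear) dimensionality yet still score high here; comparing the two measures is what distinguishes "low-dimensional latent structure" from merely low-dimensional activity.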

