MobileGCN applied to low-dimensional node feature learning

2021 ◽ Vol 112 ◽ pp. 107788
Author(s):  
Wei Dong ◽  
Junsheng Wu ◽  
Zongwen Bai ◽  
Yaoqi Hu ◽  
Weigang Li ◽  
...  
2021 ◽ Vol 2021 ◽ pp. 1-12
Author(s):  
Shicheng Li ◽  
Qinghua Liu ◽  
Jiangyan Dai ◽  
Wenle Wang ◽  
Xiaolin Gui ◽  
...  

Feature representation learning is a key issue in artificial intelligence research. Multiview multimedia data provide rich information, which has made feature representation one of the current research hotspots in data analysis. Recently, a large number of multiview feature representation methods have been proposed, among which matrix factorization shows excellent performance. We therefore propose an adaptive-weighted multiview deep basis matrix factorization (AMDBMF) method that integrates matrix factorization, deep learning, and view fusion. Specifically, we first perform deep basis matrix factorization on the data of each view. Then, all views are integrated to complete the multiview feature learning procedure. Finally, we propose an adaptive weighting strategy to fuse the low-dimensional features of each view, so that a unified feature representation can be obtained for multiview multimedia data. We also design an iterative update algorithm to optimize the objective function and justify its convergence through numerical experiments. We conducted clustering experiments on five multiview multimedia datasets and compared the proposed method with several strong existing methods. The experimental results demonstrate that the proposed method achieves better clustering performance than the comparison methods.
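As a rough illustration of this pipeline, the sketch below performs a plain two-layer non-negative basis factorization per view and fuses the resulting low-dimensional features with weights inversely proportional to each view's reconstruction error. The function names, layer sizes, and inverse-error weighting are illustrative stand-ins for the paper's learned adaptive weights, not the actual AMDBMF algorithm.

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """One factorization layer via multiplicative updates: X ~ W @ H."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_nmf(X, layer_sizes):
    """Stacked basis factorization: X ~ W1 @ W2 @ ... @ H."""
    bases, H = [], X
    for k in layer_sizes:
        W, H = nmf(H, k)
        bases.append(W)
    return bases, H  # H is the deepest low-dimensional representation

def fuse_views(views, layer_sizes=(64, 16)):
    """Factorize each view, then fuse features with adaptive weights;
    inverse reconstruction error is a simple stand-in for the paper's
    learned weighting."""
    feats, errs = [], []
    for X in views:
        bases, H = deep_nmf(X, layer_sizes)
        recon = bases[0]
        for W in bases[1:]:
            recon = recon @ W
        errs.append(np.linalg.norm(X - recon @ H))
        feats.append(H)
    w = 1.0 / np.asarray(errs)
    w /= w.sum()
    return sum(wi * Hi for wi, Hi in zip(w, feats))

# Hypothetical usage: three views of 200 samples with different feature dims.
views = [np.random.rand(d, 200) for d in (300, 150, 80)]
Z = fuse_views(views)  # (16, 200) unified low-dimensional representation
```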


Author(s):  
Grigorios Tsagkatakis ◽  
Panagiotis Tsakalides

State-of-the-art remote sensing scene classification methods employ different Convolutional Neural Network architectures to achieve very high classification performance. A trait shared by the majority of these methods is that the class of each example is determined by examining the activations of the last fully connected layer, and the networks are trained to minimize the cross-entropy between predictions extracted from this layer and ground-truth annotations. In this work, we extend this paradigm by introducing an additional output branch that maps the inputs to low-dimensional representations, effectively extracting additional feature representations of the inputs. The proposed model imposes distance constraints on these representations with respect to identified class representatives, in addition to the traditional categorical cross-entropy between predictions and ground truth. By extending the typical cross-entropy loss with a distance-learning term, our approach achieves significant classification gains across a wide set of benchmark datasets, while providing additional evidence related to class membership and classification confidence.
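A minimal sketch of such a dual-branch setup in PyTorch is shown below, using a center-loss-style distance term toward learnable class representatives. The `DualHeadNet` name, the toy MLP backbone, and the weighting factor `lam` are illustrative; the paper's exact distance function and architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadNet(nn.Module):
    """Classifier with an extra embedding branch. The backbone here is a
    toy MLP; the paper uses full CNN architectures."""
    def __init__(self, in_dim=512, embed_dim=32, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, n_classes)  # logits branch
        self.emb_head = nn.Linear(256, embed_dim)  # low-dim representation branch

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.emb_head(h)

def combined_loss(logits, emb, labels, centers, lam=0.1):
    """Categorical cross-entropy plus a distance term pulling each
    embedding toward its class representative."""
    ce = F.cross_entropy(logits, labels)
    dist = ((emb - centers[labels]) ** 2).sum(dim=1).mean()
    return ce + lam * dist

# Hypothetical training step: class representatives learned jointly.
net = DualHeadNet()
centers = nn.Parameter(torch.zeros(10, 32))
opt = torch.optim.Adam(list(net.parameters()) + [centers], lr=1e-3)
x, y = torch.randn(64, 512), torch.randint(0, 10, (64,))
logits, emb = net(x)
opt.zero_grad()
combined_loss(logits, emb, y, centers).backward()
opt.step()
```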


Author(s):  
Jinglin Xu ◽  
Junwei Han ◽  
Feiping Nie

Multi-view data, which capture rich information from heterogeneous features, are increasingly used in real-world applications. How to integrate different types of features, and how to learn low-dimensional and discriminative information from high-dimensional data, are two main challenges. To address them, this paper proposes a novel multi-view feature learning framework that is regularized by discriminative information: it learns a model containing a discriminative feature weighting matrix for each view and then yields multiple low-dimensional features for subsequent multi-view clustering. To optimize the formulated objective function, we transform the framework into a trace optimization problem whose global solution is obtained in closed form. Experimental evaluations on four widely used datasets, and comparisons with a number of state-of-the-art multi-view clustering algorithms, demonstrate the superiority of the proposed work.
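For orientation, orthogonally constrained trace objectives of the form min_W tr(WᵀAW) s.t. WᵀW = I have a standard closed-form solution via eigendecomposition; the sketch below shows that generic recipe. The paper's actual objective couples multiple views with discriminative regularization, which this toy example does not reproduce.

```python
import numpy as np

def trace_min(A, k):
    """Closed-form solution of  min_W tr(W.T @ A @ W)  s.t.  W.T @ W = I:
    the k eigenvectors of symmetric A with the smallest eigenvalues
    (for maximization, take the largest instead)."""
    vals, vecs = np.linalg.eigh(A)  # eigenvalues in ascending order
    return vecs[:, :k]

# Hypothetical usage: A could be a scatter or Laplacian-style matrix
# built from one view's high-dimensional features.
X = np.random.rand(100, 50)      # 100 samples, 50 features
A = np.cov(X, rowvar=False)      # 50 x 50 symmetric matrix
W = trace_min(A, k=5)            # 50 x 5 projection in closed form
Z = X @ W                        # low-dimensional features
```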


Author(s):  
Jiajie Peng ◽  
Hansheng Xue ◽  
Zhongyu Wei ◽  
Idil Tuncali ◽  
Jianye Hao ◽  
...  

Abstract
Motivation: The emergence of abundant biological networks, which benefit from the development of advanced high-throughput techniques, contributes to describing and modeling complex internal interactions among biological entities such as genes and proteins. Multiple networks provide rich information for inferring the function of genes or proteins. To extract functional patterns of genes from multiple heterogeneous networks, network embedding-based methods, which aim to capture non-linear, low-dimensional feature representations grounded in network biology, have recently achieved remarkable performance in gene function prediction. However, existing methods do not consider the information shared among different networks during feature learning.
Results: Taking the correlation among the networks into account, we design a novel semi-supervised autoencoder to integrate multiple networks and generate a low-dimensional feature representation. We then use a convolutional neural network on the integrated feature embedding to annotate unlabeled gene functions. We test our method on both yeast and human datasets and compare it with three state-of-the-art methods. The results demonstrate the superior performance of our method. We not only provide a comprehensive analysis of the performance of the newly proposed algorithm but also provide a tool for extracting gene features from multiple networks, which can be used in downstream machine learning tasks.
Availability: DeepMNE-CNN is freely available at https://github.com/xuehansheng/DeepMNE-CNN
Contact: [email protected]; [email protected]; [email protected]
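The sketch below illustrates the semi-supervised autoencoder idea for a single network: reconstruct adjacency rows while penalizing embedding distance between node pairs with known shared labels. The class name, layer sizes, and `must_link` pair format are hypothetical; DeepMNE-CNN's actual architecture and constraint encoding may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedAE(nn.Module):
    """Autoencoder over rows of a network's adjacency matrix whose code
    is additionally constrained by known relations (a sketch of the
    semi-supervised idea, not the exact DeepMNE-CNN architecture)."""
    def __init__(self, n_nodes, code_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_nodes, 512), nn.ReLU(),
                                 nn.Linear(512, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_nodes))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def semi_supervised_loss(recon, x, z, must_link, lam=0.5):
    """Reconstruction error plus a penalty keeping embeddings of node
    pairs with known shared function close together."""
    rec = F.mse_loss(recon, x)
    i, j = must_link  # index tensors of functionally related pairs
    link = ((z[i] - z[j]) ** 2).sum(dim=1).mean()
    return rec + lam * link

# Hypothetical usage on one 1000-node network.
adj = torch.rand(1000, 1000)
ae = SemiSupervisedAE(n_nodes=1000)
pairs = (torch.tensor([0, 5, 9]), torch.tensor([3, 7, 2]))
recon, z = ae(adj)
loss = semi_supervised_loss(recon, adj, z, pairs)
```

In the full method, one such encoder per network would produce codes that are then integrated across networks before the CNN-based annotation step.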


Author(s):  
Aishwarya H. Balwani ◽  
Eva L. Dyer

Abstract
Models of neural architecture and organization are critical for the study of disease, aging, and development. Unfortunately, automating the process of building maps of microarchitectural differences both within and across brains remains a challenge. In this paper, we present a way to build data-driven representations of brain structure using deep learning. With this model we can build meaningful representations of brain structure within an area, learn how different areas are related to one another anatomically, and use the model to discover new regions of interest within a sample that share similar anatomical composition. We start by training a deep convolutional neural network to predict which brain area a sample lies in, using only small snapshots of its immediate surroundings. By requiring that the network discriminate brain areas from these local views, it learns a rich representation of the underlying anatomical features that distinguish different brain areas. Once the network is trained, we open up the black box, extract features from its last hidden layer, and factorize them. From this low-dimensional factorization of the network's representations, we find that the learned factors and their embeddings can be used to further resolve biologically meaningful subdivisions within brain regions (e.g., laminar divisions and barrels in somatosensory cortex). These findings speak to the potential of neural networks to learn meaningful features for modeling neural architecture and to discover new patterns in brain anatomy directly from images.
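A minimal sketch of the extract-and-factorize step might look like the following, with a tiny MLP standing in for the trained CNN and scikit-learn's NMF as one possible factorization; all sizes and names are illustrative.

```python
import torch
import torch.nn as nn
from sklearn.decomposition import NMF

# Toy stand-in for the trained area classifier: the paper trains a deep
# CNN on image patches; this tiny MLP only illustrates the pipeline.
net = nn.Sequential(nn.Flatten(),
                    nn.Linear(32 * 32, 64), nn.ReLU(),  # last hidden layer
                    nn.Linear(64, 8))                   # 8 = brain areas

@torch.no_grad()
def last_hidden(x):
    """Activations of everything but the final classification layer."""
    return net[:-1](x)

patches = torch.rand(500, 1, 32, 32)  # hypothetical local snapshots
H = last_hidden(patches).numpy()      # nonnegative after the ReLU

# Low-rank factorization of the representations: each patch becomes a
# mixture over a small set of anatomical "factors".
model = NMF(n_components=5, init="nndsvda", max_iter=500)
W = model.fit_transform(H)            # (500, 5) factor loadings
```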


2019
Author(s):  
Hansheng Xue ◽  
Jiajie Peng ◽  
Xuequn Shang

Abstract
Motivation: The emergence of abundant biological networks, which benefit from the development of advanced high-throughput techniques, contributes to describing and modeling complex internal interactions among biological entities such as genes and proteins. Multiple networks provide rich information for inferring the function of genes or proteins. To extract functional patterns of genes from multiple heterogeneous networks, network embedding-based methods, which aim to capture non-linear, low-dimensional feature representations grounded in network biology, have recently achieved remarkable performance in gene function prediction. However, existing methods largely do not consider the information shared among different networks during feature learning. We therefore propose DeepMNE-CNN, a novel multi-network embedding-based function prediction method built on a semi-supervised autoencoder and a feature convolutional neural network, which captures the complex topological structures of multiple networks and takes the correlation among them into account.
Results: We design a novel semi-supervised autoencoder to integrate multiple networks and generate a low-dimensional feature representation. We then use a convolutional neural network on the integrated feature embedding to annotate unlabeled gene functions. We test our method on both yeast and human datasets and compare it with four state-of-the-art methods. The results demonstrate the superior performance of our method over all four algorithms. Further exploration shows that both the semi-supervised autoencoder-based multi-network integration and the CNN-based feature learning contribute to the quality of function prediction.
Availability: DeepMNE-CNN is freely available at https://github.com/xuehansheng/DeepMNE-CNN
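Complementing the autoencoder sketch above, the following is a minimal sketch of the CNN-annotation step: a small 1-D CNN mapping an integrated gene embedding to multi-label function scores. The `FunctionCNN` name, kernel sizes, and dimensions are illustrative, not DeepMNE-CNN's published configuration.

```python
import torch
import torch.nn as nn

class FunctionCNN(nn.Module):
    """1-D CNN that maps an integrated gene embedding to multi-label
    function scores (a sketch of the annotation step; sizes are
    illustrative)."""
    def __init__(self, embed_dim=128, n_terms=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.fc = nn.Linear(32 * (embed_dim // 4), n_terms)

    def forward(self, z):                # z: (batch, embed_dim)
        h = self.conv(z.unsqueeze(1))    # add a channel dimension
        return self.fc(h.flatten(1))     # logits; train with BCEWithLogitsLoss

# Hypothetical usage on a batch of integrated embeddings.
z = torch.rand(16, 128)
scores = FunctionCNN()(z)  # (16, 50) multi-label function logits
```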

