SepNE: Bringing Separability to Network Embedding

Author(s):  
Ziyao Li ◽  
Liang Zhang ◽  
Guojie Song

Many successful methods have been proposed for learning low-dimensional representations of large-scale networks, yet almost all of them are designed as inseparable processes that learn embeddings for the entire network even when only a small proportion of nodes is of interest. This causes great inconvenience, especially on super-large or dynamic networks, where such methods become almost impossible to apply. In this paper, we formalize the problem of separated matrix factorization, based on which we elaborate a novel objective function that preserves both local and global information. We further propose SepNE, a simple and flexible network embedding algorithm that independently learns representations for different subsets of nodes in separated processes. By introducing separability, our algorithm avoids the redundant effort of embedding irrelevant nodes, which yields scalability to super-large networks, straightforward distributed implementation and further adaptations. We demonstrate the effectiveness of this approach on several real-world networks of different scales and subjects. With comparable accuracy, our approach significantly outperforms state-of-the-art baselines in running time on large networks.
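A minimal sketch of the separability idea described in the abstract: factorise only the slice of a node-proximity matrix that involves the nodes of interest, rather than the whole matrix. The choice of second-order proximity and plain truncated SVD, as well as the function names, are assumptions made for illustration, not SepNE's actual objective.

```python
# Minimal sketch of separated matrix factorization: embed only a node
# subset by factorising its slice of a proximity matrix. The second-order
# proximity (A @ A) and plain truncated SVD are illustrative assumptions;
# SepNE's actual objective differs.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def embed_subset(adj: sp.csr_matrix, subset, dim=32):
    """Return dim-dimensional embeddings for the rows in `subset` only."""
    prox = adj @ adj                       # illustrative proximity matrix
    block = prox[subset, :].astype(float)  # rows of interest vs. all nodes
    u, s, _ = svds(block, k=dim)           # factorise only this block
    return u * np.sqrt(s)                  # scale singular vectors

# toy usage: embed 3 nodes of a 6-node ring without touching the rest
edges = [(i, (i + 1) % 6) for i in range(6)]
rows, cols = zip(*edges)
adj = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(6, 6))
adj = adj + adj.T
emb = embed_subset(adj, [0, 2, 4], dim=2)
print(emb.shape)  # (3, 2)
```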

2020 ◽  
Vol 34 (04) ◽  
pp. 4091-4098 ◽  
Author(s):  
Tao He ◽  
Lianli Gao ◽  
Jingkuan Song ◽  
Xin Wang ◽  
Kejie Huang ◽  
...  

Learning accurate low-dimensional embeddings for a network is a crucial task, as it facilitates many downstream network analytics tasks. Moreover, the trained embeddings often require a significant amount of space to store, making storage and processing a challenge, especially as large-scale networks become more prevalent. In this paper, we present a novel semi-supervised network embedding and compression method, SNEQ, that is competitive with state-of-the-art embedding methods while being far more space- and time-efficient. SNEQ incorporates a novel quantisation method based on a self-attention layer that is trained in an end-to-end fashion and is able to dramatically compress the size of the trained embeddings, thus reducing the storage footprint and accelerating retrieval. Our evaluation on four real-world networks of diverse characteristics shows that SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, node classification and node recommendation. Moreover, the quantised embeddings show a great advantage in storage and retrieval time compared with both continuous embeddings and hashing methods.
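SNEQ learns its quantisation end-to-end through a self-attention layer; as a rough stand-in, the sketch below uses plain k-means product quantisation (an assumption, not the paper's method, with illustrative helper names) purely to show how quantised codes shrink embedding storage and how approximate embeddings are reconstructed.

```python
# Product-quantisation-style compression of trained embeddings: each
# embedding is stored as a few small codebook indices instead of floats.
# Plain k-means codebooks are an assumed stand-in here; SNEQ learns its
# quantisation end-to-end through a self-attention layer.
import numpy as np
from sklearn.cluster import KMeans

def fit_codebooks(emb, n_sub=4, n_codes=256):
    """Split dims into n_sub groups and learn one codebook per group."""
    sub = np.split(emb, n_sub, axis=1)
    books = [KMeans(n_clusters=n_codes, n_init=4).fit(s) for s in sub]
    codes = np.stack([b.labels_ for b in books], axis=1).astype(np.uint8)
    return books, codes            # codes: n_nodes x n_sub bytes

def decode(books, codes):
    """Reconstruct approximate embeddings from the stored codes."""
    return np.hstack([b.cluster_centers_[codes[:, i]]
                      for i, b in enumerate(books)])

emb = np.random.randn(1000, 128).astype(np.float32)  # toy embeddings
books, codes = fit_codebooks(emb)
print(emb.nbytes, "->", codes.nbytes)  # per-node storage 512000 -> 4000 bytes (codebooks are shared)
approx = decode(books, codes)
```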


Author(s):  
Xiaobo Shen ◽  
Shirui Pan ◽  
Weiwei Liu ◽  
Yew-Soon Ong ◽  
Quan-Sen Sun

Network embedding aims to learn low-dimensional vector representations of network nodes that preserve the network structure. Network embeddings are typically continuous vectors, which imposes formidable storage and computation costs, particularly in large-scale applications. To address this issue, this paper proposes a novel discrete network embedding (DNE) for more compact representations. In particular, DNE learns short binary codes to represent each node. The Hamming similarity between two binary embeddings is then employed to approximate the ground-truth similarity. A novel discrete multi-class classifier is also developed to expedite classification. Moreover, we propose to jointly learn the discrete embedding and the classifier within a unified framework to improve the compactness and discrimination of the embedding. Extensive experiments on node classification consistently demonstrate that DNE exhibits lower storage and computational complexity than state-of-the-art network embedding methods, while obtaining competitive classification results.
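A minimal sketch of the Hamming similarity that DNE uses in place of a continuous dot product: binary codes are packed into bits and compared with XOR plus a popcount, which is what makes both storage and similarity search cheap. The random codes below only illustrate the mechanics; DNE learns the codes from the network.

```python
# Hamming similarity between binary node codes, the quantity DNE employs
# instead of a continuous dot product. Random codes are used here purely
# to show the mechanics; DNE learns the codes from the network.
import numpy as np

L = 64                                        # code length in bits
codes = np.random.randint(0, 2, size=(5, L), dtype=np.uint8)
packed = np.packbits(codes, axis=1)           # 8x smaller storage

def hamming_similarity(packed, i, j):
    """Similarity in [0, 1]: fraction of matching bits between codes i and j."""
    xor = np.bitwise_xor(packed[i], packed[j])
    dist = np.unpackbits(xor).sum()           # popcount over the bytes
    return 1.0 - dist / L

print(hamming_similarity(packed, 0, 1))
```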


2021 ◽  
Vol 15 (3) ◽  
pp. 1-28
Author(s):  
Xueyan Liu ◽  
Bo Yang ◽  
Hechang Chen ◽  
Katarzyna Musial ◽  
Hongxu Chen ◽  
...  

The stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, and it has become prevalent and important in network science over recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This severely limits the application of SBMs to large-scale networks, because of the significant computational overhead of existing SBM models and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of the SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with a Poisson distribution, together with a block-wise learning algorithm, that can efficiently analyse large-scale networks. Extensive validation on both artificial and real-world data shows that the proposed method significantly outperforms state-of-the-art methods in terms of the trade-off between accuracy and scalability.
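To make the "SBM with Poisson distribution" concrete, here is a minimal sketch of the Poisson-SBM log-likelihood for a fixed block assignment, with block rates set to their closed-form maximum-likelihood estimates. This is a generic Poisson SBM, not the paper's exact parameterisation, and the block-wise learning algorithm is not reproduced.

```python
# Log-likelihood of a Poisson SBM for a fixed block assignment: each entry
# A_ij is modelled as Poisson with a rate depending only on the blocks of
# i and j. Rates are set to their closed-form ML estimates; the paper's
# exact parameterisation and block-wise learning algorithm are not shown.
import numpy as np
from scipy.special import gammaln

def poisson_sbm_loglik(A, z, K):
    """A: n x n count matrix, z: block labels in {0..K-1}."""
    Z = np.eye(K)[z]                        # n x K one-hot memberships
    edges = Z.T @ A @ Z                     # total counts between block pairs
    pairs = np.outer(Z.sum(0), Z.sum(0))    # number of node pairs per block pair
    rates = edges / np.maximum(pairs, 1)    # ML estimate of block rates
    lam = Z @ rates @ Z.T                   # per-entry Poisson rate
    return np.sum(A * np.log(np.maximum(lam, 1e-12)) - lam - gammaln(A + 1))

A = np.random.poisson(0.2, size=(30, 30))   # toy count network
z = np.repeat([0, 1, 2], 10)                # toy block assignment
print(poisson_sbm_loglik(A, z, K=3))
```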


2014 ◽  
Vol 681 ◽  
pp. 47-50
Author(s):  
Yue Zhou ◽  
Shuai Liu ◽  
Li Xin Zhang

Structural health monitoring technology has become an increasingly important topic. In this paper, the design of a wireless sensor network for structural health monitoring applications is studied. The basic concept, significance and state of the art of structural health monitoring are reviewed, and the architecture and operating principle of the wireless structural health monitoring system are described. The hardware and software of the overall system are designed and built. The WLANonSAN network architecture is proposed in particular as a solution for large-scale networks.


Author(s):  
Jie Zhang ◽  
Yuxiao Dong ◽  
Yan Wang ◽  
Jie Tang ◽  
Ming Ding

Recent advances in network embedding have revolutionized the field of graph and network mining. However, (pre-)training embeddings for very large-scale networks is computationally challenging for most existing methods. In this work, we present ProNE, a fast, scalable, and effective model whose single-thread version is 10--400x faster than efficient network embedding benchmarks running with 20 threads, including LINE, DeepWalk, node2vec, GraRep, and HOPE. As a concrete example, single-thread ProNE requires only 29 hours to embed a network of hundreds of millions of nodes, whereas LINE takes weeks and DeepWalk months using 20 threads. To achieve this, ProNE first initializes network embeddings efficiently by formulating the task as sparse matrix factorization. The second step of ProNE enhances the embeddings by propagating them in the spectrally modulated space. Extensive experiments on networks of various scales and types demonstrate that ProNE achieves both effectiveness and significant efficiency superiority over the aforementioned baselines. In addition, ProNE's embedding enhancement step can also be generalized to improve other models efficiently, e.g., offering >10% relative gains for the above baselines.
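A minimal sketch of the two-step idea the abstract describes: initialise embeddings with a truncated SVD of a sparse matrix, then enhance them by propagating over the graph. The plain normalised-adjacency smoothing below is an assumed stand-in for ProNE's Chebyshev-filtered spectral modulation, and the function name is illustrative.

```python
# Two-step sketch of the idea behind ProNE: (1) initialise embeddings by
# sparse matrix factorisation, (2) enhance them by propagating over the
# graph. The plain normalised-adjacency smoothing is an assumed stand-in
# for ProNE's Chebyshev-filtered spectral modulation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def prone_like(adj: sp.csr_matrix, dim=32, steps=3, alpha=0.5):
    # Step 1: truncated SVD of the sparse adjacency as the initial embedding.
    u, s, _ = svds(adj.astype(float), k=dim)
    emb = u * np.sqrt(s)
    # Step 2: smooth the embedding with a symmetrically normalised adjacency.
    deg = np.asarray(adj.sum(1)).ravel()
    d_inv = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1)))
    P = d_inv @ adj @ d_inv
    for _ in range(steps):
        emb = (1 - alpha) * emb + alpha * (P @ emb)
    return emb

# toy usage on a random sparse graph
adj = sp.random(200, 200, density=0.05, format="csr")
adj = ((adj + adj.T) > 0).astype(float)
print(prone_like(adj, dim=16).shape)   # (200, 16)
```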


Author(s):  
Mohammadreza Armandpour ◽  
Patrick Ding ◽  
Jianhua Huang ◽  
Xia Hu

Many recent network embedding algorithms use negative sampling (NS) to approximate a variant of the computationally expensive Skip-Gram neural network architecture (SGA) objective. In this paper, we provide theoretical arguments that reveal how NS can fail to properly estimate the SGA objective, and why it is not a suitable candidate for the network embedding problem as a distinct objective. We show that NS can learn undesirable embeddings as a result of the "Popular Neighbor Problem." We use this theory to develop a new method, R-NS, that alleviates the problems of NS through a more intelligent negative sampling scheme and careful penalization of the embeddings. R-NS is scalable to large-scale networks, and we empirically demonstrate its superiority over NS for multi-label classification on a variety of real-world networks, including social and language networks.
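For reference, this is the standard skip-gram negative-sampling loss the abstract is discussing, written out for one (node, context) pair with negatives drawn from the usual degree^0.75 distribution where popular nodes dominate. It shows the term NS optimises, not R-NS's corrective scheme; all names below are illustrative.

```python
# Skip-gram negative-sampling (NS) loss for one (node, context) pair:
# pull the observed pair together, push K sampled "negative" nodes away.
# Negatives come from a degree^0.75 distribution, which is where popular
# nodes dominate; R-NS's corrective scheme is not reproduced here.
import numpy as np

def ns_loss(u, v, neg_vs):
    """u, v: embedding vectors of a linked pair; neg_vs: K negative vectors."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos = np.log(sigmoid(u @ v))                 # observed pair term
    neg = np.sum(np.log(sigmoid(-neg_vs @ u)))   # sampled negatives term
    return -(pos + neg)                          # minimise this

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))                 # toy node embeddings
deg = rng.integers(1, 50, size=100)              # toy node degrees
p = deg ** 0.75 / (deg ** 0.75).sum()            # unigram^0.75 negative sampler
negs = emb[rng.choice(100, size=5, p=p)]
print(ns_loss(emb[0], emb[1], negs))
```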


Author(s):  
Takuya Maruyama ◽  
Noboru Harata

The network equilibrium model is a useful tool for long-term transportation planning and a promising alternative to the traditional four-step travel forecasting model. However, some issues with the model remain to be addressed. For example, almost all variations of the model adhere to the traditional trip-based approach, in which the trips of a chain made by a user are treated as separate, independent entities in the analysis. This research aims to develop simple, tractable models to overcome this problem. One proposed model is based on piston-type trip chaining; another accommodates any other type of trip chaining and includes congestion phenomena. The proposed models have several key features: they are formulated as convex minimization problems, so uniqueness and algorithm convergence are easily proved; traveler behavior is based on theoretically sound random utility models, which allows the benefits of transportation projects to be calculated consistently with travel demand forecasting; and optimal road pricing can be computed even in large-scale networks. The models are examined on simple example networks, with special attention paid to the effect of trip-chaining behavior on the level of the second-best toll. In a simple two-destination network, the second-best toll of the trip-based model is lower than that of the trip chain–based model, indicating one of the biases of the trip-based model.


2021 ◽  
Vol 11 (5) ◽  
pp. 2371
Author(s):  
Junjian Zhan ◽  
Feng Li ◽  
Yang Wang ◽  
Daoyu Lin ◽  
Guangluan Xu

As most networks come with some content for each node, attributed network embedding has attracted much research interest. Most existing attributed network embedding methods aim to learn a fixed representation for each node that encodes its local proximity. However, these methods usually neglect the global information between distant nodes and the distribution of the latent codes. We propose the Structural Adversarial Variational Graph Auto-Encoder (SAVGAE), a novel framework that encodes the network structure and node content into low-dimensional embeddings. On the one hand, our model captures local proximity as well as proximities at any distance by exploiting a high-order proximity indicator named Rooted PageRank. On the other hand, our method learns the data distribution of each node representation while circumventing, through adversarial training, the side effects that its sampling process would otherwise have on learning a robust embedding. On benchmark datasets, we demonstrate that our method performs competitively compared with state-of-the-art models.
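A minimal dense computation of the Rooted PageRank proximity the abstract uses as its high-order indicator: a random walk from a root node with restart probability (1 - beta), in closed form. SAVGAE's encoder and adversarial training are not shown; the helper name is illustrative.

```python
# Rooted PageRank proximity, the high-order indicator SAVGAE exploits:
# entry (i, j) is the stationary probability that a walk rooted at i,
# restarting with probability (1 - beta), is at node j. Dense closed form;
# the encoder and adversarial training of SAVGAE are not reproduced.
import numpy as np

def rooted_pagerank(adj, beta=0.85):
    """adj: dense n x n adjacency. Returns the n x n RPR proximity matrix."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    P = adj / deg                                     # row-stochastic transition
    n = adj.shape[0]
    return (1 - beta) * np.linalg.inv(np.eye(n) - beta * P)

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rpr = rooted_pagerank(adj)
print(rpr[0])          # proximity of node 0 to every node; each row sums to 1
```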


2021 ◽  
Vol 15 (3) ◽  
pp. 1-18
Author(s):  
Sezin Kircali Ata ◽  
Yuan Fang ◽  
Min Wu ◽  
Jiaqi Shi ◽  
Chee Keong Kwoh ◽  
...  

Real-world networks often exist with multiple views, where each view describes one type of interaction among a common set of nodes. For example, on a video-sharing network, two user nodes may be linked in one view if they have common favorite videos, and linked in another view if they share common subscribers. Unlike traditional single-view networks, the multiple views carry different semantics that complement each other. In this article, we propose Multi-view collAborative Network Embedding (MANE), a multi-view network embedding approach that learns low-dimensional representations. Similar to existing studies, MANE hinges on diversity and collaboration: while diversity enables views to maintain their individual semantics, collaboration enables views to work together. However, we also discover a novel form of second-order collaboration that has not been explored previously, and we unify it into our framework to attain superior node representations. Furthermore, as each view often has varying importance w.r.t. different nodes, we propose an attention-based extension of MANE to model node-wise view importance. Finally, we conduct comprehensive experiments on three public real-world multi-view networks, and the results demonstrate that our models consistently outperform state-of-the-art approaches.
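To illustrate the node-wise view importance mentioned at the end of the abstract, here is a generic attention fusion over per-view embeddings: a learnable attention vector scores each view for each node, and the node's final representation is the softmax-weighted sum. This is an assumed, generic formulation for illustration only, not the paper's exact attention mechanism.

```python
# Node-wise view importance via attention: given one embedding per view
# for each node, an attention vector scores the views and the node's final
# representation is their softmax-weighted sum. Generic illustration only;
# not the paper's exact attention formulation.
import numpy as np

def attend_views(view_embs, att):
    """view_embs: (n_views, n_nodes, dim); att: (dim,) attention vector."""
    scores = np.einsum("vnd,d->vn", np.tanh(view_embs), att)  # per-view score
    w = np.exp(scores) / np.exp(scores).sum(axis=0)           # softmax over views
    return np.einsum("vn,vnd->nd", w, view_embs)              # weighted fusion

view_embs = np.random.randn(3, 100, 16)     # 3 views, 100 nodes, 16 dims
att = np.random.randn(16)
fused = attend_views(view_embs, att)
print(fused.shape)                          # (100, 16)
```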


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Yunfang Chen ◽  
Li Wang ◽  
Dehao Qi ◽  
Tinghuai Ma ◽  
Wei Zhang

The large scale and complex structure of real networks pose enormous challenges to traditional community detection methods. In order to detect community structure in large-scale networks more accurately and efficiently, we propose a community detection algorithm based on a network embedding representation. First, to address the sparsity of network data, we use the DeepWalk model to embed the high-dimensional network into a low-dimensional space that preserves topological information. The low-dimensional data are then processed, with each node treated as a sample and each embedding dimension as a feature. Finally, the samples are fed into a Gaussian mixture model (GMM); in order to learn the number of communities automatically, variational inference is introduced into the GMM. Experimental results on the DBLP dataset show that the proposed method discovers communities in large-scale networks more effectively. Further analysis of the detected community structure better reveals the organizational characteristics within each community.
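A minimal version of the pipeline described above: embed the network, then cluster node embeddings with a variational Gaussian mixture so the effective number of communities is inferred rather than fixed. Truncated SVD is used here as a stand-in for DeepWalk, and scikit-learn's BayesianGaussianMixture as the variational GMM; these substitutions and the helper names are assumptions, not the paper's exact setup.

```python
# Pipeline sketch: embed the network, then cluster node embeddings with a
# variational Gaussian mixture so the number of communities is inferred
# rather than fixed. Truncated SVD stands in for DeepWalk here; the
# paper's exact settings are not reproduced.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
from sklearn.mixture import BayesianGaussianMixture

def detect_communities(adj: sp.csr_matrix, dim=16, max_k=10):
    u, s, _ = svds(adj.astype(float), k=dim)     # embedding stand-in
    emb = u * np.sqrt(s)
    gmm = BayesianGaussianMixture(
        n_components=max_k,                      # upper bound on communities
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500,
    ).fit(emb)
    return gmm.predict(emb)                      # community label per node

# toy usage: two dense blocks connected only within themselves
blocks = sp.block_diag([sp.random(50, 50, density=0.3)] * 2, format="csr")
labels = detect_communities((blocks + blocks.T) > 0)
print(np.bincount(labels))
```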

