Continual representation learning for node classification in power-law graphs

Author(s):  
Gianfranco Lombardo ◽  
Agostino Poggi ◽  
Michele Tomaiuolo
2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Shicong Chen ◽  
Deyu Yuan ◽  
Shuhua Huang ◽  
Yang Chen

The goal of network representation learning is to extract deep-level abstractions from data features, a process that can also be viewed as transforming high-dimensional data into low-dimensional features. Learning the mapping functions between the two vector spaces is an essential problem. In this paper, we propose a new similarity index based on traditional machine learning that integrates the concepts of common neighbors, local paths, and preferential attachment. Furthermore, to apply link prediction methods to node classification, we establish a novel architecture named the multitask graph autoencoder. Specifically, in the context of structural deep network embedding, the architecture builds a high-order loss function that computes node similarity from multiple angles, compensating for the deficiencies of the second-order loss function. Through parameter fine-tuning, the high-order loss function is incorporated into the optimized autoencoder. Experiments demonstrate that the framework is generally applicable to the majority of classical similarity indexes.
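A minimal sketch of a similarity index that mixes the three concepts named in the abstract (common neighbors, local paths, preferential attachment). The weighting scheme and the coefficients alpha, beta, and gamma are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: a combined node-similarity score mixing common neighbours
# (CN), a local-path term (LP, length-3 paths), and preferential attachment
# (PA). The weights alpha/beta/gamma are illustrative, not from the paper.
def combined_similarity(adj, u, v, alpha=1.0, beta=0.5, gamma=0.01):
    n_u, n_v = adj[u], adj[v]
    cn = len(n_u & n_v)                        # common-neighbour count
    pa = len(n_u) * len(n_v)                   # preferential attachment
    # local path: count of length-3 paths u -> w -> x -> v
    lp3 = sum(len(adj[w] & n_v) for w in n_u)
    return alpha * cn + beta * lp3 + gamma * pa

# toy undirected graph as adjacency sets
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
score = combined_similarity(adj, 0, 3)  # CN=2, LP3=2, PA=4 -> 3.04
```

Under a multitask setup, scores like this could serve as targets for a high-order reconstruction loss alongside the usual second-order (neighbourhood) loss.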


2020 ◽  
Vol 10 (20) ◽  
pp. 7214
Author(s):  
Cheng-Te Li ◽  
Hong-Yu Lin

Network representation learning (NRL) is crucial for generating effective node features for downstream tasks such as node classification (NC) and link prediction (LP). However, existing NRL methods neither properly identify which neighbor nodes should be pushed together or apart in the embedding space, nor model the coarse-grained community knowledge hidden behind the network topology. In this paper, we propose a novel NRL framework, Structural Hierarchy Enhancement (SHE), to address these two issues. The main idea is to construct a structural hierarchy from the network based on community detection, and to utilize this hierarchy to perform level-wise NRL. In addition, lower-level node embeddings are passed to higher-level ones so that community knowledge can be incorporated into NRL. Experiments conducted on benchmark network datasets show that SHE significantly boosts the performance of NRL in both NC and LP tasks, compared to other hierarchical NRL methods.
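One simple way to pass lower-level embeddings upward, as the abstract describes, is to initialize each community's higher-level vector from the mean of its member-node embeddings. This is a toy sketch of that idea; the community assignment, the 2-D vectors, and the averaging rule are assumptions for illustration, not SHE's actual pipeline.

```python
# Hedged sketch: lift node-level embeddings to community-level ones by
# averaging each community's member vectors, so coarse-grained knowledge
# is carried to the next level of the hierarchy. Toy data throughout.
def lift_embeddings(node_emb, communities):
    """Average each community's member embeddings into one higher-level vector."""
    lifted = {}
    dim = len(next(iter(node_emb.values())))
    for cid, members in communities.items():
        acc = [0.0] * dim
        for n in members:
            for i, x in enumerate(node_emb[n]):
                acc[i] += x
        lifted[cid] = [x / len(members) for x in acc]
    return lifted

node_emb = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
communities = {"c0": [0, 1], "c1": [2]}      # assumed detection result
lifted = lift_embeddings(node_emb, communities)
```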


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1767
Author(s):  
Xin Xu ◽  
Yang Lu ◽  
Yupeng Zhou ◽  
Zhiguo Fu ◽  
Yanjie Fu ◽  
...  

Network representation learning aims to learn low-dimensional, compressible, and distributed representational vectors of nodes in networks. Because label information for nodes is expensive to obtain, many unsupervised network representation learning methods have been proposed, among which random-walk strategies are widely used. However, existing random-walk-based methods face several challenges: (1) it is unclear what network knowledge the sampled walking paths capture; (2) the mixture of different kinds of information in networks causes adverse effects; and (3) methods with hyper-parameters generalize poorly across different networks. This paper proposes an information-explainable, random-walk-based unsupervised network representation learning framework named Probabilistic Accepted Walk (PAW), which obtains network representations from the perspective of the stationary distribution of networks. In this framework, we design two stationary distributions, based on nodes' self-information and the local information of networks, to guide the proposed random-walk strategy, which learns representational vectors by sampling paths of nodes. Extensive experimental results demonstrate that PAW obtains more expressive representations than six widely used unsupervised network representation learning baselines on four real-world networks, in both single-label and multi-label node classification tasks.
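A sketch of what an "accepted walk" guided by a target stationary distribution could look like. The Metropolis-style acceptance rule and the degree-based distribution pi below are illustrative stand-ins, not PAW's actual self-information or local-information formulations.

```python
import random

# Hedged sketch: a random walk whose hops are accepted with probability
# min(1, pi[candidate] / pi[current]), steering the walk toward a target
# stationary distribution pi. Here pi is degree-proportional as a toy
# stand-in for the paper's information-based distributions.
def accepted_walk(adj, start, length, pi, rng):
    path = [start]
    cur = start
    while len(path) < length:
        cand = rng.choice(sorted(adj[cur]))
        if rng.random() < min(1.0, pi[cand] / pi[cur]):
            cur = cand                      # hop accepted
        path.append(cur)                    # rejected hops repeat the node
    return path

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
deg_sum = sum(len(v) for v in adj.values())
pi = {n: len(v) / deg_sum for n, v in adj.items()}
walk = accepted_walk(adj, start=0, length=10, pi=pi, rng=random.Random(0))
```

The sampled paths would then feed a skip-gram-style objective, as in most walk-based NRL methods.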


1999 ◽  
Vol 173 ◽  
pp. 289-293 ◽  
Author(s):  
J.R. Donnison ◽  
L.I. Pettit

Abstract. A Pareto distribution was used to model the magnitude data for short-period comets up to 1988. It was found using exponential probability plots that the brightness did not vary with period and that the cut-off point previously adopted can be supported statistically. Examination of the diameters of Trans-Neptunian bodies showed that a power law does not adequately fit the limited data available.
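The standard maximum-likelihood estimator for a Pareto (power-law) tail, alpha_hat = n / sum(log(x_i / x_min)), is one common way to fit such data before judging goodness of fit. The sample below is synthetic, not the comet or Trans-Neptunian data from the paper.

```python
import math

# Hedged sketch: MLE for the Pareto shape parameter alpha, given a known
# lower cut-off x_min. Synthetic data only.
def pareto_mle_alpha(xs, x_min):
    return len(xs) / sum(math.log(x / x_min) for x in xs)

sample = [1.2, 1.5, 2.0, 3.5, 5.0]      # toy observations above x_min
alpha = pareto_mle_alpha(sample, x_min=1.0)
```

A poor fit would then show up in a probability plot or a formal test, which is how the abstract's conclusion about the limited diameter data would be reached.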


1968 ◽  
Vol 11 (1) ◽  
pp. 169-178 ◽  
Author(s):  
Alan Gill ◽  
Charles I. Berlin

The unconditioned GSRs elicited by tones of 60, 70, 80, and 90 dB SPL were largest in the mouse in the ranges around 10,000 Hz. The growth of response magnitude with intensity followed a power law (exponents of 0.17 to 0.22, depending upon frequency) and suggested that the unconditioned GSR magnitude assessed overall subjective magnitude of tones to the mouse in an orderly fashion. It is suggested that hearing sensitivity as assessed by these means may be closely related to the spectral content of the mouse’s vocalization as well as to the number of critically sensitive single units in the mouse’s VIIIth nerve.
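A power-law growth of response magnitude, M = k * I**b, is typically recovered as the slope of a least-squares line in log-log coordinates. The sketch below uses synthetic data with a known exponent of 0.2, inside the 0.17-0.22 range the abstract reports; it is not the mouse GSR data.

```python
import math

# Hedged sketch: fit the exponent b of M = k * I**b as the slope of
# log(M) against log(I). Synthetic data with b = 0.2 by construction.
def loglog_slope(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

intensities = [60, 70, 80, 90]                 # toy stimulus levels
responses = [i ** 0.2 for i in intensities]    # exact power law, b = 0.2
slope = loglog_slope(intensities, responses)
```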


2007 ◽  
Vol 23 (3) ◽  
pp. 157-165 ◽  
Author(s):  
Carmen Hagemeister

Abstract. When concentration tests are completed repeatedly, reaction time and error rate decrease considerably, but the underlying ability does not improve. In order to overcome this validity problem, this study tested whether the practice effect between tests and within tests can be used to determine whether persons have already completed the test. The power law of practice postulates that practice effects are greater in unpracticed than in practiced persons. Two experiments were carried out in which the participants completed the same tests at the beginning and at the end of two test sessions set about 3 days apart. In both experiments, logistic regression could indeed classify persons according to previous practice through the practice effect between the tests at the beginning and at the end of the session, and, less well but still significantly, through the practice effect within the first test of the session. Further analyses showed that the practice effects correlated more highly with initial performance than was to be expected for mathematical reasons; typically, persons with long reaction times have larger practice effects. Thus, small practice effects alone do not allow one to conclude that a person has worked on the test before.
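The power law of practice referenced here is usually written RT(n) = a * n**(-b): reaction time falls with the number of prior exposures n, so the improvement between consecutive exposures shrinks as practice accumulates. The parameters a and b below are illustrative, not fitted values from the study.

```python
# Hedged sketch of the power law of practice, RT(n) = a * n**(-b):
# the improvement from exposure n to n + 1 is larger for unpracticed
# persons (small n) than for practiced ones (large n). Toy parameters.
def practice_effect(a, b, n):
    """RT improvement from exposure n to n + 1 under RT = a * n**(-b)."""
    return a * n ** (-b) - a * (n + 1) ** (-b)

unpracticed = practice_effect(a=500.0, b=0.3, n=1)    # first vs second exposure
practiced = practice_effect(a=500.0, b=0.3, n=10)     # after ten prior exposures
```

This asymmetry is exactly what lets a classifier use the size of the practice effect as evidence of prior test exposure.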

