Link Prediction Based on the Derivation of Mapping Entropy

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Hefei Hu ◽  
Yanan Wang ◽  
Zheng Li ◽  
Yang Tian ◽  
Yuemei Ren

Algorithms based on topological similarity play an important role in link prediction. However, most traditional algorithms based on node influence consider only the degrees of the endpoints, ignoring the differing contributions of neighbors. Through extensive exploration, we propose the DME (derivation of mapping entropy) model, which uses the mapping relationship between a node and its neighbors to assess the node's influence appropriately. Experiments on nine real networks suggest that the model improves precision in link prediction and clearly outperforms traditional algorithms, with no increase in time complexity.
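The abstract does not give the DME formula, but one plausible reading of an entropy-based influence measure, and of how it could feed a similarity index, can be sketched as follows (the probability definition, the combination rule, and the toy network are all assumptions, not the paper's equations):

```python
import math

def mapping_entropy(adj):
    """Influence of each node from the degree-based probabilities of its
    neighbors (one plausible reading of 'mapping entropy'; the paper's
    exact DME definition is not given in the abstract)."""
    k = {v: len(ns) for v, ns in adj.items()}
    total = sum(k.values())
    p = {v: k[v] / total for v in adj}  # degree-based occurrence probability
    return {v: -p[v] * sum(math.log(p[u]) for u in adj[v]) for v in adj}

def similarity(adj, x, y):
    """Score an unconnected pair by the summed influence of their common
    neighbors (a hypothetical combination rule for illustration)."""
    me = mapping_entropy(adj)
    common = set(adj[x]) & set(adj[y])
    return sum(me[z] for z in common)

# Toy 5-node network; adjacency lists are symmetric.
adj = {
    'a': ['b', 'c'],
    'b': ['a', 'c', 'd'],
    'c': ['a', 'b', 'e'],
    'd': ['b'],
    'e': ['c'],
}
print(similarity(adj, 'a', 'd'))  # common neighbor 'b' contributes influence
```

Pairs with no common neighbors (such as d and e here) score zero under this particular combination rule, which is one reason published indices also incorporate longer paths.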

2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Lin Ding ◽  
Chenhui Jin ◽  
Jie Guan ◽  
Qiuyan Wang

Loiss is a novel byte-oriented stream cipher proposed in 2011. In this paper, based on solving systems of linear equations, we propose an improved Guess and Determine attack on Loiss with a time complexity of 2^231 and a data complexity of 2^68, which reduces the time complexity of the Guess and Determine attack proposed by the designers by a factor of 2^16. Furthermore, a related-key chosen-IV attack on a scaled-down version of Loiss is presented. The attack recovers the 128-bit secret key of the scaled-down Loiss with a time complexity of 2^80, requiring 2^64 chosen IVs. The related-key attack is minimal in the sense that it requires only one related key. The result shows that our key recovery attack on the scaled-down Loiss is much better than an exhaustive key search in the related-key setting.
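The guess-and-determine strategy itself is easy to illustrate on a toy linear system (this is only the generic technique, not the Loiss attack): guess a few unknowns, let the linear relations force the rest, and keep only guesses consistent with every observation.

```python
# Toy guess-and-determine over GF(2), with three known output bits and
# the relations:  x0^x1 = b0,  x1^x2 = b1,  x0^x2 = b2.

def guess_and_determine(b0, b1, b2):
    solutions = []
    for x0 in (0, 1):        # guess phase: 2^1 candidate values
        x1 = x0 ^ b0         # determine phase: forced by the equations
        x2 = x1 ^ b1
        if x0 ^ x2 == b2:    # consistency check filters wrong guesses
            solutions.append((x0, x1, x2))
    return solutions

print(guess_and_determine(1, 0, 1))  # [(0, 1, 1), (1, 0, 0)]
```

The cost is (number of guesses) x (cost of the determine phase), which is exactly why reducing the guessed material by 16 bits reduces the overall attack complexity by a factor of 2^16.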


2019 ◽  
Vol 18 (01) ◽  
pp. 311-338 ◽  
Author(s):  
Lingling Zhang ◽  
Jing Li ◽  
Qiuliu Zhang ◽  
Fan Meng ◽  
Weili Teng

In this paper, we propose a domain knowledge-based link prediction algorithm for customer-product bipartite networks to improve the effectiveness of product recommendation in retail. The domain knowledge is classified into product domain knowledge and time context knowledge, both of which play an important part in link prediction. We take both into consideration and form a unified domain knowledge-based link prediction framework. We capture product semantic similarity through ontology-based analysis and a time attenuation factor from time context knowledge, then incorporate them into network topological similarity to form a new linkage measure. To evaluate the algorithm, we use a real retail transaction dataset from Food Mart. Experimental results demonstrate that the use of domain knowledge in link prediction achieves significantly better performance.
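The abstract names the three ingredients of the linkage measure but not how they are combined; a minimal sketch, assuming a multiplicative combination and an exponential attenuation with rate lambda (both assumptions), could look like:

```python
import math

def linkage_score(topo_sim, sem_sim, delta_days, lam=0.01):
    """Fold domain knowledge into a topological link-prediction score:
    ontology-derived semantic similarity and a time-attenuation factor
    weight the network similarity. The multiplicative form and the
    decay rate `lam` are illustrative assumptions, not the paper's."""
    time_factor = math.exp(-lam * delta_days)  # recent purchases count more
    return topo_sim * sem_sim * time_factor

# Same customer-product pair, bought 30 vs. 300 days ago.
recent = linkage_score(0.4, 0.8, 30)
stale = linkage_score(0.4, 0.8, 300)
print(recent, stale)  # the older co-purchase is strongly attenuated
```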


2012 ◽  
Vol 11 (04) ◽  
pp. 1250021 ◽  
Author(s):  
HE WEN ◽  
LASZLO B. KISH

Although noise-based logic shows potential advantages of reduced power dissipation and the ability to perform large parallel operations with low hardware and time complexity, the question still persists: is randomness really needed beyond orthogonality? In this Letter, after some general thermodynamical considerations, we show relevant examples in which we compare the computational complexity of logic systems based on orthogonal noise and on sinusoidal signals, respectively. The conclusion is that in certain special-purpose applications, noise-based logic is exponentially better than its sinusoidal version: its computational complexity can be exponentially smaller for the same task.
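The orthogonality the Letter contrasts can be checked numerically in a few lines (a generic illustration, not the authors' logic scheme): independent noise references are only statistically orthogonal, while sinusoids at distinct integer frequencies are exactly orthogonal over a full period.

```python
import math
import random

random.seed(1)
N = 100_000

def inner(u, v):
    """Normalized inner product (time average of the product)."""
    return sum(a * b for a, b in zip(u, v)) / len(u)

# Two independent zero-mean noise references: nearly orthogonal,
# with the residual correlation shrinking like 1/sqrt(N).
n1 = [random.gauss(0, 1) for _ in range(N)]
n2 = [random.gauss(0, 1) for _ in range(N)]

# Two sinusoids at distinct integer frequencies over a full period:
# exactly orthogonal up to floating-point error.
t = [2 * math.pi * i / N for i in range(N)]
s1 = [math.sin(3 * x) for x in t]
s2 = [math.sin(5 * x) for x in t]

print(abs(inner(n1, n2)))  # small but nonzero
print(abs(inner(s1, s2)))  # essentially zero
```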


Author(s):  
Johan Jansson ◽  
Imre Horváth ◽  
Joris S. M. Vergeest

Abstract Previously, we described the theory of a general mechanics model for non-rigid solids (Jansson, Vergeest, 2000). In this paper, we describe and analyze the implementation, i.e. the algorithms and an analysis of their time complexity. We argue that a good time complexity (better than O(n^2), where n is the number of elements in the system) is mandatory for a scalable real-time simulation system. We show that, in simplified form, all our algorithms are O(n lg n). We have not been able to formally analyze the algorithms in non-simplified form; we do, however, informally discuss the expected performance. The entire system is shown empirically to perform slightly worse than O(n lg n) for a specific range and typical input. We also present a working prototype implementation and show that it can be used for real-time evaluation of reasonably complex systems. Finally, we reason about how such a system can be used in the conceptual design community as a simulation of traditional design tools.
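Why better-than-O(n^2) is "mandatory" for scalability comes down to growth rates: doubling the element count quadruples pairwise work but only slightly more than doubles O(n lg n) work. A quick check:

```python
import math

def pairwise(n):
    """Work proportional to all element pairs, O(n^2)."""
    return n * n

def nlogn(n):
    """Work for an O(n lg n) algorithm."""
    return n * math.log2(n)

for n in (1_000, 2_000, 4_000):
    print(n, pairwise(n), round(nlogn(n)))

# Growth ratios when n doubles:
print(pairwise(2_000) / pairwise(1_000))  # 4.0
print(nlogn(2_000) / nlogn(1_000))        # about 2.2
```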


Author(s):  
Jun Yuan ◽  
Neng Gao ◽  
Ji Xiang

Embedding knowledge graphs (KGs) into continuous vector space is an essential problem in knowledge extraction. Current models keep improving embeddings by discriminating relation-specific information from entities with increasingly complex feature engineering. We note that they ignore the inherent relevance between relations and try to learn a unique discriminative parameter set for each relation. Thus, these models potentially suffer from high time complexity and large parameter counts, preventing them from being applied efficiently to real-world KGs. In this paper, we follow the idea of parameter sharing to simultaneously learn more expressive features, reduce parameters, and avoid complex feature engineering. Based on the gate structure from LSTM, we propose a novel model, TransGate, and develop a shared discriminate mechanism, resulting in almost the same space complexity as indiscriminate models. Furthermore, to develop a more effective and scalable model, we reconstruct the gate with weight vectors, giving our method comparable time complexity to indiscriminate models. We conduct extensive experiments on link prediction and triplet classification. Experiments show that TransGate not only outperforms state-of-the-art baselines but also greatly reduces the number of parameters. For example, TransGate outperforms ConvE and RGCN with 6x and 17x fewer parameters, respectively. These results indicate that parameter sharing is a superior way to further optimize embeddings, and that TransGate finds a better trade-off between complexity and expressivity.
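The key efficiency point, rebuilding the gate around weight vectors rather than matrices, can be sketched as follows. The names and the exact gating form are illustrative assumptions, not the paper's equations; the point is that an element-wise (vector-weighted) gate costs O(d) per application instead of the O(d^2) of a matrix transform.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate_filter(entity, weight, bias):
    """Element-wise sigmoid gate in the spirit of the abstract: weight
    *vectors* (not per-relation matrices) decide how much of each entity
    dimension passes through. Illustrative sketch only."""
    g = [sigmoid(w * e + b) for w, e, b in zip(weight, entity, bias)]
    return [gi * ei for gi, ei in zip(g, entity)]

h = [0.5, -1.0, 2.0]   # entity embedding (toy, 3-dimensional)
w = [4.0, 0.0, -4.0]   # gate weight vector for one relation
b = [0.0, 0.0, 0.0]
filtered = gate_filter(h, w, b)
print(filtered)        # dim 0 mostly kept, dim 1 halved, dim 2 suppressed
```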


2011 ◽  
Vol 22 (05) ◽  
pp. 1161-1185
Author(s):  
ABUSAYEED SAIFULLAH ◽  
YUNG H. TSIN

A self-stabilizing algorithm is a distributed algorithm that can start from any initial (legitimate or illegitimate) state and eventually converge to a legitimate state in finite time without being assisted by any external agent. In this paper, we propose a self-stabilizing algorithm for finding the 3-edge-connected components of an asynchronous distributed computer network. The algorithm stabilizes in O(dnΔ) rounds and every processor requires O(n log Δ) bits, where Δ (≤ n) is an upper bound on the degree of a node, d (≤ n) is the diameter of the network, and n is the total number of nodes in the network. These time and space complexities are at least a factor of n better than those of the previously best-known self-stabilizing algorithm for 3-edge-connectivity. The result of the computation is kept in a distributed fashion by assigning, upon stabilization of the algorithm, a component identifier to each processor which uniquely identifies the 3-edge-connected component to which the processor belongs. Furthermore, the algorithm is designed in such a way that its time complexity is dominated by that of the self-stabilizing depth-first search spanning tree construction, in the sense that any improvement made in the latter automatically implies improvement in the time complexity of the algorithm.
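The self-stabilization property itself (convergence from an arbitrary state, with no external help) is easiest to see on the classic Dijkstra K-state token ring, which is a standard textbook example and not the paper's 3-edge-connectivity algorithm:

```python
# Dijkstra's K-state self-stabilizing token ring: from ANY initial state,
# the ring converges to configurations with exactly one "privileged"
# processor, and stays that way (closure).

def privileged(states):
    n = len(states)
    priv = []
    for i in range(n):
        if i == 0:
            if states[0] == states[n - 1]:  # root is privileged when equal
                priv.append(i)
        elif states[i] != states[i - 1]:    # others, when unequal to left
            priv.append(i)
    return priv

def step(states, K):
    """Fire one privileged processor (lowest index, for determinism).
    At least one processor is always privileged, so this is safe."""
    i = privileged(states)[0]
    if i == 0:
        states[0] = (states[0] + 1) % K
    else:
        states[i] = states[i - 1]

def stabilize(states, K, max_steps=10_000):
    """Run until the legitimate predicate (exactly one privilege) holds."""
    for t in range(max_steps):
        if len(privileged(states)) == 1:
            return t
        step(states, K)
    raise RuntimeError("did not stabilize")

states = [3, 1, 4, 1, 5]        # arbitrary (illegitimate) initial state
steps = stabilize(states, K=7)  # K > n guarantees convergence
print("stabilized after", steps, "steps")
```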


2017 ◽  
Vol 28 (06) ◽  
pp. 1750082 ◽  
Author(s):  
Yang Ma ◽  
Guangquan Cheng ◽  
Zhong Liu ◽  
Xingxing Liang

Link prediction in social networks has become a growing concern among researchers. In this paper, a clustering method is used to exploit the grouping tendency of nodes, and a clustering index (CI) is proposed to predict potential links, taking the characteristics of scientific cooperation networks into consideration. Results show that CI performs better than traditional indices on scientific coauthorship networks by compensating for their disadvantages. Compared with traditional algorithms, this method, tailored to a specific type of network, better reflects the features of the network and achieves more accurate predictions.
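The abstract does not give the CI formula; one plausible form of a cluster-aware index, sketched purely for illustration, boosts a common-neighbors score when the two endpoints fall in the same precomputed cluster, reflecting the grouping tendency of coauthorship communities:

```python
def ci_score(adj, cluster_of, x, y, bonus=2.0):
    """Hypothetical cluster-aware index: common neighbors, boosted when
    x and y belong to the same cluster. Not the paper's CI definition."""
    cn = len(set(adj[x]) & set(adj[y]))
    same_cluster = cluster_of[x] == cluster_of[y]
    return cn * (bonus if same_cluster else 1.0)

adj = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
cluster_of = {'a': 0, 'b': 0, 'c': 0, 'd': 1}
print(ci_score(adj, cluster_of, 'a', 'b'))  # same cluster: score is boosted
print(ci_score(adj, cluster_of, 'b', 'd'))  # across clusters: no boost
```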


2015 ◽  
Vol 37 ◽  
pp. 125 ◽  
Author(s):  
Zohreh Zalaghi

Link prediction is an important task in social network analysis, with applications in other domains such as information retrieval, recommender systems, and e-commerce. The task is to predict the probable connection between two nodes in the network. These links are subject to loss because of improper creation or the lack of reflection of links in the networks; it is therefore possible to develop or complete these networks and recover the lost items and information through link prediction. In order to discover and predict these links, we need information about the nodes in the network. This information is usually extracted from the network's graph and used as factors for recognition. A variety of techniques exist for link prediction; among them, the most practical and current is the supervised learning-based approach. In this approach, link prediction is treated as binary classification in which each pair of nodes is labeled 0 or 1: the value 0 indicates no connection between the nodes, and 1 means there is a connection between them. In this research, while studying probabilistic graphical models, we use a Markov random field (MRF) for the link prediction problem in social networks. Experimental results on the Flickr dataset show that the proposed method outperforms previous methods in precision and recall.
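The binary-classification framing the abstract describes can be sketched with hand-rolled pair features and a trivial threshold "classifier"; this only illustrates the 0/1 setup, not the MRF model the paper actually uses, and the feature choice and threshold are assumptions:

```python
def jaccard(adj, x, y):
    """Jaccard similarity of the two nodes' neighbor sets."""
    cn = set(adj[x]) & set(adj[y])
    un = set(adj[x]) | set(adj[y])
    return len(cn) / len(un) if un else 0.0

def predict(adj, x, y, threshold=0.25):
    """Label a node pair: 1 = link likely, 0 = link unlikely."""
    return 1 if jaccard(adj, x, y) >= threshold else 0

adj = {
    'a': ['b', 'c', 'd'],
    'b': ['a', 'c'],
    'c': ['a', 'b', 'd'],
    'd': ['a', 'c', 'e'],
    'e': ['d'],
}
print(predict(adj, 'b', 'd'))  # 1: many shared neighbors
print(predict(adj, 'b', 'e'))  # 0: no shared neighbors
```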


2019 ◽  
Vol 33 (22) ◽  
pp. 1950249 ◽  
Author(s):  
Yang Tian ◽  
Han Li ◽  
Xuzhen Zhu ◽  
Hui Tian

Link prediction based on topological similarity in complex networks attracts more and more attention in both academia and industry. Most researchers believe that two unconnected endpoints are likely to form a link when each has large influence. Through profound investigation, we find that at least one endpoint possessing large influence can easily attract other endpoints: the combined influence of two unconnected endpoints affects their mutual attraction. We consider that the greater the combined influence of the endpoints, the greater the possibility of their producing a link. Therefore, we explore the contribution of combined influence to similarity-based link prediction. Furthermore, we find that the transmission capability of a path determines the communication possibility between endpoints. Meanwhile, compared to local and global paths, quasi-local paths balance high accuracy and low complexity more effectively in link prediction. Therefore, we focus on the transmission capabilities of quasi-local paths between two unconnected endpoints, which we call effective paths. In this paper, we propose a link prediction index based on combined influence and effective paths (CIEP). A large number of experiments on 12 real benchmark datasets show that in most cases CIEP improves prediction performance.
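The two ingredients the abstract names can be sketched with assumed forms: "combined influence" taken here as the product of endpoint degrees, and "effective (quasi-local) paths" as the standard local-path count A^2 + eps*A^3. The actual CIEP formula is not given in the abstract, so this is an illustration of the structure, not the index itself.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ciep_like(A, x, y, eps=0.01):
    """Hypothetical score: (combined endpoint influence) x (counts of
    length-2 and, damped by eps, length-3 paths between x and y)."""
    A2 = matmul(A, A)
    A3 = matmul(A2, A)
    degree = [sum(row) for row in A]
    combined_influence = degree[x] * degree[y]   # assumed combination
    path_score = A2[x][y] + eps * A3[x][y]       # quasi-local paths
    return combined_influence * path_score

# 4-node example with edges 0-1, 0-2, 1-2, 2-3.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
print(ciep_like(A, 0, 3))  # candidate link 0-3
```

Restricting paths to length 3 keeps the computation near-local, which is the accuracy/complexity balance the abstract attributes to quasi-local methods.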


2021 ◽  
Vol 01 (03) ◽  
Author(s):  
Xudong Li ◽  
Lizhen Wu ◽  
Yifeng Niu ◽  
Shengde Jia ◽  
Bosen Lin

In this paper, an algorithm for solving the multi-target correlation and co-location problem of an aerial-ground heterogeneous system is investigated. For the multi-target correlation problem, a fusion of the visual-axis correlation method and an improved topological-similarity correlation method is adopted, in view of the large parallax and inconsistent scale between the aerial and ground perspectives. First, the visual axis is preprocessed by a threshold method so that sparse targets are initially associated. Then, the improved topological-similarity method is used to further associate dense targets using the relative position characteristics between targets. The shortcoming that dense targets yield similar scores with small differences is mitigated by the improved topological-similarity method. For the co-location problem, combined with the multi-target correlation algorithm in this paper, a triangulation positioning model is used to complete the co-location of multiple targets. In the experimental part, simulation experiments and flight experiments were designed to verify the effectiveness of the algorithm. Experimental results show that the proposed algorithm can effectively achieve multi-target correlation and positioning, and that the positioning accuracy is clearly better than that of other positioning methods.
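The core of a triangulation positioning model is intersecting bearing rays from two observers. A minimal 2D stand-in (the paper's aerial-ground setup is 3D and more involved) can be sketched as:

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays: observer positions p1, p2 and bearing
    angles theta1, theta2 (radians). Minimal 2D illustration only."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via 2x2 cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("parallel bearings: target position undetermined")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two observers at (0,0) and (6,0) both sight a target at (3,4).
target = triangulate((0, 0), math.atan2(4, 3), (6, 0), math.atan2(4, -3))
print(target)  # approximately (3.0, 4.0)
```

With more than two observers, the same idea becomes a least-squares intersection, which is how multi-view co-location setups typically absorb bearing noise.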

