sparse networks
Recently Published Documents


TOTAL DOCUMENTS

145
(FIVE YEARS 45)

H-INDEX

21
(FIVE YEARS 3)

Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 34
Author(s):  
Jing Su ◽  
Xiaomin Wang ◽  
Bing Yao

For random walks on a complex network, finding the configuration of a network that provides optimal or suboptimal navigation efficiency is a meaningful research problem. It has been proven that the complete graph has the exact minimal mean hitting time, which grows linearly with the network order. In this paper, we present a class of sparse networks G(t), built by a graphic operation, whose random-walk dynamics resemble those of the complete graph even though their topological properties differ. We show that G(t) has the remarkable scale-free property found in most real networks and give recursive relations for several related matrices of the studied network. Using the connection between random walks and electrical networks, we calculate three types of graph invariants: the regular Kirchhoff index, the M-Kirchhoff index, and the A-Kirchhoff index. We derive closed-form solutions for the mean hitting time of G(t), and our results show that its dominant scaling exhibits the same behavior as that of the complete graph. These results can inform the design of networks with high navigation efficiency.
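The connection between random walks and electrical networks used in this abstract can be sketched numerically: commute times follow from effective resistances, which come from the pseudoinverse of the graph Laplacian. A minimal illustration on the complete graph K_n, the benchmark the abstract compares against (the graph size n is an arbitrary choice here), recovers both the linear hitting time and the Kirchhoff index:

```python
import numpy as np

def effective_resistance(L):
    """Pairwise effective resistances from the Laplacian pseudoinverse."""
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

n = 6
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of the complete graph K_n
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
R = effective_resistance(L)

m = A.sum() / 2                          # number of edges
commute = 2 * m * R                      # commute time C(i,j) = H(i,j) + H(j,i)
hitting = commute[0, 1] / 2              # by symmetry of K_n, H(i,j) = C(i,j)/2
kirchhoff = R[np.triu_indices(n, 1)].sum()   # regular Kirchhoff index: sum of resistances
```

For K_n every effective resistance equals 2/n, so the hitting time comes out as n − 1, linear in the network order exactly as the abstract states.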


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Chen Jicheng ◽  
Chen Hongchang ◽  
Li Hanchao

Link prediction is a concept of network theory that aims to infer a link between two separate network entities. In today's world of social media this concept has taken root, and its application is visible across numerous social networks. A typical example is "TheFacebook," launched on 4 February 2004 and known today simply as Facebook, which uses link prediction to recommend friends via various algorithms; the same goes for shopping and e-commerce sites. Notwithstanding all the merits link prediction presents, they are mostly enjoyed by large networks. In sparse networks there is a wide disparity between the links that could potentially form and the ones that actually do. A large body of literature has approached this problem, but mostly from the angle of unsupervised learning (UL). While that may seem appropriate given a dataset's nature, it does not provide accurate results for sparse networks; supervised learning is more reasonable in such cases. This research aims to find the most appropriate link-based link prediction methods in the context of big data, based on supervised learning. Much has been written on the subject; nonetheless, there are core issues, critical to understanding link prediction, that are not always addressed in these studies. This research explicitly examines these problems and uses the supervised approach to analyze them and devise a full-fledged, holistic link-based link prediction method. Specifically, the network issues we delve into are the lack of specificity in existing techniques, observational periods, variance reduction, sampling approaches, and topological causes of imbalance. In the subsequent sections of the paper, we explain the prediction algorithms, particularly the flow-based process, and specifically address problems on sparse networks that are rarely discussed alongside other prediction methods. The resolutions obtained by addressing the above techniques place our framework above the unsupervised approaches of the previous literature.
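The supervised setup and the class imbalance this abstract highlights can be sketched concretely: in a sparse graph, labeled non-edges vastly outnumber observed edges, so rebalancing (here, undersampling negatives) is one of the sampling strategies at issue. The toy graph, feature set, and sample sizes below are illustrative assumptions, not the paper's method:

```python
import random
from itertools import combinations

# Hypothetical toy sparse graph: a 10-node ring (edge list is illustrative only)
nodes = range(10)
edges = {(i, (i + 1) % 10) for i in range(10)}
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def features(u, v):
    """Classic link-based features: common neighbours, Jaccard, preferential attachment."""
    cn = len(adj[u] & adj[v])
    union = len(adj[u] | adj[v]) or 1
    return (cn, cn / union, len(adj[u]) * len(adj[v]))

# Supervised labeling: observed edges are positives, non-edges negatives
pos = [features(u, v) for u, v in edges]
neg = [features(u, v) for u, v in combinations(nodes, 2)
       if (u, v) not in edges and (v, u) not in edges]

# Negatives dwarf positives in sparse networks; undersample to rebalance
random.seed(0)
neg_balanced = random.sample(neg, len(pos))
```

The balanced feature vectors can then be fed to any standard classifier; the point of the sketch is the labeling and rebalancing step, which unsupervised score-ranking skips entirely.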


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2666
Author(s):  
Daniel Gómez ◽  
Javier Castro ◽  
Inmaculada Gutiérrez ◽  
Rosa Espínola

In this paper we formally define the hierarchical clustering network problem (HCNP) as the problem of finding a good hierarchical partition of a network. This new problem focuses on the dynamic process of the clustering rather than on the final picture of the clustering process. To address it, we introduce a new hierarchical clustering algorithm for networks, based on a new shortest path betweenness measure. To calculate it, the communication between each pair of nodes is weighted by the importance of the nodes that establish this communication. The weights, or importance, associated with each pair of nodes are calculated as the Shapley value of a game called the linear modularity game. This new measure (the node-game shortest path betweenness measure) is used to obtain a hierarchical partition of the network by eliminating the link with the highest value. To evaluate the performance of our algorithm, we introduce several criteria that allow us to compare different dendrograms of a network from two points of view: modularity and homogeneity. Finally, we propose a faster algorithm based on a simplification of the node-game shortest path betweenness measure, whose complexity is quadratic on sparse networks. This fast version is computationally competitive with other fast hierarchical algorithms and, in general, provides better results.
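The divisive scheme (repeatedly delete the link with the highest betweenness) can be sketched with the standard unweighted shortest-path edge betweenness standing in for the node-game measure proposed here; the Brandes-style accumulation and the two-triangle example graph below are illustrative, not the authors' algorithm:

```python
from collections import deque

def edge_betweenness(adj):
    """Shortest-path betweenness of every undirected edge (Brandes-style)."""
    bc = {}
    for s in adj:
        dist, sigma = {s: 0}, {v: 0 for v in adj}
        sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                      # BFS from s, counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while order:                      # back-propagate path dependencies
            w = order.pop()
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                e = (v, w) if v < w else (w, v)
                bc[e] = bc.get(e, 0.0) + c
                delta[v] += c
    return {e: b / 2 for e, b in bc.items()}   # each (s, t) pair counted twice

def split_once(adj):
    """One divisive step: delete the edge with the highest betweenness."""
    bc = edge_betweenness(adj)
    u, v = max(bc, key=bc.get)
    adj[u].discard(v)
    adj[v].discard(u)
    return (u, v)

# Two triangles joined by a bridge: the bridge carries all cross-traffic
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
removed = split_once(adj)                 # the bridge (2, 3) is removed first
```

Iterating `split_once` and recording the component structure after each deletion yields the dendrogram the HCNP evaluates; the paper's contribution is replacing the uniform path weights used here with Shapley-value node importances.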




2021 ◽  
Vol 9 (35) ◽  
pp. 183-190
Author(s):  
Mohammad Pouya Salvati ◽  
Sadegh Sulaimany ◽  
Jamshid Bagherzadeh Mohasefi

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jason Hindes ◽  
Victoria Edwards ◽  
Klimka Szwaykowska Kasraie ◽  
George Stantchev ◽  
Ira B. Schwartz

Understanding swarm pattern formation is of great interest because it occurs naturally in many physical and biological systems and has artificial applications in robotics. In both natural and engineered swarms, agent communication is typically local and sparse: over a limited sensing or communication range, the number of interactions an agent has is much smaller than the total possible number. A central question for self-organizing swarms interacting through sparse networks is whether collective motion states can emerge in which all agents have coherent and stable dynamics. In this work we introduce the phenomenon of swarm shedding, in which weakly connected agents are ejected from stable milling patterns in self-propelled swarming networks with finite-range interactions. We show that swarm shedding can be localized around a few agents, or delocalized, entailing a simultaneous ejection of all agents in a network. Despite the complexity of milling motion in complex networks, we successfully build a mean-field theory that accurately predicts both milling-state dynamics and shedding transitions. The latter are described in terms of saddle-node bifurcations that depend on the range of communication, the inter-agent interaction strength, and the network topology.
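The shedding transition is characterized as a saddle-node bifurcation: past a critical parameter, the stable state simply ceases to exist. A toy normal form (not the paper's swarm model, purely an illustration of the mechanism) makes this concrete: for dx/dt = mu − x², stable and unstable fixed points collide and annihilate at mu = 0, and below it every trajectory escapes:

```python
def integrate(mu, x0=1.0, dt=0.01, steps=5000, escape=-10.0):
    """Euler-integrate dx/dt = mu - x**2; return (final state, escaped?)."""
    x = x0
    for _ in range(steps):
        x += (mu - x * x) * dt
        if x < escape:              # no fixed point remains: the state is "shed"
            return x, True
    return x, False

x_above, shed_above = integrate(mu=0.25)    # fixed points at +/- 0.5 still exist
x_below, shed_below = integrate(mu=-0.25)   # past the saddle-node: trajectory escapes
```

Above the bifurcation the trajectory settles onto the stable branch (here x = 0.5); below it, no settling is possible, which is the analogue of an agent being ejected from the milling state.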


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhikui Chen ◽  
Xu Zhang ◽  
Shi Chen ◽  
Fangming Zhong

The introduction of deep transfer learning (DTL) further reduces the requirements on data and expert knowledge in various applications, helping DNN-based models reuse information effectively. However, DTL often transfers all parameters from the source network, whether or not they are useful to the task. The redundant trainable parameters restrict DTL on low-computing-power devices and in edge computing, while small efficient networks with fewer parameters have difficulty transferring knowledge due to structural differences in design. To address the challenge of transferring a simplified model from a complex network, this paper proposes an algorithm that realizes sparse DTL by transferring and retaining only the most necessary structure, reducing the parameters of the final model. A sparse transfer hypothesis is introduced, in which a compression strategy is designed to construct deep sparse networks that distill useful information from the auxiliary domain, improving transfer efficiency. The proposed method is evaluated on representative datasets and applied in smart agriculture to train deep identification models that can effectively detect new pests from few data samples.
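The idea of transferring only the most necessary structure can be sketched with the simplest compression strategy, magnitude pruning: keep a mask of the largest source weights and carry only those into the target model. The layer shape, keep-ratio, and pruning rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
W_source = rng.normal(size=(8, 8))        # stand-in for one pretrained source layer

def prune_mask(W, keep=0.25):
    """Boolean mask keeping only the largest-magnitude fraction `keep` of weights."""
    k = max(1, int(keep * W.size))
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return np.abs(W) >= thresh

mask = prune_mask(W_source, keep=0.25)
W_target = np.where(mask, W_source, 0.0)  # only the retained structure is transferred
sparsity = 1.0 - mask.mean()              # fraction of parameters dropped
```

The retained sparse subnetwork is then fine-tuned on the few target samples, while the zeroed entries stay out of the parameter count, which is what makes the transferred model viable on low-power devices.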

