ParTBC: Faster Estimation of Top-k Betweenness Centrality Vertices on GPU

2022 ◽  
Vol 27 (2) ◽  
pp. 1-25
Author(s):  
Somesh Singh ◽  
Tejas Shah ◽  
Rupesh Nasre

Betweenness centrality (BC) is a popular centrality measure, based on shortest paths, used to quantify the importance of vertices in networks. It is used in a wide array of applications including social network analysis, community detection, clustering, and biological network analysis, among others. The state-of-the-art Brandes' algorithm for computing BC has time complexities of O(nm) and O(nm + n² log n) for unweighted and weighted graphs, respectively, where n is the number of vertices and m the number of edges. Brandes' algorithm has been successfully parallelized on multicore and manycore platforms. However, computing vertex BC remains time-consuming for large real-world graphs. Often, in practical applications, it suffices to identify the most important vertices in a network, that is, those having the highest BC values. Such applications require only the top vertices in the network as ranked by their BC values, not the actual BC values themselves. In such scenarios, it is unnecessary both to compute the BC of all the vertices and to compute exact BC values. In this work, we attempt to marry controlled approximation with parallelization to estimate the k highest-BC vertices faster, without having to compute the exact BC scores of the vertices. We present a host of techniques to determine the top-k vertices faster, with a small inaccuracy, by computing approximate BC scores of the vertices. Aiding our techniques is a novel vertex-renumbering scheme that makes the graph layout more structured, which results in faster execution of the parallel Brandes' algorithm on GPU. Our experimental results, on a suite of real-world and synthetic graphs, show that our best-performing technique computes the top-k vertices with an average speedup of 2.5× compared to the exact parallel Brandes' algorithm on GPU, with an error of less than 6%. Our techniques also exhibit high precision and recall, both in excess of 94%.
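The Brandes' algorithm referenced above can be sketched for the unweighted case: a BFS from each source counts shortest paths, followed by a reverse-order dependency accumulation. The sketch below is a minimal sequential version for illustration (the paper's contribution is a parallel, approximate GPU variant, which is not reproduced here):

```python
from collections import deque

def brandes_bc(adj):
    """Betweenness centrality via Brandes' algorithm (unweighted graphs).
    `adj` maps each vertex to its list of neighbors."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS from s, recording shortest-path counts and predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:              # w discovered for the first time
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:   # v lies on a shortest path to w
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0-1-2-3-4: the middle vertex carries the most shortest paths.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
bc = brandes_bc(path)
top = max(bc, key=bc.get)  # vertex 2
```

The two-phase structure (forward path counting, backward accumulation) is what parallel GPU implementations exploit: each source's traversal is independent, so sources can be processed concurrently.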

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Silvia Zaoli ◽  
Piero Mazzarisi ◽  
Fabrizio Lillo

Betweenness centrality quantifies the importance of a vertex for the information flow in a network. The standard betweenness centrality applies to static single-layer networks, but many real-world networks are both dynamic and made of several layers. We propose a definition of betweenness centrality for temporal multiplexes. This definition accounts for the topological and temporal structure and for the duration of paths in the determination of the shortest paths. We propose an algorithm to compute the new metric using a mapping to a static graph. We apply the metric to a dataset of ∼20k European flights and compare the results with those obtained with static or single-layer metrics. The differences in the airport rankings highlight the importance of considering the temporal multiplex structure and an appropriate distance metric.
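The abstract's "mapping to a static graph" is not spelled out here; one common construction for temporal networks is the time-expanded graph, sketched below under that assumption (the airport codes and timestamps are invented for illustration):

```python
def time_expanded_graph(temporal_edges):
    """Map a temporal edge list (u, v, t) to a static directed graph over
    (vertex, time) event nodes: a travel edge for each timed contact, plus
    waiting edges linking consecutive events at the same vertex."""
    nodes = set()
    for u, v, t in temporal_edges:
        nodes.add((u, t))
        nodes.add((v, t))
    edges = set()
    # Travel edges: one per timed contact.
    for u, v, t in temporal_edges:
        edges.add(((u, t), (v, t)))
    # Waiting edges: connect consecutive event times at the same vertex.
    by_vertex = {}
    for x, t in nodes:
        by_vertex.setdefault(x, []).append(t)
    for x, times in by_vertex.items():
        times.sort()
        for t1, t2 in zip(times, times[1:]):
            edges.add(((x, t1), (x, t2)))
    return nodes, edges

# Hypothetical flights: (origin, destination, departure slot)
flights = [("LIN", "FCO", 1), ("FCO", "CDG", 2), ("LIN", "CDG", 3)]
nodes, edges = time_expanded_graph(flights)
```

Once the temporal structure is expanded this way, static shortest-path machinery (such as the Brandes-style path counting) can be run on the resulting graph; the paper's actual definition additionally handles multiple layers and path durations.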


PLoS ONE ◽  
2012 ◽  
Vol 7 (11) ◽  
pp. e49951 ◽  
Author(s):  
Sandra Andorf ◽  
Rhonda C. Meyer ◽  
Joachim Selbig ◽  
Thomas Altmann ◽  
Dirk Repsilber

2021 ◽  
Author(s):  
Abhilash Kumar Tripathi ◽  
Priya Saxena ◽  
Payal Thakur ◽  
Shailabh Rauniyar ◽  
Vinoj Gopalakrishnan ◽  
...  

2020 ◽  
Vol 34 (01) ◽  
pp. 19-26 ◽  
Author(s):  
Chong Chen ◽  
Min Zhang ◽  
Yongfeng Zhang ◽  
Weizhi Ma ◽  
Yiqun Liu ◽  
...  

Recent studies on recommendation have largely focused on exploring state-of-the-art neural networks to improve the expressiveness of models, while typically applying the Negative Sampling (NS) strategy for efficient learning. Despite their effectiveness, two important issues have not been well considered in existing methods: 1) NS suffers from dramatic fluctuation, making it difficult for sampling-based methods to achieve optimal ranking performance in practical applications; 2) although heterogeneous feedback (e.g., view, click, and purchase) is widespread in many online systems, most existing methods leverage only one primary type of user feedback, such as purchase. In this work, we propose a novel non-sampling transfer learning solution, named Efficient Heterogeneous Collaborative Filtering (EHCF), for Top-N recommendation. It can not only model fine-grained user-item relations, but also efficiently learn model parameters from the whole heterogeneous data (including all unlabeled data) with a rather low time complexity. Extensive experiments on three real-world datasets show that EHCF significantly outperforms state-of-the-art recommendation methods in both traditional (single-behavior) and heterogeneous scenarios. Moreover, EHCF shows significant improvements in training efficiency, making it more applicable to real-world large-scale systems. Our implementation has been released to facilitate further developments on efficient whole-data based neural methods.
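For context on what EHCF avoids, the standard Negative Sampling strategy it contrasts against can be sketched as follows: for each observed (user, item) interaction, a few unobserved items are drawn at random as negatives. The dataset and parameter names here are illustrative, not from the paper:

```python
import random

def sample_negatives(interactions, n_items, num_neg=4, seed=0):
    """For each observed (user, pos_item) pair, draw `num_neg` items the
    user has NOT interacted with, yielding (user, pos, neg) triples."""
    rng = random.Random(seed)
    seen = {}
    for u, i in interactions:
        seen.setdefault(u, set()).add(i)
    triples = []
    for u, pos in interactions:
        for _ in range(num_neg):
            neg = rng.randrange(n_items)
            while neg in seen[u]:          # resample until unobserved
                neg = rng.randrange(n_items)
            triples.append((u, pos, neg))
    return triples

# Toy implicit-feedback log: user 0 interacted with items 1 and 3, user 1 with item 2.
data = [(0, 1), (0, 3), (1, 2)]
triples = sample_negatives(data, n_items=5)
```

Because each training epoch sees a different random subset of the unobserved items, the gradient signal fluctuates from run to run; this is the instability the abstract cites as motivation for learning from the whole (non-sampled) data instead.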


2014 ◽  
Vol 15 (1) ◽  
pp. 304 ◽  
Author(s):  
Kai Sun ◽  
Joana P Gonçalves ◽  
Chris Larminie ◽  
Nataša Pržulj

2019 ◽  
Vol 1 (2) ◽  
pp. 132-145
Author(s):  
Amira S.N. Tawadros ◽  
Sally Soliman

Purpose
The purpose of this study is to examine the extent to which dynamic network analysis (DNA), text mining and natural language processing (NLP) are helpful research tools in identifying the key actors in a complex international crisis. The study uses these tools to identify the key actors in the Syrian crisis as a case study to validate the proposed algorithm.

Design/methodology/approach
To achieve its main purpose, the study uses a collection of three methodologies, namely, DNA, text mining and NLP.

Findings
The results of the analysis show four key actors in the Syrian crisis, namely, Russia, the USA, Turkey and China. The results also reveal changes in their powerful positions from 2012 to 2016, which matches the changes that occurred in the real world. The matching between the findings of the proposed algorithm and the real-world events that happened in Syria validates our proposed algorithm and proves that the algorithm can be used in identifying the key actors in complex international crises.

Originality/value
The importance of the study lies in two main points. It proposes a new algorithm that mixes NLP, network extraction from textual unstructured data and DNA to understand and monitor changes occurring in a complex international crisis. It applies the proposed algorithm on the Syrian crisis as a case study to identify the key actors and hence validate the proposed algorithm.
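The network-extraction step described above, building an actor network from unstructured text and ranking actors by their network position, can be sketched in a minimal form. The documents, actor list, and naive substring matching below are illustrative assumptions; the paper's actual pipeline uses full NLP and dynamic (time-sliced) analysis:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents, actors):
    """Build a weighted actor co-occurrence network: two actors are linked
    whenever both are mentioned in the same document. Actor detection here
    is naive substring matching, standing in for a real NLP entity tagger."""
    edges = Counter()
    for doc in documents:
        present = sorted(a for a in actors if a in doc)
        for a, b in combinations(present, 2):
            edges[(a, b)] += 1
    return edges

def degree_strength(edges):
    """Weighted degree: a simple proxy for an actor's power in the network
    (a fuller analysis would use centrality measures such as betweenness)."""
    strength = Counter()
    for (a, b), w in edges.items():
        strength[a] += w
        strength[b] += w
    return strength

# Toy news snippets (invented for illustration).
docs = [
    "Russia and the USA discussed the ceasefire",
    "Turkey and Russia held talks",
    "The USA urged Turkey to act",
]
edges = cooccurrence_network(docs, ["Russia", "USA", "Turkey", "China"])
power = degree_strength(edges)
```

Repeating this extraction per time slice (e.g., per year of news coverage) and comparing the resulting rankings is what makes the analysis "dynamic": shifts in an actor's score across slices track shifts in its real-world position.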


2021 ◽  
Author(s):  
Priya Saxena ◽  
Abhilash Kumar Tripathi ◽  
Payal Thakur ◽  
Shailabh Rauniyar ◽  
Vinoj Gopalakrishnan ◽  
...  
