Graph Algorithms
Recently Published Documents

TOTAL DOCUMENTS: 487 (FIVE YEARS: 92)
H-INDEX: 36 (FIVE YEARS: 3)

2022 ◽  
Vol 15 (1) ◽  
pp. 1-18
Author(s):  
Krishnaveni P. ◽  
Balasundaram S. R.

The day-to-day growth of online information necessitates intensive research in automatic text summarization (ATS). ATS software produces a summary by extracting the important information from the original text; with the help of summaries, users can easily read and understand the documents of interest. Most approaches to ATS use only local properties of the text, and the large number of properties makes sentence selection difficult and complicated. This article therefore uses graph-based summarization to exploit the structural and global properties of the text. It introduces the maximal clique based sentence selection (MCBSS) algorithm to select important, non-redundant sentences that cover all concepts of the input text. The MCBSS algorithm finds novel information using maximal cliques (MCs). Experimental results with Recall-Oriented Understudy for Gisting Evaluation (ROUGE) on the Timeline dataset show that the proposed work outperforms the existing graph algorithms Bushy Path (BP), Aggregate Similarity (AS), and TextRank (TR).
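The abstract does not spell out MCBSS, but its core idea, building a sentence-similarity graph and drawing at most one summary sentence from each maximal clique to avoid redundancy, can be sketched as follows; the similarity measure, threshold, and tie-breaking below are illustrative assumptions rather than the authors' exact method.

```python
# Hedged sketch of maximal-clique-based sentence selection.
# Similarity measure, threshold, and ranking are illustrative
# assumptions; the paper's MCBSS algorithm may differ in detail.
import itertools
import networkx as nx

def mcbss_summary(sentences, similarity, threshold=0.3, budget=3):
    """Pick up to `budget` sentences, at most one per maximal clique,
    so the summary covers distinct concepts without redundancy."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        if similarity(sentences[i], sentences[j]) >= threshold:
            g.add_edge(i, j)
    chosen = []
    # nx.find_cliques enumerates maximal cliques (Bron-Kerbosch).
    for clique in sorted(nx.find_cliques(g), key=len, reverse=True):
        pick = min(clique)  # tie-break: prefer earlier sentences
        if pick not in chosen:
            chosen.append(pick)
        if len(chosen) == budget:
            break
    return [sentences[i] for i in sorted(chosen)]

def jaccard(a, b):
    """Toy word-overlap similarity (punctuation stripped)."""
    wa = set(a.lower().replace(".", "").split())
    wb = set(b.lower().replace(".", "").split())
    return len(wa & wb) / len(wa | wb)

print(mcbss_summary(["Graphs model text structure.",
                     "Graphs capture text structure well.",
                     "Cliques group similar sentences."], jaccard))
```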


2022 ◽  
pp. 35-66
Author(s):  
Yossi Azar ◽  
Debmalya Panigrahi ◽  
Noam Touitou

Author(s):  
V.N. Kasyanov

Graphs are the most common abstract structure encountered in computer science and are widely used for visualizing structural information. In this paper, we consider a practical and general graph formalism, so-called hierarchical graphs, and present the Higres and ALVIS systems, which aim to support structural information visualization on the basis of hierarchical graph models.
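The hierarchical-graph formalism (graphs whose nodes may own nested subgraphs, often called fragments) can be sketched as a small data structure; the names below are illustrative assumptions, not the actual Higres or ALVIS API.

```python
# Minimal sketch of a hierarchical graph: ordinary nodes and edges,
# plus fragments (nodes that own a nested subgraph). Names are
# illustrative; the Higres/ALVIS systems define their own model.
from dataclasses import dataclass, field

@dataclass
class HierarchicalGraph:
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)        # pairs of node ids
    fragments: dict = field(default_factory=dict)  # node id -> nested graph

    def add_fragment(self, node, subgraph):
        """Attach a nested subgraph to `node`, making it a fragment."""
        self.nodes.add(node)
        self.fragments[node] = subgraph

    def flatten(self):
        """Yield all nodes, recursing into nested fragments."""
        for n in self.nodes:
            yield n
            if n in self.fragments:
                yield from self.fragments[n].flatten()

inner = HierarchicalGraph(nodes={"a", "b"}, edges={("a", "b")})
outer = HierarchicalGraph(nodes={"root"})
outer.add_fragment("root", inner)
print(list(outer.flatten()))  # ['root', 'a', 'b'] (set order may vary)
```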


2021 ◽  
Author(s):  
Ojas Parekh ◽  
Yipu Wang ◽  
Yang Ho ◽  
Cynthia Phillips ◽  
Ali Pinar ◽  
...  

2021 ◽  
Vol 17 (4) ◽  
pp. 1-51
Author(s):  
Aaron Bernstein ◽  
Sebastian Forster ◽  
Monika Henzinger

Many dynamic graph algorithms have an amortized update time rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability. In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and dynamic maximal matching problems.

(1) For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k log^3(n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n^{1+1/k}) edges.)

(2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with an o(n) worst-case time bound was known; we present an algorithm with O(log^5(n)) high-probability worst-case time. Similar worst-case bounds existed only for maintaining a matching that was (2+ε)-approximate, and hence not maximal.

Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, a middle ground between the two above: an algorithm is said to have worst-case expected update time α if, for every update σ, the expected time to process σ is at most α. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: a worst-case expected update time of O(1) still allows the possibility that a 1/f(n) fraction of updates each require Θ(f(n)) time to process, for arbitrarily high f(n).

In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: the query time remains the same, while the update time increases by a factor of O(log^2(n)). Thus, we achieve our results in two steps: (1) we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times; (2) we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. All our algorithms are Las Vegas-type algorithms.
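The gap between the two guarantees rests on a standard amplification argument: if an update's expected time is α, Markov's inequality bounds the chance it exceeds 2α by 1/2, so the fastest of O(log n) independent copies finishes within 2α with high probability. Below is a toy simulation of that argument only; the heavy-tailed time distribution and constants are assumptions, and the paper's actual reduction must additionally keep the parallel copies consistent.

```python
# Toy simulation of the amplification behind a worst-case-expected to
# high-probability-worst-case conversion: one copy has *expected*
# update time alpha but occasionally stalls; the minimum over
# O(log n) independent copies exceeds 2*alpha only with polynomially
# small probability. Distribution and constants are illustrative.
import math
import random

def update_time(alpha, f):
    """Heavy-tailed time with mean alpha: alpha*f with prob 1/f, else 0."""
    return alpha * f if random.random() < 1.0 / f else 0.0

def amplified_time(alpha, f, copies):
    """Fastest of `copies` independent runs of the same update."""
    return min(update_time(alpha, f) for _ in range(copies))

n, alpha, f = 1_000, 1.0, 64
copies = 4 * int(math.log2(n))          # O(log n) parallel copies
trials = 10_000
slow_single = sum(update_time(alpha, f) > 2 * alpha for _ in range(trials))
slow_multi = sum(amplified_time(alpha, f, copies) > 2 * alpha
                 for _ in range(trials))
print(f"single copy stalled:          {slow_single / trials:.3%}")  # about 1/f
print(f"best of {copies} copies stalled: {slow_multi / trials:.3%}")  # ~0
```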


Author(s):  
Seher Acer ◽  
Ariful Azad ◽  
Erik G Boman ◽  
Aydın Buluç ◽  
Karen D. Devine ◽  
...  

Combinatorial algorithms in general, and graph algorithms in particular, play a critical enabling role in numerous scientific applications. However, the irregular memory access patterns of these algorithms make them among the hardest algorithmic kernels to implement on parallel systems. With tens of billions of hardware threads and deep memory hierarchies, exascale computing systems pose extreme challenges for scaling graph algorithms. The codesign center on combinatorial algorithms, ExaGraph, was established to design and develop methods and techniques for the efficient implementation of key combinatorial (graph) algorithms chosen from a diverse set of exascale applications. Algebraic and combinatorial methods play complementary roles in the advancement of computational science and engineering, including enabling each other. In this paper, we survey the algorithmic and software development activities performed under the auspices of ExaGraph from both a combinatorial and an algebraic perspective. In particular, we detail our recent efforts in porting the algorithms to manycore accelerator (GPU) architectures. We also provide a brief survey of the applications that have benefited from scalable implementations of different combinatorial algorithms to enable scientific discovery at scale. We believe that several applications will benefit from the algorithmic and software tools developed by the ExaGraph team.


2021 ◽  
Vol 8 (3) ◽  
pp. 1-25
Author(s):  
Soheil Behnezhad ◽  
Laxman Dhulipala ◽  
Hossein Esfandiari ◽  
Jakub Łącki ◽  
Vahab Mirrokni ◽  
...  

We introduce the Adaptive Massively Parallel Computation (AMPC) model, which is an extension of the Massively Parallel Computation (MPC) model. At a high level, the AMPC model strengthens the MPC model by storing all messages sent within a round in a distributed data store. In the following round, all machines are provided with random read access to the data store, subject to the same constraints on the total amount of communication as in the MPC model. Our model is inspired by previous empirical studies of distributed graph algorithms [8, 30] using MapReduce and a distributed hash table service [17]. This extension allows us to give new graph algorithms with much lower round complexities compared to the best-known solutions in the MPC model. In particular, in the AMPC model we show how to solve maximal independent set in O(1) rounds and connectivity/minimum spanning tree in O(log log_{m/n} n) rounds, both using O(n^δ) space per machine for constant δ < 1. In the same memory regime for MPC, the best-known algorithms for these problems require poly(log n) rounds. Our results imply that the 2-Cycle conjecture, which is widely believed to hold in the MPC model, does not hold in the AMPC model.
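The source of the round-complexity gains can be illustrated with pointer chasing: an MPC machine must fix its reads before a round starts, so it learns one new pointer per round, whereas an AMPC machine may issue adaptive reads against the round's data store. A toy sketch follows, with a plain dict standing in for the distributed data store (an assumption for illustration; this is not the paper's connectivity algorithm).

```python
# Toy contrast between MPC and AMPC on pointer chasing (finding the
# root above each node in a forest). The in-memory dict plays the role
# of the round's distributed read-only data store; this illustrates
# the model, not the paper's actual algorithms.
parent = {1: 2, 2: 3, 3: 3, 4: 3}  # node -> parent; roots point to themselves

def ampc_find_root(node, store):
    """AMPC: adaptive reads let one machine chase the whole chain
    inside a single round (each lookup counts against its I/O budget)."""
    reads = 0
    while store[node] != node:
        node, reads = store[node], reads + 1
    return node, reads  # O(depth) reads, but only 1 round

def mpc_rounds_to_root(node, store):
    """MPC: reads are fixed before the round, so each hop costs a round
    (pointer doubling would still need Omega(log depth) rounds)."""
    rounds = 0
    while store[node] != node:
        node, rounds = store[node], rounds + 1
    return rounds

root, reads = ampc_find_root(1, parent)
print(f"AMPC: root {root} found in 1 round ({reads} adaptive reads)")
print(f"MPC:  {mpc_rounds_to_root(1, parent)} rounds of one hop each")
```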


2021 ◽  
Author(s):  
Mark P. Blanco ◽  
Scott McMillan ◽  
Tze Meng Low

2021 ◽  
Vol 11 (9) ◽  
pp. 1167
Author(s):  
Victor B. Yang ◽  
Joseph R. Madsen

Current epilepsy surgery planning protocols determine the seizure onset zone (SOZ) through resource-intensive, invasive monitoring of ictal events. Recently, we reported that Granger causality (GC) maps produced from analysis of interictal iEEG recordings have potential in revealing the SOZ. In this study, we investigate the network connectivity patterns of GC maps to determine possible clinical correlation with patients' SOZ and resection zone (RZ). While building an understanding of interictal network topography and its relationship to the RZ/SOZ, we identify algorithmic tools with potential applications in epilepsy surgery planning. These graph algorithms are retrospectively tested on data from 25 patients and compared to the neurologist-determined SOZ and surgical RZ, viewed as sources of truth. Centrality algorithms yielded statistically significant RZ rank-order sums for 16 of 24 patients with RZ data, an improvement over prior algorithms. While SOZ results remained largely the same, this study validates the applicability of graph algorithms to RZ/SOZ detection, opening the door to further exploration of iEEG datasets. Furthermore, it offers previously inaccessible insights into the relationship between interictal brain connectivity patterns and epileptic brain networks, utilizing the overall topology of the graphs as well as data on edge weights and the number of edges contained in GC maps.
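The abstract does not name the specific centrality measures used; as an illustration, ranking channels of a directed, weighted GC map by a standard centrality could look like the sketch below (the toy edge weights and the choice of PageRank are assumptions, not the study's method).

```python
# Hedged sketch: rank iEEG channels by centrality on a directed,
# weighted Granger-causality map. The toy weights and the choice of
# PageRank are illustrative; the study's exact measures may differ.
import networkx as nx

# Hypothetical GC map: edge (u, v, w) = strength of u "causing" v.
gc_edges = [("ch1", "ch2", 0.8), ("ch1", "ch3", 0.6),
            ("ch2", "ch3", 0.4), ("ch4", "ch1", 0.2)]
g = nx.DiGraph()
g.add_weighted_edges_from(gc_edges)

# PageRank on the reversed graph rewards strong *outgoing* influence,
# a plausible (assumed) signature of seizure-driving channels.
scores = nx.pagerank(g.reverse(), weight="weight")
ranking = sorted(scores, key=scores.get, reverse=True)
print("channels ranked by centrality:", ranking)
```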

