graph reduction
Recently Published Documents

TOTAL DOCUMENTS: 204 (five years: 19)
H-INDEX: 16 (five years: 1)

SeMA Journal ◽  
2022 ◽  
Author(s):  
Jie Chen ◽  
Yousef Saad ◽  
Zechen Zhang

Abstract: The general method of graph coarsening, or graph reduction, has been a remarkably useful and ubiquitous tool in scientific computing, and it is now starting to have a similar impact in machine learning. The goal of this paper is to take a broad look at coarsening techniques that have been successfully deployed in scientific computing and to see how similar principles are finding their way into more recent applications related to machine learning. In scientific computing, coarsening plays a central role in algebraic multigrid methods as well as in the related class of multilevel incomplete LU factorizations. In machine learning, graph coarsening goes under various names, e.g., graph downsampling or graph reduction. Its goal in most cases is to replace the original graph by one that has fewer nodes but whose structure and characteristics are similar to those of the original graph. As will be seen, a common strategy in these methods is to rely on spectral properties to define the coarse graph.
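As a concrete illustration of the multilevel idea, here is a minimal sketch of one coarsening level based on greedy heavy-edge matching, a standard pairwise-aggregation strategy from the multigrid literature. The function name `coarsen` and the dict-of-dicts graph encoding are illustrative choices, not an API from the paper.

```python
from collections import defaultdict

def coarsen(adj):
    """One coarsening level via greedy heavy-edge matching: pair each
    vertex with an unmatched neighbour along a heavy edge, then merge
    each pair into a single coarse node.

    adj maps node -> {neighbour: edge weight} and is assumed symmetric.
    Returns (coarse adjacency, fine-to-coarse node mapping).
    """
    # Visit undirected edges once (u < v), heaviest first.
    edges = sorted(
        ((w, u, v) for u, nbrs in adj.items() for v, w in nbrs.items() if u < v),
        reverse=True,
    )
    partner = {}
    for w, u, v in edges:
        if u not in partner and v not in partner:
            partner[u], partner[v] = v, u
    # Each matched pair (or unmatched singleton) becomes one coarse node.
    coarse_of = {u: min(u, partner.get(u, u)) for u in adj}
    # Galerkin-style weights: a coarse edge sums the fine edges it absorbs.
    coarse = defaultdict(lambda: defaultdict(float))
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            cu, cv = coarse_of[u], coarse_of[v]
            if cu != cv:
                coarse[cu][cv] += w
    return {u: dict(nbrs) for u, nbrs in coarse.items()}, coarse_of
```

On a weighted path 1-2-3-4 whose heaviest edge is 2-3, the matching merges vertices 2 and 3, producing a three-node coarse graph; repeating the step yields the multilevel hierarchy that multigrid-style methods build on.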


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0259786
Author(s):  
Muhammad Zubair Rehman ◽  
Kamal Z. Zamli ◽  
Mubarak Almutairi ◽  
Haruna Chiroma ◽  
Muhammad Aamir ◽  
...  

Team formation (TF) in social networks exploits graphs (i.e., vertices = experts and edges = skills) to represent possible collaborations between experts. These networks let us build cost-effective research teams irrespective of the geolocation of the experts and the size of the dataset. Previously, large datasets were not closely inspected for large-scale distributions and relationships among researchers, so existing algorithms failed to scale well on the data. This paper therefore presents a novel TF algorithm for expert team formation, called SSR-TF, based on two metrics, communication cost and graph reduction, which can serve as a basis for future TF algorithms. In SSR-TF, the communication cost captures the possibility of collaboration between researchers, while graph reduction scales the large data down to only the relevant skills and experts, allowing real-time extraction of experts for collaboration. The approach is tested on five organic and benchmark datasets, i.e., UMP, DBLP, ACM, IMDB, and Bibsonomy. The SSR-TF algorithm is able to build cost-effective teams with the most appropriate experts, resulting in more communicative teams with high expertise levels.
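The graph-reduction step described above can be illustrated with a minimal sketch: filter the expert graph down to the vertices that cover at least one required skill and keep only the collaboration edges between them. This is a hypothetical rendering of the idea; `reduce_to_skills`, its inputs, and the toy data are assumptions, not SSR-TF's actual implementation.

```python
def reduce_to_skills(experts, edges, required):
    """Drop every expert who covers none of the required skills, then
    keep only collaboration edges between the remaining experts."""
    keep = {e for e, skills in experts.items() if skills & required}
    return keep, [(u, v) for u, v in edges if u in keep and v in keep]

# Toy data (hypothetical): expert -> skill set, plus collaboration edges.
experts = {"ana": {"nlp", "ml"}, "bo": {"db"}, "cy": {"ml"}}
pool, links = reduce_to_skills(experts,
                               [("ana", "bo"), ("ana", "cy")],
                               {"ml", "nlp"})
```

Shrinking the graph before team search is what makes the subsequent communication-cost computation feasible on large datasets.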


2021 ◽  
Author(s):  
Qin Wan

Canonical numbering of the vertices of a graph has been a challenging open issue for decades, not only in graph theory but also in cheminformatics applications. This paper presents an efficient, fast, and rigorous approach to canonical numbering and symmetry perception, as the first workable solution with theoretical completeness. The methodology comprises a set of algorithms including an extendable representation of vertices, high-performance sorting, and graph reduction. Canonical numberings of vertices can be generated in a short time through the novel vertex-representation method. Furthermore, a new concept of graph reduction brings the amount of computation needed to determine the constitutional symmetry of complex graphs within the range of hardware capability. An open-source implementation of the full set of algorithms is provided in Rust, thanks to that modern language's safety, performance, and robust abstractions. Results of experiments on more than 2 million molecules from the ChEMBL database are given at the end.
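A classical building block behind canonical numbering is iterative vertex refinement (1-WL colour refinement), which partitions vertices by repeatedly recolouring each one with the multiset of its neighbours' colours. The paper's full method goes further (plain refinement alone cannot guarantee the completeness the paper claims), but a minimal sketch of the refinement step looks like this:

```python
def refine(adj):
    """Iterative (1-WL) vertex refinement: recolour each vertex by its
    current colour plus the sorted multiset of its neighbours' colours,
    until the partition stops changing."""
    colour = {v: len(nbrs) for v, nbrs in adj.items()}   # start from degree
    while True:
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        # Relabel the signatures with small integers in canonical order.
        order = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: order[sig[v]] for v in adj}
        if new == colour:
            return new
        colour = new
```

On a path 1-2-3 the two endpoints end up in one colour class and the middle vertex in another, exposing the graph's constitutional symmetry; a full canonical-numbering procedure then breaks remaining ties between symmetric vertices.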


2021 ◽  
Author(s):  
Peter Podlovics ◽  
Csaba Hruska ◽  
Andor Pénzes

GRIN is short for Graph Reduction Intermediate Notation, a modern back end for lazy functional languages. Most currently available compilers for such languages share a common flaw: they can only optimize programs on a per-module basis. The GRIN framework allows for interprocedural whole-program analysis, enabling optimizing code transformations across functions and modules as well. Some implementations of GRIN already exist, but most were developed only for experimentation purposes; thus, they either compromise on low-level efficiency or contain ad hoc modifications compared to the original specification. Our goal is to provide a full-fledged implementation of GRIN by combining the best currently available technologies, such as LLVM, and to evaluate the framework's effectiveness by measuring how the optimizer improves the performance of certain programs. We also present some improvements to the existing components of the framework, including a typed representation for the intermediate language and an interprocedural program optimization, dead data elimination.
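"Graph reduction" in the lazy-functional sense means evaluating a program graph by rewriting redexes in place, so a shared subexpression is computed at most once. A minimal sketch of that sharing behaviour (illustrative only; GRIN itself lowers such graphs to efficient low-level code rather than interpreting thunks):

```python
class Thunk:
    """A node in the program graph: an unevaluated expression that is
    overwritten with its value the first time it is forced (sharing)."""
    def __init__(self, fn):
        self.fn, self.done, self.value = fn, False, None

    def force(self):
        if not self.done:                  # reduce the redex once...
            self.value, self.done = self.fn(), True
            self.fn = None                 # ...updating the graph in place
        return self.value

calls = []
shared = Thunk(lambda: calls.append("eval") or 42)
# Two references to the same graph node: the work happens only once.
result = shared.force() + shared.force()
```

The in-place update is exactly what distinguishes graph reduction from tree reduction, and it is the evaluation model a GRIN back end must preserve while optimizing.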


Author(s):  
Jens Dietrich ◽  
Lijun Chang ◽  
Long Qian ◽  
Lyndon M. Henry ◽  
Catherine Mccartin ◽  
...  

Author(s):  
Olga O. Razvenskaya

The classical NP-hard weighted vertex coloring problem consists in minimizing the number of colors used to color the vertices of a given graph so that each vertex receives a set of colors whose size equals its given weight and adjacent vertices receive disjoint color sets. The weighted chromatic number is the smallest number of colors in such colorings. Several polynomial-time algorithmic techniques exist for designing efficient algorithms for the weighted vertex coloring problem; standard techniques of this kind are modular graph decomposition and graph decomposition by separating cliques. This article proposes new polynomial-time methods of graph reduction that remove redundant vertices and recompute the weights of the remaining vertices so that the weighted chromatic number changes in a controlled manner. We also present a method of reducing the weighted vertex coloring problem to its unweighted version, together with an application. This paper contributes to algorithmic graph theory.
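A reduction from the weighted to the unweighted problem can be illustrated by the classical clique-substitution construction: replace each vertex by a clique with as many copies as its weight. The sketch below is that standard textbook construction, not necessarily the exact transformation the article uses:

```python
def to_unweighted(adj, weight):
    """Replace each vertex v by a clique of weight[v] copies, and join
    every copy of u to every copy of v whenever uv is an edge. A proper
    colouring of the new graph then assigns each original vertex
    weight[v] distinct colours with adjacent sets disjoint, exactly as
    the weighted problem requires."""
    copies = {v: [(v, i) for i in range(weight[v])] for v in adj}
    new_adj = {c: set() for v in adj for c in copies[v]}
    for v in adj:
        for a in copies[v]:
            for b in copies[v]:
                if a != b:
                    new_adj[a].add(b)      # clique inside v
        for u in adj[v]:
            for a in copies[v]:
                for b in copies[u]:
                    new_adj[a].add(b)      # join adjacent cliques
    return new_adj
```

For a single edge uv with weights 2 and 1 the construction yields a triangle, whose chromatic number 3 matches the weighted chromatic number of the original edge.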


2020 ◽  
Author(s):  
Chung-Hsien Chou ◽  
Shaoting Wang ◽  
Hsiang-Shun Shih ◽  
Phillip C-Y. Sheu

Abstract. Background: Graph theory has been widely applied in biomedical studies through structural measures such as betweenness centrality. However, if the network is too large, betweenness centrality cannot be computed in a reasonable amount of time. Results: In this paper, we describe an approach, the 1+ɛ lossy graph reduction algorithm, for computing betweenness centrality on large graphs; the approach guarantees a bounded approximation of the result. We use GSE48216, a breast cancer cell line co-expression network, to show that our algorithms achieve a higher reduction rate at the cost of some bounded error in query results. Furthermore, comparing the betweenness centrality of the original graph and the reduced graph shows that a higher reduction rate does not sacrifice the accuracy of betweenness centrality while providing faster execution times. Conclusions: Our proposed 1+ɛ lossy graph reduction algorithm is validated by experimental results showing that the approach achieves faster execution within a bounded error rate.
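For reference, the quantity being approximated can be computed exactly with Brandes' algorithm; a compact version follows (illustrative: the paper's contribution is the lossy reduction applied before such a computation, not the exact algorithm itself).

```python
from collections import deque

def betweenness(adj):
    """Brandes' exact betweenness centrality on an unweighted graph:
    one BFS per source, then back-propagation of pair dependencies."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors.
        dist, order = {s: 0}, []
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        preds = {v: [] for v in adj}
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # counts ordered pairs; halve for undirected graphs
```

Each source costs O(V + E), so the whole computation is O(VE); this is precisely the cost that makes graph reduction attractive on large networks.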


2020 ◽  
Author(s):  
Ying Zhao

Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but very important in graph analysis. In this work, we first conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate how well existing graph sampling algorithms preserve minority structures. The results confirm our assumption and suggest key points for designing a new graph sampling approach named mino-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance preservation between minority and majority structures and to suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS.
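The cut-point-based identification step can be illustrated with the standard Tarjan articulation-point routine, since removing a cut point disconnects the rare structures hanging off it. The sketch below is a generic implementation of that classical algorithm, not MCGS's actual code:

```python
def cut_points(adj):
    """Articulation points via Tarjan's low-link DFS; a sampler that
    keeps such vertices preserves the bridges to rare substructures."""
    disc, low, cuts = {}, {}, set()
    time = [0]

    def dfs(u, parent):
        disc[u] = low[u] = time[0]; time[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree unless u is the DFS root.
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            cuts.add(u)

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts
```

On a path 1-2-3 the middle vertex is the only cut point; dropping it during sampling would split off the endpoints, which is exactly the kind of structural loss MCGS is designed to avoid.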

