Graph coarsening: from scientific computing to machine learning

SeMA Journal ◽  
2022 ◽  
Author(s):  
Jie Chen ◽  
Yousef Saad ◽  
Zechen Zhang

Abstract: The general method of graph coarsening, or graph reduction, has been a remarkably useful and ubiquitous tool in scientific computing, and it is now starting to have a similar impact in machine learning. The goal of this paper is to take a broad look at coarsening techniques that have been successfully deployed in scientific computing and to see how similar principles are finding their way into more recent applications related to machine learning. In scientific computing, coarsening plays a central role in algebraic multigrid methods as well as in the related class of multilevel incomplete LU factorizations. In machine learning, graph coarsening goes under various names, e.g., graph downsampling or graph reduction. Its goal in most cases is to replace some original graph by one that has fewer nodes but whose structure and characteristics are similar to those of the original graph. As will be seen, a common strategy in these methods is to rely on spectral properties to define the coarse graph.
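
To make the reduction step concrete, the sketch below coarsens a weighted graph by greedy heavy-edge matching and forms the coarse adjacency matrix as a Galerkin-type triple product. The function name, the matching heuristic and the dense NumPy representation are illustrative choices, not a particular method surveyed in the paper.

```python
import numpy as np

def coarsen_by_matching(A):
    """Coarsen a graph with symmetric, nonnegative adjacency matrix A by greedy
    heavy-edge matching: each node is merged with its strongest unmatched
    neighbour, and the coarse graph is the Galerkin product P.T @ A @ P.
    Illustrative sketch only."""
    n = A.shape[0]
    label = -np.ones(n, dtype=int)        # coarse-node index of each fine node
    nc = 0
    for i in range(n):
        if label[i] >= 0:
            continue
        weights = np.where(label < 0, A[i], -np.inf)   # unmatched neighbours only
        weights[i] = -np.inf
        j = int(np.argmax(weights))
        label[i] = nc
        if weights[j] > 0:                # merge i with its heaviest unmatched neighbour
            label[j] = nc
        nc += 1
    P = np.zeros((n, nc))                 # piecewise-constant aggregation matrix
    P[np.arange(n), label] = 1.0
    A_c = P.T @ A @ P                     # coarse adjacency (self-loops hold absorbed weight)
    np.fill_diagonal(A_c, 0.0)            # drop self-loops to keep a simple graph
    return A_c, P
```

Spectrum-aware variants replace this purely local pairing heuristic with aggregates chosen so that the coarse Laplacian approximately preserves the low end of the spectrum, which is the common strategy the abstract highlights.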

Acta Numerica ◽  
2017 ◽  
Vol 26 ◽  
pp. 591-721 ◽  
Author(s):  
Jinchao Xu ◽  
Ludmil Zikatanov

This paper provides an overview of AMG methods for solving large-scale systems of equations, such as those arising from discretizations of partial differential equations. AMG is often understood as the acronym of 'algebraic multigrid', but it can also be understood as 'abstract multigrid'. Indeed, we demonstrate in this paper how and why an algebraic multigrid method can be better understood at a more abstract level. In the literature, many different algebraic multigrid methods have been developed from different perspectives. In this paper we try to develop a unified framework and theory that can be used to derive and analyse different algebraic multigrid methods in a coherent manner. Given a smoother $R$ for a matrix $A$, such as Gauss–Seidel or Jacobi, we prove that the optimal coarse space of dimension $n_{c}$ is the span of the eigenvectors corresponding to the first $n_{c}$ eigenvalues of $\bar{R}A$ (with $\bar{R}=R+R^{T}-R^{T}AR$). We also prove that this optimal coarse space can be obtained via a constrained trace-minimization problem for a matrix associated with $\bar{R}A$, and we demonstrate that the coarse spaces of most existing AMG methods can be viewed as approximate solutions of this trace-minimization problem. Furthermore, we provide a general approach to the construction of quasi-optimal coarse spaces, and we prove that, under appropriate assumptions, the resulting two-level AMG method for the underlying linear system converges uniformly with respect to the size of the problem, the coefficient variation and the anisotropy. Our theory applies to most existing multigrid methods, including the standard geometric multigrid method, classical AMG, energy-minimization AMG, unsmoothed and smoothed aggregation AMG and spectral AMGe.
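
As a small numerical illustration of the construction stated in the abstract, the snippet below forms $\bar{R}=R+R^{T}-R^{T}AR$ for a weighted Jacobi smoother applied to a one-dimensional Laplacian and takes the eigenvectors of $\bar{R}A$ associated with its $n_{c}$ smallest eigenvalues as the coarse space. The test matrix, the damping factor and the dense eigensolver are assumptions made only for this example.

```python
import numpy as np

# Illustrative check of the optimal coarse space described in the abstract,
# using a small 1-D Laplacian and a weighted Jacobi smoother (our choices).
n, nc = 8, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model problem
R = 0.5 * np.diag(1.0 / np.diag(A))                    # weighted Jacobi smoother
R_bar = R + R.T - R.T @ A @ R                          # symmetrized smoother from the abstract

evals, evecs = np.linalg.eig(R_bar @ A)
order = np.argsort(evals.real)
V_c = evecs[:, order[:nc]].real                        # span = optimal coarse space of dimension nc
print("smallest eigenvalues of R_bar @ A:", np.round(evals.real[order[:nc]], 4))
```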


Author(s):  
Brian Granger ◽  
Fernando Pérez

Project Jupyter is an open-source project for interactive computing widely used in data science, machine learning, and scientific computing. We argue that even though Jupyter helps users perform complex, technical work, Jupyter itself solves problems that are fundamentally human in nature. Namely, Jupyter helps humans to think and tell stories with code and data. We illustrate this by describing three dimensions of Jupyter: interactive computing, computational narratives, and the idea that Jupyter is more than software. We illustrate the impact of these dimensions on a community of practice in Earth and climate science.


2020 ◽  
pp. 105971231989648 ◽  
Author(s):  
David Windridge ◽  
Henrik Svensson ◽  
Serge Thill

We consider the benefits of dream mechanisms – that is, the ability to simulate new experiences based on past ones – in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize “dreaming” as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
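
The loop below sketches the generic mechanism the abstract describes: real transitions train the agent's model of the environment, and imagined rollouts from that model supplement the training data. All interfaces (env, policy, world_model and their methods) are hypothetical placeholders, not the implementation of Ha and Schmidhuber or of this paper.

```python
def dream_training_loop(env, policy, world_model, n_episodes=100, n_dreams=400):
    """Sketch of a dream-like data-augmentation loop: real experience trains the
    agent's world model, and imagined rollouts from that model provide extra
    training data for the policy. Illustrative only."""
    real_data = []
    for _ in range(n_episodes):                   # 1. gather real experience
        s, done = env.reset(), False
        while not done:
            a = policy.act(s)
            s2, r, done = env.step(a)
            real_data.append((s, a, r, s2))
            s = s2
    world_model.fit(real_data)                    # 2. learn a model of the environment
    dreamed = []
    for _ in range(n_dreams):                     # 3. "dream": roll the model forward
        s = world_model.sample_start()
        for _ in range(world_model.horizon):
            a = policy.act(s)
            s2, r = world_model.predict(s, a)
            dreamed.append((s, a, r, s2))
            s = s2
    policy.update(real_data + dreamed)            # 4. train on real + imagined data
```

Note that step 4 here mixes real and dreamed transitions indiscriminately, which is exactly the regime in which spurious imagined data can come to dominate; the framework developed in the paper concerns the conditions under which such dreamed data remain beneficial.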


2015 ◽  
Vol 8 (2) ◽  
pp. 168-198 ◽  
Author(s):  
Yvan Notay

Abstract: About thirty years ago, Achi Brandt wrote a seminal paper providing a convergence theory for algebraic multigrid methods [Appl. Math. Comput., 19 (1986), pp. 23–56]. Since then, this theory has been improved and extended in a number of ways, and these results have been used in many works to analyze algebraic multigrid methods and guide their development. This paper gives a concise exposition of the state of the art. Results for symmetric and nonsymmetric matrices are presented in a unified way, highlighting the influence of the smoothing scheme on the convergence estimates. Attention is also paid to sharp eigenvalue bounds for the case where one uses a single smoothing step, allowing straightforward application to deflation-based preconditioners and two-level domain decomposition methods. Some new results are introduced whenever needed to complete the picture, and the material is self-contained thanks to a collection of new proofs, often shorter than the original ones.
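
A minimal sketch of the two-level setting analysed by such theories may be useful: one or more damped-Jacobi smoothing sweeps around an exact coarse-grid correction built from a given prolongation P. The function name, the damping factor and the NumPy-based interface are assumptions made for illustration, not constructions taken from the paper.

```python
import numpy as np

def two_grid_step(A, b, x, P, omega=0.7, nu=1):
    """One two-level iteration: nu damped-Jacobi pre-smoothing sweeps, an exact
    coarse-grid correction with the Galerkin operator P.T @ A @ P, and nu
    post-smoothing sweeps. Illustrative sketch only."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(nu):                                   # pre-smoothing
        x = x + omega * d_inv * (b - A @ x)
    A_c = P.T @ A @ P                                     # Galerkin coarse operator
    x = x + P @ np.linalg.solve(A_c, P.T @ (b - A @ x))   # coarse-grid correction
    for _ in range(nu):                                   # post-smoothing
        x = x + omega * d_inv * (b - A @ x)
    return x
```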

