conjugate gradient solver
Recently Published Documents


TOTAL DOCUMENTS: 68 (five years: 5)
H-INDEX: 10 (five years: 1)

2021 ◽ Vol 47 (2) ◽ pp. 1-4
Author(s): Sarah Osborn

The article by Flegar et al., “Adaptive Precision Block-Jacobi for High Performance Preconditioning in the Ginkgo Linear Algebra Software,” presents a novel, practical implementation of an adaptive precision block-Jacobi preconditioner. Performance results on state-of-the-art GPU architectures, covering both the generation and the application of the block-Jacobi preconditioner, demonstrate the practical usability of the method compared to a traditional full-precision block-Jacobi preconditioner. A production-ready implementation is provided in the Ginkgo numerical linear algebra library. In this report, the Ginkgo library is installed and performance results are regenerated for comparison with the original results, using Ginkgo’s conjugate gradient solver with either the full-precision or the adaptive precision block-Jacobi preconditioner on a suite of test problems on an NVIDIA GPU accelerator. After completing this process, the published results are deemed reproducible.
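The adaptive-precision idea can be sketched in a few lines of NumPy. This is a toy stand-in for illustration only, not Ginkgo's actual CUDA implementation or API: each diagonal block's inverse is stored in float32 when the block is well conditioned and float64 otherwise, and the resulting operator serves as the preconditioner inside CG. The block size, the condition-number threshold `kappa_switch`, and the test matrix are illustrative choices.

```python
import numpy as np

def block_jacobi_adaptive(A, block_size, kappa_switch=1e3):
    """Block-Jacobi preconditioner whose block inverses are stored in
    reduced precision when the block is well conditioned (toy version
    of an adaptive precision scheme; thresholds are illustrative)."""
    n = A.shape[0]
    inv_blocks = []
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        B = A[start:end, start:end]
        Binv = np.linalg.inv(B)
        Binv = 0.5 * (Binv + Binv.T)  # keep the stored inverse symmetric
        dtype = np.float32 if np.linalg.cond(B) < kappa_switch else np.float64
        inv_blocks.append((start, end, Binv.astype(dtype)))

    def apply(r):
        """Apply M^{-1} blockwise, promoting back to float64 on the fly."""
        z = np.empty_like(r)
        for start, end, Binv in inv_blocks:
            z[start:end] = Binv.astype(np.float64) @ r[start:end]
        return z

    return apply

def pcg(A, b, M_apply, tol=1e-8, maxit=500):
    """Standard preconditioned conjugate gradients with a callable M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_apply(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * nb:
            return x, k
        z = M_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

The storage precision only affects the quality of the preconditioner, not the accuracy of the final solution, which is why reduced-precision storage is attractive on bandwidth-bound GPUs.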


2020
Author(s): Lucas Bessone, Pablo Gamazo, Julián Ramos, Mario Storti

GPU architectures are characterized by abundant computing capacity relative to memory bandwidth, which makes them well suited to temporally explicit problems with compact spatial discretizations. Most work using GPUs focuses on parallelizing the solvers for the linear systems generated by numerical methods; however, to obtain good performance in numerical applications on GPUs it is crucial to work with codes that run entirely on the GPU. In this work we solve a 3D nonlinear diffusion equation using the finite volume method on Cartesian meshes. Two time schemes are compared, explicit and implicit, considering for the latter Newton's method with a conjugate gradient solver for the resulting system of equations. Each scheme is evaluated on CPU and GPU using different metrics for performance, accuracy, computation speed, and mesh size. To evaluate the convergence properties of the different schemes with respect to spatial and temporal discretization, an arbitrary analytical solution is proposed that satisfies the differential equation through a source term chosen based on it.
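As a minimal illustration of the explicit/implicit comparison, the sketch below solves the 1D linear diffusion equation u_t = u_xx with zero Dirichlet boundaries. The paper treats the 3D nonlinear case with finite volumes and Newton's method; this reduction to 1D linear diffusion is ours. One explicit Euler step is a cheap stencil update (stable only for dt/dx² ≤ 1/2), while one implicit Euler step solves a tridiagonal SPD system matrix-free with CG and is unconditionally stable.

```python
import numpy as np

def step_explicit(u, r):
    """One explicit Euler step of u_t = u_xx with r = dt/dx^2
    (stable only for r <= 0.5); boundaries held at zero."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def cg(matvec, b, tol=1e-12, maxit=1000):
    """Plain conjugate gradients on a matrix-free SPD operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def step_implicit(u, r):
    """One implicit Euler step: (I - r*L) u_new = u on the interior
    points (zero Dirichlet BCs), solved matrix-free with CG."""
    def matvec(w):
        # Symmetric tridiagonal operator (1 + 2r) on the diagonal, -r off it.
        Aw = (1 + 2 * r) * w
        Aw[:-1] -= r * w[1:]
        Aw[1:] -= r * w[:-1]
        return Aw
    un = u.copy()
    un[1:-1] = cg(matvec, u[1:-1])
    return un
```

For u(x,0) = sin(πx) the exact solution decays as exp(-π²t)·sin(πx), which gives a convenient accuracy check for both schemes.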


2018 ◽ Vol 620 ◽ pp. A59
Author(s): J. Papež, L. Grigori, R. Stompor

We discuss linear system solvers invoking a messenger-field and compare them with (preconditioned) conjugate gradient approaches. We show that the messenger-field techniques correspond to fixed point iterations of an appropriately preconditioned initial system of linear equations. We then argue that a conjugate gradient solver applied to the same preconditioned system, or equivalently a preconditioned conjugate gradient solver using the same preconditioner and applied to the original system, will in general ensure at least a comparable and typically better performance in terms of the number of iterations to convergence and time-to-solution. We illustrate our conclusions with two common examples drawn from the cosmic microwave background (CMB) data analysis: Wiener filtering and map-making. In addition, and contrary to the standard lore in the CMB field, we show that the performance of the preconditioned conjugate gradient solver can depend significantly on the starting vector. This observation seems of particular importance in the cases of map-making of high signal-to-noise ratio sky maps and therefore should be of relevance for the next generation of CMB experiments.
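The paper's central point can be reproduced on a toy problem: with the same preconditioner M, the messenger-field-style fixed-point iteration x_{k+1} = x_k + M⁻¹(b − A x_k) and PCG address the same preconditioned system, and PCG typically needs fewer iterations to reach a given residual. The diagonally dominant test matrix and Jacobi preconditioner below are illustrative stand-ins for the CMB Wiener-filtering and map-making systems, not the systems used in the paper.

```python
import numpy as np

def fixed_point(A, b, Minv, tol=1e-8, maxit=5000):
    """Preconditioned fixed-point (Richardson) iteration:
    x_{k+1} = x_k + M^{-1} (b - A x_k)."""
    x = np.zeros_like(b)
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol * nb:
            return x, k - 1
        x = x + Minv @ r
    return x, maxit

def pcg(A, b, Minv, tol=1e-8, maxit=5000):
    """PCG with the same preconditioner M, for iteration-count comparison."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv @ r
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * nb:
            return x, k
        z = Minv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

Because PCG minimizes the error in the A-norm over the same Krylov space that the fixed-point iterates merely sample, it can only do better, which is the paper's argument in miniature.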


2018 ◽ Vol 2018 ◽ pp. 1-8
Author(s): Zhonghua Jiang, Ning Xu

We propose using the conjugate gradient method to efficiently solve the thermal resistance model in the HotSpot thermal floorplanning tool. The iterative conjugate gradient solver is well suited to the sparse linear systems that arise. We also define the relative sparse matrix used in the iterative thermal floorplanning within a simulated annealing framework, and this relative-sparse-matrix iteration can be applied to other iterative framework algorithms. Experimental results show that our incremental iterative conjugate gradient solver runs approximately 11× faster than LU decomposition on the ami49 benchmark, and the experimental ratio curve shows that the speedup of the iterative solver grows with the number of modules.
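A rough sketch of the incremental idea follows; this is our reconstruction for illustration, not the authors' code. The steady-state thermal resistance network is an SPD conductance matrix G (grid Laplacian plus a conductance to ambient), solved as G·T = P by CG. Inside an annealing loop the floorplan, and hence the system, changes only slightly between moves, so warm-starting CG from the previous temperature vector cuts the iteration count. The grid size, conductances, and perturbation below are made-up values.

```python
import numpy as np

def grid_conductance(nx, ny, g=1.0, g_amb=0.05):
    """SPD conductance matrix of an nx-by-ny thermal resistance grid:
    graph Laplacian of the grid plus a conductance to ambient on the
    diagonal (a simplified HotSpot-style steady-state network)."""
    n = nx * ny
    G = np.zeros((n, n))
    for j in range(ny):
        for i in range(nx):
            k = j * nx + i
            G[k, k] += g_amb
            for di, dj in ((1, 0), (0, 1)):   # right and down neighbors
                ii, jj = i + di, j + dj
                if ii < nx and jj < ny:
                    m = jj * nx + ii
                    G[k, k] += g; G[m, m] += g
                    G[k, m] -= g; G[m, k] -= g
    return G

def cg(A, b, x0=None, tol=1e-8, maxit=2000):
    """CG with an optional warm start x0 (the 'incremental' ingredient)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        if np.sqrt(rr) < tol * nb:
            return x, k - 1
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit
```

After a small change to the power vector (one annealing move), the warm-started solve begins with a tiny residual and finishes in far fewer iterations than a cold start, which is where the incremental speedup comes from.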

