Low Byte/Flop Implementation of Iterative Solver for Sparse Matrices Derived from Stencil Computations

Author(s):  
Kenji Ono ◽  
Shuichi Chiba ◽  
Shunsuke Inoue ◽  
Kazuo Minami


2001 ◽  
Vol 9 (4) ◽  
pp. 223-231 ◽  
Author(s):  
Jack Dongarra ◽  
Victor Eijkhout ◽  
Henk van der Vorst

We present a benchmark of iterative solvers for sparse matrices. The benchmark contains several common methods and data structures, chosen to be representative of the performance of a large class of methods in current use. We give results on several high-performance processors that show that performance is largely determined by memory bandwidth.
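
The bandwidth argument can be made concrete with a rough, hypothetical sketch (not code or data from the benchmark): a CSR sparse matrix-vector product, the core kernel of these iterative solvers, performs only two flops per nonzero while moving roughly 12 bytes (an 8-byte value and a 4-byte column index), so its arithmetic intensity sits far below the compute limit of modern processors. The SciPy example below, with a made-up pentadiagonal test matrix and a crude traffic model, illustrates the estimate.

```python
# Hypothetical illustration: CSR sparse matrix-vector products are
# memory-bandwidth bound because each nonzero costs ~12 bytes of traffic
# (8-byte value + 4-byte column index) but only 2 flops.
import time
import numpy as np
import scipy.sparse as sp

n = 512  # grid of n x n unknowns
# Pentadiagonal test matrix resembling a 5-point stencil (boundary wrap ignored)
A = sp.diags([4.0, -1.0, -1.0, -1.0, -1.0], [0, 1, -1, n, -n],
             shape=(n * n, n * n), format="csr")
x = np.ones(n * n)

reps = 50
t0 = time.perf_counter()
for _ in range(reps):
    y = A @ x                          # CSR SpMV, the solver's core kernel
elapsed = (time.perf_counter() - t0) / reps

nnz = A.nnz
flops = 2.0 * nnz                      # one multiply + one add per nonzero
bytes_moved = nnz * (8 + 4) + 8.0 * (A.shape[0] + x.size)  # rough traffic model
print(f"arithmetic intensity ~ {flops / bytes_moved:.2f} flop/byte")
print(f"achieved ~ {flops / elapsed / 1e9:.2f} GFLOP/s, "
      f"~ {bytes_moved / elapsed / 1e9:.2f} GB/s")
```

With an intensity of roughly 0.17 flop/byte, the achieved GFLOP/s is capped by the machine's memory bandwidth rather than its peak arithmetic rate, which is the effect the benchmark measures across processors.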


2008 ◽  
Vol 8 (4) ◽  
pp. 336-349 ◽  
Author(s):  
L. GRASEDYCK ◽  
W. HACKBUSCH ◽  
R. KRIEMANN

Abstract In this paper we review the technique of hierarchical matrices and put it into the context of black-box solvers for large linear systems. Numerical examples for several classes of medium- to large-scale problems illustrate the applicability and efficiency of this technique. We compare the results with those of several direct solvers (which typically scale quadratically in the matrix size) as well as an iterative solver (algebraic multigrid), which scales linearly (if it converges in O(1) steps).
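
The efficiency of hierarchical matrices rests on approximating admissible off-diagonal blocks by low-rank factors. The NumPy sketch below is only a toy illustration of that compression step, using a truncated SVD of a block generated by a smooth kernel between two well-separated point clusters; it is not the authors' black-box H-matrix construction, and the kernel, cluster geometry, and tolerance are made up for the example.

```python
# Toy illustration of the compression idea behind hierarchical matrices:
# an off-diagonal block coupling two well-separated clusters is numerically
# low-rank and can be stored as two thin factors instead of a dense block.
import numpy as np

x = np.linspace(0.0, 1.0, 200)             # source cluster
y = np.linspace(3.0, 4.0, 200)             # target cluster (far away, "admissible")
K = 1.0 / np.abs(x[:, None] - y[None, :])  # dense off-diagonal kernel block

U, s, Vt = np.linalg.svd(K, full_matrices=False)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))         # numerical rank at relative tolerance
A = U[:, :rank] * s[:rank]                 # thin factor A (200 x rank)
B = Vt[:rank, :]                           # thin factor B (rank x 200)

err = np.linalg.norm(K - A @ B) / np.linalg.norm(K)
print(f"rank {rank}, relative error {err:.1e}, "
      f"storage {A.size + B.size} vs {K.size} entries")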


Author(s):  
I Misztal ◽  
I Aguilar ◽  
D Lourenco ◽  
L Ma ◽  
J Steibel ◽  
...  

Abstract Genomic selection is now practiced successfully across many species. However, many questions remain, such as long-term effects, estimation of genomic parameters, robustness of GWAS with small and large datasets, and stability of genomic predictions. This study summarizes presentations from the 2020 ASAS symposium. Until now, the focus of many studies has been on linkage disequilibrium (LD) between two loci. Ignoring higher-level equilibrium may lead to phantom dominance and epistasis. The Bulmer effect leads to a reduction of the additive variance; however, selection for increased recombination rate can release genetic variance anew. With genomic information, estimates of genetic parameters may be biased by genomic preselection, but costs of estimation can increase drastically due to the dense form of the genomic information. To make computation of estimates feasible, genotypes could be retained only for the most important animals, and methods of estimation should use algorithms that can recognize dense blocks in sparse matrices. GWAS using small genomic datasets frequently find many marker-trait associations, whereas studies using much bigger datasets find only a few. Most current tools use very simple models for GWAS, possibly causing artifacts. These models are adequate for large datasets, where pseudo-phenotypes such as deregressed proofs indirectly account for important effects for the traits of interest. Artifacts arising in GWAS with small datasets can be minimized by using data from all animals (whether genotyped or not), realistic models, and methods that account for population structure. Recent developments permit computation of p-values from GBLUP, where models can be arbitrarily complex but are restricted to genotyped animals only, and from single-step GBLUP, which also uses phenotypes from ungenotyped animals. Stability was an important part of nongenomic evaluations, where genetic predictions were stable in the absence of new data even with low prediction accuracies. Unfortunately, genomic evaluations for such animals change because all animals with genotypes are connected. A top-ranked animal can easily drop in the next evaluation, causing a crisis of confidence in genomic evaluations. While correlations between consecutive genomic evaluations are high, outliers can have differences as large as one SD. A solution to fluctuating genomic evaluations is to base selection decisions on groups of animals. While many issues in genomic selection have been solved, many new issues that require additional research continue to surface.
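
As a concrete anchor for the GBLUP models mentioned above, the sketch below builds a VanRaden-style genomic relationship matrix G from a simulated genotype matrix. This is a generic textbook construction, not code from any of the summarized studies; the animal and marker counts, allele frequencies, and blending weight are invented for illustration.

```python
# Minimal, generic sketch of a VanRaden-type genomic relationship matrix G,
# the core ingredient of GBLUP. Genotypes here are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_markers = 100, 5000
p = rng.uniform(0.05, 0.95, n_markers)           # assumed allele frequencies
# Genotypes coded as 0/1/2 copies of the reference allele
M = rng.binomial(2, p, size=(n_animals, n_markers)).astype(float)

Z = M - 2.0 * p                                  # center each marker by 2*p_j
denom = 2.0 * np.sum(p * (1.0 - p))              # VanRaden (2008) scaling
G = (Z @ Z.T) / denom                            # genomic relationship matrix

# In GBLUP, G takes the place of the pedigree relationship matrix in the
# mixed-model equations; a small diagonal blend keeps it invertible in practice.
G_star = 0.95 * G + 0.05 * np.eye(n_animals)
print(G.shape, round(float(np.mean(np.diag(G))), 2))
```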


2016 ◽  
Vol 51 (6) ◽  
pp. 711-726 ◽  
Author(s):  
Shoaib Kamil ◽  
Alvin Cheung ◽  
Shachar Itzhaky ◽  
Armando Solar-Lezama
Keyword(s):  

2016 ◽  
Vol 78 (8-2) ◽  
Author(s):  
Norma Alias ◽  
Nadia Nofri Yeni Suhari ◽  
Hafizah Farhah Saipan Saipol ◽  
Abdullah Aysh Dahawi ◽  
Masyitah Mohd Saidi ◽  
...  

This paper proposes several real-life applications of big data analytics using parallel computing software. The parallel computing software under consideration includes Parallel Virtual Machine, MATLAB Distributed Computing Server, and Compute Unified Device Architecture, which are used to simulate the big data problems. Parallel computing is able to overcome the poor runtime, speedup, and efficiency of sequential computing. The mathematical models for the big data analytics are based on partial differential equations; large sparse matrices are obtained from their discretization and the assembly of the resulting linear systems. Iterative numerical schemes are used to solve the problems, and the computational process is summarized as a parallel algorithm. The parallel algorithm development is based on domain decomposition of the problems and on the architecture of the different parallel computing software. The parallel performance for distributed- and shared-memory architectures is evaluated in terms of speedup, efficiency, effectiveness, and temporal performance.
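
As a small, self-contained sketch of the workflow described above (discretize a PDE into a sparse linear system, solve it iteratively, and report parallel metrics), the example below sets up a 1D Poisson problem with SciPy and runs Jacobi iteration. The timings used in the metric formulas are placeholders, since the actual software stack (PVM, MATLAB DCS, CUDA) and problems are not reproduced here, and the metric definitions are the commonly used ones rather than quotations from the paper.

```python
# Sketch of the described pipeline on a toy problem: discretize a PDE into a
# sparse system, solve it with a simple iterative scheme, and compute common
# parallel metrics. Timings below are placeholders, not measurements.
import numpy as np
import scipy.sparse as sp

# 1D Poisson: -u'' = 1 on (0, 1), u(0) = u(1) = 0, second-order finite differences
n = 1000
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)

# Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)
D = A.diagonal()
x = np.zeros(n)
for k in range(20000):
    x_new = (b - A @ x + D * x) / D
    if np.linalg.norm(x_new - x) < 1e-8 * np.linalg.norm(b):
        break
    x = x_new

# Commonly used parallel metrics (assumed definitions, placeholder timings)
t_serial, t_parallel, p = 10.0, 1.6, 8       # seconds, seconds, processors
speedup = t_serial / t_parallel              # S_p = T_1 / T_p
efficiency = speedup / p                     # E_p = S_p / p
effectiveness = speedup / (p * t_parallel)   # F_p = S_p / (p * T_p)
temporal_performance = 1.0 / t_parallel      # problems solved per unit time
print(f"speedup {speedup:.2f}, efficiency {efficiency:.2f}, "
      f"effectiveness {effectiveness:.3f}, temporal {temporal_performance:.3f} 1/s")
```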


2017 ◽  
Vol 345 ◽  
pp. 330-344 ◽  
Author(s):  
Mikhail Belonosov ◽  
Maxim Dmitriev ◽  
Victor Kostin ◽  
Dmitry Neklyudov ◽  
Vladimir Tcheverda

SIAM Review ◽  
2009 ◽  
Vol 51 (1) ◽  
pp. 129-159 ◽  
Author(s):  
Kaushik Datta ◽  
Shoaib Kamil ◽  
Samuel Williams ◽  
Leonid Oliker ◽  
John Shalf ◽  
...  
