Randomized GPU Algorithms for the Construction of Hierarchical Matrices from Matrix-Vector Operations

2019 ◽  
Vol 41 (4) ◽  
pp. C339-C366 ◽  
Author(s):  
Wajih Boukaram ◽  
George Turkiyyah ◽  
David Keyes


Author(s):  
Radoslav Jankoski ◽  
Ulrich Römer ◽  
Sebastian Schöps

Purpose: The purpose of this paper is to present a computationally efficient approach for the stochastic modeling of an inhomogeneous reluctivity of magnetic materials. These materials can be part of electrical machines such as a single-phase transformer (a benchmark example considered in this paper). The approach is based on the Karhunen–Loève expansion (KLE). The stochastic model is further used to study the statistics of the self-inductance of the primary coil as a quantity of interest (QoI).

Design/methodology/approach: The computation of the KLE requires solving a generalized eigenvalue problem with dense matrices. The eigenvalues and the eigenfunctions are computed by using the Lanczos method, which needs only matrix-vector multiplications. The complexity of performing matrix-vector multiplications with dense matrices is reduced by using hierarchical matrices.

Findings: The suggested approach is used to study the impact of the spatial variability in the magnetic reluctivity on the QoI. The statistics of this parameter are influenced by the correlation lengths of the random reluctivity. Both the mean value and the standard deviation increase as the correlation length of the random reluctivity increases.

Originality/value: The KLE, computed by using hierarchical matrices, is used for uncertainty quantification of low-frequency electrical machines as a computationally efficient approach in terms of memory requirements as well as computation time.
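To make the workflow concrete, the following minimal sketch (Python with NumPy/SciPy; the exponential kernel, the 1D geometry, and the truncation order are illustrative assumptions, not the paper's setup) shows how a truncated KLE can be obtained with a Lanczos-type eigensolver that touches the covariance matrix only through matrix-vector products. In the paper those products would be accelerated with hierarchical matrices, and the eigenvalue problem is the generalized one arising from a finite element discretization rather than the simple pointwise version used here.

```python
# Minimal sketch of a Karhunen-Loeve expansion via a Lanczos-type eigensolver.
# cov_kernel, nodes and corr_len are illustrative placeholders; the hierarchical-
# matrix matvec of the paper is emulated by a plain LinearOperator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 500                                  # number of spatial nodes (toy size)
nodes = np.linspace(0.0, 1.0, n)         # 1D stand-in for the transformer geometry
corr_len = 0.2                           # correlation length of the random reluctivity

def cov_kernel(x, y):
    """Exponential covariance kernel (an assumed model, not the paper's)."""
    return np.exp(-np.abs(x - y) / corr_len)

C = cov_kernel(nodes[:, None], nodes[None, :])   # dense covariance matrix

# The eigensolver sees only matrix-vector products, mirroring the Lanczos
# requirement; a hierarchical-matrix matvec could be substituted here.
C_op = LinearOperator((n, n), matvec=lambda v: C @ v)

m = 20                                   # number of KLE terms retained
eigvals, eigvecs = eigsh(C_op, k=m, which='LM')
order = np.argsort(eigvals)[::-1]        # sort eigenpairs in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# One truncated-KLE sample of the reluctivity fluctuation:
xi = np.random.standard_normal(m)        # independent standard normal variables
field = eigvecs @ (np.sqrt(eigvals) * xi)
```

Samples of the random field generated this way can then be propagated through the field solver to estimate the statistics of the self-inductance.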


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 554
Author(s):  
Jiří Mazurek ◽  
Radomír Perzina ◽  
Jaroslav Ramík ◽  
David Bartl

In this paper, we compare three methods for deriving a priority vector in the theoretical framework of pairwise comparisons, namely the Geometric Mean Method (GMM), the Eigenvalue Method (EVM), and the Best–Worst Method (BWM), with respect to two features: sensitivity and order violation. As the research method, we apply One-Factor-At-a-Time (OFAT) sensitivity analysis via Monte Carlo simulations; the number of compared objects ranges from 3 to 8, and the comparison scale coincides with Saaty's fundamental scale from 1 to 9 with reciprocals. Our findings suggest that the BWM is, on average, statistically significantly more sensitive (and thus less robust) and more susceptible to order violation than the GMM and the EVM for every examined matrix (vector) size, even after adjustment for the different numbers of pairwise comparisons required by each method. On the other hand, differences in sensitivity and order violation between the GMM and the EVM were found to be mostly statistically insignificant.
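As a concrete illustration of the first two methods, the sketch below (illustrative Python, not part of the paper; the comparison matrix is a made-up example on Saaty's scale) derives a priority vector from a pairwise comparison matrix with both the GMM and the EVM.

```python
# Minimal sketch of the Geometric Mean Method (GMM) and Eigenvalue Method (EVM)
# for deriving a priority vector from a pairwise comparison matrix.
import numpy as np

# Illustrative reciprocal comparison matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def gmm_priorities(A):
    """GMM: row-wise geometric means, normalized to sum to one."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def evm_priorities(A):
    """EVM: principal right eigenvector of A, normalized to sum to one."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

print("GMM:", gmm_priorities(A))
print("EVM:", evm_priorities(A))
```

In an OFAT sensitivity experiment, a single entry of A (and its reciprocal) would be perturbed and the resulting change in the priority vector, including any rank reversal, recorded for each method.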


Author(s):  
Ernesto Dufrechou ◽  
Pablo Ezzatti ◽  
Enrique S Quintana-Ortí

More than 10 years of research related to the development of efficient GPU routines for the sparse matrix-vector product (SpMV) have led to several realizations, each with its own strengths and weaknesses. In this work, we review some of the most relevant efforts on the subject, evaluate a few prominent routines that are publicly available using more than 3000 matrices from different applications, and apply machine learning techniques to anticipate which SpMV realization will perform best for each sparse matrix on a given parallel platform. Our numerical experiments confirm that the methods offer behaviors so varied, depending on the matrix structure, that identifying general rules to select the optimal method for a given matrix becomes extremely difficult, though some useful strategies (heuristics) can be defined. Using a machine learning approach, we show that it is possible to obtain inexpensive classifiers that predict the best method for a given sparse matrix with over 80% accuracy, demonstrating that this approach can deliver important reductions in both execution time and energy consumption.
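The sketch below illustrates the general idea of such a classifier (illustrative Python with scikit-learn; the features, candidate formats, and labels are placeholders, not the ones measured in the paper): cheap structural features of each sparse matrix are fed to a classifier that predicts the best-performing SpMV realization.

```python
# Minimal sketch of predicting the best SpMV routine from structural features.
# FORMATS and the random labels are placeholders; in practice the labels come
# from benchmarking every candidate routine and keeping the fastest one.
import numpy as np
import scipy.sparse as sp
from sklearn.ensemble import RandomForestClassifier

def features(A):
    """A few cheap structural features of a sparse matrix in CSR form."""
    A = A.tocsr()
    nnz_per_row = np.diff(A.indptr)
    return [A.shape[0], A.nnz,
            nnz_per_row.mean(), nnz_per_row.std(), nnz_per_row.max()]

FORMATS = ["CSR", "ELL", "HYB"]          # assumed candidate SpMV realizations

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):                     # toy training set of random matrices
    n = int(rng.integers(100, 1000))
    A = sp.random(n, n, density=float(rng.uniform(0.001, 0.05)))
    X.append(features(A))
    y.append(rng.choice(FORMATS))        # placeholder labels, not measurements

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([features(sp.random(500, 500, density=0.01))]))
```

With benchmark-derived labels, the same pipeline can be trained once per platform and then queried at negligible cost for every new matrix.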


2017 ◽  
Vol 43 (4) ◽  
pp. 1-49 ◽  
Author(s):  
Salvatore Filippone ◽  
Valeria Cardellini ◽  
Davide Barbieri ◽  
Alessandro Fanfarillo

Author(s):  
Rawad Bitar ◽  
Yuxuan Xing ◽  
Yasaman Keshtkarjahromi ◽  
Venkat Dasari ◽  
Salim El Rouayheb ◽  
...  

Edge computing is emerging as a new paradigm to allow processing data near the edge of the network, where the data is typically generated and collected. This enables critical computations at the edge in applications such as the Internet of Things (IoT), in which an increasing number of devices (sensors, cameras, health monitoring devices, etc.) collect data that needs to be processed through computationally intensive algorithms with stringent reliability, security, and latency constraints. Our key tool is the theory of coded computation, which advocates mixing data in computationally intensive tasks by employing erasure codes and offloading these tasks to other devices for computation. Coded computation has recently been gaining interest thanks to its higher reliability, smaller delay, and lower communication costs. In this paper, we develop a private and rateless adaptive coded computation (PRAC) algorithm for distributed matrix-vector multiplication by taking into account (1) the privacy requirements of IoT applications and devices, and (2) the heterogeneous and time-varying resources of edge devices. We show that PRAC outperforms known secure coded computing methods when resources are heterogeneous. We provide theoretical guarantees on the performance of PRAC and its comparison to baselines. Moreover, we confirm our theoretical results through simulations and implementations on Android-based smartphones.
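The sketch below illustrates the generic rateless coded matrix-vector multiplication idea underlying this line of work (illustrative Python; it reproduces neither PRAC's privacy mechanism nor its adaptivity): the master distributes random linear combinations of the rows of A to workers and recovers A @ x from whichever coded results arrive first.

```python
# Minimal sketch of rateless coded distributed matrix-vector multiplication:
# the master encodes rows of A with random coefficients, workers each return one
# inner product, and the master decodes once enough results have arrived.
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n))          # data matrix held by the master
x = rng.standard_normal(n)               # input vector

def encode(A, num_coded):
    """Fountain-style encoding: random linear combinations of the rows of A."""
    G = rng.standard_normal((num_coded, A.shape[0]))   # coding coefficients
    return G, G @ A                                    # coded rows sent to workers

def worker(coded_row, x):
    """Each worker returns one inner product (possibly slow or failing)."""
    return coded_row @ x

# Simulate stragglers: send 2*m coded tasks, but only some workers respond.
G, coded_rows = encode(A, 2 * m)
responded = rng.permutation(2 * m)[: m + 5]            # finished workers
results = np.array([worker(coded_rows[i], x) for i in responded])

# Decode: solve the overdetermined system G_resp @ (A @ x) = results.
y_hat, *_ = np.linalg.lstsq(G[responded], results, rcond=None)
print(np.allclose(y_hat, A @ x))         # True once m independent results arrive
```

A private scheme would additionally mask the coded rows (for example with random padding shared only among non-colluding workers) so that no worker learns anything about A, which is the part PRAC adds on top of the rateless structure.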


Author(s):  
I. A. Papistas ◽  
Stefan Cosemans ◽  
Bram Rooseleer ◽  
Jonas Doevenspeck ◽  
M.-H. Na ◽  
...  
