Application of multiple-error-correcting binary BCH codes to optical matrix-vector multipliers
1996
Author(s): A. Tankut Caglar, Thomas F. Krile, John F. Walkup

Mathematics, 2021, Vol 9 (5), pp. 554
Author(s): Jiří Mazurek, Radomír Perzina, Jaroslav Ramík, David Bartl

In this paper, we compare three methods for deriving a priority vector in the theoretical framework of pairwise comparisons, namely the Geometric Mean Method (GMM), the Eigenvalue Method (EVM) and the Best–Worst Method (BWM), with respect to two features: sensitivity and order violation. As the research method, we apply One-Factor-At-a-Time (OFAT) sensitivity analysis via Monte Carlo simulations; the number of compared objects ranges from 3 to 8, and the comparison scale coincides with Saaty's fundamental scale from 1 to 9 with reciprocals. Our findings suggest that the BWM is, on average, statistically significantly more sensitive (and thus less robust) and more susceptible to order violation than the GMM and the EVM for every examined matrix (vector) size, even after adjusting for the different number of pairwise comparisons each method requires. On the other hand, differences in sensitivity and order violation between the GMM and the EVM were found to be mostly statistically insignificant.
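For readers unfamiliar with the two deterministic prioritization methods in this comparison, the following Python sketch shows how a priority vector is obtained from a reciprocal pairwise comparison matrix with the GMM and the EVM, and how a single OFAT perturbation can be measured. It is only an illustration under stated assumptions, not the authors' simulation code; the example matrix, the perturbed judgement value and the use of the L1 distance as a sensitivity proxy are assumptions made here.

import numpy as np

def gmm_priority(A):
    # Geometric Mean Method: geometric mean of each row, normalized to sum to 1.
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()

def evm_priority(A):
    # Eigenvalue Method: principal right eigenvector of A, normalized to sum to 1.
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Example 4x4 reciprocal matrix on Saaty's 1-9 scale (hypothetical judgements).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

w_gmm = gmm_priority(A)
w_evm = evm_priority(A)

# One-Factor-At-a-Time perturbation: change one judgement (and its reciprocal)
# and record how far the priority vector moves.
B = A.copy()
B[0, 1] = 5.0           # perturbed judgement (hypothetical value)
B[1, 0] = 1.0 / 5.0     # keep the matrix reciprocal
shift = np.linalg.norm(gmm_priority(B) - w_gmm, 1)  # L1 distance as a sensitivity proxy
print(w_gmm, w_evm, shift)

Repeating such perturbations over many randomly generated matrices, and comparing the resulting shifts and any changes in ranking order across methods, is the essence of the Monte Carlo OFAT analysis described above.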


Author(s): Ernesto Dufrechou, Pablo Ezzatti, Enrique S. Quintana-Ortí

More than 10 years of research on efficient GPU routines for the sparse matrix-vector product (SpMV) have led to several realizations, each with its own strengths and weaknesses. In this work, we review some of the most relevant efforts on the subject, evaluate a few prominent publicly available routines using more than 3,000 matrices from different applications, and apply machine learning techniques to anticipate which SpMV realization will perform best for each sparse matrix on a given parallel platform. Our numerical experiments confirm that the behavior of these methods varies so strongly with the matrix structure that identifying general rules for selecting the optimal method for a given matrix is extremely difficult, although some useful heuristics can be defined. Using a machine learning approach, we show that it is possible to obtain inexpensive classifiers that predict the best method for a given sparse matrix with over 80% accuracy, demonstrating that this approach can deliver important reductions in both execution time and energy consumption.
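At a high level, such a selection scheme amounts to extracting cheap structural features from each sparse matrix and training a standard classifier to predict the fastest SpMV realization as measured offline. The Python sketch below only illustrates this idea under stated assumptions and is not the authors' pipeline; the feature set, the candidate-format labels and the synthetic training data are assumptions made here.

import numpy as np
import scipy.sparse as sp
from sklearn.ensemble import RandomForestClassifier

def structural_features(A_csr):
    # Cheap structural features of a CSR matrix (size, density, row-length statistics).
    rows, cols = A_csr.shape
    nnz_per_row = np.diff(A_csr.indptr)
    return [rows, cols, A_csr.nnz,
            A_csr.nnz / rows,      # average nonzeros per row
            nnz_per_row.std(),     # row-length irregularity
            nnz_per_row.max()]

# Toy training set: random sparse matrices labeled with a stand-in for the
# fastest SpMV realization (in practice the label would come from offline timings).
rng = np.random.default_rng(0)
matrices = [sp.random(1000, 1000, density=d, format="csr", random_state=i)
            for i, d in enumerate(rng.uniform(1e-3, 1e-1, size=60))]
labels = ["ELL" if np.diff(A.indptr).std() < 5 else "CSR" for A in matrices]  # stand-in labels

X = np.array([structural_features(A) for A in matrices])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Predict the (assumed) best realization for an unseen matrix.
A_new = sp.random(2000, 2000, density=0.01, format="csr", random_state=123)
print(clf.predict([structural_features(A_new)])[0])

With real timing data in place of the stand-in labels, the same structure yields the kind of inexpensive classifier the abstract refers to.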


2017, Vol 43 (4), pp. 1-49
Author(s): Salvatore Filippone, Valeria Cardellini, Davide Barbieri, Alessandro Fanfarillo
