Numerical Linear Algebra
Recently Published Documents

TOTAL DOCUMENTS: 265 (FIVE YEARS: 43)
H-INDEX: 28 (FIVE YEARS: 2)

La Matematica ◽  
2021 ◽  
Author(s):  
Roozbeh Yousefzadeh ◽  
Dianne P. O’Leary

Abstract
Deep learning models have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Nevertheless, they are widely used in applications with consequences for human lives, usually because of their better performance. Therefore, there is a great need for computational methods that can explain, audit, and debug such models. Here, we use flip points to accomplish these goals for deep learning classifiers used in social applications. A trained deep learning classifier is a mathematical function that maps inputs to classes. Through training, the function partitions its domain and assigns a class to each partition. Partitions are defined by decision boundaries, which are expected to be geometrically complex. This complexity is usually what makes deep learning models powerful classifiers. Flip points are points on those boundaries and, therefore, the key to understanding and changing the functional behavior of models. We use advanced numerical optimization techniques and state-of-the-art methods in numerical linear algebra, such as rank determination and reduced-order models, to compute and analyze them. The resulting insight into the decision boundaries of a deep model can clearly explain the model's output at the individual level, via an explanation report that is understandable by non-experts. We also develop a procedure to understand and audit model behavior towards groups of people. We show that examining decision boundaries of models in certain subspaces can reveal hidden biases that are not easily detectable. Flip points can also be used as synthetic data to alter the decision boundaries of a model and improve its functional behavior. We demonstrate our methods by investigating several models trained on standard datasets used in social applications of machine learning.
We also identify the features that are most responsible for particular classifications and misclassifications. Finally, we discuss the implications of our auditing procedure in the public policy domain.
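For a linear model, the flip point closest to a given input has a closed form, which makes the idea above easy to see. The weights, bias, and input below are arbitrary illustrative values; for a deep network the boundary is nonlinear and the closest flip point must instead be found by the numerical optimization the abstract describes.

```python
import numpy as np

# Toy "classifier": a linear logit f(x) = w.x + b; the decision boundary is f(x) = 0.
w = np.array([2.0, -1.0])
b = 0.5
x0 = np.array([1.0, 1.0])          # input whose classification we want to explain

# Closest flip point: orthogonal projection of x0 onto the hyperplane w.x + b = 0.
flip = x0 - (w @ x0 + b) / (w @ w) * w

print(flip)                         # the flip point lies exactly on the boundary
print(np.linalg.norm(flip - x0))    # distance to the boundary = margin of x0
```

The distance from the input to its flip point measures how robust the classification is, and the direction of the displacement identifies which features would have to change to flip the decision.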


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Kai Bergermann ◽  
Martin Stoll

Abstract
We study urban public transport systems by means of multiplex networks in which stops are represented as nodes and each line is represented by a layer. We determine and visualize public transport network orientations and compare them with street network orientations of the 36 largest German as well as 18 selected major European cities. We find that German urban public transport networks are mainly oriented in a direction close to the cardinal east-west axis, which usually coincides with one of two orthogonal preferential directions of the corresponding street network. While this behavior is present in only a subset of the considered European cities, only one of the considered public transport networks has a distinct north-south preferential orientation. Furthermore, we study the applicability of the class of matrix function-based centrality measures, which has recently been generalized from single-layer networks to layer-coupled multiplex networks, to our more general urban multiplex framework. Numerical experiments based on highly efficient and scalable methods from numerical linear algebra show promising results, which are in line with previous studies. The centrality measures allow detailed insights into geometrical properties of urban systems such as the spatial distribution of major transport axes, which cannot be inferred from orientation plots. We comment on advantages over existing methodology, elaborate on the comparison of different measures and weight models, and present detailed hyper-parameter studies. All results are illustrated by demonstrative graphical representations.
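As a minimal illustration of matrix function-based centrality, the sketch below computes single-layer subgraph centrality on a toy path graph of four stops. The multiplex version studied in the paper operates on a supra-adjacency matrix coupling the layers, which this sketch does not attempt.

```python
import numpy as np

# Path graph on 4 stops: 0-1-2-3. In the multiplex setting A would be the
# supra-adjacency matrix coupling the layers; one layer suffices to show the idea.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Subgraph centrality: the diagonal of exp(A), a weighted count of closed walks
# of every length through each node. A is symmetric, so exp(A) = V exp(W) V^T.
w, V = np.linalg.eigh(A)
centrality = np.diag(V @ np.diag(np.exp(w)) @ V.T)
print(centrality)   # interior stops score higher than terminal stops
```

Even on this tiny network, the measure ranks the well-connected interior stops above the terminal stops, which is the kind of structural insight the paper extracts from real transport networks.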


2021 ◽  
Vol 62 ◽  
pp. C58-C71
Author(s):  
Markus Hegland ◽  
Frank De Hoog

Positive semi-definite matrices commonly occur as normal matrices of least-squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense, so algorithms to solve systems with such a matrix can be very costly. A core idea for reducing computational complexity is to approximate the matrix by one of low rank. The optimal and well-understood choice is based on the eigenvalue decomposition of the matrix; unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination, but they require pivoting. We show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials of the eigenvalues. We discuss several bounds for the expected norm of the approximation error and include examples where this expected error norm can be computed exactly.
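The elementary symmetric polynomials of the eigenvalues, whose ratios appear in error formulas of this kind, can be read off from the coefficients of the characteristic polynomial. The eigenvalues below are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

# Eigenvalues of a small PSD matrix (assumed known here for illustration).
lam = np.array([4.0, 2.0, 1.0, 0.5])

# np.poly builds the monic polynomial with these roots; its coefficients are the
# signed elementary symmetric polynomials: coeffs[k] = (-1)^k * e_k(lam).
coeffs = np.poly(lam)
e = np.abs(coeffs)        # e[k] = e_k(lam); all lam >= 0, so signs simply alternate

# Ratios e_{k+1}/e_k of consecutive elementary symmetric polynomials are the
# quantities that govern the averaged rank-k approximation error.
k = 2
ratio = e[k + 1] / e[k]
print(e[1:], ratio)
```

This direct route through `np.poly` is only reasonable for small examples; for large matrices the ratios would be formed from recurrences rather than by expanding the characteristic polynomial.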


2021 ◽  
Vol 21 (2) ◽  
pp. e09
Author(s):  
Federico Favaro ◽  
Ernesto Dufrechou ◽  
Pablo Ezzatti ◽  
Juan Pablo Oliver

The dissemination of multi-core architectures and the later emergence of massively parallel devices led to a revolution in High-Performance Computing (HPC) platforms over the last decades. As a result, Field-Programmable Gate Arrays (FPGAs) are re-emerging as a versatile and more energy-efficient alternative to other platforms. Traditional FPGA design implies using low-level Hardware Description Languages (HDLs) such as VHDL or Verilog, which follow an entirely different programming model than standard software languages, and their use requires specialized knowledge of the underlying hardware. In recent years, manufacturers have made big efforts to provide High-Level Synthesis (HLS) tools in order to allow greater adoption of FPGAs in the HPC community. Our work studies the use of multi-core hardware and different FPGAs to address Numerical Linear Algebra (NLA) kernels such as the general matrix multiplication (GEMM) and the sparse matrix-vector multiplication (SpMV). Specifically, we compare the behavior of fine-tuned kernels on a multi-core CPU and HLS implementations on FPGAs. We perform the experimental evaluation of our implementations on a low-end and a cutting-edge FPGA platform, in terms of runtime and energy consumption, and compare the results against the Intel MKL library on CPU.
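The SpMV kernel mentioned above is compact enough to sketch. A plain CSR matrix-vector product like the following (with illustrative data, not the paper's benchmarks) is essentially the loop nest that HLS tools are asked to pipeline onto the FPGA.

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """y = A @ x for a CSR matrix: the kernel typically ported to FPGA via HLS."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        # indptr delimits each row's slice of the data/indices arrays.
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# CSR form of [[1, 0, 2], [0, 3, 0]]
data = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 2, 1])
indptr = np.array([0, 2, 3])
x = np.array([1.0, 1.0, 1.0])
y = spmv_csr(data, indices, indptr, x)
print(y)
```

The irregular, data-dependent inner loop is what makes SpMV harder to accelerate than the dense, regular GEMM, both on CPUs and in HLS designs.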


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2501
Author(s):  
Khosro Sayevand ◽  
Ahmad Pourdarvish ◽  
José A. Tenreiro Machado ◽  
Raziye Erfanifar

This paper presents a third-order iterative method for obtaining the Moore–Penrose and Drazin inverses with a computational cost of O(n³), where n ∈ ℕ. The performance of the new approach is compared with other methods discussed in the literature. The results show that the algorithm is remarkably efficient and accurate. Furthermore, sufficient criteria in the fractional sense are presented, both for smooth and non-smooth solutions. The fractional elliptic Poisson and fractional sub-diffusion equations in the Caputo sense are considered as prototype examples. The results can be extended to other scientific areas involving numerical linear algebra.
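A classical third-order scheme of this kind is the hyper-power iteration; the paper's exact method may differ, but the sketch below shows the order-three pattern converging to the Moore–Penrose inverse of an arbitrary test matrix.

```python
import numpy as np

# Classical third-order hyper-power iteration for the Moore-Penrose inverse
# (a standard scheme of the same order, not necessarily the paper's method):
#   X_{k+1} = X_k (I + R_k + R_k^2),  where  R_k = I - A X_k.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3x2, full column rank

# Safe starting guess: X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(A.shape[0])
for _ in range(30):
    R = I - A @ X
    X = X @ (I + R + R @ R)

print(X)   # converges to the Moore-Penrose pseudoinverse of A
```

Each step triples the number of correct digits in the residual, which is what "third order" means here; the per-iteration cost is a few matrix products, i.e., O(n³) for square n-by-n matrices.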


Author(s):  
Katsuhisa Ozaki ◽  
Takeshi Ogita

Abstract
This paper concerns test matrices for numerical linear algebra using an error-free transformation of floating-point arithmetic. For eigenvalues specified by a user, we propose methods of generating a matrix whose eigenvalues are exactly known, based on, for example, the Schur or Jordan normal form and a block diagonal form. It is also possible to produce a real matrix with specified complex eigenvalues. Such test matrices with exactly known eigenvalues are useful for checking the accuracy of the results computed by numerical algorithms; in particular, exact errors of eigenvalues can be monitored. To generate test matrices, we first propose an error-free transformation for the product of three matrices YSX. We approximate S by S′ to compute YS′X without a rounding error. Next, the error-free transformation is applied to the generation of test matrices with exactly known eigenvalues. Note that the exactly known eigenvalues of the constructed matrix may differ from the anticipated given eigenvalues. Finally, numerical examples illustrate the accuracy of numerical computations for symmetric and unsymmetric eigenvalue problems.
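The simplest instance of the exact-eigenvalue idea uses integer arithmetic: with a unimodular integer X, the product X D X⁻¹ involves no rounding, so the stored floating-point matrix has exactly the prescribed eigenvalues. This is only an illustration of the goal, not the paper's error-free transformation.

```python
import numpy as np

# Prescribed eigenvalues on the diagonal of D.
D = np.diag([3, -1])

# Unimodular integer matrix (det = 1), so its inverse is also an integer matrix.
X = np.array([[1, 1], [1, 2]])
Xinv = np.array([[2, -1], [-1, 1]])   # exact inverse of X

# All products stay in small-integer arithmetic, so casting to float is exact:
A = (X @ D @ Xinv).astype(float)
print(A)                                   # a non-trivial dense test matrix
print(sorted(np.linalg.eigvals(A).real))   # eigenvalues are exactly 3 and -1
```

Feeding such a matrix to an eigensolver lets one measure the solver's error exactly, which is precisely the use case the abstract describes.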


Acoustics ◽  
2021 ◽  
Vol 3 (3) ◽  
pp. 581-594
Author(s):  
Art J. R. Pelling ◽  
Ennes Sarradj

State-space models have been successfully employed for model order reduction and control purposes in acoustics. However, due to the cubic complexity of the singular value decomposition, which makes up the core of many subspace system identification (SSID) methods, the construction of large-scale state-space models from high-dimensional measurement data has been problematic in the past. Recent advances in numerical linear algebra have brought forth computationally efficient randomized rank-revealing matrix factorizations, and it has been shown that these factorizations can be used to enhance SSID methods such as the Eigensystem Realization Algorithm (ERA). In this paper, we demonstrate the applicability of the so-called generalized ERA to acoustical systems and high-dimensional input data by means of an example. Furthermore, we introduce a new efficient method of forced response computation that relies on a state-space model in quasi-diagonal form. Numerical experiments reveal that our proposed method is more efficient than previous state-space methods and can even outperform frequency-domain convolutions in certain scenarios.
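The randomized rank-revealing factorizations mentioned above follow a range-finder pattern: sketch the range of the matrix with random test vectors, then do the expensive SVD only on a small projected matrix. The sketch below is in the style of Halko, Martinsson, and Tropp; the function name and parameters are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, oversample=10):
    """Randomized SVD: the kind of rank-revealing factorization that replaces
    the cubic-cost full SVD inside subspace identification methods."""
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test vectors
    Q, _ = np.linalg.qr(A @ Omega)                     # basis for the range of A
    # SVD of the small projected matrix, then lift back to the original space.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Exactly rank-5 test matrix: the factorization recovers it to round-off.
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, 5)
print(np.linalg.norm(U * s @ Vt - A))   # reconstruction error near machine precision
```

The cost is dominated by one pass of matrix products with k + oversample vectors, instead of a full SVD of the tall measurement matrix, which is what makes large-scale identification tractable.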

