Some algorithms for maximum volume and cross approximation of symmetric semidefinite matrices

Author(s):  
Stefano Massei

Various applications in numerical linear algebra and computer science are related to selecting the $$r\times r$$ submatrix of maximum volume contained in a given matrix $$A\in \mathbb R^{n\times n}$$. We propose a new greedy algorithm of cost $$\mathcal O(n)$$ for the case where A is symmetric positive semidefinite (SPSD), and we discuss its extension to related optimization problems such as the maximum ratio of volumes. In the second part of the paper we prove that any SPSD matrix admits a cross approximation built on a principal submatrix whose approximation error is bounded by $$(r+1)$$ times the error of the best rank-$$r$$ approximation in the nuclear norm. In the spirit of recent work by Cortinovis and Kressner, we derive deterministic algorithms which are capable of retrieving a quasi-optimal cross approximation with cost $$\mathcal O(n^3)$$.
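A minimal NumPy sketch of the greedy diagonal-pivoting idea for SPSD matrices (equivalent to a partial pivoted Cholesky factorization), which underlies this type of volume-maximizing index selection; the function name and stopping rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def greedy_spsd_cross(A, r):
    """Greedily select r pivot indices of an SPSD matrix A and return the
    cross approximation built on the corresponding principal submatrix.

    Each step picks the largest diagonal entry of the current Schur
    complement (pivoted-Cholesky-style greedy volume maximization).
    """
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # diagonal of the Schur complement
    L = np.zeros((n, r))                  # partial Cholesky factor
    idx = []
    for k in range(r):
        i = int(np.argmax(d))
        if d[i] <= 0:                     # numerical rank reached
            break
        idx.append(i)
        L[:, k] = (A[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
        d -= L[:, k] ** 2                 # update Schur-complement diagonal
    L = L[:, :len(idx)]
    return idx, L @ L.T                   # equals A[:, idx] A[idx, idx]^{-1} A[idx, :]

# Usage: rank-5 cross approximation of a random SPSD matrix
X = np.random.randn(200, 20)
A = X @ X.T
idx, A5 = greedy_spsd_cross(A, 5)
print(idx, np.linalg.norm(A - A5))
```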

Author(s):  
Xiaofei Shi
David P. Woodruff

We show how to solve a number of problems in numerical linear algebra, such as least squares regression, ℓ_p-regression for any p ≥ 1, low rank approximation, and kernel regression, in time T(A) poly(log(nd)), where for a given input matrix A ∈ R^{n×d}, T(A) is the time needed to compute A · y for an arbitrary vector y ∈ R^d. Since T(A) ≤ O(nnz(A)), where nnz(A) denotes the number of non-zero entries of A, the time is no worse, up to polylogarithmic factors, than all of the recent advances for such problems that run in input-sparsity time. However, for many applications, T(A) can be much smaller than nnz(A), yielding significantly sublinear time algorithms. For example, in the overconstrained (1+ε)-approximate polynomial interpolation problem, A is a Vandermonde matrix and T(A) = O(n log n); in this case our running time is n · poly(log n) + poly(d/ε) and we recover the results of Avron, Sindhwani, and Woodruff (2013) as a special case. For overconstrained autoregression, which is a common problem arising in dynamical systems, T(A) = O(n log n), and we immediately obtain n · poly(log n) + poly(d/ε) time. For kernel autoregression, we significantly improve the running time of prior algorithms for general kernels. For the important case of autoregression with the polynomial kernel and arbitrary target vector b ∈ R^n, we obtain even faster algorithms. Our algorithms show that, perhaps surprisingly, most of these optimization problems do not require much more time than that of a polylogarithmic number of matrix-vector multiplications.
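As an illustration of the matrix-vector access model (not of the paper's sketching-based algorithms), the sketch below solves an overconstrained least squares problem while touching A only through the products A · y and Aᵀ · z, so the overall cost is a problem-dependent number of T(A)-time operations; replacing the dense products with a structured O(n log n) routine (e.g., for a Vandermonde matrix) is where the speed-up would come from.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

n, d = 10_000, 50
A_dense = np.random.randn(n, d)          # stand-in for a structured matrix
b = np.random.randn(n)

# The solver only ever sees A through these two callbacks.
A = LinearOperator(
    shape=(n, d),
    matvec=lambda y: A_dense @ y,        # cost T(A) per call
    rmatvec=lambda z: A_dense.T @ z,
)

x = lsqr(A, b)[0]                        # least-squares solution via matvecs only
print(np.linalg.norm(A_dense @ x - b))
```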


2014, Vol. 2014, pp. 1-10
Author(s):  
Minghua Xu
Yong Zhang
Qinglong Huang
Zhenhua Yang

We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics, and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods and can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are required to be greater than zero. In this paper, we show that this restriction on the proximal parameters can be relaxed for this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with the relaxed proximal parameters is convergent and generally performs better than the classical proximal alternating direction method.
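One building block of such schemes is the projection onto the SPSD cone, which serves as the proximal step for the semidefiniteness constraint; the sketch below shows this eigenvalue clipping in NumPy (a standard ingredient, not the full proximal alternating direction method studied in the paper).

```python
import numpy as np

def project_spsd(M):
    """Nearest symmetric positive semidefinite matrix to M in the Frobenius
    norm, obtained by symmetrizing and clipping negative eigenvalues."""
    S = (M + M.T) / 2
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0)) @ V.T   # V diag(max(w, 0)) V^T

# Usage: project an indefinite matrix onto the SPSD cone
G = np.random.randn(5, 5)
X = project_spsd(G)
print(np.linalg.eigvalsh(X).min() >= -1e-12)   # True up to round-off
```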


2020, Vol. 26 (4), pp. 273-284
Author(s):  
Hao Ji
Michael Mascagni
Yaohang Li

In this article, we consider the general problem of checking the correctness of matrix multiplication. Given three $$n\times n$$ matrices A, B and C, the goal is to verify that $$A\times B=C$$ without carrying out the computationally costly operation of matrix multiplication and comparing the product $$A\times B$$ with C, term by term. This is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. Here we extend Freivalds' algorithm to a Gaussian Variant of Freivalds' Algorithm (GVFA) by projecting the product $$A\times B$$ as well as C onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds' algorithm, which is $$O(n^2)$$. However, unlike Freivalds' algorithm, whose probability of a false positive is $$2^{-k}$$, where k is the number of iterations, our theoretical analysis shows that, when $$A\times B\neq C$$, GVFA produces a false positive on a set of inputs of measure zero with exact arithmetic. When we introduce round-off error and floating-point arithmetic into our analysis, we can show that the larger this error, the higher the probability that GVFA avoids false positives. Moreover, by iterating GVFA k times, the probability of a false positive decreases as $$p^{k}$$, where p is a very small value depending on the nature of the fault on the result matrix and the arithmetic system's floating-point precision. Unlike deterministic algorithms, there do not exist any fault patterns that are completely undetectable with GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be efficiently implemented on modern computing architectures. In particular, GVFA can be very efficiently implemented on architectures with hardware support for fused multiply-add operations.
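A minimal NumPy sketch of the verification step described above: project A · B and C onto a Gaussian random vector and compare the results, repeating k times; the function name and the relative tolerance are illustrative choices.

```python
import numpy as np

def gvfa(A, B, C, k=1, tol=1e-8):
    """Gaussian variant of Freivalds' algorithm: test whether A @ B == C
    using k Gaussian random projections, at O(n^2) cost per iteration."""
    n = C.shape[1]
    for _ in range(k):
        x = np.random.standard_normal(n)
        lhs = A @ (B @ x)                  # never forms the full product A @ B
        rhs = C @ x
        if np.linalg.norm(lhs - rhs) > tol * max(np.linalg.norm(rhs), 1.0):
            return False                   # fault detected
    return True                            # no discrepancy observed

A = np.random.randn(300, 300)
B = np.random.randn(300, 300)
C = A @ B
print(gvfa(A, B, C))                       # True
print(gvfa(A, B, C + 1e-3 * np.eye(300)))  # False (fault injected)
```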


Author(s):  
Mareike Dressler
Adam Kurpisz
Timo de Wolff

Various key problems from theoretical computer science can be expressed as polynomial optimization problems over the boolean hypercube. One particularly successful way to prove complexity bounds for these types of problems is based on sums of squares (SOS) as nonnegativity certificates. In this article, we initiate the study of optimization problems over the boolean hypercube via a recent, alternative certificate called sums of nonnegative circuit polynomials (SONC). We show that key results for SOS-based certificates remain valid: First, for polynomials which are nonnegative over the n-variate boolean hypercube with constraints of degree d, there exists a SONC certificate of degree at most $$n+d$$. Second, if there exists a degree-d SONC certificate for nonnegativity of a polynomial over the boolean hypercube, then there also exists a short degree-d SONC certificate that includes at most $$n^{O(d)}$$ nonnegative circuit polynomials. Moreover, we prove that, in contrast to SOS, the SONC cone is not closed under affine transformations of variables and that for SONC there does not exist an analogue of Putinar's Positivstellensatz for SOS. We discuss these results from both the algebraic and the optimization perspective.
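As a concrete illustration of the circuit condition behind SONC certificates (a classical textbook example, not taken from the article), consider the Motzkin polynomial $$M(x,y) = x^4y^2 + x^2y^4 + 1 - 3x^2y^2$$. Its Newton polytope is the simplex with vertices (4,2), (2,4) and (0,0), and the exponent (2,2) of the single negative term is their barycenter, $$(2,2) = \tfrac{1}{3}(4,2) + \tfrac{1}{3}(2,4) + \tfrac{1}{3}(0,0)$$. The associated circuit number is $$\Theta = 3^{1/3}\cdot 3^{1/3}\cdot 3^{1/3} = 3$$, and since the inner coefficient satisfies $$|-3| \le \Theta$$, the AM-GM inequality gives $$x^4y^2 + x^2y^4 + 1 \ge 3x^2y^2$$, certifying that M is a nonnegative circuit polynomial on all of $$\mathbb R^2$$.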


Author(s):  
Nicola Mastronardi
Gene H. Golub
Shivkumar Chandrasekaran
Marc Moonen
Paul Van Dooren
...  
