On the approximate bilinear complexity of matrix multiplication

2014 ◽ Vol 38 (4) ◽ pp. 177-180
Author(s): A. P. Trefilov


Author(s): Edgar Solomonik ◽ James Demmel

Abstract
In matrix-vector multiplication, matrix symmetry does not permit a straightforward reduction in computational cost. More generally, in contractions of symmetric tensors, the symmetries are not preserved in the usual algebraic form of contraction algorithms. We introduce an algorithm that reduces the bilinear complexity (number of computed elementwise products) for most types of symmetric tensor contractions. In particular, it lowers the bilinear complexity of symmetrized contractions of symmetric tensors of order $s+v$ and $v+t$ by a factor of $\frac{(s+t+v)!}{s!\,t!\,v!}$ to leading order. The algorithm computes a symmetric tensor of bilinear products, then subtracts unwanted parts of its partial sums. Special cases of this algorithm provide improvements to the bilinear complexity of the multiplication of a symmetric matrix and a vector, the symmetrized vector outer product, and the symmetrized product of symmetric matrices. While the algorithm requires more additions per elementwise product, the total number of operations is in some cases lower than that of classical algorithms, for tensors of any size. We provide a round-off error analysis of the algorithm and demonstrate that the error is not too large in practice. Finally, we provide an optimized implementation for one variant of the symmetry-preserving algorithm, which achieves speedups of up to $4.58\times$ for a particular tensor contraction, relative to a classical approach that casts the problem as a matrix-matrix multiplication.
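To illustrate the special case for the multiplication of a symmetric matrix and a vector: the sketch below is a reconstruction from the abstract's description (form a symmetric set of bilinear products, then subtract the unwanted partial sums), not the authors' optimized implementation. With the particular products $b_{ij} = a_{ij}(x_i + x_j)$, it uses $n(n+1)/2 + n$ elementwise multiplications instead of $n^2$, matching the factor of $\frac{(1+0+1)!}{1!\,0!\,1!} = 2$ that the formula predicts for this case ($s = v = 1$, $t = 0$).

```python
import numpy as np

def symv_fewer_multiplications(A, x):
    """Multiply a symmetric matrix A by a vector x using about
    n(n+1)/2 + n elementwise multiplications instead of n^2.
    Sketch of the symmetric matrix-vector special case: compute
    symmetric bilinear products b_ij = a_ij * (x_i + x_j), then
    subtract the unwanted partial sum x_i * (sum_j a_ij)."""
    n = A.shape[0]
    b = np.zeros((n, n))
    # Only i <= j products are computed: n(n+1)/2 multiplications.
    for i in range(n):
        for j in range(i, n):
            b[i, j] = A[i, j] * (x[i] + x[j])
            b[j, i] = b[i, j]  # symmetry of b is reused, not recomputed
    # sum_j b_ij = sum_j a_ij x_j + x_i * sum_j a_ij, so n more
    # multiplications remove the unwanted term from each row sum:
    y = np.array([b[i].sum() - x[i] * A[i].sum() for i in range(n)])
    return y

# Quick check against the classical algorithm.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                       # make A symmetric
x = rng.standard_normal(5)
assert np.allclose(symv_fewer_multiplications(A, x), A @ x)
```

As the abstract notes, the reduction in multiplications is paid for with extra additions (the $x_i + x_j$ terms and the row sums of $A$), which is why the total operation count only sometimes beats the classical algorithm.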


Author(s): Yaniv Aspis ◽ Krysia Broda ◽ Alessandra Russo ◽ Jorge Lobo

We introduce a novel approach for the computation of stable and supported models of normal logic programs in continuous vector spaces by a gradient-based search method. Specifically, the application of the immediate consequence operator of a program reduct can be computed in a vector space. To do this, Herbrand interpretations of a propositional program are embedded as 0-1 vectors in $\mathbb{R}^N$ and program reducts are represented as matrices in $\mathbb{R}^{N \times N}$. Using these representations, we prove that the underlying semantics of a normal logic program is captured through matrix multiplication and a differentiable operation. As supported and stable models of a normal logic program can now be seen as fixed points in a continuous space, non-monotonic deduction can be performed using an optimisation process such as Newton's method. We report the results of several experiments using synthetically generated programs that demonstrate the feasibility of the approach and highlight how different parameter values can affect the behaviour of the system.
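To make the embedding concrete, here is a minimal sketch of the immediate consequence operator computed via matrix multiplication, under simplifying assumptions of our own: a definite program with at most one rule per head atom, and a hard threshold standing in for the paper's differentiable operation. The authors' actual encoding differs in its details.

```python
import numpy as np

# Sketch: atoms are indexed 0..N-1, a Herbrand interpretation is a
# 0-1 vector v in R^N, and a (positive) program is a matrix M in
# R^{N x N}. We set M[h, b] = 1/k for each of the k body atoms b of
# the rule for head h, so (M v)[h] reaches 1 exactly when the whole
# body is true under v.

def program_matrix(rules, num_atoms):
    """rules: dict mapping a head atom index to its list of body atoms."""
    M = np.zeros((num_atoms, num_atoms))
    facts = np.zeros(num_atoms)
    for head, body in rules.items():
        if body:
            M[head, body] = 1.0 / len(body)
        else:
            facts[head] = 1.0  # a fact is derived unconditionally
    return M, facts

def t_p(M, facts, v):
    """One application of the immediate consequence operator via matrix
    multiplication. The hard threshold here is what a differentiable
    operation (as in the abstract) would replace, so that fixed points --
    supported and stable models -- can be found by gradient-based search."""
    return np.maximum(facts, (M @ v >= 1.0 - 1e-9).astype(float))

# Example program: p.  q :- p.  r :- p, q.   (atoms: p=0, q=1, r=2)
M, facts = program_matrix({0: [], 1: [0], 2: [0, 1]}, num_atoms=3)
v = np.zeros(3)
for _ in range(3):          # iterate T_P to its least fixed point
    v = t_p(M, facts, v)
print(v)                    # [1. 1. 1.] -- the least (and stable) model
```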


1983
Author(s): I. V. Ramakrishnan ◽ P. J. Varman

2002 ◽ Vol 109 (8) ◽ pp. 763
Author(s): Sung Soo Kim ◽ Richard Johnsonbaugh ◽ Ronald E. Prather ◽ Donald Knuth
