Multiplication of Matrices With Different Sparseness Properties on Dynamically Reconfigurable Meshes

VLSI Design ◽  
1999 ◽  
Vol 9 (1) ◽  
pp. 69-81 ◽  
Author(s):  
Martin Middendorf ◽  
Hartmut Schmeck ◽  
Heiko Schröder ◽  
Gavin Turner

Algorithms for multiplying several types of sparse n × n-matrices on dynamically reconfigurable n × n-arrays are presented. For some classes of sparse matrices, constant-time algorithms are given, e.g., when the first matrix has at most kn elements in each column or in each row and the second matrix has at most kn nonzero elements in each row, where k is a constant. Moreover, O(kn) algorithms are obtained for the case that one matrix is a general sparse matrix with at most kn nonzero elements and the other matrix has at most k nonzero elements in every row or in every column. Also, a lower bound of Ω(kn) is proved for this and other cases, which shows that the algorithms are close to optimal.
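
As a point of reference only, the following sequential sketch (not the reconfigurable-mesh algorithm of the paper) illustrates the bounded-row-sparsity class used above: when every row of A and of B holds at most k nonzero entries, a row-wise multiplication of the two n × n matrices performs at most k·k products per output row, i.e., O(nk²) work overall. All names are illustrative.

#include <cstddef>
#include <vector>

// One nonzero entry of a sparse matrix stored row-wise.
struct Entry { std::size_t col; double val; };
using SparseRows = std::vector<std::vector<Entry>>;  // rows[i] holds the nonzeros of row i

// Multiply C = A * B for n x n matrices in which every row of A and of B holds
// at most k nonzero entries; the two inner loops run at most k*k times per row.
SparseRows multiplySparse(const SparseRows& A, const SparseRows& B, std::size_t n) {
    SparseRows C(n);
    std::vector<double> acc(n, 0.0);           // dense accumulator for one output row
    std::vector<char> seen(n, 0);              // marks columns touched in this row
    for (std::size_t i = 0; i < n; ++i) {
        std::vector<std::size_t> touched;
        for (const Entry& a : A[i]) {          // at most k entries
            for (const Entry& b : B[a.col]) {  // at most k entries
                if (!seen[b.col]) { seen[b.col] = 1; touched.push_back(b.col); }
                acc[b.col] += a.val * b.val;
            }
        }
        for (std::size_t c : touched) {        // gather results and reset the accumulator
            C[i].push_back({c, acc[c]});
            acc[c] = 0.0;
            seen[c] = 0;
        }
    }
    return C;
}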

1998 ◽  
Vol 9 (11) ◽  
pp. 1057-1072 ◽  
Author(s):  
V.V. Bokka ◽  
H. Gurla ◽  
S. Olariu ◽  
J.L. Schwing

2000 ◽  
Vol 11 (04) ◽  
pp. 553-571 ◽  
Author(s):  
ANU G. BOURGEOIS ◽  
JERRY L. TRAHAN

Recently, researchers have proposed many models using reconfigurable optically pipelined buses. We present simulations for a number of these models and establish that they possess the same complexity, so that any one of these models can simulate a step of any other in constant time with a polynomial increase in size. Specifically, we determine the complexity of three optical models (the PR-Mesh, APPBS, and AROB) to be the same as that of the well-known LR-Mesh and CF-LR-Mesh.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph and subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and perform well in many applications. However, most existing works rely on the state-of-the-art greedy 2-approximation algorithm and therefore provide solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a weak guarantee on its density. Some methods can, on the other hand, estimate multiple subtensors, but they can guarantee the density with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing a theoretical and practical solution for estimating multiple dense subtensors in tensor data and by establishing a higher lower bound on the density. In particular, we prove a tighter lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach that finds multiple dense subtensors whose guaranteed density exceeds the lower bound used in state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
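
For reference, the greedy 2-approximation baseline the abstract refers to is Charikar-style peeling for the densest-subgraph problem: repeatedly delete a minimum-degree vertex and keep the intermediate subgraph of highest density |E|/|V|. The sketch below shows that baseline only (not the authors' multi-subtensor method); it assumes an undirected simple graph and uses illustrative names.

#include <cstddef>
#include <set>
#include <utility>
#include <vector>

// Greedy peeling (2-approximation) for the densest-subgraph problem.
// Vertices are 0..n-1; edges is a list of undirected pairs.
// Returns the vertex set of the densest intermediate subgraph seen.
std::vector<std::size_t> densestSubgraphGreedy(
        std::size_t n, const std::vector<std::pair<std::size_t, std::size_t>>& edges) {
    std::vector<std::vector<std::size_t>> adj(n);
    for (auto [u, v] : edges) { adj[u].push_back(v); adj[v].push_back(u); }

    std::vector<std::size_t> degree(n);
    std::set<std::pair<std::size_t, std::size_t>> byDegree;  // (degree, vertex)
    for (std::size_t v = 0; v < n; ++v) {
        degree[v] = adj[v].size();
        byDegree.insert({degree[v], v});
    }

    std::vector<char> removed(n, 0);
    std::vector<std::size_t> order;                 // peeling order
    double edgesLeft = static_cast<double>(edges.size());
    double bestDensity = n ? edgesLeft / n : 0.0;   // density of the full graph
    std::size_t bestPrefix = 0;                     // vertices peeled at the best point

    for (std::size_t step = 0; step < n; ++step) {
        auto [d, v] = *byDegree.begin();            // current minimum-degree vertex
        byDegree.erase(byDegree.begin());
        removed[v] = 1;
        order.push_back(v);
        edgesLeft -= static_cast<double>(d);        // its remaining incident edges disappear
        for (std::size_t w : adj[v]) {
            if (removed[w]) continue;
            byDegree.erase({degree[w], w});
            --degree[w];
            byDegree.insert({degree[w], w});
        }
        std::size_t verticesLeft = n - step - 1;
        if (verticesLeft > 0 && edgesLeft / verticesLeft > bestDensity) {
            bestDensity = edgesLeft / verticesLeft;
            bestPrefix = step + 1;
        }
    }

    std::vector<char> peeledEarly(n, 0);            // recover the best subgraph
    for (std::size_t i = 0; i < bestPrefix; ++i) peeledEarly[order[i]] = 1;
    std::vector<std::size_t> best;
    for (std::size_t v = 0; v < n; ++v) if (!peeledEarly[v]) best.push_back(v);
    return best;
}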


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
R. A. Abdelghany ◽  
A.-B. A. Mohamed ◽  
M. Tammam ◽  
Watson Kuo ◽  
H. Eleuch

We formulate the tripartite entropic uncertainty relation and predict its lower bound in a three-qubit Heisenberg XXZ spin chain when measuring an arbitrary pair of incompatible observables on one qubit while the other two serve as quantum memories. Our study reveals that the entanglement between the nearest neighbors plays an important role in reducing the uncertainty in measurement outcomes. In addition, we show that Dolatkhah's lower bound (Phys Rev A 102(5):052227, 2020) is tighter than that of Ming (Phys Rev A 102(1):012206, 2020), and that the dynamics of both bounds under phase decoherence depends on the choice of the observable pair. In the absence of phase decoherence, Ming's lower bound is time-invariant regardless of the chosen observable pair, while Dolatkhah's lower bound coincides exactly with the tripartite uncertainty for a specific choice of the pair.
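
For context, the base tripartite quantum-memory-assisted entropic uncertainty relation that both cited lower bounds tighten can be written as follows (a sketch in our notation; the specific Ming and Dolatkhah bounds are not reproduced here):

\[
  S(X\,|\,B) + S(Z\,|\,C) \;\ge\; -\log_2 c ,
  \qquad
  c = \max_{x,z} \bigl|\langle \psi_x | \phi_z \rangle\bigr|^2 ,
\]

where X and Z are the incompatible observables measured on the first qubit, \(|\psi_x\rangle\) and \(|\phi_z\rangle\) are their eigenvectors, and B and C are the two qubits serving as quantum memories.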


1995 ◽  
Vol 27 (1) ◽  
pp. 86-99 ◽  
Author(s):  
V. Bokka ◽  
H. Gurla ◽  
S. Olariu ◽  
J.L. Schwing

1970 ◽  
Vol 37 (2) ◽  
pp. 267-270 ◽  
Author(s):  
D. Pnueli

A method is presented to obtain both upper and lower bounds on eigenvalues when a variational formulation of the problem exists. The method consists of a systematic shift in the weight function. A detailed procedure is offered for one-dimensional problems, which makes improvement of the bounds possible and which involves the same order of detailed computation as the Rayleigh-Ritz method. The main contribution of this method is that it yields the “other bound,” i.e., the one which cannot be obtained by the Rayleigh-Ritz method.
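
For orientation, a generic one-dimensional variational (Sturm–Liouville) formulation illustrates the one-sided nature of the Rayleigh-Ritz bound that the paper complements; the paper's weight-shift construction itself is not reproduced here:

\[
  -\bigl(p(x)\,u'\bigr)' + q(x)\,u = \lambda\, w(x)\, u ,
  \qquad
  \lambda_1 \;=\; \min_{u}\;
  \frac{\int \bigl(p\,(u')^{2} + q\,u^{2}\bigr)\,dx}{\int w\,u^{2}\,dx} ,
\]

so every admissible trial function u yields an upper bound \(\lambda_1 \le R[u]\); the method summarized above shifts the weight function w to obtain the complementary lower bound.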


2012 ◽  
Vol 20 (3) ◽  
pp. 241-255 ◽  
Author(s):  
Eric Bavier ◽  
Mark Hoemmen ◽  
Sivasankaran Rajamanickam ◽  
Heidi Thornquist

Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
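
As a rough illustration of the calling conventions described above, here is a minimal sketch of a direct solve through Amesos2 and an iterative solve through Belos. It assumes a Tpetra-enabled Trilinos build with A an already-assembled Tpetra::CrsMatrix and X, B compatible Tpetra::MultiVectors; header names, template defaults, and parameter keys follow recent Trilinos releases and may differ slightly in other versions.

#include <Amesos2.hpp>
#include <BelosLinearProblem.hpp>
#include <BelosPseudoBlockGmresSolMgr.hpp>
#include <BelosTpetraAdapter.hpp>
#include <Teuchos_ParameterList.hpp>
#include <Teuchos_RCP.hpp>
#include <Tpetra_CrsMatrix.hpp>
#include <Tpetra_MultiVector.hpp>

using SC  = double;                   // the "Scalar" type; extended precision is also possible
using MAT = Tpetra::CrsMatrix<SC>;
using MV  = Tpetra::MultiVector<SC>;
using OP  = Tpetra::Operator<SC>;

// Direct solve with Amesos2: one interface in front of several factorization codes.
void solveDirect(Teuchos::RCP<const MAT> A, Teuchos::RCP<MV> X, Teuchos::RCP<const MV> B) {
  auto solver = Amesos2::create<MAT, MV>("KLU2", A, X, B);  // "KLU2" is the bundled default solver
  solver->symbolicFactorization();
  solver->numericFactorization();
  solver->solve();                    // X now holds the solution
}

// Iterative solve with Belos: the algorithm only sees the MV/OP abstractions.
void solveIterative(Teuchos::RCP<const OP> A, Teuchos::RCP<MV> X, Teuchos::RCP<const MV> B) {
  auto problem = Teuchos::rcp(new Belos::LinearProblem<SC, MV, OP>(A, X, B));
  problem->setProblem();              // finalize the problem definition

  auto params = Teuchos::parameterList();
  params->set("Maximum Iterations", 500);
  params->set("Convergence Tolerance", 1.0e-8);

  Belos::PseudoBlockGmresSolMgr<SC, MV, OP> gmres(problem, params);
  Belos::ReturnType status = gmres.solve();   // Belos::Converged or Belos::Unconverged
  (void)status;
}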

