Some Bounds for Inverses Involving Matrix Sparsity Pattern

2020, Vol. 249 (2), pp. 242-255
Author(s): L. Yu. Kolotilina
2021, Vol. 14 (4), pp. 1-28
Author(s): Tao Yang, Zhezhi He, Tengchuan Kou, Qingzheng Li, Qi Han, ...

Field-Programmable Gate Arrays (FPGAs) are a high-performance computing platform for Convolutional Neural Network (CNN) inference. The Winograd algorithm, weight pruning, and quantization are widely adopted to reduce the storage and arithmetic overhead of CNNs on FPGAs. Recent studies strive to prune the weights in the Winograd domain; however, this results in irregular sparse patterns, leading to low parallelism and reduced resource utilization. Moreover, few works discuss a suitable quantization scheme for Winograd. In this article, we propose a regular sparse pruning pattern for Winograd-based CNNs, namely the Sub-row-balanced Sparsity (SRBS) pattern, to overcome the challenge of irregular sparsity. We then develop a two-step hardware co-optimization approach to improve the model accuracy under the SRBS pattern. Based on the pruned model, we apply mixed-precision quantization to further reduce the computational complexity of bit operations. Finally, we design an FPGA accelerator that exploits both the SRBS pattern, to eliminate low-parallelism computation and irregular memory accesses, and the mixed-precision quantization, to obtain a layer-wise bit width. Experimental results on VGG16/VGG-nagadomi with CIFAR-10 and ResNet-18/34/50 with ImageNet show up to 11.8×/8.67× and 8.17×/8.31×/10.6× speedup and 12.74×/9.19× and 8.75×/8.81×/11.1× energy-efficiency improvement, respectively, compared with the state-of-the-art dense Winograd accelerator [20], with negligible loss of model accuracy. We also show that our design achieves a 4.11× speedup over the state-of-the-art sparse Winograd accelerator [19] on VGG16.
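To make Winograd-domain pruning concrete, here is a minimal NumPy sketch of a single F(2×2, 3×3) Winograd output tile with an optional mask applied to the transformed weights. The transform matrices are the standard ones; the mask, the function name, and the overall structure are illustrative assumptions, not the paper's SRBS implementation or accelerator code.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices.
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
B_T = np.array([[1, 0, -1, 0],
                [0, 1, 1, 0],
                [0, -1, 1, 0],
                [0, 1, 0, -1]], dtype=float)
A_T = np.array([[1, 1, 1, 0],
                [0, 1, -1, -1]], dtype=float)

def winograd_tile(tile, kernel, mask=None):
    """One 2x2 output tile of a 3x3 convolution via Winograd F(2x2, 3x3).

    tile   : 4x4 input patch
    kernel : 3x3 filter
    mask   : optional 4x4 binary mask on the transformed weights,
             i.e. pruning in the Winograd domain (illustrative only,
             not the SRBS pattern itself)
    """
    U = G @ kernel @ G.T        # transformed weights (4x4)
    if mask is not None:
        U = U * mask            # Winograd-domain pruning
    V = B_T @ tile @ B_T.T      # transformed input (4x4)
    M = U * V                   # element-wise product
    return A_T @ M @ A_T.T      # inverse transform to the 2x2 output

# Sanity check against direct (valid) correlation on one tile.
rng = np.random.default_rng(0)
d = rng.standard_normal((4, 4))
g = rng.standard_normal((3, 3))
direct = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)]
                   for i in range(2)])
assert np.allclose(winograd_tile(d, g), direct)
```

Zeroing entries of U (rather than of the 3×3 kernel) is what makes the sparsity land in the Winograd domain; when those zeros fall irregularly, the element-wise product loses parallelism, which is the problem the SRBS pattern is designed to regularize.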


2018, Vol. 13 (4), pp. 645-655
Author(s): Julie Nutini, Mark Schmidt, Warren Hare

Entropy, 2019, Vol. 21 (3), p. 247
Author(s): Mohammad Shekaramiz, Todd Moon, Jacob Gunther

We consider the sparse recovery problem for signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or clumpy behavior, along each column, as well as joint sparsity across the columns. In this paper, we propose a new sparse Bayesian learning (SBL) method that incorporates a total variation-like prior as a measure of the overall clustering pattern in the solution. We further incorporate a parameter in this prior to control the emphasis on the amount of clumpiness in the support of the solution, improving the recovery performance for sparse signals with an unknown clustering pattern. This parameter, which is absent from existing algorithms, is learned via our hierarchical SBL algorithm. While the proposed algorithm is constructed for MMVs, it can also be applied to single measurement vector (SMV) problems. Simulation results show the effectiveness of our algorithm compared to other algorithms for both SMV and MMV problems.
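For reference, below is a minimal EM-style sparse Bayesian learning baseline for the MMV model Y = AX + N, sketched in NumPy with standard row-wise variance hyperparameters (M-SBL-type updates). It deliberately omits the total variation-like clustering prior and the clumpiness parameter proposed in the paper; the function name, fixed noise variance, and iteration count are illustrative assumptions.

```python
import numpy as np

def msbl(A, Y, sigma2=1e-3, n_iter=100):
    """Baseline EM updates for MMV sparse Bayesian learning (no clustering prior).

    A : (m, n) sensing matrix, Y : (m, L) measurement matrix.
    Returns the posterior mean of X and the learned row variances gamma.
    """
    m, n = A.shape
    L = Y.shape[1]
    gamma = np.ones(n)                        # row-wise variance hyperparameters
    for _ in range(n_iter):
        G = np.diag(gamma)
        Sigma_y = sigma2 * np.eye(m) + A @ G @ A.T
        K = np.linalg.solve(Sigma_y, A @ G)   # Sigma_y^{-1} A Gamma, shape (m, n)
        Mu = K.T @ Y                          # posterior mean of X, shape (n, L)
        # Diagonal of the posterior covariance: gamma_i - [Gamma A^T Sigma_y^{-1} A Gamma]_{ii}
        Sigma_diag = gamma - np.sum((A @ G).T * K.T, axis=1)
        gamma = np.sum(Mu ** 2, axis=1) / L + Sigma_diag   # EM hyperparameter update
    return Mu, gamma

# Small synthetic example: 5 active rows shared across L = 4 measurement vectors.
rng = np.random.default_rng(1)
m, n, L, k = 20, 50, 4, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
X = np.zeros((n, L))
X[rng.choice(n, k, replace=False)] = rng.standard_normal((k, L))
Y = A @ X + 0.01 * rng.standard_normal((m, L))
X_hat, gamma = msbl(A, Y)
```

The paper's contribution sits on top of updates like these: the total variation-like prior couples neighboring gamma_i within each column, and the extra learned parameter tunes how strongly that coupling (clumpiness) is enforced.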


2015, Vol. 32 (1), pp. 243-259
Author(s): Anders Bredahl Kock

We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency as if only these had been included in the model from the outset. In particular, this implies that it is able to discriminate between stationary and nonstationary autoregressions, and it thereby constitutes an addition to the set of unit root tests. Next, and importantly in practice, we show that choosing the tuning parameter by the Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned to perform conservative model selection, it has power even against shrinking alternatives of this form, and we compare it to the plain Lasso.
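As a rough illustration of the estimator under discussion, here is a generic adaptive Lasso for an AR(p) model with BIC-based tuning, sketched in Python with scikit-learn. The lag order, weight exponent, tuning grid, and BIC formula are assumptions chosen for illustration, not the paper's exact procedure or asymptotic framework.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso_ar(y, p=4, gamma=1.0, alphas=None):
    """Adaptive Lasso for an AR(p) regression with BIC-chosen tuning parameter.

    Weights come from first-stage OLS estimates; the weighted penalty is solved
    by rescaling the regressors and running a plain Lasso.
    """
    T = len(y) - p
    # Lag matrix: column j holds y lagged by j+1 periods.
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    z = y[p:]
    b_ols, *_ = np.linalg.lstsq(X, z, rcond=None)
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)   # adaptive weights
    Xw = X / w                                   # rescaling absorbs the weights
    if alphas is None:
        alphas = np.logspace(-4, 0, 30)
    best = None
    for a in alphas:
        fit = Lasso(alpha=a, fit_intercept=False, max_iter=10000).fit(Xw, z)
        b = fit.coef_ / w                        # undo the rescaling
        rss = np.sum((z - X @ b) ** 2)
        k = np.count_nonzero(b)
        bic = T * np.log(rss / T) + k * np.log(T)
        if best is None or bic < best[0]:
            best = (bic, a, b)
    return best[2], best[1]                      # coefficients, selected alpha
```

How the tuning parameter is chosen is exactly the dividing line the abstract draws: BIC-style tuning gives consistent model selection (and hence no power against c/T alternatives), while a more conservative choice of the penalty trades selection consistency for local power.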


1989, Vol. 29 (4), pp. 610-634
Author(s): A. Greenbaum, G. H. Rodrigue

1988, Vol. 107, pp. 101-149
Author(s): Jim Agler, William Helton, Scott McCullough, Leiba Rodman
