Approximate Message Passing
Recently Published Documents

Total documents: 289 (last five years: 99)
H-index: 26 (last five years: 5)

2021 · Vol. 2021(12) · pp. 124004
Author(s): Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

Abstract: We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network (NN) from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows and columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer NNs. We extend the recently developed multi-layer vector approximate message passing (ML-VAMP) algorithm to this matrix-valued inference problem. We show that the performance of the proposed multi-layer matrix VAMP algorithm can be exactly predicted in a certain random large-system limit, where the dimensions N × d of the unknown quantities grow as N → ∞ with d fixed. In the two-layer NN learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity while the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter error and test error of the learned network.
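The matrix-valued ML-VAMP updates are too involved to reproduce here, but the basic AMP recursion that this family of algorithms generalizes fits in a few lines. Below is a minimal sketch for the standard sparse linear model y = Ax + w with an i.i.d. Gaussian sensing matrix, using a soft-thresholding denoiser; the threshold schedule and iteration count are illustrative choices, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_sparse(y, A, alpha=1.5, n_iter=30):
    """Basic AMP for y = A x + w with an i.i.d. Gaussian A.
    The Onsager correction in the residual update is what
    distinguishes AMP from plain iterative thresholding."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(m)        # empirical noise-level estimate
        x_new = soft_threshold(x + A.T @ z, alpha * tau)
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)  # Onsager term
        x = x_new
    return x

# Tiny demo: recover a 20-sparse signal from 250 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 500, 250, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)        # roughly unit-norm columns
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)
print("relative error:", np.linalg.norm(amp_sparse(y, A) - x0) / np.linalg.norm(x0))
```

In the paper's setting the unknowns are N × d matrices passed through multiple layers and the denoisers act on matrix-valued quantities, but the key property claimed in the abstract is the same one that holds for this scalar recursion: the error of each iterate is exactly predictable in the large-system limit.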


2021 · Vol. 11(1)
Author(s): Xue Yu, Yifan Sun, Hai-Jun Zhou

Abstract: The high-dimensional linear regression model is the most popular statistical model for high-dimensional data, but obtaining a sparse set of regression coefficients remains a challenging task. In this paper, we propose a simple heuristic algorithm for constructing sparse high-dimensional linear regression models, adapted from the shortest-solution guided decimation algorithm and referred to as ASSD. The algorithm constructs the support of the regression coefficients under the guidance of the shortest least-squares solution of the recursively decimated linear models, and it applies an early-stopping criterion and a second-stage thresholding procedure to refine this support. Our extensive numerical results demonstrate that ASSD outperforms LASSO, adaptive LASSO, vector approximate message passing, and two other representative greedy algorithms in solution accuracy and robustness. ASSD is especially suitable for linear regression problems with the highly correlated measurement matrices encountered in real-world applications.
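To make the decimation idea concrete, here is a minimal sketch (not the authors' ASSD code): each round solves the minimum-norm least-squares problem on the still-active columns, moves the largest-magnitude coefficient into the support, refits on the support, and stops early once the residual is small. The second-stage thresholding step described above is omitted, and the tolerance and support cap are illustrative placeholders.

```python
import numpy as np

def ssd_support(X, y, max_support=None, tol=1e-6):
    """Greedy support construction guided by the shortest (minimum-norm)
    least-squares solution of the recursively decimated linear model.
    Illustrative sketch; ASSD adds a second-stage thresholding refinement."""
    n_samples, n_features = X.shape
    if max_support is None:
        max_support = max(1, min(n_samples, n_features) // 2)
    support, active = [], list(range(n_features))
    residual = y.copy()
    beta_s = np.zeros(0)
    while len(support) < max_support and active:
        # Shortest least-squares solution of the decimated system.
        beta, *_ = np.linalg.lstsq(X[:, active], residual, rcond=None)
        support.append(active.pop(int(np.argmax(np.abs(beta)))))
        # Refit on the current support and update the residual.
        beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta_s
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):  # early stop
            break
    coef = np.zeros(n_features)
    coef[support] = beta_s
    return coef

# Toy usage: 5 strong coefficients among 300 features, 100 samples.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 300))
beta_true = np.zeros(300)
beta_true[:5] = [3.0, -2.0, 4.0, 1.5, -1.0]
y = X @ beta_true + 0.01 * rng.standard_normal(100)
print("recovered support:", np.nonzero(ssd_support(X, y, max_support=10))[0])
```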


2021
Author(s): Christo Kurisummoottil Thomas, Rakesh Mundlamuri, Chandra R. Murthy, Marios Kountouris

Author(s): Li Wei, Chongwen Huang, Qinghua Guo, Zhaoyang Zhang, Merouane Debbah, ...

Author(s): Marco Mondelli, Christos Thrampoulidis, Ramji Venkataramanan

Abstract: We study the problem of recovering an unknown signal $\boldsymbol{x}$ given measurements obtained from a generalized linear model with a Gaussian sensing matrix. Two popular solutions are based on a linear estimator $\hat{\boldsymbol{x}}^{\mathrm{L}}$ and a spectral estimator $\hat{\boldsymbol{x}}^{\mathrm{s}}$. The former is a data-dependent linear combination of the columns of the measurement matrix, and its analysis is quite simple; the latter is the principal eigenvector of a data-dependent matrix, and a recent line of work has studied its performance. In this paper, we show how to optimally combine $\hat{\boldsymbol{x}}^{\mathrm{L}}$ and $\hat{\boldsymbol{x}}^{\mathrm{s}}$. At the heart of our analysis is the exact characterization of the empirical joint distribution of $(\boldsymbol{x}, \hat{\boldsymbol{x}}^{\mathrm{L}}, \hat{\boldsymbol{x}}^{\mathrm{s}})$ in the high-dimensional limit. This allows us to compute the Bayes-optimal combination of $\hat{\boldsymbol{x}}^{\mathrm{L}}$ and $\hat{\boldsymbol{x}}^{\mathrm{s}}$, given the limiting distribution of the signal $\boldsymbol{x}$. When the distribution of the signal is Gaussian, the Bayes-optimal combination has the form $\theta \hat{\boldsymbol{x}}^{\mathrm{L}} + \hat{\boldsymbol{x}}^{\mathrm{s}}$, and we derive the optimal combination coefficient. To establish the limiting distribution of $(\boldsymbol{x}, \hat{\boldsymbol{x}}^{\mathrm{L}}, \hat{\boldsymbol{x}}^{\mathrm{s}})$, we design and analyze an approximate message passing algorithm whose iterates give $\hat{\boldsymbol{x}}^{\mathrm{L}}$ and approach $\hat{\boldsymbol{x}}^{\mathrm{s}}$. Numerical simulations demonstrate the improvement of the proposed combination with respect to the two methods considered separately.
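As a concrete illustration of the two estimators being combined, the sketch below uses a noiseless ReLU link $y = \max(Ax, 0)$ and the simple preprocessing $T(y) = y$ for the spectral matrix; these choices, and the grid search for $\theta$ against the ground truth, are illustrative stand-ins, since the paper derives the Bayes-optimal combination analytically from the limiting joint distribution rather than by oracle tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 3000                        # signal dimension, number of measurements
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n)) / np.sqrt(n)
y = np.maximum(A @ x, 0.0)              # illustrative GLM: noiseless ReLU link

# Linear estimator: x_lin proportional to A^T y.
x_lin = A.T @ y
x_lin /= np.linalg.norm(x_lin)

# Spectral estimator: principal eigenvector of (1/m) A^T diag(T(y)) A,
# with the simple preprocessing T(y) = y (illustrative choice).
D = (A.T * y) @ A / m
x_spec = np.linalg.eigh(D)[1][:, -1]    # eigh sorts eigenvalues ascending
x_spec *= np.sign(x_spec @ x_lin)       # resolve the global sign ambiguity

def overlap(v):
    """Normalized correlation with the true signal."""
    return abs(v @ x) / np.linalg.norm(v)

# Combine theta * x_lin + x_spec; theta is grid-searched against the truth
# purely for illustration (the paper computes it analytically).
thetas = np.linspace(0.0, 3.0, 61)
best = max(thetas, key=lambda t: overlap(t * x_lin + x_spec))
print(f"linear {overlap(x_lin):.3f}  spectral {overlap(x_spec):.3f}  "
      f"combined {overlap(best * x_lin + x_spec):.3f}  (theta = {best:.2f})")
```

By construction the grid search can only improve on the spectral estimator alone (theta = 0), which mirrors the abstract's point that the optimal combination dominates either estimator used separately.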


Author(s): Yang Zhang, Qunfei Zhang, Lingling Zhang, Chengbing He, Xinyuan Tian

2021
Author(s): Lei Liu, Shunqi Huang, Brian M. Kurkoski
