Graph signal denoising using t-shrinkage priors

Author(s): Sayantan Banerjee, Weining Shen
2021, Vol. 11 (4), pp. 1591

Author(s): Ruixia Liu, Minglei Shu, Changfang Chen

The electrocardiogram (ECG) is widely used for the diagnosis of heart disease, but ECG signals are easily contaminated by various types of noise. This paper presents efficient denoising and compressed sensing (CS) schemes for ECG signals based on basis pursuit (BP). During signal denoising and reconstruction, low-pass filtering and the alternating direction method of multipliers (ADMM) optimization algorithm are used. ADMM introduces dual variables and an additional quadratic penalty term, and relaxes the constrained problem by alternately updating the primal and dual variables. The algorithm removes both baseline wander and Gaussian white noise. Its effectiveness is validated on records from the MIT-BIH Arrhythmia Database. Simulations show that the proposed ADMM-based method outperforms the compared approaches in ECG denoising; it also preserves the details of the ECG signal in reconstruction and achieves a higher signal-to-noise ratio (SNR) and a smaller mean squared error (MSE).
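The abstract describes the ADMM steps only at a high level. As a rough illustration of the alternating primal/dual updates it refers to, the sketch below solves the l1-regularised (basis-pursuit-denoising/lasso) problem 0.5*||Ax - b||^2 + lam*||x||_1 with the standard scaled-dual ADMM updates; this is not the paper's implementation, and the function name, regularisation weight, and penalty parameter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_bpdn(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 via ADMM (scaled dual form).

    A   : (m, n) sensing/dictionary matrix
    b   : (m,)   noisy measurements
    lam : l1 regularisation weight (illustrative value)
    rho : ADMM penalty parameter (illustrative value)
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)          # auxiliary (split) variable
    u = np.zeros(n)          # scaled dual variable
    # Cache the Cholesky factorisation used by every x-update.
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: quadratic (ridge-type) subproblem
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step, i.e. soft-thresholding
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the constraint violation x - z
        u = u + x - z
    return z

# Toy usage: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = admm_bpdn(A, b, lam=0.05)
```

The z-update is where the sparsity-promoting part of the objective enters: soft-thresholding shrinks small coefficients to zero, while the x-update handles the data-fidelity term.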


Biometrika, 2020, Vol. 107 (3), pp. 745-752
Author(s): Sirio Legramanti, Daniele Durante, David B. Dunson

Summary: The dimension of the parameter space is typically unknown in a variety of models that rely on factorizations. For example, in factor analysis the number of latent factors is not known and has to be inferred from the data. Although classical shrinkage priors are useful in such contexts, increasing shrinkage priors can provide a more effective approach that progressively penalizes expansions with growing complexity. In this article we propose a novel increasing shrinkage prior, called the cumulative shrinkage process, for the parameters that control the dimension in overcomplete formulations. Our construction has broad applicability and is based on an interpretable sequence of spike-and-slab distributions which assign increasing mass to the spike as the model complexity grows. Using factor analysis as an illustrative example, we show that this formulation has theoretical and practical advantages relative to current competitors, including an improved ability to recover the model dimension. An adaptive Markov chain Monte Carlo algorithm is proposed, and the performance gains are outlined in simulations and in an application to personality data.
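The abstract describes the construction only in words. The sketch below is a minimal illustration of how an increasing shrinkage prior of this kind can generate non-decreasing spike probabilities through a stick-breaking construction, so that higher-index factor columns are shrunk more aggressively; the Beta(1, alpha) sticks, inverse-gamma slab, fixed spike scale, and all hyperparameter values are assumptions made for illustration, not the paper's exact specification.

```python
import numpy as np

def cumulative_shrinkage_scales(H, alpha=5.0, spike_scale=0.05,
                                slab_a=2.0, slab_b=2.0, rng=None):
    """Illustrative draw from a cumulative-shrinkage-style prior.

    For columns h = 1..H of a factor loading matrix, a stick-breaking
    construction makes the probability pi_h of hitting the spike
    non-decreasing in h, so later factors are penalized more heavily.
    All hyperparameter values are placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = rng.beta(1.0, alpha, size=H)                               # stick-breaking fractions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))      # weights w_h = v_h * prod_{m<h}(1 - v_m)
    pi = np.cumsum(w)                                              # increasing spike probabilities pi_h
    # Spike-and-slab draw for each column-specific scale theta_h:
    # spike = small constant scale, slab = inverse-gamma draw (both illustrative).
    is_spike = rng.random(H) < pi
    slab = 1.0 / rng.gamma(slab_a, 1.0 / slab_b, size=H)
    theta = np.where(is_spike, spike_scale, slab)
    return pi, theta

pi, theta = cumulative_shrinkage_scales(H=10, rng=np.random.default_rng(1))
print(np.round(pi, 3))   # non-decreasing sequence approaching 1 as h grows
```

Because pi_h accumulates the stick-breaking weights, columns beyond the effective number of factors are assigned the spike with probability close to one, which is the mechanism behind the improved dimension recovery mentioned in the abstract.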

