Absolute acoustic-impedance estimation with L1 norm constraint and combined first and second order TV regularizations

Author(s):  
Song Guo ◽  
Huazhong Wang
2019 ◽  
Vol 16 (4) ◽  
pp. 773-788 ◽  

Abstract: Absolute acoustic impedance (AI) is generally divided into a background AI and a relative AI for linear inversion. In practice, the intermediate-frequency components of the AI model are poorly reconstructed, so the estimated AI suffers from an error caused by this frequency gap. To remedy the error, a priori information should be incorporated to narrow the gap. Using the knowledge that underground reflectivity is sparse, we solved an L1-norm-constrained problem to extend the bandwidth of the reflectivity section, and an absolute AI model was then estimated from the broadband reflectivity section and a given background AI. Conventionally, the AI model is regularized with the total variation (TV) norm because of its blocky structure. However, the first-order TV norm leads to piecewise-constant solutions and therefore causes staircase errors in slanted and smooth regions of the inverted AI model. To better restore smooth variation while preserving the sharp geological structure of the AI model, we introduced a second-order extension of the first-order TV norm and inverted the absolute AI model with combined first- and second-order TV regularizations. The algorithm for solving the optimization problem with the combined TV constraints was derived from split-Bregman iterations. Numerical experiments on the Marmousi AI model and on 2D stacked field data illustrated the effectiveness of the sparse constraint in shrinking the frequency gap and showed that the proposed combined TV norms outperform the conventional first-order TV norm.
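A minimal sketch of the two ingredients named in the abstract, under stated assumptions: a 1D combined first- and second-order TV penalty (the weight `alpha` and the 1D discretization are illustrative choices, not the paper's exact formulation), and the soft-threshold shrinkage operator that forms the L1 proximal update inside each split-Bregman iteration.

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator: the proximal map of the L1 norm, which is the
    core elementwise update inside each split-Bregman iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def combined_tv(x, alpha=1.0):
    """Combined first- and second-order TV penalty for a 1D AI profile x:
    ||D1 x||_1 + alpha * ||D2 x||_1. The second-order term penalizes
    curvature, discouraging staircase errors in smooth or slanted regions
    that a pure first-order TV norm would produce."""
    d1 = np.diff(x, n=1)   # first-order differences
    d2 = np.diff(x, n=2)   # second-order differences (discrete curvature)
    return np.sum(np.abs(d1)) + alpha * np.sum(np.abs(d2))
```

For a linear ramp `[0, 1, 2, 3]` the second-order term vanishes and the penalty is 3; a staircase `[0, 0, 3, 3]` spanning the same range has the same first-order TV but pays an extra curvature cost, so the combined penalty favors the smooth ramp.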


Algorithms ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 221
Author(s):  
Lin ◽  
Chen ◽  
Chen ◽  
Yu

Image deblurring in the presence of impulse noise is a typical ill-posed inverse problem that has attracted great attention in image processing and computer vision. The fast total variation deconvolution (FTVd) algorithm has proved to be an effective way to solve this problem. However, it only considers sparsity of the first-order total variation, resulting in staircase artefacts. The FTVd model adopts the L1 norm to depict the sparsity of the impulse noise, but the L1 norm has limited capacity to do so. To overcome this limitation, we present a new algorithm based on the Lp pseudo-norm and total generalized variation (TGV) regularization. The TGV regularization places sparse constraints on both the first-order and second-order gradients of the image, effectively preserving image edges while relieving undesirable artefacts. The Lp pseudo-norm constraint replaces the L1-norm constraint to depict the sparsity of the impulse noise more precisely. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed model. In the numerical experiments, the proposed algorithm is compared with several state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), signal-to-noise ratio (SNR), run time, and visual quality to verify its superiority.
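A hedged sketch of the key elementwise step such an ADMM scheme needs: a generalized p-shrinkage operator, a commonly used approximate proximal map for the Lp pseudo-norm with 0 &lt; p ≤ 1. The function name and this particular shrinkage form are illustrative; the abstract does not state which Lp proximal approximation the authors use.

```python
import numpy as np

def lp_shrink(x, tau, p):
    """Generalized (p-)shrinkage: an approximate proximal operator for the
    Lp pseudo-norm, 0 < p <= 1, used elementwise inside ADMM. For p = 1 it
    reduces exactly to L1 soft-thresholding; for p < 1 large entries are
    penalized less, modeling impulse noise sparsity more precisely."""
    mag = np.abs(x)
    # Guard the zero entries so 0 ** (p - 1) is never evaluated.
    base = np.where(mag > 0, mag, 1.0)
    thresh = tau * base ** (p - 1.0)
    return np.sign(x) * np.maximum(mag - thresh, 0.0)
```

With `p = 1` this matches the ordinary soft threshold; with `p = 0.5` an entry of magnitude 4 is shrunk by only `4**(-0.5) * tau = 0.5 * tau`, so strong impulses survive the threshold better than under the L1 rule.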


2020 ◽  
Vol 9 (6) ◽  
pp. 340
Author(s):  
Xiaohua Tong ◽  
Runjie Wang ◽  
Wenzhong Shi ◽  
Zhiyuan Li

Perfectly describing the physical process of a sequential data assimilation system mathematically is difficult, and the assimilation model therefore inevitably contains errors. Filter divergence, a common phenomenon caused by such model inaccuracies, degrades the quality of the assimilation results. In this study, an approach based on an L1-norm constraint was proposed to suppress filter divergence in sequential data assimilation systems. When filter divergence is about to occur, the method uses the L1-norm constraint to adjust the weights of the state-simulated values and the measurements according to the new measurements. Results on simulated data and on real-world traffic-flow measurements collected from a sub-area of the highway between Leeds and Sheffield, England, showed that the proposed method produced higher assimilation accuracy than other filter-divergence suppression methods, indicating the effectiveness of the proposed L1-norm-constraint approach.
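To make the weight-adjustment idea concrete, here is a minimal scalar Kalman step with an innovation-based divergence check. The abstract does not give the exact form of the L1-norm reweighting, so the inflation rule below (based on the normalized innovation squared) is an illustrative stand-in for shifting weight between the state-simulated value and the measurement, not the paper's method.

```python
def kalman_step(x, P, z, Q, R, nis_threshold=9.0):
    """One predict/update step of a scalar Kalman filter with a simple
    divergence check: when the normalized innovation squared (NIS)
    exceeds a threshold, the measurement-noise variance R is inflated
    before the update, reweighting the predicted state versus the new
    measurement. Illustrative only; the paper's L1-norm-based
    reweighting is not specified in the abstract."""
    # Predict (random-walk process model assumed)
    x_pred, P_pred = x, P + Q
    # Innovation and its variance
    nu = z - x_pred
    S = P_pred + R
    # Divergence test: an implausibly large innovation triggers inflation
    if nu * nu / S > nis_threshold:
        R = R * (nu * nu / (S * nis_threshold))
        S = P_pred + R
    # Update
    K = P_pred / S
    return x_pred + K * nu, (1.0 - K) * P_pred
```

Tracking a constant near 0, a gross outlier measurement of 100 moves the state only slightly when the check is active, whereas disabling the check lets the outlier drag the state tens of units away.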


2019 ◽  
Vol 20 (4) ◽  
pp. 886
Author(s):  
Sha-Sha Wu ◽  
Mi-Xiao Hou ◽  
Chun-Mei Feng ◽  
Jin-Xing Liu

Feature selection and sample clustering play an important role in bioinformatics. Traditional feature selection methods treat sparse regression and embedding learning separately. Joint Embedding Learning and Sparse Regression (JELSR) was later proposed to identify the significant features of genomic data more effectively. However, because genomic data contain substantial redundancy and noise, the sparseness of this method is far from sufficient. In this paper, we propose a strengthened version of JELSR, called LJELSR, which adds an L1-norm constraint to the regularization term of the previous model to further improve its sparseness. We also provide a new iterative algorithm to obtain the convergent solution. The experimental results show that our method achieves state-of-the-art performance in both identifying differentially expressed genes and sample clustering on different genomic data sets compared to previous methods. Additionally, the selected differentially expressed genes may be of great value in medical research.
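To illustrate the role the added L1-norm constraint plays, here is a generic sparse-regression sketch via ISTA (proximal gradient on a lasso objective). This shows how an L1 term drives the coefficients of weak or irrelevant features exactly to zero; it is not the LJELSR algorithm itself, whose joint embedding-learning updates are not given in the abstract.

```python
import numpy as np

def ista_lasso(X, y, lam, step=None, n_iter=500):
    """Sparse regression via ISTA on the lasso objective
    0.5 * ||X w - y||^2 + lam * ||w||_1. The L1 proximal step
    (soft-thresholding) zeroes out small coefficients, which is the
    sparsifying effect the L1-norm constraint adds to the regularizer."""
    n, d = X.shape
    if step is None:
        # Step size = 1 / Lipschitz constant of the smooth gradient
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)          # gradient of the quadratic term
        w = w - step * grad               # gradient descent step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # L1 prox
    return w
```

With an orthogonal design (`X = I`) the solution is the soft-thresholded target, so a weak feature response of 0.1 is selected out while the strong responses survive, shrunk by `lam`.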

