A novel modified TRSVD method for large-scale linear discrete ill-posed problems

Author(s):  
Xianglan Bai ◽  
Guang-Xin Huang ◽  
Xiao-Jun Lei ◽  
Lothar Reichel ◽  
Feng Yin


Geosciences ◽
2021 ◽
Vol 11 (2) ◽
pp. 41 ◽
Author(s):  
Tim Jurisch ◽  
Stefan Cantré ◽  
Fokke Saathoff

A variety of recent studies has proved the applicability of different dried, fine-grained dredged materials as replacement material for erosion-resistant sea dike covers. In Rostock, Germany, a large-scale field experiment was conducted in which different dredged materials were tested with regard to installation technology, stability, turf development, infiltration, and erosion resistance. The infiltration experiments to study the development of a seepage line in the dike body showed unexpected measurement results. Due to the high complexity of the problem, standard geo-hydraulic models proved unable to analyze these results. Therefore, different methods of inverse infiltration modeling were applied, such as the parameter estimation tool (PEST) and the AMALGAM algorithm. In the paper, the two approaches are compared and discussed. A sensitivity analysis confirmed the presumption of non-linear model behavior for the infiltration problem, and the eigenvalue ratio indicates that the dike infiltration is an ill-posed problem. Although this complicates the inverse modeling (e.g., termination in local minima), parameter sets close to an optimum were found with both the PEST and the AMALGAM algorithms. Together with the field measurement data, this information supports the rating of the effective material properties of the dredged materials used as dike cover.
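As a minimal, hypothetical illustration of the eigenvalue-ratio diagnostic mentioned above (not the PEST/AMALGAM workflow of the study), the ratio of the largest to the smallest eigenvalue of JᵀJ for a sensitivity matrix J can be computed as follows; the matrix, its dimensions, and the parameter scaling are placeholders, not data from the Rostock experiment.

```python
# Minimal sketch: the eigenvalue ratio of J^T J as an ill-posedness indicator.
# J is a sensitivity (Jacobian) matrix of observations with respect to
# parameters; a very large ratio means some parameter combinations barely
# affect the observations, i.e. the estimation problem is practically ill-posed.
import numpy as np

def eigenvalue_ratio(J):
    """Ratio of the largest to the smallest eigenvalue of J^T J."""
    eigvals = np.linalg.eigvalsh(J.T @ J)        # ascending order (symmetric matrix)
    eigvals = np.clip(eigvals, 0.0, None)        # guard against tiny negative round-off
    return eigvals[-1] / max(eigvals[0], np.finfo(float).eps)

# Hypothetical sensitivity matrix: 50 seepage-line observations, 4 soil parameters,
# two of which barely influence the observations.
rng = np.random.default_rng(0)
J = rng.normal(size=(50, 4)) @ np.diag([1.0, 1.0, 1e-3, 1e-6])
print(f"eigenvalue ratio: {eigenvalue_ratio(J):.2e}")   # very large -> ill-posed
```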


Geophysics ◽  
1988 ◽  
Vol 53 (3) ◽  
pp. 375-385 ◽  
Author(s):  
R. R. B. von Frese ◽  
D. N. Ravat ◽  
W. J. Hinze ◽  
C. A. McGue

Instabilities and the large matrices which are common to inversions of regional magnetic and gravity anomalies often complicate the use of efficient least‐squares matrix procedures. Inversion stability profoundly affects anomaly analysis, and hence it must be considered in any application. Wildly varying or unstable solutions are the products of errors in the anomaly observations and the integrated effects of observation spacing, source spacing, elevation differences between sources and observations, geographic coordinate attributes, geomagnetic field attitudes, and other factors which influence the conditioning of inversion. Solution instabilities caused by ill‐posed parameters can be efficiently minimized by ridge regression with a damping factor large enough to stabilize the inversion, but small enough to produce an analytically useful solution. An effective choice for the damping factor is facilitated by plotting damping factors against residuals between observed and modeled anomalies and by then comparing this curve to curves of damping factors plotted against solution variance or the residuals between predicted anomaly maps representing the processing objective (e.g., downward continuation, differential reduction to the radial pole, etc.). To obtain accurate and efficient large‐scale inversions of anomaly data, a procedure based on the superposition principle of potential fields may be used. This method involves successive inversions of residuals between the observations and various stable model fields which can be readily accommodated by available computer memory. Integration of the model fields yields a well‐resolved representation of the observed anomalies corresponding to an integrated model which normally could not be obtained by direct inversion because the memory requirements would be excessive. MAGSAT magnetic anomaly inversions over India demonstrate the utility of these procedures for improving the geologic analysis of potential field anomalies.
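The damping-factor selection described above can be sketched with a small ridge-regression (Tikhonov) example: solve the damped normal equations for a sequence of damping factors and record the quantities one would plot against each other (data residual versus solution norm, a proxy for solution variance). The operator and data below are synthetic placeholders, not MAGSAT anomalies.

```python
# Sketch of ridge-regression stabilization of a linear inversion A x ~ b,
# scanning the damping factor and recording residual and solution norm.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 200, 30
A = np.vander(np.linspace(0, 1, n_obs), n_par, increasing=True)   # ill-conditioned operator
x_true = rng.normal(size=n_par)
b = A @ x_true + 0.01 * rng.normal(size=n_obs)                     # noisy "observations"

for damping in [1e-6, 1e-4, 1e-2, 1e0]:
    # Damped (ridge-regression) normal equations: (A^T A + damping^2 I) x = A^T b
    x = np.linalg.solve(A.T @ A + damping**2 * np.eye(n_par), A.T @ b)
    residual = np.linalg.norm(A @ x - b)        # misfit between observed and modeled data
    print(f"damping={damping:.0e}  residual={residual:.3e}  |x|={np.linalg.norm(x):.3e}")
```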


2018 ◽  
Vol 26 (2) ◽  
pp. 243-257 ◽  
Author(s):  
Zichao Yan ◽  
Yanfei Wang

Full waveform inversion is a large-scale nonlinear and ill-posed problem. We consider applying a regularization technique with structure constraints to full waveform inversion. The structure information is extracted with difference operators applied to the model parameters, and we then establish an ℓp-ℓq-norm constrained minimization model for different choices of the parameters p and q. To solve this large-scale optimization problem, a fast gradient method with projection onto a convex set and a multiscale inversion strategy are employed. The regularization parameter is estimated adaptively with respect to the frequency range of the data. Numerical experiments on a layered model and the benchmark SEG/EAGE overthrust model are performed to verify the validity of the proposed regularization scheme.
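A minimal sketch of the projected-gradient idea described above, under simplifying assumptions: a linear operator F stands in for wave-equation modeling, the ℓp-ℓq penalty is reduced to a smoothed ℓp term on first differences of the model, and the convex set is a box of admissible parameter values. The values of alpha, p, and the bounds are illustrative, not the paper's settings.

```python
# Projected gradient for  min_m  0.5*||F m - d||^2 + alpha * sum_i |(D m)_i|^p
# subject to m in [m_lo, m_hi], where D extracts structure via first differences.
import numpy as np

rng = np.random.default_rng(2)
n = 100
F = rng.normal(size=(150, n))                     # placeholder linear forward operator
m_true = 1.5 + 0.1 * np.cumsum(rng.normal(size=n) > 1.2)   # blocky "velocity" model
d = F @ m_true + 0.01 * rng.normal(size=150)      # synthetic data

D = np.diff(np.eye(n), axis=0)                    # first-difference (structure) operator
alpha, p, eps = 1e-2, 1.0, 1e-6                   # illustrative regularization settings
m_lo, m_hi = 1.0, 4.0                             # box (convex set) of admissible values

m = np.full(n, 2.0)                               # initial model
step = 1.0 / np.linalg.norm(F, 2) ** 2            # safe gradient step size
for _ in range(200):
    r = F @ m - d                                 # data residual
    Dm = D @ m
    # gradient of the smoothed penalty sum_i (Dm_i^2 + eps)^(p/2)
    g_reg = D.T @ (p * Dm * (Dm**2 + eps) ** (p / 2 - 1))
    m = np.clip(m - step * (F.T @ r + alpha * g_reg), m_lo, m_hi)   # project onto the box

print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```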


2012 ◽  
Vol 58 (210) ◽  
pp. 795-808 ◽  
Author(s):  
Marijke Habermann ◽  
David Maxwell ◽  
Martin Truffer

Inverse problems are used to estimate model parameters from observations. Many inverse problems are ill-posed because they lack stability: it is not possible to find solutions that are stable with respect to small changes in the input data. Regularization techniques are necessary to stabilize the problem. For nonlinear inverse problems, iterative inverse methods can be used as a regularization method. These methods start with an initial estimate of the model parameters, update the parameters to match the observations in an iterative process that adjusts large-scale spatial features first, and use a stopping criterion to prevent the overfitting of data. This criterion determines the smoothness of the solution and thus the degree of regularization. Here, iterative inverse methods are implemented for the specific problem of reconstructing the basal stickiness of an ice sheet, using the shallow-shelf approximation as a forward model and synthetically derived surface velocities as input data. The incomplete Gauss-Newton (IGN) method is introduced and compared to the commonly used steepest-descent and nonlinear conjugate gradient methods. Two different stopping criteria, the discrepancy principle and a recent-improvement threshold, are compared. The IGN method is favored because it converges rapidly and incorporates the discrepancy principle, which leads to optimally resolved solutions.
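A minimal sketch of iteration-count regularization with the discrepancy principle, assuming a generic ill-conditioned linear operator in place of the shallow-shelf forward model; the noise level delta and safety factor tau are illustrative, and the update is a plain steepest-descent (Landweber) step rather than the IGN method.

```python
# Stop a gradient iteration on A x = b once ||A x_k - b|| <= tau * delta,
# where delta is the noise level: iterating further would fit the noise.
import numpy as np

rng = np.random.default_rng(3)
n = 80
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T       # forward model with decaying singular values
x_true = V[:, 0] + 0.5 * V[:, 1]                 # smooth stand-in for basal stickiness

delta = 1e-3                                     # noise level (norm of the noise)
noise = rng.normal(size=n)
b = A @ x_true + delta * noise / np.linalg.norm(noise)

tau = 1.1                                        # safety factor for the discrepancy principle
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for k in range(10000):
    r = A @ x - b
    if np.linalg.norm(r) <= tau * delta:         # stop: residual has reached the noise level
        break
    x -= step * (A.T @ r)                        # steepest-descent (Landweber) update

print(f"stopped after {k} iterations, model error {np.linalg.norm(x - x_true):.3e}")
```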


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Yang Chen ◽  
Weimin Yu ◽  
Yinsheng Li ◽  
Zhou Yang ◽  
Limin Luo ◽  
...  

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient in restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restoration. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and unguaranteed convergence. To address these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which takes the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is then built to ensure monotonicity of the iteration. The intensive calculation of patch distances is greatly accelerated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration method.
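A plain NumPy sketch of the patch-distance and similarity-weight computation that such patch-based priors rely on, and that the paper offloads to CUDA; the patch size, search window, and smoothing parameter h are illustrative choices, and this is not the LS-TPV implementation itself.

```python
# For a reference pixel, squared distances to neighbouring patches are turned
# into normalized similarity weights (larger weight = more similar patch).
import numpy as np

def patch_weights(img, y, x, patch=3, search=7, h=0.1):
    """Similarity weights between the patch at (y, x) and nearby patches."""
    r, s = patch // 2, search // 2
    ref = img[y - r:y + r + 1, x - r:x + r + 1]
    weights = {}
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            cand = img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
            dist2 = np.sum((ref - cand) ** 2)          # patch distance
            weights[(dy, dx)] = np.exp(-dist2 / h**2)  # similarity weight
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}  # normalize to sum to 1

rng = np.random.default_rng(4)
img = rng.normal(loc=0.5, scale=0.05, size=(32, 32))   # synthetic noisy image
w = patch_weights(img, y=16, x=16)
print(f"{len(w)} neighbouring patches, max weight {max(w.values()):.3f}")
```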


2018 ◽  
Vol 61 (1) ◽  
pp. 76-98 ◽  
Author(s):  
TING LI ◽  
ZHONG WAN

We propose a new adaptive and composite Barzilai–Borwein (BB) step size that integrates the advantages of existing BB step sizes. In particular, the proposed step size is an optimal weighted mean of the two classical BB step sizes, with the weights updated at each iteration according to the quality of the classical steps. Combined with the steepest-descent direction, the adaptive and composite BB step size is incorporated into an algorithm that efficiently solves large-scale optimization problems. We prove that the developed algorithm is globally convergent and that it converges R-linearly when applied to strictly convex quadratic minimization problems. Compared with state-of-the-art algorithms from the literature, the proposed step size is more efficient in solving ill-posed or large-scale benchmark test problems.
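A minimal sketch of a gradient method with a composite BB step size on a strictly convex quadratic; the fixed weight theta used below is only a stand-in for the paper's adaptive weighting rule.

```python
# Gradient method for f(x) = 0.5 x^T A x - b^T x with a composite BB step:
# a weighted mean of the long (BB1) and short (BB2) Barzilai-Borwein steps.
import numpy as np

rng = np.random.default_rng(5)
n = 50
Q = rng.normal(size=(n, n))
A = Q.T @ Q + np.eye(n)                      # symmetric positive definite
b = rng.normal(size=n)

x = np.zeros(n)
g = A @ x - b                                # gradient of the quadratic
step = 1.0 / np.linalg.norm(A, 2)            # safe first step
for k in range(500):
    x_new = x - step * g                     # steepest-descent direction
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    bb1 = (s @ s) / (s @ y)                  # long BB step
    bb2 = (s @ y) / (y @ y)                  # short BB step
    theta = 0.5                              # illustrative fixed weight (paper: adaptive)
    step = theta * bb1 + (1 - theta) * bb2   # composite BB step size
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break

print(f"iterations: {k}, gradient norm: {np.linalg.norm(g):.2e}")
```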


2020 ◽  
Vol 36 (9) ◽  
pp. 095007
Author(s):  
S Bellavia ◽  
M Donatelli ◽  
E Riccietti