Optimal Convergence of the Discrepancy Principle for Polynomially and Exponentially Ill-Posed Operators under White Noise

Author(s):  
Tim Jahn
2008 ◽  
Vol 8 (1) ◽  
pp. 86-98 ◽  
Author(s):  
S.G. SOLODKY ◽  
A. MOSENTSOVA

Abstract The problem of the approximate solution of severely ill-posed problems, given in the form of linear operator equations of the first kind with approximately known right-hand sides, is considered. We study a strategy for solving problems of this type that combines Morozov's discrepancy principle with a finite-dimensional version of Tikhonov regularization. It is shown that this combination provides an optimal order of accuracy on source sets.
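As a minimal illustration of the combination described above, the sketch below applies Tikhonov regularization to a discretised first-kind integration operator (a hypothetical toy problem, not the one studied in the paper) and chooses the regularization parameter by Morozov's discrepancy principle; the noise level `delta`, the safety factor `tau` and the geometric grid for `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularised solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ y)

def discrepancy_principle(A, y, delta, tau=1.1, alpha0=1.0, q=0.5, max_iter=60):
    """Morozov's discrepancy principle: shrink alpha geometrically and stop
    at the first alpha whose residual ||A x_alpha - y|| drops to tau*delta."""
    alpha = alpha0
    for _ in range(max_iter):
        x = tikhonov(A, y, alpha)
        if np.linalg.norm(A @ x - y) <= tau * delta:
            break
        alpha *= q
    return x, alpha

# Toy ill-posed problem: a discretised integration operator (first kind).
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)
A = np.tril(np.ones((n, n))) / n          # lower-triangular quadrature matrix
x_true = np.sin(np.pi * t)
delta = 1e-3                              # noise level, ||y - A x_true|| ~ delta
y = A @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)
x_rec, alpha = discrepancy_principle(A, y, delta)
```

The returned `alpha` is the largest parameter on the geometric grid whose residual already matches the noise level, which is the usual way the discrepancy principle avoids both over- and under-smoothing.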


Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 608
Author(s):  
Pornsarp Pornsawad ◽  
Parada Sungcharoen ◽  
Christine Böckmann

In this paper, we present a convergence rate analysis of the modified Landweber method under a logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem using the modified Landweber method are exhibited.
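The paper treats a modified Landweber method for nonlinear problems; the sketch below shows only the classical linear Landweber iteration, stopped by the discrepancy principle, on a hypothetical toy integration operator, to illustrate how the stopping index acts as the regularization parameter. All numerical values are illustrative assumptions.

```python
import numpy as np

def landweber(A, y, delta, tau=1.3, omega=None, max_iter=100_000):
    """Classical (linear) Landweber iteration
        x_{k+1} = x_k + omega * A^T (y - A x_k),
    stopped at the first k with ||A x_k - y|| <= tau * delta."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size: 0 < omega < 2/||A||^2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= tau * delta:      # discrepancy principle
            return x, k
        x = x + omega * (A.T @ r)
    return x, max_iter

# Toy problem: discretised integration operator with noisy data.
rng = np.random.default_rng(1)
n = 30
t = np.linspace(0, 1, n)
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.pi * t)
delta = 1e-2
y = A @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)
x_rec, stop = landweber(A, y, delta)
```

Because Landweber damps high-frequency error components slowly, stopping the iteration early (rather than running it to convergence) is what prevents noise amplification.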


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 331
Author(s):  
Bernd Hofmann ◽  
Christopher Hofmann

This paper deals with Tikhonov regularization for nonlinear ill-posed operator equations in Hilbert scales with oversmoothing penalties. One focus is on the application of the discrepancy principle for choosing the regularization parameter and its consequences. Numerical case studies are performed in order to complement analytical results concerning the oversmoothing situation. For example, case studies are presented for exact solutions of Hölder type smoothness with a low Hölder exponent. Moreover, the regularization parameter choice using the discrepancy principle, for which rate results are proven in the oversmoothing case in reference (Hofmann, B.; Mathé, P. Inverse Probl. 2018, 34, 015007), is compared to Hölder type a priori choices. On the other hand, well-known analytical results on the existence and convergence of regularized solutions are summarized and partially augmented. In particular, a sketch for a novel proof deriving Hölder convergence rates in the case of oversmoothing penalties is given, extending ideas from reference (Hofmann, B.; Plato, R. ETNA. 2020, 93).


2004 ◽  
Vol 2004 (37) ◽  
pp. 1973-1996 ◽  
Author(s):  
Santhosh George ◽  
M. Thamban Nair

Simplified regularization using finite-dimensional approximations in the setting of Hilbert scales is considered for obtaining stable approximate solutions to ill-posed operator equations. The error estimates derived for a priori and a posteriori choices of the parameter in relation to the noise level are shown to be of optimal order under certain natural assumptions on the ill-posedness of the equation. The results apply to a wide class of spline approximations in the setting of Sobolev scales.
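Simplified regularization is commonly realised as the Lavrentiev scheme, which solves (A + αI)x = y for a positive semidefinite operator A; the sketch below assumes this reading and uses a hypothetical toy operator and an illustrative a priori parameter choice, without the Hilbert-scale or spline machinery of the paper.

```python
import numpy as np

def lavrentiev(A, y, alpha):
    """Simplified (Lavrentiev) regularisation for a positive semidefinite
    operator: solve (A + alpha I) x_alpha = y instead of A x = y."""
    return np.linalg.solve(A + alpha * np.eye(A.shape[0]), y)

# Toy problem: A = B^T B is symmetric positive semidefinite, where B is a
# discretised integration operator; the a priori choice alpha ~ sqrt(delta)
# below is purely illustrative.
rng = np.random.default_rng(2)
n = 40
t = np.linspace(0, 1, n)
B = np.tril(np.ones((n, n))) / n
A = B.T @ B
x_true = np.sin(np.pi * t)
delta = 1e-3
y = A @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)
alpha = np.sqrt(delta)                    # a priori parameter choice
x_rec = lavrentiev(A, y, alpha)
```

Unlike Tikhonov regularization, no normal equations are formed; the shift αI alone stabilises the inversion, which is why the method is called "simplified".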


2012 ◽  
Vol 58 (210) ◽  
pp. 795-808 ◽  
Author(s):  
Marijke Habermann ◽  
David Maxwell ◽  
Martin Truffer

Abstract Inverse problems are used to estimate model parameters from observations. Many inverse problems are ill-posed because they lack stability: it is not possible to find solutions that are stable with respect to small changes in the input data, so regularization techniques are necessary to stabilize the problem. For nonlinear inverse problems, iterative inverse methods can be used as a regularization method. These methods start with an initial estimate of the model parameters, update the parameters to match the observations in an iterative process that adjusts large-scale spatial features first, and use a stopping criterion to prevent overfitting of the data. This criterion determines the smoothness of the solution and thus the degree of regularization. Here, iterative inverse methods are implemented for the specific problem of reconstructing the basal stickiness of an ice sheet, using the shallow-shelf approximation as a forward model and synthetically derived surface velocities as input data. The incomplete Gauss-Newton (IGN) method is introduced and compared to the commonly used steepest descent and nonlinear conjugate gradient methods. Two different stopping criteria, the discrepancy principle and a recent-improvement threshold, are compared. The IGN method is favored because it converges rapidly and incorporates the discrepancy principle, which leads to optimally resolved solutions.
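A minimal sketch of such an iterative inverse method: plain Gauss-Newton with discrepancy-principle stopping on a hypothetical exponential-decay model (not the paper's incomplete Gauss-Newton method, and not the ice-sheet problem). Model, initial guess and noise level are all illustrative assumptions.

```python
import numpy as np

# Forward model F(a, b)(t) = a * exp(-b * t); recover (a, b) from noisy samples.
def forward(p, t):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])   # d/da, d/db

def gauss_newton(y, t, delta, p0, tau=1.3, max_iter=50):
    """Gauss-Newton iteration, stopped by the discrepancy principle:
    quit as soon as the residual norm drops to tau * delta."""
    p = np.asarray(p0, dtype=float)
    for k in range(max_iter):
        r = y - forward(p, t)
        if np.linalg.norm(r) <= tau * delta:     # early stopping = regularization
            return p, k
        J = jacobian(p, t)
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    return p, max_iter

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 40)
p_true = np.array([2.0, 1.5])
delta = 0.05                                     # noise level ||y - F(p_true)|| ~ delta
y = forward(p_true, t) + delta * rng.standard_normal(40) / np.sqrt(40)
p_hat, stop = gauss_newton(y, t, delta, p0=[1.0, 1.0])
```

The stopping index plays the same role as in the paper: iterating past the discrepancy level would start fitting the noise rather than the signal.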


Author(s):  
M. A. Lukas

Abstract Consider the prototype ill-posed problem of a first-kind integral equation ℛu = f with discrete noisy data d_i = f(x_i) + ε_i, i = 1, …, n. Let u_0 be the true solution and u_n^α a regularised solution with regularisation parameter α. Under certain assumptions, it is known that if α → 0, but not too quickly, as n → ∞, then u_n^α converges to u_0. We examine the dependence of the optimal sequence of α, and the resulting optimal convergence rate, on the smoothness of f or u_0, the kernel K, the order of regularisation m and the error norm used. Some important implications are drawn, including the fact that m must be sufficiently high relative to the smoothness of u_0 in order to ensure optimal convergence. An optimal filtering criterion is used to determine the order m relative to the maximum smoothness of u_0. Two practical methods for estimating the optimal α, the unbiased risk estimate and generalised cross validation, are also discussed.
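The generalised cross validation criterion mentioned above can be written compactly for discrete Tikhonov regularisation; the sketch below uses a hypothetical toy integration operator and an illustrative α grid, and should be read as a sketch of the GCV idea rather than the paper's exact setting.

```python
import numpy as np

def gcv_score(A, y, alpha):
    """GCV score V(alpha) = n ||(I - H) y||^2 / tr(I - H)^2, where
    H = A (A^T A + alpha I)^{-1} A^T is the influence matrix."""
    n, m = A.shape
    H = A @ np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T)
    r = y - H @ y
    return n * (r @ r) / np.trace(np.eye(n) - H) ** 2

def gcv_alpha(A, y, grid):
    """Choose alpha by minimising the GCV score over a grid."""
    return min(grid, key=lambda a: gcv_score(A, y, a))

# Toy first-kind problem: discretised integration operator with noisy data.
rng = np.random.default_rng(3)
n = 40
t = np.linspace(0, 1, n)
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.pi * t)
y = A @ x_true + 1e-2 * rng.standard_normal(n) / np.sqrt(n)
alpha = gcv_alpha(A, y, np.logspace(-8, 0, 30))
x_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

GCV needs no knowledge of the noise level: undersmoothing drives tr(I − H) toward zero and oversmoothing inflates the residual, so the score penalises both extremes.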

