A converse result for Banach space convergence rates in Tikhonov-type convex regularization of ill-posed linear equations

2018 ◽  
Vol 26 (5) ◽  
pp. 639-646 ◽  
Author(s):  
Jens Flemming

Abstract We consider Tikhonov-type variational regularization of ill-posed linear operator equations in Banach spaces with general convex penalty functionals. Upper bounds for certain error measures expressing the distance between exact and regularized solutions, especially for Bregman distances, can be obtained from variational source conditions. We prove that such bounds are optimal in the case of twisted Bregman distances, a specific a priori parameter choice, and low regularity of the exact solution; that is, the rate function is also an asymptotic lower bound for the error measure. This result extends existing converse results from Hilbert space settings to Banach spaces without resorting to spectral theory.
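For orientation, the Tikhonov-type variational regularization referred to here has the following schematic form (a generic sketch: the exponent p, the penalty functional R, and the data-fidelity norm are placeholders, not the paper's specific choices):

```latex
% Tikhonov-type functional with general convex penalty R and noisy data y^\delta
T_\alpha^\delta(x) \;=\; \frac{1}{p}\,\bigl\|Ax - y^\delta\bigr\|_Y^p \;+\; \alpha\, R(x),
\qquad
x_\alpha^\delta \in \operatorname*{argmin}_{x \in X} T_\alpha^\delta(x).
```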

Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 331
Author(s):  
Bernd Hofmann ◽  
Christopher Hofmann

This paper deals with Tikhonov regularization for nonlinear ill-posed operator equations in Hilbert scales with oversmoothing penalties. One focus is on the application of the discrepancy principle for choosing the regularization parameter and its consequences. Numerical case studies are performed in order to complement analytical results concerning the oversmoothing situation. For example, case studies are presented for exact solutions of Hölder-type smoothness with a low Hölder exponent. Moreover, the regularization parameter choice using the discrepancy principle, for which rate results are proven in the oversmoothing case in reference (Hofmann, B.; Mathé, P. Inverse Probl. 2018, 34, 015007), is compared to Hölder-type a priori choices. On the other hand, well-known analytical results on the existence and convergence of regularized solutions are summarized and partially augmented. In particular, a sketch of a novel proof to derive Hölder convergence rates in the case of oversmoothing penalties is given, extending ideas from reference (Hofmann, B.; Plato, R. ETNA. 2020, 93).
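The discrepancy principle mentioned above can be sketched numerically. The following is a minimal illustration, not the authors' Hilbert-scale setting: classical quadratic-penalty Tikhonov regularization of a finite-dimensional linear problem, with the regularization parameter α reduced geometrically until the residual drops below τδ. The function names, τ = 1.2, and the reduction factor q are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, y_delta, alpha):
    # Classical quadratic-penalty Tikhonov solution:
    # minimize ||A x - y||^2 + alpha ||x||^2  <=>  (A^T A + alpha I) x = A^T y
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

def discrepancy_principle(A, y_delta, delta, tau=1.2, alpha0=1.0, q=0.5, max_iter=60):
    # Decrease alpha geometrically until ||A x_alpha - y_delta|| <= tau * delta
    # (a simple a posteriori parameter choice; tau > 1 is required).
    alpha = alpha0
    for _ in range(max_iter):
        x = tikhonov_solve(A, y_delta, alpha)
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return alpha, x
        alpha *= q
    return alpha, x  # fall through with the smallest alpha tried
```

For a well-posed test matrix this loop terminates quickly; for severely ill-conditioned problems the stopping index, not the limit α → 0, carries the regularizing effect.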


2019 ◽  
Vol 27 (4) ◽  
pp. 539-557
Author(s):  
Barbara Kaltenbacher ◽  
Andrej Klassen ◽  
Mario Luiz Previatti de Souza

Abstract In this paper, we consider the iteratively regularized Gauss–Newton method, where regularization is achieved by Ivanov regularization, i.e., by imposing a priori constraints on the solution. We propose an a posteriori choice of the regularization radius, based on an inexact Newton/discrepancy principle approach, prove convergence and convergence rates under a variational source condition as the noise level tends to zero and provide an analysis of the discretization error. Our results are valid in general, possibly nonreflexive Banach spaces, including, e.g., $L^{\infty}$ as a preimage space. The theoretical findings are illustrated by numerical experiments.
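Schematically, one step of an iteratively regularized Gauss–Newton method with Ivanov regularization can be written as a linearized residual minimization under an a priori constraint with radius ρ_k (generic notation, a sketch rather than the paper's exact formulation):

```latex
x_{k+1} \;\in\; \operatorname*{argmin}_{x \,:\, R(x) \le \rho_k}
\bigl\| F'(x_k)(x - x_k) + F(x_k) - y^\delta \bigr\|_Y ,
```

where the radius ρ_k plays the role of the regularization parameter and is chosen a posteriori.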


2004 ◽  
Vol 76 (2) ◽  
pp. 281-290 ◽  
Author(s):  
Guoliang Chen ◽  
Yimin Wei ◽  
Yifeng Xue

Abstract For any bounded linear operator A in a Banach space, two generalized condition numbers of A are defined in this paper. These condition numbers may be applied to the perturbation analysis for the solution of ill-posed differential equations and bounded linear operator equations in infinite-dimensional Banach spaces. Different expressions for the two generalized condition numbers are discussed and applied to the perturbation analysis of the operator equation.
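For context, the classical notion that such generalized condition numbers extend is the following: for invertible A one has $k(A)=\|A\|\,\|A^{-1}\|$, and with a generalized inverse $A^{+}$ one analogously considers (an illustrative sketch, not the paper's two specific definitions):

```latex
k(A) \;=\; \|A\|\,\|A^{+}\| .
```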


2019 ◽  
Vol 27 (4) ◽  
pp. 575-590 ◽  
Author(s):  
Wei Wang ◽  
Shuai Lu ◽  
Bernd Hofmann ◽  
Jin Cheng

Abstract Measuring the error by an $\ell^{1}$-norm, we analyze under sparsity assumptions an $\ell^{0}$-regularization approach, where the penalty in the Tikhonov functional is complemented by a general stabilizing convex functional. In this context, ill-posed operator equations $Ax=y$ with an injective and bounded linear operator A mapping between $\ell^{2}$ and a Banach space Y are regularized. For sparse solutions, error estimates as well as linear and sublinear convergence rates are derived based on a variational inequality approach, where the regularization parameter can be chosen either a priori in an appropriate way or a posteriori by the sequential discrepancy principle. To further illustrate the balance between the $\ell^{0}$-term and the complementing convex penalty, the important special case of the $\ell^{2}$-norm square penalty is investigated, showing explicit dependence between both terms. Finally, some numerical experiments verify and illustrate the sparsity-promoting properties of corresponding regularized solutions.
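Schematically, in the special case where the complementing convex penalty is the squared $\ell^{2}$-norm, the regularized problem takes the following form (a generic sketch; the weighting β between the two penalty terms is illustrative):

```latex
T_\alpha^\delta(x) \;=\; \tfrac{1}{2}\,\bigl\|Ax - y^\delta\bigr\|_Y^2
\;+\; \alpha\Bigl( \|x\|_{\ell^0} + \tfrac{\beta}{2}\,\|x\|_{\ell^2}^2 \Bigr),
\qquad
\|x\|_{\ell^0} := \#\{\, k : x_k \neq 0 \,\}.
```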


2018 ◽  
Vol 26 (3) ◽  
pp. 311-333 ◽  
Author(s):  
Pallavi Mahale ◽  
Sharad Kumar Dixit

Abstract Jin Qinian and Min Zhong [10] considered an iteratively regularized Gauss–Newton method in Banach spaces to find a stable approximate solution of a nonlinear ill-posed operator equation. They considered a Morozov-type stopping rule (Rule 1) as one of the criteria to stop the iterations and studied the convergence analysis of the method. However, no error estimates were obtained for this case. In this paper, we consider a modified variant of the method, namely the simplified Gauss–Newton method, under both an a priori and a Morozov-type stopping rule. In both cases, we obtain order-optimal error estimates under Hölder-type approximate source conditions. An example of a parameter identification problem for which the method can be implemented is discussed in the paper.


2018 ◽  
Vol 26 (5) ◽  
pp. 689-702 ◽  
Author(s):  
Christian Clason ◽  
Andrej Klassen

Abstract We consider the method of quasi-solutions (also referred to as Ivanov regularization) for the regularization of linear ill-posed problems in non-reflexive Banach spaces. Using the equivalence to a metric projection onto the image of the forward operator, it is possible to show regularization properties and to characterize parameter choice rules that lead to a convergent regularization method, which includes the Morozov discrepancy principle. Convergence rates in a suitably chosen Bregman distance can be obtained as well. We also address the numerical computation of quasi-solutions to inverse source problems for partial differential equations in $L^{\infty}(\Omega)$ using a semi-smooth Newton method and a backtracking line search for the parameter choice according to the discrepancy principle. Numerical examples illustrate the behavior of quasi-solutions in this setting.
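Schematically, a quasi-solution (Ivanov regularization) is a constrained residual minimizer with radius ρ as the regularization parameter (a generic sketch; the constraint functional and the discrepancy-based choice of ρ follow the description in the abstract):

```latex
x_\rho \;\in\; \operatorname*{argmin}_{\|x\|_X \le \rho} \bigl\| Ax - y^\delta \bigr\|_Y .
```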


2018 ◽  
Vol 26 (2) ◽  
pp. 277-286 ◽  
Author(s):  
Jens Flemming

Abstract Variational source conditions have proved to be useful for deriving convergence rates for Tikhonov's regularization method and also for other methods. Up to now, such conditions have been verified only for a few examples or for situations which can also be handled by classical range-type source conditions. Here we show that for almost every ill-posed inverse problem variational source conditions are satisfied. Whether linear or nonlinear, whether Hilbert or Banach spaces, whether one or multiple solutions, variational source conditions are a universal tool for proving convergence rates.
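A variational source condition of the kind meant here is typically of the following form (a common generic version from the literature, with a concave index function φ; the paper's precise formulation may differ in detail):

```latex
\beta\, B_\xi\bigl(x, x^\dagger\bigr) \;\le\; R(x) - R(x^\dagger)
\;+\; \varphi\bigl( \| F(x) - F(x^\dagger) \|_Y \bigr)
\qquad \text{for all admissible } x,
```

where $B_\xi$ denotes the Bregman distance of the penalty $R$ at the exact solution $x^\dagger$ and $\beta \in (0,1]$.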


Author(s):  
Stefan Kindermann

Abstract Tikhonov regularization in Banach spaces with convex penalty and convex fidelity term for linear ill-posed operator equations is studied. As a main result, convergence rates in terms of the Bregman distance of the regularized solution to the exact solution are proven by imposing a generalization of the established variational inequality conditions on the exact solution. This condition only involves a decay rate of the difference of the penalty functionals in terms of the residual.
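For reference, the Bregman distance in which such convergence rates are measured is defined, for a subgradient $\xi \in \partial R(x^\dagger)$ of the convex penalty $R$ at the exact solution $x^\dagger$, by:

```latex
B_\xi\bigl(x, x^\dagger\bigr) \;=\; R(x) - R(x^\dagger) - \bigl\langle \xi,\, x - x^\dagger \bigr\rangle .
```

This quantity is nonnegative by convexity of $R$ and reduces to $\tfrac{1}{2}\|x - x^\dagger\|^2$ for the squared Hilbert-space norm penalty $R = \tfrac{1}{2}\|\cdot\|^2$.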

