Iterative methods for solving non-linear ill-posed operator equations with non-monotonic operators

Author(s): A. Bakushinsky, A. Goncharsky
2015, Vol. 15 (3), pp. 373-389

Author(s): Oleg Matysik, Petr Zabreiko

Abstract: The paper deals with iterative methods for solving linear operator equations $x = Bx + f$ and $Ax = f$ with self-adjoint operators in a Hilbert space $X$ in the critical case when $\rho(B) = 1$ and $0 \in \operatorname{Sp} A$. The results obtained are based on a theorem of M. A. Krasnosel'skii on the convergence of successive approximations, together with its modifications and refinements.
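As a small numerical illustration of this critical case (our own sketch, not code from the paper), the following runs the successive approximations $x_{k+1} = Bx_k + f$ for a symmetric matrix $B$ with $\rho(B) = 1$, choosing $f$ in the range of $I - B$ so that the iterates still converge, in the spirit of Krasnosel'skii's theorem:

```python
import numpy as np

# Successive approximations x_{k+1} = B x_k + f in the critical case
# rho(B) = 1. B is symmetric with eigenvalues in (-1, 1], one of them
# exactly 1, so 0 lies in Sp(I - B) and the problem is ill-posed.

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthonormal basis
eigs = np.linspace(-0.9, 1.0, n)                  # spectrum in (-1, 1], rho(B) = 1
B = Q @ np.diag(eigs) @ Q.T                       # self-adjoint, ||B|| = 1

# For convergence, f must lie in the range of I - B, i.e. be orthogonal
# to the eigenvector of B with eigenvalue 1 (the kernel of I - B).
v1 = Q[:, -1]                                     # eigenvector for eigenvalue 1
f = rng.standard_normal(n)
f -= (f @ v1) * v1                                # project out ker(I - B)

x = np.zeros(n)
for k in range(20000):
    x = B @ x + f                                 # successive approximation step

residual = np.linalg.norm(x - (B @ x + f))
print(f"residual ||x - Bx - f|| = {residual:.3e}")  # small: iterates converged
```

If $f$ is not orthogonal to the kernel of $I - B$, the same loop diverges linearly along that eigenvector, which is exactly the failure mode the critical case introduces.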


Author(s):  
Risheng Liu

Numerous tasks at the core of statistics, learning, and vision are special cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been shown empirically to be useful for these problems. Nevertheless, integrating learnable structures into iterations remains a laborious process that can only be guided by intuition or empirical insight. Moreover, there is no rigorous analysis of the convergence behavior of these hybrid iterations, so the significance of such methods remains unclear. We move beyond these limitations and propose a theoretically guaranteed optimization learning paradigm, a generic and provable framework for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis shows that the proposed paradigm generates globally convergent trajectories for learning-based iterative methods. Thanks to the generality of our framework, we achieve state-of-the-art performance on a range of real applications.
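To make concrete what "integrating learnable structures into iterations" can look like, here is a minimal sketch of an unrolled solver for $y = Ax$: classical gradient steps on the data term interleaved with a small residual network standing in for a hand-crafted proximal operator. The names `StepNet` and `UnrolledSolver`, the depth `K`, and the architecture are all illustrative assumptions, not the author's models:

```python
import torch
import torch.nn as nn

class StepNet(nn.Module):
    """Tiny residual network acting as a learned proximal operator."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)   # residual update keeps iterates stable

class UnrolledSolver(nn.Module):
    """K unrolled iterations: x <- prox_theta(x - step * A^T (A x - y))."""
    def __init__(self, A, K=10):
        super().__init__()
        self.A, self.K = A, K
        self.step = 1.0 / torch.linalg.matrix_norm(A, 2) ** 2  # non-expansive step
        self.prox = StepNet(A.shape[1])

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for _ in range(self.K):
            grad = (x @ self.A.T - y) @ self.A    # gradient of 0.5 * ||Ax - y||^2
            x = self.prox(x - self.step * grad)   # learned proximal step
        return x

# Example: train on synthetic pairs so the solver learns to invert A.
A = torch.randn(20, 40)
solver = UnrolledSolver(A)
x_true = torch.randn(8, 40)
y = x_true @ A.T
loss = ((solver(y) - x_true) ** 2).mean()
loss.backward()   # gradients flow through all K unrolled iterations
```

Because the whole iteration is differentiable, the network is trained end-to-end through every step; the convergence guarantees the abstract refers to concern constraining such learned steps so the trajectory still provably converges.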

