Regularization Method of Inverse Problems in Image Information Processing

2013 · Vol. 8 (7) · pp. 966-974
Author(s): Yan Yan, Yamian Peng, Yuanyuan Cai, Shifeng Li

2005 · Vol. 48 (7) · pp. 403-413
Author(s): Isamu Kashima, Satsuki Kumasaka, Ryota Kawamata, Yusuke Kozai

2019 · Vol. 62 (3) · pp. 445-455
Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract: Deep learning and (deep) neural networks are emerging tools for addressing inverse problems and image reconstruction tasks. Despite their outstanding performance, the mathematical analysis of solving inverse problems with neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$, where $\mathbf{B}_\alpha$ is a classical regularization and the network $\mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ is trained to recover the missing part $\operatorname{Id}_X - \mathbf{B}_\alpha$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming a sufficient decay of the associated distance function. We show that our results recover existing convergence and convergence-rate results for filter-based regularization methods, as well as the recently introduced null space network, as special cases. Numerical results are presented for a tomographic sparse-data problem and clearly demonstrate that the proposed RegNets improve on both classical regularization and the null space network.
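
The reconstruction formula $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ lends itself to a compact illustration. Below is a minimal sketch, assuming a linear forward operator and using Tikhonov regularization as the classical method $\mathbf{B}_\alpha$; the `network` argument is a hypothetical placeholder for a trained $\mathbf{N}_{\theta(\alpha)}$, not the authors' implementation.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Classical regularization B_alpha y = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def regnet_reconstruct(A, y, alpha, network):
    """RegNet-style estimate B_alpha y + N_theta(B_alpha y): the network
    adds a learned correction for the part missed by the classical method."""
    x_classical = tikhonov(A, y, alpha)         # B_alpha y
    return x_classical + network(x_classical)   # plus learned residual

if __name__ == "__main__":
    # Toy underdetermined problem with noisy data y = A x + noise.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 50))
    x_true = rng.standard_normal(50)
    y = A @ x_true + 0.01 * rng.standard_normal(30)

    # Zero map stands in for a trained network (i.e., no correction),
    # so this run reduces to plain Tikhonov regularization.
    zero_net = lambda x: np.zeros_like(x)
    x_hat = regnet_reconstruct(A, y, alpha=0.1, network=zero_net)
    print("data residual norm:", np.linalg.norm(A @ x_hat - y))
```

With a trained network in place of `zero_net`, the correction term targets components of $\operatorname{Id}_X - \mathbf{B}_\alpha$ that the classical regularizer suppresses, which is the mechanism the abstract describes.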

