Solving ill-posed inverse problems using iterative deep neural networks

2017, Vol. 33 (12), pp. 124007
Author(s): Jonas Adler, Ozan Öktem
2019, Vol. 62 (3), pp. 445-455
Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract  Deep learning and (deep) neural networks are emerging tools to address inverse problems and image reconstruction tasks. Despite outstanding performance, the mathematical analysis for solving inverse problems by neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$, where $\mathbf{B}_\alpha$ is a classical regularization and the network $\mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ is trained to recover the missing part $\operatorname{Id}_X - \mathbf{B}_\alpha$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming a sufficient decay of the associated distance function. We demonstrate that our results recover existing convergence and convergence-rate results for filter-based regularization methods, as well as the recently introduced null space network, as special cases. Numerical results are presented for a tomographic sparse-data problem and clearly demonstrate that the proposed RegNets improve on classical regularization as well as on the null space network.
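To make the structure of the reconstruction operator concrete, the following is a minimal PyTorch sketch (not the authors' implementation; the Tikhonov example, class names, and correction network are assumptions) of an operator of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$: a classical regularization followed by a learned correction applied to its output.

```python
# Minimal sketch of a RegNet-style reconstruction B_alpha + N_theta ∘ B_alpha.
# Assumptions: B_alpha is illustrated by Tikhonov regularization and N_theta by
# a small fully connected network; neither is taken from the paper.
import torch
import torch.nn as nn

class TikhonovRecon(nn.Module):
    """Classical regularization B_alpha: x = (A^T A + alpha I)^{-1} A^T y."""
    def __init__(self, A: torch.Tensor, alpha: float):
        super().__init__()
        n = A.shape[1]
        self.register_buffer("A", A)
        self.register_buffer("gram_inv",
                             torch.linalg.inv(A.T @ A + alpha * torch.eye(n)))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y has shape (batch, m); the inverse Gram matrix is symmetric.
        return y @ self.A @ self.gram_inv

class RegNet(nn.Module):
    """Reconstruction operator B_alpha + N_theta ∘ B_alpha."""
    def __init__(self, b_alpha: nn.Module, n_theta: nn.Module):
        super().__init__()
        self.b_alpha = b_alpha
        self.n_theta = n_theta

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x0 = self.b_alpha(y)          # classical regularized reconstruction
        return x0 + self.n_theta(x0)  # network adds the component B_alpha misses

# Example wiring: a 64x128 forward operator A and a small correction network.
A = torch.randn(64, 128)
b_alpha = TikhonovRecon(A, alpha=0.1)
n_theta = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
recon = RegNet(b_alpha, n_theta)
x_hat = recon(torch.randn(8, 64))     # reconstructions for 8 measurements
```

Training would then fit the correction network so that the combined output matches ground-truth signals, i.e., the network learns the part $\operatorname{Id}_X - \mathbf{B}_\alpha$ of the signal that the classical regularization does not recover.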


2021, Vol. 14 (2), pp. 470-505
Author(s): Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca Ratti, ...

2021, Vol. 7 (11), pp. 243
Author(s): Alexander Denker, Maximilian Schmidt, Johannes Leuschner, Peter Maass

Over recent years, deep learning methods have become an increasingly popular choice for solving tasks from the field of inverse problems. Many of these new data-driven methods have produced impressive results, although most only give point estimates for the reconstruction. However, especially in the analysis of ill-posed inverse problems, the study of uncertainties is essential. In our work, we apply generative flow-based models based on invertible neural networks to two challenging medical imaging tasks, namely low-dose computed tomography and accelerated magnetic resonance imaging. We test different architectures of invertible neural networks and provide extensive ablation studies. In most applications, a standard Gaussian is used as the base distribution for a flow-based model. Our results show that the choice of a radial distribution can improve the quality of reconstructions.
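To illustrate the role of the base distribution, below is a small sketch (an assumption on our part: one common radial construction with a uniform direction and half-normal radius; the paper's exact parameterization may differ) contrasting a standard Gaussian base with a radial base for a flow-based model.

```python
# Sketch of two base distributions for a flow-based (invertible) model.
# Assumption: "radial" here means a uniform direction on the unit sphere with
# a half-normal radius; this is one common construction, not necessarily the
# exact distribution used in the paper.
import torch

def sample_gaussian_base(n_samples: int, dim: int) -> torch.Tensor:
    """Standard Gaussian base z ~ N(0, I)."""
    return torch.randn(n_samples, dim)

def sample_radial_base(n_samples: int, dim: int) -> torch.Tensor:
    """Radial base: direction uniform on the sphere, radius |N(0, 1)|.
    In high dimensions this avoids the Gaussian's concentration of mass
    near radius sqrt(dim)."""
    direction = torch.randn(n_samples, dim)
    direction = direction / direction.norm(dim=1, keepdim=True)
    radius = torch.randn(n_samples, 1).abs()
    return radius * direction

# Posterior sampling with a conditional invertible network (hypothetical
# interface): push base samples through the flow conditioned on measurement y.
def posterior_samples(flow, y, n_samples=100, dim=4096, radial=True):
    z = sample_radial_base(n_samples, dim) if radial else sample_gaussian_base(n_samples, dim)
    return flow(z, y)   # flow(z, y) is a placeholder for the conditional inverse pass
```

Samples drawn this way can then be summarized, e.g., by a pixelwise mean and standard deviation, to obtain a reconstruction together with an uncertainty estimate.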


2018, Vol. 35 (1), pp. 20-36
Author(s): Alice Lucas, Michael Iliadis, Rafael Molina, Aggelos K. Katsaggelos

2020, Vol. 36 (6), pp. 065005
Author(s): Housen Li, Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Author(s): Alex Hernández-García, Johannes Mehrer, Nikolaus Kriegeskorte, Peter König, Tim C. Kietzmann
