Big in Japan: Regularizing Networks for Solving Inverse Problems

2019, Vol 62 (3), pp. 445-455
Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract: Deep learning and (deep) neural networks are emerging tools to address inverse problems and image reconstruction tasks. Despite outstanding performance, the mathematical analysis for solving inverse problems by neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$, where $\mathbf{B}_\alpha$ is a classical regularization and the network $\mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ is trained to recover the missing part $\operatorname{Id}_X - \mathbf{B}_\alpha$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming a sufficient decay of the associated distance function. We demonstrate that our results recover existing convergence and convergence-rate results for filter-based regularization methods, as well as for the recently introduced null space network, as special cases. Numerical results are presented for a tomographic sparse data problem and clearly demonstrate that the proposed RegNets improve on both classical regularization and the null space network.
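A rough sketch of the reconstruction form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)}\mathbf{B}_\alpha$ is given below. It is illustrative only, not the authors' code: tikhonov_recon, ResidualNet, and regnet_reconstruction are placeholder names, and Tikhonov regularization stands in for the generic $\mathbf{B}_\alpha$.

```python
# Sketch of a RegNet-style reconstruction x_rec = B_alpha(y) + N_theta(B_alpha(y)).
# Placeholder names; Tikhonov is used here as an example of a classical regularization.
import torch
import torch.nn as nn

class ResidualNet(nn.Module):
    """Small CNN N_theta intended to predict the part of x missed by B_alpha."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def tikhonov_recon(A: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    """Classical regularization B_alpha y = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return torch.linalg.solve(A.T @ A + alpha * torch.eye(n), A.T @ y)

def regnet_reconstruction(A, y, alpha, net, img_shape):
    x_classical = tikhonov_recon(A, y, alpha)     # B_alpha y
    img = x_classical.reshape(1, 1, *img_shape)   # to NCHW for the CNN
    residual = net(img).reshape(-1)               # N_theta(B_alpha y)
    return x_classical + residual                 # B_alpha y + N_theta(B_alpha y)

# Toy usage with random data (purely illustrative):
torch.manual_seed(0)
img_shape = (16, 16)
A = torch.randn(100, 16 * 16)    # underdetermined forward operator
y = torch.randn(100)             # simulated measurements
net = ResidualNet()
x_rec = regnet_reconstruction(A, y, alpha=0.1, net=net, img_shape=img_shape)
print(x_rec.shape)  # torch.Size([256])
```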

2019, Vol 31 (12), pp. 2293-2323
Author(s): Kenji Kawaguchi, Jiaoyang Huang, Leslie Pack Kaelbling

For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference is given to the handcrafted basis over the perturbable gradient basis. The proofs of these results are derived under mild assumptions. Accordingly, the proven results are directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights. A special case of our results also contributes to the theoretical foundation of representation learning.


2020, Vol 10 (5), pp. 1816
Author(s): Zaccharie Ramzi, Philippe Ciuciu, Jean-Luc Starck

Deep learning is starting to offer promising results for reconstruction in Magnetic Resonance Imaging (MRI). Many networks are being developed, but comparisons remain difficult because studies use different frameworks, the networks are not properly re-trained, and the datasets differ across comparisons. The recent release of a public dataset, fastMRI, consisting of raw k-space data, encouraged us to write a consistent benchmark of several deep neural networks for MR image reconstruction. This paper presents the results of this benchmark, allowing the networks to be compared, and provides links to the open-source Keras implementations of all of them. The main finding of this benchmark is that it is beneficial to perform more iterations between the image and the measurement spaces than to use a deeper per-space network.
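The sketch below illustrates the cross-domain iteration idea behind this finding, assuming a single-coil 2D setup. It is illustrative only: ImageCorrection, data_consistency, and unrolled_recon are placeholder names, not the benchmark's implementations.

```python
# Sketch of an unrolled cross-domain reconstruction loop alternating between
# image space and measurement (k-space) consistency. Placeholder names only.
import torch
import torch.nn as nn

class ImageCorrection(nn.Module):
    """Small image-space correction network applied at each unrolled iteration."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual correction

def data_consistency(kspace_est, kspace_measured, mask):
    """Replace estimated k-space values by the measured ones where sampled."""
    return torch.where(mask.bool(), kspace_measured, kspace_est)

def unrolled_recon(kspace_measured, mask, n_iters: int = 5):
    nets = nn.ModuleList(ImageCorrection() for _ in range(n_iters))
    kspace = kspace_measured.clone()
    for net in nets:
        image = torch.fft.ifft2(kspace)                        # measurement -> image space
        img_ri = torch.stack([image.real, image.imag], dim=1)  # complex -> 2 channels
        img_ri = net(img_ri)                                   # image-space correction
        image = torch.complex(img_ri[:, 0], img_ri[:, 1])
        kspace = torch.fft.fft2(image)                         # image -> measurement space
        kspace = data_consistency(kspace, kspace_measured, mask)
    return torch.fft.ifft2(kspace).abs()

# Toy usage: undersampled random k-space for a batch of one 32x32 image.
torch.manual_seed(0)
kspace = torch.randn(1, 32, 32, dtype=torch.complex64)
mask = (torch.rand(1, 32, 32) < 0.3).float()
recon = unrolled_recon(kspace * mask, mask)
print(recon.shape)  # torch.Size([1, 32, 32])
```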


2021, Vol 14 (2), pp. 470-505
Author(s): Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca Ratti, ...

2018, Vol 26 (2), pp. 277-286
Author(s): Jens Flemming

Abstract: Variational source conditions proved to be useful for deriving convergence rates for Tikhonov's regularization method and also for other methods. Up to now, such conditions have been verified only for a few examples, or for situations that can also be handled by classical range-type source conditions. Here we show that for almost every ill-posed inverse problem variational source conditions are satisfied. Whether linear or nonlinear, whether Hilbert or Banach spaces, whether one or multiple solutions, variational source conditions are a universal tool for proving convergence rates.
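For orientation, one commonly used form of a variational source condition (the paper's exact formulation may differ) reads

$$\beta \, D_\xi(x, x^\dagger) \le \Omega(x) - \Omega(x^\dagger) + \varphi\bigl(\lVert F(x) - F(x^\dagger)\rVert\bigr) \quad \text{for all } x \in \mathcal{M},$$

with a constant $\beta \in (0,1]$, the Bregman distance $D_\xi$ of the penalty $\Omega$ at the exact solution $x^\dagger$, the forward operator $F$, and a concave index function $\varphi$; the convergence rate of Tikhonov regularization is then governed by $\varphi$.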


2018, Vol 35 (1), pp. 20-36
Author(s): Alice Lucas, Michael Iliadis, Rafael Molina, Aggelos K. Katsaggelos

Econometrica
2021, Vol 89 (1), pp. 181-213
Author(s): Max H. Farrell, Tengyuan Liang, Sanjog Misra

We study deep neural networks and their use in semiparametric inference. We establish novel nonasymptotic high probability bounds for deep feedforward neural nets. These deliver rates of convergence that are sufficiently fast (in some cases minimax optimal) to allow us to establish valid second‐step inference after first‐step estimation with deep learning, a result also new to the literature. Our nonasymptotic high probability bounds, and the subsequent semiparametric inference, treat the current standard architecture: fully connected feedforward neural networks (multilayer perceptrons), with the now‐common rectified linear unit activation function, unbounded weights, and a depth explicitly diverging with the sample size. We discuss other architectures as well, including fixed‐width, very deep networks. We establish the nonasymptotic bounds for these deep nets for a general class of nonparametric regression‐type loss functions, which includes as special cases least squares, logistic regression, and other generalized linear models. We then apply our theory to develop semiparametric inference, focusing on causal parameters for concreteness, and demonstrate the effectiveness of deep learning with an empirical application to direct mail marketing.
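As a simplified illustration of first-step estimation with a fully connected ReLU network followed by a plug-in second step: make_mlp, fit_first_step, and the naive plug-in average below are hypothetical, and sample splitting, doubly robust scores, and depth growing with the sample size are omitted.

```python
# Illustrative sketch only (not the paper's code): a fully connected ReLU network as a
# first-step estimator of a nuisance regression E[Y | X], then a simple plug-in second step.
import torch
import torch.nn as nn

def make_mlp(in_dim: int, width: int = 64, depth: int = 3) -> nn.Sequential:
    """Multilayer perceptron with ReLU activations and unbounded weights."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

def fit_first_step(x, y, epochs: int = 200, lr: float = 1e-2) -> nn.Module:
    """First-step estimate of the nuisance function mu(x) = E[Y | X=x] via least squares."""
    model = make_mlp(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)  # least-squares loss
        loss.backward()
        opt.step()
    return model

# Toy data; the second step here is a naive plug-in average of mu_hat(x), for illustration.
torch.manual_seed(0)
x = torch.randn(500, 5)
y = torch.sin(x[:, 0]) + 0.1 * torch.randn(500)
mu_hat = fit_first_step(x, y)
theta_hat = mu_hat(x).squeeze(-1).mean().item()  # plug-in second-step functional
print(f"plug-in estimate: {theta_hat:.3f}")
```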

