Neural networks for linear inverse problems with incomplete data especially in applications to signal and image reconstruction

1995 ◽ Vol 8 (1) ◽ pp. 7-41
Author(s): A. Cichocki, R. Unbehauen, M. Lendl, K. Weinzierl

A Convex Variational Model for Learning Convolutional Image Atoms from Incomplete Data
2019 ◽ Vol 62 (3) ◽ pp. 417-444
Author(s): A. Chambolle, M. Holler, T. Pock

A variational model for learning convolutional image atoms from corrupted and/or incomplete data is introduced and analyzed, both in function space and numerically. Building on lifting and relaxation strategies, the proposed approach is convex and allows for simultaneous image reconstruction and atom learning in a general inverse-problems context. Motivated by improved numerical performance, a semi-convex variant is also included in the analysis and the experiments of the paper. For both settings, fundamental analytical properties are proven in a continuous setting, in particular well-posedness and stability results for inverse problems. Exploiting convexity, globally optimal solutions are further computed numerically for applications with incomplete, noisy, and blurry data, and numerical results are shown.
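
For orientation, a generic convolutional-atom functional of the kind the paper convexifies can be sketched as follows. This is a sketch in our own notation, not the paper's lifted formulation: u is the image, f the data, A the forward operator, p_i the atoms, and c_i the sparse coefficient maps.

\[
\min_{u,\,(c_i),\,(p_i)} \ \frac{\lambda}{2}\,\lVert Au - f \rVert_2^2
  \;+\; \sum_{i=1}^{N} \lVert c_i \rVert_1
\quad \text{s.t.} \quad
u = \sum_{i=1}^{N} c_i * p_i, \qquad \lVert p_i \rVert_2 \le 1 .
\]

The product c_i * p_i is bilinear in the unknowns, which makes this naive form non-convex; lifting and relaxation replace it with a jointly convex surrogate, at the price of a larger variable space.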


On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
Entropy ◽ 2021 ◽ Vol 23 (11) ◽ pp. 1481
Author(s): Yang Sun, Hangdong Zhao, Jonathan Scarlett

In recent years, neural-network-based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has also been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory and provide two practical algorithms for tuning architectural hyperparameters. Through experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.
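
As a concrete illustration of an untrained neural network prior, here is a minimal Deep-Image-Prior-style sketch in PyTorch for a toy inpainting task. The architecture, sizes, and step count are illustrative placeholders, not the tuned values studied in the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative (untuned) architectural hyperparameters: the paper's point
# is that choices like depth and width should be tuned per measurement
# model (inpainting vs. denoising vs. compressive sensing) and signal type.
depth, width = 4, 64

layers = [nn.Conv2d(32, width, 3, padding=1), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
layers.append(nn.Conv2d(width, 1, 3, padding=1))
G = nn.Sequential(*layers)  # untrained CNN acting as the image prior

# Toy inpainting problem: the linear operator A masks half the pixels.
x_true = torch.rand(1, 1, 32, 32)               # stand-in ground-truth image
mask = (torch.rand_like(x_true) > 0.5).float()  # A = diagonal 0/1 mask
y = mask * x_true                               # incomplete measurements

z = torch.randn(1, 32, 32, 32)                  # fixed random input code
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    loss = ((mask * G(z) - y) ** 2).mean()      # fidelity through A only
    loss.backward()
    opt.step()

x_hat = G(z).detach()  # estimate of x_true, including the masked pixels

Note that the fixed optimization budget acts as an implicit early-stopping regularizer: run long enough, an over-parameterized untrained network will eventually fit the measurement noise as well.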

