Ill-posed nature of inverse problems and their regularization (stability estimates)

Author(s):  
G. A. Viano
1983, Vol. 45 (5), pp. 1237-1245
Author(s):  
O. M. Alifanov

Author(s):  
C. W. Groetsch, Martin Hanke

Abstract: A simple numerical method for some one-dimensional inverse problems of model identification type arising in nonlinear heat transfer is discussed. The essence of the method is to express the nonlinearity in terms of an integro-differential operator, the values of which are approximated by a linear spline technique. The inverse problems are mildly ill-posed and therefore call for regularization when data errors are present. A general technique for stabilization of unbounded operators may be applied to regularize the process, and a specific regularization technique is illustrated on a model problem.
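As an illustration of the kind of stabilization this abstract refers to, the following is a minimal sketch of Tikhonov regularization for a generic discretized linear ill-posed problem. The smoothing forward operator, noise level, and regularization parameter are illustrative assumptions, not the spline-based integro-differential scheme of the paper.

```python
# Minimal sketch: Tikhonov regularization of a discretized ill-posed problem.
# The forward operator, data, and noise level below are illustrative only.
import numpy as np

n = 100
x_grid = np.linspace(0.0, 1.0, n)

# Ill-conditioned forward operator: a discretized smoothing (integration-like) kernel.
A = np.exp(-50.0 * (x_grid[:, None] - x_grid[None, :]) ** 2) / n

x_true = np.sin(2 * np.pi * x_grid)          # "unknown" to be recovered
y = A @ x_true + 1e-3 * np.random.randn(n)   # noisy data

def tikhonov(A, y, alpha):
    """Solve min ||A x - y||^2 + alpha ||x||^2 via the normal equations."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ y)

x_naive = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized: noise-amplified
x_reg = tikhonov(A, y, alpha=1e-4)               # regularized: stable

print("error (naive):      ", np.linalg.norm(x_naive - x_true))
print("error (regularized):", np.linalg.norm(x_reg - x_true))
```

In practice the parameter alpha would be chosen by a parameter choice rule (for instance the discrepancy principle) rather than fixed by hand.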


Author(s):  
Mingyong Zhou

Background: Complex inverse problems such as radar imaging and CT/EIT imaging have been investigated extensively with mathematical algorithms employing various regularization methods. However, because of their ill-posed and nonlinear character, it is difficult to obtain stable inverse solutions that combine fast convergence with high accuracy.
Objective: In this paper, we propose a hierarchical, multi-resolution, scalable method, from both an algorithmic and a hardware perspective, to achieve fast and accurate solutions of inverse problems, taking radar and EIT imaging as examples.
Method: We extend the discussion of neuromorphic computing as a brain-inspired computing method, and of the associated learning/training algorithms, to design a series of problem-specific AI "brains" (with different memristive values) that solve general complex ill-posed inverse problems traditionally handled by mathematical regularization operators. We design a hierarchical, multi-resolution, scalable method and an algorithmic framework to train an AI deep learning neural network and map it onto a memristive circuit so that the memristive values are obtained optimally. We also propose FPGA as an emulation platform for the neuromorphic circuit.
Result: We compare our approach with traditional regularization methods. In particular, we use Electrical Impedance Tomography (EIT) and radar imaging as typical examples of how to design AI deep learning neural network architectures to solve inverse problems.
Conclusion: Using EIT imaging as a typical example, we show that for any moderately complex inverse problem that can be described as a combinatorial problem, an AI deep learning neural network is a practical alternative approach for solving the inverse problem to any given resolution accuracy, provided that the network is wide enough and the computational power is sufficient to train on all combination samples.
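To make the "network as inverse solver" idea concrete, here is a minimal sketch in plain NumPy: a small fully connected network is trained on simulated measurement/image pairs to approximate the inverse map. The random linear forward model, network sizes, and training hyperparameters are illustrative assumptions standing in for the EIT/radar physics and for the memristive/FPGA hardware mapping described above.

```python
# Minimal sketch: train a small neural network to approximate the inverse map
# measurement -> image. The forward model is a toy random linear operator,
# not a real EIT or radar model; all sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_img, n_meas, n_hidden = 16, 32, 64

A = rng.normal(size=(n_meas, n_img))          # toy forward operator ("physics")

def simulate(batch):
    x = rng.normal(size=(batch, n_img))                     # random "images"
    y = x @ A.T + 0.01 * rng.normal(size=(batch, n_meas))   # noisy measurements
    return x, y

# One-hidden-layer network y -> x with tanh activation, trained by plain SGD.
W1 = rng.normal(size=(n_meas, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_img)) * 0.1
b2 = np.zeros(n_img)
lr = 1e-2

for step in range(2000):
    x, y = simulate(64)
    h = np.tanh(y @ W1 + b1)                  # forward pass
    x_hat = h @ W2 + b2
    err = x_hat - x                           # gradient of squared-error loss
    gW2 = h.T @ err / len(x);  gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)          # backpropagate through tanh
    gW1 = y.T @ gh / len(x);   gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

x_test, y_test = simulate(256)
x_pred = np.tanh(y_test @ W1 + b1) @ W2 + b2
print("relative test error:", np.linalg.norm(x_pred - x_test) / np.linalg.norm(x_test))
```

One could, for instance, train one such network per resolution level to mimic a hierarchical, multi-resolution scheme of the kind the abstract outlines; the sketch above only shows the basic training-and-inversion pattern.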


2019, Vol. 27 (3), pp. 317-340
Author(s):  
Max Kontak, Volker Michel

Abstract: In this work, we present the so-called Regularized Weak Functional Matching Pursuit (RWFMP) algorithm, a weak greedy algorithm for linear ill-posed inverse problems. In comparison to the Regularized Functional Matching Pursuit (RFMP), on which it is based, the RWFMP possesses an improved theoretical analysis, including the guaranteed existence of the iterates, the convergence of the algorithm for inverse problems in infinite-dimensional Hilbert spaces, and a convergence rate, which is also valid for the particular case of the RFMP. Another improvement is the removal of the previously required and difficult-to-verify semi-frame condition. Furthermore, we provide an a priori parameter choice rule for the RWFMP, which yields a convergent regularization. Finally, we give a numerical example showing that the "weak" approach is also beneficial from the computational point of view: by applying an improved search strategy in the algorithm, motivated by the weak approach, we can save up to 90% of the computation time in comparison to the RFMP, while the accuracy of the solution remains essentially unchanged.
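For orientation, the following is a simplified, finite-dimensional sketch of a regularized (weak) matching pursuit iteration in the spirit of the RFMP/RWFMP. The operator, the canonical-basis dictionary, and the parameters lam and rho are illustrative assumptions; the papers themselves work in infinite-dimensional Hilbert spaces with problem-specific dictionaries.

```python
# Minimal finite-dimensional sketch of a regularized (weak) matching pursuit
# iteration in the spirit of the RFMP/RWFMP. Operator, dictionary, and
# parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 80
T = rng.normal(size=(m, n)) / np.sqrt(m)       # linear forward operator
f_true = np.zeros(n); f_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = T @ f_true + 0.01 * rng.normal(size=m)     # noisy data

dictionary = np.eye(n)                         # canonical basis as a toy dictionary
lam, rho = 1e-3, 0.9                           # regularization and weakness parameters

f = np.zeros(n)
for it in range(50):
    r = y - T @ f                              # current data residual
    Td = T @ dictionary                        # images of the dictionary elements
    num = Td.T @ r - lam * (dictionary.T @ f)  # numerator of the selection criterion
    den = np.sum(Td ** 2, axis=0) + lam * np.sum(dictionary ** 2, axis=0)
    crit = num ** 2 / den
    # "Weak" selection: accept the first element whose criterion value
    # reaches rho times the maximum (rho = 1 gives the strong selection).
    k = np.argmax(crit >= rho * crit.max())
    alpha = num[k] / den[k]                    # optimal step for the chosen element
    f = f + alpha * dictionary[:, k]

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))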


Author(s):  
Risheng Liu

Numerous tasks at the core of statistics, learning, and vision are special cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been shown empirically to be useful for these problems. Nevertheless, integrating learnable structures into iterations is still a laborious process that can be guided only by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behavior of these learned iterations, so the significance of such methods remains somewhat unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable paradigm for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed paradigm allows us to generate globally convergent trajectories for learning-based iterative methods. Owing to this framework, we achieve state-of-the-art performance on several real applications.
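The general idea of embedding learnable components into an iterative scheme can be illustrated with an unrolled ISTA-type iteration for a sparse linear inverse problem. In the sketch below the step size and threshold are fixed by hand; in learned variants these quantities (and possibly the matrices applied at each stage) become per-layer trainable parameters. All quantities are illustrative assumptions, and the code is not the specific paradigm proposed in the work above.

```python
# Minimal sketch of the "learnable iteration" idea: an unrolled ISTA-style
# scheme for a sparse linear inverse problem. In learned variants, step size,
# threshold, and per-stage matrices are trained from data; here they are fixed.
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 50, 100, 30                         # measurements, unknowns, unrolled steps
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2        # gradient step size (learnable in LISTA-type nets)
theta = 0.01                                  # shrinkage threshold (learnable in LISTA-type nets)

x = np.zeros(n)
for k in range(K):                            # K unrolled iterations ~ network depth
    x = soft_threshold(x - step * A.T @ (A @ x - y), theta)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The convergence question raised above concerns precisely such parameterized trajectories: once the per-iteration components are learned rather than fixed, classical convergence proofs no longer apply directly.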

