Inverse problem for biomedical applications: use of prior information on target and forward model parameters

2011 ◽  
Author(s):  
Fabrizio Martelli ◽  
Samuele Del Bianco ◽  
Giovanni Zaccanti


2021 ◽
Vol 13 (2) ◽  
pp. 215
Author(s):  
Hui Qin ◽  
Zhengzheng Wang ◽  
Yu Tang ◽  
Tiesuo Geng

Crosshole ground-penetrating radar (GPR) is a widely used tool for mapping subsurface properties, and inversion methods are used to derive electrical parameters from crosshole GPR data. In this paper, a probabilistic inversion algorithm that uses Markov chain Monte Carlo (MCMC) simulation within the Bayesian framework is implemented to infer the posterior distribution of the relative permittivity of the subsurface medium. Close attention is paid to the critical elements of this method, including the forward model, the data type, and the prior information, and their influence on the inversion results is investigated. First, a uniform prior distribution is used to reflect the lack of prior knowledge of the model parameters, and inversions are performed using the straight-ray model with first-arrival traveltime data, the finite-difference time-domain (FDTD) model with first-arrival traveltime data, and the FDTD model with waveform data, respectively. The cases using first-arrival traveltime data require an unreasonable number of model evaluations to converge, yet still fail to recover the true relative permittivity field. In contrast, the inversion using the FDTD model with waveform data successfully infers the correct model parameters. Then, a smoothness constraint on the model parameters is employed as the prior distribution. The results demonstrate that this prior information barely affects the inversion using the FDTD model with waveform data, but significantly improves the inversions using first-arrival traveltime data by decreasing the computing time and reducing the uncertainties of the posterior distribution of the model parameters.
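The Bayesian workflow described in this abstract can be sketched in a few lines. The following is an illustrative toy only, not the authors' implementation: the ray-length operator G, the four-cell permittivity grid, the noise level, and the uniform prior bounds are all assumed, and a linear straight-ray forward model stands in for FDTD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all assumed): 4 permittivity cells, a linear "straight-ray"
# operator G of ray lengths, and first-arrival traveltimes t = L*sqrt(eps)/c0.
G = rng.uniform(0.5, 1.5, size=(8, 4))        # ray-length matrix (assumed)
m_true = np.array([4.0, 6.0, 5.0, 7.0])       # true relative permittivity
sigma = 0.05                                  # traveltime noise (assumed)
d_obs = G @ np.sqrt(m_true) / 0.3 + rng.normal(0, sigma, 8)   # c0 = 0.3 m/ns

def log_post(m):
    """Gaussian likelihood + uniform prior on [1, 10] (lack of knowledge)."""
    if np.any(m < 1) or np.any(m > 10):
        return -np.inf
    r = d_obs - G @ np.sqrt(m) / 0.3
    return -0.5 * np.sum(r**2) / sigma**2

# Metropolis-Hastings random walk; keep post-burn-in samples
m, lp = np.full(4, 5.0), log_post(np.full(4, 5.0))
samples = []
for it in range(20000):
    prop = m + rng.normal(0, 0.1, 4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    if it > 5000:
        samples.append(m.copy())
post_mean = np.mean(samples, axis=0)          # posterior mean permittivity
```

The spread of `samples` around `post_mean` plays the role of the posterior uncertainty that the paper examines under different forward models and priors.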


Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Qiang Guo ◽  
Jing Ba ◽  
Li-Yun Fu ◽  
Cong Luo

The estimation of reservoir parameters from seismic observations is one of the main objectives in reservoir characterization. However, the forward model relating petrophysical properties of rocks to observed seismic data is highly nonlinear, and solving the relevant inverse problem is a challenging task. We present a novel inversion method for jointly estimating elastic and petrophysical parameters of rocks from prestack seismic data. We combine a full rock-physics model and the exact Zoeppritz equation as the forward model. To overcome the ill-conditioning of the inverse problem and address the complex prior distribution of model parameters given lithofacies variations, we introduce a regularization term based on a prior Gaussian mixture model under the Bayesian framework. The objective function is optimized by a fast simulated annealing algorithm, during which the Gaussian-mixture-based regularization terms are adaptively and iteratively adjusted by the maximum likelihood estimator, allowing the posterior distribution to be more consistent with the observed seismic data. The adaptive regularization method improves the accuracy of the petrophysical parameters compared to sequential inversion and non-adaptive regularization methods, and the inversion results can be used to indicate gas-saturated areas when applied to field data.
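A minimal sketch of the idea of combining simulated annealing with a Gaussian-mixture regularization term, under heavy assumptions: a one-parameter problem, a linearized stand-in for the rock-physics/Zoeppritz forward model, and a fixed (non-adaptive) two-facies mixture, unlike the paper's adaptively re-estimated mixture.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-parameter toy (all assumed): estimate porosity phi from a single datum
# through a linearized stand-in for the rock-physics + Zoeppritz forward model.
def forward(phi):
    return 2.0 - 3.0 * phi

phi_true = 0.25
d_obs = forward(phi_true) + 0.01 * rng.normal()

# Two-facies prior as a Gaussian mixture (shale-like and sand-like modes)
mix_w = np.array([0.5, 0.5])
mix_mu = np.array([0.05, 0.25])
mix_sd = np.array([0.02, 0.03])

def neg_log_prior(phi):
    dens = np.sum(mix_w * np.exp(-0.5 * ((phi - mix_mu) / mix_sd) ** 2) / mix_sd)
    return -np.log(dens + 1e-300)

def objective(phi):
    # data misfit + Gaussian-mixture regularization term
    return 0.5 * ((d_obs - forward(phi)) / 0.01) ** 2 + neg_log_prior(phi)

# Simulated annealing with geometric cooling
phi = 0.15
energy = objective(phi)
T = 1.0
best_phi, best_E = phi, energy
for _ in range(5000):
    prop = phi + rng.normal(0, 0.02)
    e_prop = objective(prop)
    if e_prop < energy or rng.uniform() < np.exp(-(e_prop - energy) / T):
        phi, energy = prop, e_prop
        if energy < best_E:
            best_phi, best_E = phi, energy
    T *= 0.999
```

In the paper's adaptive variant, the mixture weights and moments would be re-estimated by maximum likelihood as the annealing proceeds rather than held fixed as here.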


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. H1-H21 ◽  
Author(s):  
Thomas Mejer Hansen ◽  
Knud Skou Cordua ◽  
Bo Holm Jacobsen ◽  
Klaus Mosegaard

Inversion of geophysical data relies on knowledge of how to solve the forward problem, that is, how to compute data from a given set of model parameters. In many applications of inverse problems, the solution to the forward problem is assumed to be known perfectly, without any error. In reality, solving the forward problem will almost always be prone to errors, which we refer to as modeling errors. For a specific forward problem, the computation of crosshole tomographic first-arrival traveltimes, we evaluated how the modeling error, for several different approximate forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model. We developed general tools by which the modeling error can be quantified and cast into a consistent formulation as an additive Gaussian observation error. We tested a method for generating a sample of the modeling error that arises from using a simple and approximate forward model as opposed to a more complex and correct one. A probabilistic model of the modeling error was then inferred in the form of a correlated Gaussian probability distribution. The key to the method is the ability to generate many realizations from a statistical description of the source of the modeling error, which in this case is the a priori model. The methodology was tested on two synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts that erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent with the actual model. Further, using an approximate forward model may lead to a dramatic decrease in the computational demands of solving inverse problems.
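The realization-based quantification of the modeling error can be sketched as follows. Everything here is an illustrative stand-in: `F_approx` is a linear straight-ray operator, `F_full` adds an arbitrary nonlinear term in place of the more correct physics, and the a priori model is a simple Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pair of forward models (assumed): F_approx uses straight rays, while
# F_full adds a nonlinear term standing in for the more correct physics.
G = rng.uniform(0.0, 1.0, size=(6, 10))       # ray-length matrix (assumed)

def F_approx(s):
    return G @ s

def F_full(s):
    return G @ s + 0.05 * (G @ s**2)

# Draw many realizations of the slowness field from the a priori model
S = rng.normal(1.0, 0.1, size=(2000, 10))

# Sample of the modeling error: difference between full and approximate data
D = np.array([F_full(s) - F_approx(s) for s in S])

# Probabilistic model of the modeling error: correlated Gaussian N(mu_D, C_D),
# to be treated as an additive observation error in the inversion
mu_D = D.mean(axis=0)
C_D = np.cov(D, rowvar=False)
```

In an inversion, `mu_D` would be subtracted from (or added to) the simulated data and `C_D` added to the data covariance, so the cheap forward model can be used without producing the artifacts described above.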


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 599
Author(s):  
Danilo Cruz ◽  
João de Araújo ◽  
Carlos da Costa ◽  
Carlos da Silva

Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as prior information to decrease the ambiguity of the obtained results. For this, we propose adding a relative entropy term to the formalism of full waveform inversion. In this context, entropy is simply a name for the regularisation term, whose role is to help the inversion converge to the global minimum. The application of entropy in inverse problems usually involves formulating the problem so that statistical concepts can be used. To avoid this step, we propose a deterministic application to full waveform inversion. We discuss some aspects of relative entropy and show three different ways of using it to add prior information to the inverse problem, employing a dynamic weighting scheme. The idea is that the prior information can help to find the path towards the global minimum at the beginning of the inversion process. In all cases, the prior information can be incorporated very quickly into full waveform inversion and lead the inversion to the desired solution. Including the logarithmic weighting that constitutes entropy in the inverse problem suppresses low-intensity ripples and sharpens point events. Thus, adding relative entropy to full waveform inversion can provide results with better resolution. For synthetic data, we obtained a significant improvement in the regions of the BP 2004 model where salt is present by adding prior information through relative entropy. We show that prior information added through entropy in the full-waveform inversion formalism is a way to avoid local minima.
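The deterministic use of relative entropy with a dynamic weight can be illustrated on a toy problem. This is a sketch under strong assumptions: a small linear system stands in for the FWI forward operator, the prior model plays the role of well-log information, and the weight schedule is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear stand-in for FWI (all assumed): recover m from d = G m, with a
# relative-entropy term pulling the solution toward a well-log prior m_prior.
G = rng.normal(size=(12, 5))
m_true = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
d_obs = G @ m_true
m_prior = m_true + rng.normal(0, 0.1, 5)      # hypothetical well-log prior

def rel_entropy(m):
    """D(p||q) with model and prior normalized to probability vectors."""
    p = m / m.sum()
    q = m_prior / m_prior.sum()
    return np.sum(p * np.log(p / q))

def objective(m, lam):
    return 0.5 * np.sum((G @ m - d_obs) ** 2) + lam * rel_entropy(m)

def grad(m, lam, h=1e-6):
    g = np.zeros_like(m)                      # central finite differences
    for i in range(m.size):
        e = np.zeros_like(m)
        e[i] = h
        g[i] = (objective(m + e, lam) - objective(m - e, lam)) / (2 * h)
    return g

m = 0.5 * m_prior + 1.0                       # start away from the answer
for it in range(1500):
    lam = 10.0 * max(0.0, 1.0 - it / 300)     # dynamic weight: prior acts early
    m = np.maximum(m - 0.01 * grad(m, lam), 1e-3)   # keep m positive for the log
```

The decaying `lam` mirrors the idea above: the entropy term guides the early iterations toward the basin of the global minimum and then fades, leaving the data misfit to refine the model.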


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 910
Author(s):  
Andrey Kovtanyuk ◽  
Alexander Chebotarev ◽  
Varvara Turova ◽  
Irina Sidorenko ◽  
Renée Lampe

An inverse problem for a system of equations modeling oxygen transport in the brain is studied. The problem consists of finding the right-hand side of the equation for blood oxygen transport as a linear combination of given functionals describing the average oxygen concentration in the neighborhoods of the ends of arterioles and venules. The overdetermination condition is given by the values of these functionals evaluated on the solution. The unique solvability of the problem is proven without any smallness assumptions on the model parameters.
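In symbols, the setting can be sketched as follows (notation assumed for illustration, not taken from the paper). The unknown right-hand side is sought as a finite linear combination

f(x) = \sum_{j=1}^{m} q_j f_j(x),

with the functions f_j given and the coefficients q_j unknown, while the overdetermination conditions prescribe the values of the given functionals on the solution u,

\Phi_j(u) = r_j, \quad j = 1, \dots, m,

so the inverse problem is to find the pair (u, q_1, ..., q_m) satisfying both the governing system and these m conditions.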


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Guillaume Ropp ◽  
Vincent Lesur ◽  
Julien Baerenzung ◽  
Matthias Holschneider

Abstract We describe a new, original approach to the modelling of the Earth’s magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. This method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. Sources that most significantly contribute to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller temporal and spatial scales. An important aspect of our method is that it yields error bars for all model parameters. These error bars, however, are only as reliable as the description of the different sources and the prior information used. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth generation of the International Geomagnetic Reference Field.
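The sequential (Kalman filter) step of such an approach has a standard form, sketched below. This is a generic predict/update cycle, not the authors' implementation: the state `x` stands in for field model coefficients, `F` for the prior temporal dynamics, and `H` for the observation operator; all dimensions and noise levels are assumed.

```python
import numpy as np

def kalman_step(x, P, y, F, Q, H, R):
    """One predict/update cycle; P carries the error bars on the parameters."""
    x_pred = F @ x                        # propagate state with prior dynamics
    P_pred = F @ P @ F.T + Q              # propagate uncertainty
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Demo: sequentially estimate a nearly static scalar coefficient from three
# noisy observations (dynamics, noise levels and dimensions all assumed).
x, P = np.zeros(1), np.eye(1)
for obs in [1.0, 0.9, 1.1]:
    x, P = kalman_step(x, P, np.array([obs]), np.eye(1),
                       0.01 * np.eye(1), np.eye(1), 0.1 * np.eye(1))
```

The posterior covariance `P` is what supplies the error bars discussed in the abstract, and it is reliable only insofar as `F`, `Q`, `H`, and `R` describe the sources and data realistically.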


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 21
Author(s):  
Fabrizia Guglielmetti ◽  
Eric Villard ◽  
Ed Fomalont

A stable and unique solution to the ill-posed inverse problem in radio synthesis image analysis is sought by employing Bayesian probability theory combined with a probabilistic two-component mixture model. The solution of the ill-posed inverse problem is given by inferring the values of model parameters defined to describe completely the physical system that gave rise to the data. The analysed data are calibrated visibilities, Fourier transformed from the (u, v) plane to the image plane. Adaptive splines are explored to model the cumbersome background corrupted by the strongly varying dirty beam in the image plane. The deconvolution of the dirty image from the dirty beam is tackled in probability space. Probability maps of source detection at several resolution values quantify the knowledge acquired about the celestial source distribution from a given state of information. The available information comprises the data constraints, prior knowledge and uncertain information. The novel algorithm aims to provide an alternative imaging task for the Atacama Large Millimeter/Submillimeter Array (ALMA) in support of the widely used Common Astronomy Software Applications (CASA) package, enhancing its capabilities in source detection.
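The probabilistic two-component idea can be illustrated on synthetic pixel data. The component forms below (Gaussian background, a broad Gaussian signal component, fixed mixing weight) are assumptions for illustration, not the paper's adaptive-spline model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative two-component mixture (assumed forms, not the paper's exact
# model): each pixel is either pure background noise or background plus
# signal; Bayes' rule then yields a per-pixel source-probability map.
n = 1000
is_source = rng.uniform(size=n) < 0.1
pixels = rng.normal(0.0, 1.0, n) + np.where(is_source, rng.exponential(5.0, n), 0.0)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Mixture p(x) = (1-w) N(0,1) + w N(5,3); the signal component is assumed
w = 0.1
p_bg = (1 - w) * gauss(pixels, 0.0, 1.0)
p_src = w * gauss(pixels, 5.0, 3.0)
prob_source = p_src / (p_bg + p_src)      # posterior probability map
```

Instead of a single deconvolved image, the output is a probability of source presence per pixel, which is the kind of probability map the abstract describes.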


2014 ◽  
Vol 989-994 ◽  
pp. 3609-3612
Author(s):  
Yong Jian Zhao

Blind source extraction (BSE) is a promising technique for solving signal mixture problems when only one or a few source signals are desired. In biomedical applications, one often knows certain prior information about a desired source signal in advance. In this paper, we exploit such prior information as a constraint to develop a flexible BSE algorithm: a desired source signal can be extracted when its normalized kurtosis range is known in advance. Computer simulations on biomedical signals confirm the validity of the proposed algorithm.
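A sketch of kurtosis-guided extraction, under assumptions: a synthetic two-channel mixture in which the "biomedical" target is Laplacian (positive normalized kurtosis), a generic kurtosis-based fixed-point update in place of the paper's algorithm, and an assumed kurtosis range used as the constraint.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy mixture (assumed): one super-Gaussian "biomedical" target (Laplacian,
# positive normalized kurtosis) mixed with a Gaussian nuisance signal.
n = 5000
s_target = rng.laplace(size=n)
s_noise = rng.normal(size=n)
A = np.array([[0.8, 0.6], [0.4, 0.9]])    # unknown mixing matrix (assumed)
X = A @ np.vstack([s_target, s_noise])

# Whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, U = np.linalg.eigh(np.cov(X))
Z = U @ np.diag(d ** -0.5) @ U.T @ X

def norm_kurtosis(y):
    """Normalized (excess) kurtosis of a zero-mean signal."""
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3

# Kurtosis-based fixed-point extraction of a single source
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w = (Z * y**3).mean(axis=1) - 3 * w   # kurtosis fixed-point update
    w /= np.linalg.norm(w)
y = w @ Z

# Prior information as a constraint: the target's normalized kurtosis
# range is known in advance (here assumed to be (1, 6))
kurt_ok = 1.0 < norm_kurtosis(y) < 6.0
```

The final range check is where the prior information enters: an extracted signal whose normalized kurtosis falls outside the known range would be rejected and the search restarted.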


Author(s):  
Jan Prüser ◽  
Christoph Hanck

Abstract Vector autoregressions (VARs) are richly parameterized time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, in small samples the rich parametrization of VAR models may come at the cost of overfitting the data, possibly leading to imprecise inference for key quantities of interest such as impulse response functions (IRFs). Bayesian VARs (BVARs) can use prior information to shrink the model parameters, potentially avoiding such overfitting. We provide a simulation study to compare, in terms of the frequentist properties of the estimates of the IRFs, useful strategies for selecting the informativeness of the prior. The study reveals that prior information may help to obtain more precise estimates of impulse response functions than classical OLS-estimated VARs, as well as more accurate coverage rates of error bands, in small samples. Strategies that select the prior hyperparameters of the BVAR via empirical or hierarchical modeling perform particularly well.
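The shrinkage effect can be sketched on a simulated small-sample VAR. This is a deliberately simplified illustration: a symmetric ridge penalty stands in for the Minnesota-style prior, and the informativeness hyperparameter `lam` is fixed rather than selected by empirical or hierarchical methods as in the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a small bivariate VAR(1), y_t = A y_{t-1} + e_t, with a short
# sample (T = 40) where overfitting is a real concern.
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
T = 40
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0.0, 1.0, 2)
X, y = Y[:-1], Y[1:]

# OLS (no prior) vs. Bayesian shrinkage toward zero; a Minnesota-style
# prior reduces to this ridge form here, with lam as the informativeness
A_ols = np.linalg.solve(X.T @ X, X.T @ y).T
lam = 5.0
A_bvar = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y).T

def irf(A, h):
    """Impulse response at horizon h to a unit shock in variable 1."""
    return np.linalg.matrix_power(A, h) @ np.array([1.0, 0.0])
```

Comparing `irf(A_ols, h)` and `irf(A_bvar, h)` across many simulated samples is the kind of frequentist exercise the study performs when assessing IRF precision and error-band coverage.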


2018 ◽  
Vol 27 (02) ◽  
pp. 1850015 ◽  
Author(s):  
S. Cht. Mavrodiev ◽  
M. A. Deliyergiyev

We formalized the nuclear mass problem in an inverse-problem framework. This approach allows us to infer the underlying model parameters from experimental observations, rather than to predict the observations from the model parameters. The inverse problem was formulated for the numerically generalized semi-empirical mass formula of Bethe and von Weizsäcker and was solved step by step using the AME2012 nuclear database. The established parametrization describes the measured nuclear masses of 2564 isotopes with a maximum deviation of less than 2.6 MeV, starting from proton and neutron numbers equal to 1. The explicit form of the unknown functions in the generalized mass formula was discovered step by step using a modified least-χ² procedure, realized in the algorithms developed by Lubomir Aleksandrov for solving nonlinear systems of equations via the Gauss–Newton method, which let us choose the better of two functions with the same χ². In the obtained generalized model, the corrections to the binding energy depend on nine proton (2, 8, 14, 20, 28, 50, 82, 108, 124) and ten neutron (2, 8, 14, 20, 28, 50, 82, 124, 152, 202) magic numbers, as well as on the asymptotic boundaries of their influence. The obtained results were compared with the predictions of other models.
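The Gauss–Newton step at the heart of such a fit can be sketched on a toy problem. The two-term "liquid-drop-like" model and its coefficients below are illustrative assumptions, not the paper's generalized mass formula; since this toy model is linear in its parameters, Gauss–Newton converges in a single step, whereas the full formula requires genuine iteration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the mass-formula fit (assumed two-term liquid-drop-like
# model; coefficients are illustrative, not the paper's parametrization)
def model(theta, x):
    a, b = theta
    return a * x + b * x ** (2 / 3)       # "volume" + "surface" terms

def gauss_newton(theta, x, y, iters=20):
    """Gauss-Newton for least squares: linearize residuals, solve, repeat."""
    for _ in range(iters):
        r = y - model(theta, x)
        J = -np.column_stack([x, x ** (2 / 3)])   # Jacobian of the residuals
        theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
    return theta

x = np.arange(1.0, 50.0)                  # stand-in for mass numbers
theta_true = np.array([15.8, -18.3])
y = model(theta_true, x) + rng.normal(0.0, 0.5, x.size)
theta_hat = gauss_newton(np.array([1.0, 1.0]), x, y)
```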

