Analytical and Numerical Connections between Fractional Fickian and Intravoxel Incoherent Motion Models of Diffusion MRI

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1963
Author(s):  
Jingting Yao ◽  
Muhammad Ali Raza Anjum ◽  
Anshuman Swain ◽  
David A. Reiter

Impaired tissue perfusion underlies many chronic disease states and aging. Diffusion-weighted imaging (DWI) is a noninvasive MRI technique that has been widely used to characterize tissue perfusion. Parametric models based on DWI measurements can characterize microvascular perfusion modulated by functional and microstructural alterations in the skeletal muscle. The intravoxel incoherent motion (IVIM) model uses a biexponential form to quantify the incoherent motion of water molecules in the microvasculature at low b-values of DWI measurements. The fractional Fickian diffusion (FFD) model is a parsimonious representation of anomalous superdiffusion that uses the stretched exponential form and can be used to quantify the microvascular volume of skeletal muscle. Both models are established measures of perfusion based on DWI, and the prognostic value of model parameters for identifying pathophysiological processes has been studied. Although the mathematical properties of the individual models have been previously reported, quantitative connections between the IVIM and FFD models have not been examined. This work provides a mathematical framework for obtaining a direct, one-way transformation of the parameters of the stretched exponential model to those of the biexponential model. Numerical simulations are implemented, and their results corroborate the analytical results. Additionally, analysis of in vivo DWI measurements in skeletal muscle using both the biexponential and stretched exponential models is shown and compared with the analytical and numerical results. These results demonstrate the difficulty of model selection based on goodness of fit to experimental data. This analysis provides a framework for better interpreting and harmonizing perfusion parameters from experimental results using these two different models.
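
The two signal representations discussed above can be illustrated with a brief sketch. The code below simulates a stretched exponential (FFD) decay and fits it with the IVIM biexponential form; it is a minimal illustration of how the two parameterizations can describe similar signal decays, not the authors' analytical transformation, and all parameter values, b-value ranges, and fitting bounds are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact derivation): simulate a stretched
# exponential DWI signal and fit it with the IVIM biexponential model to
# illustrate how the two parameterizations can describe similar decays.
# Model forms and parameter values below are standard/illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

b = np.linspace(0, 800, 30)          # b-values in s/mm^2

def stretched_exp(b, ddc, alpha):
    """Fractional Fickian / stretched exponential: S/S0 = exp(-(b*DDC)^alpha)."""
    return np.exp(-(b * ddc) ** alpha)

def ivim_biexp(b, f, d_star, d):
    """IVIM biexponential: S/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# "Ground truth" stretched exponential signal (illustrative parameter values)
signal = stretched_exp(b, ddc=1.5e-3, alpha=0.85)

# Fit the biexponential form to the stretched exponential decay
p0 = [0.1, 10e-3, 1.0e-3]                       # initial guesses for f, D*, D
bounds = ([0, 1e-3, 1e-4], [1, 0.1, 3e-3])
popt, _ = curve_fit(ivim_biexp, b, signal, p0=p0, bounds=bounds)
f_fit, dstar_fit, d_fit = popt

resid = signal - ivim_biexp(b, *popt)
print(f"f={f_fit:.3f}, D*={dstar_fit:.2e}, D={d_fit:.2e}, "
      f"RMSE={np.sqrt(np.mean(resid**2)):.2e}")
```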

Author(s):  
David A. Reiter ◽  
Fatemeh Adelnia ◽  
Donnie Cameron ◽  
Richard G. Spencer ◽  
Luigi Ferrucci

2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

ABSTRACT FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high-level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved, both in terms of a better understanding of THG processes and of more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns over a wide range of THG conditions enhances confidence in their predictive capabilities. Numerical THG models of heating and hydration experiments performed on small-scale laboratory cells provide excellent results for temperatures, water inflow, and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns in the measured data; in general, the fit of dissolved-species concentrations is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The reference model simultaneously reproduces the observed water inflows and the bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the “in situ” test is based on the same bentonite TH parameters and assumptions as the “mock-up” test. Granite parameters were slightly modified during calibration in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures relative humidities and temperatures in the bentonite [3] and also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model were calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS (engineered barrier system).


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1853
Author(s):  
Alina Bărbulescu ◽  
Cristian Ștefan Dumitriu

Artificial intelligence (AI) methods are interesting alternatives to classical approaches for modeling financial time series, since they relax the assumptions imposed on the data-generating process by parametric models and do not impose any constraint on the model’s functional form. Although many studies have employed these techniques for modeling financial time series, the connection between model performance and the statistical characteristics of the data series has not yet been investigated. Therefore, this research studies the performance of Gene Expression Programming (GEP) for modeling monthly and weekly financial series that present trend and/or seasonality, both before and after the removal of each component. It is shown that series normality and homoskedasticity do not influence model quality. Trend removal increases the models’ performance, whereas eliminating seasonality diminishes the goodness of fit. Comparisons with ARIMA models are also provided.
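
As a rough illustration of the preprocessing discussed above, the sketch below removes trend and seasonality from a synthetic monthly series and compares the in-sample fit of a simple ARIMA model on the raw, detrended, and deseasonalized data. GEP itself requires a dedicated library, so it is not shown; the series, decomposition settings, and ARIMA order are illustrative assumptions, not the study's configuration.

```python
# Illustrative sketch of the preprocessing discussed above: removing trend and/or
# seasonality from a monthly series before modeling, then comparing a simple
# ARIMA fit on each version. All settings here are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("2000-01-31", periods=240, freq="M")
y = pd.Series(0.05 * np.arange(240)                            # linear trend
              + 2.0 * np.sin(2 * np.pi * np.arange(240) / 12)  # annual seasonality
              + rng.normal(0, 1, 240), index=idx)

dec = seasonal_decompose(y, model="additive", period=12)
detrended = y - dec.trend                  # trend removed
deseasonalized = y - dec.seasonal          # seasonality removed

# In-sample goodness of fit of an ARIMA model on each series variant
for name, series in [("raw", y), ("detrended", detrended.dropna()),
                     ("deseasonalized", deseasonalized.dropna())]:
    fit = ARIMA(series, order=(1, 1, 1)).fit()
    print(f"{name}: in-sample MSE = {np.mean(fit.resid[1:] ** 2):.3f}, AIC = {fit.aic:.1f}")
```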


2021 ◽  
pp. 1-18
Author(s):  
Gisela Vanegas ◽  
John Nejedlik ◽  
Pascale Neff ◽  
Torsten Clemens

Summary Forecasting production from hydrocarbon fields is challenging because of the large number of uncertain model parameters and the multitude of different observed data. This large number of model parameters leads to uncertainty in the production forecast. Changing operating conditions [e.g., implementation of improved oil recovery or enhanced oil recovery (EOR)] results in model parameters that were not sensitive during the production history becoming sensitive in the forecast. Hence, simulation approaches need to be able to address uncertainty in model parameters as well as conditioning numerical models to a multitude of different observed data. Sampling from distributions of various geological and dynamic parameters allows for the generation of an ensemble of numerical models that can be falsified against the different observed data using principal-component analysis (PCA). If the numerical models are not falsified, machine-learning (ML) approaches can be used to generate a large set of parameter combinations that can be conditioned to the different observed data. The data conditioning is followed by a final step ensuring that parameter interactions are covered. The methodology was applied to a sandstone oil reservoir with more than 70 years of production history and dozens of wells. The resulting ensemble of numerical models is conditioned to all observed data. Furthermore, the resulting posterior-model parameter distributions are modified from the prior-model parameter distributions only if the observed data are informative for the model parameters. Hence, changes in operating conditions can be forecast under uncertainty, which is essential if nonsensitive parameters in the history are sensitive in the forecast.
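
A minimal sketch of the PCA-based falsification step described above is given below: an ensemble of simulated responses is projected into principal-component space and the observation is checked against the ensemble spread. The data are synthetic and the Mahalanobis-distance cutoff is an assumption; the paper's actual workflow and thresholds may differ.

```python
# Illustrative sketch of PCA-based ensemble falsification: project simulated
# responses of an ensemble into principal-component space and check whether the
# observed data lie within the ensemble spread. Data are synthetic and the
# Mahalanobis-distance threshold is an assumption.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_models, n_obs = 500, 60          # ensemble members x observed data points

# Synthetic ensemble of simulated production responses and one observation vector
ensemble = rng.normal(0, 1, (n_models, n_obs)).cumsum(axis=1)
observed = ensemble.mean(axis=0) + rng.normal(0, 0.5, n_obs)

pca = PCA(n_components=3)
scores = pca.fit_transform(ensemble)            # ensemble in PC space
obs_score = pca.transform(observed[None, :])    # observation in the same space

# Mahalanobis distance of the observation from the ensemble cloud
cov = np.cov(scores, rowvar=False)
diff = obs_score[0] - scores.mean(axis=0)
d2 = diff @ np.linalg.inv(cov) @ diff

threshold = 3.0 ** 2   # illustrative cutoff on the squared distance
print("ensemble falsified" if d2 > threshold else "ensemble not falsified",
      f"(d^2 = {d2:.2f})")
```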


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet verifying them is challenging because of the chaotic nature of the atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time and can also be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
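
The core idea of the proposed verification can be sketched as follows: two ensembles (reference and modified system) are compared, per output time step, with a two-sample hypothesis test. The sketch below uses synthetic data and a Mann-Whitney U test; the ensemble sizes, test choice, and rejection criterion are illustrative assumptions rather than the COSMO setup.

```python
# Minimal sketch of ensemble-based verification: run two ensembles (reference vs.
# modified model/system) and test, per output time step, whether their
# distributions differ. Ensemble sizes, test, and criterion are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
n_members, n_times = 50, 100

# Synthetic ensembles of one output variable over time; in practice the second
# ensemble would come from the changed model or hardware/software system.
reference = rng.normal(0.0, 1.0, (n_members, n_times))
modified = rng.normal(0.0, 1.0, (n_members, n_times))

pvals = np.array([
    mannwhitneyu(reference[:, t], modified[:, t], alternative="two-sided").pvalue
    for t in range(n_times)
])

alpha = 0.05
reject_fraction = np.mean(pvals < alpha)
# Roughly a fraction alpha of tests reject by chance when nothing changed; a much
# larger fraction would flag a statistically significant change in the model.
print(f"fraction of time steps rejecting H0: {reject_fraction:.2f} (alpha={alpha})")
```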


2015 ◽  
Vol 12 (12) ◽  
pp. 13217-13256 ◽  
Author(s):  
G. Formetta ◽  
G. Capparelli ◽  
P. Versace

Abstract. Rainfall-induced shallow landslides cause loss of life and significant damage to private and public property, transportation systems, etc. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and different model performance indicators in order to identify and select the models whose behavior is most reliable for a given case study. The procedure was implemented in a package of models for landslide susceptibility analysis integrated into the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing model results and measured data pixel by pixel. Moreover, the integration of the package into NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and the three models, optimizing the distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.
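
One of the goodness-of-fit indices mentioned above, the distance to perfect classification (D2PC), can be sketched from a pixel-by-pixel comparison of a binary susceptibility map with observations. The maps below are synthetic, and D2PC = sqrt((1 - TPR)^2 + FPR^2) is the standard ROC-plane definition assumed here.

```python
# Illustrative sketch of one goodness-of-fit index mentioned above: the distance
# to perfect classification (D2PC) in the ROC plane, computed from a
# pixel-by-pixel comparison of a binary susceptibility map with observations.
# The maps below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.random((100, 100)) < 0.2                 # observed landslide pixels
simulated = observed ^ (rng.random((100, 100)) < 0.1)   # model map with some errors

tp = np.sum(simulated & observed)
fp = np.sum(simulated & ~observed)
fn = np.sum(~simulated & observed)
tn = np.sum(~simulated & ~observed)

tpr = tp / (tp + fn)     # true positive rate (sensitivity)
fpr = fp / (fp + tn)     # false positive rate
d2pc = np.sqrt((1.0 - tpr) ** 2 + fpr ** 2)
print(f"TPR={tpr:.3f}, FPR={fpr:.3f}, D2PC={d2pc:.3f}")
```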


2011 ◽  
Vol 8 (4) ◽  
pp. 7017-7053 ◽  
Author(s):  
Z. Bao ◽  
J. Liu ◽  
J. Zhang ◽  
G. Fu ◽  
G. Wang ◽  
...  

Abstract. Equifinality is unavoidable when transferring model parameters from gauged catchments to ungauged catchments for predictions in ungauged basins (PUB). A framework for estimating the three baseflow parameters of the variable infiltration capacity (VIC) model directly from soil and topography properties is presented. When the new parameter-setting methodology is used, the number of parameters needing to be calibrated is reduced from six to three, which leads to a decrease in equifinality and uncertainty. This is validated by Monte Carlo simulations in 24 hydro-climatic catchments in China. With the new parameter estimation approach, model parameters become more sensitive and the extent of the parameter space becomes smaller when a threshold of goodness of fit is given, meaning that parameter uncertainty is reduced with the new parameter-setting methodology. In addition, the uncertainty of the model simulations is estimated with the generalised likelihood uncertainty estimation (GLUE) methodology. The results indicate that the uncertainty of the streamflow simulations, i.e., the confidence interval, is lower with the new parameter estimation methodology than with the original calibration methodology. The new baseflow parameter estimation framework could be applied in the VIC model and other appropriate models for PUB.
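
The GLUE step referenced above can be sketched with a toy one-parameter model standing in for VIC: parameters are sampled, each run is scored with a likelihood measure (Nash-Sutcliffe efficiency here), behavioral runs are retained, and likelihood-weighted bounds are formed. The model, threshold, and sample sizes below are illustrative assumptions.

```python
# Minimal sketch of the GLUE procedure referenced above, using a toy one-parameter
# "model" in place of VIC: sample parameters, score each run with a likelihood
# measure (Nash-Sutcliffe efficiency), keep behavioral runs above a threshold, and
# form likelihood-weighted uncertainty bounds. All values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100)
obs = np.exp(-0.05 * t) + rng.normal(0, 0.02, t.size)   # synthetic "observed" flow

def toy_model(k):
    return np.exp(-k * t)                               # stand-in for the hydrologic model

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform(0.01, 0.10, 2000)                 # Monte Carlo parameter samples
likelihoods = np.array([nse(toy_model(k), obs) for k in samples])

behavioral = likelihoods > 0.7                          # behavioral threshold (assumed)
sims = np.array([toy_model(k) for k in samples[behavioral]])
weights = likelihoods[behavioral]
weights = weights / weights.sum()

# Likelihood-weighted 5-95 % bounds at each time step
order = np.argsort(sims, axis=0)
sorted_sims = np.take_along_axis(sims, order, axis=0)
cum_w = np.cumsum(weights[order], axis=0)
lower = sorted_sims[np.argmax(cum_w >= 0.05, axis=0), np.arange(t.size)]
upper = sorted_sims[np.argmax(cum_w >= 0.95, axis=0), np.arange(t.size)]
print(f"behavioral runs: {behavioral.sum()}, "
      f"mean 90% band width: {(upper - lower).mean():.3f}")
```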


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
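
The fuzzification idea can be illustrated with a simple sketch: instead of strict pixel-pair agreement, a simulated cell is credited if a matching cell lies within a given length scale. The sketch below uses a maximum filter over synthetic binary footprints; the authors' actual membership functions and performance metrics may differ.

```python
# Illustrative sketch of the fuzzy-set idea described above: cells are credited if
# a similarly valued cell exists within a length scale, rather than requiring a
# strict pixel-pair match. A maximum filter stands in for the fuzzification.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(5)
observed = rng.random((200, 200)) < 0.05          # observed flow footprint (binary)
simulated = np.roll(observed, shift=3, axis=1)    # simulation offset by a few cells

def hit_rate(sim, obs, length_scale):
    """Fraction of observed cells matched by a simulated cell within length_scale."""
    size = 2 * length_scale + 1
    sim_fuzzy = maximum_filter(sim.astype(np.uint8), size=size).astype(bool)
    return np.sum(sim_fuzzy & obs) / np.sum(obs)

# Targeted metrics improve as the length scale grows, mirroring the trade-off
# against resolution discussed in the abstract.
for ls in [0, 1, 3, 5]:
    print(f"length scale {ls}: hit rate = {hit_rate(simulated, observed, ls):.3f}")
```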


2016 ◽  
Vol 9 (3) ◽  
pp. 118-137
Author(s):  
L.S. Kuravsky ◽  
P.A. Marmalyuk ◽  
G.A. Yuryev ◽  
O.B. Belyaeva ◽  
O.Yu. Prokopieva

This paper describes a new concept of flight crew assessment based on flight simulator training results. It is based on representing pilot gaze movement with continuous-time Markov processes with discrete states. Both the procedure for identifying model parameters, supported by goodness-of-fit tests, and the classifier-building technique are considered; the latter makes it possible to estimate the degree of correspondence between the observed gaze motion distribution under study and reference distributions identified for different diagnosed groups. The final assessment criterion is formed on the basis of integrated diagnostic parameters, which are determined by the parameters of the identified models. The article provides a description of the experiment, illustrations, and results of studies aimed at assessing the reliability of the developed models and criteria, as well as conclusions about the applicability of the approach and its advantages and disadvantages.
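
A minimal sketch of the underlying model is given below: the generator (transition-rate matrix) of a continuous-time Markov chain is estimated from gaze dwell times and transition counts, and a trajectory is scored by its log-likelihood; classification would compare such scores across generators identified for the diagnosed groups. The gaze data are synthetic and the three states stand for arbitrary areas of interest.

```python
# Minimal sketch of the modeling idea described above: estimate the generator of a
# continuous-time Markov chain from gaze dwell times and transition counts, and
# score a trajectory by its log-likelihood. The data here are synthetic.
import numpy as np

def estimate_generator(states, dwell_times, n_states):
    """MLE of a CTMC generator from visited states (no self-transitions) and dwell times."""
    time_in = np.zeros(n_states)
    counts = np.zeros((n_states, n_states))
    for i in range(len(states) - 1):
        time_in[states[i]] += dwell_times[i]
        counts[states[i], states[i + 1]] += 1
    time_in[states[-1]] += dwell_times[-1]
    Q = counts / np.maximum(time_in[:, None], 1e-12)   # off-diagonal transition rates
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))                # rows of a generator sum to zero
    return Q

def log_likelihood(states, dwell_times, Q):
    """Log-likelihood of a gaze trajectory under generator Q."""
    ll = 0.0
    for i in range(len(states) - 1):
        s, nxt = states[i], states[i + 1]
        ll += Q[s, s] * dwell_times[i] + np.log(Q[s, nxt])
    return ll

# Synthetic trajectory over three areas of interest: successive states always
# differ (a CTMC has no self-transitions) and dwell times are exponential.
rng = np.random.default_rng(11)
states = [0]
for _ in range(199):
    states.append(rng.choice([s for s in range(3) if s != states[-1]]))
states = np.array(states)
dwell_times = rng.exponential(0.5, size=200)

Q_hat = estimate_generator(states, dwell_times, n_states=3)
print("estimated generator:\n", np.round(Q_hat, 2))
print("log-likelihood under the fitted model:",
      round(log_likelihood(states, dwell_times, Q_hat), 1))
```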


