Cardiovascular Modeling With Adapted Parametric Inference

2018 ◽  
Vol 62 ◽  
pp. 91-107 ◽  
Author(s):  
Didier Lucor ◽  
Olivier P. Le Maître

Computational modeling of the cardiovascular system, promoted by the advance of fluid-structure interaction numerical methods, has made great progress towards the development of patient-specific numerical aids to diagnosis, risk prediction, intervention and clinical treatment. Nevertheless, the reliability of these models is inevitably impacted by rough modeling assumptions. A strong integration of patient-specific data into numerical modeling is therefore needed in order to improve the accuracy of the predictions through the calibration of important physiological parameters. The Bayesian statistical framework to inverse problems is a powerful approach that relies on posterior sampling techniques, such as Markov chain Monte Carlo algorithms. The generation of samples requires many evaluations of the cardiovascular parameter-to-observable model. In practice, the use of a full cardiovascular numerical model is prohibitively expensive and a computational strategy based on approximations of the system response, or surrogate models, is needed to perform the data assimilation. As the support of the parameter distribution typically concentrates on a small fraction of the initial prior distribution, a worthwhile improvement consists in gradually adapting the surrogate model to minimize the approximation error for parameter values corresponding to high posterior density. We introduce a novel numerical pathway to construct a series of polynomial surrogate models, by regression, using samples drawn from a sequence of distributions likely to converge to the posterior distribution. The approach yields substantial gains in efficiency and accuracy over direct prior-based surrogate models, as demonstrated via application to pulse wave velocity identification in a human lower limb arterial network.
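The core idea of adapting a regression-based polynomial surrogate to the high-posterior-density region can be sketched in one dimension. This is a minimal illustration, not the authors' method: the exponential "forward model" and the fixed refit window stand in for an expensive cardiovascular solver and the sequence of intermediate distributions.

```python
import numpy as np

# Hypothetical expensive forward model (stand-in for a cardiovascular solver).
def forward(theta):
    return np.exp(theta)

theta_true = 1.0
degree = 3

# Stage 0: prior-based surrogate, fit by least-squares regression over the
# full prior support [-3, 3].
prior_grid = np.linspace(-3.0, 3.0, 40)
prior_fit = np.polyfit(prior_grid, forward(prior_grid), degree)

# Stage 1: adapted surrogate of the same degree, refit on samples
# concentrated where the posterior mass sits (here a window around an
# assumed posterior mode near theta_true).
post_grid = np.linspace(theta_true - 0.25, theta_true + 0.25, 40)
adapted_fit = np.polyfit(post_grid, forward(post_grid), degree)

# Compare approximation error in the high-posterior-density region.
test_window = np.linspace(0.8, 1.2, 11)
err_prior = np.mean(np.abs(np.polyval(prior_fit, test_window) - forward(test_window)))
err_adapted = np.mean(np.abs(np.polyval(adapted_fit, test_window) - forward(test_window)))
```

For the same polynomial degree, concentrating the regression samples where the posterior lives reduces the local approximation error by orders of magnitude, which is the gain the abstract reports.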

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Michelle Przedborski ◽  
Munisha Smalley ◽  
Saravanan Thiyagarajan ◽  
Aaron Goldman ◽  
Mohammad Kohandel

Abstract Anti-PD-1 immunotherapy has recently shown tremendous success for the treatment of several aggressive cancers. However, variability and unpredictability in treatment outcome have been observed, and are thought to be driven by patient-specific biology and interactions of the patient’s immune system with the tumor. Here we develop an integrative systems biology and machine learning approach, built around clinical data, to predict patient response to anti-PD-1 immunotherapy and to improve the response rate. Using this approach, we determine biomarkers of patient response and identify potential mechanisms of drug resistance. We develop systems biology informed neural networks (SBINN) to calculate patient-specific kinetic parameter values and to predict clinical outcome. We show how transfer learning can be leveraged with simulated clinical data to significantly improve the response prediction accuracy of the SBINN. Further, we identify novel drug combinations and optimize the treatment protocol for triple combination therapy consisting of IL-6 inhibition, recombinant IL-12, and anti-PD-1 immunotherapy in order to maximize patient response. We also find unexpected differences in protein expression levels between response phenotypes which complement recent clinical findings. Our approach has the potential to aid in the development of targeted experiments for patient drug screening as well as identify novel therapeutic targets.
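The transfer-learning step, pretraining on abundant simulated data before fine-tuning on scarce clinical data, can be sketched with a linear model in place of the SBINN. Everything here is illustrative: the parameter vectors, sample sizes, and gradient-descent settings are invented for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gd(X, y, w0, lr=0.1, steps=50):
    """Plain gradient descent on mean squared error."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical "effective parameters" of an approximate simulator versus
# the true patient-level parameters it imperfectly mimics.
w_sim = np.array([1.8, -0.9])
w_true = np.array([2.0, -1.0])

# Abundant simulated data; scarce real clinical data.
X_sim = rng.normal(size=(500, 2))
y_sim = X_sim @ w_sim
X_clin = rng.normal(size=(15, 2))
y_clin = X_clin @ w_true

# Transfer learning: pretrain on simulations, fine-tune briefly on clinic data.
w_pre = gd(X_sim, y_sim, np.zeros(2), steps=200)
w_ft = gd(X_clin, y_clin, w_pre, steps=10)

# Baseline: train from scratch on the scarce clinical data alone.
w_scratch = gd(X_clin, y_clin, np.zeros(2), steps=10)

err_ft = np.linalg.norm(w_ft - w_true)
err_scratch = np.linalg.norm(w_scratch - w_true)
```

Because the pretrained weights start close to the true parameters, the same small fine-tuning budget lands much nearer the truth than training from scratch, the mechanism by which simulated data improves the SBINN's prediction accuracy.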


Author(s):  
K. Prabith ◽  
I. R. Praveen Krishna

Abstract The main objective of this paper is to use the time variational method (TVM) for the nonlinear response analysis of mechanical systems subjected to multiple-frequency excitations. The system response, which is composed of fractional multiples of frequencies, is expressed in terms of a fundamental frequency that is the greatest common divisor of the approximated frequency components. Unlike the multiharmonic balance method (MHBM), the formulation of the proposed method is very simple when analyzing systems with more than two excitation frequencies. In addition, the proposed method avoids the alternate transformation between frequency and time domains during the calculation of the nonlinear force and the Jacobian matrix. In this work, the performance of the proposed method is compared with that of numerical integration and the MHBM using three nonlinear mechanical models undergoing multiple-frequency excitations. It is observed that the proposed method produces approximate results during the quasi-periodic response analysis, since the formulation approximates the incommensurate frequencies by commensurate ones. However, the approximation error is very small and the method significantly reduces computational effort compared to the other methods. In addition, the TVM is a recommended option when the number of state variables involved in the nonlinear function is high, as it calculates the nonlinear force vector and the Jacobian matrix directly from the displacement vector. Moreover, the proposed method is far faster than numerical integration in capturing the steady-state, quasi-periodic responses of nonlinear mechanical systems.
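The first step the abstract describes, expressing the response in terms of a fundamental frequency that is the greatest common divisor of the (rationally approximated) excitation frequencies, can be sketched directly. The helper below is an illustration of that idea, not the paper's implementation; the rational-approximation tolerance `max_den` is an assumed knob.

```python
from fractions import Fraction
from math import gcd

def fundamental_frequency(freqs, max_den=100):
    """Approximate possibly incommensurate excitation frequencies by
    rationals, then return their greatest common divisor: the fundamental
    frequency over which the combined response is (approximately) periodic."""
    fracs = [Fraction(f).limit_denominator(max_den) for f in freqs]
    # Put all fractions over a common denominator (their LCM)...
    den = 1
    for fr in fracs:
        den = den * fr.denominator // gcd(den, fr.denominator)
    # ...then take the integer GCD of the scaled numerators.
    nums = [fr.numerator * (den // fr.denominator) for fr in fracs]
    g = 0
    for n in nums:
        g = gcd(g, n)
    return Fraction(g, den)
```

For example, excitations at 2.0 and 3.5 rad/s share the fundamental 0.5 rad/s, so one period of length 2π/0.5 captures the full combined response; truly incommensurate frequencies are first snapped to nearby rationals, which is exactly the source of the small approximation error the abstract mentions.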


Author(s):  
Alessandro Satriano ◽  
Edward J. Vigmond ◽  
Elena S. Di Martino

When complex biological structures are modeled, one of the most critical issues is the assignment of geometrical, mechanical and electrical properties to the meshed surfaces. Properties of interest are commonly obtained from diagnostic imaging, experimental tests or anatomical observation. These parameters are usually lumped into individual values assigned to a specific region after subdividing the structure into sub-regions. This practice simplifies the problem by avoiding the cumbersome assignment of parameter values to each element. However, sub-regions may not adequately represent the smooth transition between regions, resulting in artificial discontinuities. In addition, some parameters, such as the organization of cardiomyocytes, which is the objective of our research, may only be obtainable through destructive tests or through sophisticated methods that can be performed on a limited number of samples. Alternatively, data obtained for one animal species could be applied to a different species. Furthermore, in a clinical environment, the need for fast turnaround of patient-specific models would benefit from the assignment of tissue properties in a semi-automatic manner.
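The discontinuity problem the passage describes can be made concrete on a toy 1-D mesh. This sketch is not the authors' method; the mesh, the two regional values, and the blending window are all invented to show how a smooth assignment removes the artificial jump of a lumped, per-region assignment.

```python
import numpy as np

# 1-D chain of element centroids spanning two tissue regions (toy mesh).
centroids = np.linspace(0.0, 1.0, 21)
region_values = {"A": 1.0, "B": 3.0}   # lumped parameter value per region

# Lumped assignment: hard switch at the region boundary (x = 0.5).
lumped = np.where(centroids < 0.5, region_values["A"], region_values["B"])

# Smooth assignment: blend the two regional values over a transition zone
# around the boundary, avoiding an artificial discontinuity.
def blend(x, boundary=0.5, width=0.2):
    t = np.clip((x - (boundary - width / 2)) / width, 0.0, 1.0)
    return (1 - t) * region_values["A"] + t * region_values["B"]

smooth = blend(centroids)

# Largest jump between adjacent elements under each assignment.
jump_lumped = np.max(np.abs(np.diff(lumped)))
jump_smooth = np.max(np.abs(np.diff(smooth)))
```

The lumped map jumps by the full inter-region difference at the boundary, while the blended map spreads that difference across several elements, the kind of smooth transition the passage argues sub-region assignment fails to capture.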


1988 ◽  
Vol 110 (2) ◽  
pp. 126-133 ◽  
Author(s):  
Kenneth C. Q. Tsai ◽  
David M. Auslander

A statistical methodology is presented for designing controllers in problems where analytical solutions are unobtainable. The methodology is applicable to many complicated systems containing, for example, nonlinearities, uncertainty, and multiple inputs and outputs. Because the design technique is a simulation-based approach, no specific restrictions are placed on either the plant or the controller structure. A Monte Carlo technique is used to map the parameter space onto the indices of performance; the system either passes or fails each performance index. The objective in systems with uncertain parameters is to select (controller) parameter values which maximize the probability of passing the performance criterion. In deterministic systems, the goal is to find parameter values in the pass region that are as insensitive as possible, that is, parameter values that allow for the maximum amount of parameter variation without causing the system response to leave the pass region.
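The Monte Carlo pass/fail mapping can be sketched with a scalar discrete-time plant. The plant, the uniform uncertainty on its pole, the candidate gains, and the settling criterion are all invented for illustration; the point is only the workflow: sample the uncertain parameter, run the simulation, record pass/fail, and pick the gain that maximizes the pass probability.

```python
import numpy as np

rng = np.random.default_rng(42)

def passes(K, a, n_steps=20, tol=1e-3):
    """Simulate the closed loop x_{k+1} = (a - K) x_k from x0 = 1 and
    apply the pass/fail performance index |x_N| < tol."""
    x = 1.0
    for _ in range(n_steps):
        x = (a - K) * x
    return abs(x) < tol

# Uncertain plant pole a ~ U(0.5, 1.5); candidate controller gains K.
a_samples = rng.uniform(0.5, 1.5, size=500)
gains = [0.0, 0.5, 1.0, 1.5, 2.0]

# Monte Carlo estimate of the probability of passing, for each gain.
pass_prob = {K: np.mean([passes(K, a) for a in a_samples]) for K in gains}
best_K = max(pass_prob, key=pass_prob.get)
```

Here K = 1.0 centers the closed-loop pole on the uncertainty interval, so every sampled plant settles and its estimated pass probability is 1; gains further from the center leave part of the uncertainty range unstabilized, which is exactly the trade-off the pass-region picture describes.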


2019 ◽  
Author(s):  
Angelika Stefan ◽  
Nathan J. Evans ◽  
Eric-Jan Wagenmakers

The Bayesian statistical framework requires the specification of prior distributions, which reflect pre-data knowledge about the relative plausibility of different parameter values. As prior distributions influence the results of Bayesian analyses, it is important to specify them with care. Prior elicitation has frequently been proposed as a principled method for deriving prior distributions based on expert knowledge. Although prior elicitation provides a theoretically satisfactory method of specifying prior distributions, there are several implicit decisions that researchers need to make at different stages of the elicitation process, each of them constituting important researcher degrees of freedom. Here, we discuss some of these decisions and group them into three categories: decisions about (1) the setup of the prior elicitation; (2) the core elicitation process; and (3) the combination of elicited prior distributions from different experts. Importantly, different decision paths could result in greatly varying priors elicited from the same experts. Hence, researchers who wish to perform prior elicitation are advised to carefully consider each of the practical decisions before, during, and after the elicitation process. By explicitly outlining the consequences of these practical decisions, we hope to raise awareness of methodological flexibility in prior elicitation and provide researchers with a more structured approach to navigating the decision paths in prior elicitation. Making the decisions explicit also provides the foundation for further research that can identify evidence-based best practices that may eventually reduce the methodological flexibility in prior elicitation.
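Step (3), combining elicited priors from different experts, is one place where a concrete choice must be made. A common option (one of several, and not necessarily the one the authors favor) is a linear opinion pool: a weighted mixture of the experts' densities. The expert means, standard deviations, and weights below are hypothetical.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Elicited normal priors from three experts (hypothetical means and sds).
experts = [(0.0, 1.0), (0.5, 0.8), (-0.2, 1.5)]
weights = [0.5, 0.3, 0.2]   # one of several defensible weighting choices

def pooled_prior(x):
    """Linear opinion pool: a weighted mixture of the experts' densities."""
    return sum(w * normal_pdf(x, m, s) for w, (m, s) in zip(weights, experts))

# Sanity check: the pooled density still integrates to one.
grid = np.linspace(-8.0, 8.0, 4001)
mass = np.sum(pooled_prior(grid)) * (grid[1] - grid[0])
```

The weights themselves are a researcher degree of freedom of exactly the kind the abstract warns about: equal weights, performance-based weights, or a logarithmic (geometric) pool would each yield a different combined prior from the same elicited distributions.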


2021 ◽  
Author(s):  
Donghui Xu ◽  
Gautam Bisht ◽  
Khachik Sargsyan ◽  
Chang Liao ◽  
L. Ruby Leung

Abstract. Runoff is a critical component of the terrestrial water cycle and Earth System Models (ESMs) are essential tools to study its spatio-temporal variability. Runoff schemes in ESMs typically include many parameters, so model calibration is necessary to improve the accuracy of simulated runoff. However, runoff calibration at global scale is challenging because of the high computational cost and the lack of reliable observational datasets. In this study, we calibrated 11 runoff-relevant parameters in the Energy Exascale Earth System Model (E3SM) Land Model (ELM) using an uncertainty quantification framework. First, the Polynomial Chaos Expansion machinery with Bayesian Compressed Sensing is used to construct computationally inexpensive surrogate models for ELM-simulated runoff at 0.5° × 0.5° for 1991–2010. The main methodological advance in this work is the construction of surrogates for the error metric between ELM and the benchmark data, facilitating efficient calibration and avoiding the more conventional, but challenging, construction of high-dimensional surrogates for ELM itself. Second, Sobol index sensitivity analysis is performed using the surrogate models to identify the most sensitive parameters, and our results show that in most regions ELM-simulated runoff is strongly sensitive to 3 of the 11 uncertain parameters. Third, a Bayesian method is used to infer the optimal values of the most sensitive parameters using an observation-based global runoff dataset as the benchmark. Our results show that model performance is significantly improved with the inferred parameter values. Although the parametric uncertainty of simulated runoff is reduced after the parameter inference, it remains comparable to the multi-model ensemble uncertainty represented by the global hydrological models in ISIMIP2a. Additionally, the annual global runoff trend during the simulation period is not well constrained by the inferred parameter values, suggesting the importance of including parametric uncertainty in future runoff projections.
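The surrogate-based Sobol sensitivity step can be sketched with a pick-freeze Monte Carlo estimator on a toy surrogate. The linear "surrogate" and its coefficients below are invented stand-ins for the PCE surrogates of ELM runoff; the estimator is a standard Saltelli-style first-order formula, not necessarily the exact one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cheap surrogate standing in for an ELM runoff response surface
# (illustrative coefficients; analytic first-order indices are then
# S_i = a_i^2 / sum_j a_j^2 for independent uniform inputs).
coeffs = np.array([2.0, 1.0, 0.1])

def surrogate(x):
    return x @ coeffs

n = 20000
A = rng.uniform(size=(n, 3))
B = rng.uniform(size=(n, 3))
fA = surrogate(A)
fB = surrogate(B)
var = fA.var()

first_order = []
for i in range(3):
    AB_i = B.copy()
    AB_i[:, i] = A[:, i]               # "pick-freeze": keep x_i from A
    f_ABi = surrogate(AB_i)
    S_i = np.mean(fA * (f_ABi - fB)) / var   # Saltelli-style estimator
    first_order.append(S_i)

analytic = coeffs**2 / np.sum(coeffs**2)
```

Because the surrogate is cheap, the tens of thousands of evaluations the estimator needs are affordable, which is precisely why the sensitivity analysis is run on the surrogates rather than on ELM itself; the screening then keeps only the few parameters with large indices for the Bayesian inference step.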


2019 ◽  
Vol 492 (1) ◽  
pp. 394-404 ◽  
Author(s):  
M A Price ◽  
X Cai ◽  
J D McEwen ◽  
M Pereyra ◽  
T D Kitching ◽  
...  

ABSTRACT Until recently, mass-mapping techniques for weak gravitational lensing convergence reconstruction have lacked a principled statistical framework upon which to quantify reconstruction uncertainties, without making strong assumptions of Gaussianity. In previous work, we presented a sparse hierarchical Bayesian formalism for convergence reconstruction that addresses this shortcoming. Here, we draw on the concept of local credible intervals (cf. Bayesian error bars) as an extension of the uncertainty quantification techniques previously detailed. These uncertainty quantification techniques are benchmarked against those recovered via Px-MALA – a state-of-the-art proximal Markov chain Monte Carlo (MCMC) algorithm. We find that, typically, our recovered uncertainties are everywhere conservative (never underestimate the uncertainty, yet the approximation error is bounded above), of similar magnitude and highly correlated with those recovered via Px-MALA. Moreover, we demonstrate an increase in computational efficiency of $\mathcal {O}(10^6)$ when using our sparse Bayesian approach over MCMC techniques. This computational saving is critical for the application of Bayesian uncertainty quantification to large-scale stage IV surveys such as LSST and Euclid.
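The local credible intervals used as the benchmark comparison can be sketched from MCMC output: at each pixel, take the central quantile range of the posterior samples. The four-"pixel" map and the known normal posteriors below are invented so the recovered intervals can be checked against the exact answer; they stand in for Px-MALA samples of a convergence map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples of a 4-"pixel" convergence map (stand-in for MCMC
# output; drawn here from unit normals around known means so the
# resulting intervals are verifiable).
means = np.array([0.0, 1.0, -0.5, 2.0])
samples = means + rng.normal(size=(100000, 4))

def local_credible_intervals(samples, level=0.90):
    """Per-pixel credible intervals from posterior samples: the central
    `level` quantile range, computed independently at each pixel."""
    alpha = (1.0 - level) / 2.0
    lo = np.quantile(samples, alpha, axis=0)
    hi = np.quantile(samples, 1.0 - alpha, axis=0)
    return lo, hi

lo, hi = local_credible_intervals(samples)
```

For a unit-normal posterior the 90% interval should span roughly ±1.645 around the pixel mean; the benchmarking in the paper compares intervals like these, obtained at great MCMC cost, against the much cheaper sparse hierarchical Bayesian approximation.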


2008 ◽  
Vol 75 (1) ◽  
Author(s):  
Bryan Eisenhower ◽  
Gregory Hagen ◽  
Andrzej Banaszuk ◽  
Igor Mezić

In this paper we investigate oscillations of a dynamical system containing passive dynamics driven by a positive feedback and how spatial characteristics (i.e., symmetry) affect the amplitude and stability of its nominal limit cycling response. The physical motivation of this problem is thermoacoustic dynamics in a gas turbine combustor. The spatial domain is periodic (passive annular acoustics) and is driven by heat released from a combustion process; with sufficient driving through this nonlinear feedback, a limit cycle is produced, exhibited as a traveling acoustic wave around the annulus. We show that this response can be controlled passively by spatial perturbation in the symmetry of acoustic parameters. We find the critical parameter values that affect this oscillation, study the bifurcation properties, and subsequently use harmonic balance and temporal averaging to characterize periodic solutions and their stability. In all of these cases, we carry a parameter associated with the spatial symmetry of the acoustics and investigate how this symmetry affects the system response. The contribution of this paper is a unique analysis of a particular physical phenomenon, as well as an illustration of the equivalence of different nonlinear analysis tools for this analysis.
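The use of temporal averaging to predict limit-cycle amplitude can be illustrated on the simplest self-excited oscillator. The Van der Pol equation below is a generic stand-in for a thermoacoustic mode with nonlinear driving, not the combustor model of the paper: first-order averaging predicts a stable limit cycle of amplitude 2 for small coupling, and the sketch checks that prediction by direct integration.

```python
import numpy as np

# Van der Pol oscillator: x'' - eps*(1 - x^2)*x' + x = 0.
# First-order temporal averaging predicts a stable limit cycle with
# amplitude 2 (independent of eps, for small eps).
eps = 0.1

def rhs(state):
    x, v = state
    return np.array([v, eps * (1.0 - x**2) * v - x])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
state = np.array([0.5, 0.0])     # small initial disturbance grows to the cycle
history = []
for step in range(30000):        # integrate to t = 300
    state = rk4_step(state, dt)
    if step >= 29000:            # record the last 10 time units (> one period)
        history.append(state[0])

amplitude = max(abs(x) for x in history)
```

The numerically observed amplitude matches the averaging prediction closely, the same kind of agreement between averaging, harmonic balance, and simulation that the paper uses to argue the equivalence of the tools.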

