Improving Model Parameters in Vibrating Systems Using Neumann Series

2018 ◽  
Vol 141 (1) ◽  
Author(s):  
Alyssa T. Liem ◽  
J. Gregory McDaniel ◽  
Andrew S. Wixom

A method is presented to improve the estimates of material properties, dimensions, and other model parameters for linear vibrating systems. The method improves the estimates of a single model parameter of interest by finding parameter values that bring model predictions into agreement with experimental measurements. A truncated Neumann series is used to approximate the inverse of the dynamic stiffness matrix. This approximation avoids the need to directly solve the equations of motion for each parameter variation. The Neumann series is shown to be equivalent to a Taylor series expansion about nominal parameter values. A recursive scheme is presented for computing the associated derivatives, which are interpreted as sensitivities of displacements to parameter variations. The convergence of the Neumann series is studied in the context of vibrating systems, and it is found that the spectral radius is strongly dependent on system resonances. A homogeneous viscoelastic bar in longitudinal vibration is chosen as a test specimen, and the complex-valued Young's modulus is chosen as an uncertain parameter. The method is demonstrated on simulated experimental measurements computed from the model. These demonstrations show that parameter values estimated by the method agree with those used to simulate the experiment when enough terms are included in the Neumann series. Similar results are obtained for the case of an elastic plate with clamped boundary conditions. The method is also demonstrated on experimental data, where it produces improved parameter estimates that bring the model predictions into agreement with the measured response to within 1% at a point on the bar across a frequency range that includes three resonance frequencies.
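The approximation at the heart of the method can be sketched in a few lines. In this minimal example (a hypothetical 2-DOF system with an assumed stiffness perturbation, not the paper's viscoelastic bar), the inverse of the perturbed dynamic stiffness matrix D0 + ΔD is expanded as a truncated Neumann series, which converges when the spectral radius of D0⁻¹ΔD is below 1:

```python
import numpy as np

def neumann_inverse_apply(D0, dD, f, n_terms):
    """Approximate x = (D0 + dD)^{-1} f with a truncated Neumann series:
    x ≈ sum_{k=0}^{n-1} (-D0^{-1} dD)^k D0^{-1} f."""
    D0_inv = np.linalg.inv(D0)   # factor the nominal matrix once; reuse for every variation
    x = D0_inv @ f               # k = 0 term
    term = x.copy()
    for _ in range(1, n_terms):
        term = -D0_inv @ (dD @ term)   # next series term, computed recursively
        x += term
    return x

# Toy 2-DOF system: nominal dynamic stiffness D0 and a small stiffness perturbation dD
D0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
dD = 0.1 * np.eye(2)
f = np.array([1.0, 0.0])

x_exact = np.linalg.solve(D0 + dD, f)          # direct solve, for comparison
x_approx = neumann_inverse_apply(D0, dD, f, n_terms=8)
```

The payoff is that only the nominal matrix is ever factored; each parameter variation costs a few matrix-vector products instead of a fresh solve.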

Author(s):  
I. A. Kuznetsov ◽  
A. V. Kuznetsov

In this paper, we first develop a model of axonal transport of tubulin-associated unit (tau) protein. We determine the minimum number of parameters necessary to reproduce published experimental results, reducing the number of parameters from 18 in the full model to eight in the simplified model. We then address the following questions: Is it possible to estimate parameter values for this model using the very limited amount of published experimental data? Furthermore, is it possible to estimate confidence intervals for the determined parameters? The idea explored in this paper is based on bootstrapping. Model parameters were estimated by minimizing an objective function that quantifies the discrepancy between the model predictions and experimental data. Residuals were then identified by calculating the differences between the experimental data and model predictions. New, surrogate ‘experimental’ data were generated by randomly resampling residuals. By finding sets of best-fit parameters for a large number of surrogate datasets, histograms for the model parameters were produced. These histograms were then used to estimate confidence intervals for the model parameters using the percentile bootstrap. Once the model was calibrated, we applied it to analysing some features of tau transport that are not accessible to current experimental techniques.
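The residual-resampling scheme described above can be illustrated on a deliberately simple stand-in model; the straight-line fit, noise level and sample sizes below are assumptions for illustration, not the tau-transport model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: fit y = a*x + b, then bootstrap the residuals to get
# percentile confidence intervals for (a, b).
x = np.linspace(0.0, 10.0, 50)
y_obs = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

def fit(x, y):
    return np.polyfit(x, y, 1)               # least-squares estimate of (a, b)

a_hat, b_hat = fit(x, y_obs)
residuals = y_obs - (a_hat * x + b_hat)      # data minus best-fit predictions

boot = []
for _ in range(1000):
    # surrogate 'experimental' data: best-fit predictions plus resampled residuals
    y_surr = a_hat * x + b_hat + rng.choice(residuals, size=residuals.size, replace=True)
    boot.append(fit(x, y_surr))
boot = np.array(boot)

# percentile-bootstrap 95% confidence intervals from the histograms of estimates
ci_a = np.percentile(boot[:, 0], [2.5, 97.5])
ci_b = np.percentile(boot[:, 1], [2.5, 97.5])
```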


2018 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

It has become a key goal of model-based neuroscience to estimate trial-by-trial fluctuations of cognitive model parameters in order to link these fluctuations to brain signals. However, previously developed methods were limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between the parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method allows distinguishing true variability in a parameter from noise and from variability in other parameters. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.
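The LOTO idea can be sketched with a toy model whose parameter estimate has a closed form (the mean of a Gaussian), so the full-minus-leave-one-out difference can be checked analytically; real applications refit the cognitive model for each omitted trial:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "model": a Gaussian whose single parameter is its mean, so the
# maximum-likelihood estimate is just the sample mean.
trials = rng.normal(5.0, 1.0, 200)

theta_full = trials.mean()                    # estimate from the complete dataset

loto_scores = []
for i in range(trials.size):
    theta_loo = np.delete(trials, i).mean()   # estimate with trial i left out
    # the full-vs-leave-one-out difference carries the trial-wise information
    loto_scores.append(theta_full - theta_loo)
loto_scores = np.array(loto_scores)

# For the sample mean, theta_full - theta_loo = (x_i - theta_full) / (n - 1),
# so the LOTO score is an exactly linear function of the omitted trial's value.
corr = np.corrcoef(loto_scores, trials)[0, 1]
```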


Author(s):  
Ranik Raaen Wahlstrøm ◽  
Florentina Paraschiv ◽  
Michael Schürle

We shed light on computational challenges when fitting the Nelson-Siegel, Bliss and Svensson parsimonious yield curve models to observed US Treasury securities with maturities up to 30 years. As the model parameters have a specific financial meaning, the stability of their estimated values over time becomes relevant when their dynamic behavior is interpreted in risk-return models. Our study is the first in the literature to compare the stability of estimated model parameters among different parsimonious models and across different approaches for predefining initial parameter values. We find that the Nelson-Siegel parameter estimates are more stable and conserve their intrinsic economic interpretation. The results additionally reveal patterns of confounding effects in the Svensson model. To obtain the most stable and intuitive parameter estimates over time, we recommend using the Nelson-Siegel model with initial parameter values derived from the observed yields. The implications of excluding Treasury bills, constraining parameters and reducing clusters across time to maturity are also investigated.
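As an illustration of the recommended procedure, the sketch below fits the Nelson-Siegel curve to a synthetic yield curve, with initial values taken from the observed yields (level ≈ long-end yield, slope ≈ short-end minus long-end); the maturities and parameter values are assumed, not the study's Treasury data:

```python
import numpy as np
from scipy.optimize import least_squares

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau: level, slope and curvature components."""
    x = tau / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Synthetic "observed" curve; real inputs would be Treasury yields up to 30 years
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30], dtype=float)
true_params = (0.04, -0.02, 0.01, 1.5)            # assumed illustrative values
yields = nelson_siegel(maturities, *true_params)

# Initial values derived from the observed yields, as the study recommends:
# level ~ long-end yield, slope ~ short-end minus long-end
x0 = [yields[-1], yields[0] - yields[-1], 0.0, 2.0]
fit = least_squares(lambda p: nelson_siegel(maturities, *p) - yields, x0)
```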


2018 ◽  
Author(s):  
Benjamin Rosenbaum ◽  
Michael Raatz ◽  
Guntram Weithoff ◽  
Gregor F. Fussmann ◽  
Ursula Gaedke

Empirical time series of interacting entities, e.g. species abundances, are highly useful for studying ecological mechanisms. Mathematical models are valuable tools to further elucidate those mechanisms and the underlying processes. However, obtaining agreement between model predictions and experimental observations remains a demanding task. Because models always abstract from reality, one parameter often summarizes several properties. Parameter measurements are performed in additional experiments, independent of the ones delivering the time series. Transferring these parameter values to different settings may result in incorrect parametrizations. On top of that, the properties of organisms, and thus the respective parameter values, may vary considerably. These issues limit the use of a priori model parametrizations.

In this study, we present a method suited for direct estimation of model parameters and their variability from experimental time series data. We combine numerical simulations of a continuous-time dynamical population model with Bayesian inference, using a hierarchical framework that allows for variability of individual parameters. The method is applied to a comprehensive set of time series from a laboratory predator-prey system that features both steady states and cyclic population dynamics.

Our model predictions are able to reproduce both the steady states and the cyclic dynamics of the data. In addition to direct estimates of the parameter values, the Bayesian approach also provides their uncertainties. We found that fitting cyclic population dynamics, which contain more information on the process rates than steady states, yields more precise parameter estimates. We detected significant variability among parameters of different time series and identified variation in the maximum growth rate of the prey as a source of the transition from steady states to cyclic dynamics.

By lending more flexibility to the model, our approach facilitates parametrizations and shows more easily which patterns in time series can also be explained by simple models. Applying Bayesian inference and dynamical population models in conjunction may help to quantify the profound variability in organismal properties in nature.
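A minimal version of the inference step, with many simplifications: a single logistic growth parameter is estimated from one noisy time series with a random-walk Metropolis sampler, rather than the hierarchical multi-parameter model of the study. The carrying capacity, noise level, flat prior and proposal width are all assumed here:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)

K, sigma, r_true = 10.0, 0.3, 0.8        # assumed carrying capacity, noise, true rate
t_obs = np.linspace(0.0, 10.0, 25)

def simulate(r):
    """Integrate logistic growth dN/dt = r N (1 - N/K) and sample it at t_obs."""
    sol = solve_ivp(lambda t, n: r * n * (1.0 - n / K), (0.0, 10.0), [0.5],
                    t_eval=t_obs, rtol=1e-6)
    return sol.y[0]

data = simulate(r_true) + rng.normal(0.0, sigma, t_obs.size)

def log_post(r):
    if r <= 0.0:
        return -np.inf                   # flat prior on r > 0
    return -0.5 * np.sum((simulate(r) - data) ** 2) / sigma**2

samples, r = [], 0.5
lp = log_post(r)
for _ in range(2000):                    # random-walk Metropolis
    r_new = r + rng.normal(0.0, 0.05)
    lp_new = log_post(r_new)
    if np.log(rng.uniform()) < lp_new - lp:
        r, lp = r_new, lp_new
    samples.append(r)

r_mean = np.mean(samples[500:])          # posterior mean after burn-in
```

The spread of the retained samples is the uncertainty estimate that the Bayesian approach provides alongside the point estimate.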


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

A key goal of model-based cognitive neuroscience is to estimate the trial-by-trial fluctuations of cognitive model parameters in order to link these fluctuations to brain signals. However, previously developed methods are limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method makes it possible to distinguish true variability in a parameter from noise and from other sources of variability. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Jim J. Xiao

The objectives were to review available PK models for saturable FcRn-mediated IgG disposition, and to explore an alternative semimechanistic model. Most available empirical and mechanistic PK models assumed equal IgG concentrations in plasma and endosome in addition to other model-specific assumptions. These might have led to inappropriate parameter estimates and model interpretations. Some physiologically based PK (PBPK) models included FcRn-mediated IgG recycling. The nature of PBPK models requires borrowing parameter values from literature, and subtle differences in the assumptions may render dramatic changes in parameter estimates related to the IgG recycling kinetics. These models might have been unnecessarily complicated to address FcRn saturation and nonlinear IgG PK especially in the IVIG setting. A simple semimechanistic PK model (cutoff model) was developed that assumed a constant endogenous IgG production rate and a saturable FcRn-binding capacity. The FcRn-binding capacity was defined as MAX, and IgG concentrations exceeding MAX in endosome resulted in lysosomal degradation. The model parameters were estimated using simulated data from previously published models. The cutoff model adequately described the rat and mouse IgG PK data simulated from published models and allowed reasonable estimation of endogenous IgG turnover rates.
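The cutoff idea can be sketched as a one-compartment toy: IgG within the FcRn-binding capacity MAX is protected (recycled), while any excess is degraded. The rates below are illustrative assumptions, not estimates from the paper:

```python
from scipy.integrate import solve_ivp

# Assumed illustrative values: constant endogenous production R, binding
# capacity MAX, and first-order degradation k_deg of the unprotected excess.
R, MAX, k_deg = 1.0, 5.0, 0.5

def digg_dt(t, c):
    excess = max(c[0] - MAX, 0.0)        # only IgG above capacity is degraded
    return [R - k_deg * excess]

sol = solve_ivp(digg_dt, (0.0, 100.0), [0.0], rtol=1e-8)
c_final = sol.y[0, -1]

# At steady state production balances degradation: C_ss = MAX + R / k_deg = 7.0 here
c_ss = MAX + R / k_deg
```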


Author(s):  
Rafegh Aghamohammadi ◽  
Jorge Laval

This paper extends the Stochastic Method of Cuts (SMoC) to approximate the Macroscopic Fundamental Diagram (MFD) of urban networks and uses the Maximum Likelihood Estimation (MLE) method to estimate the model parameters based on empirical data from a corridor and 30 cities around the world. For the corridor case, the estimated values are in good agreement with the measured values of the parameters. For the network datasets, the results indicate that the method yields satisfactory parameter estimates and graphical fits for roughly 50% of the studied networks, where estimations fall within the expected range of the parameter values. The satisfactory estimates are mostly for datasets that (i) cover a relatively wide range of densities and (ii) have average flow values at different densities that are approximately normally distributed, similar to the probability density function of the SMoC. The estimated parameter values are compared to the real or expected values, and any discrepancies and their potential causes are discussed in depth to identify the challenges in MFD estimation, both analytical and empirical. In particular, we find that the most important issues needing further investigation are: (i) the distribution of loop detectors within the links, (ii) the distribution of loop detectors across the network, and (iii) the treatment of unsignalized intersections and their impact on the block length.
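The MLE step can be illustrated with a stand-in fundamental diagram; the Greenshields form below replaces the actual SMoC expression, and all parameter values are assumed. It mirrors the assumption that flows at a given density are approximately normally distributed:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Stand-in MFD q(k) = u_f * k * (1 - k / k_j) with Gaussian flow scatter;
# u_f (free-flow speed), k_j (jam density) and sigma are assumed "true" values.
u_f, k_j, sigma = 50.0, 150.0, 100.0
k = rng.uniform(5.0, 140.0, 300)              # observed densities (veh/km)
q = u_f * k * (1.0 - k / k_j) + rng.normal(0.0, sigma, k.size)

def neg_log_lik(p):
    uf, kj, s = p
    if s <= 0.0 or kj <= k.max():
        return np.inf                         # keep parameters in a physical range
    mu = uf * k * (1.0 - k / kj)
    # Gaussian negative log-likelihood, dropping constants
    return 0.5 * np.sum((q - mu) ** 2) / s**2 + k.size * np.log(s)

fit = minimize(neg_log_lik, x0=[40.0, 160.0, 80.0], method="Nelder-Mead",
               options={"maxiter": 2000, "maxfev": 4000})
uf_hat, kj_hat, s_hat = fit.x
```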


Cells ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 1516 ◽
Author(s):  
Daniel Gratz ◽  
Alexander J Winkle ◽  
Seth H Weinberg ◽  
Thomas J Hund

The voltage-gated Na+ channel Nav1.5 is critical for normal cardiac myocyte excitability. Mathematical models have been widely used to study Nav1.5 function and its link to a range of cardiac arrhythmias. There is growing appreciation for the importance of incorporating the physiological heterogeneity observed even in a healthy population into mathematical models of the cardiac action potential. Here, we apply methods from Bayesian statistics to capture the variability in experimental measurements on human atrial Nav1.5 across experimental protocols and labs. This variability was used to define a physiological distribution for model parameters in a novel model formulation of Nav1.5, which was then incorporated into an existing human atrial action potential model. Model validation was performed by comparing the simulated distribution of action potential upstroke velocity measurements to experimental measurements from several different sources. Going forward, we hope to apply this approach to other major atrial ion channels to create a comprehensive model of the human atrial AP. We anticipate that such a model will be useful for understanding excitability at the population level, including variable drug response and penetrance of variants linked to inherited cardiac arrhythmia syndromes.


2021 ◽  
Vol 11 (7) ◽  
pp. 2898
Author(s):  
Humberto C. Godinez ◽  
Esteban Rougier

Simulation of fracture initiation, propagation, and arrest is a problem of interest for many applications in the scientific community. A number of numerical methods are used for this purpose, and among the most widely accepted is the combined finite-discrete element method (FDEM). To model fracture with FDEM, material behavior is described by specifying a combination of elastic properties, strengths (in the normal and tangential directions), and the energy dissipated in failure modes I and II, which are modeled by incorporating a parameterized softening curve that defines a post-peak stress-displacement relationship unique to each material. In this work, we implement a data assimilation method to estimate key model parameter values with the objective of improving the calibration process for FDEM fracture simulations. Specifically, we implement the ensemble Kalman filter assimilation method in the Hybrid Optimization Software Suite (HOSS), an FDEM-based code developed for the simulation of fracture and fragmentation behavior. We present a set of assimilation experiments to match the numerical results obtained for a Split Hopkinson Pressure Bar (SHPB) model with experimental observations for granite. We achieved this by calibrating a subset of model parameters. The results show a steady convergence of the assimilated parameter values towards values that reproduce the observed time/stress curves from the SHPB experiments. In particular, both tensile and shear strengths converge faster than the other parameters considered.
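A scalar sketch of the assimilation loop, with a cheap algebraic stand-in for the FDEM simulation (HOSS itself is far more involved); the forward model, observation noise and prior ensemble are all assumed:

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(theta):
    """Stand-in for the expensive fracture simulation: maps a parameter to an observable."""
    return theta**2 + 2.0 * theta

theta_true, obs_err = 3.0, 2.0
observations = forward(theta_true) + rng.normal(0.0, obs_err, 10)   # noisy measurements

ensemble = rng.normal(5.0, 2.0, 100)     # prior ensemble of parameter guesses

for y in observations:
    y_pred = forward(ensemble)           # run the forward model for every member
    cov = np.cov(ensemble, y_pred)       # joint parameter/prediction covariance
    gain = cov[0, 1] / (cov[1, 1] + obs_err**2)       # scalar Kalman gain
    # perturbed-observation update (stochastic EnKF)
    y_pert = y + rng.normal(0.0, obs_err, ensemble.size)
    ensemble = ensemble + gain * (y_pert - y_pred)

theta_est = ensemble.mean()              # assimilated parameter estimate
```

The ensemble mean drifts toward the parameter value that reproduces the observations, while the shrinking ensemble spread tracks the remaining uncertainty.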


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
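The stability check can be sketched with a deliberately simple linear model for which the posterior is available in closed form (the study itself uses Markov chain Monte Carlo on a nonlinear model); the proportionality constant, noise level and data sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear "model": current v proportional to a forcing term, v = c * f.
c_true, sigma = 1.2, 0.1
forcing = rng.uniform(0.1, 1.0, 120)
v = c_true * forcing + rng.normal(0.0, sigma, forcing.size)

modes, spreads = [], []
for n in (20, 40, 60, 80, 100, 120):     # stepwise add data to the calibration run
    f_n, v_n = forcing[:n], v[:n]
    # With a flat prior and known sigma, the posterior of c is Gaussian:
    c_hat = np.sum(f_n * v_n) / np.sum(f_n**2)    # posterior mode (best-fit value)
    c_sd = sigma / np.sqrt(np.sum(f_n**2))        # posterior spread
    modes.append(c_hat)
    spreads.append(c_sd)

# A stable, consistent parameter: the mode settles near one value and the
# spread shrinks monotonically as data accumulate.
```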

