How meaningful are parameter estimates from models of inter-temporal choice?

2020 ◽  
Author(s):  
Timothy Ballard ◽  
Ashley Luckman ◽  
Emmanouil Konstantinidis

Decades of work have been dedicated to developing and testing models that characterize how people make inter-temporal choices. Although parameter estimates from these models are often interpreted as indices of latent components of the choice process, little work has been done to examine their reliability. This is problematic, because estimation error can bias conclusions that are drawn from these parameter estimates. We examine the reliability of inter-temporal choice model parameter estimates by conducting a parameter recovery analysis of 11 prominent models. We find that the reliability of parameter estimation varies considerably between models and the experimental designs upon which parameter estimates are based. We conclude that many parameter estimates reported in previous research are likely unreliable and provide recommendations on how to enhance reliability for those wishing to use inter-temporal choice models for measurement purposes.
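The parameter-recovery logic the abstract describes can be sketched in a few lines: simulate choices from a model with known parameters, re-estimate them, and compare. The hyperbolic discounting function, softmax choice rule, design values, and grid-search estimator below are illustrative assumptions, not the authors' actual eleven models or experimental designs.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_prob(k, temp, ll_amt, ll_delay, ss_amt):
    # Hyperbolic discounting V = A / (1 + k*delay), then a softmax choice
    # rule over the larger-later (LL) vs. immediate smaller-sooner (SS) values
    v_ll = ll_amt / (1.0 + k * ll_delay)
    return 1.0 / (1.0 + np.exp(-temp * (v_ll - ss_amt)))

def neg_log_lik(k, temp, choices, ll_amt, ll_delay, ss_amt):
    p = np.clip(choice_prob(k, temp, ll_amt, ll_delay, ss_amt), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))

# Simulate one synthetic participant with known parameters...
true_k, true_temp, n = 0.05, 1.5, 200
ll_amt = np.full(n, 100.0)
ll_delay = rng.uniform(1, 365, n)          # delay in days
ss_amt = rng.uniform(20, 95, n)            # immediate amount
p = choice_prob(true_k, true_temp, ll_amt, ll_delay, ss_amt)
choices = (rng.random(n) < p).astype(float)

# ...then try to recover them by maximum likelihood over a parameter grid
ks, temps = np.geomspace(1e-3, 1.0, 60), np.linspace(0.1, 5.0, 50)
_, k_hat, temp_hat = min((neg_log_lik(k, t, choices, ll_amt, ll_delay, ss_amt), k, t)
                         for k in ks for t in temps)
print(f"true k = {true_k}, recovered k = {k_hat:.3f}")
```

Repeating this over many simulated participants and correlating true with recovered values is the standard way to quantify recovery for a given design.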

2002 ◽  
Vol 39 (4) ◽  
pp. 479-487 ◽  
Author(s):  
Rick L. Andrews ◽  
Andrew Ainslie ◽  
Imran S. Currim

Currently, there is an important debate about the relative merits of models with discrete and continuous representations of consumer heterogeneity. In a recent JMR study, Andrews, Ansari, and Currim (2002; hereafter AAC) compared metric conjoint analysis models with discrete and continuous representations of heterogeneity and found no differences between the two models with respect to parameter recovery and prediction of ratings for holdout profiles. Models with continuous representations of heterogeneity did, however, fit the data better than models with discrete representations. The goal of the current study is to compare the relative performance of logit choice models with discrete versus continuous representations of heterogeneity in terms of the accuracy of household-level parameters, fit, and forecasting accuracy. To accomplish this goal, the authors conduct an extensive simulation experiment with logit models in a scanner data context, using an experimental design based on AAC and other recent simulation studies. One of the main findings is that models with continuous and discrete representations of heterogeneity recover household-level parameter estimates and predict holdout choices about equally well except when the number of purchases per household is small, in which case the models with continuous representations perform very poorly. As in the AAC study, models with continuous representations of heterogeneity fit the data better.
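The paper's central finding, that household-level estimates degrade when each household contributes few purchases, can be illustrated with a toy binary-logit simulation. The normal heterogeneity distribution, price-gap design, and per-household grid-search MLE below are assumptions for illustration only, not the AAC experimental design.

```python
import numpy as np

rng = np.random.default_rng(1)

def household_beta_rmse(n_purchases, n_households=200):
    # Continuous heterogeneity: each household's price sensitivity
    # is drawn from Normal(-2, 0.5)
    true_beta = rng.normal(-2.0, 0.5, n_households)
    x = rng.uniform(-1.0, 1.0, (n_households, n_purchases))  # price gap, brand A - B
    p = 1.0 / (1.0 + np.exp(-true_beta[:, None] * x))        # P(choose brand A)
    y = (rng.random(p.shape) < p).astype(float)
    # Household-level MLE by grid search over beta
    grid = np.linspace(-5.0, 1.0, 121)
    pg = 1.0 / (1.0 + np.exp(-grid[None, :, None] * x[:, None, :]))
    ll = (y[:, None, :] * np.log(pg)
          + (1 - y[:, None, :]) * np.log(1 - pg)).sum(axis=2)
    beta_hat = grid[ll.argmax(axis=1)]
    return float(np.sqrt(np.mean((beta_hat - true_beta) ** 2)))

rmse_few = household_beta_rmse(n_purchases=5)
rmse_many = household_beta_rmse(n_purchases=100)
print(f"RMSE with 5 purchases: {rmse_few:.2f}, with 100: {rmse_many:.2f}")
```

With only a handful of purchases per household the individual likelihoods are nearly flat, which is why hierarchical (continuous) or segment-based (discrete) pooling matters in that regime.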


2017 ◽  
Author(s):  
Paul D. Blischak ◽  
Laura S. Kubatko ◽  
Andrea D. Wolfe

Motivation: Genotyping and parameter estimation using high-throughput sequencing data are everyday tasks for population geneticists, but methods developed for diploids are typically not applicable to polyploid taxa. This is due to their duplicated chromosomes, as well as the complex patterns of allelic exchange that often accompany whole genome duplication (WGD) events. For WGDs within a single lineage (autopolyploids), inbreeding can result from mixed mating and/or double reduction. For WGDs that involve hybridization (allopolyploids), alleles are typically inherited through independently segregating subgenomes.
Results: We present two new models for estimating genotypes and population genetic parameters from genotype likelihoods for auto- and allopolyploids. We then use simulations to compare these models to existing approaches at varying depths of sequencing coverage and ploidy levels. These simulations show that our models typically have lower levels of estimation error for genotype and parameter estimates, especially when sequencing coverage is low. Finally, we also apply these models to two empirical data sets from the literature. Overall, we show that the use of genotype likelihoods to model non-standard inheritance patterns is a promising approach for conducting population genomic inferences in polyploids.
Availability: A C++ program, EBG, is provided to perform inference using the models we describe. It is available under the GNU GPLv3 on GitHub: https://github.com/pblischak/polyploid-genotyping.
Contact: [email protected]
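A generic sketch of the genotype-likelihood idea for a tetraploid: each allele dosage implies an expected alternate-read fraction, and a binomial read-count model combined with an allele-frequency prior yields a posterior over dosages. The symmetric error model and binomial prior here are simplified illustrations, not EBG's actual likelihoods or priors.

```python
import numpy as np
from math import comb, log

def genotype_loglik(ref_reads, total_reads, ploidy=4, err=0.01):
    # Log-likelihood of each alt-allele dosage g = 0..ploidy under a
    # binomial read-count model with symmetric sequencing error rate `err`
    alt = total_reads - ref_reads
    lls = []
    for g in range(ploidy + 1):
        p_alt = (g / ploidy) * (1 - err) + (1 - g / ploidy) * err
        lls.append(log(comb(total_reads, alt))
                   + alt * log(p_alt) + ref_reads * log(1 - p_alt))
    return np.array(lls)

def genotype_posterior(ref_reads, total_reads, q=0.3, ploidy=4, err=0.01):
    # Posterior over dosages given a binomial (Hardy-Weinberg-like) prior
    # parameterised by an assumed population alt-allele frequency q
    ll = genotype_loglik(ref_reads, total_reads, ploidy, err)
    prior = np.array([comb(ploidy, g) * q**g * (1 - q)**(ploidy - g)
                      for g in range(ploidy + 1)])
    w = np.exp(ll - ll.max()) * prior
    return w / w.sum()

post = genotype_posterior(ref_reads=18, total_reads=20)
print(post.round(3), "MAP dosage:", int(post.argmax()))
```

With 2 alternate reads out of 20, the posterior favors dosage 1 over dosage 0: the prior and the error model together outweigh the raw read fraction, which is exactly the low-coverage regime where likelihood-based genotyping helps.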


2002 ◽  
Vol 45 (4-5) ◽  
pp. 335-343 ◽  
Author(s):  
H. Spanjers ◽  
G.G. Patry ◽  
K.J. Keesman

This paper describes part of a project to develop a systematic approach to knowledge extraction from on-line respirometric measurements in support of wastewater treatment plant control and operation. The paper deals with the following issues: (1) a test of the implementation of an automatic set-up consisting of a continuous laboratory respirometer integrated in a mobile trailer with sampling and dosing equipment, and a data-acquisition and communication system; (2) assessment of activated sludge/sewage characteristics from sludge respirograms by model parameter estimation; (3) comparison of the parameter estimates with regular plant data and information obtained from supplementary wastewater respirograms. The paper describes the equipment and some of its measurement results from a period of one week at a large-scale wastewater treatment plant. The measurements were evaluated in terms of common activated sludge modelling practice. The automatic set-up allowed reliable measurements for at least one week. The data were used to calibrate two different versions of the model, and independent parameter estimates were obtained.


1980 ◽  
Vol 239 (5) ◽  
pp. R390-R400
Author(s):  
T. M. Grove ◽  
G. A. Bekey ◽  
L. J. Haywood

The accuracy of parameter estimation applied to physiological systems is analyzed. The method of analysis is applicable to procedures utilizing minimization of squared output error and a nonlinear dynamic system model. Three major sources of estimation error are described: 1) measurement error, 2) modeling error, and 3) optimization error. Measurement errors affect values used for the system output, the model input, and nonestimated parameters of the model. Modeling errors are due to failure to adequately describe the structure of the system and to numerical errors that occur in the digital computer solution of the model equations. Linearization by use of Taylor series expansions in the region of the nominal solution is used to obtain an expression for the covariance matrix of the parameter estimates in terms of the covariance matrix of each error source. The analysis is applied to the example of cardiac output estimation from respiratory measurements. The results demonstrate that an analysis of system identifiability is not sufficient to ensure usable estimates and that systematic error analysis is essential for assessing the usefulness of parameter estimation techniques.
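The linearization step the abstract describes, propagating error covariances through the model's parameter sensitivities, can be sketched on a toy model. Using a hypothetical washout curve y(t) = a·exp(−b·t) and pure measurement error, the first-order covariance of least-squares estimates is σ²(JᵀJ)⁻¹, where J is the sensitivity (Jacobian) matrix evaluated at the nominal solution; the model and values below are illustrative, not the paper's cardiac-output example.

```python
import numpy as np

def model(t, a, b):
    # A simple nonlinear response, e.g. an indicator washout curve:
    # y(t) = a * exp(-b * t)
    return a * np.exp(-b * t)

def sensitivity_matrix(t, a, b, h=1e-6):
    # Numerical Jacobian of the model output w.r.t. the parameters,
    # i.e. the Taylor-series linearization about the nominal solution
    base = model(t, a, b)
    return np.column_stack([(model(t, a + h, b) - base) / h,
                            (model(t, a, b + h) - base) / h])

t = np.linspace(0.0, 10.0, 50)
a, b = 2.0, 0.5          # nominal parameter values
sigma = 0.05             # measurement noise standard deviation

J = sensitivity_matrix(t, a, b)
cov = sigma**2 * np.linalg.inv(J.T @ J)  # first-order covariance of (a_hat, b_hat)
se = np.sqrt(np.diag(cov))
print("approximate standard errors:", se)
```

The same machinery extends to the paper's full analysis by adding covariance terms for input errors, non-estimated-parameter errors, and modeling/numerical errors, each propagated through its own sensitivity matrix.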


2014 ◽  
Vol 1006-1007 ◽  
pp. 815-820
Author(s):  
Zhen Wang ◽  
Lan Xiang Zhu ◽  
Feng Yu ◽  
Lei Gu

Based on Electromagnetic Environment Sensing (EES) and Multiple-Input Multiple-Output (MIMO) radar sensing algorithms, this paper presents the SVD-TLS perception algorithm. The algorithm first applies cross-spectrum AR model parameter estimation; it then accounts for perturbation of the cross-correlation matrix by the estimation-error function; finally, taking into account errors on both sides of the equation, it applies the Total Least Squares (TLS) method using the cross-correlation function of the estimated measurement errors. Compared with standard AR model parameter estimation, the SVD-based cross-spectral estimation is significantly more accurate and requires far less computation, making it better suited to real-time online processing.
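The total-least-squares step at the core of such SVD-based methods can be sketched on a generic errors-in-variables regression: when the regressors are noisy as well as the output, ordinary least squares is biased toward zero, whereas the TLS solution read off the SVD of the augmented matrix [X | y] accounts for errors on both sides of the equation. This is a textbook TLS illustration under equal, independent noise, not the paper's cross-spectrum AR algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# True linear relation y = X @ a, with noise added to BOTH X and y
a_true = np.array([1.2, -0.5])
n = 5000
X_clean = rng.normal(0.0, 2.0, (n, 2))
y_clean = X_clean @ a_true
X = X_clean + rng.normal(0.0, 0.5, X_clean.shape)  # noisy regressors
y = y_clean + rng.normal(0.0, 0.5, n)              # noisy output

# Ordinary least squares ignores the regressor noise (attenuation bias)
a_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Total least squares via SVD of the augmented matrix [X | y]:
# the solution lies along the right singular vector belonging to the
# smallest singular value
Z = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]
a_tls = -v[:2] / v[2]
print("OLS:", a_ols.round(3), " TLS:", a_tls.round(3))
```

TLS is consistent here because the noise variance is the same in every column of [X | y]; in the AR cross-spectrum setting that assumption must be enforced through the error-correlation corrections the paper describes.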


2020 ◽  
Vol 80 (4) ◽  
pp. 775-807
Author(s):  
Yue Liu ◽  
Ying Cheng ◽  
Hongyun Liu

The responses of non-effortful test-takers may have serious consequences, as non-effortful responses can impair model calibration and latent trait inferences. This article introduces a mixture model, using both response accuracy and response time information, to help differentiate non-effortful from effortful individuals and to improve item parameter estimation based on the effortful group. Two mixture approaches are compared with the traditional response time mixture model (TMM) method and the normative threshold 10 (NT10) method, which uses response-behavior effort criteria, in four simulation scenarios with regard to item parameter recovery and classification accuracy. The results demonstrate that the mixture methods and the TMM method can reduce the bias of item parameter estimates caused by non-effortful individuals, with the mixture methods showing more advantages when non-effort severity is high or the response times are not lognormally distributed. An illustrative example is also provided.
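One simple instance of a response-time mixture is a two-component Gaussian mixture on log response times, fitted by EM, with the fast component flagging non-effortful responses. The simulated RT distributions and the plain Gaussian mixture below are illustrative assumptions, not the article's full accuracy-plus-RT model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated log response times: a fast non-effortful (guessing) component
# and a slower effortful component
log_rt = np.concatenate([rng.normal(0.0, 0.3, 300),   # non-effortful, ~1 s
                         rng.normal(2.0, 0.5, 700)])  # effortful, ~7 s

# Two-component Gaussian mixture fitted by EM; component 0 is initialised
# on the fast side so it captures the non-effortful responses
pi, mu, sd = 0.5, np.array([0.5, 1.5]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of the fast component for each response
    d0 = np.exp(-(log_rt - mu[0])**2 / (2 * sd[0]**2)) / sd[0]
    d1 = np.exp(-(log_rt - mu[1])**2 / (2 * sd[1]**2)) / sd[1]
    r = pi * d0 / (pi * d0 + (1 - pi) * d1)
    # M-step: update mixing weight, means, and standard deviations
    pi = r.mean()
    mu = np.array([(r * log_rt).sum() / r.sum(),
                   ((1 - r) * log_rt).sum() / (1 - r).sum()])
    sd = np.array([np.sqrt((r * (log_rt - mu[0])**2).sum() / r.sum()),
                   np.sqrt(((1 - r) * (log_rt - mu[1])**2).sum() / (1 - r).sum())])

flagged = r > 0.5   # responses classified as non-effortful
print(f"estimated non-effort proportion = {pi:.2f}")
```

In the article's setting the mixture is richer (accuracy enters the likelihood too, and RTs need not be lognormal), but the EM logic of soft classification followed by reweighted estimation is the same.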


2020 ◽  
Author(s):  
Joseph Rios ◽  
Jim Soland

As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the Effort-Moderated IRT (EM-IRT) model. Although this model has been shown to outperform traditional IRT models (e.g., 2PL) in parameter estimation under simulated conditions, prior research has failed to examine its performance under violations of the model's assumptions. Therefore, the objective of this simulation study was to examine item and mean ability parameter recovery when violating the assumptions that noneffortful responding occurs randomly (assumption #1) and is unrelated to the underlying ability of examinees (assumption #2). Results demonstrated that, across conditions, the EM-IRT model provided item parameter estimates robust to violations of assumption #1. However, bias values greater than 0.20 SDs were observed for the EM-IRT model when violating assumption #2; nonetheless, these values were still lower than those of the 2PL model. In terms of mean ability estimates, model results indicated equal performance between the EM-IRT and 2PL models across conditions. Across both models, mean ability estimates were found to be biased by more than 0.25 SDs when violating assumption #2. However, our accompanying empirical study suggested that this biasing occurred under extreme conditions that may not be present in some operational settings. Overall, these results suggest that the EM-IRT model provides superior item and equal mean ability parameter estimates in the presence of model violations under realistic conditions when compared to the 2PL model.
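The EM-IRT scoring idea, dropping responses flagged as non-effortful from the likelihood, can be sketched for a single examinee under the 2PL. The item parameters, response pattern, effort flags, and grid-search MLE below are made-up illustration values, not the study's simulation design.

```python
import numpy as np

def p_2pl(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def theta_mle(resp, effortful, a, b):
    # Effort-moderated scoring: responses flagged as non-effortful are
    # dropped from the likelihood, i.e. treated as missing
    grid = np.linspace(-4.0, 4.0, 801)
    m = effortful.astype(bool)
    p = p_2pl(grid[:, None], a[m], b[m])
    ll = (resp[m] * np.log(p) + (1 - resp[m]) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

a = np.ones(10)
b = np.linspace(-2.0, 2.0, 10)                        # easy -> hard
resp = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], float)
effortful = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # last two: rapid guesses
theta_hat = theta_mle(resp, effortful, a, b)
print(f"ability estimate from effortful responses only: {theta_hat:.2f}")
```

The model's two assumptions map directly onto this sketch: if the flagged responses were not random (assumption #1) or depended on ability (assumption #2), dropping them would distort the remaining likelihood, which is the bias mechanism the study quantifies.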


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256227
Author(s):  
Rajnesh Lal ◽  
Weidong Huang ◽  
Zhenquan Li

Since the novel coronavirus (COVID-19) outbreak in China, and due to the open accessibility of COVID-19 data, several researchers and modellers have revisited the classical epidemiological models to evaluate their practical applicability. While mathematical compartmental models can predict the dynamics of various contagious viruses, their efficiency depends on the model parameters. Recently, several parameter estimation methods have been proposed for different models. In this study, we evaluated the performance of the ensemble Kalman filter (EnKF) in the estimation of time-varying model parameters with synthetic data and the real COVID-19 data of Hubei province, China. Contrary to previous works, the current study examines the effect of damping factors on an augmented EnKF. An augmented EnKF algorithm is provided, and we show how the filter performs when estimating models from uncertain observational (reported) data. The results confirm that the augmented-EnKF approach can provide reliable model parameter estimates. Additionally, there was a good fit of profiles between model simulation and the reported COVID-19 data, confirming the feasibility of the augmented-EnKF approach for reliable model parameter estimation.
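A minimal sketch of an augmented EnKF: append the unknown parameter to the state vector, propagate an ensemble of (state, parameter) pairs through the dynamics, and update both from each noisy observation via ensemble cross-covariances. The discrete logistic-growth toy model below stands in for the paper's compartmental model, and all values (ensemble size, noise levels, the parameter random walk) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Truth": discrete logistic growth x[t+1] = x[t] + r*x[t]*(1 - x[t]/K),
# a stand-in for an epidemic-like curve with unknown growth rate r
r_true, K = 0.3, 1000.0
x, obs = 10.0, []
for _ in range(60):
    x = x + r_true * x * (1.0 - x / K)
    obs.append(x + rng.normal(0.0, 20.0))   # noisy "reported" counts

# Augmented EnKF: each ensemble member carries a joint state (x, r)
n_ens, obs_sd = 200, 20.0
ens_x = rng.uniform(5.0, 15.0, n_ens)
ens_r = rng.normal(0.5, 0.2, n_ens)         # vague prior on the parameter
for y in obs:
    # Forecast: propagate each member with its own parameter value; a small
    # random walk on r keeps the parameter ensemble from collapsing
    ens_x = ens_x + ens_r * ens_x * (1.0 - ens_x / K)
    ens_r = ens_r + rng.normal(0.0, 0.005, n_ens)
    # Analysis: scalar Kalman update of x and r from the observation,
    # using ensemble (cross-)covariances and perturbed observations
    var_x = ens_x.var(ddof=1)
    cov_rx = np.cov(ens_r, ens_x)[0, 1]
    gain_x = var_x / (var_x + obs_sd**2)
    gain_r = cov_rx / (var_x + obs_sd**2)
    innov = y + rng.normal(0.0, obs_sd, n_ens) - ens_x
    ens_x = ens_x + gain_x * innov
    ens_r = ens_r + gain_r * innov

r_hat = ens_r.mean()
print(f"true r = {r_true}, EnKF estimate = {r_hat:.3f}")
```

Because the parameter is updated only through its cross-covariance with the observed state, estimation is informative during the growth phase and essentially stops once the curve saturates, which is why time-varying parameters need the random-walk (or damping) mechanisms the paper studies.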

