An Empirical Comparison of Logit Choice Models with Discrete versus Continuous Representations of Heterogeneity

2002 ◽  
Vol 39 (4) ◽  
pp. 479-487 ◽  
Author(s):  
Rick L. Andrews ◽  
Andrew Ainslie ◽  
Imran S. Currim

Currently, there is an important debate about the relative merits of models with discrete and continuous representations of consumer heterogeneity. In a recent JMR study, Andrews, Ansari, and Currim (2002; hereafter AAC) compared metric conjoint analysis models with discrete and continuous representations of heterogeneity and found no differences between the two models with respect to parameter recovery and prediction of ratings for holdout profiles. Models with continuous representations of heterogeneity fit the data better than models with discrete representations. The goal of the current study is to compare the relative performance of logit choice models with discrete versus continuous representations of heterogeneity in terms of household-level parameter recovery, fit, and forecasting accuracy. To accomplish this goal, the authors conduct an extensive simulation experiment with logit models in a scanner data context, using an experimental design based on AAC and other recent simulation studies. One of the main findings is that models with continuous and discrete representations of heterogeneity recover household-level parameter estimates and predict holdout choices about equally well, except when the number of purchases per household is small, in which case models with continuous representations perform very poorly. As in the AAC study, models with continuous representations of heterogeneity fit the data better.
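
As a rough illustration of the data-generating side of such a simulation, the sketch below draws household-level coefficients from a continuous (normal) mixing distribution and simulates logit brand choices via the Gumbel-max rule. All dimensions and parameter values are hypothetical placeholders, not AAC's or the authors' actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: simulate household-level brand choices under a
# continuous (normal) representation of heterogeneity, the data-generating
# process a mixed logit assumes. Values below are assumptions for the sketch.
n_households, n_purchases, n_brands = 300, 20, 3
beta_mean = np.array([1.0, -2.0])   # e.g., promotion and price coefficients
beta_sd = np.array([0.5, 0.8])

# Household-level coefficients drawn from the continuous mixing distribution
betas = beta_mean + beta_sd * rng.standard_normal((n_households, 2))

choices = np.empty((n_households, n_purchases), dtype=int)
for h in range(n_households):
    # Attributes of each brand alternative on each purchase occasion
    X = rng.standard_normal((n_purchases, n_brands, 2))
    # Gumbel errors + argmax reproduce multinomial logit choice probabilities
    utility = X @ betas[h] + rng.gumbel(size=(n_purchases, n_brands))
    choices[h] = utility.argmax(axis=1)
```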

2020 ◽  
Author(s):  
Timothy Ballard ◽  
Ashley Luckman ◽  
Emmanouil Konstantinidis

Decades of work have been dedicated to developing and testing models that characterize how people make inter-temporal choices. Although parameter estimates from these models are often interpreted as indices of latent components of the choice process, little work has been done to examine their reliability. This is problematic because estimation error can bias conclusions drawn from these parameter estimates. We examine the reliability of inter-temporal choice model parameter estimates by conducting a parameter recovery analysis of 11 prominent models. We find that the reliability of parameter estimation varies considerably across models and across the experimental designs on which parameter estimates are based. We conclude that many parameter estimates reported in previous research are likely unreliable, and we provide recommendations on how to enhance reliability for those wishing to use inter-temporal choice models for measurement purposes.
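
The logic of a parameter recovery analysis can be illustrated with a minimal sketch for one such model (hyperbolic discounting with a logistic choice rule). The design, amounts, and true parameter values below are illustrative stand-ins, not the paper's exact specifications.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# P(choose larger-later): hyperbolic discounting + softmax choice rule
def choice_prob(params, ss_amt, ll_amt, delay):
    k, tau = params
    v_ll = ll_amt / (1 + k * delay)  # discounted larger-later value
    return 1 / (1 + np.exp(-(v_ll - ss_amt) / tau))

# 1) Simulate choices from known ("true") parameters
true_k, true_tau = 0.05, 2.0
ss, ll = 20.0, 40.0                       # smaller-sooner vs larger-later amounts
delays = rng.uniform(1, 180, size=200)    # delays in days (assumed design)
p = choice_prob((true_k, true_tau), ss, ll, delays)
y = rng.random(200) < p

# 2) Re-fit the model by maximum likelihood
def neg_loglik(params):
    p = np.clip(choice_prob(params, ss, ll, delays), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(neg_loglik, x0=[0.01, 1.0], bounds=[(1e-4, 1), (0.1, 10)])

# 3) Recovery check: over many repetitions, correlate fit.x with the
#    generating values (true_k, true_tau) to gauge estimate reliability.
print(fit.x, (true_k, true_tau))
```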


2012 ◽  
Vol 616-618 ◽  
pp. 1143-1147
Author(s):  
Wei Sun ◽  
Jing Min Wang ◽  
Jun Jie Kang

In this paper, the performance of combination forecast methods for CO2 emissions prediction is investigated. A linear model, a time series model, the GM(1,1) model, and the Grey Verhulst model are selected as the individual models. Four combination forecast methods are then applied to the top five CO2 emitters: the equal weight (EW) method, the variance-covariance (VACO) method, the regression combination (R) method, and the discounted mean square forecast error (MSFE) method. The forecasting accuracy of the combination models is compared with that of the single models. This research suggests that combination forecasts are almost certain to outperform the worst individual forecasts and may even outperform most individual ones. Furthermore, combination forecasts avoid the risk of model selection in future projections. For CO2 emissions forecasting, which involves many uncertain factors, combining the single forecasts is the safer approach in such forecasting situations.
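
Two of these combination schemes can be sketched in a few lines; the forecasts and actuals below are placeholders standing in for the single models used in the paper.

```python
import numpy as np

y  = np.array([5.1, 5.4, 5.9, 6.3, 6.8])  # observed emissions (illustrative)
f1 = np.array([5.0, 5.5, 5.8, 6.5, 6.6])  # forecasts from single model 1
f2 = np.array([5.3, 5.2, 6.1, 6.2, 7.0])  # forecasts from single model 2

# Equal weight (EW) combination: simple average of the single forecasts
ew = (f1 + f2) / 2

# Variance-covariance (VACO) combination: weight each model inversely to
# its error variance, accounting for the covariance between model errors.
e = np.stack([y - f1, y - f2])   # forecast errors of each single model
S = np.cov(e)                    # error variance-covariance matrix
w = np.linalg.solve(S, np.ones(2))
w /= w.sum()                     # combination weights summing to one
vaco = w[0] * f1 + w[1] * f2
```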


2017 ◽  
Vol 27 (10) ◽  
pp. 2885-2905 ◽  
Author(s):  
Richard D Riley ◽  
Joie Ensor ◽  
Dan Jackson ◽  
Danielle L Burke

Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher’s information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
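
The decomposition can be sketched for a simple fixed-effect meta-regression: the total Fisher information is the sum of independent per-study blocks, and each study's contribution to the variance matrix of the estimates yields its percentage weight. The design matrix and within-study variances below are illustrative placeholders.

```python
import numpy as np

# Fixed-effect meta-regression y_i ~ N(x_i' beta, v_i), studies independent.
X = np.array([[1, 0.0], [1, 0.5], [1, 1.0], [1, 1.5]])  # intercept + covariate
v = np.array([0.04, 0.09, 0.05, 0.16])                   # within-study variances

# Per-study Fisher information blocks sum to the total information.
I_i = [np.outer(X[i], X[i]) / v[i] for i in range(len(v))]
I_total = sum(I_i)
V = np.linalg.inv(I_total)  # variance matrix of the parameter estimates

# Study i's contribution to Var(beta_hat) is V @ I_i @ V; its share of the
# variance of a given parameter is that study's percentage weight.
for i, Ii in enumerate(I_i):
    contrib = V @ Ii @ V
    pct = 100 * np.diag(contrib) / np.diag(V)
    print(f"study {i}: percentage weight per coefficient = {np.round(pct, 1)}")
```

Because the study-specific contributions V @ I_i @ V sum back to V, the percentage weights for each coefficient sum to 100, generalizing the familiar inverse-variance weights of a single-parameter meta-analysis.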


2020 ◽  
Author(s):  
Hui Tian ◽  
Andrew Yim ◽  
David P. Newton

We show that quantile regression is better than ordinary least squares (OLS) regression in forecasting profitability for a range of profitability measures, following the conventional setup of the accounting literature, including the mean absolute forecast error (MAFE) evaluation criterion. Moreover, we perform both a simulated-data and an archival-data analysis to examine how the forecasting performance of quantile regression relative to OLS changes with the shape of the profitability distribution. Considering the MAFE and mean squared forecast error (MSFE) criteria together, we find that quantile regression is more accurate than OLS when the profitability measure to be forecast has a heavier-tailed distribution. In addition, the asymmetry of the profitability distribution has either a U-shaped or an inverted-U-shaped effect on the forecasting accuracy of quantile regression. An application of the distributional shape analysis framework to cash flow forecasting demonstrates the usefulness of the framework beyond profitability forecasting, providing additional empirical evidence for the positive effect of tail-heaviness and supporting the notion of an inverted-U-shaped effect of asymmetry. This paper was accepted by Shiva Rajgopal, accounting.
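
The horse race can be sketched on synthetic heavy-tailed data (illustrative, not the paper's archival sample), comparing OLS with median (0.5-quantile) regression via statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Forecast next-period profitability y from its lag x, with heavy-tailed
# (Student-t) noise standing in for a heavy-tailed profitability distribution.
n = 2000
x = rng.standard_normal(n)
y = 0.1 + 0.6 * x + 0.2 * rng.standard_t(df=3, size=n)
df = pd.DataFrame({"y": y, "x": x})
train, test = df.iloc[:1500], df.iloc[1500:]

ols = smf.ols("y ~ x", train).fit()
qr = smf.quantreg("y ~ x", train).fit(q=0.5)  # median regression

# Out-of-sample comparison on both criteria used in the paper
for name, model in [("OLS", ols), ("QR(0.5)", qr)]:
    err = test["y"] - model.predict(test)
    print(f"{name}  MAFE: {err.abs().mean():.4f}  MSFE: {(err**2).mean():.4f}")
```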


2021 ◽  
pp. 107699862199436
Author(s):  
Yue Liu ◽  
Hongyun Liu

The prevalence and serious consequences of noneffortful responses from unmotivated examinees are well known in educational measurement. In this study, we propose applying an iterative purification process, based on a response time residual method with fixed item parameter estimates, to detect noneffortful responses. The proposed method is compared with the traditional residual method and the noniterative method with fixed item parameters in two simulation studies, in terms of noneffort detection accuracy and parameter recovery. The results show that when the severity of noneffort is high, the proposed method leads to a much higher true positive rate with only a small increase in the false discovery rate. In addition, parameter estimation is significantly improved by the strategies of fixing item parameters and iterative cleansing. These results suggest that the proposed method is a potential solution for reducing the impact of data contamination due to severely low test-taking effort and for obtaining more accurate parameter estimates. An empirical study is also conducted to show the differences in detection rates and parameter estimates among the approaches.
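
A conceptual sketch of the iterative purification loop follows; the response-time model and flagging threshold here are simplified placeholders, not the authors' exact specification.

```python
import numpy as np

def fit_rt_model(log_rt, mask):
    # Stand-in for fitting a response-time model on presumed-effortful
    # responses only; here simply per-item mean/sd of log response times.
    kept = np.where(mask, log_rt, np.nan)
    return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)

def purify(log_rt, z_cut=-2.0, max_iter=20):
    """Iteratively flag noneffortful responses via RT residuals.

    log_rt: (examinees x items) log response times. Returns a boolean
    mask of responses classified as effortful.
    """
    effortful = np.ones_like(log_rt, dtype=bool)
    for _ in range(max_iter):
        mu, sd = fit_rt_model(log_rt, effortful)  # parameters re-fixed on
        resid = (log_rt - mu) / sd                # the cleansed data
        flagged = resid < z_cut                   # unusually fast -> noneffort
        new_effortful = ~flagged
        if np.array_equal(new_effortful, effortful):
            break                                 # converged: flags stable
        effortful = new_effortful
    return effortful
```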


Author(s):  
David Izydorczyk ◽  
Arndt Bröder

Exemplar models are often used in research on multiple-cue judgments to describe the underlying process of participants' responses. In these experiments, participants are repeatedly presented with the same exemplars (e.g., poisonous bugs) and instructed to memorize these exemplars and their corresponding criterion values (e.g., the toxicity of a bug). We propose that there are two possible outcomes when participants judge one of the already learned exemplars in some later block of the experiment. Either they have memorized the exemplar and its criterion value and are thus able to recall the exact value, or they have not learned the exemplar and thus have to judge its criterion value as if it were a new stimulus. We argue that, psychologically, the judgments of participants in a multiple-cue judgment experiment are a mixture of these two qualitatively distinct cognitive processes: judgment and recall. However, the cognitive modeling procedure usually applied makes no distinction between these processes or the data generated by them. We investigated the potential effects of disregarding the distinction between these two processes on the parameter recovery and model fit of one exemplar model. We present results of a simulation, as well as a reanalysis of five experimental data sets, showing that the current combination of experimental design and modeling procedure can bias parameter estimates, impair their validity, and negatively affect the fit and predictive performance of the model. We also present a latent-mixture extension of the original model as a possible solution to these issues.
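
The two processes can be sketched as follows. The similarity kernel is a standard exemplar-model (GCM-style) rule, and p_recall is the mixture weight a latent-mixture extension would estimate; all details are illustrative rather than the authors' exact model.

```python
import numpy as np

def similarity(probe, exemplars, s=1.0):
    # Exponential similarity over city-block cue distance
    d = np.abs(exemplars - probe).sum(axis=1)
    return np.exp(-s * d)

def exemplar_judgment(probe, exemplars, criteria, s=1.0):
    # Judgment process: similarity-weighted average of stored criterion values
    w = similarity(probe, exemplars, s)
    return (w * criteria).sum() / w.sum()

def mixture_response(probe_idx, exemplars, criteria, p_recall, rng, s=1.0):
    # Recall process with probability p_recall: the learned value is
    # reproduced exactly; otherwise the probe is judged like a new stimulus.
    if rng.random() < p_recall:
        return criteria[probe_idx]
    return exemplar_judgment(exemplars[probe_idx], exemplars, criteria, s)
```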


2018 ◽  
Vol 13 (12) ◽  
pp. 180
Author(s):  
Murad Harasheh ◽  
Alessandro Capocchi ◽  
Andrea Amaduzzi

Increasingly, innovation is seen as a leverage tool with which to create business and social value and thereby place those who develop and use it at a competitive advantage. Contemporary research suggests that the determinants of firms' innovation activity are numerous. In this paper, we consider the financial and governance characteristics that might influence the innovation activity of a sample of 700 family firms in Italy. Our study covers a 10-year period, from 2007 to 2016, using panel analysis models alongside robustness tests for lagged effects and a probability regression, as well as diagnostic statistics to ensure the use of an appropriate model. The results show that the presence of institutional investors, as a proxy for governance, has a persistent positive relationship with patent value, as a proxy for innovation, but not with the likelihood of being innovative. Moreover, financial indicators such as net working capital; earnings before interest, taxes, depreciation, and amortization (EBITDA); debt; and equity explain innovation activity better than other indicators in both the panel and probability regressions. We also find very little significant difference between the sectors and regions featured in the study, suggesting that the relationship among them is quasi-systematic. The paper concludes by discussing the policy implications of our findings and suggesting directions for further research.
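
A sketch of this kind of firm fixed-effects panel specification, using the linearmodels package on placeholder data, is shown below; the variable names and generated values are illustrative assumptions, not the authors' dataset or exact model.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(3)

# Placeholder firm-year panel, 2007-2016 (names and values are assumptions)
idx = pd.MultiIndex.from_product([range(50), range(2007, 2017)],
                                 names=["firm", "year"])
df = pd.DataFrame({
    "patent_value": rng.gamma(2.0, 1.0, size=len(idx)),    # innovation proxy
    "institutional": rng.integers(0, 2, size=len(idx)),    # governance proxy
    "nwc": rng.standard_normal(len(idx)),
    "ebitda": rng.standard_normal(len(idx)),
    "debt": rng.standard_normal(len(idx)),
}, index=idx)

# Firm fixed effects with firm-clustered standard errors
mod = PanelOLS.from_formula(
    "patent_value ~ 1 + institutional + nwc + ebitda + debt + EntityEffects",
    data=df)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```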


2019 ◽  
Author(s):  
Rachel N. Denison ◽  
Jacob A. Parker ◽  
Marisa Carrasco

Pupil size is an easily accessible, noninvasive online indicator of various perceptual and cognitive processes. Pupil measurements have the potential to reveal continuous processing dynamics throughout an experimental trial, including anticipatory responses. However, the relatively sluggish (∼2 s) response dynamics of pupil dilation make it challenging to connect changes in pupil size to events occurring close together in time. Researchers have used models to link changes in pupil size to specific trial events, but such methods have not been systematically evaluated. Here we developed and evaluated a general linear model (GLM) pipeline that estimates pupillary responses to multiple rapid events within an experimental trial. We evaluated the modeling approach using a sample dataset in which multiple sequential stimuli were presented within 2-s trials. We found: (1) Model fits improved when the pupil impulse response function (puRF) was fit for each observer. PuRFs varied substantially across individuals but were consistent for each individual. (2) Model fits also improved when pupil responses were not assumed to occur simultaneously with their associated trial events, but could have non-zero latencies. For example, pupil responses could anticipate predictable trial events. (3) Parameter recovery confirmed the validity of the fitting procedures, and we quantified the reliability of the parameter estimates for our sample dataset. (4) A cognitive task manipulation modulated pupil response amplitude. We provide our pupil analysis pipeline as open-source software (Pupil Response Estimation Toolbox: PRET) to facilitate the estimation of pupil responses and the evaluation of the estimates in other datasets.
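
The core of such a pipeline can be sketched as follows: build one regressor per trial event by convolving an impulse at the event onset with a pupil impulse response function, then regress the measured pupil trace on those regressors. The Erlang-shaped puRF follows Hoeks and Levelt (1993); the sampling rate, event onsets, and trial layout below are assumptions for the sketch, not PRET's actual defaults.

```python
import numpy as np

fs = 60.0                         # samples per second (assumed)
t = np.arange(0, 4, 1 / fs)
n, t_max = 10.1, 0.93             # puRF shape and time-to-peak in s
purf = t ** n * np.exp(-n * t / t_max)
purf /= purf.max()                # normalize so betas are amplitudes

trial_len = int(2 * fs)           # 2-s trial, as in the sample dataset
events = {"stim1": 0.2, "stim2": 0.7, "cue": 1.2}  # onsets in s (assumed)

# One regressor per event: impulse at onset convolved with the puRF
X = np.zeros((trial_len, len(events)))
for j, onset in enumerate(events.values()):
    impulse = np.zeros(trial_len)
    impulse[int(onset * fs)] = 1.0
    X[:, j] = np.convolve(impulse, purf)[:trial_len]

# With a measured pupil trace y of shape (trial_len,), the event
# amplitudes follow from least squares:
# betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```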


2021 ◽  
Vol 65 (11) ◽  
pp. 1129-1135
Author(s):  
M. V. Popov ◽  
T. V. Smirnova

We have analyzed two-dimensional correlation functions of the dynamic spectra of 11 pulsars using archival data from the Radioastron project. The time sections of these functions were approximated by exponential functions with a power index $\alpha$. It is shown that this approximation describes the shape of the correlation function much better than a Gaussian. The temporal structure function $D(\Delta t)$ for small values of the delay $\Delta t$ is a power law with index $\alpha$. The power-law index $n$ of the spectrum of spatial inhomogeneities of the interstellar plasma is related to the index of the structure function as $n = \alpha + 2$. We have determined the characteristic scintillation time and the index $n$ in the direction of 11 pulsars. In the direction of three pulsars (B0329+54, B0823+26, and B1929+10), the index of the spectrum of spatial inhomogeneities of the interstellar plasma turned out to be very close to the value for the Kolmogorov spectrum ($n = 3.67$). For the other pulsars, it ranges from 3.18 to 3.86. It is shown that the measured scintillation parameters are significantly influenced by the duration of the observation session, expressed as its ratio to the characteristic scintillation time. If this ratio is less than 10, the parameter estimates may be biased: the values of $\alpha$ and the characteristic scintillation time $t_{\text{scint}}$ may decrease.
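
A minimal sketch of estimating $D(\Delta t)$ and the index $\alpha$ from an intensity time series follows; the toy signal below stands in for the Radioastron dynamic spectra, and the fitting range is an assumption.

```python
import numpy as np

def structure_function(x, max_lag):
    # D(dt) = <(x(t + dt) - x(t))^2> at each lag
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(4096))  # toy signal with power-law D
D = structure_function(x, max_lag=100)
lags = np.arange(1, 101)

# alpha from a log-log fit over small lags; the inhomogeneity spectrum
# index then follows as n = alpha + 2, as in the paper.
alpha = np.polyfit(np.log(lags[:20]), np.log(D[:20]), 1)[0]
print("alpha ~", round(alpha, 2), "->  n ~", round(alpha + 2, 2))
```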


2019 ◽  
Vol 80 (1) ◽  
pp. 41-66 ◽  
Author(s):  
Dexin Shi ◽  
Taehun Lee ◽  
Amanda J. Fairchild ◽  
Alberto Maydeu-Olivares

This study compares two missing data procedures in the context of ordinal factor analysis models: pairwise deletion (PD; the default setting in Mplus) and multiple imputation (MI). We examine which procedure yields parameter estimates and model fit indices closer to those obtained with complete data. The performance of PD and MI is compared under a wide range of conditions, including the number of response categories, sample size, percentage of missingness, and degree of model misfit. Results indicate that both PD and MI yield parameter estimates similar to those from analysis of complete data when the data are missing completely at random (MCAR). When the data are missing at random (MAR), PD parameter estimates are severely biased across the parameter combinations in the study. When the percentage of missingness is less than 50%, MI yields parameter estimates similar to results from complete data. However, the fit indices (i.e., χ², RMSEA, and WRMR) suggest a worse fit than is observed with complete data. We recommend that applied researchers use MI when fitting ordinal factor models with missing data. We further recommend interpreting model fit based on the TLI and CFI incremental fit indices.
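
The MI side of such a comparison can be sketched with chained-equations imputation and Rubin's-rules pooling. The factor-analysis step is abstracted into a placeholder, since the study fits those models in Mplus; the data, missingness rate, and number of imputations below are assumptions for the sketch.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))
X[rng.random(X.shape) < 0.2] = np.nan   # 20% missing (illustrative)

m = 20                                  # number of imputed datasets
estimates = []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    X_imp = imp.fit_transform(X)
    # estimates.append(fit_model(X_imp))  # placeholder: ordinal FA estimates
    estimates.append(X_imp.mean(axis=0))  # stand-in statistic for the sketch

# Rubin's rules, point estimate: average across imputed datasets
# (variance pooling across imputations omitted for brevity)
pooled = np.mean(estimates, axis=0)
```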

