Incorporating Prior Knowledge into the Analysis of Conjoint Studies

1995 ◽  
Vol 32 (2) ◽  
pp. 152-162 ◽  
Author(s):  
Greg M. Allenby ◽  
Neeraj Arora ◽  
James L. Ginter

The authors use conjoint analysis to provide interval-level estimates of part-worths, allowing tradeoffs among attribute levels to be examined. Researchers often possess prior information about the part-worths, such as order and range restrictions on product attribute levels. It is known, for example, that consumers would rather pay less for a specific product given that all other product attribute levels are unchanged. The authors present a Bayesian approach to incorporating prior ordinal information about these part-worths into the analysis of conjoint studies. Their method results in parameter estimates with greater face validity and predictive performance than estimates that do not utilize prior information or those that use traditional methods such as LINMAP. Unlike existing methods, the authors’ methods apply to both rating- and choice-based conjoint studies.
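In the simplest case, an ordinal restriction like "lower price is preferred" can be imposed by discarding posterior draws that violate it. The sketch below illustrates this with rejection sampling; the part-worth names, means, and variances are invented for illustration and are not from the paper, whose actual method is a full Bayesian analysis rather than simple rejection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior for three price-level part-worths (all numbers
# are illustrative assumptions, not the paper's estimates).
mean = np.array([1.2, 0.6, 0.1])   # part-worths: low, medium, high price
cov = np.diag([0.2, 0.2, 0.2])

# Enforce the ordinal prior "lower price is preferred" by keeping only
# draws with w_low > w_medium > w_high.
draws = rng.multivariate_normal(mean, cov, size=20000)
ordered = draws[(draws[:, 0] > draws[:, 1]) & (draws[:, 1] > draws[:, 2])]

constrained_estimate = ordered.mean(axis=0)
print(constrained_estimate)  # satisfies the order restriction by construction
```

Because every retained draw satisfies the constraint, the posterior mean of the retained draws does too, which is the face-validity property the abstract highlights.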

Author(s):  
A. TETERUKOVSKIY

The problem of automatic detection of tracks in aerial photos is considered. We adopt a Bayesian approach and base our inference on a priori knowledge of the structure of tracks. The probability of a pixel belonging to a track depends on how the pixel's gray level differs from the gray levels of pixels in its neighborhood and on additional prior information. Several suggestions are made on how to formalize the prior knowledge about the shape of the tracks. The Gibbs sampler is used to construct the most probable configuration of tracks in the area. The method is applied to aerial photos with a cell size of 1 sq. m. Even for the detection of trails of width comparable with or smaller than the cell size, positive results can be achieved.
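As a rough illustration of the approach, the sketch below runs one Gibbs sweep over binary track/background labels, combining a Gaussian gray-level likelihood with an Ising-style neighborhood prior. The toy image, parameter values, and the simple smoothness prior are assumptions for illustration, not the paper's actual shape priors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy image: a dark vertical "trail" (column 10) on a brighter background.
img = rng.normal(0.8, 0.1, size=(20, 20))
img[:, 10] = rng.normal(0.3, 0.1, size=20)

mu = {0: 0.8, 1: 0.3}   # assumed gray-level means: background / track
sigma = 0.1
beta = 1.5              # strength of the neighborhood (smoothness) prior

labels = (img < 0.55).astype(int)   # crude initialization by thresholding

def neighbors(x, i, j):
    h, w = x.shape
    return [x[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            if 0 <= a < h and 0 <= b < w]

# One Gibbs sweep: resample each label from its full conditional, which
# combines the gray-level likelihood and the neighborhood prior.
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        nb = neighbors(labels, i, j)
        logp = []
        for k in (0, 1):
            loglik = -(img[i, j] - mu[k]) ** 2 / (2 * sigma ** 2)
            logprior = beta * sum(int(n == k) for n in nb)
            logp.append(loglik + logprior)
        p_track = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))
        labels[i, j] = int(rng.random() < p_track)

print(labels[:, 10].mean(), labels[:, 0].mean())  # trail column vs background
```

Repeating such sweeps (or annealing toward the mode) yields the most probable track configuration; the paper's richer shape priors would replace the simple neighbor-count term.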


1970 ◽  
Vol 9 ◽  
pp. 41-48 ◽  
Author(s):  
R. P. Khatiwada ◽  
A. B. Sthapit

The conventional method of making statistical inferences about food quality measures is based entirely on experimental data. It does not incorporate prior knowledge or historical data on the parameter of interest and is therefore not well suited to food quality control problems. We propose a Bayesian approach to infer the conformance of the data for a quality run. This approach integrates facts about the parameter of interest from historical data or expert knowledge; the prior information is used along with the experimental data to make meaningful deductions. In this study, we used a Bayesian approach to infer the weight of pouched ghee. Data were collected by selecting random samples from a dairy industry. The prior information about the average weight and the process standard deviation was taken from prior knowledge of process specifications and standards. A Normal-Normal model is used to combine the prior and experimental data in a Bayesian framework. We used the user-friendly computer programs 'First Bayes' and 'WinBUGS' to obtain the posterior distribution, estimate the process precision and credible intervals, and derive the predictive distribution. Results are presented in comparison with conventional methods. Fit of the model is shown using kernel density plots and a triplot of the distributions.
Key words: credible interval; kernel density; posterior distribution; predictive distribution; triplot
DOI: 10.3126/njst.v9i0.3163
Nepal Journal of Science and Technology 9 (2008) 41-48
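The Normal-Normal update used here has a closed form: the posterior precision is the sum of the prior and data precisions, and the posterior mean is the precision-weighted average of the prior mean and the sample mean. A minimal sketch, with illustrative numbers rather than the study's data:

```python
import numpy as np

# Normal-Normal model for the mean weight of pouched ghee.
# All numbers below are illustrative assumptions, not the study's data.
prior_mean, prior_sd = 500.0, 5.0   # from process specification (assumed)
sigma = 10.0                        # known process s.d. (assumed)
data = np.array([498.0, 502.5, 497.0, 501.0, 499.5])  # sample weights (g)

n = len(data)
prior_prec = 1.0 / prior_sd ** 2    # precision = 1 / variance
data_prec = n / sigma ** 2

post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data.mean()) / post_prec
post_sd = post_prec ** -0.5

# 95% credible interval for the mean weight
lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(round(post_mean, 2), round(lo, 2), round(hi, 2))
```

Note that the posterior standard deviation is always smaller than the prior's, reflecting the information gained from the sample; this is the same computation packages like First Bayes or WinBUGS perform for this conjugate model.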


2020 ◽  
Author(s):  
Laetitia Zmuda ◽  
Charlotte Baey ◽  
Paolo Mairano ◽  
Anahita Basirat

It is well known that individuals can identify novel words in a stream of an artificial language using statistical dependencies. While the underlying computations are thought to be similar from one stream to another (e.g., transitional probabilities between syllables), performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge regarding co-occurrences of elements in speech, which intervenes during verbal statistical learning. The focus of previous studies was on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e., the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model in which correlations were introduced. We discuss the implications of our findings regarding the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.


2014 ◽  
Vol 55 ◽  
Author(s):  
Jonas Mockus ◽  
Irina Vinogradova

Many real applications use uncertain data. This includes expert decisions based on subjective opinions. The uncertainty can be evaluated by applying fuzzy set theory or the methods of mathematical statistics. In this paper, a Bayesian approach is proposed in which different distribution functions define the expert opinion and some prior information. The results are illustrated by evaluating the quality of distance education courses.


1986 ◽  
Vol 16 (5) ◽  
pp. 1116-1118 ◽  
Author(s):  
Edwin J. Green ◽  
William E. Strawderman

A method is presented for determining the appropriate sample size needed to produce an estimate with a stated allowable percent error when the sample data are to be combined with prior information. Application of the method to the case where the objective is to estimate volume per acre and prior knowledge is represented by a yield equation demonstrates that the method can reduce the amount of sample information that would be required if the yield equation were ignored.
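The general mechanism, prior precision substituting for sample information, can be sketched under a normal model with known variance: the prior contributes precision 1/τ², so fewer observations are needed to reach a target posterior precision. The formula and all numbers below are illustrative assumptions, not the paper's derivation:

```python
import math

# Target: estimate a mean with allowable error E (half-width of a 95%
# interval). All numbers are illustrative, not from the paper.
sigma = 40.0    # assumed population s.d. (e.g., volume per acre)
E = 10.0        # allowable error
z = 1.96

# Without prior information: classical sample-size formula.
n_classical = (z * sigma / E) ** 2

# With a prior (e.g., from a yield equation) of s.d. tau, the prior
# contributes precision 1/tau^2, so fewer observations are needed to
# reach the target posterior variance (E/z)^2.
tau = 15.0
target_var = (E / z) ** 2
n_bayes = sigma ** 2 * (1.0 / target_var - 1.0 / tau ** 2)

print(math.ceil(n_classical), math.ceil(max(n_bayes, 0)))
```

A sufficiently informative prior (small τ) can reduce the required sample size to zero; a diffuse prior (large τ) recovers the classical formula.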


2021 ◽  
Vol 29 (1) ◽  
Author(s):  
Hezlin Aryani Abd Rahman ◽  
Yap Bee Wah ◽  
Ong Seng Huat

Logistic regression is often used for the classification of a binary categorical dependent variable using various types of covariates (continuous or categorical). Imbalanced data, which occur when the number of cases in one category of the binary dependent variable is much smaller than in the other, lead to biased parameter estimates and poor classification performance of the logistic regression model. This simulation study investigates the effect of imbalanced data, measured by the imbalance ratio (IR), on the parameter estimate of a binary logistic regression with a categorical covariate. Datasets were simulated with controlled percentages of IR, from 1% to 50%, and for various sample sizes. The simulated datasets were then modeled using binary logistic regression, and the bias in the estimates was measured using the mean square error (MSE). The simulation results provided evidence that the effect of the imbalance ratio on the parameter estimate of the covariate decreases as the sample size increases. The bias of the estimates depended on sample size: for sample sizes of 100, 500, 1000–2000, and 2500–3500, the estimates were biased for IR below 30%, 10%, 5%, and 2%, respectively. Results also showed that parameter estimates were all biased at IR 1% for all sample sizes. An application using a real dataset supported the simulation results.
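A minimal sketch of such a simulation, assuming a binary covariate (so the maximum-likelihood log odds ratio can be read directly off the 2x2 table) and a 0.5 continuity correction to avoid empty cells with rare events; the parameter values and replication count are illustrative, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a binary outcome with a binary covariate and a controlled
# imbalance ratio (set via the intercept), then estimate the covariate's
# log odds ratio from the 2x2 table. Illustrative sketch only.
beta0, beta1 = -2.5, 1.0   # intercept controls imbalance; true effect 1.0

def simulate_mse(n, reps=300):
    errs = []
    for _ in range(reps):
        x = rng.integers(0, 2, size=n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.random(n) < p
        # 2x2 table with a 0.5 continuity correction to avoid empty cells
        a = ((x == 1) & y).sum() + 0.5
        b = ((x == 1) & ~y).sum() + 0.5
        c = ((x == 0) & y).sum() + 0.5
        d = ((x == 0) & ~y).sum() + 0.5
        errs.append((np.log(a * d / (b * c)) - beta1) ** 2)
    return np.mean(errs)

print(simulate_mse(100), simulate_mse(2000))  # MSE shrinks with sample size
```

Varying `beta0` changes the event rate (and hence the IR), which reproduces the qualitative pattern the abstract reports: the smaller the sample and the rarer the minority class, the larger the MSE of the estimate.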


2021 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Esther Ulitzsch

The bivariate Stable Trait, AutoRegressive Trait, and State (STARTS) model provides a general approach for estimating reciprocal effects between constructs over time. However, previous research has shown that this model is difficult to estimate using the maximum likelihood (ML) method (e.g., nonconvergence). In this article, we introduce a Bayesian approach for estimating the bivariate STARTS model and implement it in the software Stan. We discuss issues of model parameterization and show how appropriate prior distributions for model parameters can be selected. Specifically, we propose the four-parameter beta distribution as a flexible prior distribution for the autoregressive and cross-lagged effects. Using a simulation study, we show that the proposed Bayesian approach provides more accurate estimates than ML estimation in challenging data constellations. An example is presented to illustrate how the Bayesian approach can be used to stabilize the parameter estimates of the bivariate STARTS model.
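A four-parameter beta density, two shape parameters plus a location and range, can be written down directly by rescaling the standard beta to an arbitrary interval, e.g. (-1, 1) for an autoregressive effect. The sketch below is an illustration of that density; the shape values are assumptions, not the priors recommended in the article:

```python
from math import gamma

def beta4_pdf(x, p, q, lower, upper):
    """Density of a beta distribution rescaled to the interval [lower, upper].

    A four-parameter beta: shapes p and q plus location and range, which
    lets a prior for an autoregressive or cross-lagged effect live on a
    bounded interval such as (-1, 1). Illustrative sketch only.
    """
    if not (lower < x < upper):
        return 0.0
    z = (x - lower) / (upper - lower)
    norm = gamma(p + q) / (gamma(p) * gamma(q) * (upper - lower))
    return norm * z ** (p - 1) * (1.0 - z) ** (q - 1)

# A prior on (-1, 1) with more mass above zero (p > q skews it upward).
print(beta4_pdf(0.5, 3, 2, -1.0, 1.0))
print(beta4_pdf(1.5, 3, 2, -1.0, 1.0))  # outside the support: density 0
```

The bounded support is the point: unlike a normal prior, it rules out inadmissible values of autoregressive effects by construction while the shapes still allow an informative center and spread.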


Author(s):  
David Izydorczyk ◽  
Arndt Bröder

Exemplar models are often used in research on multiple-cue judgments to describe the process underlying participants’ responses. In these experiments, participants are repeatedly presented with the same exemplars (e.g., poisonous bugs) and instructed to memorize these exemplars and their corresponding criterion values (e.g., the toxicity of a bug). We propose that there are two possible outcomes when participants judge one of the already learned exemplars in some later block of the experiment. They either have memorized the exemplar and its criterion value and are thus able to recall the exact value, or they have not learned the exemplar and thus have to judge its criterion value as if it were a new stimulus. We argue that, psychologically, the judgments of participants in a multiple-cue judgment experiment are a mixture of these two qualitatively distinct cognitive processes: judgment and recall. However, the cognitive modeling procedure usually applied does not distinguish between these processes and the data generated by them. We investigated the potential effects of disregarding this distinction on the parameter recovery and the model fit of one exemplar model. We present results of a simulation as well as the reanalysis of five experimental data sets, showing that the current combination of experimental design and modeling procedure can bias parameter estimates, impair their validity, and negatively affect the fit and predictive performance of the model. We also present a latent-mixture extension of the original model as a possible solution to these issues.
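The proposed mixture can be illustrated generatively: each response to a learned exemplar comes either from near-exact recall or from a noisier judgment process. The sketch below, with invented noise levels and mixing weight, shows how pooling the two processes conflates two very different error distributions; it is a toy illustration, not the authors' latent-mixture model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy generative mixture (all values are illustrative assumptions).
criterion = rng.uniform(0, 100, size=200)       # true criterion values
judged = criterion + rng.normal(0, 15, 200)     # judgment process: noisy
recalled = criterion + rng.normal(0, 2, 200)    # recall process: near-exact

phi = 0.6                                       # P(exemplar was memorized)
is_recall = rng.random(200) < phi
response = np.where(is_recall, recalled, judged)

# Pooled residuals mix two error distributions; a single-process model
# fit to `response` conflates these two sources of variance.
resid = response - criterion
print(resid[is_recall].std(), resid[~is_recall].std())
```

A latent-mixture model adds the indicator (here `is_recall`) as an inferred latent variable, letting each response be attributed to the process that plausibly generated it.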


2018 ◽  
Vol 51 (4) ◽  
pp. 1151-1161 ◽  
Author(s):  
Andreas Haahr Larsen ◽  
Lise Arleth ◽  
Steen Hansen

The structure of macromolecules can be studied by small-angle scattering (SAS), but as this is an ill-posed problem, prior knowledge about the sample must be included in the analysis. Regularization methods are used for this purpose, as already implemented in indirect Fourier transformation and bead-modeling-based analysis of SAS data, but not yet in the analysis of SAS data with analytical form factors. To fill this gap, a Bayesian regularization method was implemented, where the prior information was quantified as probability distributions for the model parameters and included via a functional S. The quantity Q = χ2 + αS was then minimized and the value of the regularization parameter α determined by probability maximization. The method was tested on small-angle X-ray scattering data from a sample of nanodiscs and a sample of micelles. The parameters refined with the Bayesian regularization method were closer to the prior values than those from conventional χ2 minimization. Moreover, the errors on the refined parameters were generally smaller, owing to the inclusion of prior information. The Bayesian method stabilized the refined values of the fitted model upon addition of noise and can thus be used to retrieve information from data with a low signal-to-noise ratio without risk of overfitting. Finally, the method provides a measure of the information content in the data, Ng, which represents the effective number of retrievable parameters, taking into account the imposed prior knowledge as well as the noise level in the data.
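For a model that is linear in its parameter, minimizing Q = χ² + αS with a Gaussian prior penalty S has a closed-form solution, which makes the pull of the refined parameter toward the prior explicit. The toy model and all values below are illustrative, not the paper's form-factor models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a one-parameter model y = p * x to noisy data by minimizing
# Q = chi2 + alpha * S, with S a Gaussian prior penalty on p.
# (Toy linear model, not the paper's form factors; values are assumed.)
x = np.linspace(0.1, 1.0, 30)
p_true, sigma = 2.0, 0.2
y = p_true * x + rng.normal(0.0, sigma, size=x.size)

p0, sigma_p = 2.5, 0.3   # prior value and prior width (assumed)
alpha = 1.0              # regularization weight

# Q is quadratic in p, so the minimizer is available in closed form:
# d/dp [ sum((y - p x)^2 / sigma^2) + alpha ((p - p0) / sigma_p)^2 ] = 0
num = (x * y).sum() / sigma ** 2 + alpha * p0 / sigma_p ** 2
den = (x * x).sum() / sigma ** 2 + alpha / sigma_p ** 2
p_reg = num / den

p_ml = (x * y).sum() / (x * x).sum()   # alpha = 0: plain chi-square fit
print(p_ml, p_reg)  # p_reg is pulled from p_ml toward the prior p0
```

Since `p_reg` is a precision-weighted average of the χ² minimizer and the prior value, it always lies between them; in the paper, α itself is then chosen by probability maximization rather than fixed by hand as here.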


2019 ◽  
Author(s):  
Meghana Srivatsav ◽  
Timothy John Luke ◽  
Pär Anders Granhag ◽  
Aldert Vrij

The aim of this study was to understand whether guilty suspects’ perceptions of the prior information or evidence held by the interviewer could be influenced through the content of the investigative questions. To test this idea, we explored three question-phrasing factors that we labeled Topic Discussion (whether a specific crime-related topic was discussed), Specificity (the level of crime-related detail included in the questions) and Stressor (emphasis on the importance of the specific crime-related detail in the questions). The three factors were chosen based on relevance theory, a psycholinguistic theory that explores how people draw inferences from communicated content. Participants (N = 370) assumed the role of the suspect and read a crime narrative and an interview transcript based on the suspect’s activities. After reading the narrative and the transcript, participants responded to scales that measured their perception of the interviewer’s prior knowledge (PIK) regarding the suspect’s role in the crime, based on the questions posed by the interviewer in the transcript. Of the three factors tested, we found that questioning about a specific crime-related topic (Topic Discussion) increased PIK. This study is the first to explore the underlying mechanisms of how suspects draw inferences regarding the interviewer’s prior knowledge from the content of the investigative questions, adopting concepts from psycholinguistic theory.

