The variable precision method for elicitation of probability weighting functions

2020 ◽  
Vol 128 ◽  
pp. 113166
Author(s):  
Junyi Chai ◽  
Eric W.T. Ngai
2021 ◽  
Author(s):  
Agnieszka Tymula ◽  
Yuri Imaizumi ◽  
Takashi Kawai ◽  
Jun Kunimatsu ◽  
Masayuki Matsumoto ◽  
...  

Research in behavioral economics and reinforcement learning has given rise to two influential theories describing human economic choice under uncertainty. The first, prospect theory, assumes that decision-makers use static mathematical functions, utility and probability weighting, to calculate the values of alternatives. The second, reinforcement learning theory, posits that dynamic mathematical functions update the values of alternatives based on experience through reward prediction error (RPE). To date, these theories have been examined in isolation without reference to one another. Therefore, it remains unclear whether RPE affects a decision-maker's utility and/or probability weighting functions, or whether these functions are indeed static as in prospect theory. Here, we propose a dynamic prospect theory model that combines prospect theory and RPE, and test this combined model using choice data on gambling behavior of captive macaques. We found that under standard prospect theory, monkeys, like humans, had a concave utility function. Unlike humans, monkeys exhibited a concave, rather than inverse-S shaped, probability weighting function. Our dynamic prospect theory model revealed that probability distortions, not the utility of rewards, solely and systematically varied with RPE: after a positive RPE, the estimated probability weighting functions became more concave, suggesting more optimistic belief about receiving rewards and over-weighted subjective probabilities at all probability levels. Thus, the probability perceptions in laboratory monkeys are not static even after extensive training, and are governed by a dynamic function well captured by the algorithmic feature of reinforcement learning. This novel evidence supports combining these two major theories to capture choice behavior under uncertainty.
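The combination described above can be illustrated with a minimal sketch. The specific functional forms and the update rule below are assumptions for illustration only (a power utility, a power weighting function whose exponent shifts with the reward prediction error), not the authors' fitted model:

```python
def utility(x, rho=0.7):
    # Concave utility u(x) = x^rho with rho < 1 (diminishing sensitivity).
    return x ** rho

def weight(p, gamma):
    # Power weighting w(p) = p^gamma; gamma < 1 makes w concave,
    # i.e. probabilities are over-weighted at all levels.
    return p ** gamma

def subjective_value(x, p, rho, gamma):
    # Prospect-theory value of a simple gamble (reward x with probability p).
    return weight(p, gamma) * utility(x, rho)

def update_gamma(gamma, rpe, eta=0.05, lo=0.2, hi=2.0):
    # Hypothetical dynamic rule: a positive reward prediction error
    # makes the weighting function more concave (smaller gamma),
    # a negative RPE less concave; eta is a learning rate.
    return min(max(gamma - eta * rpe, lo), hi)

# One trial: choose a gamble, observe the outcome, and let the
# RPE (outcome minus expected reward) shift gamma.
gamma = 0.9
p, x = 0.5, 1.0
outcome = 1.0                # reward delivered on this trial
rpe = outcome - p * x        # positive RPE
gamma = update_gamma(gamma, rpe)
```

After the positive RPE, `gamma` falls below its previous value, so `weight(p, gamma)` rises at every probability level, matching the qualitative pattern the abstract reports.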


2019 ◽  
Vol 7 (1) ◽  
pp. 110-139
Author(s):  
Mengxing Wei ◽  
Ali al-Nowaihi ◽  
Sanjit Dhami

We test a simple quantum decision model of the Ellsberg paradox. We find that the theoretical predictions of the model are in conformity with our experimental results. The predictions of our quantum model are not statistically significantly different from those of the source dependent model. The source dependent model requires the specification of probability weighting functions in order to fit the evidence. On the other hand, our quantum model makes no recourse to probability weighting functions. This suggests that much of what is normally attributed to probability weighting may actually be due to quantum probability. When we replace quantum probability by Kolmogorov probability in our model, then the Ellsberg paradox reemerges. Hence, we make essential use of quantum probability theory. All our development uses no more than standard linear algebra and real numbers, which are very familiar to economists. This makes our paper accessible to a wider audience than the quantum community. JEL Classification: D01, D81, D91
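The key formal difference between quantum and Kolmogorov probability can be sketched in a two-dimensional Hilbert space: when an event B is evaluated directly versus decomposed through an incompatible event A, an interference term appears and the law of total probability fails. The angles below are arbitrary choices for illustration, not parameters from the paper:

```python
import math

def inner(u, v):
    # Real inner product in a 2-D Hilbert space.
    return u[0] * v[0] + u[1] * v[1]

theta = math.pi / 8                              # A-basis angle (arbitrary)
psi = (1.0, 0.0)                                 # initial state |psi>
b = (math.cos(math.pi / 3), math.sin(math.pi / 3))  # direction of event B

a = (math.cos(theta), math.sin(theta))           # |a>
a_perp = (-math.sin(theta), math.cos(theta))     # |a_perp>

# Direct quantum probability of B.
p_direct = inner(b, psi) ** 2

# Decomposition through the A measurement, as the classical
# law of total probability would require.
p_total = (inner(a, psi) ** 2 * inner(b, a) ** 2
           + inner(a_perp, psi) ** 2 * inner(b, a_perp) ** 2)

# Nonzero: Kolmogorov additivity fails for incompatible events.
interference = p_direct - p_total
```

Replacing the projectors with commuting (compatible) events makes the interference term vanish, which is the formal counterpart of the paradox reemerging under Kolmogorov probability.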


2020 ◽  
Vol 89 (4) ◽  
pp. 471-501
Author(s):  
Andreas Glöckner ◽  
Baiba Renerte ◽  
Ulrich Schmidt

Abstract The majority consensus in the empirical literature is that probability weighting functions are typically inverse-S shaped, that is, people tend to overweight small and underweight large probabilities. A separate stream of literature has reported event-splitting effects (also called violations of coalescing) and shown that they can explain violations of expected utility. This leads to the questions whether (1) the observed shape of weighting functions is a mere consequence of the coalesced presentation and, more generally, whether (2) preference elicitation should rely on presenting lotteries in a canonical split form instead of the commonly used coalesced form. We analyze data from a binary choice experiment where all lottery pairs are presented in both split and coalesced forms. Our results show that the presentation in a split form leads to a better fit of expected utility theory and to probability weighting functions that are closer to linear. We thus provide some evidence that the extent of probability weighting is not an ingrained feature, but rather a result of processing difficulties.
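Why coalescing can matter at all is easy to see in a simple branch-weighting sketch: if each listed branch is weighted separately by a subadditive function, the split form of an identical lottery receives a higher value than the coalesced form. The power weighting and parameters below are illustrative assumptions, not the authors' fitted model:

```python
def w(p, gamma=0.6):
    # Nonlinear weighting w(p) = p^gamma, subadditive for gamma < 1:
    # w(p1) + w(p2) > w(p1 + p2).
    return p ** gamma

def branch_value(branches, gamma=0.6):
    # Value a lottery by weighting each listed branch separately,
    # a simple model in which presentation format matters.
    return sum(w(p, gamma) * x for p, x in branches)

# The same lottery (win 100 with probability 0.2) in two forms.
coalesced = [(0.2, 100.0)]
split = [(0.1, 100.0), (0.1, 100.0)]

v_coalesced = branch_value(coalesced)
v_split = branch_value(split)
# With gamma < 1 the split form is valued higher, an event-splitting effect.
```

Under a linear weighting (`gamma = 1`) the two forms coincide, which is consistent with the finding that split presentation yields weighting functions closer to linear and a better fit of expected utility.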


2014 ◽  
Vol 1 (1) ◽  
pp. 15-31 ◽  
Author(s):  
Sharmistha Bhattacharya Halder ◽  
Kalyani Debnath

The Bayesian decision theoretic rough set was introduced by the authors. In this paper, attribute reduction by means of the Bayesian decision theoretic rough set is studied. Many other methods exist for attribute reduction, such as the variable precision method, the probabilistic approach, the Bayesian method, and Pawlak's rough set method using Boolean functions. With the help of several examples, it is shown that the Bayesian decision theoretic rough set model gives better results than these other methods. Finally, an HIV/AIDS example is taken and attribute reduction is carried out with this new method and with the other methods; the new method again gives better results than the previously defined ones. With this method the authors obtain only the reduced attribute age, which is the most significant attribute, whereas the Pawlak model yields age-sex or age-living status as reduced attributes, and the variable precision method fails to work here. In this paper attribute reduction is performed with the help of a discernibility matrix after determining the positive, boundary, and negative regions. The model is a hybrid of the Bayesian rough set model and decision theory, and so this technique gives better results than either the Bayesian method or the decision theoretic rough set method alone.
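The discernibility-matrix step can be sketched on a toy decision table. The table below is invented for illustration (it is not the HIV/AIDS data): each entry of the matrix records the condition attributes on which two objects with different decisions disagree, and an attribute common to every entry by itself discerns all such pairs, i.e. forms a reduct:

```python
from itertools import combinations

# Toy decision table: each row is (age, sex, decision).
attrs = ["age", "sex"]
objects = [
    ("young", "M", "neg"),
    ("young", "F", "neg"),
    ("old",   "M", "pos"),
    ("old",   "F", "pos"),
]

def discernibility_matrix(rows):
    # For each pair of objects with different decisions, record the
    # set of condition attributes on which the two objects differ.
    matrix = {}
    for i, j in combinations(range(len(rows)), 2):
        if rows[i][-1] != rows[j][-1]:
            matrix[(i, j)] = {attrs[k] for k in range(len(attrs))
                              if rows[i][k] != rows[j][k]}
    return matrix

m = discernibility_matrix(objects)
# Attributes present in every entry: any one of them alone discerns
# all decision-relevant pairs, so here {"age"} is a reduct.
common = set.intersection(*m.values()) if m else set()
```

In this toy table `common` comes out as `{"age"}`, mirroring the abstract's result that age alone survives reduction; a full reduct computation would additionally minimize over the Boolean discernibility function.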

