small probabilities: Recently Published Documents

Total documents: 57 (five years: 13)
H-index: 11 (five years: 1)

2021, Vol. 2021, pp. 1-21
Author(s): Shixu Liu, Jianchao Zhu, Said M. Easa, Lidan Guo, Shuyu Wang, et al.

This paper analyzes the utility calculation principle of travelers from the perspective of mental accounting and proposes a travel choice behavior model that considers travel time and cost (MA-TC model). A questionnaire is then designed to analyze travel choices under different decision-making scenarios. Model parameters are estimated using nonlinear regression, and the utility calculation principles are developed under different hypothetical scenarios, yielding new expressions for the utility function under deterministic and risky conditions. For verification, the nonlinear correlation coefficient and hit rate are used to compare the proposed MA-TC model with two other models: (1) classical prospect theory with travel time and cost (PT-TC model) and (2) mental accounting based on the original hedonic editing criterion (MA-HE model). The results show that model parameters differ considerably between the deterministic and risky conditions. In the deterministic case, travelers are similarly sensitive to changes in gains and losses of travel time and cost, and the prediction accuracy of the MA-TC model is 3% lower than that of the PT-TC model and 6% higher than that of the MA-HE model. Under risky conditions, travelers are more sensitive to changes in losses than to changes in gains; they also tend to overestimate small probabilities and underestimate high probabilities, more strongly when losing than when gaining. In this case, the prediction accuracy of the MA-TC model is 2% higher than that of the PT-TC model and 6% higher than that of the MA-HE model.
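The over- and underweighting pattern described above is the signature of the inverse-S probability weighting underlying the classical PT-TC baseline. As a rough illustration only, the sketch below evaluates a risky travel option with the standard Tversky-Kahneman (1992) value and weighting functions in their simple separable form; the parameter values and the example outcomes are illustrative assumptions, not the estimates reported in the paper.

import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Tversky-Kahneman value function: concave for gains, convex and
    # steeper (loss-averse) for losses.
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def weight(p, gamma=0.61):
    # Inverse-S weighting: overweights small probabilities, underweights large ones.
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes, probs):
    # Subjective value of a risky option (e.g., minutes of travel time saved or lost).
    return float(np.sum(weight(probs) * value(outcomes)))

# Illustrative route choice: save 10 minutes with probability 0.1, lose 2 minutes otherwise.
print(prospect_value([10.0, -2.0], [0.1, 0.9]))

Under these illustrative parameters, the objective probability 0.1 receives a decision weight of roughly 0.19, which is the overweighting of small probabilities referred to above.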


Ethics, 2021, Vol. 132 (1), pp. 204-217
Author(s): Petra Kosonen

Entropy, 2021, Vol. 23 (5), pp. 603
Author(s): Arthur Prat-Carrabin, Florent Meyniel, Misha Tsodyks, Rava Azeredo da Silveira

When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and consequently a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a 'resource-rational' (or 'bounded-rational') picture of seemingly irrational human cognition.
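A minimal sketch of one way such a trade-off can play out (our own simplification, not the authors' model): if a precision cost is charged as a negative-entropy penalty, then minimizing KL(q || posterior) - lam * H(q) gives q proportional to posterior^(1/(1+lam)), i.e. the exact Bayesian posterior over the coin's bias is tempered at every update. Repeated tempering both flattens the belief (pulling a small inferred bias upward) and geometrically discounts older observations, loosely matching the overestimation of small probabilities and the limited memory described above.

import numpy as np

grid = np.linspace(0.001, 0.999, 999)   # discretized values of the coin's bias

def costly_update(belief, flip, lam=0.5):
    # Exact Bayesian update followed by tempering: the solution of
    #   min_q  KL(q || posterior) - lam * H(q)
    # is q proportional to posterior ** (1 / (1 + lam)).  lam = 0 recovers
    # exact Bayesian inference; larger lam makes sharper beliefs costlier.
    likelihood = grid if flip == 1 else 1.0 - grid
    posterior = belief * likelihood
    posterior /= posterior.sum()
    q = posterior ** (1.0 / (1.0 + lam))
    return q / q.sum()

rng = np.random.default_rng(0)
belief = np.ones_like(grid) / grid.size            # uniform prior over the bias
flips = (rng.random(200) < 0.1).astype(int)        # true bias 0.1: a small probability
for flip in flips:
    belief = costly_update(belief, flip)

print("posterior-mean estimate of the bias:", round(float(np.sum(grid * belief)), 3))
# Typically noticeably above the true value 0.1: the small probability is overestimated,
# and because each tempering step discounts older flips, the estimate keeps fluctuating.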


2021, Vol. 290 (1), pp. 313-330
Author(s): Vasileios E. Kontosakos, Keegan Mendonca, Athanasios A. Pantelous, Konstantin M. Zuev

Biometrika, 2021
Author(s): Haibing Zhao

Abstract Recently, post-selection inference on thousands of parameters has attracted considerable research interest. Specifically, Benjamini & Yekutieli (2005) considered constructing confidence intervals after selection. They proposed adjusting the confidence levels of marginal confidence intervals for the selected parameters to ensure control of the false coverage-statement rate. Although Benjamini-Yekutieli's confidence intervals are widely used, they are uniformly inflated. In this article, two methods are proposed to narrow Benjamini-Yekutieli's confidence intervals. The first method improves the confidence intervals by incorporating the selection event into the calculation. The second method further narrows the confidence intervals in settings where some parameters are selected with very small probabilities, a situation that leaves the target level for control of the false coverage-statement rate underutilized. A breast cancer dataset is analyzed to compare the methods.
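For context, the Benjamini-Yekutieli adjustment that the article improves on is simple to state: if R out of m parameters are selected, each selected parameter receives a marginal interval at level 1 - Rq/m rather than 1 - q, which controls the false coverage-statement rate at q for suitable selection rules. A minimal sketch with simulated normal estimates (the selection threshold and the data are illustrative assumptions, not the breast cancer analysis):

import numpy as np
from scipy.stats import norm

def by_adjusted_intervals(est, se, q=0.05, threshold=2.0):
    # Benjamini & Yekutieli (2005) FCR-adjusted intervals: select parameters
    # (here simply |estimate/se| > threshold), then report each selected
    # parameter's marginal interval at the adjusted level 1 - R*q/m, which is
    # wider than the unadjusted 1 - q interval.
    est, se = np.asarray(est, float), np.asarray(se, float)
    selected = np.abs(est / se) > threshold
    r, m = int(selected.sum()), est.size
    if r == 0:
        return selected, np.empty((0, 2))
    z = norm.ppf(1.0 - (r * q / m) / 2.0)        # critical value at the adjusted level
    half = z * se[selected]
    return selected, np.column_stack([est[selected] - half, est[selected] + half])

rng = np.random.default_rng(1)
theta = np.concatenate([np.zeros(980), rng.normal(3.0, 1.0, 20)])   # mostly null effects
est = theta + rng.standard_normal(theta.size)                        # unit-variance estimates
selected, intervals = by_adjusted_intervals(est, np.ones_like(est))
print(int(selected.sum()), "parameters selected; first adjusted interval:", intervals[0])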


2020, pp. 014616722096905
Author(s): Qingzhou Sun, Jingyi Lu, Huanren Zhang, Yongfang Liu

People often exhibit biases in probability weighting, such as overweighting small probabilities and underweighting large probabilities. Our research examines whether increased social distance reduces such biases. Participants completed valuation and choice tasks involving probabilistic lotteries under conditions with different social distances. The results showed that increased social distance reduced these biases in both hypothetical (Studies 1 and 2) and incentivized (Study 3) settings. This reduction was accompanied by a decrease in emotional intensity and an increase in attention to probability during decision-making (Study 4). Moreover, the bias-buffering effect of social distance was stronger in the gain domain than in the loss domain (Studies 1–4). These results suggest that increasing the social distance from the beneficiaries of a decision can reduce biases in probability weighting, and they shed light on the relationship between social distance and emotional-cognitive processes in decision-making.


Author(s): Daryl Bandstra, Alex M. Fraser

Abstract One of the leading threats to the integrity of oil and gas transmission pipeline systems is metal-loss corrosion. This threat is commonly managed by evaluating measurements obtained with in-line inspection tools, which locate and size individual metal-loss defects in order to plan maintenance and repair activities. Both deterministic and probabilistic methods are used in the pipeline industry to evaluate the severity of these defects. Probabilistic evaluations typically utilize structural reliability, which is an approach to designing and assessing structures that focuses on the calculation and prediction of the probability that a structure may fail. In the structural reliability approach, the probability of failure is obtained from a multidimensional integral. The solution to this integral is typically estimated numerically using Direct Monte Carlo (DMC) simulation, as DMC is relatively simple and robust. The downside is that DMC requires a significant amount of computational effort to estimate small probabilities. The objective of this paper is to explore the use of a more efficient approach, called Subset Simulation (SS), to estimate the probability of burst failure for a pipeline metal-loss defect. We present comparisons between the probability of failure estimates generated for a sample defect by Direct Monte Carlo simulation and Subset Simulation for differing numbers of simulations. These cases illustrate the decreased computational effort required by Subset Simulation to produce stable probability of failure estimates, particularly for small probabilities. For defects with a burst probability in the range of 10^-4 to 10^-7, SS is shown to reduce the computational effort (time or cost) by 10 to 1,000 times. By significantly reducing the computational effort required to obtain stable estimates of small failure probabilities, this methodology reduces one of the major barriers to the use of reliability methods for system-wide pipeline reliability assessment.
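The contrast is easy to reproduce in a toy setting. The sketch below (our own illustration, not the authors' burst-pressure model) estimates P(g(X) < 0) for a simple limit-state function of two standard normal variables, whose true failure probability is about 4e-7, first by Direct Monte Carlo and then by a basic Subset Simulation with conditional level probability p0 = 0.1 and a component-wise Metropolis sampler; the proposal scale and sample sizes are illustrative assumptions. With these settings, DMC at 10^5 samples usually reports zero failures, while Subset Simulation typically reaches the right order of magnitude with roughly 10^4 model evaluations.

import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # Toy limit-state function: failure when g(x) < 0, i.e. x1 + x2 > 7.
    # With i.i.d. standard normal inputs, P(failure) = Phi(-7/sqrt(2)) ~ 3.7e-7.
    return 7.0 - x[..., 0] - x[..., 1]

def direct_mc(n):
    return float(np.mean(g(rng.standard_normal((n, 2))) < 0.0))

def subset_simulation(n=2000, p0=0.1, max_levels=10):
    x = rng.standard_normal((n, 2))
    gx = g(x)
    prob = 1.0
    for _ in range(max_levels):
        b = np.quantile(gx, p0)                 # intermediate threshold for this level
        if b <= 0.0:                            # failure domain reached
            return prob * float(np.mean(gx < 0.0))
        prob *= p0
        seeds = x[gx <= b]                      # samples already inside {g <= b}
        per_seed = int(np.ceil(n / len(seeds)))
        samples = []
        for s in seeds:                         # modified Metropolis within {g <= b}
            cur = s.copy()
            for _ in range(per_seed):
                cand = cur + rng.normal(scale=1.0, size=2)
                # component-wise accept/reject against the standard normal target
                keep = rng.random(2) < np.minimum(1.0, np.exp(0.5 * (cur**2 - cand**2)))
                prop = np.where(keep, cand, cur)
                if g(prop) <= b:                # stay inside the intermediate domain
                    cur = prop
                samples.append(cur.copy())
        x = np.array(samples[:n])
        gx = g(x)
    return prob * float(np.mean(gx < 0.0))

print("Direct Monte Carlo, 1e5 samples:", direct_mc(100_000))
print("Subset Simulation, 2e3 per level:", subset_simulation())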


Author(s): Annika Fredén, Ludovic Rheault, Indridi H. Indridason

Abstract People are commonly expected not to waste their vote on parties with small probabilities of being elected. Yet, many end up voting for underdogs. We argue that voters gauge the popular support for their preferred party from their social networks. When social networks function as echo chambers, a feature observed in real-life networks, voters overestimate underdogs’ chances of winning. We conduct voting experiments in which some treatment groups receive signals from a simulated network. We compare the effect of networks with a high degree of homogeneity against random networks. We find that homophilic networks increase the level of support for underdogs, which provides evidence to back up anecdotal claims that echo chambers foster the development of fringe parties.


2020, Vol. 23 (4), pp. 1100-1128
Author(s): Ilke Aydogan, Yu Gao

Abstract A recent strand of the literature on decision-making under uncertainty has pointed to an intriguing behavioral gap between decisions made from description and decisions made from experience. This study reinvestigates this description-experience gap to understand the impact that sampling experience has on decisions under risk. Our study adopts a complete sampling paradigm to address the lack of control over experienced probabilities by requiring complete sampling without replacement. We also address the roles of utilities and ambiguity, which are central in most current decision models in economics. Thus, our experiment identifies the deviations from expected utility due to over- (or under-) weighting of probabilities. Our results confirm the existence of the behavioral gap, but they provide no evidence for the underweighting of small probabilities within the complete sampling treatment. We find that sampling experience attenuates rather than reverses the inverse S-shaped probability weighting under risk.

