bayesian update
Recently Published Documents

TOTAL DOCUMENTS: 60 (five years: 28)
H-INDEX: 7 (five years: 2)

2021 ◽  
Author(s):  
Mathias Sablé-Meyer ◽  
Janek Guerrini ◽  
Salvador Mascarenhas

We show that probabilistic decision-making behavior characteristic of reasoning by representativeness or typicality arises in minimalistic settings lacking many of the features previously thought to be necessary conditions for the phenomenon. Specifically, we develop a version of a classical experiment by Kahneman and Tversky (1973) on base-rate neglect, where participants have full access to the probabilistic distribution, conveyed entirely visually and without reliance on familiar stereotypes, rich descriptions, or individuating information. We argue that the notion of evidential support as studied in (Bayesian) confirmation theory offers a good account of our experimental findings, as has been proposed for related data points from the representativeness literature. In a nutshell, when faced with competing alternatives to choose from, humans are sometimes less interested in picking the option with the highest probability of being true (posterior probability), and instead choose the option best supported by available evidence. We point out that this theoretical avenue is descriptively powerful, but has an as-yet unclear explanatory dimension. Building on approaches to reasoning from linguistic semantics, we propose that the chief trigger of confirmation-theoretic mechanisms in deliberate reasoning is a linguistically-motivated tendency to interpret certain experimental setups as intrinsically contrastive, in a way best cashed out by modern linguistic semantic theories of questions. These questions generate pragmatic pressures for interpreting surrounding information as having been meant to help answer the question, which will naturally give rise to confirmation-theoretic effects, very plausibly as a byproduct of iterated Bayesian update as proposed by modern Bayesian theories of relevance-based reasoning in pragmatics. 
Our experiment provides preliminary but tantalizing evidence in favor of this hypothesis, as participants displayed significantly more confirmation-theoretic behavior in a condition that highlighted the question-like, contrastive nature of the task.
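The core contrast the authors draw, between the highest-posterior option and the best-supported option, can be sketched numerically. The numbers below are hypothetical and not taken from the study; the sketch uses the difference measure d(H, E) = P(H|E) − P(H), one standard confirmation measure, to show that a single Bayesian update can leave the high-base-rate option with the larger posterior while the evidence better supports its rival.

```python
# Illustrative sketch (hypothetical numbers): the option with the highest
# posterior probability need not be the option best supported by the
# evidence under a confirmation measure such as d(H, E) = P(H|E) - P(H).

def posterior(priors, likelihoods):
    """One Bayesian update: P(H|E) proportional to P(E|H) * P(H)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

priors = [0.9, 0.1]          # base rates of the two alternatives
likelihoods = [0.5, 0.9]     # P(evidence | alternative)

post = posterior(priors, likelihoods)             # ~[0.833, 0.167]
support = [q - p for p, q in zip(priors, post)]   # difference measure

best_posterior = max(range(2), key=lambda i: post[i])     # alternative 0
best_supported = max(range(2), key=lambda i: support[i])  # alternative 1
```

Here the evidence lowers the probability of the base-rate-favored alternative and raises that of its rival, so a reasoner tracking evidential support rather than posterior probability picks the lower-probability option.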


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4612
Author(s):  
Zhenen Li ◽  
Xinyan Zhang ◽  
Tusongjiang Kari ◽  
Wei Hu

Vibration signals contain abundant information that reflects the health status of wind turbine high-speed shaft bearings (HSSBs). Accurate health assessment and remaining useful life (RUL) prediction are the keys to the scientific maintenance of wind turbines. In this paper, a method based on the combination of a comprehensive evaluation function and a self-organizing feature map (SOM) network is proposed to construct a health indicator (HI) curve that characterizes the health state of HSSBs. Considering the difficulty of obtaining life-cycle data for similar equipment in a short time, the exponential degradation model is selected as the degradation trajectory of HSSBs; on the basis of the constructed HI curve, the Bayesian update model and the expectation–maximization (EM) algorithm are used to predict the RUL of HSSBs. First, the time-domain, frequency-domain, and time–frequency-domain degradation features of HSSBs are extracted. Second, a comprehensive evaluation function is constructed and used to select the degradation features with good performance. Third, the SOM network is used to fuse the selected degradation features into a one-dimensional HI curve. Finally, the exponential degradation model is selected as the degradation trajectory, and the Bayesian update and the EM algorithm are used to predict the RUL. Monitoring data from a wind turbine HSSB in actual operation are used to validate the model. The HI curve constructed by the proposed method better reflects the degradation process of HSSBs, and in terms of life prediction the method achieves better accuracy than the SVR model.
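The RUL step can be illustrated with a minimal sketch. An exponential degradation model HI(t) = a·exp(b·t) is linear in log space, so a conjugate Gaussian update on the slope b gives a Bayesian estimate of the degradation rate; the RUL then follows as the time remaining until the HI curve crosses a failure threshold. All parameter values below are hypothetical, and this stands in for, not reproduces, the paper's Bayesian update and EM procedure.

```python
import math

def update_slope(mu0, var0, ts, ys, noise_var=0.01):
    """Posterior over b for y = b*t + noise, with Gaussian prior N(mu0, var0)."""
    prec = 1.0 / var0 + sum(t * t for t in ts) / noise_var
    mean = (mu0 / var0 + sum(t * y for t, y in zip(ts, ys)) / noise_var) / prec
    return mean, 1.0 / prec

a, b_true = 1.0, 0.05                      # hypothetical degradation law
ts = [1, 2, 3, 4, 5]
ys = [b_true * t for t in ts]              # noise-free log(HI/a) samples

b_hat, b_var = update_slope(mu0=0.0, var0=1.0, ts=ts, ys=ys)

threshold, t_now = 2.0, 5.0                # failure when HI reaches 2.0
rul = math.log(threshold / a) / b_hat - t_now  # time left until threshold
```

As more monitoring samples arrive, the posterior over b tightens and the RUL estimate is revised, which is the sequential behavior the Bayesian update model provides.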


2021 ◽  
Author(s):  
Jinya Katsuyama ◽  
Yuhei Miyamoto ◽  
Kai Lu ◽  
Akihiro Mano ◽  
Yinsheng Li

Author(s):  
Luis Ceferino ◽  
Percy Galvez ◽  
Jean-Paul Ampuero ◽  
Anne Kiremidjian ◽  
Gregory Deierlein ◽  
...  

ABSTRACT This article introduces a framework to supplement short historical catalogs with synthetic catalogs and determine large earthquakes’ recurrence. For this assessment, we developed a parameter estimation technique for a probabilistic earthquake occurrence model that captures time and space interactions between large mainshocks. The technique is based on a two-step Bayesian update that uses a synthetic catalog from physics-based simulations for initial parameter estimation and then the historical catalog for further calibration, fully characterizing parameter uncertainty. The article also provides a formulation to combine multiple synthetic catalogs according to their likelihood of representing empirical earthquake stress drops and Global Positioning System-inferred interseismic coupling. We applied this technique to analyze large-magnitude earthquakes’ recurrence along 650 km of the subduction fault’s interface located offshore Lima, Peru. We built nine 2000 yr long synthetic catalogs using quasi-dynamic earthquake cycle simulations based on the rate-and-state friction law to supplement the 450 yr long historical catalog. When the synthetic catalogs are combined with the historical catalog without propagating their uncertainty, we found average relative reductions larger than 90% in the recurrence parameters’ uncertainty. When we propagated the physics-based simulations’ uncertainty to the posterior, the reductions in uncertainty decreased to 60%–70%. In two Bayesian assessments, we then show that using synthetic catalogs results in higher parameter uncertainty reductions than using only the historical catalog (69% vs. 60% and 83% vs. 80%), demonstrating that synthetic catalogs can be effectively combined with historical data, especially in tectonic regions with short historical catalogs. Finally, we show the implications of these results for time-dependent seismic hazard.
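The two-step update idea can be sketched with a conjugate Gamma-Poisson model for the large-earthquake rate; the counts and the model below are illustrative stand-ins, not the paper's physics-based simulator or its parameter estimation technique. Step 1 turns a long synthetic catalog into an informative prior, and step 2 updates that prior with the shorter historical catalog; the coefficient of variation tracks how parameter uncertainty shrinks at each step.

```python
# Hedged sketch (hypothetical event counts) of a two-step Bayesian update:
# synthetic catalog first, then historical catalog, under a conjugate
# Gamma-Poisson model for the occurrence rate of large earthquakes.

def gamma_poisson_update(alpha, beta, n_events, years):
    """Conjugate update: Gamma(alpha, beta) prior, Poisson event counts."""
    return alpha + n_events, beta + years

# Vague initial prior on events/year.
alpha, beta = 1.0, 1.0

# Step 1: synthetic catalog (e.g., 40 events in 2000 simulated years).
alpha, beta = gamma_poisson_update(alpha, beta, n_events=40, years=2000)
cv_after_synthetic = 1.0 / alpha ** 0.5   # coefficient of variation of a Gamma

# Step 2: historical catalog (e.g., 9 events in 450 years).
alpha, beta = gamma_poisson_update(alpha, beta, n_events=9, years=450)
cv_after_both = 1.0 / alpha ** 0.5

rate_posterior_mean = alpha / beta        # events per year
```

The synthetic-catalog step does most of the uncertainty reduction because it contributes many more catalog-years, which mirrors the article's finding that synthetic catalogs matter most where the historical record is short.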


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0237278
Author(s):  
Kazutaka Ueda ◽  
Takahiro Sekoguchi ◽  
Hideyoshi Yanagisawa

One becomes accustomed to repeated exposures, even to a novel event. In the present study, we investigated how predictability affects habituation to novelty by applying a mathematical model of arousal that we previously developed, and by using psychophysiological experiments to test the model’s prediction. We formalized habituation to novelty as a decrement in the Kullback–Leibler divergence from the Bayesian prior to the posterior (i.e., the information gain), which represents the arousal evoked by a novel event through Bayesian update. The model predicted an interaction effect between initial uncertainty and initial prediction error (i.e., predictability) on habituation to novelty: the greater the initial uncertainty, the faster the decrease in information gain (i.e., the sooner habituation occurs). This prediction was supported by experimental results using subjective reports of surprise and the event-related potential (P300) evoked by visual–auditory incongruity. Our findings suggest that in highly uncertain situations, repeated exposure to stimuli can enhance habituation to novel stimuli.
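The formalization can be sketched with a conjugate Gaussian model; the parameter values are hypothetical and the model is a simplified stand-in for the authors' arousal model. Each repeated exposure triggers a Bayesian update, the per-exposure information gain is KL(posterior ‖ prior), and comparing a low-uncertainty prior with a high-uncertainty one shows the gain shrinking faster (in relative terms) under higher initial uncertainty.

```python
import math

def kl_gauss(m1, v1, m0, v0):
    """KL( N(m1, v1) || N(m0, v0) ) for univariate Gaussians."""
    return 0.5 * (v1 / v0 + (m1 - m0) ** 2 / v0 - 1.0 + math.log(v0 / v1))

def bayes_step(m, v, x, noise_var):
    """Conjugate Gaussian update for one observation x."""
    v_post = 1.0 / (1.0 / v + 1.0 / noise_var)
    m_post = v_post * (m / v + x / noise_var)
    return m_post, v_post

def gains(v0, x=1.0, m0=0.0, noise_var=1.0, steps=5):
    """Information gain (KL posterior->prior) for each repeated exposure."""
    m, v, out = m0, v0, []
    for _ in range(steps):
        m2, v2 = bayes_step(m, v, x, noise_var)
        out.append(kl_gauss(m2, v2, m, v))
        m, v = m2, v2
    return out

low_uncertainty = gains(v0=0.5)   # tight prior
high_uncertainty = gains(v0=5.0)  # diffuse prior
```

In both conditions the information gain decreases monotonically (habituation), but the relative drop from the first to the second exposure is larger when the prior is diffuse, matching the predicted interaction between initial uncertainty and habituation speed.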


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 22
Author(s):  
Daniel Sanz-Alonso ◽  
Zijian Wang

Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ²-divergence between target and proposal. We illustrate through examples the roles that dimension, noise level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.
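A minimal sketch of the setting, with hypothetical numbers: self-normalized importance sampling approximates a Gaussian Bayesian update, using the prior N(0, 1) as the proposal and the likelihood of one observation y ~ N(θ, 1) as the unnormalized weight. The analytic posterior here is N(y/2, 1/2), a standard conjugate result, so the quality of the approximation can be checked directly; the sample size needed for such estimates is what the paper relates to the χ²-divergence between target and proposal.

```python
import math
import random

def is_posterior_mean(y, n, seed=0):
    """Self-normalized importance sampling estimate of E[theta | y]."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]             # proposal = prior
    weights = [math.exp(-0.5 * (y - th) ** 2) for th in samples]  # likelihood
    return sum(w * th for w, th in zip(weights, samples)) / sum(weights)

y = 1.0
estimate = is_posterior_mean(y, n=200_000)
exact = y / 2.0     # posterior N(y/2, 1/2) has mean y/2
```

Using the prior as proposal is the "standard" choice discussed for particle filtering; as the observation becomes more informative (smaller noise, larger y), the weights degenerate, the χ²-divergence grows, and the required sample size increases accordingly.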


Episteme ◽  
2020 ◽  
pp. 1-15
Author(s):  
Jan-Willem Romeijn

Abstract This paper explores the fact that linear opinion pooling can be represented as a Bayesian update on the opinions of others. It uses this fact to propose a new interpretation of the pooling weights. Relative to certain modelling assumptions the weights can be equated with the so-called truth-conduciveness known from the context of Condorcet's jury theorem. This suggests a novel way to elicit the weights.
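The representation the paper exploits can be sketched as follows, with hypothetical weights and credences: a linear pool Σᵢ wᵢ·pᵢ is exactly what a Bayesian agent adopts when she is uncertain which expert tracks the truth, places prior weight wᵢ on expert i being the truth-tracker, and takes on expert i's credence conditional on that hypothesis.

```python
# Minimal sketch (hypothetical numbers): the linear opinion pool read as a
# Bayesian update over the latent question of which expert tracks the truth.

def linear_pool(weights, credences):
    """Linear opinion pool: weighted average of the experts' credences."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must form a prior"
    return sum(w * p for w, p in zip(weights, credences))

weights = [0.5, 0.3, 0.2]       # e.g., elicited truth-conduciveness weights
credences = [0.9, 0.6, 0.2]     # each expert's probability for the event

pooled = linear_pool(weights, credences)   # 0.5*0.9 + 0.3*0.6 + 0.2*0.2
```

On this reading, eliciting the pooling weights amounts to eliciting the prior probability that each expert is the reliable one, which is the interpretive move the paper connects to truth-conduciveness in Condorcet's jury theorem.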

