Posteriori Distribution: Recently Published Documents

TOTAL DOCUMENTS: 13 (last five years: 3)
H-INDEX: 5 (last five years: 1)

Author(s): David Kipping

Abstract Astronomy has always been propelled by the discovery of new phenomena lacking precedent, often followed by new theories to explain their existence and properties. In the modern era of large surveys tiling the sky at ever higher precision and sampling rates, these serendipitous discoveries look set to continue, with recent examples including Boyajian’s Star, Fast Radio Bursts and ‘Oumuamua. Accordingly, we here look ahead and aim to provide a statistical framework for interpreting such events and guiding future observations, under the basic premise that the phenomenon in question stochastically repeats at some unknown, constant rate, λ. Specifically, expressions are derived for (1) the a posteriori distribution for λ, (2) the a posteriori distribution for the recurrence time, and (3) the benefit-to-cost ratio of further observations relative to that of the inaugural event. Some rule-of-thumb results for each of these are: (1) $\lambda < \{0.7, 2.3, 4.6\}\, t_1^{-1}$ to $\{50, 90, 99\}$ per cent confidence (where $t_1$ is the time taken to obtain the first detection); (2) the recurrence time satisfies $t_2 < \{1, 9, 99\}\, t_1$ to $\{50, 90, 99\}$ per cent confidence, with a lack of repetition by time $t_2$ yielding a p-value of $1/[1 + (t_2/t_1)]$; and (3) follow-up for $\lesssim 10\, t_1$ is expected to be scientifically worthwhile under an array of differing assumptions about the object’s intrinsic scientific value. We apply these methods to the Breakthrough Listen Candidate 1 signal and to tidal disruption events observed by TESS.
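The quoted rules of thumb are easy to reproduce numerically. The sketch below is a minimal illustration rather than the paper's derivation: it assumes the posterior on λ is exponential with scale $1/t_1$ (which reproduces the quoted {0.7, 2.3, 4.6} bounds) and inverts the p-value formula $1/[1 + (t_2/t_1)]$ quoted in the abstract to obtain the recurrence-time quantiles; the function names are ours.

```python
import numpy as np

def rate_upper_bound(t1, confidence):
    """Upper bound on the event rate lambda after a single detection at
    time t1, reproducing the rule of thumb lambda < {0.7, 2.3, 4.6} / t1
    at {50, 90, 99}% confidence (an exponential posterior is assumed)."""
    return -np.log(1.0 - confidence) / t1

def recurrence_time_quantile(t1, confidence):
    """Time t2 by which a repeat occurs with the given posterior
    probability; inverts the quoted p-value 1 / (1 + t2/t1)."""
    return t1 * confidence / (1.0 - confidence)

def repetition_p_value(t1, t2):
    """p-value for a lack of repetition by elapsed time t2."""
    return 1.0 / (1.0 + t2 / t1)

if __name__ == "__main__":
    t1 = 1.0  # time units of the first detection
    for c in (0.50, 0.90, 0.99):
        print(f"{c:.0%}: lambda < {rate_upper_bound(t1, c):.2f}/t1, "
              f"t2 < {recurrence_time_quantile(t1, c):.0f} t1")
```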


2020, Vol. 2020, pp. 1-10
Author(s): Milka E. Escalera-Chávez, Carlos A. Rojas-Kramer

Objective. The aim of this work was to validate the statistical significance and unidimensionality of the construct formed by the variables of the revised, short version of the Smartphone Addiction Scale (SAS-SV), adapted into Spanish, when applied to Mexican university students. Method. The questionnaires were administered to 244 students of the Bachelor’s Degree in Administration at the Universidad Autónoma de San Luis Potosí, Mexico (174 women and 70 men, aged 17 to 30 years), between August and December 2018. A confirmatory factor analysis was performed, and the parameters of the variables were checked by maximum likelihood and also by Bayesian analysis. The reliability of the instrument was verified through Cronbach’s alpha. As a final analysis, estimates of the nonstandardized weights from the maximum likelihood method were compared against Bayesian a posteriori distribution estimates. Results. The model was found to describe the sample data adequately, with very small standard-error estimates, and it was validated with a Cronbach’s alpha of 0.885. Both the Bayesian and the maximum likelihood analyses consistently indicate that the construct is unidimensional. However, in the sample studied, three of the variables did not reach a significant weight in the model. Conclusion. The variables that measure smartphone addiction on the Spanish-adapted SAS-SV scale indeed form a unidimensional construct when applied to Mexican university students, which is consistent with results from previous studies. Nevertheless, further studies are needed to explain the low significance obtained for three of the model’s variables.
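Instrument reliability here is summarized by Cronbach's alpha (0.885 for this sample). The snippet below is a minimal, self-contained way to compute that statistic from an item-score matrix; the data it generates are random placeholders, not the SAS-SV responses, so the value it prints is illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 244 respondents x 10 Likert-type items, mirroring
# the SAS-SV design but generated from one common factor plus noise.
rng = np.random.default_rng(0)
factor = rng.normal(size=(244, 1))
scores = factor + 0.8 * rng.normal(size=(244, 10))
print(round(cronbach_alpha(scores), 3))
```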


Sensors, 2020, Vol. 20 (21), pp. 6011
Author(s): Jan Steinbrener, Konstantin Posch, Jürgen Pilz

We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while adding only very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. Because the a posteriori uncertainty of the network parameters is represented per network layer and as a function of the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize the network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.
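As a companion to the credible-interval use of the uncertainty estimates, the sketch below shows one generic way to turn repeated stochastic forward passes of a Bayesian network into per-class credible intervals and a simple reject flag. It illustrates the idea only and is not the authors' implementation; the Dirichlet samples stand in for real softmax outputs.

```python
import numpy as np

def credible_interval(samples: np.ndarray, level: float = 0.95):
    """Equal-tailed credible interval from Monte Carlo predictions.

    `samples` has shape (n_draws, n_classes): softmax outputs from
    repeated stochastic forward passes of a Bayesian network."""
    lo = np.quantile(samples, (1 - level) / 2, axis=0)
    hi = np.quantile(samples, 1 - (1 - level) / 2, axis=0)
    return lo, hi

def flag_uncertain(samples: np.ndarray, level: float = 0.95) -> bool:
    """Flag a prediction as unreliable if the credible interval of the
    top class overlaps that of any other class."""
    lo, hi = credible_interval(samples, level)
    top = samples.mean(axis=0).argmax()
    others = np.delete(np.arange(samples.shape[1]), top)
    return bool((hi[others] >= lo[top]).any())

# Hypothetical: 100 stochastic forward passes over 10 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10) * 2.0, size=100)
print(flag_uncertain(probs))
```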


2018, Vol. 12 (4), pp. 245
Author(s): Luiz Henrique Marra da Silva Ribeiro, Matheus De Souza Costa, Luiz Alberto Beijo, Alberto Frank Lázaro Aguirre, Tatiane Gomes de Araújo, ...

The Bayesian approach to regression models has shown good results in parameter estimation, increasing both accuracy and precision. The objective of the current study was to apply Bayesian statistics to modeling the yield of leaf dry matter (LM) and stem dry matter (SM), in kg ha-1, the leaf ratio (LR), and the crude protein content of leaves (CPL) and stems (CPS), in %, of Brachiaria grass as a function of N doses (0, 100, 200 and 300 kg ha-1 yr-1). Simple linear and second-degree polynomial regression models were analyzed. Information for the a priori distributions was obtained from the literature. The a posteriori distributions were generated using Markov chain Monte Carlo simulation. Parameter significance was assessed with 95% HPD (highest posterior density) intervals. Model selection was performed using the DIC (deviance information criterion), and goodness of fit was estimated with the means and 95% HPD intervals of the Bayesian R2 distributions. The models selected for the variables LM, SM and CPS were linear, while those for LR and CPL were second-degree polynomials. The lowest doses that maximize the response variables were: LM, 274 kg ha-1 yr-1; SM, 280 kg ha-1 yr-1; LR, 113 kg ha-1 yr-1; CPL, 265 kg ha-1 yr-1; CPS, 289 kg ha-1 yr-1. The Bayesian approach allowed the inclusion of literature-verified a priori information and the identification of optimization intervals supported by the evidence.
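Both the 95% HPD intervals and the dose maximizing a second-degree polynomial response are straightforward to obtain from MCMC output. The sketch below computes an HPD interval for the optimizing N dose, -b1/(2 b2), from posterior draws of the quadratic coefficients; the draws are simulated placeholders, not the study's actual chains.

```python
import numpy as np

def hpd_interval(draws: np.ndarray, mass: float = 0.95):
    """Highest posterior density interval from MCMC draws (unimodal case):
    the narrowest window of sorted draws containing the requested mass."""
    x = np.sort(draws)
    n = len(x)
    m = int(np.ceil(mass * n))
    widths = x[m - 1:] - x[:n - m + 1]
    i = widths.argmin()
    return x[i], x[i + m - 1]

# Hypothetical posterior draws for a quadratic response y = b0 + b1*N + b2*N^2;
# the N dose maximizing the response is -b1 / (2*b2) for each draw.
rng = np.random.default_rng(1)
b1 = rng.normal(30.0, 2.0, size=4000)
b2 = rng.normal(-0.055, 0.004, size=4000)
optimum_dose = -b1 / (2.0 * b2)
print(hpd_interval(optimum_dose))
```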


2011, Vol. 255-260, pp. 3632-3636
Author(s): Jun Xiong, Xiao Lan Huang, Zeng Yan Cao

The ensemble Kalman filter (EnKF) is employed to simulate the streamflow of a slope sub-catchment during the rainfall infiltration process. With this method the whole process is treated as a dynamic stochastic system, and its streamflow is taken as the variable describing the state of the system. Furthermore, the filter is coupled with a hydrological model to cope with system uncertainty. Thus, hydrological parameters are estimated dynamically, and the model variables and their uncertainty are obtained simultaneously. Numerical examples show that this strategy can effectively deal with observation noise and can provide the inversion results together with the a posteriori distribution of the a priori information. Compared with a conventional optimization algorithm, the new EnKF-based strategy shows better real-time response and model reliability.
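For readers unfamiliar with the method, the sketch below implements one generic EnKF analysis step with perturbed observations. It illustrates how an ensemble of model states is corrected by a streamflow observation; it is not the authors' coupled hydrological code, and all dimensions and numbers in the usage lines are hypothetical.

```python
import numpy as np

def enkf_update(X_f, y_obs, H, R, rng):
    """One ensemble Kalman filter analysis step.

    X_f  : (n_state, n_ens) forecast ensemble
    y_obs: (n_obs,) observation vector
    H    : (n_obs, n_state) observation operator
    R    : (n_obs, n_obs) observation-error covariance
    """
    n_obs, n_ens = y_obs.size, X_f.shape[1]
    # Perturb observations so the analysis ensemble has the right spread.
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    A = X_f - X_f.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)                 # P_f H^T
    S = H @ P_HT + R                                   # innovation covariance
    K = P_HT @ np.linalg.inv(S)                        # Kalman gain
    return X_f + K @ (Y - H @ X_f)                     # analysis ensemble

# Hypothetical: 50-member ensemble, 3 state variables, 1 streamflow observation.
rng = np.random.default_rng(3)
X_f = rng.normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
X_a = enkf_update(X_f, np.array([0.5]), H, R, rng)
```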


1999, Vol. 1 (2), pp. 75-82
Author(s): Ezio Todini

The paper introduces the use of phase-state modelling as a means of estimating expected benefits or losses when dealing with decision processes under uncertainty about future events. For this reason the phase-space approach to time series, which generally aims at forecasting the expected value of a future event, is here also used to assess the forecasting uncertainty. Under the assumption of local stationarity, the ensemble of generated future trajectories can be used to estimate a probability density that represents the a priori uncertainty of forecasts conditional on the latest measurements. This a priori density can then be used directly in the optimisation schemes if no additional information is available, or after deriving an a posteriori distribution in the Bayesian sense by combining it with forecasts from deterministic models, here taken as noise-corrupted ‘pseudo-measurements’ of future events. Examples of application are given for the Lake Como real-time management system and for rainfall ensemble forecasts on the River Reno.
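In the Gaussian special case, the Bayesian combination of the ensemble-derived a priori density with a deterministic forecast treated as a noise-corrupted pseudo-measurement reduces to a precision-weighted update. The sketch below shows only that simplification, with made-up inflow numbers; it is not the paper's phase-space estimator.

```python
import numpy as np

def combine_forecasts(ensemble, z_det, var_noise):
    """Combine the a priori density estimated from generated trajectories
    with a deterministic forecast treated as a noise-corrupted
    pseudo-measurement (Gaussian simplification).

    ensemble : array of phase-space-generated future values
    z_det    : deterministic model forecast (pseudo-measurement)
    var_noise: assumed variance of the pseudo-measurement error
    """
    mu0, var0 = ensemble.mean(), ensemble.var(ddof=1)   # a priori density
    w = var0 / (var0 + var_noise)                       # weight on new info
    mu_post = mu0 + w * (z_det - mu0)                   # a posteriori mean
    var_post = (1.0 - w) * var0                         # a posteriori variance
    return mu_post, var_post

# Hypothetical usage: 500 generated trajectories of tomorrow's inflow.
rng = np.random.default_rng(2)
ens = rng.normal(120.0, 15.0, size=500)
print(combine_forecasts(ens, z_det=135.0, var_noise=10.0**2))
```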


Geophysics, 1995, Vol. 60 (4), pp. 1169-1177
Author(s): Mauricio D. Sacchi, Tadeusz J. Ulrych

We present a high-resolution procedure to reconstruct common-midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact-free, aperture-compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite-aperture velocity gather. The latter is the velocity gather that would have been estimated with a simple conjugate operator designed from an infinite-aperture seismic array. This high-resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework, in other words, to assign a probability density function to our model. Second, we apply Bayes’s rule to relate the a priori probability density function (pdf) to the pdf corresponding to the experimental uncertainties (the likelihood function) and so construct the a posteriori distribution of the unknown parameters. Finally, the model is evaluated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg’s definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg’s entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data as constraints, in contrast with the classic maximum entropy spectral analysis approach where the autocorrelation function is the constraint. This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. The tradeoff is controlled by a single parameter that, under asymptotic conditions, reduces the method to a damped least-squares solution. Finally, the high-resolution or aperture-compensated velocity gather is used to extrapolate near- and far-offset traces.
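The regularized inversion described here, sparse in the velocity gather and reducing to damped least squares in the appropriate limit, can be caricatured by an iteratively reweighted damped least-squares loop. The sketch below is a generic version of that idea, not the authors' exact maximum-entropy-regularized scheme; the operator, data and tuning values in the usage lines are invented.

```python
import numpy as np

def sparse_inversion(L, d, mu=1.0, n_iter=10, eps=1e-3):
    """Iteratively reweighted damped least squares for d = L @ m,
    favouring a sparse model m. Small |m_j| receive large penalties on
    the next pass, pushing weak components toward zero."""
    n = L.shape[1]
    m = np.zeros(n)
    for _ in range(n_iter):
        Q = np.diag(1.0 / (np.abs(m) ** 2 + eps))      # model-dependent weights
        m = np.linalg.solve(L.conj().T @ L + mu * Q, L.conj().T @ d)
    return m

# Hypothetical: recover a sparse model from an overcomplete operator.
rng = np.random.default_rng(4)
L = rng.normal(size=(40, 120))
m_true = np.zeros(120)
m_true[[10, 55, 90]] = [1.0, -2.0, 1.5]
d = L @ m_true + 0.01 * rng.normal(size=40)
m_est = sparse_inversion(L, d, mu=0.1)
```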


1994, Vol. 88 (2), pp. 327-335
Author(s): John E. Roemer

A continuum of voters, indexed by income, have preferences over economic outcomes. Two political parties each represent the interests of given constituencies of voters: the rich and the poor. Parties/candidates put forth policies—for instance, tax policy, where taxes finance a public good. Voters are uncertain about the theory of the economy, the function that maps policies into economic outcomes. Parties argue, as well, for theories of the economy. Each voter has a prior probability distribution over possible theories of the economy; after parties announce their theories of the economy, each voter constructs an a posteriori distribution over such theories. Suppose that voters are unsure how efficiently the government converts tax revenues into the public good. Under reasonable assumptions the party representing the rich argues that the government is very inefficient and the party representing the poor argues the opposite. What appear as liberal and conservative ideological views emerge as simply good strategies in the electoral game.
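The belief-revision step at the heart of the model is ordinary discrete Bayesian updating over theories of the economy. The toy sketch below illustrates only that step; the prior, the likelihood assigned to the parties' announcements, and the two-theory setup are all invented for illustration and are not taken from the paper.

```python
import numpy as np

def update_beliefs(prior, likelihood):
    """Discrete Bayes update: a voter's posterior over theories of the
    economy, proportional to prior belief times the likelihood assigned
    to the parties' announcements under each theory."""
    post = prior * likelihood
    return post / post.sum()

# Hypothetical two-theory example ("government efficient" vs "inefficient").
prior = np.array([0.5, 0.5])
likelihood = np.array([0.3, 0.6])
print(update_beliefs(prior, likelihood))   # -> [0.333..., 0.666...]
```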

