Handover Count Based MAP Estimation of Velocity With Prior Distribution Approximated via NGSIM Data-Set

Author(s):  
Ravi Tiwari ◽  
Siddharth Deshmukh


2012 ◽
Vol 24 (12) ◽  
pp. 3191-3212
Author(s):  
Kukjin Kang ◽  
Shun-ichi Amari

We study the Bayesian process of estimating features of the environment. We focus on two aspects of this process: how the estimation error depends on the prior distribution of the features, and how that prior distribution can be learned from experience. When each feature of the environment is considered independently, the accuracy of perception is underestimated, because features of the environment are usually highly correlated and the estimation error depends strongly on those correlations. The self-consistent learning process updates the prior distribution of the correlated features jointly with the estimation of the environment. Here, maximum a posteriori probability (MAP) estimation decreases the effective dimensionality of the feature vector. There are critical noise levels in self-consistent learning with MAP estimation that cause hysteresis behaviors in learning. The self-consistent learning process with stochastic Bayesian estimation (SBE) makes the presumed distribution of environmental features converge to the true distribution for any level of channel noise; however, SBE is less accurate than MAP estimation. We also discuss another stochastic estimation method, SBE2, which has a smaller estimation error than SBE and exhibits no hysteresis.
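
A minimal numerical sketch of the effect described here (all symbols and values are illustrative, not taken from the paper): in a linear-Gaussian setting with a correlated prior N(0, Σ) and additive channel noise, the MAP estimate equals the posterior mean and shrinks the observation along low-variance prior directions, which is the sense in which it reduces the effective dimensionality; treating the features as independent discards the correlations and degrades the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian prior over d features: strong correlation means
# most of the prior variance lives in a few principal directions.
d = 5
rho = 0.9
Sigma = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # prior covariance
sigma_n = 1.0                                           # channel noise std

# True feature vector drawn from the prior, observed with additive noise.
x_true = rng.multivariate_normal(np.zeros(d), Sigma)
y = x_true + sigma_n * rng.normal(size=d)

# Linear-Gaussian MAP estimate (= posterior mean):
#   x_map = (Sigma^{-1} + I / sigma_n^2)^{-1} (y / sigma_n^2)
A = np.linalg.inv(Sigma) + np.eye(d) / sigma_n**2
x_map = np.linalg.solve(A, y / sigma_n**2)

# Mis-specified estimate that treats each feature independently,
# ignoring the correlations in the prior.
x_indep = y * Sigma[0, 0] / (Sigma[0, 0] + sigma_n**2)

print("MAP error   :", np.linalg.norm(x_map - x_true))
print("indep. error:", np.linalg.norm(x_indep - x_true))
```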


2015 ◽  
Vol 06 (02) ◽  
pp. 1550002
Author(s):  
Pichid Kittisuwan

The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. Distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of Bayesian image denoising algorithms is estimating the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation of the local observed variance, with a Maxwell density as the prior for the local observed variance and a Gaussian distribution for the noisy wavelet coefficients. This choice of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.
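
To illustrate the mechanics, here is a sketch under simplifying assumptions (i.i.d. zero-mean Gaussian noisy coefficients in a local window, and a Maxwell prior p(σ) ∝ σ² exp(−σ²/(2a²)) on the local standard deviation; the scale a and the window are placeholders). Setting the derivative of the log-posterior to zero gives a quadratic in σ², whose positive root is the MAP local variance; a Wiener-style shrinkage can then be applied. This is not claimed to reproduce the paper's exact estimator.

```python
import numpy as np

def map_local_variance(window, a):
    """MAP estimate of the local variance of noisy wavelet coefficients
    under a Maxwell prior p(sigma) ~ sigma^2 * exp(-sigma^2 / (2 a^2)).

    The stationarity condition of the log-posterior is quadratic in
    sigma^2:
        sigma^4 / a^2 + (n - 2) * sigma^2 - sum(y_i^2) = 0,
    and we take the positive root.
    """
    y2 = np.sum(window**2)
    b = window.size - 2
    return 0.5 * a**2 * (-b + np.sqrt(b**2 + 4.0 * y2 / a**2))

def denoise_coeff(y, window, a, sigma_noise):
    """Wiener-style shrinkage of one coefficient: the signal variance is
    the MAP local variance minus the (known) noise variance, floored at 0."""
    var_signal = max(map_local_variance(window, a) - sigma_noise**2, 0.0)
    return var_signal / (var_signal + sigma_noise**2) * y
```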


2021 ◽  
Vol 47 (3) ◽  
pp. 988-998
Author(s):  
Ayoade I. Adewole ◽  
Olusoga A. Fasoranbaku

Bayesian estimation has the advantage of accounting for the uncertainty of all parameter estimates, which permits the use of vague priors. This study focused on determining the quantile range at which the optimal hyperparameter of normally distributed data with vague information can be obtained in Bayesian estimation of linear regression models. A Monte Carlo simulation approach was used to generate a data set of sample size 200. Observation precisions and posterior precisions were estimated from the regression output to determine the posterior mean estimate for each model and to derive the new dependent variables. The variances were divided into 10 equal parts to obtain the hyperparameters of the prior distribution. Average absolute deviation was used for model selection to validate the adequacy of each model. The study revealed that the optimal hyperparameters were located at the 5th and 7th deciles. The research simplifies the process of selecting the hyperparameters of a prior distribution from data with vague information in empirical Bayesian inference.
Keywords: Optimal Hyperparameters; Quantile Ranges; Bayesian Estimation; Vague Prior
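
A hedged sketch of the search procedure described above, with an illustrative simulated model (the regression design, the known observation variance, and the decile grid are assumptions, not the study's exact setup): each candidate prior variance is scored by the average absolute deviation of the posterior-mean fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression data, n = 200 as in the study.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

sigma2 = 1.0                                   # observation variance (treated as known)
best = None
for tau2 in y.var() * np.arange(1, 11) / 10.0:  # variance divided into 10 equal parts
    # Posterior mean of beta under a N(0, tau2 I) prior (ridge-type update).
    A = X.T @ X / sigma2 + np.eye(2) / tau2
    beta_post = np.linalg.solve(A, X.T @ y / sigma2)
    aad = np.mean(np.abs(y - X @ beta_post))    # average absolute deviation
    if best is None or aad < best[0]:
        best = (aad, tau2)

print("best prior variance:", best[1], "with AAD:", best[0])
```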


1997 ◽  
Vol 22 (4) ◽  
pp. 407-424 ◽  
Author(s):  
Alan L. Gross

The posterior distribution of the bivariate correlation (ρxy) is analytically derived given a data set consisting of N1 cases measured on both x and y, N2 cases measured only on x, and N3 cases measured only on y. The posterior distribution is shown to be a function of the subsample sizes, the sample correlation (rxy) computed from the N1 complete cases, a set of four statistics that measure the extent to which the missing data are not missing completely at random, and the specified prior distribution for ρxy. A sampling study suggests that in small (N = 20) and moderate (N = 50) sized samples, posterior Bayesian interval estimates dominate maximum likelihood based estimates in terms of coverage probability and expected interval width when the prior distribution for ρxy is simply uniform on (0, 1). The advantage of the Bayesian method when more informative priors based on beta densities are employed is not as consistent.
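
A minimal sketch of the Bayesian interval idea, using only the N1 complete cases and the Fisher-z approximation to the sampling density of rxy (the paper's exact posterior additionally exploits the x-only and y-only cases and the four missingness statistics; the illustrative values in the final line are assumptions):

```python
import numpy as np

def posterior_interval_rho(r, n1, level=0.95):
    """Grid posterior for rho given the sample correlation r of n1
    complete (x, y) pairs, via the Fisher-z approximation
        z(r) ~ Normal(z(rho), 1 / (n1 - 3)),
    with a flat prior on rho (a beta-based prior could be swapped in)."""
    rho = np.linspace(-0.999, 0.999, 4001)
    like = np.exp(-0.5 * (n1 - 3) * (np.arctanh(r) - np.arctanh(rho))**2)
    post = like / like.sum()        # flat prior: posterior proportional to likelihood
    cdf = np.cumsum(post)
    lo = rho[np.searchsorted(cdf, (1 - level) / 2)]
    hi = rho[np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi

print(posterior_interval_rho(r=0.40, n1=20))   # wide interval in a small sample
```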


2019 ◽  
Vol 52 (3) ◽  
pp. 397-423
Author(s):  
Luc Steinbuch ◽  
Thomas G. Orton ◽  
Dick J. Brus

Area-to-point kriging (ATPK) is a geostatistical method for creating high-resolution raster maps from data on the variable of interest at a much lower resolution. The data set of areal means is often considerably smaller (<50 observations) than the data sets conventionally dealt with in geostatistical analyses. In contemporary ATPK methods, uncertainty in the variogram parameters is not accounted for in the prediction; this issue can be overcome by applying ATPK in a Bayesian framework. Commonly in Bayesian statistics, posterior distributions of model parameters and posterior predictive distributions are approximated by Markov chain Monte Carlo sampling from the posterior, which can be computationally expensive. Therefore, a partly analytical solution is implemented in this paper, in order to (i) explore the impact of the prior distribution on predictions and prediction variances, (ii) investigate whether certain aspects of uncertainty can be disregarded, simplifying the necessary computations, and (iii) test the impact of various model misspecifications. Several approaches using simulated data, aggregated real-world point data, and a case study on aggregated crop yields in Burkina Faso are compared. The prior distribution is found to have minimal impact on the disaggregated predictions. In most cases with known short-range behaviour, an approach that disregards uncertainty in the variogram distance parameter gives a reasonable assessment of prediction uncertainty. However, some severe effects of model misspecification, in the form of overly conservative or optimistic prediction uncertainties, are found, highlighting the importance of model choice or integration into ATPK.
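
A heavily simplified sketch of the "partly analytical" idea: instead of MCMC, evaluate the variogram-parameter posterior on a discrete grid and average the kriging predictions and variances over it. One-dimensional, point-support simple kriging with an exponential covariance stands in for full ATPK here (which requires area-to-area and area-to-point covariances); all data and parameter values are illustrative assumptions.

```python
import numpy as np

def exp_cov(h, rng_):
    """Exponential covariance with unit sill: C(h) = exp(-h / range)."""
    return np.exp(-h / rng_)

def sk_predict(x_obs, y_obs, x0, rng_):
    """Simple kriging (known zero mean, unit sill) at location x0."""
    H = np.abs(x_obs[:, None] - x_obs[None, :])
    K = exp_cov(H, rng_) + 1e-6 * np.eye(x_obs.size)
    k0 = exp_cov(np.abs(x_obs - x0), rng_)
    w = np.linalg.solve(K, k0)
    return w @ y_obs, 1.0 - w @ k0           # prediction, kriging variance

def bayes_predict(x_obs, y_obs, x0, ranges, prior):
    """Average kriging results over a discretised posterior of the
    variogram range parameter (Gaussian likelihood x prior per grid
    point): a partly analytical alternative to MCMC sampling."""
    logp, preds = [], []
    for r, p in zip(ranges, prior):
        H = np.abs(x_obs[:, None] - x_obs[None, :])
        K = exp_cov(H, r) + 1e-6 * np.eye(x_obs.size)
        _, ld = np.linalg.slogdet(K)
        logp.append(-0.5 * (ld + y_obs @ np.linalg.solve(K, y_obs)) + np.log(p))
        preds.append(sk_predict(x_obs, y_obs, x0, r))
    w = np.exp(np.array(logp) - max(logp))
    w /= w.sum()
    mu = sum(wi * m for wi, (m, v) in zip(w, preds))
    var = sum(wi * (v + (m - mu) ** 2) for wi, (m, v) in zip(w, preds))
    return mu, var

x = np.array([0.0, 1.0, 2.5, 4.0])           # observation locations
y = np.array([0.3, 0.8, -0.2, 0.1])          # observed values, zero mean
ranges = np.linspace(0.5, 3.0, 6)            # discretised range parameter
print(bayes_predict(x, y, 1.5, ranges, np.full(6, 1 / 6)))
```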


Motor Control ◽  
2016 ◽  
Vol 20 (3) ◽  
pp. 255-265
Author(s):  
Yin-Hua Chen ◽  
Isabella Verdinelli ◽  
Paola Cesari

This paper carries out a full Bayesian analysis of a data set examined in Chen & Cesari (2015). These data were collected to assess people's ability to evaluate short intervals of time. Chen & Cesari (2015) showed evidence of the existence of two independent internal clocks for evaluating time intervals below and above one second. We reexamine the same question here by performing a complete Bayesian statistical analysis of the data. The Bayesian approach can be used to analyze these data thanks to the specific trial design. Data were obtained from time-range evaluations by two groups of individuals. More specifically, information gathered from a nontrained group (considered the baseline) allowed us to build a prior distribution for the parameter(s) of interest, while data from the trained group determined the likelihood function. This paper's main goals are (i) to show how the Bayesian inferential method can be used in statistical analyses and (ii) to show that the Bayesian methodology gives additional support to the findings presented in Chen & Cesari (2015) regarding the existence of two internal clocks for assessing the duration of time intervals.
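
A minimal sketch of the conjugate reading of this trial design (the normal-normal model, data values, and group sizes are illustrative assumptions, not the paper's actual model): the baseline group fixes the prior for the mean timing error, and the trained group supplies the likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical timing-error data (seconds): the baseline group informs
# the prior, the trained group the likelihood, as in the trial design.
baseline = rng.normal(0.10, 0.05, size=25)
trained = rng.normal(0.04, 0.05, size=25)

# Prior for the trained-group mean, built from the baseline group.
mu0, tau2 = baseline.mean(), baseline.var(ddof=1) / baseline.size

# Conjugate normal-normal update (sampling variance approximated by
# the trained group's sample variance of the mean).
s2 = trained.var(ddof=1) / trained.size
post_var = 1.0 / (1.0 / tau2 + 1.0 / s2)
post_mean = post_var * (mu0 / tau2 + trained.mean() / s2)

print(f"posterior mean {post_mean:.3f}, posterior sd {np.sqrt(post_var):.3f}")
```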


2017 ◽  
Author(s):  
Sara van Erp ◽  
Josine Verhagen ◽  
Raoul P P P Grasman ◽  
Eric-Jan Wagenmakers

We present a data set containing 705 between-study heterogeneity estimates as reported in 61 articles published in Psychological Bulletin from 1990 to 2013. The data set also includes information about the number and type of effect sizes, the Q-statistic, and publication bias. The data set is stored in the Open Science Framework repository and can be used for several purposes: (1) to compare a specific heterogeneity estimate to the distribution of between-study heterogeneity estimates in psychology; (2) to construct an informed prior distribution for between-study heterogeneity in psychology; and (3) to obtain realistic population values for Monte Carlo simulations investigating the performance of meta-analytic methods.
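
A hedged sketch of purpose (2): fit a parametric density to the collection of heterogeneity estimates and use it as an informed prior in a new meta-analysis. The file name, its format, and the lognormal choice are placeholder assumptions, not part of the published data set's documentation.

```python
import numpy as np
from scipy import stats

# Hypothetical loading of the published estimates; the file name and
# layout are placeholders for whatever the OSF repository provides.
tau = np.loadtxt("heterogeneity_estimates.csv", delimiter=",", skiprows=1)
tau = tau[tau > 0]                  # a lognormal needs positive support

# Fit a lognormal to the empirical distribution of tau-hat and use it
# as an informed prior for between-study heterogeneity.
shape, _, scale = stats.lognorm.fit(tau, floc=0)
prior = stats.lognorm(shape, loc=0, scale=scale)
print("informed-prior median for tau:", prior.median())
```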


1994 ◽  
Vol 144 ◽  
pp. 139-141 ◽  
Author(s):  
J. Rybák ◽  
V. Rušin ◽  
M. Rybanský

Fe XIV 530.3 nm coronal emission line observations have been used to estimate the rotation of the green solar corona. A homogeneous data set, created from measurements of the world-wide coronagraphic network, has been examined with the help of correlation analysis to reveal the averaged synodic rotation period as a function of latitude and time over the epoch from 1947 to 1991.

The values of the synodic rotation period obtained for this epoch for the whole range of latitudes and for the latitude band ±30° are 27.52 ± 0.12 days and 26.95 ± 0.21 days, respectively. A differential rotation of the green solar corona, with local period maxima around ±60° and a minimum of the rotation period at the equator, was confirmed. No clear cyclic variation of the rotation was found for the examined epoch, but monotonic trends over some time intervals are presented.

A detailed investigation of the original data and their correlation functions has shown that the existence of sufficiently reliable tracers is not evident for the whole set of examined data. This should be taken into account in future, more precise estimations of the green corona rotation period.
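
As an illustration of the correlation-analysis idea (with synthetic data; the network's actual measurements and reduction procedure are not reproduced here): the synodic period can be read off the first peak of the autocorrelation function of a daily intensity series at a given latitude.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily Fe XIV intensity at one latitude: a ~27.5-day rotation
# signal plus noise stands in for the coronagraphic-network series.
days = np.arange(2000)
series = np.sin(2 * np.pi * days / 27.5) + 0.8 * rng.normal(size=days.size)

# Autocorrelation; the lag of the first clear peak beyond lag 0 is the
# estimate of the synodic rotation period.
x = series - series.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]
lag = 20 + np.argmax(acf[20:40])    # search a window around the expected period
print("estimated synodic period:", lag, "days")
```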


Author(s):  
Jules S. Jaffe ◽  
Robert M. Glaeser

Although difference Fourier techniques are standard in X-ray crystallography, it is only very recently that electron crystallographers have been able to take advantage of this method. We have combined a high-resolution data set for frozen glucose-embedded Purple Membrane (PM) with a data set collected from PM prepared in the frozen hydrated state, in order to visualize any differences in structure due to the different methods of preparation. The increased contrast of protein against ice versus protein against glucose may prove to be an advantage of the frozen hydrated technique for visualizing those parts of bacteriorhodopsin that are embedded in glucose. In addition, surface groups of the protein may be disordered in glucose and ordered in the frozen state. The sensitivity of the difference Fourier technique to small changes in structure provides an ideal method for testing this hypothesis.
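
A minimal sketch of the difference Fourier computation under stated assumptions (two amplitude sets sharing a common set of reference phases; the input arrays are placeholder grids, not the actual PM data):

```python
import numpy as np

def difference_map(F_glucose, F_hydrated, phases):
    """Difference Fourier map from two structure-factor amplitude sets
    that share common reference phases: inverse-transform
    (|F_hydrated| - |F_glucose|) * exp(i * phi). Inputs are placeholder
    2-D grids in standard FFT ordering."""
    dF = (np.abs(F_hydrated) - np.abs(F_glucose)) * np.exp(1j * phases)
    return np.real(np.fft.ifft2(dF))
```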


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of the higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash, and automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope.

The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such "natural" landmarks are available, image processing can often provide useful landmarks.
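
A sketch of one common way to register overlapping partial views, phase correlation, which recovers the translational offset between two tiles. This illustrates offset estimation in montage synthesis generally, not necessarily the paper's landmark-based scheme.

```python
import numpy as np

def phase_correlation_offset(tile_a, tile_b):
    """Estimate the translation between two overlapping views via phase
    correlation: the inverse FFT of the normalised cross-power spectrum
    peaks at the relative shift."""
    Fa, Fb = np.fft.fft2(tile_a), np.fft.fft2(tile_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```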

