Conditional Probability Density
Recently Published Documents

TOTAL DOCUMENTS: 77 (five years: 22)
H-INDEX: 15 (five years: 2)

2021 ◽  
Vol 15 (1) ◽  
pp. 280-288
Author(s):  
Mahdi Rezapour ◽  
Khaled Ksaibati

Background: Kernel-based methods have gained popularity because the distribution of a model's residuals may not follow any classical parametric distribution. Kernel-based methods have been extended to estimate conditional densities, rather than conditional distributions, when the data include both discrete and continuous attributes. These methods rely on smoothing parameters whose optimal values are chosen per attribute. Thus, if an explanatory variable is independent of the dependent variable, the nonparametric method effectively drops that attribute by assigning it a large smoothing parameter, smoothing it toward a uniform distribution so that its contribution to the model's variance is minimal. Objectives: The objective of this study was to identify factors contributing to the severity of pedestrian crashes using an unbiased method. In particular, the study evaluated the applicability of semi- and nonparametric kernel-based techniques to a crash dataset by means of confusion matrices. Methods: Two kernel-based methods, one nonparametric and one semi-parametric, were implemented to model the severity of pedestrian crashes. Estimation of the semi-parametric densities is based on adaptive local smoothing and maximization of a quasi-likelihood function, which is somewhat similar to the likelihood of the binary logit model. The nonparametric method, on the other hand, selects optimal smoothing parameters for estimating the conditional probability density function by minimizing the mean integrated squared error (MISE). The performance of these models was evaluated by their predictive power, with standard logistic regression employed as a benchmark. Although these methods have been employed in other fields, this is one of the earliest studies to apply them in the context of traffic safety. Results: The results highlighted that, based on the confusion matrices, the nonparametric kernel-based method outperforms both the semi-parametric (single-index) model and the standard logit model. To examine how the bandwidth selection method removes irrelevant attributes in the nonparametric approach, we added noisy predictors to the models and compared the results. The methodological approach of the models is discussed extensively in the body of the study. Conclusion: In summary, alcohol and drug involvement, driving on a non-level grade, and poor lighting conditions are among the factors that increase the likelihood of a severe pedestrian crash. This is one of the earliest studies to implement these methods in the context of transportation problems. The nonparametric method is especially recommended in the field of traffic safety when there is uncertainty about the importance of predictors, as the technique automatically drops unimportant ones.
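To make the bandwidth-based variable removal concrete, here is a minimal sketch of mixed-type conditional density estimation with cross-validated bandwidth selection. The synthetic data, variable names, and the choice of statsmodels' KDEMultivariateConditional are illustrative assumptions, not the paper's implementation; the point is that an irrelevant predictor should receive a very large selected bandwidth.

```python
# A minimal sketch, assuming synthetic data in place of the (non-public)
# crash dataset. 'cv_ls' selects bandwidths by least-squares
# cross-validation, the criterion tied to minimizing MISE.
import numpy as np
from statsmodels.nonparametric.kernel_density import KDEMultivariateConditional

rng = np.random.default_rng(0)
n = 300
x_cont = rng.normal(size=n)                # relevant continuous predictor
x_disc = rng.integers(0, 2, size=n)        # relevant binary predictor
x_noise = rng.normal(size=n)               # irrelevant "noisy" predictor
y = (x_cont + x_disc + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

kde = KDEMultivariateConditional(
    endog=[y],
    exog=[x_cont, x_disc, x_noise],
    dep_type='u',        # binary severity outcome: unordered discrete
    indep_type='cuc',    # continuous, unordered discrete, continuous
    bw='cv_ls',          # cross-validated least-squares bandwidths
)

# A very large bandwidth on x_noise indicates that the irrelevant
# predictor has been smoothed toward a uniform distribution, i.e. dropped.
print("selected bandwidths:", kde.bw)
```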


Author(s):  
Hien Duy Nguyen ◽  
TrungTin Nguyen ◽  
Faicel Chamroukhi ◽  
Geoffrey John McLachlan

Abstract. Mixture of experts (MoE) models are widely applied to conditional probability density estimation problems. We demonstrate the richness of the class of MoE models by proving denseness results in Lebesgue spaces, when the input and output variables are both compactly supported. We further prove an almost uniform convergence result when the input is univariate. Auxiliary lemmas are proved regarding the richness of the soft-max gating function class and its relationship to the class of Gaussian gating functions.
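For concreteness, here is the generic form of the MoE conditional density the abstract refers to, in notation assumed here (the paper's exact parameterization may differ): K experts with soft-max gating and Gaussian expert densities.

```latex
% MoE conditional density with soft-max gating (notation assumed here).
\[
  f(y \mid x) \;=\; \sum_{k=1}^{K} g_k(x)\,\phi\bigl(y;\,\mu_k(x),\,\Sigma_k\bigr),
  \qquad
  g_k(x) \;=\; \frac{\exp\bigl(a_k + b_k^{\top}x\bigr)}
                    {\sum_{\ell=1}^{K}\exp\bigl(a_\ell + b_\ell^{\top}x\bigr)},
\]
```

where $\phi(\cdot;\mu,\Sigma)$ denotes a Gaussian density. The denseness results state that, on compact supports, such mixtures can approximate any conditional density in the relevant Lebesgue space as $K$ grows.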


Author(s):  
George Britten-Neish

Abstract. Clark (Journal of Consciousness Studies, 25(3–4), 71–87, 2018) worries that predictive processing (PP) accounts of perception introduce a puzzling disconnect between the content of personal-level perceptual states and their underlying subpersonal representations. According to PP, in perception the brain encodes information about the environment in conditional probability density distributions over causes of sensory input. But perceptual experience seems to present us with only one way the world is at a time. If perception is at bottom probabilistic, shouldn't this aspect of subpersonally represented content show up in consciousness? To address this worry, Clark argues that the representations underlying personal-level content are constrained by the need to provide a single action-guiding take on the environment. However, this proposal rests on a conception of the nature of agency, famously articulated by Davidson (1980a, b), that is inconsistent with a view of the mind as embodied and extended. Since Clark and other enactivist PP theorists present the extended mind as an important consequence of the predictive framework, the proposal is in tension with his overall view. I claim that this inconsistency could be resolved either by retaining the Davidsonian view of action and abandoning the embodied-extended approach, or by adopting a more processual, world-involving account of agency and perceptual experience than Clark currently endorses. To solve the puzzle he raises, Clark must become a radical enactivist or a consistent internalist.


Author(s):  
Geir Evensen

Abstract. It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters, multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed to be perfect, with no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model by the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that accounts for this data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better covers the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the magnitude of the update. The paper also shows the consistency of the subspace inversion scheme of Evensen (Ocean Dyn. 54, 539–560, 2004) in the case of correlated measurement errors and demonstrates its accuracy when using a "larger" ensemble of perturbations to represent the measurement error covariance matrix.
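Schematically, with $\theta$ the static model parameters, $d$ the rate data, and $e$ the rate-data errors (notation assumed here, not the paper's), the two formulations contrast as follows.

```latex
% Standard formulation: condition the parameters on the data alone,
% treating the forced simulation model as perfect.
\[
  f(\theta \mid d) \;\propto\; f(d \mid \theta)\, f(\theta)
\]
% Revised formulation: the simulator is forced by the error-laden rates,
% so the rate-data errors e become additional unknowns to update.
\[
  f(\theta, e \mid d) \;\propto\; f(d \mid \theta, e)\, f(\theta)\, f(e)
\]
```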


2021 ◽  
Vol 8 ◽  
Author(s):  
Sotaro Fuchigami ◽  
Toru Niina ◽  
Shoji Takada

Atomic force microscopy (AFM) is a powerful tool for imaging the structures of molecules bound to surfaces. To gain high-resolution structural information, one often superimposes structure models on the measured images. Motivated by the high flexibility of biomolecules, we previously developed a flexible-fitting molecular dynamics (MD) method that allows protein structural changes during superimposition. Since the AFM image depends strongly on the geometry of the AFM probe tip, the fitting process requires accurate estimation of tip-related parameters. Here, we performed Bayesian statistical inference to estimate the tip radius of the AFM probe from a given AFM image via flexible-fitting MD simulations. We first sampled conformations of the nucleosome that fit the reference AFM image well by flexible fitting with various tip radii. We then estimated the optimal tip parameter by maximizing the conditional probability density of the AFM image produced from the fitted structure.
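As an illustration of the selection step, here is a minimal sketch that grids over candidate tip radii and picks the one maximizing a Gaussian image likelihood. The noise model, the grid, and simulate_afm_image (a hypothetical stand-in for rendering a pseudo-AFM image from the fitted structure) are assumptions for illustration, not the authors' code.

```python
# A minimal sketch, assuming independent Gaussian pixel noise so that
# maximizing the conditional probability density of the image reduces
# to maximizing a log-likelihood over a grid of tip radii.
import numpy as np

def simulate_afm_image(radius, shape=(64, 64)):
    # Hypothetical stand-in: the real workflow renders the flexibly
    # fitted nucleosome as imaged with a tip of the given radius.
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * radius ** 2))

def log_likelihood(reference, simulated, sigma=0.1):
    # Independent Gaussian noise of std sigma on each pixel (assumed).
    resid = reference - simulated
    return -0.5 * np.sum((resid / sigma) ** 2)

reference = simulate_afm_image(radius=12.0)   # stand-in for the measured image
radii = np.linspace(5.0, 20.0, 31)            # candidate tip radii
scores = [log_likelihood(reference, simulate_afm_image(r)) for r in radii]
print("estimated tip radius:", radii[int(np.argmax(scores))])
```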


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Liyun Su ◽  
Meini Li ◽  
Shengli Zhao ◽  
Ting Xie

This paper combines a distributed sensor fusion system with signal detection under chaotic noise to achieve distributed sensor fusion detection against a chaotic background. First, based on the short-term predictability of the chaotic signal and its sensitivity to small perturbations, phase space reconstruction is carried out on the observation signal of each sensor. Second, a distributed-sensor local linear autoregressive (DS-LLAR) model is constructed to obtain the one-step prediction error of each sensor. Then, a Bayesian risk model is constructed and the conditional probability density function under each sensor's hypothesis test is obtained, which first requires fitting the distribution of the prediction errors via parameter estimation. Finally, a fusion optimization algorithm is designed based on the Bayesian fusion criterion, and the optimal decision rule of each sensor and the optimal fusion rule of the fusion center are solved jointly to effectively detect the weak pulse signal in the observation signal. Simulation experiments show that the proposed method, which combines distributed sensors with a local linear model, can effectively detect weak pulse signals against a chaotic background.
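The first step, phase space reconstruction, is standard time-delay embedding; a minimal sketch follows, with the embedding dimension m and delay tau assumed (the abstract does not fix them).

```python
# A minimal sketch of time-delay embedding for phase space reconstruction.
import numpy as np

def delay_embed(x, m, tau):
    """Rows are delay vectors [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack(
        [x[(m - 1 - k) * tau:(m - 1 - k) * tau + n] for k in range(m)]
    )

# Toy observation: a smooth signal plus noise, standing in for one
# sensor's observation against a chaotic background.
obs = np.sin(0.05 * np.arange(2000)) + 0.1 * np.random.default_rng(1).normal(size=2000)
Z = delay_embed(obs, m=5, tau=3)   # each row is one reconstructed phase point
print(Z.shape)                     # (1988, 5)
```

A local linear AR model such as DS-LLAR is then fitted in this reconstructed space to produce the one-step prediction errors.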


2020 ◽  
Vol 70 (5) ◽  
pp. 1211-1230
Author(s):  
Abdus Saboor ◽  
Hassan S. Bakouch ◽  
Fernando A. Moala ◽  
Sheraz Hussain

Abstract. In this paper, a bivariate extension of the exponentiated Fréchet distribution is introduced, namely the bivariate exponentiated Fréchet (BvEF) distribution, whose marginals are univariate exponentiated Fréchet distributions. Several properties of the proposed distribution are discussed, such as the joint survival function, joint probability density function, marginal probability density functions, conditional probability density function, moments, and marginal and bivariate moment generating functions. The proposed distribution is obtained via the Marshall-Olkin survival copula. Estimation of the parameters is investigated by maximum likelihood with the observed information matrix. In addition to maximum likelihood estimation, we consider Bayesian inference and least-squares estimation, and compare these three methodologies for the BvEF distribution. A simulation study is carried out to compare the performance of the estimators under the presented estimation methods. The proposed bivariate distribution, along with other related bivariate distributions, is fitted to a real-life paired data set. It is shown that the BvEF distribution has superior performance among the compared distributions according to several goodness-of-fit tests.
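For orientation, one common parameterization of the construction (parameter names assumed here; the paper's may differ): plug exponentiated Fréchet survival marginals into the Marshall-Olkin survival copula.

```latex
% Exponentiated Frechet survival marginals (one common parameterization):
\[
  \bar{F}_i(x) \;=\; \bigl\{1 - \exp\bigl[-(\sigma_i/x)^{\lambda_i}\bigr]\bigr\}^{\alpha_i},
  \qquad x > 0,\; i = 1, 2.
\]
% Joint survival function via the Marshall-Olkin survival copula:
\[
  \bar{H}(x, y) \;=\; \min\bigl(\bar{F}_1(x)^{1-\theta_1}\,\bar{F}_2(y),\;
                                \bar{F}_1(x)\,\bar{F}_2(y)^{1-\theta_2}\bigr),
  \qquad \theta_1, \theta_2 \in [0, 1].
\]
```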


2020 ◽  
Vol 5 (3) ◽  
pp. 1211-1223
Author(s):  
Christian Behnken ◽  
Matthias Wächter ◽  
Joachim Peinke

Abstract. The most intermittent behaviour of atmospheric turbulence is found on very short timescales. Based on a concatenation of conditional probability density functions (cpdfs) of nested wind speed increments, inspired by a Markov process in scale, we derive a short-time predictor for wind speed fluctuations around a non-stationary mean value and with a corresponding non-stationary variance. As a new feature, this short-time predictor enables a multipoint reconstruction of wind data. The cpdfs used are (1) estimated directly from historical data from the offshore research platform FINO1 and (2) obtained from numerical solutions of a family of Fokker–Planck equations in the scale domain, whose explicit forms are estimated from the given wind data. Good agreement is found between the statistics of the generated synthetic and the measured wind speed fluctuations, even on timescales below 1 s. This shows that our approach captures the short-time dynamics of real wind speed fluctuations very well. Our method is extended by taking into account the non-stationarity of the mean wind speed and its non-stationary variance.
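For reference, the generic form of a Fokker–Planck equation in scale used in this line of work is sketched below (sign and scaling conventions vary between papers); the drift and diffusion coefficients are the "explicit forms" estimated from the wind data.

```latex
% Fokker-Planck equation in scale r for the increment xi
% (convention assumed here; papers differ in sign/scaling):
\[
  -\,\partial_r\, p(\xi \mid \xi_0;\, r)
  \;=\;
  \Bigl[ -\,\partial_\xi\, D^{(1)}(\xi, r)
         \;+\; \partial_\xi^2\, D^{(2)}(\xi, r) \Bigr]\,
  p(\xi \mid \xi_0;\, r)
\]
```

Chaining the resulting cpdfs across nested scales yields the multipoint short-time predictor.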


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1387
Author(s):  
Chun-Ping Shieh ◽  
Shih-Hung Yang ◽  
Yu-Shun Liu ◽  
Yun-Ting Kuo ◽  
Yu-Chun Lo ◽  
...  

Electroencephalography (EEG)-based brain-computer interfaces (BCIs) translate motor imagery commands into movements of an external device (e.g., a robotic arm). The automatic design of spectral and spatial filters is challenging, as the frequency bands of the spectral filters must be predefined from previously published studies, and trials may be affected by artifacts and improper motor imagery (MI). This study aimed to eliminate contaminated trials automatically during classifier training and to learn the spectral and spatial patterns simultaneously, without the need for predefined frequency bands. Whereas previous studies measured the discriminative power of a frequency band by mutual information, this study used the difference between the class-conditional probability density functions of the two MI classes. This information was also shared to measure the contamination level of each trial, which simplified the computation. A particle-based approximation technique iteratively constructed a filter bank that extracted discriminative features while simultaneously removing potentially contaminated trials. The particle weights were estimated by an analysis-of-variance F-test instead of the mutual information commonly used in previous studies. Experimental results on a publicly available dataset revealed that the proposed method outperformed the other BCI methods in terms of classification accuracy. Asymmetrical spatial patterns were found for left- versus right-hand MI classification. The learnt spectral and spatial patterns were consistent with prior neurophysiological knowledge.
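To illustrate the scoring step, here is a minimal sketch of weighting one candidate frequency band by an ANOVA F-test on log band-power features. The toy epochs, band edges, and sampling rate are assumptions; in the paper this score weights the particles that iteratively build the filter bank.

```python
# A minimal sketch: per-channel F-test scores for one candidate band,
# computed on toy two-class motor-imagery epochs (trials x channels x samples).
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f_oneway

def band_power(epochs, lo, hi, fs):
    # 4th-order Butterworth band-pass, zero-phase filtering, log power.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))   # trials x channels

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate (Hz)
left = rng.normal(size=(40, 8, 2 * fs))    # toy class-1 epochs
right = rng.normal(size=(40, 8, 2 * fs))   # toy class-2 epochs

# Per-channel F-statistics for a candidate 8-12 Hz band.
F, p = f_oneway(band_power(left, 8, 12, fs), band_power(right, 8, 12, fs))
print(F.shape, F.round(2))
```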

