Monte Carlo Sampling
Recently Published Documents


TOTAL DOCUMENTS: 753 (five years: 228)

H-INDEX: 47 (five years: 7)

2022 ◽  
Author(s):  
Umesh Khaniya ◽  
Junjun Mao ◽  
Rongmei Wei ◽  
Marilyn Gunner

Proteins are polyelectrolytes, with acidic or basic amino acids making up ≈25% of the residues. The protonation states of all Asp, Glu, Arg, Lys, His and other protonatable residues, cofactors and ligands define each protonation microstate. As these residues will not all be fully ionized or fully neutral, proteins exist in a mixture of microstates. The microstate distribution changes with pH. As the protein environment modifies the proton affinity of each site, the distribution may also change in different reaction intermediates or as ligands are bound. Particular protonation microstates may be required for function, while others exist simply because there are many states with similar energy. Here, the protonation microstates generated by Monte Carlo sampling in MCCE are characterized in HEW lysozyme as a function of pH and in bacterial photosynthetic reaction centers (RCs) in different reaction intermediates. The lowest-energy and highest-probability microstates are compared. The ∆G, ∆H and ∆S between the four protonation states of Glu35 and Asp52 in lysozyme are shown to be calculated with reasonable precision. A weighted Pearson correlation analysis identifies coupling between residue protonation states in RCs and how they change when the quinone in the QB site is reduced.
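As a rough illustration of the kind of Monte Carlo sampling over protonation microstates described above, the sketch below runs a Metropolis walk over a handful of protonatable sites; the site energies, couplings and temperature factor are placeholder values, not the MCCE electrostatic energy function.

```python
import numpy as np

# Illustrative sketch of Metropolis Monte Carlo over protonation microstates.
# Site energies and pairwise couplings are placeholder values, not MCCE's
# electrostatic energy function.
rng = np.random.default_rng(0)
n_sites = 6                                   # e.g. a handful of Asp/Glu/His sites
dG_site = rng.normal(0.0, 2.0, n_sites)       # kcal/mol cost of protonating each site
coupling = np.triu(rng.normal(0.0, 0.5, (n_sites, n_sites)), 1)   # site-site terms
kT = 0.593                                    # kcal/mol at 298 K

def energy(state):
    return dG_site @ state + state @ coupling @ state

state = rng.integers(0, 2, n_sites)           # 1 = protonated, 0 = deprotonated
samples = []
for step in range(20000):
    trial = state.copy()
    i = rng.integers(n_sites)
    trial[i] ^= 1                             # flip one site's protonation
    if rng.random() < np.exp(-(energy(trial) - energy(state)) / kT):
        state = trial
    samples.append(state.copy())

samples = np.array(samples[5000:])            # discard burn-in
print("site protonation probabilities:", samples.mean(axis=0))
# A weighted Pearson correlation of the sampled site occupancies would then
# reveal coupling between residue protonation states, as in the abstract.
```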


Author(s):  
Satoshi Hayakawa ◽  
Ken’ichiro Tanaka

Abstract In this paper, we investigate the application of mathematical optimization to the construction of a cubature formula on Wiener space, which is a weak approximation method for stochastic differential equations introduced by Lyons and Victoir (Proc R Soc Lond A 460:169–198, 2004). After giving a brief review of the cubature theory on Wiener space, we show that a cubature formula of general dimension and degree can be obtained through Monte Carlo sampling and linear programming. This paper also includes an extension of the stochastic Tchakaloff theorem, which technically yields the proof of our primary result.
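The sample-then-solve recipe can be illustrated in a finite-dimensional setting: draw Monte Carlo candidate points, then use linear programming to find non-negative weights whose polynomial moments match the target measure. The sketch below does this for a one-dimensional standard Gaussian, not the Wiener-space construction itself; all sizes and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: find nonnegative weights on Monte Carlo sample points so that
# polynomial moments up to a given degree match the target distribution.
rng = np.random.default_rng(1)
degree = 5
xs = rng.standard_normal(400)                        # Monte Carlo candidate points

# Exact moments E[X^k] of a standard Gaussian: 0 for odd k, (k-1)!! for even k.
def gauss_moment(k):
    return 0.0 if k % 2 else np.prod(np.arange(k - 1, 0, -2), dtype=float)

A_eq = np.vstack([xs ** k for k in range(degree + 1)])   # moment constraints
b_eq = np.array([gauss_moment(k) for k in range(degree + 1)])

# Any feasible nonnegative solution is a cubature rule; the objective is a
# placeholder since the total weight is pinned to 1 by the k = 0 constraint.
res = linprog(c=np.ones(len(xs)), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
w = res.x
support = w > 1e-10
print(f"cubature nodes kept: {support.sum()} of {len(xs)}")
print("max moment error:", np.abs(A_eq[:, support] @ w[support] - b_eq).max())
```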


2021 ◽  
Author(s):  
◽  
Mashall Aryan

The solution to many science and engineering problems involves identifying the minimum or maximum of an unknown continuous function whose evaluation inflicts non-negligible costs in terms of resources such as money, time, human attention or computational processing. In such a case, the choice of new points to evaluate is critical. A successful approach has been to choose these points by considering a distribution over plausible surfaces, conditioned on all previous points and their evaluations. In this sequential two-step strategy, known as Bayesian Optimization, a prior is first defined over possible functions and updated to a posterior in light of the available observations. Then, using this posterior, namely the surrogate model, an infill criterion is formed and used to find the next location to sample from. By far the most common prior distribution and infill criterion are the Gaussian Process and Expected Improvement, respectively.

The popularity of Gaussian Processes in Bayesian optimization is partially due to their ability to represent the posterior in closed form. Nevertheless, the Gaussian Process is afflicted with several shortcomings that directly affect its performance. For example, inference scales poorly with the amount of data, numerical stability degrades with the number of data points, and strong assumptions about the observation model are required, which might not be consistent with reality. These drawbacks encourage us to seek better alternatives. This thesis studies the application of neural networks to enhance Bayesian Optimization. It proposes several Bayesian optimization methods that use neural networks either as their surrogates or in the infill criterion.

This thesis introduces a novel Bayesian Optimization method in which Bayesian Neural Networks are used as a surrogate. This reduces the computational complexity of inference in the surrogate from cubic in the number of observations, as for the GP, to linear. Different variations of Bayesian Neural Networks (BNNs) are put into practice and inferred using Monte Carlo sampling. The results show that the Monte Carlo Bayesian Neural Network surrogate could perform better than, or at least comparably to, Gaussian Process-based Bayesian optimization methods on a set of benchmark problems.

This work also develops a fast Bayesian Optimization method with an efficient surrogate-building process. This new Bayesian Optimization algorithm uses Bayesian Random-Vector Functional Link Networks as surrogates. In this family of models, inference is performed on only a small subset of the model parameters, and the rest are randomly drawn from a prior. The proposed methods are tested on a set of benchmark continuous functions and hyperparameter optimization problems, and the results show they are competitive with state-of-the-art Bayesian Optimization methods.

This study further proposes a novel neural-network-based infill criterion, in which locations to sample from are found by minimizing the joint conditional likelihood of the new point and the parameters of a neural network. The results show that, in Bayesian Optimization methods with Bayesian Neural Network surrogates, this new infill criterion outperforms Expected Improvement.

Finally, this thesis presents order-preserving generative models and uses them in a variational Bayesian context to infer Implicit Variational Bayesian Neural Network (IVBNN) surrogates for a new Bayesian Optimization method. This new inference mechanism is more efficient and scalable than Monte Carlo sampling. The results show that the IVBNN could outperform the Monte Carlo BNN in Bayesian optimization of the hyperparameters of machine learning models.
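A minimal sketch of the core step shared by the Monte Carlo BNN surrogates described above: estimating Expected Improvement from posterior predictive draws and picking the next point to evaluate. The posterior draws here are a toy stand-in for samples from a trained Bayesian Neural Network, not the thesis's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_expected_improvement(posterior_samples, best_f):
    """Monte Carlo estimate of Expected Improvement (minimization).

    posterior_samples: array of shape (n_draws, n_candidates), e.g. predictions
    from sampled Bayesian Neural Network weights at the candidate locations.
    """
    improvement = np.maximum(best_f - posterior_samples, 0.0)
    return improvement.mean(axis=0)

# Toy stand-in for BNN posterior draws at 100 candidate points: each "draw" is
# the true 1-D objective plus draw-specific noise (a placeholder for a real
# posterior over network weights).
candidates = np.linspace(-2.0, 2.0, 100)
true_f = np.sin(3 * candidates) + candidates ** 2
draws = true_f + rng.normal(0.0, 0.3, size=(200, candidates.size))

best_observed = 0.5                          # best objective value seen so far
ei = mc_expected_improvement(draws, best_observed)
next_x = candidates[np.argmax(ei)]           # infill: evaluate where EI is largest
print("next point to evaluate:", round(float(next_x), 3))
```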


Author(s):  
Marks Legkovskis ◽  
Peter J Thomas ◽  
Michael Auinger

Abstract We summarise the results of a computational study of Uncertainty Quantification (UQ) in a benchmark turbulent burner flame simulation. UQ analysis of this simulation makes it possible to examine the convergence performance of one of the most widely used uncertainty propagation techniques, Polynomial Chaos Expansion (PCE), at varying levels of system smoothness. This is possible because, in the burner flame simulations, the smoothness of the time-dependent temperature, which is the study's quantity of interest (QoI), is found to evolve with the flame development state. This analysis is important because PCE is known to be unable to accurately surrogate non-smooth QoIs and thus to perform convergent UQ. While this restriction is known and accounted for, it has not been established whether there is a quantifiable scaling relationship between PCE's convergence metrics and the level of the QoI's smoothness. We find that the level of QoI smoothness can be quantified by its standard deviation, allowing the effect of the QoI's smoothness on PCE's convergence performance to be observed. For our flow scenario, there exists a power-law relationship between a comparative parameter, defined to measure PCE's convergence performance relative to Monte Carlo sampling, and the QoI's standard deviation, which allows a more informed choice of uncertainty propagation technique.
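A minimal sketch of the comparison being made: a one-dimensional Polynomial Chaos Expansion in the probabilists' Hermite basis versus plain Monte Carlo sampling for the mean and standard deviation of a smooth QoI. The toy QoI stands in for the burner-flame temperature; the basis order and sample counts are illustrative.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)

def qoi(xi):
    # Smooth toy QoI of a standard-normal input, standing in for the temperature.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# PCE by spectral projection with Gauss-HermiteE quadrature (weight exp(-x^2/2)).
order = 6
nodes, weights = He.hermegauss(order + 1)
weights = weights / np.sqrt(2.0 * np.pi)               # normalise to the N(0,1) measure
coeffs = np.array([
    np.sum(weights * qoi(nodes) * He.hermeval(nodes, np.eye(order + 1)[n])) / factorial(n)
    for n in range(order + 1)
])
pce_mean = coeffs[0]
pce_std = np.sqrt(sum(factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1)))

# Plain Monte Carlo reference.
xi = rng.standard_normal(200_000)
print("PCE mean/std:", pce_mean, pce_std)
print("MC  mean/std:", qoi(xi).mean(), qoi(xi).std())
```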


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xuanjia Zuo ◽  
Liang Wang ◽  
Huizhong Lin ◽  
Sanku Dey ◽  
Li Yan

In this paper, the interest is in inference for Weibull products when the available data are obtained via generalized progressive hybrid censoring. This testing scheme exercises the products of interest in a more flexible way and allows failure data to be collected under more efficient and adaptable experimental scenarios than traditional lifetime testing. When the latent lifetime of the products follows a Weibull distribution, classical and Bayesian inferences are considered for the unknown parameters. The existence and uniqueness of the maximum likelihood estimates are established, and approximate confidence intervals are constructed via asymptotic theory. Bayes point estimates as well as credible intervals for the parameters are obtained, and a corresponding Monte Carlo sampling technique is provided for the complex posterior computation. Extensive numerical analysis is carried out, and the results show that generalized progressive hybrid censoring is an adaptive procedure for practical lifetime experiments, that both the proposed classical and Bayesian inferential approaches perform satisfactorily, and that the Bayesian results are superior to the conventional likelihood estimates.
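A minimal sketch of the Monte Carlo sampling step for a Weibull posterior, assuming simple type-I right censoring rather than the paper's generalized progressive hybrid scheme; the priors, sample size and proposal scales are illustrative.

```python
import numpy as np

# Sketch: random-walk Metropolis sampling of a Weibull (shape k, scale lam)
# posterior from right-censored data.
rng = np.random.default_rng(4)
true_k, true_lam, tau = 1.8, 10.0, 12.0
t = true_lam * rng.weibull(true_k, 60)
obs = np.minimum(t, tau)                       # observed times
delta = (t <= tau).astype(float)               # 1 = failure observed, 0 = censored

def log_post(log_k, log_lam):
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = obs / lam
    loglik = np.sum(delta * (np.log(k / lam) + (k - 1) * np.log(z)) - z ** k)
    return loglik - 0.5 * (log_k ** 2 + log_lam ** 2) / 100.0   # weak normal priors

theta = np.array([0.0, np.log(obs.mean())])
lp, chain = log_post(*theta), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.1, 2)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.exp(np.array(chain[5000:]))         # back to (shape, scale) space
print("posterior mean shape, scale:", chain.mean(axis=0))
print("95% credible interval for shape:", np.percentile(chain[:, 0], [2.5, 97.5]))
```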


2021 ◽  
pp. 110898
Author(s):  
Julia Konrad ◽  
Ionuţ-Gabriel Farcaş ◽  
Benjamin Peherstorfer ◽  
Alessandro Di Siena ◽  
Frank Jenko ◽  
...  

2021 ◽  
Vol 2128 (1) ◽  
pp. 012015
Author(s):  
Mohammed Ezzat Helal ◽  
Manal Ezzat Helal ◽  
Professor Sherif Fadel Fahmy

Abstract We investigate molecular gene expression studies and public databases for disease modelling using Probabilistic Graphical Models and Bayesian inference. A case study on Spinal Muscular Atrophy Genome-Wide Association Study results is modelled and analyzed. The genes up- and down-regulated in two stages of the disease development are linked to prior knowledge published in the public domain, and a co-expression network is created and analyzed. The molecular pathways triggered by these genes are identified. The Bayesian inference posterior distributions are estimated using a variational analytical algorithm and a Markov chain Monte Carlo sampling algorithm. Assumptions, limitations and possible future work are discussed.
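A toy sketch of the two posterior-estimation routes mentioned, an analytical (conjugate) update and Markov chain Monte Carlo, on a Beta-Bernoulli model loosely standing in for a gene's up/down-regulation probability; the data and prior are illustrative and unrelated to the SMA study.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.binomial(1, 0.7, size=40)              # toy up(1)/down(0) regulation calls
a0, b0 = 1.0, 1.0                              # Beta(1,1) prior

# Analytical route: conjugate Beta posterior update.
a_post, b_post = a0 + x.sum(), b0 + (1 - x).sum()
print("analytic posterior mean:", a_post / (a_post + b_post))

# MCMC route: random-walk Metropolis on p, with the same uniform prior.
def log_post(p):
    if p <= 0.0 or p >= 1.0:
        return -np.inf
    return x.sum() * np.log(p) + (1 - x).sum() * np.log(1 - p)

p, lp, chain = 0.5, log_post(0.5), []
for _ in range(10000):
    prop = p + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        p, lp = prop, lp_prop
    chain.append(p)
print("MCMC posterior mean:", np.mean(chain[2000:]))
```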


2021 ◽  
Vol 2021 (12) ◽  
Author(s):  
Rikkert Frederix ◽  
Timea Vitos

Abstract We investigate the next-to-leading-colour (NLC) contributions to the colour matrix in the fundamental and colour-flow decompositions for tree-level processes with all gluons, with one quark pair, and with two quark pairs. By analytical examination of the colour factors, we find the non-zero elements of the colour matrix at NLC. At this colour order, together with the symmetry of the phase space, the scaling of the number of contributing dual amplitudes with the number of partons participating in the scattering process is reduced from factorial to polynomial. This opens a path to an accurate tree-level matrix element generator from which all factorial complexity is removed, without resorting to Monte Carlo sampling over colour.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Pedram Tavadze ◽  
Reese Boucher ◽  
Guillermo Avendaño-Franco ◽  
Keenan X. Kocan ◽  
Sobhit Singh ◽  
...  

Abstract Density-functional theory is widely used to predict the physical properties of materials. However, it usually fails for strongly correlated materials. A popular solution is to use the Hubbard correction to treat strongly correlated electronic states. Unfortunately, the values of the Hubbard U and J parameters are initially unknown, and they can vary from one material to another. In this semi-empirical study, we explore the U and J parameter space of a group of iron-based compounds to simultaneously improve the prediction of physical properties (volume, magnetic moment, and bandgap). We used Bayesian calibration assisted by Markov chain Monte Carlo sampling for three different exchange-correlation functionals (LDA, PBE, and PBEsol). We found that LDA requires the largest U correction. PBE has the smallest standard deviation, and its U and J parameters are the most transferable to other iron-based compounds. Lastly, PBE predicts lattice parameters reasonably well without the Hubbard correction.
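A minimal sketch of Bayesian calibration of (U, J) by Markov chain Monte Carlo: `predict_properties` below is a cheap placeholder response surface standing in for full DFT+U calculations (in practice one would use precomputed results or a surrogate), and the target values and uncertainties are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def predict_properties(U, J):
    # Placeholder response surface: volume (A^3), magnetic moment (mu_B), gap (eV).
    return np.array([75.0 + 0.8 * U - 1.5 * J,
                     3.0 + 0.15 * U,
                     0.2 + 0.25 * U - 0.1 * J])

target = np.array([78.0, 3.6, 1.1])            # "experimental" reference values
sigma = np.array([1.0, 0.2, 0.2])              # assumed observation uncertainties

def log_post(U, J):
    if not (0.0 <= U <= 10.0 and 0.0 <= J <= 2.0):   # uniform prior box on (U, J)
        return -np.inf
    resid = (predict_properties(U, J) - target) / sigma
    return -0.5 * np.sum(resid ** 2)

theta = np.array([4.0, 0.5])
lp, chain = log_post(*theta), []
for _ in range(30000):
    prop = theta + rng.normal(0.0, [0.3, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[5000:])
print("posterior mean U, J:", chain.mean(axis=0))
print("posterior std  U, J:", chain.std(axis=0))
```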

