Over-relaxed hit-and-run Monte Carlo for the uniform sampling of convex bodies with applications in metabolic network biophysics

2015 ◽  
Vol 26 (01) ◽  
pp. 1550010
Author(s):  
G. De Concini ◽  
D. De Martino

The uniform sampling of convex regions in high dimension is an important computational issue, from both a theoretical and an applied point of view. Hit-and-run Monte Carlo algorithms are the most efficient methods known to perform it, and one of their bottlenecks lies in the difficulty of escaping from tight corners in high dimension. Inspired by optimized Monte Carlo methods used in statistical mechanics, we define a new algorithm by over-relaxing the hit-and-run dynamics. We performed numerical simulations on high-dimensional simplexes and hypercubes in order to test its performance, pointing out its improved ability to escape from tight corners, and finally we apply it to an inference problem in the steady-state dynamics of metabolic networks.
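The basic hit-and-run step that the paper over-relaxes can be sketched as follows. This is a minimal illustration on the unit hypercube only, with hypothetical function and variable names; the paper's over-relaxed move, which replaces the uniform draw along the chord, is indicated only in a comment:

```python
import math
import random

def hit_and_run_hypercube(x, n_steps, rng=random):
    """Standard hit-and-run sampling of the unit hypercube [0,1]^d.

    From the current point, pick a uniform random direction, intersect
    the resulting line with the body to get a chord, and jump to a
    uniformly random point on that chord. The over-relaxed variant of
    the paper would instead bias the move toward the reflection of the
    current point across the chord's midpoint (not implemented here).
    """
    d = len(x)
    x = list(x)
    for _ in range(n_steps):
        # Uniform random direction on the unit sphere (Gaussian trick).
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        # Chord: largest interval [t_lo, t_hi] with x + t*u inside [0,1]^d.
        t_lo, t_hi = -math.inf, math.inf
        for xi, ui in zip(x, u):
            if abs(ui) > 1e-12:
                a, b = -xi / ui, (1.0 - xi) / ui
                t_lo = max(t_lo, min(a, b))
                t_hi = min(t_hi, max(a, b))
        # Standard move: uniform point on the chord (t = 0 is the current point).
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * ui for xi, ui in zip(x, u)]
    return x
```

For a general convex body, only the chord computation changes; the tight-corner problem arises because near a corner the chords become very short, so the walk moves slowly.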

2018 ◽  
Vol 24 (4) ◽  
pp. 225-247 ◽  
Author(s):  
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and studied theoretically: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constants of the non-linearity should not be too high, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where some solutions can be computed by deep-learning methods.
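The nesting idea underlying the method can be illustrated on the simplest case, estimating a non-linear functional of a conditional expectation, E[f(E[Y|X])]. This is a toy sketch with hypothetical names, not the paper's PDE scheme (where the nesting depth is tied to the time discretisation); it does show the controllable bias the abstract refers to, since the inner sample size governs the bias through the non-linearity:

```python
import random

def nested_mc(outer_n, inner_n, f, sample_x, sample_y_given_x, rng=random):
    """Plain nested Monte Carlo estimate of E[f(E[Y|X])].

    Outer loop: draw X. Inner loop: estimate E[Y|X] with inner_n samples.
    Apply the non-linearity f to the inner estimate and average.
    The inner-loop noise passes through f and creates a bias that
    shrinks as inner_n grows.
    """
    total = 0.0
    for _ in range(outer_n):
        x = sample_x(rng)
        inner = sum(sample_y_given_x(x, rng) for _ in range(inner_n)) / inner_n
        total += f(inner)
    return total / outer_n

# Toy check: X ~ N(0,1), Y|X = X + N(0,1), so E[Y|X] = X and
# E[(E[Y|X])^2] = E[X^2] = 1 (up to a bias of order 1/inner_n).
rng = random.Random(42)
est = nested_mc(
    outer_n=2000, inner_n=200,
    f=lambda m: m * m,
    sample_x=lambda r: r.gauss(0.0, 1.0),
    sample_y_given_x=lambda x, r: x + r.gauss(0.0, 1.0),
    rng=rng,
)
```

For the quadratic f above, the bias is exactly Var(Y|X)/inner_n = 1/200, which is why increasing the inner sample size controls it.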


Biometrika ◽  
2020 ◽  
Vol 107 (4) ◽  
pp. 1005-1012 ◽  
Author(s):  
Deborshee Sen ◽  
Matthias Sachs ◽  
Jianfeng Lu ◽  
David B Dunson

Summary Classification with high-dimensional data is of widespread interest and often involves dealing with imbalanced data. Bayesian classification approaches are hampered by the fact that current Markov chain Monte Carlo algorithms for posterior computation become inefficient as the number $p$ of predictors or the number $n$ of subjects to classify gets large, because of the increasing computational time per step and worsening mixing rates. One strategy is to employ a gradient-based sampler to improve mixing while using data subsamples to reduce the per-step computational complexity. However, the usual subsampling breaks down when applied to imbalanced data. Instead, we generalize piecewise-deterministic Markov chain Monte Carlo algorithms to include importance-weighted and mini-batch subsampling. These maintain the correct stationary distribution with arbitrarily small subsamples and substantially outperform current competitors. We provide theoretical support for the proposed approach and demonstrate its performance gains in simulated data examples and an application to cancer data.
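The importance-weighted subsampling ingredient can be sketched in isolation. This is a minimal illustration with hypothetical names of how an unbiased estimate of a full-data sum is built from a weighted subsample; the paper embeds such estimators inside the event rates of piecewise-deterministic Markov chain Monte Carlo, which is not reproduced here:

```python
import random

def iw_subsample_sum(values, probs, batch, rng=random):
    """Unbiased importance-weighted estimate of sum(values).

    Draw indices i with probability probs[i] (probs summing to 1) and
    average values[i] / probs[i] over the mini-batch. For imbalanced
    data, choosing probs[i] larger for rare-class observations keeps
    them represented in every subsample while the 1/probs[i] weights
    preserve unbiasedness.
    """
    n = len(values)
    idx = rng.choices(range(n), weights=probs, k=batch)
    return sum(values[i] / probs[i] for i in idx) / batch
```

With naive uniform subsampling of severely imbalanced data, most mini-batches contain no minority-class points at all, which is the breakdown the abstract describes.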


Entropy ◽  
2018 ◽  
Vol 20 (2) ◽  
pp. 110 ◽  
Author(s):  
Yosra Marnissi ◽  
Emilie Chouzenoux ◽  
Amel Benazza-Benyahia ◽  
Jean-Christophe Pesquet

1988 ◽  
Vol 102 ◽  
pp. 79-81
Author(s):  
A. Goldberg ◽  
S.D. Bloom

Abstract Closed expressions for the first, second, and (in some cases) the third moment of atomic transition arrays now exist. Recently a method has been developed for getting to very high moments (up to the 12th and beyond) in cases where a "collective" state-vector (i.e. a state-vector containing the entire electric dipole strength) can be created from each eigenstate in the parent configuration. Both of these approaches give exact results. Herein we describe a statistical (or Monte Carlo) approach which requires only one representative state-vector |RV> for the entire parent manifold to get estimates of transition moments of high order. The representation is achieved through the random amplitudes associated with each basis vector making up |RV>. This also gives rise to the dispersion characterizing the method, which has been applied to a system (in the M shell) with ≈250,000 lines, where we have calculated up to the 5th moment. It turns out that the dispersion in the moments decreases with the size of the manifold, making its application to very big systems statistically advantageous. A discussion of the method and these dispersion characteristics will be presented.
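A toy analogue of the random-amplitude representative-vector idea is stochastic trace estimation: a single vector with independent random ±1 amplitudes gives an unbiased estimate of matrix moments, since E[v·H^k v] = tr(H^k). This sketch uses hypothetical names and a generic symmetric matrix in place of the atomic-physics operators of the paper:

```python
import random

def matvec(h, v):
    """Multiply matrix h (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in h]

def moment_estimate(h, k, n_vectors, rng=random):
    """Estimate the normalized k-th moment tr(H^k) / dim(H).

    Each trial builds a vector of independent random +/-1 amplitudes,
    applies H to it k times, and forms the quadratic form v . H^k v,
    whose expectation is tr(H^k). The spread over trials plays the
    role of the dispersion discussed in the abstract.
    """
    d = len(h)
    total = 0.0
    for _ in range(n_vectors):
        v = [rng.choice((-1.0, 1.0)) for _ in range(d)]
        w = v
        for _ in range(k):
            w = matvec(h, w)
        total += sum(vi * wi for vi, wi in zip(v, w))
    return total / (n_vectors * d)
```

The variance of such estimators typically shrinks relative to the mean as the dimension grows, which mirrors the abstract's observation that the dispersion decreases with the size of the manifold.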


2021 ◽  
Vol 2 (2) ◽  
pp. 132-151
Author(s):  
Vito Vitali ◽  
Florent Chevallier ◽  
Alexis Jinaphanh ◽  
Andrea Zoia ◽  
Patrick Blaise

Modal expansions based on k-eigenvalues and α-eigenvalues are commonly used in order to investigate the reactor behaviour, each with a distinct point of view: the former is related to fission generations, whereas the latter is related to time. Well-known Monte Carlo methods exist to compute the direct k or α fundamental eigenmodes, based on variants of the power iteration. The possibility of computing adjoint eigenfunctions in continuous-energy transport has been recently implemented and tested in the development version of TRIPOLI-4®, using a modified version of the Iterated Fission Probability (IFP) method for the adjoint α calculation. In this work we present a preliminary comparison of direct and adjoint k and α eigenmodes by Monte Carlo methods, for small deviations from criticality. When the reactor is exactly critical, i.e., for k0 = 1 or equivalently α0 = 0, the fundamental modes of both eigenfunction bases coincide, as expected on physical grounds. However, for non-critical systems the fundamental k and α eigenmodes show significant discrepancies.
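The power iteration that the Monte Carlo k-eigenvalue methods are variants of can be shown in its deterministic matrix form. This is a generic sketch with hypothetical names, standing in for the generation-by-generation neutron transport of codes like TRIPOLI-4®:

```python
import math

def power_iteration(m, n_iter=200):
    """Deterministic analogue of the Monte Carlo power iteration.

    Repeatedly apply the (fission) matrix to a source vector and
    renormalize; the normalization factor converges to the dominant
    eigenvalue (the multiplication factor k0 in the reactor setting),
    and the vector converges to the fundamental mode.
    """
    d = len(m)
    v = [1.0 / math.sqrt(d)] * d
    k = 0.0
    for _ in range(n_iter):
        w = [sum(m[i][j] * v[j] for j in range(d)) for i in range(d)]
        k = math.sqrt(sum(c * c for c in w))
        v = [c / k for c in w]
    return k, v
```

In Monte Carlo transport, the matrix-vector product is replaced by simulating one fission generation, and k is estimated from the ratio of neutron populations between successive generations; the adjoint eigenfunctions the paper computes require the modified Iterated Fission Probability machinery rather than this forward iteration.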


2021 ◽  
pp. 108041
Author(s):  
C.U. Schuster ◽  
T. Johnson ◽  
G. Papp ◽  
R. Bilato ◽  
S. Sipilä ◽  
...  
