Convergence Acceleration: Recently Published Documents

Total documents: 375 (last five years: 39)
H-index: 27 (last five years: 2)

2022
Author(s): Dylan Bayerl, Christopher Michael Andolina, Shyam Dwaraknath, Wissam A. Saidi

Machine learning potentials (MLPs) for atomistic simulations have an enormous prospective impact on materials modeling, offering orders of magnitude speedup over density functional theory (DFT) calculations without appreciably sacrificing accuracy...


Author(s): Arnak V. Poghosyan, Lusine D. Poghosyan, Rafayel H. Barkhudaryan

We investigate the convergence of quasi-periodic approximations in different frameworks and derive exact asymptotic estimates of the corresponding errors. The estimates allow a fair comparison of the quasi-periodic approximations with other classical, well-known approaches. We consider a special realization of the approximations via the inverse of the Vandermonde matrix, which makes it possible to prove the existence of the corresponding implementations, derive explicit formulas, and explore convergence properties. We also show how polynomial corrections can accelerate the convergence of the quasi-periodic approximations. Numerical experiments reveal an auto-correction phenomenon related to the polynomial corrections: surprisingly, using approximate derivatives results in better convergence than expansions with the exact derivatives.
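The abstract does not spell out the construction, so the following is only a minimal, hypothetical sketch of the general idea: approximate a smooth, non-periodic function on [-1, 1] by a trigonometric sum that is periodic on a slightly larger interval [-sigma, sigma], with the coefficients obtained by solving a Vandermonde-type linear system (the "inverse Vandermonde" realization mentioned above). The basis choice, the value of sigma, and the function names are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def quasi_periodic_fit(f, n, sigma=1.1):
    """Hypothetical sketch: fit f on [-1, 1] by the trigonometric sum
    sum_{k=-n..n} c_k * exp(i*pi*k*x/sigma), which is periodic on the
    slightly larger interval [-sigma, sigma] (sigma > 1).  The coefficients
    come from interpolation at 2n+1 equispaced nodes, i.e. from solving a
    Vandermonde-type linear system."""
    m = 2 * n + 1
    x = np.linspace(-1.0, 1.0, m)                # equispaced nodes on [-1, 1]
    z = np.exp(1j * np.pi * x / sigma)           # nodes of the Vandermonde system
    # Vandermonde-type matrix: columns z**k for k = -n..n
    V = np.column_stack([z ** k for k in range(-n, n + 1)])
    c = np.linalg.solve(V, f(x))                 # the "inverse Vandermonde" step
    def approximant(t):
        zt = np.exp(1j * np.pi * np.asarray(t) / sigma)
        return np.real(sum(c[j] * zt ** k for j, k in enumerate(range(-n, n + 1))))
    return approximant

# Usage: approximate a smooth, non-periodic function and check the error
f = lambda x: np.exp(x) * np.sin(3 * x)
approx = quasi_periodic_fit(f, n=16, sigma=1.1)
t = np.linspace(-1, 1, 400)
print("max error on [-1, 1]:", np.max(np.abs(approx(t) - f(t))))
```

Because the basis is periodic on an interval larger than the approximation interval, the fit avoids forcing f(-1) = f(1) and thereby sidesteps the Gibbs-type degradation of a plain Fourier expansion; the polynomial corrections discussed in the paper are a further refinement not shown here.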


2021
Vol. 503 (2), pp. 1897-1914
Author(s): Nicolas Chartier, Benjamin Wandelt, Yashar Akrami, Francisco Villaescusa-Navarro

To exploit the power of next-generation large-scale structure surveys, ensembles of numerical simulations are necessary to give accurate theoretical predictions of the statistics of observables. High-fidelity simulations come at a towering computational cost. Therefore, approximate but fast simulations, surrogates, are widely used to gain speed at the price of introducing model error. We propose a general method that exploits the correlation between simulations and surrogates to compute fast, reduced-variance statistics of large-scale structure observables without model error, at the cost of only a few simulations. We call this approach Convergence Acceleration by Regression and Pooling (CARPool). In numerical experiments with intentionally minimal tuning, we apply CARPool to a handful of GADGET-III N-body simulations paired with surrogates computed using COmoving Lagrangian Acceleration (COLA). We find ∼100-fold variance reduction even in the non-linear regime, up to $k_\mathrm{max} \approx 1.2\, h\,\mathrm{Mpc}^{-1}$ for the matter power spectrum. CARPool realizes similar improvements for the matter bispectrum. In the nearly linear regime, CARPool attains far larger sample variance reductions. By comparing to the 15 000 simulations of the Quijote suite, we verify that the CARPool estimates are unbiased, as guaranteed by construction, even though the surrogate misses the simulation truth by up to 60 per cent at high k. Furthermore, even with a fully configuration-space statistic like the non-linear matter density probability density function, CARPool achieves unbiased variance reduction factors of up to ∼10 without any further tuning. Conversely, CARPool can be used to remove model error from ensembles of fast surrogates by combining them with a few high-accuracy simulations.
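The core of the approach is a control-variates estimator: a small paired simulation/surrogate ensemble estimates the simulation-surrogate covariance, while a large ensemble of cheap surrogate runs pins down the surrogate mean precisely. Below is a minimal sketch of that idea under stated assumptions; the function name, the per-bin (diagonal) choice of the control coefficient, and the toy data are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def carpool_estimate(y_sim, x_paired, x_cheap):
    """Control-variates sketch in the spirit of CARPool (not the authors' code).
    y_sim:    statistic from a few expensive simulations, shape (n_sim, d)
    x_paired: the surrogate statistic from the same initial conditions, shape (n_sim, d)
    x_cheap:  the surrogate statistic from many additional cheap runs, shape (n_cheap, d)
    Returns a reduced-variance estimate of the simulation mean."""
    y_bar = y_sim.mean(axis=0)
    x_bar = x_paired.mean(axis=0)
    mu_x = x_cheap.mean(axis=0)          # precise surrogate mean from many cheap runs
    # Per-bin control coefficient beta = Cov(y, x) / Var(x), estimated from
    # the paired ensemble (the paper also considers a full matrix coefficient).
    dy = y_sim - y_bar
    dx = x_paired - x_bar
    beta = (dy * dx).sum(axis=0) / (dx * dx).sum(axis=0)
    # Correction term has zero expectation for a fixed beta because
    # E[x_paired] = E[x_cheap], so the estimator remains (essentially) unbiased
    # while inheriting the low noise of the large surrogate ensemble.
    return y_bar - beta * (x_bar - mu_x)

# Usage with synthetic, correlated toy data (illustration only)
rng = np.random.default_rng(0)
truth = np.linspace(1.0, 2.0, 10)
x_cheap = 0.6 * truth + 0.05 * rng.standard_normal((5000, 10))
noise = 0.05 * rng.standard_normal((5, 10))
x_paired = 0.6 * truth + noise                      # surrogate is biased but correlated
y_sim = truth + noise + 0.01 * rng.standard_normal((5, 10))
print(carpool_estimate(y_sim, x_paired, x_cheap))   # close to `truth` despite only 5 "simulations"
```

The surrogate's own bias (here the factor 0.6) cancels in the correction term, which is why the estimate tracks the simulation truth even when the surrogate is badly wrong, mirroring the abstract's point about the 60 per cent surrogate error at high k.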

