Importance Sampling
Recently Published Documents

TOTAL DOCUMENTS: 1870 (five years: 334)
H-INDEX: 60 (five years: 10)

2022, pp. 108455
Author(s): Fernando Llorente, Luca Martino, Jesse Read, David Delgado

2021, Vol 2021, pp. 1-11
Author(s): Fanyu Meng, Wei Shao, Yuxia Su

Simplicial depth (SD) plays an important role in discriminant analysis, hypothesis testing, machine learning, and engineering computations. However, computing simplicial depth is challenging because the exact algorithm is an NP problem in the dimension d and the sample size n. The standard approximate algorithm for simplicial depth computation is highly inefficient, especially in high-dimensional cases. In this study, we design an importance sampling algorithm for the computation of simplicial depth. As an advanced Monte Carlo method, the proposed algorithm outperforms other approximate and exact algorithms in accuracy and efficiency, as shown by experiments on simulated and real data. Furthermore, we illustrate the robustness of simplicial depth in regression analysis through a concrete physical data experiment.
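The abstract does not spell out the proposal distribution used by the paper's importance sampling scheme, so the sketch below only shows the plain Monte Carlo baseline it improves on: simplicial depth is estimated as the fraction of randomly drawn (d+1)-point simplices from the sample that contain the query point. The function names and NumPy usage are illustrative assumptions, not the authors' code.

```python
import numpy as np

def in_simplex(x, vertices):
    """Check whether point x lies in the simplex spanned by its d+1 vertices.

    Solves for barycentric coordinates lam with V @ lam = x - v0;
    x is inside iff all lam_i >= 0 and sum(lam) <= 1.
    """
    A = (vertices[1:] - vertices[0]).T          # columns are v_i - v_0
    try:
        lam = np.linalg.solve(A, x - vertices[0])
    except np.linalg.LinAlgError:
        return False                            # degenerate simplex
    return bool(np.all(lam >= 0) and lam.sum() <= 1)

def simplicial_depth_mc(x, sample, n_draws=10_000, rng=None):
    """Plain Monte Carlo estimate of the simplicial depth of x w.r.t. sample.

    Draws random (d+1)-subsets of the sample and counts how often the
    resulting simplex contains x.
    """
    rng = np.random.default_rng(rng)
    n, d = sample.shape
    hits = 0
    for _ in range(n_draws):
        idx = rng.choice(n, size=d + 1, replace=False)
        hits += in_simplex(x, sample[idx])
    return hits / n_draws

# Example: depth of the origin w.r.t. a standard-normal sample in 2D
data = np.random.default_rng(0).standard_normal((200, 2))
print(simplicial_depth_mc(np.zeros(2), data, n_draws=5_000, rng=1))
```

An importance sampling variant, as proposed in the paper, would replace the uniform draws over (d+1)-subsets with a proposal that favours simplices likely to contain x, reweighting each hit by the corresponding importance weight.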


2021, Vol 32 (1)
Author(s): Juan Kuntz, Francesca R. Crucinio, Adam M. Johansen

We introduce a class of Monte Carlo estimators that aim to overcome the rapid growth of variance with dimension often observed for standard estimators by exploiting the target's independence structure. We identify the most basic incarnations of these estimators with a class of generalized U-statistics and thus establish their unbiasedness, consistency, and asymptotic normality. Moreover, we show that they achieve the minimum possible variance amongst a broad class of estimators, and we investigate their computational cost and delineate the settings in which they are most efficient. We exemplify the merger of these estimators with other well-known Monte Carlo estimators so as to better adapt the latter to the target's independence structure and improve their performance. We do this via three simple mergers: one with importance sampling, another with importance sampling squared, and a final one with pseudo-marginal Metropolis–Hastings. In all cases, we show that the resulting estimators are well founded and achieve lower variances than their standard counterparts. Lastly, we illustrate the various variance reductions through several examples.
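The estimators themselves are not reproduced in the abstract. As a rough illustration of the idea behind the merger with importance sampling, the sketch below compares a standard importance sampling estimator with one that exploits a fully factorized target, proposal, and test function by recombining per-coordinate weights, which implicitly averages over all N² re-paired samples in the spirit of a generalized U-statistic. All distributions and names here are assumptions chosen for illustration, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000

# Toy factorized target p(x, y) = N(x; 1, 1) * N(y; -1, 1) and test
# function f(x, y) = x * y, so E_p[f] = 1 * (-1) = -1.
# Proposal q(x, y) = N(x; 0, 2^2) * N(y; 0, 2^2), also factorized.
x = rng.normal(0.0, 2.0, N)
y = rng.normal(0.0, 2.0, N)

def normal_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

wx = normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 2.0)   # per-coordinate weights
wy = normal_pdf(y, -1.0, 1.0) / normal_pdf(y, 0.0, 2.0)

# Standard importance sampling: uses only the N paired draws (x_i, y_i).
standard = np.mean(wx * wy * x * y)

# Exploiting the independence structure: since target, proposal and f all
# factorize, the estimator splits into a product of one-dimensional averages,
# implicitly using every one of the N^2 recombined pairs.
structured = np.mean(wx * x) * np.mean(wy * y)

print(f"standard IS   : {standard: .3f}")
print(f"structured IS : {structured: .3f}   (true value: -1.0)")
```

Both estimators are unbiased in this toy setting; the structured one typically has lower variance because each one-dimensional average uses all N draws of its coordinate independently, which is the kind of variance reduction the abstract describes.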

