Using systematic sampling selection for Monte Carlo solutions of Feynman-Kac equations

2008 ◽  
Vol 40 (2) ◽  
pp. 454-472 ◽  
Author(s):  
Ivan Gentil ◽  
Bruno Rémillard

While the convergence properties of many sampling selection methods can be proven, there is one particular sampling selection method introduced in Baker (1987), closely related to ‘systematic sampling’ in statistics, that has been exclusively treated on an empirical basis. The main motivation of the paper is to start to study formally its convergence properties, since in practice it is by far the fastest selection method available. We will show that convergence results for the systematic sampling selection method are related to properties of peculiar Markov chains.
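As a rough Python sketch of the selection step in question (systematic resampling in the style of Baker (1987); a generic illustration, not the authors' exact construction), one might write:

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng()):
    """Systematic resampling: one uniform draw, equally spaced pointers.

    particles : (N, ...) array of particle values
    weights   : (N,) array of non-negative weights summing to 1
    Returns the resampled particle array of the same shape.
    """
    n = len(weights)
    # A single uniform draw positions the whole comb of n equally spaced pointers.
    u0 = rng.uniform(0.0, 1.0 / n)
    pointers = u0 + np.arange(n) / n
    # Select the particle whose cumulative-weight interval contains each pointer.
    cumulative = np.cumsum(weights)
    indices = np.searchsorted(cumulative, pointers)
    return particles[indices]
```

Only one uniform variate is drawn per selection step, which is what makes this kind of selection so much faster than multinomial resampling in practice.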


1998 ◽  
Vol 35 (01) ◽  
pp. 1-11 ◽  
Author(s):  
Gareth O. Roberts ◽  
Jeffrey S. Rosenthal ◽  
Peter O. Schwartz

In this paper, we consider the question of which convergence properties of Markov chains are preserved under small perturbations. Properties considered include geometric ergodicity and rates of convergence. Perturbations considered include roundoff error from computer simulation. We are motivated primarily by interest in Markov chain Monte Carlo algorithms.
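As a purely illustrative toy example (not taken from the paper) of the kind of perturbation considered, one can compare a simple autoregressive chain with a version whose transition is rounded at each step, mimicking roundoff error in computer simulation:

```python
import numpy as np

def simulate_ar1(n_steps, phi=0.5, sigma=1.0, decimals=None, seed=0):
    """Simulate X_{t+1} = phi * X_t + sigma * Z_t, optionally rounding each
    transition to `decimals` places to mimic roundoff perturbation."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        step = phi * x[t - 1] + sigma * rng.standard_normal()
        x[t] = np.round(step, decimals) if decimals is not None else step
    return x

# The exact chain and a coarsely rounded one exhibit similar long-run behaviour;
# whether and when such stability holds is the sort of question the paper formalises.
exact = simulate_ar1(10_000)
rounded = simulate_ar1(10_000, decimals=2)
print(exact.mean(), rounded.mean())
```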


2018 ◽  
Vol 22 (4) ◽  
Author(s):  
Shuxia Ni ◽  
Qiang Xia ◽  
Jinshan Liu

In this paper, we propose and study an effective Bayesian subset selection method for two-threshold variable autoregressive (TTV-AR) models. The usual complexity of model selection is increased by capturing the uncertainty of the two unknown threshold levels and the two unknown delay lags. By using Markov chain Monte Carlo (MCMC) techniques driven by a stochastic search, we can identify the best subset model from a large number of possible choices. Simulation experiments show that the proposed method works very well. In an application to the Hang Seng index, we successfully identify the best subset TTV-AR model.
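To make the model class concrete (a schematic reading with made-up coefficients and thresholds, not the specification used in the paper), a two-threshold AR process with delay lag d can be simulated as follows:

```python
import numpy as np

def simulate_two_threshold_ar(n, r1=-0.5, r2=0.5, d=1,
                              phis=(0.8, 0.1, -0.6), sigma=1.0, seed=1):
    """Simulate a three-regime threshold AR(1): the active coefficient depends
    on whether X_{t-d} falls below r1, between r1 and r2, or above r2."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(max(d, 1), n):
        z = x[t - d]
        if z <= r1:
            phi = phis[0]
        elif z <= r2:
            phi = phis[1]
        else:
            phi = phis[2]
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x
```

A Bayesian subset selection procedure would treat the thresholds, the delay lag, and which lags enter each regime as unknown, and search over that space with MCMC.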


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Xisheng Yu ◽  
Qiang Liu

The paper by Liu (2010) introduces a method termed canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide convergence results for CLM and numerically examine its convergence properties. Then, a comparative analysis is conducted empirically using a large sample of S&P 100 Index (OEX) puts and IBM puts. The convergence results show that choosing shifted Legendre polynomials with four regressors is the more appropriate choice, balancing pricing accuracy against computational cost. With this choice, the CLM method is empirically shown to be superior to the benchmark methods of the binomial tree and finite differences with historical volatilities.
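For illustration of the regression ingredient that CLM shares with the broader least-squares Monte Carlo family (this is only that ingredient, not the full martingale-constrained entropy step, and it assumes prices have been rescaled to [0, 1]), "four regressors" corresponds to the shifted Legendre polynomials P_0 through P_3:

```python
import numpy as np
from numpy.polynomial import legendre

def shifted_legendre_basis(x, degree=3):
    """Evaluate shifted Legendre polynomials P_0..P_degree on [0, 1] at x,
    i.e. Legendre polynomials evaluated at 2x - 1; returns an (n, degree+1) matrix."""
    return legendre.legvander(2.0 * np.asarray(x) - 1.0, degree)

def regress_continuation_value(prices, discounted_payoffs, degree=3):
    """Least-squares step used in LSM-style pricing: regress discounted future
    payoffs on a polynomial basis of the current (rescaled) asset price."""
    basis = shifted_legendre_basis(prices, degree)
    coef, *_ = np.linalg.lstsq(basis, discounted_payoffs, rcond=None)
    return basis @ coef  # fitted continuation values at the current exercise date
```

At each exercise date the fitted continuation values are compared with the immediate exercise payoff to decide whether to exercise along each simulated path.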


1979 ◽  
Vol 21 (2) ◽  
pp. 179-186 ◽  
Author(s):  
R. R. Hill Jr. ◽  
K. T. Leath

Three cycles of selection for resistance to Leptosphaerulina briosiana (Poll.) Graham & Luttrell were conducted in two alfalfa (Medicago sativa L.) germplasm pools, MSA and MSB. Each germplasm pool was used to compare four methods of selection: phenotypic recurrent, half-sib family, full-sib family, and alternating generations of selfed family and half-sib family. Response to selection for resistance to L. briosiana was greater in MSA than in MSB. Differences between selection methods were not significant. Selection for resistance to L. briosiana generally increased resistance to Stemphylium botryosum Wallr., but the magnitude of the correlated response varied with germplasm pool and selection method. The initial selfed families in both germplasm pools were significantly less resistant to Colletotrichum trifolii Bain than the other family types. Resistance to C. trifolii increased with selfed family selection for resistance to L. briosiana in MSA but not in MSB.


Author(s):  
Fatemeh Alighardashi ◽  
Mohammad Ali Zare Chahooki

Improving software product quality through periodic testing before release is one of the most expensive activities in software projects. Because resources for module testing are limited, it is important to identify fault-prone modules and direct the testing effort toward fault prediction in those modules. Software fault predictors based on machine learning algorithms are effective tools for identifying fault-prone modules, and extensive studies in this field seek the connection between the features of software modules and their fault-proneness. Some features used in predictive algorithms are ineffective and reduce the accuracy of the prediction process, so feature selection methods are widely used to increase the performance of prediction models for fault-prone modules. In this study, we propose a feature selection method that selects effective features by combining several filter feature selection methods into a fused weighted filter method. The proposed method improves both the convergence rate of feature selection and the prediction accuracy. Results obtained on ten datasets from NASA and PROMISE indicate the effectiveness of the proposed method in improving the accuracy and convergence of software fault prediction.
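A hedged sketch of the fused weighted filter idea (generic filter scores and weights chosen here for illustration, not the exact combination used in the study) could look like this:

```python
import numpy as np
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

def fused_filter_ranking(X, y, weights=(1.0, 1.0, 1.0)):
    """Combine several filter feature-selection scores into one weighted ranking.
    Each score vector is min-max normalised before the weighted sum."""
    scores = [
        chi2(X, y)[0],              # chi-squared statistic (X must be non-negative)
        f_classif(X, y)[0],         # ANOVA F statistic
        mutual_info_classif(X, y),  # mutual information
    ]
    fused = np.zeros(X.shape[1])
    for w, s in zip(weights, scores):
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalise to [0, 1]
        fused += w * s
    return np.argsort(fused)[::-1]  # feature indices, best first
```

Taking the top-k indices from `fused_filter_ranking(X, y)` would then feed the downstream fault-prediction model.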


Author(s):  
B. Venkatesh ◽  
J. Anuradha

In microarray data, it is difficult to achieve high classification accuracy because of the high dimensionality and the presence of irrelevant and noisy features; such data also contain many gene-expression measurements but few samples. To increase the classification accuracy and the processing speed of the model, an optimal subset of features needs to be extracted, which can be achieved by applying a feature selection method. In this paper, we propose a hybrid ensemble feature selection method. The proposed method has two phases, a filter phase and a wrapper phase. In the filter phase, an ensemble technique aggregates the feature ranks produced by the Relief, minimum redundancy maximum relevance (mRMR), and feature correlation (FC) filter feature selection methods, using a fuzzy Gaussian membership function ordering to aggregate the ranks. In the wrapper phase, improved binary particle swarm optimization (IBPSO) selects the optimal features, with an RBF kernel-based support vector machine (SVM) classifier as the evaluator. The performance of the proposed model is compared with state-of-the-art feature selection methods on five benchmark datasets, using performance metrics such as accuracy, recall, precision, and F1-score. The experimental results show that the proposed method outperforms the other feature selection methods.
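To make the wrapper phase concrete, the following is a minimal sketch of the fitness evaluation that a binary particle swarm search could call (a generic cross-validated RBF-SVM evaluator, not the authors' IBPSO implementation):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate_feature_mask(X, y, mask, cv=5):
    """Wrapper-phase fitness: cross-validated accuracy of an RBF-kernel SVM
    trained only on the features selected by the binary mask."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0  # empty subsets get the worst possible fitness
    clf = SVC(kernel="rbf", gamma="scale")
    return cross_val_score(clf, X[:, mask], y, cv=cv, scoring="accuracy").mean()
```

An IBPSO-style search would then flip bits in the mask and retain the particle achieving the highest returned score.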


2021 ◽  
Vol 31 (4) ◽  
Author(s):  
Rémi Leluc ◽  
François Portier ◽  
Johan Segers
