Wavelet Basis Selection for Regression by Cross-Validation

Author(s):
Seth A. Greenblatt


2012 ◽
Vol 2012 ◽
pp. 1-18
Author(s):
Ali Al-Kenani ◽
Keming Yu

We propose a cross-validation method for the smoothing of kernel quantile estimators. In particular, the proposed method selects the bandwidth parameter, which is known to play a crucial role in kernel smoothing, by unbiased estimation of a mean integrated squared error curve whose minimiser determines the optimal bandwidth. This method is shown to lead to asymptotically optimal bandwidth choice, and we also provide some general theory on the performance of optimal, data-based methods of bandwidth choice. The numerical performance of the proposed methods is compared in simulations, and the new bandwidth selector is demonstrated to work very well.
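For illustration, the following is a minimal Python sketch of bandwidth selection by cross-validation for a simple Gaussian-kernel quantile estimator. It is not the unbiased-MISE criterion proposed above: the estimator, the leave-one-out pinball-loss score, and the bandwidth grid are all assumptions made for this example.

```python
import numpy as np

def kernel_quantile(sample, p, h):
    """Gaussian-kernel quantile estimator: a kernel-weighted average of the
    order statistics, with weights centred on the probability level p."""
    x = np.sort(sample)
    n = len(x)
    t = (np.arange(1, n + 1) - 0.5) / n         # plotting positions
    w = np.exp(-0.5 * ((t - p) / h) ** 2)       # Gaussian kernel weights
    return np.sum(w * x) / np.sum(w)

def pinball_loss(y, q, p):
    """Check (pinball) loss of a quantile estimate q at level p."""
    return np.mean(np.where(y >= q, p * (y - q), (1 - p) * (q - y)))

def cv_bandwidth(sample, p, grid):
    """Leave-one-out CV: return the bandwidth with the smallest held-out
    pinball loss (a stand-in for an unbiased MISE estimate)."""
    scores = []
    for h in grid:
        losses = [pinball_loss(np.array([sample[i]]),
                               kernel_quantile(np.delete(sample, i), p, h), p)
                  for i in range(len(sample))]
        scores.append(np.mean(losses))
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(0)
data = rng.standard_normal(200)
h_opt = cv_bandwidth(data, p=0.9, grid=np.linspace(0.02, 0.5, 25))
print("selected bandwidth:", h_opt)
print("0.9-quantile estimate:", kernel_quantile(data, 0.9, h_opt))
```

The grid search simply returns the bandwidth whose held-out loss is smallest; in practice the grid endpoints would be chosen relative to the spread of the sample.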


2020 ◽  
Vol 10 (9) ◽  
pp. 3291
Author(s):  
Jesús F. Pérez-Gómez ◽  
Juana Canul-Reich ◽  
José Hernández-Torruco ◽  
Betania Hernández-Ocaña

Requiring only a few relevant characteristics from patients when diagnosing bacterial vaginosis is highly useful for physicians, since it makes collecting these data less time consuming. It also yields a dataset in which patients can be diagnosed more accurately from a subset of informative or relevant features than from the entire feature set. This is therefore a feature selection (FS) problem. In this work, decision tree and Relief algorithms were used as feature selectors. Experiments were conducted on a real bacterial vaginosis dataset with 396 instances and 252 features/attributes, obtained from universities located in Baltimore and Atlanta. The FS algorithms produced feature rankings, and the top fifteen features formed a new dataset that was used as input to both support vector machine (SVM) and logistic regression (LR) classifiers. For performance evaluation, averages over 30 runs of 10-fold cross-validation were reported, with balanced accuracy, sensitivity, and specificity as performance measures. Performance using the full feature set was compared against performance using only the top fifteen features. The attributes in our rankings were similar to those reported in the literature. This study is part of ongoing research investigating a range of feature selection and classification methods.
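As a rough sketch of the pipeline described above (rank features, keep the top fifteen, classify with SVM and LR under repeated 10-fold cross-validation scored by balanced accuracy), the following uses scikit-learn on synthetic data. The synthetic dataset, the use of decision-tree impurity importances as the ranking, and the reduced number of repeats are assumptions; the clinical dataset and the authors' exact protocol are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in matching the reported shape (396 patients, 252 features).
X, y = make_classification(n_samples=396, n_features=252, n_informative=15,
                           random_state=0)

# Rank features with a decision tree and keep the top fifteen.
# (For a leakage-free estimate, the ranking would be refit inside each fold.)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top15 = np.argsort(tree.feature_importances_)[::-1][:15]
X_top = X[:, top15]

# Repeated stratified 10-fold CV (3 repeats here instead of 30 runs),
# scored by balanced accuracy, for both classifiers.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
for name, clf in [("SVM", SVC()), ("LR", LogisticRegression(max_iter=1000))]:
    full = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
    top = cross_val_score(clf, X_top, y, cv=cv, scoring="balanced_accuracy")
    print(f"{name}: all 252 features {full.mean():.3f} | top 15 {top.mean():.3f}")
```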


2013 ◽  
Vol 17 (1) ◽  
pp. 27-36 ◽  
Author(s):  
Todor Ganchev ◽  
Mihalis Siafarikas ◽  
Iosif Mporas ◽  
Tsenka Stoyanova

2004 ◽  
Vol 13 (04) ◽  
pp. 791-800 ◽  
Author(s):  
Holger Fröhlich ◽
Olivier Chapelle ◽
Bernhard Schölkopf

The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) offer a natural way to solve this problem. In this paper we present a special Genetic Algorithm that explicitly takes into account the existing bounds on the generalization error of Support Vector Machines (SVMs). The new approach is compared to the traditional method of performing cross-validation and to other existing algorithms for feature selection.
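The following is a minimal Python sketch of GA-based feature selection wrapped around an SVM. It is a toy under stated assumptions: synthetic data, truncation selection, one-point crossover, and bit-flip mutation, with cross-validated accuracy as the fitness rather than the SVM generalization-error bounds used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=150, n_features=30, n_informative=6,
                           random_state=1)

def fitness(mask):
    """Fitness of a binary feature mask: 5-fold CV accuracy of a linear SVM
    on the selected features.  (The paper's GA instead uses bounds on the
    SVM generalization error, which are not reproduced here.)"""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=5).mean()

def ga_select(n_features, pop_size=20, generations=15, p_mut=0.05):
    pop = rng.random((pop_size, n_features)) < 0.5   # random initial masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        keep = np.argsort(scores)[::-1][:pop_size // 2]
        parents = pop[keep]                           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < p_mut   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

best = ga_select(X.shape[1])
print("selected features:", np.flatnonzero(best))
print("CV accuracy of best mask:", round(fitness(best), 3))
```

Replacing the cross-validation fitness with a bound on the SVM's generalization error avoids the cost of repeated refitting, which is the distinguishing idea of the bound-based approach.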


2008 ◽  
Vol 59 (4) ◽  
pp. 819-825 ◽  
Author(s):  
Roger Nana ◽  
Tiejun Zhao ◽  
Keith Heberlein ◽  
Stephen M. LaConte ◽  
Xiaoping Hu
