On Stochastic Optimization and Statistical Learning in Reproducing Kernel Hilbert Spaces by Support Vector Machines (SVM)

Informatica ◽  
2009 ◽  
Vol 20 (2) ◽  
pp. 273-292 ◽  
Author(s):  
Vladimir Norkin ◽  
Michiel Keyzer


Author(s):
Zhaowei Shang ◽
Yuan Yan Tang ◽
Bin Fang ◽
Jing Wen ◽
Yat Zhou Ong

The fusion of wavelet techniques and support vector machines (SVMs) has been studied intensively in recent years. Since wavelet theory is the foundation of multiresolution analysis (MRA), it is natural to ask whether combining MRA with SVMs yields good performance for signal approximation. Based on the fact that the feature space of an SVM and the scale subspaces of an MRA can be viewed as the same reproducing kernel Hilbert space (RKHS), a new algorithm for SVM-based multiresolution signal decomposition and approximation is proposed. The algorithm approximates a signal hierarchically at different resolutions and, because it uses the SVM approximation criterion, produces smoother approximations than conventional MRA. Experiments on stationary and non-stationary signals show that the algorithm approximates better than MRA.
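As a rough illustration of the idea (not the authors' algorithm), the following Python sketch approximates a signal coarse-to-fine by fitting an epsilon-insensitive SVR at each resolution level to the residual left by the coarser levels. The scale schedule (doubling the RBF parameter gamma per level) and all parameter values are illustrative assumptions.

```python
# A minimal sketch of hierarchical (coarse-to-fine) signal approximation
# with SVMs: each level fits an epsilon-insensitive SVR to the residual
# of the coarser levels. Gamma schedule and parameters are assumptions.
import numpy as np
from sklearn.svm import SVR

def multires_svr_approx(x, y, levels=4, gamma0=0.5, C=10.0, eps=0.01):
    """Return per-level SVR models whose predictions sum to the approximation."""
    X = x.reshape(-1, 1)
    residual = y.copy()
    models = []
    for j in range(levels):
        # Finer resolution at each level: the RBF width shrinks as gamma grows.
        svr = SVR(kernel="rbf", gamma=gamma0 * 2 ** j, C=C, epsilon=eps)
        svr.fit(X, residual)
        residual -= svr.predict(X)  # pass the remaining detail to the next level
        models.append(svr)
    return models

# Example: a signal with both a smooth trend and a fast oscillation.
x = np.linspace(0, 1, 400)
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(20 * np.pi * x)
models = multires_svr_approx(x, y)
approx = sum(m.predict(x.reshape(-1, 1)) for m in models)
print("RMSE:", np.sqrt(np.mean((y - approx) ** 2)))
```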


2010 ◽  
Vol 22 (3) ◽  
pp. 793-829 ◽  
Author(s):  
Giorgio Gnecco ◽  
Marcello Sanguineti

Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are sought. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
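One concrete way to form such sparse n-term solutions is a greedy, matching-pursuit-style scheme over kernel units centered at the training points. The sketch below is an illustration of that general idea, not the paper's construction; the RBF kernel, the correlation-based selection rule, and the step count are all assumed choices.

```python
# A minimal sketch of building a sparse approximant as a linear combination
# of n kernel units, matching-pursuit style: at each step, add the unit
# most correlated with the current residual. Illustrative assumptions only.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_kernel_fit(X, y, n_units=10, gamma=1.0):
    K = rbf_kernel(X, X, gamma)          # columns = candidate units k(., x_j)
    residual = y.astype(float).copy()
    chosen, coefs = [], []
    for _ in range(n_units):
        # Pick the unit with the largest normalized correlation with the residual.
        scores = np.abs(K.T @ residual) / np.linalg.norm(K, axis=0)
        j = int(np.argmax(scores))
        c = (K[:, j] @ residual) / (K[:, j] @ K[:, j])  # 1-D least squares
        residual -= c * K[:, j]
        chosen.append(j)
        coefs.append(c)
    return chosen, np.array(coefs)

X = np.random.default_rng(0).uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
idx, c = greedy_kernel_fit(X, y, n_units=15, gamma=5.0)
resid = y - rbf_kernel(X, X[idx], 5.0) @ c
print("residual norm after 15 units:", round(float(np.linalg.norm(resid)), 4))
```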


2013 ◽  
Vol 11 (05) ◽  
pp. 1350020 ◽  
Author(s):  
HONGWEI SUN ◽  
QIANG WU

We study the asymptotic properties of indefinite kernel networks with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with dependence measured by a strong mixing condition. Using a kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are derived.
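A minimal sketch of coefficient regularization with an indefinite kernel follows: the estimator f(x) = Σ_j c_j k(x, x_j) penalizes the coefficient vector directly, so the linear system stays well posed even when the kernel matrix is not positive definite. The sigmoid (tanh) kernel and the regularization value are illustrative assumptions; the paper's dependent-sampling setting is not modeled here.

```python
# A minimal sketch of coefficient-regularized regression with an indefinite
# kernel: the l2 penalty acts on the coefficient vector c, not on an RKHS
# norm, so positive definiteness of k is not needed. Choices are assumptions.
import numpy as np

def sigmoid_kernel(X, Z, a=1.0, b=0.0):
    # tanh(a <x, z> + b) is in general an indefinite kernel
    return np.tanh(a * (X @ Z.T) + b)

def fit_coef_regularized(X, y, lam=0.1):
    K = sigmoid_kernel(X, X)
    m = len(y)
    # Minimize (1/m)||Kc - y||^2 + lam ||c||^2 via the normal equations.
    # K^T K + lam*m*I is positive definite even when K itself is not.
    return np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)

def predict(X_train, c, X_new):
    return sigmoid_kernel(X_new, X_train) @ c

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sign(X[:, 0]) * np.sqrt(np.abs(X[:, 0])) + 0.05 * rng.normal(size=100)
c = fit_coef_regularized(X, y, lam=0.01)
print("train MSE:", float(np.mean((predict(X, c, X) - y) ** 2)))
```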


2014 ◽  
Vol 9 (4) ◽  
pp. 827-931 ◽  
Author(s):  
Joseph A. Ball ◽  
Dmitry S. Kaliuzhnyi-Verbovetskyi ◽  
Cora Sadosky ◽  
Victor Vinnikov
