Mercer kernels
Recently Published Documents

TOTAL DOCUMENTS: 20 (FIVE YEARS: 2)
H-INDEX: 7 (FIVE YEARS: 1)

Author(s): Moritz Moeller, Tino Ullrich

Abstract: In this paper we study $$L_2$$-norm sampling discretization and sampling recovery of complex-valued functions in RKHS on $$D \subset \mathbb{R}^d$$ based on random function samples. We only assume a finite trace of the kernel (Hilbert–Schmidt embedding into $$L_2$$) and provide several concrete estimates with precise constants for the corresponding worst-case errors. In general, our analysis does not need any additional assumptions and also covers the case of non-Mercer kernels as well as non-separable RKHS. The failure probability is controlled and decays polynomially in n, the number of samples. Under the mild additional assumption of separability we observe improved rates of convergence related to the decay of the singular values. Our main tool is a spectral norm concentration inequality for infinite complex random matrices with independent rows, complementing earlier results by Rudelson, Mendelson, Pajor, Oliveira and Rauhut.
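As a rough illustration of the least-squares recovery from random function samples that such worst-case error estimates describe, here is a minimal sketch on $$D = [0, 1]$$; the Gaussian kernel, target function, sample size, and regularization below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch: least-squares recovery of a function on D = [0, 1]
# from n random samples, using a Gaussian (Mercer) kernel as a stand-in
# for the RKHS kernel. Kernel, target f, n and the regularization are
# illustrative assumptions only.

rng = np.random.default_rng(0)

def k(x, y, gamma=10.0):
    """Gaussian kernel k(x, y) = exp(-gamma * (x - y)^2) on 1-D inputs."""
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

f = lambda x: np.sin(2 * np.pi * x)      # target function to recover
n = 50                                   # number of random samples
x_train = rng.uniform(0.0, 1.0, size=n)  # random sample points in D
y_train = f(x_train)

# Least-squares solution in the span of k(., x_i), mildly regularized
K = k(x_train, x_train)
alpha = np.linalg.solve(K + 1e-8 * np.eye(n), y_train)

# Evaluate the recovered function and an empirical L2 error on a fine grid
x_test = np.linspace(0.0, 1.0, 1000)
f_hat = k(x_test, x_train) @ alpha
l2_error = np.sqrt(np.mean((f_hat - f(x_test)) ** 2))
print(f"empirical L2 error: {l2_error:.3e}")
```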


2015, Vol 52 (3), pp. 459-468
Author(s): Anthony Bourrier, Florent Perronnin, Rémi Gribonval, Patrick Pérez, Hervé Jégou

2014, Vol 94, pp. 421-433
Author(s): Carlos Figuera, Óscar Barquero-Pérez, José Luis Rojo-Álvarez, Manel Martínez-Ramón, Alicia Guerrero-Curieses, et al.

Author(s): Hongwei Sun, Ping Liu

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels, so it has stronger approximation ability than the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates via the covering number method.
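A minimal sketch of regularized least-squares regression over a hypothesis space built from two Mercer kernels, approximated here by the sum kernel; the kernel choices, synthetic data, and regularization parameter are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Minimal sketch: regularized least-squares regression with two Mercer
# kernels, combined here as the sum kernel K1 + K2 (itself a Mercer
# kernel). Kernels, data, and lambda are illustrative assumptions only.

rng = np.random.default_rng(1)

def gaussian_kernel(X, Y, gamma=5.0):
    return np.exp(-gamma * (X[:, None] - Y[None, :]) ** 2)

def poly_kernel(X, Y, degree=3):
    return (1.0 + X[:, None] * Y[None, :]) ** degree

n = 80
X = rng.uniform(-1.0, 1.0, size=n)
y = np.sinc(3 * X) + 0.05 * rng.standard_normal(n)   # noisy target

lam = 1e-2
K = gaussian_kernel(X, X) + poly_kernel(X, X)         # sum of two Mercer kernels
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)   # regularized least squares

# Predict on new points with the same combined kernel
X_new = np.linspace(-1.0, 1.0, 200)
y_hat = (gaussian_kernel(X_new, X) + poly_kernel(X_new, X)) @ alpha
```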


2012, Vol 2012, pp. 1-8
Author(s): Vsevolod Yugov, Itsuo Kumazawa

We describe and analyze a simple and effective two-step online boosting algorithm that allows us to use highly effective gradient descent-based methods developed for online SVM training without the need to fine-tune the kernel parameters, and we demonstrate its efficiency in several experiments. Our method is similar to AdaBoost in that it trains additional classifiers according to the weights provided by previously trained classifiers, but unlike AdaBoost, we use the hinge loss rather than the exponential loss and modify the algorithm for the online setting, allowing for a varying number of classifiers. We show that our theoretical convergence bounds are similar to those of earlier algorithms while allowing for greater flexibility. Our approach can also easily incorporate additional nonlinearity in the form of Mercer kernels, although our experiments show that this is not necessary for most situations. The pre-training of the additional classifiers in our algorithm allows for greater accuracy while reducing the running times associated with the usual kernel-based approaches. We compare our algorithm to other online training algorithms and show that, in most cases with unknown kernel parameters, our algorithm outperforms the others in both runtime and convergence speed.
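As a rough illustration of the hinge-loss reweighting idea, here is a minimal sketch of online boosting over linear weak learners trained by stochastic subgradient descent; the number of learners, learning rate, and weighting rule are illustrative assumptions rather than the authors' algorithm, and the optional Mercer-kernel nonlinearity is omitted.

```python
import numpy as np

# Minimal sketch: online boosting of linear classifiers trained with
# stochastic (sub)gradient descent on the hinge loss. Each learner sees
# the example with a weight derived from the ensemble margin of the
# learners before it (AdaBoost-style reweighting, but hinge loss instead
# of exponential loss). All parameters are illustrative assumptions.

rng = np.random.default_rng(2)
d, n_learners, lr = 2, 5, 0.1
W = np.zeros((n_learners, d))           # one linear classifier per row

def online_boost_update(x, y):
    """Process one streamed example (x in R^d, y in {-1, +1})."""
    margin = 0.0
    for m in range(n_learners):
        # weight the example by how badly the ensemble so far handles it
        weight = max(0.0, 1.0 - margin)
        if y * (W[m] @ x) < 1.0:        # hinge-loss subgradient step
            W[m] += lr * weight * y * x
        margin += y * (W[m] @ x) / n_learners

def predict(x):
    return np.sign(W.sum(axis=0) @ x)

# Stream a toy linearly separable problem
for _ in range(2000):
    x = rng.standard_normal(d)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    online_boost_update(x, y)
```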


2011, Vol 74 (17), pp. 3028-3035
Author(s): Binbin Pan, Jianhuang Lai, Pong C. Yuen
Keyword(s): Low Rank
