2015 ◽ Vol 27 (6) ◽ pp. 1294-1320 ◽ Author(s): Shao-Gao Lv

Gradient learning (GL), initially proposed by Mukherjee and Zhou (2006), has proved to be a powerful tool for performing variable selection and dimension reduction simultaneously. The approach provides a nonparametric estimator of the gradient, built from positive definite kernels, without estimating the true function itself, so it has wide applicability and accommodates complex interactions between predictors. On the theoretical side, however, existing generalization bounds for GL rely on capacity-independent techniques, which cannot fully characterize the capacity of the kernel classes involved. This letter therefore considers GL estimators that minimize the empirical convex risk. We prove generalization bounds for such estimators with rates faster than previous results. Moreover, we provide a novel upper bound for the Rademacher chaos complexity of order two, which also plays an important role in general pairwise estimation problems, including ranking and scoring.
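
For orientation, the least-squares form of the GL functional, following Mukherjee and Zhou (2006), can be sketched as follows; the Gaussian weight bandwidth s and the regularization parameter \lambda are generic placeholders, not values from this letter:

\[
\vec{f}_{\mathbf{z}} = \operatorname*{arg\,min}_{\vec{f} \in \mathcal{H}_K^{\,p}} \frac{1}{n^2} \sum_{i,j=1}^{n} w_{i,j}^{(s)} \Bigl( y_i - y_j + \vec{f}(x_i) \cdot (x_j - x_i) \Bigr)^{2} + \lambda \|\vec{f}\|_K^{2}, \qquad w_{i,j}^{(s)} = \exp\Bigl( -\frac{\|x_i - x_j\|^{2}}{2s^{2}} \Bigr).
\]

The double sum over sample pairs is precisely the pairwise structure that makes a Rademacher chaos of order two the natural complexity measure here, as in ranking and scoring problems.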


2021 ◽ Vol 41 (3) ◽ pp. 283-300 ◽ Author(s): Daniel Alpay, Palle E.T. Jorgensen

We give two new global and algorithmic constructions of the reproducing kernel Hilbert space associated with a positive definite kernel. We further present a general positive definite kernel setting using bilinear forms and provide new examples. Our results cover the case of measurable positive definite kernels, and we give applications to stochastic analysis and metric geometry, illustrated by a number of examples.
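
For context, the classical Moore–Aronszajn construction that such results generalize can be sketched as follows; this is the textbook version, not the new constructions of the paper. Given a positive definite kernel K on a set X, one forms

\[
H_0 = \operatorname{span}\{ K(\cdot, x) : x \in X \}, \qquad \Bigl\langle \sum_i a_i K(\cdot, x_i), \sum_j b_j K(\cdot, y_j) \Bigr\rangle = \sum_{i,j} a_i \overline{b_j}\, K(y_j, x_i),
\]

and the RKHS H(K) is the completion of H_0, on which the reproducing property f(x) = \langle f, K(\cdot, x) \rangle holds for all f and x.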


2021 ◽ Vol 15 (5) ◽ Author(s): Monika Drewnik, Tomasz Miller, Zbigniew Pasternak-Winiarski

The aim of the paper is to create a link between the theory of reproducing kernel Hilbert spaces (RKHS) and the notion of a unitary representation of a group or of a groupoid. More specifically, it is demonstrated, on the one hand, how to construct a positive definite kernel and an RKHS for a given unitary representation of a group(oid), and, on the other hand, how to retrieve the unitary representation of a group or a groupoid from a positive definite kernel defined on that group(oid) with the help of the Moore–Aronszajn theorem. The kernel constructed from the group(oid) representation is inspired by the kernel defined in terms of the convolution of functions on a locally compact group. Several illustrative examples of reproducing kernels related to unitary representations of groupoids are discussed in detail. The paper concludes with a brief overview of possible applications of the proposed constructions.
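
As a toy illustration of the group-to-kernel direction (this example is not taken from the paper; the cyclic group Z_n, the diagonal representation pi, and the vector v are illustrative assumptions), the kernel K(g, h) = <pi(g)v, pi(h)v> induced by a unitary representation can be checked to be positive definite numerically:

import numpy as np

n = 6  # order of the cyclic group Z_n (toy choice)

def pi(k: int) -> np.ndarray:
    """A unitary representation of Z_n on C^2: a diagonal of characters."""
    w = np.exp(2j * np.pi / n)
    return np.diag([w**k, w**(2 * k)])

v = np.array([1.0, 0.5])  # an arbitrary choice of vector

# Kernel induced by the representation: K(g, h) = <pi(g)v, pi(h)v>,
# linear in the first slot; np.vdot conjugates its first argument,
# so the arguments are swapped below.
K = np.array([[np.vdot(pi(h) @ v, pi(g) @ v) for h in range(n)]
              for g in range(n)])

# The Gram matrix over the whole group is Hermitian and positive
# semidefinite: all eigenvalues are >= 0 up to rounding error.
eigs = np.linalg.eigvalsh(K)
print(eigs.min() >= -1e-10)  # True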


2013 ◽ Vol 11 (05) ◽ pp. 1350020 ◽ Author(s): Hongwei Sun, Qiang Wu

We study the asymptotic properties of indefinite kernel networks with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. Using a new kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are derived.
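
A minimal sketch of coefficient-based regularization with an indefinite kernel may help fix ideas; the tanh (sigmoid) kernel, the toy data, and the value of the regularization parameter lam are illustrative assumptions, and the i.i.d. sampling here is simpler than the strong mixing setting of the paper:

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
n = 80
X = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Sigmoid (tanh) kernel: a standard example of a kernel that is
# not positive definite in general.
def kernel(A, B, a=1.0, b=0.0):
    return np.tanh(a * (A @ B.T) + b)

K = kernel(X, X)

# Coefficient regularization: f(x) = sum_i alpha_i k(x, x_i), with
# alpha minimizing (1/n) * ||K alpha - y||^2 + lam * ||alpha||^2.
# The normal equations (K^T K / n + lam I) alpha = K^T y / n are
# solvable for lam > 0 even though K itself may be indefinite.
lam = 1e-2
alpha = np.linalg.solve(K.T @ K / n + lam * np.eye(n), K.T @ y / n)

# Predict on a small grid; positive definiteness was never needed.
X_test = np.linspace(-2, 2, 5).reshape(-1, 1)
print(np.round(kernel(X_test, X) @ alpha, 3))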

