Kernel Hilbert Spaces
Recently Published Documents

TOTAL DOCUMENTS: 338 (last five years: 83)
H-INDEX: 28 (last five years: 3)

Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: The fundamentals of Reproducing Kernel Hilbert Space (RKHS) regression methods are described in this chapter. We first point out the virtues of RKHS regression methods and why they are gaining wide acceptance in statistical machine learning. Key elements for the construction of RKHS regression methods are provided, the kernel trick is explained in some detail, and the main kernel functions for building kernels are presented. The chapter covers several loss functions under a fixed-model framework, with examples of Gaussian, binary, and categorical response variables, and illustrates the use of mixed models with kernels through examples with continuous response variables. Practical issues for tuning the kernels are illustrated. We then extend the RKHS regression methods to a Bayesian framework, with practical examples applied to continuous and categorical response variables and with predictors that include the main effects of environments and genotypes as well as the genotype × environment interaction. We also show examples of multi-trait RKHS regression methods for continuous response variables. Finally, we discuss practical issues of kernel compression methods, which are important for reducing the computational cost of implementing conventional RKHS methods.
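The core computation behind the RKHS regression methods this chapter describes (the kernel trick combined with the representer theorem) can be sketched in a few lines. Below is a minimal, illustrative kernel ridge regression with a Gaussian kernel for a continuous response; the simulated data, bandwidth, and regularization value are assumptions for the demo, not values from the chapter.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 h^2))."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2 * X1 @ X2.T)
    return np.exp(-sq_dists / (2 * bandwidth**2))

def fit_rkhs_regression(X, y, lam=0.1, bandwidth=1.0):
    """Kernel ridge regression: solve (K + lam*n*I) alpha = y.
    By the representer theorem, the RKHS solution is f(x) = sum_i alpha_i k(x, x_i)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_new, bandwidth=1.0):
    """Evaluate the fitted RKHS function at new inputs."""
    return gaussian_kernel(X_new, X_train, bandwidth) @ alpha

# Illustrative continuous (Gaussian) response
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_rkhs_regression(X, y)
y_hat = predict(X, alpha, X)
```

In practice the bandwidth and regularization parameter would be tuned, e.g. by cross-validation, which is the kernel-tuning issue the chapter discusses.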


2021, Vol. 15 (5)
Author(s): Monika Drewnik, Tomasz Miller, Zbigniew Pasternak-Winiarski

Abstract: The aim of the paper is to create a link between the theory of reproducing kernel Hilbert spaces (RKHS) and the notion of a unitary representation of a group or of a groupoid. More specifically, it is demonstrated, on the one hand, how to construct a positive definite kernel and an RKHS for a given unitary representation of a group(oid), and, on the other hand, how to retrieve the unitary representation of a group or a groupoid from a positive definite kernel defined on that group(oid) with the help of the Moore–Aronszajn theorem. The kernel constructed from the group(oid) representation is inspired by the kernel defined in terms of the convolution of functions on a locally compact group. Several illustrative examples of reproducing kernels related to unitary representations of groupoids are discussed in detail. The paper concludes with a brief overview of possible applications of the proposed constructions.
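A standard instance of the representation-to-kernel direction (our illustration; the paper's convolution-based construction may differ in detail) takes a unitary representation π of a group G and a fixed vector v, and sets k(g, h) = ⟨π(g)v, π(h)v⟩, which is positive definite by construction. The sketch below checks this numerically for the cyclic group Z_n with a diagonal phase representation; the group, dimension, and choice of v are illustrative assumptions.

```python
import numpy as np

n, d = 6, 3   # cyclic group Z_n, representation on C^d (illustrative choices)

def pi(g):
    """Unitary representation of Z_n on C^d: diagonal phases exp(2*pi*i*g*m/n)."""
    return np.diag(np.exp(2j * np.pi * g * np.arange(d) / n))

v = np.ones(d) / np.sqrt(d)   # a fixed unit vector (an assumption)

def kernel(g, h):
    """k(g, h) = <pi(g) v, pi(h) v>, conjugate-linear in the first argument."""
    return np.vdot(pi(g) @ v, pi(h) @ v)

# The Gram matrix over all group elements is Hermitian positive semi-definite,
# so by the Moore-Aronszajn theorem it generates an RKHS of functions on the group.
G = np.array([[kernel(g, h) for h in range(n)] for g in range(n)])
assert np.all(np.linalg.eigvalsh(G) > -1e-10)
```

Positive definiteness follows because any finite sum Σ c̄_g c_h k(g, h) equals the squared norm of Σ c_h π(h)v, which is the reasoning the Moore–Aronszajn route relies on.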


2021, Vol. 12 (4), pp. 1-21
Author(s): Xiangjun Shen, Kou Lu, Sumet Mehta, Jianming Zhang, Weifeng Liu, ...

In this article, a novel ensemble model, called Multiple Kernel Ensemble Learning (MKEL), is developed by introducing a unified ensemble loss. Unlike previous multiple kernel learning (MKL) methods, which seek a linear combination of basis kernels as a single unified kernel, the MKEL model finds multiple solutions in the corresponding Reproducing Kernel Hilbert Spaces (RKHSs) simultaneously. To achieve this, multiple individual kernel losses are integrated into a unified ensemble loss, so that each model co-optimizes its parameters by minimizing this unified loss across multiple RKHSs. Furthermore, we apply the proposed ensemble loss to the deep network paradigm, taking each sub-network as a kernel mapping from the original input space into a feature space; the resulting model is named Deep-MKEL (D-MKEL). D-MKEL combines diverse individual deep sub-networks into a single unified network to improve classification performance, and the unified loss design makes the network much wider than traditional deep kernel networks, with more parameters learned and optimized. Experimental results on several moderate-sized UCI classification and computer vision datasets demonstrate that the MKEL model achieves the best classification performance among comparable MKL methods such as SimpleMKL, GMKL, SpicyMKL, and Matrix-Regularized MKL. Moreover, experimental results on the large-scale CIFAR-10 and SVHN datasets show the advantages and potential of the proposed D-MKEL approach compared to state-of-the-art deep kernel methods.
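One way to read the unified ensemble loss (our interpretation of the abstract, not the paper's exact objective) is as a joint fit of one learner per RKHS, coupled through a shared residual: L = ||y − Σ_m K_m α_m||² + λ Σ_m α_mᵀ K_m α_m. The sketch below co-optimizes the per-kernel solutions by block coordinate descent; the RBF kernels, bandwidths, and solver are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, bandwidth):
    """Gaussian kernel matrix between two sets of points."""
    sq = (np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :]
          - 2 * X1 @ X2.T)
    return np.exp(-sq / (2 * bandwidth**2))

def fit_mkel_sketch(X, y, bandwidths=(0.5, 1.0, 2.0), lam=0.1, sweeps=20):
    """Co-optimize one solution per RKHS under a unified ensemble loss:
    L = ||y - sum_m K_m a_m||^2 + lam * sum_m a_m' K_m a_m."""
    n = X.shape[0]
    Ks = [rbf_kernel(X, X, h) for h in bandwidths]
    alphas = [np.zeros(n) for _ in Ks]
    for _ in range(sweeps):
        for m, K in enumerate(Ks):
            # residual left to learner m after the other learners' fits
            r = y - sum(Kj @ a for j, (Kj, a) in enumerate(zip(Ks, alphas))
                        if j != m)
            # per-block minimizer of the unified loss: (K + lam*I) a_m = r
            alphas[m] = np.linalg.solve(K + lam * np.eye(n), r)
    # ensemble prediction on the training inputs
    y_hat = sum(K @ a for K, a in zip(Ks, alphas))
    return alphas, y_hat
```

Because each block update solves its own regularized problem against the shared residual, the learners specialize rather than collapse onto one kernel, which mirrors the paper's stated goal of multiple simultaneous solutions rather than a single combined kernel.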

