Functional Sliced Inverse Regression in a Reproducing Kernel Hilbert Space: a Theoretical Connection to Functional Linear Regression

2019 ◽  
Author(s):  
Guochang Wang ◽  
Heng Lian


Author(s):  
Heng Chen ◽  
Wei Huang ◽  
Di-Rong Chen

Sliced inverse regression (SIR) is a powerful method for dimension reduction models. As is well known, SIR is equivalent to a transformation-based projection pursuit whose optimal directions are exactly the SIR directions. In this paper, we consider the simultaneous estimation of the optimal directions for functional data and the optimal transformations. We take a reproducing kernel Hilbert space approach: both the directions and the transformations are chosen from reproducing kernel Hilbert spaces. A learning rate is established for the resulting estimators.
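The RKHS-based functional estimator studied here is not reproduced below, but the classical finite-dimensional SIR construction that it generalizes can be sketched in a few lines. In the Python sketch that follows, the function name sir_directions and all numerical choices are illustrative assumptions, not the authors' procedure: the response is sliced, the standardized predictors are averaged within each slice, and the leading eigenvectors of the between-slice covariance give the estimated directions.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Classical sliced inverse regression (SIR) directions.

    Illustrative sketch of the finite-dimensional construction that the
    functional/RKHS versions discussed above generalize. Assumes a
    nondegenerate predictor covariance.
    """
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt

    # Slice the response into roughly equal-size slices
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)

    # Weighted covariance of the slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M, mapped back to the original scale
    _, evecs_M = np.linalg.eigh(M)
    dirs = inv_sqrt @ evecs_M[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```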


2020 ◽  
Vol 18 (04) ◽  
pp. 697-714
Author(s):  
Yang Zhou ◽  
Di-Rong Chen

In functional data analysis, linear prediction problems have been widely studied based on the functional linear regression model. However, a restrictive condition is needed to ensure the existence of the coefficient function. In this paper, a general linear prediction model is considered within the framework of reproducing kernel Hilbert spaces, which includes both the functional linear regression model and the point impact model. We show that, from the point of view of prediction, this general model works well even when the coefficient function does not exist. Moreover, under mild conditions, the minimax optimal rate of convergence for prediction is established under the integrated mean squared prediction error. In particular, the rate reduces to the existing result when the coefficient function exists.
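The paper's general RKHS prediction framework is not reproduced here, but its central point, that prediction requires only kernel evaluations and not an explicit coefficient function, can be illustrated with a minimal sketch. In the Python snippet below, curves observed on a common grid enter a ridge-type predictor built from the linear kernel $K(x_1, x_2) = \int x_1(t)x_2(t)\,dt$; the function name functional_ridge_predict, the quadrature rule, and the regularization level are illustrative assumptions.

```python
import numpy as np

def functional_ridge_predict(X_train, y_train, X_test, grid, lam=1e-2):
    """Kernel/ridge prediction for scalar-on-function regression.

    Curves are rows of X_train/X_test, observed on `grid`. The linear kernel
    is approximated by a Riemann sum; predictions need only the Gram matrix,
    so no explicit coefficient function is recovered.
    """
    w = np.gradient(grid)                      # quadrature weights
    K = (X_train * w) @ X_train.T              # n x n Gram matrix
    K_test = (X_test * w) @ X_train.T          # m x n cross-kernel
    n = len(y_train)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)
    return K_test @ alpha
```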


Author(s):  
Michael T Jury ◽  
Robert T W Martin

Abstract We extend the Lebesgue decomposition of positive measures with respect to Lebesgue measure on the complex unit circle to the non-commutative (NC) multi-variable setting of (positive) NC measures. These are positive linear functionals on a certain self-adjoint subspace of the Cuntz–Toeplitz $C^{\ast}$-algebra, the $C^{\ast}$-algebra of the left creation operators on the full Fock space. This theory is fundamentally connected to the representation theory of the Cuntz and Cuntz–Toeplitz $C^{\ast}$-algebras; any $\ast$-representation of the Cuntz–Toeplitz $C^{\ast}$-algebra is obtained, up to unitary equivalence, by applying a Gelfand–Naimark–Segal construction to a positive NC measure. Our approach combines the theory of Lebesgue decomposition of sesquilinear forms in Hilbert space, Lebesgue decomposition of row isometries, free semigroup algebra theory, NC reproducing kernel Hilbert space theory, and NC Hardy space theory.
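For orientation, the classical result being generalized can be stated as follows (standard notation, not drawn from the paper): every positive Borel measure $\mu$ on the unit circle decomposes uniquely as
\[
\mu \;=\; \mu_{\mathrm{ac}} + \mu_{\mathrm{s}}, \qquad d\mu_{\mathrm{ac}} = f\,dm \ \text{ with } f \in L^{1}(m), \qquad \mu_{\mathrm{s}} \perp m,
\]
where $m$ denotes normalized Lebesgue measure on the circle. It is this decomposition that the paper extends to positive NC measures.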


Author(s):  
Dominic Knoch ◽  
Christian R. Werner ◽  
Rhonda C. Meyer ◽  
David Riewe ◽  
Amine Abbadi ◽  
...  

Abstract Key message: Complementing or replacing genetic markers with transcriptomic data and using reproducing kernel Hilbert space regression based on Gaussian kernels increases hybrid prediction accuracies for complex agronomic traits in canola. In plant breeding, hybrids have gained particular importance due to heterosis, the superior performance of offspring compared to their inbred parents. Since the development of new top-performing hybrids requires labour-intensive and costly breeding programmes, including the testing of large numbers of experimental hybrids, the prediction of hybrid performance is of utmost interest to plant breeders. In this study, we tested the effectiveness of hybrid prediction models in spring-type oilseed rape (Brassica napus L./canola), employing different omics profiles individually and in combination. To this end, a population of 950 F1 hybrids was evaluated for seed yield and six other agronomically relevant traits in commercial field trials at several locations throughout Europe. A subset of these hybrids was also evaluated in a climate-controlled glasshouse for early biomass production. For each of the 477 parental rapeseed lines, 13,201 single nucleotide polymorphisms (SNPs), 154 primary metabolites, and 19,479 transcripts were determined and used as predictive variables. Both SNP markers and transcripts effectively predict hybrid performance using (genomic) best linear unbiased prediction (gBLUP) models. Compared to models using genetic markers alone, models incorporating transcriptome data resulted in significantly higher prediction accuracies for five of the seven agronomic traits, indicating that transcripts carry important information beyond genomic data. Notably, reproducing kernel Hilbert space regression based on Gaussian kernels significantly exceeded the predictive ability of gBLUP models for six of the seven agronomic traits, demonstrating its potential for implementation in future canola breeding programmes.
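As a rough illustration of the kernel method referred to above, the sketch below implements kernel ridge regression with a Gaussian kernel on a generic feature matrix (markers or transcripts). The function names, bandwidth, and regularization constant are illustrative assumptions rather than the models fitted in the study, where such hyperparameters would be tuned, for example by cross-validation.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth):
    """Gaussian (RBF) kernel matrix between rows of X1 and X2."""
    sq = (
        np.sum(X1 ** 2, axis=1)[:, None]
        + np.sum(X2 ** 2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def rkhs_predict(X_train, y_train, X_test, bandwidth=1.0, lam=1e-1):
    """Kernel ridge regression with a Gaussian kernel.

    Generic stand-in for 'RKHS regression based on Gaussian kernels' applied
    to marker or transcript matrices; hyperparameters are illustrative.
    """
    K = gaussian_kernel(X_train, X_train, bandwidth)
    n = len(y_train)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train - y_train.mean())
    K_test = gaussian_kernel(X_test, X_train, bandwidth)
    return y_train.mean() + K_test @ alpha
```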


Author(s):  
Fabio Sigrist

Abstract We introduce a novel boosting algorithm called 'KTBoost', which combines kernel boosting and tree boosting. In each boosting iteration, the algorithm adds either a regression tree or a reproducing kernel Hilbert space (RKHS) regression function to the ensemble of base learners. Intuitively, the idea is that discontinuous trees and continuous RKHS regression functions complement each other, and that this combination allows for better learning of functions that have parts with varying degrees of regularity, such as discontinuities and smooth parts. We empirically show that KTBoost significantly outperforms both tree and kernel boosting in terms of predictive accuracy in a comparison on a wide array of data sets.
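A minimal sketch of the idea for squared-error loss is given below: at each iteration, both a regression tree and a kernel ridge (RKHS) regression are fitted to the current residuals, and whichever fits them better is added to the ensemble. The function names, hyperparameters, and selection rule are illustrative assumptions; the actual KTBoost algorithm handles general loss functions and base-learner selection differently, as described in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.kernel_ridge import KernelRidge

def ktboost_sketch(X, y, n_iter=50, lr=0.1, max_depth=3, alpha=1.0):
    """Illustrative KTBoost-style boosting for squared-error loss.

    Each iteration fits a regression tree and an RKHS (kernel ridge)
    regression to the residuals and keeps the base learner with the
    smaller residual sum of squares.
    """
    base = y.mean()
    F = np.full(len(y), base)              # current ensemble prediction
    learners = []
    for _ in range(n_iter):
        r = y - F                          # negative gradient for L2 loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, r)
        krr = KernelRidge(alpha=alpha, kernel="rbf").fit(X, r)
        candidates = [tree, krr]
        errors = [np.sum((r - m.predict(X)) ** 2) for m in candidates]
        best = candidates[int(np.argmin(errors))]
        F = F + lr * best.predict(X)
        learners.append(best)
    return base, learners

def ktboost_predict(base, learners, X, lr=0.1):
    """Predict with the ensemble returned by ktboost_sketch (same lr)."""
    return base + lr * np.sum([m.predict(X) for m in learners], axis=0)
```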

