Sparse Kernel Methods
Recently Published Documents

TOTAL DOCUMENTS: 5 (five years: 1)
H-INDEX: 2 (five years: 0)

Heredity, 2021
Author(s): Abelardo Montesinos-López, Osval Antonio Montesinos-López, José Cricelio Montesinos-López, Carlos Alberto Flores-Cortes, Roberto de la Rosa, ...

Abstract
The primary objective of this paper is to provide a guide to implementing Bayesian generalized kernel regression methods for genomic prediction in the statistical software R. Such methods are quite efficient at capturing complex non-linear patterns that conventional linear regression models cannot capture. Furthermore, these methods are also powerful for leveraging environmental covariates, for example in genotype × environment (G×E) prediction. In this study we describe the building process of seven kernel methods: linear, polynomial, sigmoid, Gaussian, exponential, arc-cosine 1 and arc-cosine L. Additionally, we provide illustrative examples of implementing exact kernel methods for genomic prediction under single-environment, multi-environment and multi-trait frameworks, as well as of implementing sparse kernel methods under a multi-environment framework. These examples are followed by a discussion of the strengths and limitations of kernel methods and, subsequently, by conclusions about the main contributions of this paper.
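
The kernels listed in the abstract can be assembled directly from a marker matrix. The paper's implementation is in R; the sketch below is a rough Python illustration only, using a simulated marker matrix in place of real genotype data, and the helper names gaussian_kernel and arc_cosine_kernel are our own rather than the paper's. It shows how two of the seven kernels, the Gaussian kernel and the iterated (L-layer) arc-cosine kernel, can be computed.

```python
import numpy as np

def gaussian_kernel(X, gamma=None):
    """Gaussian (RBF) kernel: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    if gamma is None:
        gamma = 1.0 / X.shape[1]          # simple default bandwidth
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def arc_cosine_kernel(X, layers=1):
    """Order-1 arc-cosine kernel, iterated 'layers' times (AK1 ... AKL)."""
    K = X @ X.T
    for _ in range(layers):
        norms = np.sqrt(np.diag(K))
        denom = np.outer(norms, norms)
        cos_theta = np.clip(K / np.maximum(denom, 1e-12), -1.0, 1.0)
        theta = np.arccos(cos_theta)
        K = (denom / np.pi) * (np.sin(theta) + (np.pi - theta) * cos_theta)
    return K

# Toy data: 20 lines x 100 markers coded 0/1/2, centred and scaled
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(20, 100)).astype(float)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
K_gauss = gaussian_kernel(X)
K_arc2 = arc_cosine_kernel(X, layers=2)
print(K_gauss.shape, K_arc2.shape)
```

Either kernel matrix can then be passed to a Bayesian kernel regression model in place of the usual linear relationship matrix; the choice of kernel and bandwidth is data dependent.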


2016, Vol. 2016, pp. 1-11
Author(s): Chunyuan Zhang, Qingxin Zhu, Xinzheng Niu

By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning; (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise; (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity; (iv) a sliding-window approach, which avoids caching all historical samples and reduces the computational cost; and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.
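
As a rough, non-authoritative sketch of how some of these ingredients fit together, the Python snippet below combines approximate-linear-dependence (ALD) online sparsification with a Sherman-Morrison recursive least-squares update of an L2-regularized LSTD matrix on a toy chain problem. The class name OnlineKernelRLSTD and its parameters are hypothetical, and the sliding-window, L1 pruning and fixed-point subiteration components described in the abstract are omitted for brevity.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian kernel between scalar states a and b."""
    return np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

class OnlineKernelRLSTD:
    """Sketch of a kernel recursive LSTD learner with ALD sparsification and
    L2 regularization; sliding window, L1 pruning and the fixed-point
    subiteration from the paper are omitted."""

    def __init__(self, sigma=1.0, nu=0.1, reg=1.0, gamma=0.95):
        self.sigma, self.nu, self.reg, self.gamma = sigma, nu, reg, gamma
        self.dict = []          # sparse dictionary of representative states
        self.Kinv = None        # inverse kernel matrix over the dictionary
        self.P = None           # inverse of the regularized LSTD A-matrix
        self.b = None           # accumulated reward-weighted features

    def _features(self, s):
        return np.array([rbf(s, d, self.sigma) for d in self.dict])

    def _maybe_add(self, s):
        """ALD test: add s to the dictionary if it is poorly approximated."""
        if not self.dict:
            self.dict = [s]
            self.Kinv = np.array([[1.0 / rbf(s, s, self.sigma)]])
            self.P = np.eye(1) / self.reg
            self.b = np.zeros(1)
            return
        k = self._features(s)
        c = self.Kinv @ k
        delta = rbf(s, s, self.sigma) - k @ c
        if delta > self.nu:
            # grow the dictionary and pad the recursive matrices
            m = len(self.dict)
            self.dict.append(s)
            Kinv_new = np.zeros((m + 1, m + 1))
            Kinv_new[:m, :m] = self.Kinv + np.outer(c, c) / delta
            Kinv_new[:m, m] = Kinv_new[m, :m] = -c / delta
            Kinv_new[m, m] = 1.0 / delta
            self.Kinv = Kinv_new
            P_new = np.eye(m + 1) / self.reg   # crude expansion of A^{-1}
            P_new[:m, :m] = self.P
            self.P = P_new
            self.b = np.append(self.b, 0.0)

    def update(self, s, r, s_next):
        self._maybe_add(s)
        self._maybe_add(s_next)
        phi, phi_next = self._features(s), self._features(s_next)
        dphi = phi - self.gamma * phi_next
        # Sherman-Morrison recursive least-squares update of P = A^{-1}
        Pphi = self.P @ phi
        self.P -= np.outer(Pphi, dphi @ self.P) / (1.0 + dphi @ Pphi)
        self.b += phi * r

    def value(self, s):
        return float(self._features(s) @ (self.P @ self.b))

# Toy 5-state chain: move right until the terminal reward of +1.
agent = OnlineKernelRLSTD(sigma=0.5, nu=0.01)
for _ in range(200):
    s = 0
    while s < 4:
        s_next = min(s + 1, 4)
        r = 1.0 if s_next == 4 else 0.0
        agent.update(float(s), r, float(s_next))
        s = s_next
print([round(agent.value(float(s)), 2) for s in range(5)])
```

The dictionary here grows only when a new state fails the ALD test, which is what keeps the feature representation sparse while the recursive update avoids any explicit matrix inversion per step.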


2003, Vol. 36 (16), pp. 795-800
Author(s): Steve R. Gunn
