Reproducing Kernel Hilbert Space
Recently Published Documents

TOTAL DOCUMENTS: 365 (FIVE YEARS: 120)
H-INDEX: 24 (FIVE YEARS: 5)

2022, Vol 12
Author(s): David Bonnett, Yongle Li, Jose Crossa, Susanne Dreisigacker, Bhoja Basnet, et al.

We investigated increasing genetic gain for grain yield using early generation genomic selection (GS). A training set of 1,334 elite wheat breeding lines tested over three field seasons was used to generate Genomic Estimated Breeding Values (GEBVs) for grain yield under irrigated conditions, using markers and three different prediction methods: (1) Genomic Best Linear Unbiased Predictor (GBLUP), (2) GBLUP with imputation of missing genotypic data by Ridge Regression BLUP (rrGBLUP_imp), and (3) Reproducing Kernel Hilbert Space (RKHS) regression with a Gaussian kernel (GK). F2 GEBVs were generated for 1,924 individuals from 38 biparental cross populations between 21 parents selected from the training set. F2 GEBVs from the different methods were not correlated with one another. Experiment 1 consisted of selecting the F2s with the highest average GEBVs and advancing them to form genomically selected bulks and intercross populations aiming to combine favorable alleles for yield. F4:6 lines were derived from the genomically selected bulks, the intercrosses, and conventional breeding methods in similar numbers. Field testing in Experiment 1 found no difference in yield between genomic and conventional selection. Experiment 2 compared the predictive ability of the different GEBV calculation methods in F2 using a set of single plant-derived F2:4 lines from randomly selected F2 plants. Grain yields of the F2:4 lines in Experiment 2 showed a significant positive correlation with the predicted yield GEBVs of F2 single plants from GK (predictive ability of 0.248, P < 0.001) and GBLUP (0.195, P < 0.01), but no correlation with rrGBLUP_imp. These results demonstrate the potential of GS in early generations of wheat breeding and the importance of using an appropriate statistical model for GEBV calculation, which may not be the same as the best model for inbreds.
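For readers unfamiliar with method (3), the following is a minimal sketch of Gaussian-kernel prediction in an RKHS, i.e., kernel ridge regression on a marker matrix. The marker coding, bandwidth h, regularization lam, and toy data are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of RKHS/Gaussian-kernel (GK) GEBV prediction as kernel ridge
# regression; marker coding, bandwidth, and regularization are illustrative.
import numpy as np

def gaussian_kernel(M1, M2, h):
    """K(x, x') = exp(-||x - x'||^2 / h) between rows of two marker matrices."""
    sq = np.sum(M1**2, 1)[:, None] + np.sum(M2**2, 1)[None, :] - 2.0 * M1 @ M2.T
    return np.exp(-sq / h)

def gk_gebv(M_train, y_train, M_new, h, lam=0.1):
    """Solve (K + lam*I) alpha = y on the training set, predict K_new @ alpha."""
    K = gaussian_kernel(M_train, M_train, h)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    return gaussian_kernel(M_new, M_train, h) @ alpha

rng = np.random.default_rng(0)
M_train = rng.integers(0, 3, (200, 500)).astype(float)   # toy {0,1,2} marker scores
y_train = rng.normal(size=200)                           # stand-in yield phenotypes
M_f2 = rng.integers(0, 3, (50, 500)).astype(float)       # toy F2 candidates
gebv = gk_gebv(M_train, y_train, M_f2, h=500.0)          # h on the order of marker count
```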


Sensors, 2021, Vol 21 (24), pp. 8408
Author(s): Elie Sfeir, Rangeet Mitra, Georges Kaddoum, Vimal Bhatia

Non-orthogonal multiple access (NOMA) has emerged as a promising technology that allows for multiplexing several users over limited time-frequency resources. Among existing NOMA methods, sparse code multiple access (SCMA) is especially attractive, not only for its coding gain under suitable codebook design methodologies, but also for the guarantee of optimal detection using the message passing algorithm (MPA). Despite SCMA's benefits, the bit error rate (BER) performance of SCMA systems is known to degrade due to nonlinear power amplifiers at the transmitter. To mitigate this degradation, two types of detectors have recently emerged, namely, the Bussgang-based approaches and the reproducing kernel Hilbert space (RKHS)-based approaches. This paper presents analytical results on the error floor of the Bussgang-based MPA and compares it with a universally optimal RKHS-based MPA using random Fourier features (RFF). Although the Bussgang-based MPA is computationally simpler, it attains a higher BER floor than its RKHS-based counterpart. This error floor and the BER performance gap are quantified analytically and validated via computer simulations.
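As a concrete reference for the RFF approximation the RKHS-based detector relies on, here is a minimal sketch of random Fourier features for a Gaussian kernel; the feature dimension D and bandwidth sigma are illustrative, and the MPA itself is omitted.

```python
# Random Fourier features: z(x) @ z(y) approximates the Gaussian kernel
# exp(-||x - y||^2 / (2 sigma^2)); D and sigma below are illustrative.
import numpy as np

def rff_map(X, W, b):
    """Map rows of X into the random feature space defined by (W, b)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
d, D, sigma = 4, 2000, 1.0
W = rng.normal(0.0, 1.0 / sigma, (d, D))   # spectral samples of the kernel
b = rng.uniform(0.0, 2 * np.pi, D)

X = rng.normal(size=(2, d))
Z = rff_map(X, W, b)
approx = Z[0] @ Z[1]
exact = np.exp(-np.sum((X[0] - X[1])**2) / (2 * sigma**2))
print(approx, exact)                        # the two values should be close
```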


Symmetry, 2021, Vol 13 (12), pp. 2393
Author(s): Hong-Xia Dou, Liang-Jian Deng

The underlying function in a reproducing kernel Hilbert space (RKHS) may be degraded by outliers or deviations, resulting in an ill-posed problem. This paper proposes a nonconvex minimization model with an ℓ0 quasi-norm penalty, based on RKHS, to describe this degraded problem. The underlying function in the RKHS can be represented as a linear combination of reproducing kernels and their coefficients, so we instead estimate these coefficients in the nonconvex minimization problem. An efficient algorithm is designed to solve the nonconvex problem via a mathematical program with equilibrium constraints (MPEC) and a proximal-based strategy. We prove that the sequences generated by the algorithm converge to local optimal solutions of the nonconvex problem. Numerical experiments also demonstrate the effectiveness of the proposed method.
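To make the proximal ingredient concrete: the proximal map of the ℓ0 quasi-norm is hard thresholding. The sketch below uses it inside plain proximal gradient descent (iterative hard thresholding) on the kernel coefficients; this is a simplified stand-in, since the paper's actual algorithm goes through an MPEC reformulation, which the sketch omits.

```python
# Iterative hard thresholding on kernel coefficients c for
# 0.5*||K c - y||^2 + lam*||c||_0; a simplified stand-in for the
# paper's MPEC/proximal scheme.
import numpy as np

def hard_threshold(c, t):
    """Proximal map of t*||.||_0: keep entries with c_i^2 > 2t, zero the rest."""
    out = c.copy()
    out[c**2 <= 2.0 * t] = 0.0
    return out

def iht_kernel_coeffs(K, y, lam=0.05, iters=500):
    """K: kernel Gram matrix, y: noisy samples of the underlying function."""
    c = np.zeros(len(y))
    step = 1.0 / np.linalg.norm(K, 2)**2       # 1/L for the smooth term
    for _ in range(iters):
        c = hard_threshold(c - step * K.T @ (K @ c - y), lam * step)
    return c
```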


2021
Author(s): Hongzhi Tong

Abstract: To cope with the challenges of memory bottlenecks and algorithmic scalability when massive data sets are involved, we propose a distributed least squares procedure in the framework of the functional linear model and reproducing kernel Hilbert space. The approach divides the big data set into multiple subsets, applies regularized least squares regression to each of them, and then averages the individual outputs as the final prediction. We establish non-asymptotic prediction error bounds for the proposed learning strategy under some regularity conditions. When the target function has only weak regularity, we also introduce unlabelled data to construct a semi-supervised approach that enlarges the number of partitioned subsets. The results in the present paper provide a theoretical guarantee that the distributed algorithm can achieve the optimal rate of convergence while allowing the whole data set to be partitioned into a large number of subsets for parallel processing.
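The divide-and-conquer step can be summarized in a few lines. The sketch below uses kernel ridge regression with a Gaussian kernel as the regularized least squares solver, which is an assumption made for illustration; the paper's setting is the functional linear model, and the subset count, bandwidth, and regularization here are arbitrary.

```python
# Divide-and-conquer regression: partition the data, fit one regularized
# least squares (here Gaussian-kernel ridge) estimator per subset, and
# average the subset predictions into the final output.
import numpy as np

def krr_fit(X, y, lam, h):
    K = np.exp(-((X[:, None, :] - X[None, :, :])**2).sum(-1) / h)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_train, alpha, X_new, h):
    K = np.exp(-((X_new[:, None, :] - X_train[None, :, :])**2).sum(-1) / h)
    return K @ alpha

def distributed_krr(X, y, X_new, m=10, lam=1e-2, h=1.0):
    """Average the predictions of m local estimators, one per data subset."""
    parts = np.array_split(np.arange(len(y)), m)
    return np.mean([krr_predict(X[i], krr_fit(X[i], y[i], lam, h), X_new, h)
                    for i in parts], axis=0)
```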


2021
Author(s): Wei Zhang, Zhen He, Di Wang

Abstract: Distribution regression is the regression setting in which the input objects are distributions. Many machine learning problems, such as multi-instance learning and learning from noisy data, can be analysed in this framework. This paper builds a conformal predictive system (CPS) for distribution regression, where the system's prediction for a test input is a cumulative distribution function (CDF) of the corresponding test label. The CDF output by a CPS provides useful information about the test label: it can estimate the probability of any event related to the label, and it can be transformed into prediction intervals and point predictions via the corresponding quantiles. Furthermore, a CPS has the property of validity, as the predicted CDFs and the prediction intervals are statistically compatible with the realizations. This property is desirable for many risk-sensitive applications, such as weather forecasting. To the best of our knowledge, this is the first work to extend the CPS learning framework to distribution regression problems. We first embed the input distributions into a reproducing kernel Hilbert space using kernel mean embedding approximated by random Fourier features, and then build a fast CPS on top of the embeddings. While inheriting the validity property of the CPS framework, our algorithm is simple, easy to implement, and fast. The proposed approach is tested on synthetic data sets and can be used to tackle the problem of statistical postprocessing of ensemble forecasts, which demonstrates the effectiveness of our algorithm for distribution regression problems.
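The embedding step can be sketched directly: each input distribution (a bag of samples) is mapped to the average of its random Fourier features, approximating its kernel mean embedding. Plain ridge regression on the embeddings stands in for the downstream learner here; the conformal layer of the CPS is omitted, and all sizes, bandwidths, and the toy labels are illustrative assumptions.

```python
# Approximate kernel mean embedding of bags via random Fourier features,
# followed by ridge regression on the embeddings (conformal layer omitted).
import numpy as np

def rff(X, W, b):
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def embed_bags(bags, W, b):
    """One row per input distribution: the mean of its RFF features."""
    return np.stack([rff(B, W, b).mean(axis=0) for B in bags])

rng = np.random.default_rng(2)
d, D = 3, 500
W = rng.normal(0.0, 1.0, (d, D))                 # unit bandwidth: an assumption
b = rng.uniform(0.0, 2 * np.pi, D)
bags = [rng.normal(mu, 1.0, (100, d)) for mu in rng.uniform(-2, 2, 20)]
labels = np.array([B.mean() for B in bags])      # toy labels: each bag's mean
Z = embed_bags(bags, W, b)
beta = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(D), Z.T @ labels)
```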


2021, Vol 2021 (12), pp. 124009
Author(s): Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari

Abstract: For a certain scaling of the initialization of stochastic gradient descent (SGD), wide neural networks (NNs) have been shown to be well approximated by reproducing kernel Hilbert space (RKHS) methods. Recent empirical work showed that, for some classification tasks, RKHS methods can replace NNs without a large loss in performance. On the other hand, two-layer NNs are known to encode richer smoothness classes than RKHS methods, and we know of special examples for which SGD-trained NNs provably outperform RKHS methods. This is true even in the wide-network limit, for a different scaling of the initialization. How can we reconcile these claims? For which tasks do NNs outperform RKHS methods? If covariates are nearly isotropic, RKHS methods suffer from the curse of dimensionality, while NNs can overcome it by learning the best low-dimensional representation. Here we show that this curse of dimensionality becomes milder if the covariates display the same low-dimensional structure as the target function, and we precisely characterize this tradeoff. Building on these results, we present the spiked covariates model, which captures in a unified framework both behaviors observed in earlier work. We hypothesize that such a latent low-dimensional structure is present in image classification. We test this hypothesis numerically by showing that specific perturbations of the training distribution degrade the performance of RKHS methods much more significantly than that of NNs.
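A toy generator in the spirit of the spiked covariates model the abstract describes: most covariate variance lies in a low-dimensional subspace, and the target depends only on the projection onto it. The sizes, signal strength, and target function below are illustrative, not the paper's.

```python
# Toy spiked-covariates data: strong variance along a k-dimensional subspace U,
# isotropic noise elsewhere, target a function of the latent coordinates only.
import numpy as np

rng = np.random.default_rng(3)
n, d, k, spike = 1000, 200, 5, 10.0
U, _ = np.linalg.qr(rng.normal(size=(d, k)))    # orthonormal basis of the spike
z = rng.normal(size=(n, k))                     # latent low-dimensional signal
x = spike * z @ U.T + rng.normal(size=(n, d))   # covariates seen by the learner
y = np.sin(z[:, 0]) + z[:, 1]**2                # target ignores the noise directions
```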


2021, Vol 2021 (1)
Author(s): Shatha Hasan, Nadir Djeddi, Mohammed Al-Smadi, Shrideh Al-Omari, Shaher Momani, et al.

Abstract: This paper deals with the generalized Bagley–Torvik equation, based on the concept of the Caputo–Fabrizio fractional derivative, using a modified reproducing kernel Hilbert space treatment. The generalized Bagley–Torvik equation is studied with initial and boundary conditions to investigate numerical solutions in the Caputo–Fabrizio sense. For the generalized Bagley–Torvik equation with initial conditions, in order to obtain a better approach at lower cost, we reformulate the problem as a system of fractional differential equations while preserving the second type of these equations. Reproducing kernel functions are established to construct an orthogonal system used to formulate the analytical and approximate solutions of both equations in the appropriate Hilbert spaces. The feasibility of the proposed method and the effect of the novel derivative with its nonsingular kernel were verified by solving several numerical examples with the required accuracy and speed. From a numerical point of view, the obtained results indicate the accuracy, efficiency, and reliability of the proposed method in solving various real-life problems.
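For reference, the classical form of the Bagley–Torvik equation and the Caputo–Fabrizio derivative it is recast with are given below; the normalization function M(α) varies across the literature, and the paper's generalized equation may differ in its fractional order.

```latex
% Classical Bagley--Torvik equation (fractional order 3/2):
\[
  A\,y''(t) + B\,D^{3/2} y(t) + C\,y(t) = f(t).
\]
% Caputo--Fabrizio derivative with nonsingular (exponential) kernel, 0 < \alpha < 1:
\[
  {}^{CF}\!D^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_0^t f'(s)\,\exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) ds,
\]
% where M(\alpha) is a normalization function with M(0) = M(1) = 1.
```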

