Indefinite Kernel Network with l^q-Norm Regularization

2016
Vol 2016
pp. 1-6
Author(s):  
Zhongfeng Qu ◽  
Hongwei Sun

We study the asymptotic properties of the indefinite kernel network with l^q-norm regularization. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive semidefinite. By a new stepping stone technique, satisfactory error bounds and learning rates are deduced without any interior cone condition on the input space X or L^τ condition on the probability measure ρ_X.
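
To make the scheme concrete, the following is a minimal Python sketch of one standard reading of an indefinite kernel network with l^q coefficient regularization: the estimator f(x) = Σ_i c_i K(x, x_i) is fit by penalized empirical least squares. The sigmoid kernel, q = 1.5, and all parameter values are illustrative assumptions, not the paper's choices.

# A minimal sketch of l^q coefficient regularization over an indefinite
# kernel expansion; the tanh (sigmoid) kernel is a stand-in example of an
# indefinite kernel, and lam, q are illustrative values.
import numpy as np
from scipy.optimize import minimize

def sigmoid_kernel(X, Z, a=1.0, b=-1.0):
    # tanh kernel: generally indefinite (not positive semidefinite)
    return np.tanh(a * X @ Z.T + b)

def fit_coefficients(X, y, lam=1e-2, q=1.5):
    m = len(y)
    K = sigmoid_kernel(X, X)
    def objective(c):
        residual = K @ c - y
        # empirical squared loss plus the l^q penalty on the coefficients
        return residual @ residual / m + lam * np.sum(np.abs(c) ** q)
    return minimize(objective, x0=np.zeros(m), method="L-BFGS-B").x

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
c = fit_coefficients(X, y)
print("first five fitted coefficients:", c[:5])

Because the kernel matrix need not be positive semidefinite, the objective is treated as a generic smooth minimization rather than a convex RKHS problem; for q < 2 the penalty shrinks coefficients more aggressively than the usual l^2 penalty.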

2013
Vol 11 (05)
pp. 1350020
Author(s):  
HONGWEI SUN ◽  
QIANG WU

We study the asymptotic properties of the indefinite kernel network with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. By a new kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are deduced.
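
For reference, the coefficient regularization scheme referred to here is usually written as below; this is a sketch of the standard l^2-coefficient form (normalization conventions vary, and the paper's exact functional may differ):

\[
f_{\mathbf z}(x) = \sum_{i=1}^{m} c_{z,i}\, K(x, x_i), \qquad
\mathbf c_z = \arg\min_{\mathbf c \in \mathbb{R}^m}
\frac{1}{m} \sum_{j=1}^{m} \Big( \sum_{i=1}^{m} c_i K(x_j, x_i) - y_j \Big)^{2}
+ \lambda\, m \sum_{i=1}^{m} c_i^{2}.
\]

Since only the coefficient vector is penalized, the kernel enters solely through its values on the sample, which is why positive definiteness can be dropped.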


2016
Vol 25 (3)
pp. 417-429
Author(s):  
Chong Wu ◽  
Lu Wang ◽  
Zhe Shi

For the financial distress prediction model based on support vector machines, there is no theory concerning how to choose a proper kernel function in a data-dependent way. This paper proposes a modified kernel function method that can effectively enhance classification accuracy. We apply an information-geometric method to modify a kernel, based on the structure of the Riemannian geometry it induces in the input space. A conformal transformation of a kernel from the input space to a higher-dimensional feature space enlarges volume elements locally near the support vectors situated around the classification boundary and reduces the number of support vectors. This paper takes the Gaussian radial basis function as the internal kernel. Additionally, it combines the above method with the theories of standard regularization and non-dimensionalization to construct the new model. In the empirical analysis section, the paper adopts financial data of Chinese listed companies and uses five groups of experiments with different parameters to compare classification accuracy. We conclude that the modified kernel function model can effectively reduce the number of support vectors and improve classification accuracy.
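
As an illustration of the conformal transformation described above, here is a minimal Python sketch in the spirit of the information-geometric kernel modification of Amari and Wu: a first-pass SVM locates the support vectors, and the kernel is rescaled as K~(x, x') = D(x)D(x')K(x, x') with D a sum of Gaussian bumps centered at those support vectors. The synthetic data, tau, and kernel widths are illustrative assumptions, not the paper's financial-data setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def conformal_factor(X, support_vectors, tau=1.0):
    # D(x) = sum over support vectors of exp(-||x - sv||^2 / (2 tau^2))
    sq_dists = ((X[:, None, :] - support_vectors[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * tau ** 2)).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# First pass: a plain RBF SVM locates support vectors near the boundary.
base = SVC(kernel="rbf", gamma=1.0).fit(X, y)

# Second pass: train on the conformally modified kernel
# K~(x, x') = D(x) D(x') K(x, x'), which remains positive semidefinite.
D = conformal_factor(X, base.support_vectors_)
K_mod = np.outer(D, D) * rbf_kernel(X, X, gamma=1.0)
modified = SVC(kernel="precomputed").fit(K_mod, y)
print("support vectors: base =", len(base.support_),
      "modified =", len(modified.support_))

Enlarging the Riemannian volume element near the boundary corresponds to making D large there, so the modified kernel magnifies resolution exactly where the classes meet.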


Author(s):  
HONGWEI SUN ◽  
PING LIU

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels, so it has stronger approximation ability than the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
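
A minimal sketch of the two-kernel setting, assuming the common formulation f = f_1 + f_2 with f_1, f_2 drawn from the RKHSs of the two Mercer kernels and penalized separately (the paper's exact functional may differ); the first-order conditions reduce to one linear system in the two coefficient vectors.

import numpy as np

def two_kernel_ridge(K1, K2, y, lam1=1e-2, lam2=1e-2):
    # Solve a sufficient stationarity system for
    #   min (1/m)||K1 c1 + K2 c2 - y||^2 + lam1 c1'K1 c1 + lam2 c2'K2 c2
    m = len(y)
    I = np.eye(m)
    A = np.block([[K1 + m * lam1 * I, K2],
                  [K1, K2 + m * lam2 * I]])
    c = np.linalg.solve(A, np.concatenate([y, y]))
    return c[:m], c[m:]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sinc(2 * X[:, 0]) + 0.05 * rng.normal(size=50)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K1 = np.exp(-sq / 0.1)   # narrow Gaussian kernel: fine structure
K2 = np.exp(-sq / 2.0)   # wide Gaussian kernel: smooth trend
c1, c2 = two_kernel_ridge(K1, K2, y)
f = K1 @ c1 + K2 @ c2    # fitted values f = f1 + f2
print("training RMSE:", np.sqrt(np.mean((f - y) ** 2)))

Using two kernels of different widths is one simple way the combined hypothesis space gains approximation power over either single-kernel space.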


2012
Vol 2012
pp. 1-16
Author(s):  
Yukui Zhu ◽  
Hongwei Sun

We investigate the consistency of spectral regularization algorithms. We generalize the usual definition of the regularization function to enrich the content of spectral regularization algorithms. Under a more general prior condition, using refined error decompositions and operator norm estimation techniques, satisfactory error bounds and learning rates are proved.
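
For concreteness, a spectral regularization algorithm replaces the inverse of the empirical kernel operator by a filter function g_λ applied to its spectrum; Tikhonov regularization and spectral cut-off are the two classical instances. Below is a minimal Python sketch under these standard definitions (the kernel and parameter values are illustrative assumptions).

import numpy as np

def spectral_regularize(K, y, g_lambda):
    # f_z = sum_i c_i K(., x_i) with c = (1/m) g_lambda(K/m) y,
    # where g_lambda acts on the eigenvalues of the normalized kernel matrix.
    m = len(y)
    evals, evecs = np.linalg.eigh(K / m)
    filtered = g_lambda(np.maximum(evals, 0.0))
    return evecs @ (filtered * (evecs.T @ y)) / m

lam = 1e-2
tikhonov = lambda s: 1.0 / (s + lam)  # g(s) = 1/(s + lambda): ridge regression
cutoff = lambda s: np.where(s >= lam, 1.0 / np.maximum(s, lam), 0.0)  # spectral cut-off

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.cos(3 * X[:, 0]) + 0.05 * rng.normal(size=60)
K = np.exp(-((X - X.T) ** 2) / 0.5)
for name, g in [("tikhonov", tikhonov), ("cut-off", cutoff)]:
    c = spectral_regularize(K, y, g)
    print(name, "training RMSE:", np.sqrt(np.mean((K @ c - y) ** 2)))

Generalizing the admissible filter functions g_λ, as the paper does, widens the family of algorithms covered by a single consistency analysis.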


Symmetry
2019
Vol 11 (3)
pp. 325
Author(s):  
Shengbing Ren ◽  
Wangbo Shen ◽  
Chaudry Siddique ◽  
You Li

The deep multiple kernel learning (DMKL) method has attracted widespread attention due to its better results compared with shallow multiple kernel learning. However, existing DMKL methods, which have a fixed number of layers and fixed types of kernels, adapt poorly to different data sets and make it difficult to find model parameters that improve test accuracy. In this paper, we propose a self-adaptive deep multiple kernel learning (SA-DMKL) method. SA-DMKL adapts the model by optimizing the parameters of each kernel function with a grid search and changes the number and type of kernel functions in each layer according to a generalization bound evaluated with Rademacher chaos complexity. Experiments on three datasets from the University of California, Irvine (UCI) repository and the Caltech 256 image dataset validate the effectiveness of the proposed method in three respects.
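
SA-DMKL itself chooses the number of layers and the kernel types adaptively; purely as a structural illustration of a two-layer multiple-kernel composition with a grid search over one parameter, here is a hedged Python sketch. The base kernels, weights, and grid are assumptions, not the architecture of the paper.

import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def deep_kernel(X, weights=(0.5, 0.5), sigma=1.0):
    # Layer 1: a weighted combination of base kernels.
    K_in = (weights[0] * rbf_kernel(X, X, gamma=1.0)
            + weights[1] * polynomial_kernel(X, X, degree=2))
    # Layer 2: a Gaussian kernel on the distance induced by layer 1,
    # d^2(x, x') = K_in(x, x) + K_in(x', x') - 2 K_in(x, x').
    diag = np.diag(K_in)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K_in
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = (np.linalg.norm(X, axis=1) > 2.0).astype(int)
for sigma in (0.5, 1.0, 2.0):   # grid search over an outer-layer parameter
    K = deep_kernel(X, sigma=sigma)
    clf = SVC(kernel="precomputed").fit(K, y)
    print(f"sigma={sigma}: training accuracy {clf.score(K, y):.2f}")

In the paper the per-layer choices are scored with a Rademacher-chaos generalization bound rather than training accuracy; the sketch only shows how a layered kernel is assembled and searched.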


2019
Vol 50 (1)
pp. 165-188
Author(s):  
Hui Xue ◽  
Lin Wang ◽  
Songcan Chen ◽  
Yunyun Wang

2012
Vol 2012
pp. 1-18
Author(s):  
Shao-Gao Lv ◽  
Jin-De Zhu

The problem of learning the kernel function as a linear combination of multiple kernels has attracted considerable attention recently in machine learning. Specifically, by imposing an l^p-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) was proved useful and effective for theoretical analysis and practical applications (Kloft et al., 2009, 2011). In this paper, we present a theoretical analysis of the approximation error and learning ability of l^p-norm MKL. Our analysis shows explicit learning rates for l^p-norm MKL and demonstrates some notable advantages over traditional kernel-based learning algorithms where the kernel is fixed.
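
For orientation, the l^p-norm MKL problem discussed here is usually posed as below, together with the closed-form weight update of Kloft et al. for fixed per-kernel components (notation and normalization conventions vary across papers):

\[
\min_{\theta \ge 0,\; \|\theta\|_p \le 1}\;
\min_{f \in \mathcal H_{K_\theta}}\;
\frac{1}{m}\sum_{i=1}^{m} \ell\big(f(x_i), y_i\big) + \lambda \|f\|_{K_\theta}^{2},
\qquad K_\theta = \sum_{j=1}^{n} \theta_j K_j,
\]

\[
\theta_j = \frac{\|f_j\|_{K_j}^{2/(p+1)}}{\big(\sum_{k=1}^{n} \|f_k\|_{K_k}^{2p/(p+1)}\big)^{1/p}}.
\]

As p decreases toward 1 the constraint promotes sparse kernel weights, while larger p spreads weight across kernels; explicit learning rates for l^p-norm MKL make this trade-off quantitative.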

