Multiple indefinite kernel learning with mixed norm regularization

Author(s):  
Matthieu Kowalski ◽  
Marie Szafranski ◽  
Liva Ralaivola
2016 ◽  
Vol 2016 ◽  
pp. 1-6
Author(s):  
Zhongfeng Qu ◽  
Hongwei Sun

We study the asymptotic properties of indefinite kernel networks with l_q-norm regularization. The framework under investigation differs from classical kernel learning: positive semidefiniteness is not required of the kernel function. By a new stepping-stone technique, satisfactory error bounds and learning rates are derived without any interior cone condition on the input space X or L_τ condition on the probability measure ρ_X.
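
For orientation, the following is a plausible formalization of the scheme this abstract studies; the exact objective below is an assumption based on standard coefficient-regularized l_q kernel networks, not a quotation from the paper.

```latex
% Sample z = {(x_i, y_i)}_{i=1}^m and a kernel K that need NOT be PSD.
% Hypothesis space spanned on the data (no RKHS structure is assumed):
%   H_{K,z} = \{ f(x) = \textstyle\sum_{i=1}^m c_i K(x, x_i) : c \in \mathbb{R}^m \}
% Coefficient-regularized least-squares estimator, \lambda > 0, 0 < q \le 2:
f_z = \arg\min_{c \in \mathbb{R}^m}
      \frac{1}{m} \sum_{i=1}^m \Big( \sum_{j=1}^m c_j K(x_i, x_j) - y_i \Big)^{2}
      + \lambda \sum_{i=1}^m |c_i|^{q}
```

Because the penalty acts on the expansion coefficients rather than on an RKHS norm, positive semidefiniteness of K is never invoked, which is what allows the analysis to drop the usual conditions on X and ρ_X.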


2019 ◽  
Vol 50 (1) ◽  
pp. 165-188
Author(s):  
Hui Xue ◽  
Lin Wang ◽  
Songcan Chen ◽  
Yunyun Wang

2020 ◽  
Vol 191 ◽  
pp. 105272
Author(s):  
Hui Xue ◽  
Yu Song ◽  
Hai-Ming Xu

Multiple kernel learning for feature selection (MKL-FS) uses kernels to explore complex properties of features and performs well among embedded methods. However, the kernels in MKL-FS are generally required to be positive definite. In practice, indefinite kernels often arise and can achieve better empirical performance, but because of their non-convexity, existing MKL-FS methods are usually inapplicable and research on this setting remains scarce. In this paper, we propose a novel multiple indefinite kernel feature selection method (MIK-FS) based on the primal framework of the indefinite kernel support vector machine (IKSVM), which applies an indefinite base kernel to each feature and imposes an l1-norm constraint on the kernel combination coefficients to select features automatically. A two-stage algorithm is further presented to optimize the coefficients of the IKSVM and the kernel combination alternately. In the algorithm, we reformulate the non-convex optimization problem of the primal IKSVM as a difference-of-convex-functions (DC) program and transform the non-convex problem into a convex one via an affine minorization approximation. Experiments on real-world datasets demonstrate that MIK-FS is superior to related state-of-the-art methods in both feature selection and classification performance.
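
The following is a minimal sketch of the alternating two-stage idea, not the authors' implementation: it substitutes a kernel-ridge surrogate for the DC-programmed primal IKSVM and uses hypothetical tanh base kernels, one per feature, with a proximal l1 step for the combination weights.

```python
# Sketch of the MIK-FS alternating scheme -- not the paper's code.
import numpy as np

def base_kernels(X):
    """One tanh base kernel per feature; tanh kernels are indefinite in general."""
    n, d = X.shape
    Ks = np.empty((d, n, n))
    for j in range(d):
        xj = X[:, j:j + 1]
        Ks[j] = np.tanh(xj @ xj.T)
    return Ks

def mik_fs_sketch(X, y, lam=1e-2, mu=1e-2, step=1e-2, iters=50):
    Ks = base_kernels(X)
    d, n, _ = Ks.shape
    w = np.full(d, 1.0 / d)          # kernel combination weights
    alpha = np.zeros(n)
    for _ in range(iters):
        # Stage 1: representer coefficients on the combined indefinite kernel
        # (ridge surrogate; no PSD assumption is needed to solve the system).
        K = np.tensordot(w, Ks, axes=1)
        alpha = np.linalg.lstsq(K + lam * np.eye(n), y, rcond=None)[0]
        # Stage 2: gradient step on the squared loss w.r.t. w, then the l1
        # proximal map (soft-thresholding under w >= 0), which drives the
        # weights of uninformative features exactly to zero.
        resid = K @ alpha - y
        grad = 2.0 / n * np.array([alpha @ Ks[j] @ resid for j in range(d)])
        w = np.maximum(w - step * grad - step * mu, 0.0)
    return w, alpha

# Features whose weight w[j] stays positive after convergence are the selected ones.
```

The l1 proximal step is what performs the feature selection here, mirroring the role of the l1-norm constraint on the kernel combination coefficients described in the abstract.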


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 531
Author(s):  
Xinbiao Wang ◽  
Yuxuan Du ◽  
Yong Luo ◽  
Dacheng Tao

A key problem in the field of quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [Nat Commun 12, 2631] partially answered this question through the lens of quantum kernel learning: they showed that quantum kernels can learn specific datasets with lower generalization error than optimal classical kernel methods. However, most of their results are established in an idealized setting and ignore the caveats of near-term quantum machines. A crucial open question is therefore: does the power of quantum kernels still hold in the NISQ setting? In this study, we fill this knowledge gap by examining the power of quantum kernels when quantum system noise and sample error are taken into account. Concretely, we first prove that the advantage of quantum kernels vanishes for large datasets, few measurements, and large system noise. With the aim of preserving the superiority of quantum kernels in the NISQ era, we further devise an effective method via indefinite kernel learning. Numerical simulations agree with our theoretical results. Our work provides theoretical guidance for exploring advanced quantum kernels to attain quantum advantages on NISQ devices.
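
A small numerical illustration of the sample-error half of this claim follows. It is a sketch under assumptions: a fidelity-type quantum kernel K_ij = |⟨ψ_i|ψ_j⟩|² with random feature states, each entry estimated from a finite number of measurement shots; none of this is the paper's code.

```python
# Shot noise turns a PSD quantum-kernel estimate indefinite -- illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)

def noisy_kernel_estimate(K_true, shots):
    """Estimate each kernel entry from `shots` Bernoulli measurement outcomes."""
    K_hat = rng.binomial(shots, K_true) / shots
    K_hat = (K_hat + K_hat.T) / 2      # symmetrize the estimate
    np.fill_diagonal(K_hat, 1.0)       # K(x, x) = 1 for a fidelity kernel
    return K_hat

# Build an exactly PSD fidelity kernel from random unit "feature states".
n, dim = 40, 8
psi = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)
K_true = np.abs(psi @ psi.conj().T) ** 2   # PSD by the Schur product theorem

for shots in (10_000, 100, 10):
    K_hat = noisy_kernel_estimate(K_true, shots)
    print(shots, np.linalg.eigvalsh(K_hat).min())
# With many shots the spectrum stays near-nonnegative; with few shots negative
# eigenvalues appear and the estimated Gram matrix must be treated as indefinite.
```

The few-shot runs typically produce negative eigenvalues, i.e. an indefinite Gram matrix, which is precisely the regime that motivates handling the estimated quantum kernel with indefinite kernel learning rather than PSD-only methods.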


Author(s):  
Guo ◽  
Xiaoqian Zhang ◽  
Zhigui Liu ◽  
Xuqian Xue ◽  
Qian Wang ◽  
...  
