indefinite kernel
Recently Published Documents

TOTAL DOCUMENTS: 28 (five years: 7)
H-INDEX: 7 (five years: 2)

Quantum (2021), Vol 5, pp. 531
Author(s): Xinbiao Wang, Yuxuan Du, Yong Luo, Dacheng Tao

A key problem in the field of quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [Nat Commun 12, 2631] partially answered this question through the lens of quantum kernel learning: they showed that quantum kernels can learn specific datasets with lower generalization error than the optimal classical kernel methods. However, most of their results are established in an idealized setting and ignore the caveats of near-term quantum machines. A crucial open question is therefore: does the power of quantum kernels still hold in the NISQ setting? In this study, we fill this knowledge gap by examining the power of quantum kernels when quantum system noise and sample error are taken into account. Concretely, we first prove that the advantage of quantum kernels vanishes for large datasets, few measurements, and large system noise. With the aim of preserving the superiority of quantum kernels in the NISQ era, we further devise an effective method based on indefinite kernel learning. Numerical simulations accord with our theoretical results. Our work provides theoretical guidance for exploring advanced quantum kernels that attain quantum advantages on NISQ devices.
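To make the measurement-error mechanism concrete, the sketch below simulates finite-shot estimation of a kernel matrix (each entry estimated as a Bernoulli mean over a small number of shots), which typically breaks positive semi-definiteness, and then fits a coefficient-based regularized model that tolerates indefiniteness. This is a toy illustration with an assumed Gaussian surrogate kernel and illustrative parameters (`shots`, `lam`); it is not the authors' construction or their specific indefinite kernel learning method.

```python
# Minimal sketch (not the authors' method): finite-shot estimation of a
# kernel matrix can make the empirical Gram matrix indefinite; a
# coefficient-based regularized least-squares fit still works because it
# never requires positive semi-definiteness. All names and parameters are
# illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, shots = 40, 64                      # small data set, few measurements
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

# "Ideal" kernel standing in for |<phi(x_i)|phi(x_j)>|^2 (a toy surrogate).
K_ideal = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))

# Finite measurements: each entry is a Bernoulli mean over `shots` samples,
# so the estimate fluctuates and the matrix need not stay PSD.
K_hat = rng.binomial(shots, K_ideal) / shots
K_hat = 0.5 * (K_hat + K_hat.T)        # keep symmetry only
print("smallest eigenvalue:", np.linalg.eigvalsh(K_hat).min())  # often < 0

# Coefficient-based regularization: min_a ||K a - y||^2 + lam * ||a||^2,
# solvable for any symmetric K, indefinite included.
lam = 1e-2
alpha = np.linalg.solve(K_hat.T @ K_hat + lam * np.eye(n), K_hat.T @ y)
print("training RMSE:", np.sqrt(np.mean((K_hat @ alpha - y) ** 2)))
```

Increasing `shots` shrinks the per-entry fluctuation and pushes the smallest eigenvalue back toward its ideal value, which mirrors the abstract's point that the advantage degrades when measurements are few.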


2020, Vol 191, pp. 105272
Author(s): Hui Xue, Yu Song, Hai-Ming Xu

2019, Vol 17 (06), pp. 947-975
Author(s): Lei Shi

We investigate distributed learning with a coefficient-based regularization scheme under the framework of kernel regression. Compared with classical kernel ridge regression (KRR), the algorithm under consideration does not require the kernel function to be positive semi-definite and hence provides a simple paradigm for designing indefinite kernel methods. The distributed learning approach partitions a massive data set into several disjoint subsets, runs the algorithm on each subset, and then produces a global estimator by averaging the local estimators. Because the partitions are easy to construct and the local problems can be solved in parallel, this leads to a substantial reduction in computation time compared with running the original algorithm on the entire sample. We establish the first minimax-optimal rates of convergence for the distributed coefficient-based regularization scheme with indefinite kernels, and thus demonstrate that, compared with distributed KRR, the algorithm is more flexible and effective for regression on large-scale data sets.
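A minimal sketch of the divide-and-conquer pipeline described above, assuming a tanh (sigmoid) kernel as the indefinite kernel, squared loss with an l2 penalty on the coefficients, and plain averaging of the local estimators; the subset sizes, regularization parameter, and kernel choice are illustrative rather than the paper's exact setup.

```python
# Sketch of distributed coefficient-based regularization with an indefinite
# kernel: partition the data, fit a local coefficient-regularized estimator
# on each subset (in parallel), and average the local predictions.
import numpy as np

rng = np.random.default_rng(1)

def tanh_kernel(A, B, gamma=0.5, c=0.1):
    """Sigmoid kernel: generally NOT positive semi-definite."""
    return np.tanh(gamma * A @ B.T + c)

def local_estimator(Xs, ys, lam=1e-2):
    """Coefficient-based regularized least squares on one subset:
    min_a ||K a - y||^2 + lam * ||a||^2, valid for indefinite K."""
    K = tanh_kernel(Xs, Xs)
    alpha = np.linalg.solve(K.T @ K + lam * np.eye(len(ys)), K.T @ ys)
    return Xs, alpha

def predict(local_models, Xnew):
    """Global estimator = average of the local estimators' predictions."""
    preds = [tanh_kernel(Xnew, Xs) @ alpha for Xs, alpha in local_models]
    return np.mean(preds, axis=0)

# Toy regression data, split into m disjoint subsets.
N, m = 2000, 10
X = rng.uniform(-1, 1, size=(N, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(N)
subsets = np.array_split(rng.permutation(N), m)

models = [local_estimator(X[idx], y[idx]) for idx in subsets]  # parallelizable
Xtest = np.linspace(-1, 1, 200)[:, None]
print(predict(models, Xtest)[:5])
```

Each local solve costs O(n_s^3) for a subset of size n_s, so splitting N points into m subsets replaces one O(N^3) solve with m much smaller ones that can run in parallel, which is the computational saving the abstract refers to.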


2019, Vol 14 (2), pp. 349-363
Author(s): Hui Xue, Haiming Xu, Xiaohong Chen, Yunyun Wang

2019, Vol 50 (1), pp. 165-188
Author(s): Hui Xue, Lin Wang, Songcan Chen, Yunyun Wang

2019, Vol 30 (3), pp. 765-776
Author(s): Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Johan A. K. Suykens

2018, Vol 318, pp. 213-226
Author(s): Frank-Michael Schleif, Andrej Gisbrecht, Peter Tino

2018, Vol 78, pp. 144-153
Author(s): Siamak Mehrkanoon, Xiaolin Huang, Johan A. K. Suykens
