Quantum Adiabatic Evolution Algorithm for a Quantum Neural Network

Author(s):  
Mitsunaga Kinjo ◽  
Shigeo Sato ◽  
Koji Nakajima
2003 ◽  
Vol 58 (4) ◽  
pp. 201-203 ◽  
Author(s):  
Joonwoo Bae ◽  
Younghun Kwon

The quantum adiabatic evolution algorithm suggested by Farhi et al. has proved effective in solving NP-complete problems. The algorithm is governed by the adiabatic theorem; therefore, to reduce the running time, it is essential to examine the minimum energy gap between the ground level and the next one during the evolution. In this paper we show a way to speed up the quantum adiabatic evolution algorithm using an extended Hamiltonian. We present the exact relation between the energy gap and the elements of the extended Hamiltonian, which provides a new point of view on reducing the running time.
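To make concrete why the minimum gap controls the running time, the sketch below scans the spectral gap along the standard Farhi-style interpolation H(s) = (1-s)·H_B + s·H_P for a toy 3-qubit cost function. This is only an illustration of the gap condition in the adiabatic theorem; the authors' extended Hamiltonian and their exact gap relation are not reproduced here.

```python
# Minimal sketch: scan the gap between the two lowest levels of the standard
# adiabatic interpolation H(s) = (1 - s) * H_B + s * H_P on a tiny toy instance.
# The running time bound from the adiabatic theorem scales with the inverse
# square of the minimum of this gap.
import numpy as np

n = 3                      # toy system: 3 qubits
dim = 2 ** n

sx = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

def kron_at(op, k):
    """Place the single-qubit operator `op` on qubit k (identity elsewhere)."""
    mats = [op if i == k else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Beginning Hamiltonian: transverse field, ground state = uniform superposition.
H_B = sum((np.eye(dim) - kron_at(sx, k)) / 2 for k in range(n))

# Problem Hamiltonian: a diagonal toy cost with a unique minimum.
rng = np.random.default_rng(0)
H_P = np.diag(rng.permutation(dim).astype(float))

# Scan s in [0, 1] and record the gap between the two lowest eigenvalues.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    H_s = (1.0 - s) * H_B + s * H_P
    evals = np.linalg.eigvalsh(H_s)
    gaps.append(evals[1] - evals[0])

print("minimum gap along the path:", min(gaps))
```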


Science ◽  
2001 ◽  
Vol 292 (5516) ◽  
pp. 472-475 ◽  
Author(s):  
E. Farhi ◽  
J. Goldstone ◽  
S. Gutmann ◽  
J. Lapan ◽  
A. Lundgren ◽  
...  

2010 ◽  
Vol 20-23 ◽  
pp. 612-617 ◽  
Author(s):  
Wei Sun ◽  
Yu Jun He ◽  
Ming Meng

The paper presents a novel quantum neural network (QNN) model with variable selection for short-term load forecasting. In the proposed QNN model, a combination of maximum conditional entropy theory and principal component analysis is first used to select the main influential factors with the highest correlation to the power load index, yielding an effective set of input variables. The quantum neural network forecasting model is then constructed. The proposed QNN forecasting model is tested on provincial load data. The experiments and the performance of the QNN model are reported, and the results show that the method provides a satisfactory improvement in forecasting accuracy compared with a traditional BP network model.
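A rough sketch of the two-stage pipeline described above is given here: reduce the candidate load-influencing factors with PCA, then fit a forecasting model on the reduced inputs. A classical MLPRegressor stands in for the quantum neural network, the data are synthetic, and the maximum-conditional-entropy screening step is not reproduced.

```python
# Hedged sketch of "variable selection + neural forecaster" on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))                              # 12 candidate factors
y = 1.5 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=500)    # synthetic load index

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

pca = PCA(n_components=4).fit(X_train)       # keep the main components only
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
model.fit(pca.transform(X_train), y_train)   # stand-in for the QNN forecaster

print("test R^2:", model.score(pca.transform(X_test), y_test))
```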


2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Sayantan Choudhury ◽  
Ankan Dutta ◽  
Debisree Ray

Abstract In this work, our prime objective is to study the phenomena of quantum chaos and complexity in the machine learning dynamics of a Quantum Neural Network (QNN). A Parameterized Quantum Circuit (PQC) in the hybrid quantum-classical framework is introduced as a universal function approximator to perform optimization with Stochastic Gradient Descent (SGD). We employ a statistical and differential-geometric approach to study the learning theory of the QNN. The evolution of parametrized unitary operators is correlated with the trajectory of the parameters in the Diffusion metric. We establish parametrized versions of Quantum Complexity and Quantum Chaos in terms of physically relevant quantities, which are essential not only in determining the stability but also in providing a significant lower bound on the generalization capability of the QNN. We explicitly prove that when the system executes limit cycles or oscillations in phase space, the generalization capability of the QNN is maximized. Finally, we determine the bound on the generalization capability in terms of the variance of the parameters of the QNN in a steady-state condition using the Cauchy-Schwarz inequality.
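As a minimal illustration of the hybrid quantum-classical loop referenced above, the sketch below treats a tiny parameterized circuit (one qubit, two rotation angles) as a function approximator and updates its parameters with gradient descent using the parameter-shift rule. This is a generic toy example, not the authors' specific QNN, diffusion-metric analysis, or complexity bound.

```python
# One-qubit PQC trained to hit a target <Z> expectation value.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def circuit_expval(theta):
    """<0| U(theta)^dagger Z U(theta) |0> with U = RY(theta[1]) RX(theta[0])."""
    psi = ry(theta[1]) @ rx(theta[0]) @ np.array([1.0, 0.0], dtype=complex)
    return float(np.real(psi.conj() @ (Z @ psi)))

def grad(theta, target):
    """Gradient of (f - target)^2 via the exact parameter-shift rule."""
    g = np.zeros_like(theta)
    err = circuit_expval(theta) - target
    for k in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[k] += np.pi / 2
        minus[k] -= np.pi / 2
        d_expval = 0.5 * (circuit_expval(plus) - circuit_expval(minus))
        g[k] = 2.0 * err * d_expval
    return g

theta = np.array([0.1, 0.2])
target = -0.5
for step in range(200):          # simple gradient-descent updates
    theta -= 0.2 * grad(theta, target)

print("final <Z> =", circuit_expval(theta), "target =", target)
```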


2018 ◽  
Vol 16 (08) ◽  
pp. 1840001
Author(s):  
Johannes Bausch

The goal of this work is to define a notion of a “quantum neural network” to classify data, which exploits the low-energy spectrum of a local Hamiltonian. As a concrete application, we build a binary classifier, train it on some actual data and then test its performance on a simple classification task. More specifically, we use Microsoft’s quantum simulator, LIQUi|⟩, to construct local Hamiltonians that can encode trained classifier functions in their ground space, and which can be probed by measuring the overlap with test states corresponding to the data to be classified. To obtain such a classifier Hamiltonian, we further propose a training scheme based on quantum annealing which is completely closed-off to the environment and which does not depend on external measurements until the very end, avoiding unnecessary decoherence during the annealing procedure. For a network of size [Formula: see text], the trained network can be stored as a list of [Formula: see text] coupling strengths. We address the question of which interactions are most suitable for a given classification task, and develop a qubit-saving optimization for the training procedure on a simulated annealing device. Furthermore, a small neural network to classify colors into red versus blue is trained and tested, and benchmarked against the annealing parameters.
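A hedged sketch of the "probe the ground space by overlap" idea follows: build a toy two-qubit Hamiltonian, project onto its ground space, and score a test state by its overlap with that projector. The paper's actual pipeline (trained classifier Hamiltonians in LIQUi|⟩, quantum-annealing training) is not reproduced; the Hamiltonian and data here are invented for illustration.

```python
# Score test states by their overlap with the ground space of a toy Hamiltonian.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
H = -np.kron(Z, Z)                      # ground space spanned by |00> and |11>

evals, evecs = np.linalg.eigh(H)
ground_vectors = evecs[:, np.isclose(evals, evals[0])]
P_ground = ground_vectors @ ground_vectors.T.conj()   # ground-space projector

def classify(test_state):
    """Overlap of a normalized test state with the ground-space projector."""
    return float(np.real(test_state.conj() @ (P_ground @ test_state)))

ket_00 = np.array([1, 0, 0, 0], dtype=complex)
ket_01 = np.array([0, 1, 0, 0], dtype=complex)
print("overlap(|00>):", classify(ket_00))   # inside the ground space
print("overlap(|01>):", classify(ket_01))   # orthogonal to it
```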


2019 ◽  
Vol 14 (1) ◽  
pp. 124-134 ◽  
Author(s):  
Shuai Zhang ◽  
Yong Chen ◽  
Xiaoling Huang ◽  
Yishuai Cai

Online feedback is an effective channel of communication between government departments and citizens. However, the high daily volume of public feedback has increased the burden on government administrators. Deep learning methods are good at automatically analyzing data and extracting deep features, thereby improving the accuracy of classification prediction. In this study, we aim to use a text classification model to classify public feedback automatically and so reduce the workload of administrators. In particular, a convolutional neural network model combined with word embeddings and optimized by a differential evolution algorithm is adopted. We compare it with seven common text classification models, and the results show that the proposed model has good classification performance under different evaluation metrics, including accuracy, precision, recall, and F1-score.
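The sketch below shows the kind of model this abstract describes: a word-embedding plus 1-D convolution text classifier, built here with Keras and trained on synthetic token sequences. The paper's differential evolution search over the network settings is not reproduced; it would act as an outer loop adjusting arguments such as the hypothetical filters, kernel_size, and embed_dim used below.

```python
# Hedged sketch: embedding + Conv1D text classifier on synthetic data.
import numpy as np
import tensorflow as tf

vocab_size, seq_len, num_classes = 5000, 40, 4

def build_model(filters=64, kernel_size=3, embed_dim=50):
    # These hyperparameters are the kind a differential evolution loop would tune.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.Conv1D(filters, kernel_size, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Synthetic stand-in for tokenized public-feedback texts and their categories.
rng = np.random.default_rng(2)
X = rng.integers(1, vocab_size, size=(1000, seq_len))
y = rng.integers(0, num_classes, size=1000)

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```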


2019 ◽  
Author(s):  
Elizabeth Behrman ◽  
Nam Nguyen ◽  
James Steck

Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which says we cannot make an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, and unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of the parallel and distributed nature of classical neural networks, they have long been used successfully to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise and, in addition, robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be advantageous in training, as they help correct for overfitting. We demonstrate this robustness using entanglement as a means of pattern storage in a qubit array. Our results provide evidence that machine learning approaches can obviate otherwise recalcitrant problems in quantum computing.
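For intuition about the noise the stored patterns must survive, the sketch below (an illustration only, not the authors' QNN) tracks how the entanglement of a two-qubit Bell state, used as a stand-in "stored pattern", degrades under increasing depolarizing noise, measured by the negativity of the partial transpose.

```python
# Entanglement of a Bell state under depolarizing noise.
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())

def depolarize(rho, p):
    """Mix the state with the maximally mixed state with probability p."""
    return (1 - p) * rho + p * np.eye(4) / 4

def negativity(rho):
    """Entanglement negativity from the partial transpose over the second qubit."""
    rho_4 = rho.reshape(2, 2, 2, 2)
    rho_pt = rho_4.transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(rho_pt)
    return float(-np.sum(evals[evals < 0]))

for p in (0.0, 0.2, 0.5, 0.8):
    print(f"noise p={p:.1f}  negativity={negativity(depolarize(rho_bell, p)):.3f}")
```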


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Jinjing Shi ◽  
Shuhui Chen ◽  
Yuhu Lu ◽  
Yanyan Feng ◽  
Ronghua Shi ◽  
...  
