Layerwise learning for quantum neural networks

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Andrea Skolik ◽  
Jarrod R. McClean ◽  
Masoud Mohseni ◽  
Patrick van der Smagt ◽  
Martin Leib

With the increased focus on quantum circuit learning for near-term applications on quantum devices, in conjunction with unique challenges presented by cost function landscapes of parametrized quantum circuits, strategies for effective training are becoming increasingly important. In order to ameliorate some of these challenges, we investigate a layerwise learning strategy for parametrized quantum circuits. The circuit depth is incrementally grown during optimization, and only subsets of parameters are updated in each training step. We show that when considering sampling noise, this strategy can help avoid the problem of barren plateaus of the error surface due to the low depth of circuits, low number of parameters trained in one step, and larger magnitude of gradients compared to training the full circuit. These properties make our algorithm preferable for execution on noisy intermediate-scale quantum devices. We demonstrate our approach on an image-classification task on handwritten digits, and show that layerwise learning attains an 8% lower generalization error on average in comparison to standard learning schemes for training quantum circuits of the same size. Additionally, the percentage of runs that reach lower test errors is up to 40% larger compared to training the full circuit, which is susceptible to creeping onto a plateau during training.
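A minimal NumPy sketch of the depth-growing, subset-training idea described above, not the authors' implementation: the circuit gains one layer per stage and only the newest layer's parameters are optimized. The two-qubit toy circuit, uniform-superposition target, and finite-difference optimizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS = 2

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CZ = np.diag([1.0, 1.0, 1.0, -1.0])  # two-qubit entangler

def run_circuit(layers):
    """Apply RY-per-qubit + CZ layers to |00>."""
    state = np.zeros(2 ** N_QUBITS)
    state[0] = 1.0
    for thetas in layers:
        state = CZ @ (np.kron(ry(thetas[0]), ry(thetas[1])) @ state)
    return state

TARGET = np.full(2 ** N_QUBITS, 0.5)  # toy target: uniform superposition

def cost(layers):
    return 1.0 - abs(TARGET @ run_circuit(layers)) ** 2

def train_layers(layers, trainable, steps=200, lr=0.2, eps=1e-4):
    """Finite-difference gradient descent on the listed layers only."""
    for _ in range(steps):
        for li in trainable:
            grad = np.zeros_like(layers[li])
            for k in range(len(layers[li])):
                layers[li][k] += eps
                c_plus = cost(layers)
                layers[li][k] -= 2 * eps
                grad[k] = (c_plus - cost(layers)) / (2 * eps)
                layers[li][k] += eps
            layers[li] -= lr * grad

layers = []
for depth in range(4):                       # grow the circuit one layer at a time
    layers.append(rng.normal(0.0, 0.1, N_QUBITS))
    train_layers(layers, trainable=[depth])  # update only the newest layer
    print(f"depth {depth + 1}: cost = {cost(layers):.4f}")
```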

Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 170
Author(s):  
Hammam Qassim ◽  
Joel J. Wallman ◽  
Joseph Emerson

Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor ℓ/m, where ℓ and m are, respectively, the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of non-Clifford gates, which can further reduce the runtime in certain cases by reducing the 1-norm ‖a‖₁ of the vector of expansion coefficients. It may additionally lead to a framework for optimizing circuit implementations over a gate-set, reducing the overhead for state injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification and describe how to use it to estimate circuit amplitudes in a way that can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices provided that the target probability is not exponentially small.
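A toy NumPy sketch of the randomized sparsification step itself (not the recompilation routine): a state written as a sum with coefficient vector a is approximated by sampling k terms with probability |a_j|/‖a‖₁. Random unit vectors stand in for the stabilizer states of the real algorithm; all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_terms = 64, 40

# Random unit vectors standing in for stabilizer states.
phis = rng.normal(size=(n_terms, dim)) + 1j * rng.normal(size=(n_terms, dim))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)
a = rng.normal(size=n_terms)          # expansion coefficients

psi = a @ phis                        # exact (unnormalized) state
l1 = np.abs(a).sum()                  # ||a||_1 governs the sampling cost

def sparsify(k):
    """Sample k terms with probability |a_j| / ||a||_1 and average."""
    idx = rng.choice(n_terms, size=k, p=np.abs(a) / l1)
    return (l1 / k) * np.sign(a[idx]) @ phis[idx]

for k in (10, 100, 1000):
    print(f"k={k:4d}: error = {np.linalg.norm(psi - sparsify(k)):.3f}")
```

The approximation error shrinks roughly as ‖a‖₁/√k, which is why reducing ‖a‖₁ directly reduces the number of terms needed for a target accuracy.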


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1281
Author(s):  
Chiara Leadbeater ◽  
Louis Sharrock ◽  
Brian Coyle ◽  
Marcello Benedetti

Generative modelling is an important unsupervised task in machine learning. In this work, we study a hybrid quantum-classical approach to this task, based on the use of a quantum circuit Born machine. In particular, we consider training a quantum circuit Born machine using f-divergences. We first discuss the adversarial framework for generative modelling, which enables the estimation of any f-divergence in the near term. Based on this capability, we introduce two heuristics which demonstrably improve the training of the Born machine. The first is based on f-divergence switching during training. The second introduces locality to the divergence, a strategy which has proved important in similar applications in terms of mitigating barren plateaus. Finally, we discuss the long-term implications of quantum devices for computing f-divergences, including algorithms which provide quadratic speedups to their estimation. In particular, we generalise existing algorithms for estimating the Kullback–Leibler divergence and the total variation distance to obtain a fault-tolerant quantum algorithm for estimating another f-divergence, namely, the Pearson divergence.
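A minimal sketch of the f-divergence family used as training losses, computed from explicit probability vectors rather than circuit samples; the generator functions are standard, while the mid-training switch below is only a schematic of the first heuristic.

```python
import numpy as np

# f-divergence: D_f(p || q) = sum_x q(x) f(p(x) / q(x))
F_GENERATORS = {
    "kl":      lambda t: t * np.log(t),        # Kullback-Leibler
    "tv":      lambda t: 0.5 * np.abs(t - 1),  # total variation
    "pearson": lambda t: (t - 1) ** 2,         # Pearson chi-squared
}

def f_divergence(p, q, name):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * F_GENERATORS[name](p / q)))

p = np.array([0.7, 0.2, 0.1])   # target distribution
q = np.array([0.4, 0.4, 0.2])   # Born machine's model distribution

for name in F_GENERATORS:
    print(f"{name:8s} D_f(p||q) = {f_divergence(p, q, name):.4f}")

def loss_for_epoch(epoch, switch_at=50):
    """Schematic of divergence switching: one loss early, another later."""
    return "tv" if epoch < switch_at else "kl"
```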


2019 ◽  
Vol 5 (10) ◽  
pp. eaaw9918 ◽  
Author(s):  
D. Zhu ◽  
N. M. Linke ◽  
M. Benedetti ◽  
K. A. Landsman ◽  
N. H. Nguyen ◽  
...  

Generative modeling is a flavor of machine learning with applications ranging from computer vision to chemical design. It is expected to be one of the techniques most suited to take advantage of the additional resources provided by near-term quantum computers. Here, we implement a data-driven quantum circuit training algorithm on the canonical Bars-and-Stripes dataset using a quantum-classical hybrid machine. The training proceeds by running parameterized circuits on a trapped-ion quantum computer and feeding the results to a classical optimizer. We apply two separate strategies, particle swarm and Bayesian optimization, to this task. We show that the convergence of the quantum circuit to the target distribution depends critically on both the quantum hardware and the classical optimization strategy. Our study represents the first successful training of a high-dimensional universal quantum circuit and highlights the promise and challenges associated with hybrid learning schemes.
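A compact sketch of the classical outer loop with particle swarm optimization; `circuit_cost` is a hypothetical stand-in for running the parameterized circuit on hardware and scoring the sampled distribution against Bars-and-Stripes, and all PSO hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 8                                    # number of circuit parameters

def circuit_cost(theta):
    # Stand-in for: run circuit(theta) on the trapped-ion machine and
    # measure the divergence between sampled and target distributions.
    return float(np.sum((theta - 0.5) ** 2))

n_particles, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights

pos = rng.uniform(-np.pi, np.pi, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([circuit_cost(x) for x in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([circuit_cost(x) for x in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best cost found:", pbest_cost.min())
```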


2022 ◽  
Vol 32 (1) ◽  
Author(s):  
ShiJie Wei ◽  
YanHu Chen ◽  
ZengRong Zhou ◽  
GuiLu Long

Quantum machine learning is one of the most promising applications of quantum computing in the noisy intermediate-scale quantum (NISQ) era. We propose a quantum convolutional neural network (QCNN) inspired by convolutional neural networks (CNNs), which greatly reduces the computing complexity compared with its classical counterparts, requiring O((log₂ M)⁶) basic gates and O(m² + e) variational parameters, where M is the input data size, m is the filter mask size, and e is the number of parameters in a Hamiltonian. Our model is robust to certain noise for image recognition tasks, and its parameters are independent of the input size, making it friendly to near-term quantum devices. We demonstrate QCNN with two explicit examples. First, QCNN is applied to image processing, and numerical simulations of three types of spatial filtering, image smoothing, sharpening, and edge detection, are performed. Second, we demonstrate QCNN for image recognition, namely the recognition of handwritten digits. Compared with previous work, this machine learning model provides implementable quantum circuits that accurately correspond to a specific classical convolutional kernel. It provides an efficient avenue to transform a CNN into a QCNN directly and opens up the prospect of exploiting quantum power to process information in the era of big data.
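To make explicit what the QCNN must reproduce, here are the three spatial-filtering tasks written classically; the kernel values are standard textbook choices, not taken from the paper.

```python
import numpy as np

KERNELS = {
    "smooth":  np.full((3, 3), 1 / 9),                                # mean blur
    "sharpen": np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float),
    "edge":    np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float),
}

def filter2d(image, kernel):
    """Valid-mode 2-D filtering (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(3).random((8, 8))
for name, k in KERNELS.items():
    print(name, filter2d(image, k).shape)
```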


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 592
Author(s):  
Piotr Czarnik ◽  
Andrew Arrasmith ◽  
Patrick J. Coles ◽  
Lukasz Cincio

Achieving near-term quantum advantage will require accurate estimation of quantum observables despite significant hardware noise. For this purpose, we propose a novel, scalable error-mitigation method that applies to gate-based quantum computers. The method generates training data {(X_i^noisy, X_i^exact)} via quantum circuits composed largely of Clifford gates, which can be efficiently simulated classically, where X_i^noisy and X_i^exact are the noisy and noiseless observables, respectively. Fitting a linear ansatz to this data then allows for the prediction of noise-free observables for arbitrary circuits. We analyze the performance of our method versus the number of qubits, circuit depth, and number of non-Clifford gates. We obtain an order-of-magnitude error reduction for a ground-state energy problem on 16 qubits on an IBMQ quantum computer and on a 64-qubit noisy simulator.
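A small sketch of the fitting step: pairs (X_noisy, X_exact) from classically simulable near-Clifford circuits are used to fit the linear ansatz X_exact ≈ a·X_noisy + b, which then corrects the circuit of interest. The synthetic noise model producing the data here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in training data: exact observables from near-Clifford circuits
# (classically simulated) and their noisy hardware counterparts.
x_exact = rng.uniform(-1, 1, 50)
x_noisy = 0.8 * x_exact - 0.05 + rng.normal(0, 0.02, 50)  # toy noise channel

a, b = np.polyfit(x_noisy, x_exact, deg=1)  # fit the linear ansatz

def mitigate(noisy_value):
    """Predict the noise-free observable for an arbitrary circuit."""
    return a * noisy_value + b

print(f"corrected estimate for 0.30: {mitigate(0.30):.3f}")
```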


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 559
Author(s):  
Yasunari Suzuki ◽  
Yoshiaki Kawase ◽  
Yuya Masumura ◽  
Yuria Hiraga ◽  
Masahiro Nakadai ◽  
...  

To explore the possibilities of near-term intermediate-scale quantum algorithms and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purposes. We show the main concepts of Qulacs, explain how to use its features via examples, describe numerical techniques to speed up simulation, and demonstrate its performance with numerical benchmarks.
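A minimal Qulacs session in the spirit of the examples the paper describes (assuming a standard `pip install qulacs`; the circuit and observable below are arbitrary):

```python
from qulacs import Observable, QuantumCircuit, QuantumState

n = 3
state = QuantumState(n)              # starts in |000>

circuit = QuantumCircuit(n)
circuit.add_H_gate(0)                # Hadamard on qubit 0
circuit.add_CNOT_gate(0, 1)          # entangle qubits 0 and 1
circuit.add_RY_gate(2, 0.5)          # parameterized rotation on qubit 2
circuit.update_quantum_state(state)  # run the circuit on the state

obs = Observable(n)
obs.add_operator(1.0, "Z 0 Z 1")     # Pauli string Z0 Z1
print("<Z0 Z1> =", obs.get_expectation_value(state))
print("samples:", state.sampling(5))
```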


2012 ◽  
Vol 2012 ◽  
pp. 1-6
Author(s):  
Hong-Quan Zhao ◽  
Seiya Kasai

One-dimensional nanowire quantum devices and basic quantum logic AND and OR units built on hexagonal nanowire units controlled by wrap gates (WPGs) were designed and fabricated on a GaAs-based one-dimensional electron gas (1-DEG) regular nanowire network with hexagonal topology. These basic quantum logic units worked correctly at 35 K, and clear quantum conductance was achieved on the node device, the logic AND circuit unit, and the logic OR circuit unit. A binary-decision-diagram- (BDD-) based arithmetic logic unit (ALU) was realized on the GaAs-based regular nanowire network with hexagonal topology by the same fabrication method as that of the quantum devices and basic circuits. This BDD-based ALU circuit worked correctly at room temperature. Since these quantum devices and circuits are basic units of the BDD ALU combinational circuit, the possibility of integrating them into BDD-based quantum circuits with more complicated structures is discussed. We anticipate the realization of quantum BDD combinational circuitry with very low energy consumption and very high integration density.
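Since the ALU's logic style may be unfamiliar, here is a tiny sketch of binary-decision-diagram evaluation, the scheme the nanowire network implements in hardware: each node tests one input and routes the "messenger" along its 0- or 1-branch. The dictionary encoding is purely illustrative.

```python
# Each node: (input variable, branch on 0, branch on 1); 0/1 are terminals.
AND_BDD = {"root": ("a", 0, "n1"), "n1": ("b", 0, 1)}   # computes a AND b
OR_BDD  = {"root": ("a", "n1", 1), "n1": ("b", 0, 1)}   # computes a OR b

def eval_bdd(bdd, inputs, node="root"):
    if node in (0, 1):                       # reached a terminal
        return node
    var, low, high = bdd[node]
    return eval_bdd(bdd, inputs, high if inputs[var] else low)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", eval_bdd(AND_BDD, {"a": a, "b": b}),
              "OR:", eval_bdd(OR_BDD, {"a": a, "b": b}))
```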


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 539
Author(s):  
Johannes Jakob Meyer

The recent advent of noisy intermediate-scale quantum devices, especially near-term quantum computers, has sparked extensive research efforts concerned with their possible applications. At the forefront of the considered approaches are variational methods that use parametrized quantum circuits. The classical and quantum Fisher information are firmly rooted in the field of quantum sensing and have proven to be versatile tools to study such parametrized quantum systems. Their utility in the study of other applications of noisy intermediate-scale quantum devices, however, has only been discovered recently. Hoping to stimulate more such applications, this article aims to further popularize classical and quantum Fisher information as useful tools for near-term applications beyond quantum sensing. We start with a tutorial that builds an intuitive understanding of classical and quantum Fisher information and outlines how both quantities can be calculated on near-term devices. We also elucidate their relationship and how they are influenced by noise processes. Next, we give an overview of the core results of the quantum sensing literature and proceed to a comprehensive review of recent applications in variational quantum algorithms and quantum machine learning.
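As a concrete companion to the tutorial material, a sketch of the classical Fisher information matrix F_ij = Σ_x (∂_i p)(∂_j p)/p, estimated by central finite differences; the two-parameter toy distribution stands in for circuit measurement statistics and is an illustrative assumption.

```python
import numpy as np

def p_theta(theta):
    """Toy two-outcome distribution standing in for circuit measurements."""
    p0 = np.cos(theta[0] / 2) ** 2 * np.cos(theta[1] / 2) ** 2
    return np.array([p0, 1.0 - p0])

def classical_fisher(theta, eps=1e-5):
    theta = np.asarray(theta, float)
    d = theta.size
    grads = np.empty((d, p_theta(theta).size))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        grads[i] = (p_theta(theta + e) - p_theta(theta - e)) / (2 * eps)
    p = p_theta(theta)
    # F_ij = sum_x (d_i p)(d_j p) / p
    return np.einsum("ix,jx->ij", grads / p, grads)

print(classical_fisher([0.4, 0.9]))
```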


2021 ◽  
Vol 2 (2) ◽  
pp. 1-24
Author(s):  
Chih-Chieh Chen ◽  
Masaya Watabe ◽  
Kodai Shiba ◽  
Masaru Sogabe ◽  
Katsuyoshi Sakamoto ◽  
...  

Applying quantum processors to model a high-dimensional function approximator is a typical method in quantum machine learning with potential advantage. It is conjectured that the unitarity of quantum circuits provides possible regularization to avoid overfitting. However, it is not clear how this regularization interplays with expressibility under the limitations of current noisy intermediate-scale quantum (NISQ) devices. In this article, we perform simulations and theoretical analysis of the quantum circuit learning problem with a hardware-efficient ansatz. Thorough numerical simulations show that the expressibility and generalization error scaling of the ansatz saturate as the circuit depth increases, implying automatic regularization that avoids overfitting in the quantum circuit learning scenario. This observation is supported by the theory of PAC learnability, which proves that the VC dimension is upper-bounded due to the locality and unitarity of the hardware-efficient ansatz. Our study provides supporting evidence for automatic regularization by unitarity to suppress overfitting, and guidelines for possible performance improvements under hardware constraints.
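A sketch of the standard expressibility diagnostic referenced above: sample parameter pairs, histogram the fidelities |⟨ψ(θ)|ψ(θ′)⟩|², and compare against the Haar prediction P(F) = (d-1)(1-F)^(d-2). The one-qubit RY-RZ layer is an illustrative stand-in for a hardware-efficient ansatz.

```python
import numpy as np

rng = np.random.default_rng(5)

def psi(theta):
    """|psi> = RZ(theta[1]) RY(theta[0]) |0> on a single qubit."""
    a, b = theta
    state = np.array([np.cos(a / 2), np.sin(a / 2)], complex)
    return np.array([np.exp(-1j * b / 2), np.exp(1j * b / 2)]) * state

fids = [abs(np.vdot(psi(rng.uniform(0, 2 * np.pi, 2)),
                    psi(rng.uniform(0, 2 * np.pi, 2)))) ** 2
        for _ in range(20000)]

bins = 20
hist, _ = np.histogram(fids, bins=bins, range=(0, 1), density=True)
haar = np.ones(bins)          # for d = 2 the Haar fidelity density is uniform
mask = hist > 0
kl = np.sum(hist[mask] * np.log(hist[mask] / haar[mask])) / bins
print("expressibility (KL to Haar):", kl)
```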


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Michael A. Perlin ◽  
Zain H. Saleem ◽  
Martin Suchara ◽  
James C. Osborn

We introduce maximum-likelihood fragment tomography (MLFT) as an improved circuit cutting technique for running clustered quantum circuits on quantum devices with a limited number of qubits. In addition to minimizing the classical computing overhead of circuit cutting methods, MLFT finds the most likely probability distribution for the output of a quantum circuit, given the measurement data obtained from the circuit’s fragments. We demonstrate the benefits of MLFT for accurately estimating the output of a fragmented quantum circuit with numerical experiments on random unitary circuits. Finally, we show that circuit cutting can estimate the output of a clustered circuit with higher fidelity than full circuit execution, thereby motivating the use of circuit cutting as a standard tool for running clustered circuits on quantum hardware.
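One ingredient of "most likely" recombination can be sketched classically: naive fragment recombination may return a quasi-distribution with negative entries, and projecting it onto the probability simplex recovers a valid distribution. The Euclidean projection below is a standard stand-in, not the paper's full MLFT estimator.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {p : p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(v.size) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0.0)

quasi = np.array([0.55, 0.30, -0.05, 0.20])   # toy recombined estimate
p = project_to_simplex(quasi)
print(p, p.sum())   # non-negative and normalized
```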

