Classification with Quantum Neural Networks on Near Term Processors

2020 ◽  
Author(s):  
Edward Farhi ◽  
Hartmut Neven

We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter-dependent unitary transformations that acts on an input quantum state. For binary classification, a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network's predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state, and we show by example that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. It will therefore be possible to run this QNN on a near-term gate-model quantum computer, where its power can be explored beyond what simulation allows.
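
As a rough illustration of the circuit structure described above, here is a minimal NumPy sketch of a QNN of this kind: parameter-dependent unitaries generated by Pauli strings act on an input state, and the expectation of Z on a designated readout qubit serves as the label predictor. The choice of generators (Z–X couplings between data qubits and the readout) is an assumption made for the sketch, not the ansatz of the paper.

import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def layer_unitary(theta, pauli_string):
    """exp(-i * theta * P) for a tensor product P of Paulis (valid since P^2 = I)."""
    P = kron_all(pauli_string)
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

def qnn_predict(thetas, input_state, n_qubits):
    """Apply a sequence of parameter-dependent unitaries, then read <Z> on the last (readout) qubit."""
    state = input_state
    for k, theta in enumerate(thetas):
        # assumed ansatz: couple data qubit k (mod n-1) to the readout qubit via a Z...X generator
        paulis = [I2] * n_qubits
        paulis[k % (n_qubits - 1)] = Z
        paulis[-1] = X
        state = layer_unitary(theta, paulis) @ state
    readout = kron_all([I2] * (n_qubits - 1) + [Z])
    return np.real(np.vdot(state, readout @ state))  # value in [-1, 1]; its sign is the predicted label

# usage: 3 qubits (2 data + 1 readout), computational-basis input |01> (x) |0>
n = 3
psi = np.zeros(2 ** n, dtype=complex); psi[0b010] = 1.0
print(qnn_predict(np.array([0.3, -0.7, 1.1]), psi, n))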

2021 ◽  
Vol 2 (1) ◽  
pp. 1-35
Author(s):  
Adrien Suau ◽  
Gabriel Staffelbach ◽  
Henri Calandra

In the last few years, several quantum algorithms have been devised that address the problem of solving partial differential equations: on the one hand, “direct” quantum algorithms that aim to encode the solution of the PDE by executing one large quantum circuit; on the other hand, variational algorithms that approximate the solution of the PDE by executing several small quantum circuits and taking advantage of classical optimisers. In this work, we propose an experimental study of the costs (in terms of gate number and execution time on idealised hardware created from realistic gate data) associated with one of the “direct” quantum algorithms: the wave equation solver devised in [32]. We show that our implementation of the quantum wave equation solver agrees with the theoretical big-O complexity of the algorithm. We also explain the implementation steps in detail and discuss possible improvements. Finally, our implementation demonstrates experimentally that some PDEs can be solved on a quantum computer, even though the chosen direct quantum algorithm will require error-corrected quantum chips, which are not expected to be available in the short term.
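
The check that measured gate counts agree with the theoretical big-O complexity can be illustrated with a small sketch: fit empirical (size, gate count) pairs to a power law in log-log space and compare the fitted exponent with the predicted one. The numbers below are placeholders, not data from the paper.

import numpy as np

# Hypothetical (problem size, gate count) measurements -- placeholders, not the paper's data.
sizes = np.array([8, 16, 32, 64, 128])
gate_counts = np.array([1.1e3, 4.5e3, 1.9e4, 7.8e4, 3.2e5])

# Fit gate_count ~ C * size^alpha by linear regression in log-log space.
alpha, logC = np.polyfit(np.log(sizes), np.log(gate_counts), 1)
print(f"fitted exponent alpha = {alpha:.2f}, prefactor C = {np.exp(logC):.1f}")
# Comparing alpha with the exponent predicted by the algorithm's big-O bound
# is the kind of agreement check the study describes.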


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1690
Author(s):  
Teague Tomesh ◽  
Pranav Gokhale ◽  
Eric R. Anschuetz ◽  
Frederic T. Chong

Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well—which is necessary for a quantum advantage over k-means on the entire data set—appears to be challenging.
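
To make the coreset idea concrete, here is a minimal classical sketch: sample a small weighted coreset from the data set and run weighted k-means on it. Uniform sampling with weight n/m per point stands in for the more sophisticated sensitivity-based constructions, and classical weighted k-means stands in for the QAOA step, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def build_coreset(data, m):
    """Uniform-sampling coreset: m points, each weighted by n/m (a simple stand-in
    for sensitivity-based constructions)."""
    n = len(data)
    idx = rng.choice(n, size=m, replace=False)
    return data[idx], np.full(m, n / m)

def weighted_kmeans(points, weights, k, iters=50):
    """Lloyd's algorithm with weighted centroid updates on the coreset."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.average(points[mask], axis=0, weights=weights[mask])
    return centers

# toy data: two Gaussian blobs
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (500, 2))])
coreset, w = build_coreset(data, m=20)
print(weighted_kmeans(coreset, w, k=2))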


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 592
Author(s):  
Piotr Czarnik ◽  
Andrew Arrasmith ◽  
Patrick J. Coles ◽  
Lukasz Cincio

Achieving near-term quantum advantage will require accurate estimation of quantum observables despite significant hardware noise. For this purpose, we propose a novel, scalable error-mitigation method that applies to gate-based quantum computers. The method generates training data {(X_i^noisy, X_i^exact)} via quantum circuits composed largely of Clifford gates, which can be efficiently simulated classically, where X_i^noisy and X_i^exact are noisy and noiseless observables, respectively. Fitting a linear ansatz to these data then allows for the prediction of noise-free observables for arbitrary circuits. We analyze the performance of our method as a function of the number of qubits, circuit depth, and number of non-Clifford gates. We obtain an order-of-magnitude error reduction for a ground-state energy problem on 16 qubits of an IBMQ quantum computer and on a 64-qubit noisy simulator.
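
A minimal sketch of the fitting step, assuming placeholder training values rather than real device data: fit the linear ansatz X_exact ≈ a·X_noisy + b to the Clifford training pairs, then apply the fit to the noisy observable of the circuit of interest.

import numpy as np

# Hypothetical training pairs (X_i^noisy, X_i^exact) from near-Clifford circuits -- placeholders.
x_noisy = np.array([0.61, 0.42, 0.77, 0.30, 0.55])
x_exact = np.array([0.92, 0.63, 1.15, 0.45, 0.83])

# Fit the linear ansatz X_exact ~ a * X_noisy + b by least squares.
a, b = np.polyfit(x_noisy, x_exact, 1)

# Mitigate a new noisy observable measured on the circuit of interest.
x_target_noisy = 0.48
x_target_mitigated = a * x_target_noisy + b
print(f"a={a:.3f}, b={b:.3f}, mitigated estimate={x_target_mitigated:.3f}")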


2021 ◽  
Vol 3 (4) ◽  
Author(s):  
Daniel Evans

Quick Quantum Circuit Simulation (QQCS) is a software system for computing the result of a quantum circuit using a notation that derives directly from the circuit, expressed in a single input line. Quantum circuits begin with an initial quantum state of one or more qubits, the quantum analog of classical bits. The initial state is modified by a sequence of quantum gates, the quantum counterpart of machine-language instructions, to produce the final state. Measurements are made of the final state and displayed as a classical binary result. Measurements are postponed to the end of the circuit because a quantum state collapses when measured and produces probabilistic results, a consequence of quantum uncertainty. A circuit may be run many times on a quantum computer to refine the probabilistic result. Mathematically, quantum states are 2^n-dimensional vectors over the complex number field, where n is the number of qubits, and a gate is a 2^n × 2^n unitary matrix of complex values. Matrix multiplication models the application of a gate to a quantum state. QQCS is a mathematical rendering of each step of a quantum algorithm represented as a circuit, and as such it can present a trace of the quantum state after each gate, compute gate equivalents for each circuit step, and perform measurements at any point in the circuit without state collapse. Output can be displayed in vector coefficients or Dirac bra-ket notation. QQCS is an easy-to-use educational tool for students new to quantum computing.
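
A minimal state-vector sketch in the same spirit (not QQCS itself or its notation): apply each gate's unitary to the state, print the state after every step as a trace, and read off measurement probabilities without collapsing the state.

import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# 2-qubit Bell-state circuit: H on qubit 0, then CNOT(0 -> 1).
state = np.zeros(4); state[0] = 1.0                # |00>
for name, gate in [("H on q0", kron_all([H, I2])), ("CNOT q0->q1", CNOT)]:
    state = gate @ state
    print(name, "->", np.round(state, 3))          # trace of the state after each gate

# Measurement probabilities are read off the amplitudes, without collapsing the state.
print("P(outcomes) =", np.round(np.abs(state) ** 2, 3))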


2022 ◽  
Vol 9 ◽  
Author(s):  
Mahabubul Alam ◽  
Swaroop Ghosh

Quantum machine learning (QML) is promising for potential speedups and improvements in conventional machine learning (ML) tasks. Existing QML models that use deep parametric quantum circuits (PQC) suffer from a large accumulation of gate errors and decoherence. To circumvent this issue, we propose a new QML architecture called QNet. QNet consists of several small quantum neural networks (QNN). Each of these smaller QNNs can be executed on the small quantum computers that dominate the NISQ era. By carefully choosing the size of these QNNs, QNet can exploit quantum computers of arbitrary size to solve supervised ML tasks of any scale. It also enables heterogeneous technology integration within a single QML application. Through empirical studies, we show the trainability and generalization of QNet and the impact of various configurable variables on its performance. We compare QNet's performance against existing models and discuss potential issues and design considerations. In our study, we show 43% better accuracy on average over existing models on noisy quantum hardware emulators. More importantly, QNet provides a blueprint for building noise-resilient QML models from a collection of small quantum neural networks on near-term noisy quantum devices.
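
A toy sketch of the architectural idea, with the quantum parts replaced by closed-form expectation values: the input features are partitioned across several small sub-networks whose outputs are combined by a classical layer. The angle encoding, the product-of-<Z> readout, and the sigmoid combiner are assumptions made for illustration, not QNet's actual ansatz.

import numpy as np

def small_qnn(features, thetas):
    """Toy stand-in for a small QNN: angle-encode each feature as an RY rotation on its own
    qubit and return the product of single-qubit <Z> values (<Z> after RY(a) on |0> is cos(a))."""
    z_expvals = np.cos(features + thetas)
    return np.prod(z_expvals)

def qnet_predict(x, theta_blocks, weights, bias):
    """Split the input across several small QNNs and combine their outputs classically."""
    blocks = np.array_split(x, len(theta_blocks))
    outputs = np.array([small_qnn(b, t) for b, t in zip(blocks, theta_blocks)])
    return 1.0 / (1.0 + np.exp(-(weights @ outputs + bias)))   # sigmoid of a linear combination

# usage: 8 input features handled by 4 two-qubit sub-networks
rng = np.random.default_rng(1)
x = rng.uniform(0, np.pi, 8)
theta_blocks = [rng.uniform(0, np.pi, 2) for _ in range(4)]
print(qnet_predict(x, theta_blocks, weights=rng.normal(size=4), bias=0.0))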


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 170
Author(s):  
Hammam Qassim ◽  
Joel J. Wallman ◽  
Joseph Emerson

Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and the validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor ℓ/m, where ℓ and m are, respectively, the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of non-Clifford gates, which can further reduce the runtime in certain cases by reducing the 1-norm of the expansion vector, ‖a‖₁. It may additionally lead to a framework for optimizing circuit implementations over a gate set, reducing the overhead for state injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification and describe how to use it to estimate circuit amplitudes in a way that can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices, provided that the target probability is not exponentially small.
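
A toy sketch of randomized sparsification itself: a state written as a weighted sum of terms is approximated by sampling a limited number of terms with probability proportional to |a_i|, with the approximation quality governed by the 1-norm ‖a‖₁. Random unit vectors stand in for stabilizer states here; the paper's algorithmic improvements are not reproduced.

import numpy as np

rng = np.random.default_rng(2)

# Target state written as a weighted sum  psi = sum_i a_i |phi_i>,
# where the |phi_i> stand in for stabilizer states (here: random unit vectors).
dim, n_terms = 16, 40
phis = rng.normal(size=(n_terms, dim)) + 1j * rng.normal(size=(n_terms, dim))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)
a = rng.normal(size=n_terms) + 1j * rng.normal(size=n_terms)
psi = a @ phis

def sparsify(a, phis, k):
    """Keep k sampled terms, drawn with probability |a_i| / ||a||_1, reweighted so the estimator is unbiased."""
    l1 = np.sum(np.abs(a))
    p = np.abs(a) / l1
    idx = rng.choice(len(a), size=k, p=p)
    # each sampled term contributes (a_i / p_i) / k = l1 * phase(a_i) / k
    return (l1 / k) * np.sum((a[idx] / np.abs(a[idx]))[:, None] * phis[idx], axis=0)

for k in [10, 100, 1000]:
    approx = sparsify(a, phis, k)
    print(k, "terms, error:", np.linalg.norm(psi - approx))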


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Pei-Hua Wang ◽  
Jen-Hao Chen ◽  
Yufeng Jane Tseng

Pharmaceutical patent analysis is key to product protection for pharmaceutical companies. In patent claims, a Markush structure is a standard chemical structure drawing with variable substituents. Overlaps between apparently dissimilar Markush structures are nearly unrecognizable when the structures span a broad chemical space. We propose a quantum search-based method that performs an exact comparison between two non-enumerated Markush structures using a constraint-satisfaction oracle. The quantum circuit is verified with a quantum simulator, and the real effect of noise is estimated using a five-qubit superconducting IBM quantum computer. The probability of measuring the correct states can be increased by improving the connectivity of the most computation-intensive qubits. Depolarizing error is the most influential error. The quantum method, which exactly compares two patents, is hard to simulate classically and thus offers a quantum advantage in patent analysis.
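
A tiny classical simulation of the quantum-search ingredient: Grover iterations with an oracle that phase-flips basis states satisfying a constraint. The predicate below is a placeholder; a real oracle would encode the Markush-structure comparison.

import numpy as np

n_qubits = 5
N = 2 ** n_qubits

def constraint(x):
    """Placeholder predicate standing in for the constraint-satisfaction oracle."""
    return x in (3, 21)

marked = np.array([constraint(x) for x in range(N)])

# Grover search, simulated as state-vector linear algebra.
state = np.full(N, 1 / np.sqrt(N))                  # uniform superposition
n_iters = int(np.round(np.pi / 4 * np.sqrt(N / marked.sum())))
for _ in range(n_iters):
    state[marked] *= -1                             # oracle: phase-flip the marked states
    state = 2 * state.mean() - state                # diffusion: inversion about the mean
print("P(marked) =", np.sum(state[marked] ** 2))    # close to 1 after ~(pi/4) * sqrt(N/M) iterations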


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Cristina Cîrstoiu ◽  
Zoë Holmes ◽  
Joseph Iosue ◽  
Lukasz Cincio ◽  
Patrick J. Coles ◽  
...  

Trotterization-based, iterative approaches to quantum simulation (QS) are restricted to simulation times less than the coherence time of the quantum computer (QC), which limits their utility in the near term. Here, we present a hybrid quantum-classical algorithm, called variational fast forwarding (VFF), for decreasing the quantum circuit depth of QSs. VFF seeks an approximate diagonalization of a short-time simulation to enable longer-time simulations using a constant number of gates. Our error analysis provides two results: (1) the simulation error of VFF scales at worst linearly in the fast-forwarded simulation time, and (2) our cost function’s operational meaning as an upper bound on average-case simulation error provides a natural termination condition for VFF. We implement VFF for the Hubbard, Ising, and Heisenberg models on a simulator. In addition, we implement VFF on Rigetti’s QC to demonstrate simulation beyond the coherence time. Finally, we show how to estimate energy eigenvalues using VFF.
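
A NumPy sketch of the fast-forwarding idea, with exact numerical diagonalization standing in for the variational step: once U(Δt) is written as W D W†, the evolution U(nΔt) is obtained by powering only the diagonal phases, so the cost does not grow with the number of time steps.

import numpy as np

# Toy 2-qubit Hamiltonian and its short-time unitary U(dt) = exp(-i H dt).
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
lam, V = np.linalg.eigh(H)
dt = 0.1
U = lambda t: V @ np.diag(np.exp(-1j * lam * t)) @ V.conj().T
U_dt = U(dt)

# Diagonalize the short-time unitary once: U(dt) = W D W^dagger.
# (VFF learns an approximate W and D variationally; here we diagonalize numerically.)
d, W = np.linalg.eig(U_dt)

def fast_forward(n_steps):
    """U(n*dt) obtained by powering only the diagonal phases -- no growth in circuit depth with n."""
    return W @ np.diag(d ** n_steps) @ np.linalg.inv(W)

n = 50
print("error vs. direct evolution:", np.linalg.norm(fast_forward(n) - U(n * dt)))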


2019 ◽  
Vol 5 (10) ◽  
pp. eaaw9918 ◽  
Author(s):  
D. Zhu ◽  
N. M. Linke ◽  
M. Benedetti ◽  
K. A. Landsman ◽  
N. H. Nguyen ◽  
...  

Generative modeling is a flavor of machine learning with applications ranging from computer vision to chemical design. It is expected to be one of the techniques most suited to take advantage of the additional resources provided by near-term quantum computers. Here, we implement a data-driven quantum circuit training algorithm on the canonical Bars-and-Stripes dataset using a quantum-classical hybrid machine. The training proceeds by running parameterized circuits on a trapped-ion quantum computer and feeding the results to a classical optimizer. We apply two separate strategies, particle swarm optimization and Bayesian optimization, to this task. We show that the convergence of the quantum circuit to the target distribution depends critically on both the quantum hardware and the classical optimization strategy. Our study represents the first successful training of a high-dimensional universal quantum circuit and highlights the promise and challenges associated with hybrid learning schemes.
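
A sketch of the classical side of such a training loop: build the Bars-and-Stripes target distribution and the cost that the classical optimizer minimizes over circuit parameters. The KL-divergence cost and the uniform placeholder for the circuit's measured distribution are assumptions made for illustration.

import numpy as np
from itertools import product

def bars_and_stripes(rows, cols):
    """Enumerate Bars-and-Stripes images and return the uniform target distribution over bitstrings."""
    patterns = set()
    for bits in product([0, 1], repeat=rows):          # horizontal bars
        patterns.add(tuple(np.repeat(bits, cols)))
    for bits in product([0, 1], repeat=cols):          # vertical stripes
        patterns.add(tuple(np.tile(bits, rows)))
    target = np.zeros(2 ** (rows * cols))
    for p in patterns:
        target[int("".join(map(str, p)), 2)] = 1.0
    return target / target.sum()

def kl_cost(target, model, eps=1e-12):
    """Cost minimized by the classical optimizer (e.g. particle swarm or Bayesian optimization)."""
    return np.sum(target * np.log((target + eps) / (model + eps)))

target = bars_and_stripes(2, 2)
# Placeholder for the measured output distribution of the parameterized circuit.
model = np.full_like(target, 1 / len(target))
print("KL(target || model) =", kl_cost(target, model))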


2016 ◽  
pp. 134-178 ◽  
Author(s):  
Nathan Wiebe ◽  
Martin Roetteler

We develop a method for approximate synthesis of single-qubit rotations of the form e^{−i f(φ_1,...,φ_k) X} that is based on the Repeat-Until-Success (RUS) framework for quantum circuit synthesis. We demonstrate how smooth computable functions f can be synthesized from two basic primitives. This synthesis approach constitutes a manifestly quantum form of arithmetic that differs greatly from the approaches commonly used in quantum algorithms. The key advantage of our approach is that it requires far fewer qubits than existing approaches: as a case in point, we show that using as few as 3 ancilla qubits, one can obtain RUS circuits for approximate multiplication and reciprocals. We also analyze the costs of performing multiplication and inversion on a quantum computer using conventional approaches and find that they can require too many qubits to execute on a small quantum computer, unlike our approach.
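
As a worked illustration of the synthesis target (not of the RUS construction itself): since X² = I, the rotation e^{−i f(φ_1,...,φ_k) X} equals cos(f)·I − i·sin(f)·X, which the sketch below builds directly as a matrix for f(φ_1, φ_2) = φ_1·φ_2, the multiplication primitive mentioned above.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def rotation(f_value):
    """The target single-qubit rotation exp(-i * f * X), built directly as a matrix.
    (The paper synthesizes such rotations from RUS circuit primitives instead.)"""
    return np.cos(f_value) * np.eye(2) - 1j * np.sin(f_value) * X

# Example: f(phi1, phi2) = phi1 * phi2, i.e. the multiplication primitive.
phi1, phi2 = 0.4, 0.9
U = rotation(phi1 * phi2)
print(np.round(U, 4))
print("unitary check:", np.allclose(U @ U.conj().T, np.eye(2)))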

