Clifford recompilation for faster classical simulation of quantum circuits

Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 170
Author(s):  
Hammam Qassim ◽  
Joel J. Wallman ◽  
Joseph Emerson

Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor ℓ/m, where ℓ and m are respectively the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of non-Clifford gates, which can further reduce the runtime in certain cases by reducing the 1-norm of the vector of expansion, ‖a‖₁. It may additionally lead to a framework for optimizing circuit implementations over a gate-set, reducing the overhead for state-injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification, and describe how to use it to estimate circuit amplitudes in a way which can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices provided that the target probability is not exponentially small.
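A toy illustration of the randomized sparsification step (a sketch only: dense numpy vectors stand in for stabilizer states, and the function name and arguments are hypothetical). Terms of the decomposition are sampled with probability proportional to |a_i|, so the number of kept terms, and hence the runtime, is governed by ‖a‖₁.

```python
# Toy sketch of randomized sparsification. Given |psi> = sum_i a_i |phi_i>,
# sample k terms with probability |a_i| / ||a||_1 and reweight, so that the
# sparse sum is an unbiased approximation of |psi> with only k terms.
import numpy as np

def sparsify(coeffs, states, k, seed=0):
    """coeffs: (m,) complex; states: (m, d) rows are |phi_i>; k: number of samples."""
    rng = np.random.default_rng(seed)
    l1 = np.sum(np.abs(coeffs))                  # ||a||_1 controls the sampling cost
    probs = np.abs(coeffs) / l1
    idx = rng.choice(len(coeffs), size=k, p=probs)
    phases = coeffs[idx] / np.abs(coeffs[idx])   # keep the phase of each sampled term
    # Each sample contributes (||a||_1 / k) * phase * |phi_i>; averaging gives an
    # unbiased estimator of |psi> using k (<= m) terms.
    return (l1 / k) * (phases[:, None] * states[idx]).sum(axis=0)

# Tiny usage example: a 2-qubit state written as a sum of 4 basis states.
states = np.eye(4, dtype=complex)
coeffs = np.array([0.6, 0.5, -0.4, 0.3], dtype=complex)
psi = coeffs @ states
approx = sparsify(coeffs, states, k=200)
print(np.linalg.norm(psi - approx))              # small for moderate k
```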

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1281
Author(s):  
Chiara Leadbeater ◽  
Louis Sharrock ◽  
Brian Coyle ◽  
Marcello Benedetti

Generative modelling is an important unsupervised task in machine learning. In this work, we study a hybrid quantum-classical approach to this task, based on the use of a quantum circuit Born machine. In particular, we consider training a quantum circuit Born machine using f-divergences. We first discuss the adversarial framework for generative modelling, which enables the estimation of any f-divergence in the near term. Based on this capability, we introduce two heuristics which demonstrably improve the training of the Born machine. The first is based on f-divergence switching during training. The second introduces locality to the divergence, a strategy which has proved important in similar applications in terms of mitigating barren plateaus. Finally, we discuss the long-term implications of quantum devices for computing f-divergences, including algorithms which provide quadratic speedups to their estimation. In particular, we generalise existing algorithms for estimating the Kullback–Leibler divergence and the total variation distance to obtain a fault-tolerant quantum algorithm for estimating another f-divergence, namely, the Pearson divergence.
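A minimal numpy sketch of the plug-in definitions of three of the f-divergences mentioned above (Kullback–Leibler, total variation, Pearson), computed from empirical histograms. This is not the adversarial estimator studied in the paper; the helper names and sample data are illustrative.

```python
# Plug-in estimates of f-divergences between the data distribution p and the
# Born machine distribution q, from empirical histograms of samples.
import numpy as np

def empirical(samples, n_outcomes):
    counts = np.bincount(samples, minlength=n_outcomes).astype(float)
    return counts / counts.sum()

def kl(p, q, eps=1e-12):        # Kullback-Leibler: sum_x p log(p/q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def tv(p, q):                   # total variation: 0.5 * sum_x |p - q|
    return 0.5 * float(np.sum(np.abs(p - q)))

def pearson(p, q, eps=1e-12):   # Pearson chi^2: sum_x (p - q)^2 / q
    return float(np.sum((p - q) ** 2 / (q + eps)))

rng = np.random.default_rng(1)
p = empirical(rng.integers(0, 8, size=5000), 8)   # stand-in for data samples
q = empirical(rng.integers(0, 8, size=5000), 8)   # stand-in for Born machine samples
print(kl(p, q), tv(p, q), pearson(p, q))
```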


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 559
Author(s):  
Yasunari Suzuki ◽  
Yoshiaki Kawase ◽  
Yuya Masumura ◽  
Yuria Hiraga ◽  
Masahiro Nakadai ◽  
...  

To explore the possibilities of near-term intermediate-scale quantum algorithms and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purposes. We show the main concepts of Qulacs, explain how to use its features via examples, describe numerical techniques to speed up simulation, and demonstrate its performance with numerical benchmarks.
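A minimal usage example, assuming the standard Qulacs Python API (QuantumState, QuantumCircuit, Observable); consult the Qulacs documentation for exact signatures and gate conventions.

```python
# Prepare a 2-qubit Bell-like state with Qulacs and measure <Z0 Z1>.
from qulacs import QuantumState, QuantumCircuit, Observable

state = QuantumState(2)           # |00>
state.set_zero_state()

circuit = QuantumCircuit(2)
circuit.add_H_gate(0)             # Hadamard on qubit 0
circuit.add_CNOT_gate(0, 1)       # entangle qubits 0 and 1
circuit.add_RZ_gate(1, 0.3)       # a parametrized rotation
circuit.update_quantum_state(state)

obs = Observable(2)
obs.add_operator(1.0, "Z 0 Z 1")  # Pauli string Z0 Z1
print(obs.get_expectation_value(state))
print(state.get_vector())         # full state vector, for debugging small circuits
```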


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Andrea Skolik ◽  
Jarrod R. McClean ◽  
Masoud Mohseni ◽  
Patrick van der Smagt ◽  
Martin Leib

With the increased focus on quantum circuit learning for near-term applications on quantum devices, in conjunction with unique challenges presented by cost function landscapes of parametrized quantum circuits, strategies for effective training are becoming increasingly important. In order to ameliorate some of these challenges, we investigate a layerwise learning strategy for parametrized quantum circuits. The circuit depth is incrementally grown during optimization, and only subsets of parameters are updated in each training step. We show that when considering sampling noise, this strategy can help avoid the problem of barren plateaus of the error surface, owing to the low circuit depth, the small number of parameters trained in each step, and the larger magnitude of gradients compared to training the full circuit. These properties make our algorithm preferable for execution on noisy intermediate-scale quantum devices. We demonstrate our approach on an image-classification task on handwritten digits, and show that layerwise learning attains an 8% lower generalization error on average in comparison to standard learning schemes for training quantum circuits of the same size. Additionally, the percentage of runs that reach lower test errors is up to 40% larger compared to training the full circuit, which is susceptible to creeping onto a plateau during training.
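A schematic sketch of the layerwise schedule described above: the circuit is grown layer by layer and only the newest layer's parameters are updated in each phase. A toy quadratic loss stands in for the sampled circuit cost; all names and hyperparameters are illustrative.

```python
# Layerwise learning sketch: grow depth one layer at a time, train only the
# parameters of the newest layer, keep earlier layers frozen.
import numpy as np

N_QUBITS, N_LAYERS, STEPS, LR = 4, 6, 50, 0.1
rng = np.random.default_rng(0)
target = rng.normal(size=(N_LAYERS, N_QUBITS))           # hypothetical optimum

def loss(params):                                         # stand-in for a measured <H>
    return float(np.sum((params - target[: len(params)]) ** 2))

def grad(params, mask, eps=1e-4):                         # finite differences, masked
    g = np.zeros_like(params)
    for idx in np.argwhere(mask):
        shift = np.zeros_like(params)
        shift[tuple(idx)] = eps
        g[tuple(idx)] = (loss(params + shift) - loss(params - shift)) / (2 * eps)
    return g

params = np.zeros((0, N_QUBITS))
for layer in range(N_LAYERS):
    params = np.vstack([params, np.zeros((1, N_QUBITS))]) # grow depth by one layer
    mask = np.zeros_like(params, dtype=bool)
    mask[-1] = True                                       # train the newest layer only
    for _ in range(STEPS):
        params -= LR * grad(params, mask)
print("final loss:", loss(params))
```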


2022 ◽  
Vol 32 (1) ◽  
Author(s):  
ShiJie Wei ◽  
YanHu Chen ◽  
ZengRong Zhou ◽  
GuiLu Long

Quantum machine learning is one of the most promising applications of quantum computing in the noisy intermediate-scale quantum (NISQ) era. We propose a quantum convolutional neural network (QCNN) inspired by convolutional neural networks (CNN), which greatly reduces the computing complexity compared with its classical counterparts, with O((log₂M)^6) basic gates and O(m² + e) variational parameters, where M is the input data size, m is the filter mask size, and e is the number of parameters in a Hamiltonian. Our model is robust to certain noise for image recognition tasks, and the parameters are independent of the input size, making it friendly to near-term quantum devices. We demonstrate QCNN with two explicit examples. First, QCNN is applied to image processing, and numerical simulations of three types of spatial filtering (image smoothing, sharpening, and edge detection) are performed. Second, we demonstrate QCNN in image recognition, namely the recognition of handwritten digits. Compared with previous work, this machine learning model can provide implementable quantum circuits that accurately correspond to a specific classical convolutional kernel. It provides an efficient avenue to transform CNN to QCNN directly and opens up the prospect of exploiting quantum power to process information in the era of big data.
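A back-of-the-envelope comparison of the stated O((log₂M)^6) gate scaling with the O(M·m²) multiply-accumulates of a classical convolution, for a few illustrative image sizes. Constant factors are ignored, so the numbers only show how the two scalings diverge.

```python
# Compare the quoted quantum gate scaling with classical convolution cost.
from math import log2

for M in (32 * 32, 256 * 256, 1024 * 1024):   # input sizes in pixels
    m = 3                                     # 3x3 filter mask
    quantum_gates = log2(M) ** 6              # O((log2 M)^6) basic gates
    classical_ops = M * m ** 2                # O(M * m^2) multiply-accumulates
    print(f"M={M:>8}: ~{quantum_gates:,.0f} basic gates vs {classical_ops:,} classical ops")
```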


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 592
Author(s):  
Piotr Czarnik ◽  
Andrew Arrasmith ◽  
Patrick J. Coles ◽  
Lukasz Cincio

Achieving near-term quantum advantage will require accurate estimation of quantum observables despite significant hardware noise. For this purpose, we propose a novel, scalable error-mitigation method that applies to gate-based quantum computers. The method generates training data {X_i^noisy, X_i^exact} via quantum circuits composed largely of Clifford gates, which can be efficiently simulated classically, where X_i^noisy and X_i^exact are the noisy and noiseless observables, respectively. Fitting a linear ansatz to this data then allows for the prediction of noise-free observables for arbitrary circuits. We analyze the performance of our method versus the number of qubits, circuit depth, and number of non-Clifford gates. We obtain an order-of-magnitude error reduction for a ground-state energy problem on 16 qubits on an IBMQ quantum computer and on a 64-qubit noisy simulator.
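A small numpy sketch of the linear-ansatz fit at the heart of the method. The training pairs here are synthetic stand-ins for the classically simulated (exact) and hardware-measured (noisy) observables of the near-Clifford training circuits.

```python
# Clifford-data-regression sketch: fit X_exact ~ a * X_noisy + b on training
# pairs from (mostly Clifford) circuits, then apply the fit to a target circuit.
import numpy as np

rng = np.random.default_rng(0)
x_exact = rng.uniform(-1, 1, size=30)                     # noiseless observables (simulable)
x_noisy = 0.7 * x_exact - 0.05 + rng.normal(0, 0.02, 30)  # same circuits run with noise

a, b = np.polyfit(x_noisy, x_exact, deg=1)                # least-squares linear ansatz

def mitigate(noisy_value):
    return a * noisy_value + b

print("mitigated:", mitigate(0.31))                       # prediction for a target circuit
```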


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 492
Author(s):  
Philippe Suchsland ◽  
Francesco Tacchino ◽  
Mark H. Fischer ◽  
Titus Neupert ◽  
Panagiotis Kl. Barkoutsos ◽  
...  

We present a hardware-agnostic error mitigation algorithm for near-term quantum processors inspired by the classical Lanczos method. This technique can reduce the impact of different sources of noise at the sole cost of an increase in the number of measurements to be performed on the target quantum circuit, without additional experimental overhead. We demonstrate through numerical simulations and experiments on IBM Quantum hardware that the proposed scheme significantly increases the accuracy of cost function evaluations within the framework of variational quantum algorithms, thus leading to improved ground-state calculations for quantum chemistry and physics problems beyond state-of-the-art results.
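A toy numpy/scipy sketch of the general Lanczos-from-moments idea (not necessarily the paper's exact protocol): moments ⟨H^k⟩ measured on the noisy or imperfect state define a small generalized eigenvalue problem in the Krylov basis, whose lowest root improves on the raw energy estimate. Here the moments come from an exact toy Hamiltonian rather than from measurements.

```python
# From moments mu_k = <psi|H^k|psi>, build H_mn = mu_{m+n+1}, S_mn = mu_{m+n}
# and solve the small generalized eigenvalue problem H c = E S c.
import numpy as np
from scipy.linalg import eigh

H = np.diag([-1.0, 0.2, 0.9])                      # toy Hamiltonian (diagonal for brevity)
psi = np.array([0.9, 0.4, 0.15])
psi /= np.linalg.norm(psi)                         # imperfect trial state

order = 2                                          # Krylov subspace of dimension order + 1
mu = [psi @ np.linalg.matrix_power(H, k) @ psi for k in range(2 * order + 2)]
Hk = np.array([[mu[m + n + 1] for n in range(order + 1)] for m in range(order + 1)])
Sk = np.array([[mu[m + n] for n in range(order + 1)] for m in range(order + 1)])

energies = eigh(Hk, Sk, eigvals_only=True)
print("raw <H>:", mu[1], " Lanczos estimate:", energies[0], " exact ground energy:", -1.0)
```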


2012 ◽  
Vol 2012 ◽  
pp. 1-6
Author(s):  
Hong-Quan Zhao ◽  
Seiya Kasai

One-dimensional nanowire quantum devices and basic quantum logic AND and OR units based on hexagonal nanowire units controlled by wrap gates (WPG) were designed and fabricated on a GaAs-based one-dimensional electron gas (1-DEG) regular nanowire network with hexagonal topology. These basic quantum logic units worked correctly at 35 K, and clear quantum conductance was achieved on the node device, the logic AND circuit unit, and the logic OR circuit unit. A binary-decision-diagram- (BDD-) based arithmetic logic unit (ALU) was realized on a GaAs-based regular nanowire network with hexagonal topology by the same fabrication method as that of the quantum devices and basic circuits. This BDD-based ALU circuit worked correctly at room temperature. Since these quantum devices and circuits are the basic units of the BDD ALU combinational circuit, the possibility of integrating them into BDD-based quantum circuits with more complicated structures is discussed. We anticipate the realization of quantum BDD combinational circuitry with very low energy consumption and very high integration density.


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 539
Author(s):  
Johannes Jakob Meyer

The recent advent of noisy intermediate-scale quantum devices, especially near-term quantum computers, has sparked extensive research efforts concerned with their possible applications. At the forefront of the considered approaches are variational methods that use parametrized quantum circuits. The classical and quantum Fisher information are firmly rooted in the field of quantum sensing and have proven to be versatile tools to study such parametrized quantum systems. Their utility in the study of other applications of noisy intermediate-scale quantum devices, however, has only been discovered recently. Hoping to stimulate more such applications, this article aims to further popularize classical and quantum Fisher information as useful tools for near-term applications beyond quantum sensing. We start with a tutorial that builds an intuitive understanding of classical and quantum Fisher information and outlines how both quantities can be calculated on near-term devices. We also elucidate their relationship and how they are influenced by noise processes. Next, we give an overview of the core results of the quantum sensing literature and proceed to a comprehensive review of recent applications in variational quantum algorithms and quantum machine learning.
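A minimal sketch of the classical Fisher information matrix F_ij = Σ_x (∂_i p_x)(∂_j p_x)/p_x for a parametrized output distribution p_θ(x). A toy softmax model and finite differences stand in for circuit output probabilities and parameter-shift gradients; all names are illustrative.

```python
# Classical Fisher information of a parametrized distribution, from probability
# gradients.  On a near-term device the gradients could come from parameter-shift
# estimates of the measured outcome probabilities.
import numpy as np

def probs(theta):                                    # toy stand-in for circuit output probs
    logits = np.array([theta[0], theta[1], theta[0] * theta[1], 0.0])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fisher(theta, eps=1e-5):
    p = probs(theta)
    grads = np.stack([(probs(theta + eps * np.eye(len(theta))[i]) -
                       probs(theta - eps * np.eye(len(theta))[i])) / (2 * eps)
                      for i in range(len(theta))])   # shape (n_params, n_outcomes)
    return (grads / p) @ grads.T                     # F_ij = sum_x d_i p_x d_j p_x / p_x

print(fisher(np.array([0.3, -0.7])))
```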


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 341
Author(s):  
Xiu-Zhe Luo ◽  
Jin-Guo Liu ◽  
Pan Zhang ◽  
Lei Wang

We introduce Yao, an extensible, efficient open-source framework for quantum algorithm design. Yao features generic and differentiable programming of quantum circuits. It achieves state-of-the-art performance in simulating small- to intermediate-sized quantum circuits that are relevant to near-term applications. We introduce the design principles and critical techniques behind Yao. These include the quantum block intermediate representation of quantum circuits, a built-in automatic differentiation engine optimized for reversible computing, and batched quantum registers with GPU acceleration. The extensibility and efficiency of Yao help boost innovation in quantum algorithm design.
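A toy numpy sketch (not Yao's Julia implementation) of adjoint-mode differentiation, the trick behind an automatic-differentiation engine "optimized for reversible computing": because every gate is unitary, intermediate states are un-computed on the backward pass instead of being stored, so gradients cost O(1) extra memory.

```python
# Adjoint differentiation of <0| U(theta)^dag O U(theta) |0> for a chain of RX gates.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, wire, n):                              # place a 1-qubit operator on `wire`
    full = np.array([[1.0 + 0j]])
    for w in range(n):
        full = np.kron(full, op if w == wire else I2)
    return full

def rx(theta, wire, n):                              # RX(theta) = cos(t/2) I - i sin(t/2) X
    g = embed(X, wire, n)
    return np.cos(theta / 2) * np.eye(2 ** n) - 1j * np.sin(theta / 2) * g

N, thetas, wires = 2, np.array([0.4, -1.1, 0.7]), [0, 1, 0]
obs = embed(np.diag([1.0, -1.0]).astype(complex), 0, N)   # observable Z on wire 0

# Forward pass: build the final state.
psi = np.zeros(2 ** N, dtype=complex)
psi[0] = 1.0
for t, w in zip(thetas, wires):
    psi = rx(t, w, N) @ psi

# Backward pass: un-compute psi gate by gate and push O|psi> back through the circuit.
lam = obs @ psi
grads = np.zeros(len(thetas))
for k in reversed(range(len(thetas))):
    u = rx(thetas[k], wires[k], N)
    psi = u.conj().T @ psi                           # un-compute instead of storing states
    dU = -0.5j * embed(X, wires[k], N) @ u           # dRX/dtheta = (-i/2) X RX(theta)
    grads[k] = 2 * np.real(np.vdot(lam, dU @ psi))
    lam = u.conj().T @ lam
print(grads)
```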


2022 ◽  
Vol 3 (1) ◽  
pp. 1-14
Author(s):  
Alexandru Paler ◽  
Robert Basmadjian

Quantum circuits are difficult to simulate, and their automated optimisation is complex as well. Significant optimisations have been achieved manually (pen and paper) and not by software. This is the first in-depth study on the cost of compiling and optimising large-scale quantum circuits with state-of-the-art quantum software. We propose a hierarchy of cost metrics covering the quantum software stack and use energy as the long-term cost of operating hardware. We quantify optimisation costs by estimating the energy consumed by a CPU performing the quantum circuit optimisation. We use QUANTIFY, a tool based on Google Cirq, to optimise bucket brigade QRAM and multiplication circuits having between 32 and 8,192 qubits. Although our classical optimisation methods have polynomial complexity, we observe that their energy cost grows extremely fast with the number of qubits. We profile the methods and software and provide evidence that there are high constant costs associated with the operations performed during optimisation. The costs are the result of dynamically typed programming languages and the generic data structures used in the background. We conclude that state-of-the-art quantum software frameworks have to massively improve their scalability to be practical for large circuits.
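A rough sketch of the energy-accounting approach: time an optimisation pass and convert wall-clock time to energy with an assumed average CPU power. The optimise() call is a placeholder, not QUANTIFY's or Cirq's API, and the wattage is an assumption; a real measurement would use a power meter or RAPL counters.

```python
# Estimate energy as (assumed CPU power) x (measured optimisation time).
import time

CPU_POWER_WATTS = 65.0                   # assumed average CPU package power

def optimise(circuit):                   # placeholder for a circuit-optimisation pass
    return sorted(circuit)               # stand-in workload

circuit = list(range(1_000_000))[::-1]   # stand-in for a large circuit description
start = time.perf_counter()
optimise(circuit)
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f} s  ~ {CPU_POWER_WATTS * elapsed:.1f} J at {CPU_POWER_WATTS} W")
```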

