A flexible high-performance simulator for verifying and benchmarking quantum circuits implemented on real hardware

2019, Vol 5 (1)
Author(s): Benjamin Villalonga, Sergio Boixo, Bron Nelson, Christopher Henze, Eleanor Rieffel, ...

Abstract: Here we present qFlex, a flexible tensor-network-based quantum circuit simulator. qFlex can compute both exact amplitudes, essential for the verification of quantum hardware, and low-fidelity amplitudes, which mimic sampling from Noisy Intermediate-Scale Quantum (NISQ) devices. In this work, we focus on random quantum circuits (RQCs) in the range of sizes expected for supremacy experiments. Simulations of fidelity f are performed at a fraction f of the cost of perfect-fidelity ones. We also present a technique to eliminate the overhead introduced by rejection sampling in most tensor network approaches. We benchmark the simulation of square lattices and of Google's Bristlecone QPU. Our analysis is supported by extensive simulations on the NASA HPC clusters Pleiades and Electra. For our most computationally demanding simulation, the two clusters combined reached a peak of 20 Peta Floating Point Operations per Second (PFLOPS) in single precision, i.e., 64% of their maximum achievable performance, which, in terms of sustained FLOPs and number of nodes utilized, represents the largest numerical computation ever run on NASA HPC clusters. Finally, we introduce a novel multithreaded, cache-efficient tensor index permutation algorithm of general application.
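
To make the tensor-network viewpoint concrete, here is a minimal NumPy sketch (a toy circuit of our own, not qFlex code): the amplitude $\langle x|C|0\cdots 0\rangle$ of a bitstring $x$ is a single full contraction of the network formed by the gate tensors and the input/output basis vectors.

```python
import numpy as np

# Gate tensors for a 2-qubit circuit: (H on each qubit) followed by CZ.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).reshape(2, 2, 2, 2)  # rank-4 gate tensor

zero = np.array([1.0, 0.0])                       # input |0> on each qubit
x0, x1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # output bitstring |01>

# Amplitude <01| CZ (H x H) |00> as one einsum contraction:
# a,b are input legs, i,j intermediate legs, k,l output legs.
amp = np.einsum('a,b,ia,jb,klij,k,l->', zero, zero, H, H, CZ, x0, x1)
print(amp)   # 0.5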

Quantum, 2021, Vol 5, pp. 559
Author(s): Yasunari Suzuki, Yoshiaki Kawase, Yuya Masumura, Yuria Hiraga, Masahiro Nakadai, ...

To explore the possibilities of near-term intermediate-scale quantum algorithms and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purposes. We show the main concepts of Qulacs, explain how to use its features via examples, describe numerical techniques to speed up simulation, and demonstrate its performance with numerical benchmarks.
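
For readers new to the library, a minimal usage example (assuming the standard Qulacs Python interface):

```python
from qulacs import QuantumState, QuantumCircuit, Observable

n = 3
state = QuantumState(n)       # dense statevector, initialized below
state.set_zero_state()        # |000>

circuit = QuantumCircuit(n)
circuit.add_H_gate(0)         # Hadamard on qubit 0
circuit.add_CNOT_gate(0, 1)   # entangle qubits 0 and 1
circuit.add_RX_gate(2, 0.5)   # X-rotation on qubit 2

circuit.update_quantum_state(state)   # apply the circuit in place

# expectation value of Z0 Z1 in the final state
obs = Observable(n)
obs.add_operator(1.0, "Z 0 Z 1")
print(obs.get_expectation_value(state))
```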


2021, Vol 20 (7)
Author(s): Ismail Ghodsollahee, Zohreh Davarzani, Mariam Zomorodi, Paweł Pławiak, Monireh Houshmand, ...

Abstract: As quantum computation grows, the number of qubits involved in a given quantum computer increases. Due to physical limitations on the number of qubits in a single quantum device, however, larger computations must be performed on a distributed system. In this paper, a new model of quantum computation based on the matrix representation of quantum circuits is proposed. Using this model, we then propose a novel approach for reducing the number of teleportations in a distributed quantum circuit. The proposed method consists of two phases: a pre-processing phase and an optimization phase. In the pre-processing phase, the quantum circuit is bi-partitioned with the Non-Dominated Sorting Genetic Algorithm (NSGA-III) so as to distribute it into two balanced parts with an equal number of qubits and a minimum number of global gates. In the optimization phase, two heuristics, Heuristic I and Heuristic II, are proposed to optimize the number of teleportations according to the partitioning obtained in the pre-processing phase. Finally, the proposed approach is evaluated on many benchmark quantum circuits. The results show an average improvement of 22.16% in teleportation cost compared to existing works in the literature.
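
As a concrete illustration of the pre-processing objective (a hypothetical toy representation, not the paper's matrix model): given a candidate bipartition, the two-qubit gates whose endpoints fall in different parts are the global gates that would incur teleportation cost, which is exactly what NSGA-III searches to minimize.

```python
# Circuit as a list of two-qubit gates, each given by its qubit pair.
gates = [(0, 1), (1, 2), (2, 5), (3, 4), (0, 5), (4, 5)]

def global_gate_count(gates, part_a):
    """Count gates whose endpoints lie in different partitions."""
    return sum((c in part_a) != (t in part_a) for c, t in gates)

# One balanced bipartition of 6 qubits; the genetic search ranges over these.
part_a = {0, 1, 2}
print(global_gate_count(gates, part_a))   # -> 2, from gates (2,5) and (0,5)
```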


2021, Vol 2021 (4)
Author(s): A. Ramesh Chandra, Jan de Boer, Mario Flory, Michal P. Heller, Sergio Hörtner, ...

Abstract: We propose that finite cutoff regions of holographic spacetimes represent quantum circuits that map between boundary states at different times and Wilsonian cutoffs, and that the complexity of those quantum circuits is given by the gravitational action. The optimal circuit minimizes the gravitational action. This is a generalization of both the "complexity equals volume" conjecture to unoptimized circuits, and of path integral optimization to finite cutoffs. Using tools from holographic $T\overline{T}$, we find that surfaces of constant scalar curvature play a special role in optimizing quantum circuits. We also find an interesting connection of our proposal to kinematic space, and discuss possible circuit representations and gate-counting interpretations of the gravitational action.
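
Schematically, and in our own notation rather than the paper's, the proposal identifies the complexity of the circuit mapping between the boundary states with the gravitational action of the finite-cutoff region $M$, the optimal circuit being the one that minimizes it:

```latex
% Schematic statement of the proposal (notation ours, not the paper's)
\mathcal{C}\big[\,|\psi(t_1)\rangle \rightarrow |\psi(t_2)\rangle\,\big]
    = I_{\mathrm{grav}}[M],
\qquad
\mathcal{C}_{\mathrm{opt}} = \min_{M}\, I_{\mathrm{grav}}[M].
```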


2021, Vol 2 (3)
Author(s): Thomas Ayral, François-Marie Le Régent, Zain Saleem, Yuri Alexeev, Martin Suchara

Abstract: Our recent work (Ayral et al., in Proceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 138–140, 2020, doi:10.1109/ISVLSI49217.2020.00034) showed the first implementation of the Quantum Divide and Compute (QDC) method, which breaks quantum circuits into smaller fragments with fewer qubits and shallower depth, accommodating the limited number of qubits and short coherence times of quantum processors. This article investigates the impact of different noise sources, namely readout error, gate error, and decoherence, on the success probability of the QDC procedure. We perform detailed noise modeling on the Atos Quantum Learning Machine, allowing us to understand tradeoffs and to formulate recommendations about which hardware noise sources should be preferentially optimized. We also describe in detail the noise models we used to reproduce experimental runs on IBM's Johannesburg processor. This article further includes a detailed derivation of the equations used in the QDC procedure to compute the output distribution of the original quantum circuit from the output distributions of its fragments. Finally, we analyze the computational complexity of the QDC method for the circuit under study via tensor-network considerations, and elaborate on the relation of the QDC method to tensor-network simulation methods.
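
The recombination step can be checked numerically on a toy example. The sketch below (ours, not the authors' code) cuts the single wire between H and CNOT in a two-qubit Bell circuit, expands the identity channel at the cut in the Pauli basis, $\rho = \tfrac{1}{2}\sum_{P} \mathrm{Tr}(P\rho)\,P$, and verifies that the recombined output distribution matches direct simulation.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Direct simulation of the full circuit: CNOT (H x I) |00>
psi = CNOT @ np.kron(H, I2) @ np.array([1.0, 0, 0, 0])
p_direct = np.abs(psi) ** 2

# Fragment 1 (upstream of the cut): the state H|0> at the cut wire.
v = H @ np.array([1.0, 0.0])
rho_cut = np.outer(v, v.conj())

# Fragment 2 (downstream): prepare phi on the cut wire, apply CNOT.
def downstream(phi):
    out = CNOT @ np.kron(phi, np.array([1.0, 0.0]))
    return np.abs(out) ** 2

# Recombine: expand each Pauli into its eigenstates; output probabilities
# are linear in the input density matrix, so the terms simply add up.
p_rec = np.zeros(4)
for P in (I2, X, Y, Z):
    coef = 0.5 * np.trace(P @ rho_cut).real
    evals, evecs = np.linalg.eigh(P)
    for lam, vec in zip(evals, evecs.T):
        p_rec += coef * lam * downstream(vec)

assert np.allclose(p_rec, p_direct)   # both give [0.5, 0, 0, 0.5]
```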


2021, pp. 2150360
Author(s): Wanghao Ren, Zhiming Li, Yiming Huang, Runqiu Guo, Lansheng Feng, ...

Quantum machine learning is expected to be one of the potential applications of quantum computing realizable in the near future, and finding such applications has become one of the hot topics in the quantum computing community. With the growth of digital image processing, researchers are trying to use quantum image processing in place of classical image processing to improve its capabilities. Inspired by previous studies on adversarial quantum circuit learning, we introduce a quantum generative adversarial framework for loading and learning a quantum image. In this paper, we extend quantum generative adversarial networks to the field of quantum image processing and show how to learn and load a classical image using quantum circuits. By removing quantum gates that produce no gradient changes, we reduce the number of basic quantum building blocks from 15 to 13. Our framework effectively generates pure states subject to bit-flip, bit-phase-flip, phase-flip, and depolarizing channel noise. We numerically simulate the loading and learning of classical images on the MNIST and CIFAR-10 datasets. In the field of quantum image processing, our framework can be used to learn a quantum image as a subroutine of other quantum circuits. Numerical simulation shows that our method still converges quickly under the influence of a variety of noise channels.
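
As a schematic of the generator side (a hypothetical RY/CZ ansatz of our own, not the paper's 13-block circuit): the output amplitudes of a shallow parameterized circuit are trained so that its measurement probabilities reproduce the normalized pixel intensities of an image patch.

```python
import numpy as np

n = 4  # 4 qubits -> 16 amplitudes -> one 4x4 image patch

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q):
    # expose qubit q as the leading axis, apply the gate, restore order
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=1)
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(state, a, b):
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    psi[tuple(idx)] *= -1        # phase flip on the |..1..1..> subspace
    return psi.reshape(-1)

def generator(thetas, layers=2):
    state = np.zeros(2 ** n)
    state[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(next(t)), q)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1)
    return state

rng = np.random.default_rng(0)
target = rng.random(16)
target /= np.linalg.norm(target)      # image patch as a unit amplitude vector

thetas = rng.random(2 * n) * np.pi    # 2 layers x 4 RY angles
probs = generator(thetas) ** 2        # sampling distribution of the ansatz
loss = np.sum((probs - target ** 2) ** 2)   # objective a trainer would minimize
```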


Quantum, 2021, Vol 5, pp. 592
Author(s): Piotr Czarnik, Andrew Arrasmith, Patrick J. Coles, Lukasz Cincio

Achieving near-term quantum advantage will require accurate estimation of quantum observables despite significant hardware noise. For this purpose, we propose a novel, scalable error-mitigation method that applies to gate-based quantum computers. The method generates training data $\{(X_i^{\text{noisy}}, X_i^{\text{exact}})\}$ via quantum circuits composed largely of Clifford gates, which can be efficiently simulated classically, where $X_i^{\text{noisy}}$ and $X_i^{\text{exact}}$ are noisy and noiseless observables, respectively. Fitting a linear ansatz to these data then allows for the prediction of noise-free observables for arbitrary circuits. We analyze the performance of our method as a function of the number of qubits, circuit depth, and number of non-Clifford gates. We obtain an order-of-magnitude error reduction for a ground-state energy problem on 16 qubits of an IBMQ quantum computer and on a 64-qubit noisy simulator.
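
The fitting step is simple enough to show directly. Below is a minimal sketch with made-up placeholder numbers standing in for the training data: observables of near-Clifford circuits measured on hardware versus simulated exactly, fit with the linear ansatz, then used to correct the target circuit's noisy estimate.

```python
import numpy as np

# Hypothetical training data: observables from near-Clifford circuits,
# estimated on noisy hardware (x_noisy) and simulated exactly (x_exact).
x_noisy = np.array([0.41, 0.28, 0.55, 0.33, 0.47])
x_exact = np.array([0.52, 0.36, 0.68, 0.43, 0.59])

# Fit the linear ansatz  X_exact ~ a * X_noisy + b  by least squares.
A = np.vstack([x_noisy, np.ones_like(x_noisy)]).T
(a, b), *_ = np.linalg.lstsq(A, x_exact, rcond=None)

# Predict the noise-free observable of the target (non-Clifford) circuit
# from its noisy hardware estimate.
x_target_noisy = 0.44
x_target_mitigated = a * x_target_noisy + b
```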


Quantum, 2021, Vol 5, pp. 410
Author(s): Johnnie Gray, Stefanos Kourtis

Tensor networks represent the state of the art in computational methods across many disciplines, including the classical simulation of quantum many-body systems and quantum circuits. Several applications of current interest give rise to tensor networks with irregular geometries. Finding the best possible contraction path for such networks is a central problem, with an exponential effect on computation time and memory footprint. In this work, we implement new randomized protocols that find very high-quality contraction paths for arbitrary and large tensor networks. We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips. We find that the paths obtained can be very close to optimal, and are often many orders of magnitude better than those of the most established approaches. As different underlying geometries suit different methods, we also introduce a hyper-optimization approach, in which both the method applied and its algorithmic parameters are tuned during the path finding. The increase in quality of the contraction schemes found has significant practical implications for the simulation of quantum many-body systems, and particularly for the benchmarking of new quantum chips. Concretely, we estimate a speed-up of over 10,000× compared to the original expectation for the classical simulation of the Sycamore 'supremacy' circuits.
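
For a flavour of randomized path finding (on a toy network of our own, not one of the benchmarked circuits), here is a sketch using the random-greedy optimizer exposed by the opt_einsum library; the hyper-optimization described in the paper tunes the optimizer choice and its parameters on top of searches like this one.

```python
import numpy as np
import opt_einsum as oe   # assumes opt_einsum is installed

# A small irregular network: a 5-index ring with a chord.
eq = 'ab,bc,cd,de,ea,bd->'
arrays = [np.random.rand(8, 8) for _ in range(6)]

# Randomized greedy search over contraction orders.
path, info = oe.contract_path(eq, *arrays, optimize='random-greedy')
print(info.opt_cost)      # estimated FLOP count of the path found

value = oe.contract(eq, *arrays, optimize=path)
```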


Author(s): Jingyuan Wang, Kai Feng, Junjie Wu

Deep network models, the majority built on neural networks, have proven to be a powerful framework for representing complex data in high-performance machine learning. In recent years, more and more studies have turned to non-neural-network approaches to build diverse deep structures; the Deep Stacking Network (DSN) model is one such approach, using stacked easy-to-learn blocks to build a deep network whose parameter training is parallelizable. In this paper, we propose a novel SVM-based Deep Stacking Network (SVM-DSN), which uses the DSN architecture to organize linear SVM classifiers for deep learning. A BP-like layer tuning scheme is also proposed to ensure holistic and local optimization of the stacked SVMs simultaneously. Our model brings desirable mathematical properties of SVMs, such as convex optimization, into the DSN framework. From a global view, SVM-DSN can iteratively extract data representations layer by layer, like a deep neural network but with parallelizable training; from a local view, each stacked SVM converges to its optimal solution and yields its support vectors, which, compared with neural networks, can lead to interesting improvements in anti-saturation behavior and interpretability. Experimental results on both image and text data sets demonstrate the excellent performance of SVM-DSN compared with competitive benchmark models.
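
A minimal sketch of the stacking idea (not the authors' SVM-DSN implementation, and omitting the BP-like layer tuning): each block is a linear SVM whose decision values are appended to the input features of the next block.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

blocks, features = [], X
for depth in range(3):                  # three stacked blocks
    svm = LinearSVC(dual=False).fit(features, y)
    blocks.append(svm)
    scores = svm.decision_function(features).reshape(len(X), -1)
    features = np.hstack([features, scores])   # grow input for next block

def stack_predict(blocks, X_new):
    """Replay the stack: feed each block's scores into the next."""
    feats = X_new
    for svm in blocks[:-1]:
        s = svm.decision_function(feats).reshape(len(X_new), -1)
        feats = np.hstack([feats, s])
    return blocks[-1].predict(feats)

accuracy = (stack_predict(blocks, X) == y).mean()
```

Because each block is a convex problem, the blocks can be trained independently once their inputs are fixed, which is the parallelizability argument made above.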

