Commuting quantum circuits with few outputs are unlikely to be classically simulatable

2016, Vol 16 (3&4), pp. 251-270
Author(s): Yasuhiro Takahashi, Seiichiro Tani, Takeshi Yamazaki, Kazuyuki Tanaka

We study the classical simulatability of commuting quantum circuits with n input qubits and O(log n) output qubits, where a quantum circuit is classically simulatable if its output probability distribution can be sampled up to an exponentially small additive error in classical polynomial time. Our main result is that there exists a commuting quantum circuit that is not classically simulatable unless the polynomial hierarchy collapses to the third level. This is the first formal evidence that a commuting quantum circuit is not classically simulatable even when the number of output qubits is O(log n). We then consider a generalized version of the circuit and clarify the condition under which it is classically simulatable. Lastly, using a proof similar to that of the main result, we provide evidence that a slightly extended Clifford circuit is not classically simulatable.
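To make the sampling task concrete, the sketch below brute-forces the output distribution of a tiny commuting (IQP-style) circuit in numpy and then weakly simulates a small number of output qubits by sampling their marginal. It is an illustration only, not a construction from the paper; the gate choices, qubit ordering, and function names are assumptions made for the example.

```python
import numpy as np

def commuting_circuit_probs(n, cz_pairs, t_qubits):
    """Brute-force output distribution of a small commuting (IQP-style) circuit:
    Hadamards on all n qubits, diagonal CZ and T gates, Hadamards again, then a
    computational-basis measurement. Exponential in n; illustration only."""
    dim = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)          # H tensored n times (qubit 0 = most significant bit)
    diag = np.ones(dim, dtype=complex)
    for x in range(dim):
        bits = [(x >> (n - 1 - q)) & 1 for q in range(n)]
        for a, b in cz_pairs:        # CZ contributes -1 when both qubits are 1
            if bits[a] and bits[b]:
                diag[x] *= -1
        for q in t_qubits:           # T contributes e^{i*pi/4} when the qubit is 1
            if bits[q]:
                diag[x] *= np.exp(1j * np.pi / 4)
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                   # start in |0...0>
    state = Hn @ (diag * (Hn @ state))
    return np.abs(state) ** 2

def sample_few_outputs(probs, n, k, rng=np.random.default_rng(0)):
    """Weak simulation of the first k output qubits: marginalize, then sample."""
    marginal = probs.reshape(2 ** k, 2 ** (n - k)).sum(axis=1)
    return rng.choice(2 ** k, p=marginal)

probs = commuting_circuit_probs(n=3, cz_pairs=[(0, 1), (1, 2)], t_qubits=[0, 2])
print(sample_few_outputs(probs, n=3, k=1))
```

The brute force is exponential in n; the point of the result above is precisely that no classical procedure is expected to achieve this sampling in polynomial time for the circuits it constructs.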

2014, Vol 14 (13&14), pp. 1149-1164
Author(s): Yasuhiro Takahashi, Takeshi Yamazaki, Kazuyuki Tanaka

We study the classical simulatability of constant-depth polynomial-size quantum circuits followed by only one single-qubit measurement, where the circuits consist of universal gates on at most two qubits and additional gates on an unbounded number of qubits. First, we consider unbounded Toffoli gates as additional gates and deal with weak simulation, i.e., sampling the output probability distribution. We show that there exists a constant-depth quantum circuit with only one unbounded Toffoli gate that is not weakly simulatable, unless BQP ⊆ PostBPP ∩ AM. Then, we consider unbounded fan-out gates as additional gates and deal with strong simulation, i.e., computing the output probability. We show that there exists a constant-depth quantum circuit with only two unbounded fan-out gates that is not strongly simulatable, unless P = PP. These results are in contrast to the fact that any constant-depth quantum circuit without additional gates on an unbounded number of qubits is strongly and weakly simulatable.
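As a concrete illustration of the unbounded fan-out gate and of the strong/weak distinction (illustrative only; the constant-depth hardness constructions are not reproduced here), the numpy sketch below builds a fan-out gate as a dense unitary, applies it to a |+⟩ control to produce a GHZ state, and contrasts computing one output probability (strong simulation) with drawing a sample (weak simulation). The function names and qubit ordering are assumptions made for the example.

```python
import numpy as np

def fanout_unitary(n):
    """Unbounded fan-out on n qubits: qubit 0 (most significant bit) is the control
    and its value is XORed into the other n-1 qubits. Dense toy construction."""
    dim = 2 ** n
    mask = (1 << (n - 1)) - 1                     # the n-1 target bits
    U = np.zeros((dim, dim))
    for x in range(dim):
        control = (x >> (n - 1)) & 1
        U[x ^ mask if control else x, x] = 1.0
    return U

def strong_and_weak(state, rng=np.random.default_rng(0)):
    """Strong simulation: compute Pr[qubit 0 = 1] exactly.
    Weak simulation: draw one sample from the full output distribution."""
    probs = np.abs(state) ** 2
    p_top_one = probs[len(probs) // 2:].sum()     # basis states whose top bit is 1
    sample = rng.choice(len(probs), p=probs)
    return p_top_one, sample

n = 4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2 ** n)
state[0] = 1.0
state = np.kron(H, np.eye(2 ** (n - 1))) @ state  # put the control qubit in |+>
state = fanout_unitary(n) @ state                 # fan-out yields a GHZ state
print(strong_and_weak(state))                     # (0.5, sample in {0, 15})
```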


Quantum, 2017, Vol 1, pp. 8
Author(s): Michael J. Bremner, Ashley Montanaro, Dan J. Shepherd

The class of commuting quantum circuits known as IQP (instantaneous quantum polynomial-time) has been shown to be hard to simulate classically, assuming certain complexity-theoretic conjectures. Here we study the power of IQP circuits in the presence of physically motivated constraints. First, we show that there is a family of sparse IQP circuits that can be implemented on a square lattice of n qubits in depth O(sqrt(n) log n), and which is likely hard to simulate classically. Next, we show that, if an arbitrarily small constant amount of noise is applied to each qubit at the end of any IQP circuit whose output probability distribution is sufficiently anticoncentrated, there is a polynomial-time classical algorithm that simulates sampling from the resulting distribution, up to constant accuracy in total variation distance. However, we show that purely classical error-correction techniques can be used to design IQP circuits which remain hard to simulate classically, even in the presence of arbitrary amounts of noise of this form. These results highlight the challenges faced by experiments that aim to demonstrate quantum supremacy over classical computation, and how these challenges can be overcome.
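The effect of end-of-circuit noise on sampled outputs can be mimicked with a very simple classical model (an assumption made for illustration, not the exact per-qubit channel analysed above): flip each measured bit independently with a small probability.

```python
import numpy as np

def add_output_noise(bitstring, eps, rng=np.random.default_rng(0)):
    """Toy stand-in for a small amount of per-qubit noise applied just before
    measurement: each measured bit is flipped independently with probability eps."""
    flips = rng.random(len(bitstring)) < eps
    return [b ^ int(f) for b, f in zip(bitstring, flips)]

# An ideal IQP sample degraded by 5% per-bit noise
print(add_output_noise([1, 0, 1, 1, 0, 0], eps=0.05))
```

The positive result above says that, for sufficiently anticoncentrated IQP distributions, output corrupted in this way can be sampled classically in polynomial time; the negative result shows that purely classical error-correction techniques can be built into the IQP circuit so that the sampling task stays hard even under such noise.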


Author(s): Abel Molina, John Watrous

Yao's 1995 publication ‘Quantum circuit complexity’ in Proceedings of the 34th Annual IEEE Symposium on Foundations of Computer Science, pp. 352–361, proved that quantum Turing machines and quantum circuits are polynomially equivalent computational models: t ≥ n steps of a quantum Turing machine running on an input of length n can be simulated by a uniformly generated family of quantum circuits with size quadratic in t, and a polynomial-time uniformly generated family of quantum circuits can be simulated by a quantum Turing machine running in polynomial time. We revisit the simulation of quantum Turing machines with uniformly generated quantum circuits, which is the more challenging of the two simulation tasks, and present a variation on the simulation method employed by Yao together with an analysis of it. This analysis reveals that the simulation of quantum Turing machines can be performed by quantum circuits having depth linear in t, rather than quadratic depth, and can be extended to variants of quantum Turing machines, such as ones having multi-dimensional tapes. Our analysis is based on an extension of a method described by Arrighi, Nesme and Werner in 2011 in Journal of Computer and System Sciences 77, 372–378 (doi:10.1016/j.jcss.2010.05.004), which allows for the localization of causal unitary evolutions.


Quantum, 2018, Vol 2, pp. 106
Author(s): Tomoyuki Morimae, Yuki Takeuchi, Harumichi Nishimura

We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched between two layers of Hadamard gates, and therefore it is in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be classically efficiently sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique is different from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and therefore the technique itself might be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call probability distribution distinguishability with maximum norm (PDD-Max): deciding whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to some types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with quantum polynomial-time Merlin and classical probabilistic polynomial-time Arthur.
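A minimal brute-force sketch of an HC1Q-style circuit is given below, assuming (purely for simplicity of illustration) Hadamards on every qubit in both layers and representing the classical reversible circuit as a permutation of basis states; the precise one-qubit variant of the model and all function names here are not taken from the source.

```python
import numpy as np

def hc1q_style_distribution(perm, n):
    """Toy HC1Q-style circuit: H on every qubit, a classical reversible circuit
    given as a permutation `perm` of {0,...,2^n - 1}, H on every qubit again, then
    a computational-basis measurement. Brute force, exponential in n."""
    dim = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    P = np.zeros((dim, dim))
    for x in range(dim):
        P[perm[x], x] = 1.0          # |x> -> |perm(x)>
    state = np.zeros(dim)
    state[0] = 1.0
    state = Hn @ (P @ (Hn @ state))
    return np.abs(state) ** 2

# A 2-qubit CNOT (qubit 0 controls qubit 1) as the classical reversible circuit
print(hc1q_style_distribution(perm=[0, 1, 3, 2], n=2))
```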


2010, Vol 08 (05), pp. 807-819
Author(s): Yu Tanaka

To understand quantum gate array complexity, we define a problem named exact non-identity check: the decision problem of determining whether a given classical description of a quantum circuit is strictly equivalent to the identity or not. We show that the computational complexity of this problem is non-deterministic quantum polynomial-time (NQP)-complete. As corollaries, exact non-equivalence check of two given classical descriptions of quantum circuits is also NQP-complete, and minimizing the number of quantum gates for a given quantum circuit without changing the implemented unitary operation is NQP-hard.
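The decision problem itself is easy to state in code for circuits small enough to multiply out explicitly; the toy brute force below (unrelated to the NQP-completeness proof, with a gate set and names chosen only for the example) checks whether a circuit's unitary equals the identity up to a global phase.

```python
import numpy as np

def is_identity_up_to_phase(U, tol=1e-9):
    """Decide whether the unitary U of a small, explicitly constructed circuit
    equals the identity up to a global phase."""
    phase = U[0, 0]
    if abs(abs(phase) - 1.0) > tol:
        return False
    return np.allclose(U, phase * np.eye(U.shape[0]), atol=tol)

# Example: S*S = Z and Z*Z = I, so the three-gate circuit Z, S, S is the identity
S = np.diag([1.0, 1.0j])
Z = np.diag([1.0, -1.0])
U = Z @ S @ S
print(is_identity_up_to_phase(U))   # True
```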


2021, Vol 7 (1)
Author(s): Michael A. Perlin, Zain H. Saleem, Martin Suchara, James C. Osborn

We introduce maximum-likelihood fragment tomography (MLFT) as an improved circuit cutting technique for running clustered quantum circuits on quantum devices with a limited number of qubits. In addition to minimizing the classical computing overhead of circuit cutting methods, MLFT finds the most likely probability distribution for the output of a quantum circuit, given the measurement data obtained from the circuit’s fragments. We demonstrate the benefits of MLFT for accurately estimating the output of a fragmented quantum circuit with numerical experiments on random unitary circuits. Finally, we show that circuit cutting can estimate the output of a clustered circuit with higher fidelity than full circuit execution, thereby motivating the use of circuit cutting as a standard tool for running clustered circuits on quantum hardware.
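One generic ingredient of maximum-likelihood-style post-processing in circuit cutting is mapping a reconstructed quasi-distribution (which may contain small negative entries) to the nearest valid probability distribution. The sketch below does this with the standard sort-based Euclidean projection onto the probability simplex; it is a hedged illustration of that single step, not the MLFT algorithm itself, and the function name is an assumption.

```python
import numpy as np

def project_onto_simplex(q):
    """Euclidean projection of a real vector q onto the probability simplex,
    using the standard sort-and-threshold algorithm."""
    q = np.asarray(q, dtype=float)
    u = np.sort(q)[::-1]                       # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(q) + 1)
    rho = np.nonzero(u - (css - 1) / ks > 0)[0][-1]
    tau = (css[rho] - 1) / (rho + 1)
    return np.maximum(q - tau, 0.0)

# A reconstructed quasi-distribution with a small negative entry
print(project_onto_simplex([0.55, 0.5, -0.05]))   # -> [0.525, 0.475, 0.0]
```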


Quantum, 2021, Vol 5, pp. 465
Author(s): Leonardo Novo, Juani Bermejo-Vega, Raúl García-Patrón

The problem of sampling outputs of quantum circuits has been proposed as a candidate for demonstrating a quantum computational advantage (sometimes referred to as quantum "supremacy"). In this work, we investigate whether quantum advantage demonstrations can be achieved for more physically motivated sampling problems, related to measurements of physical observables. We focus on the problem of sampling the outcomes of an energy measurement, performed on a simple-to-prepare product quantum state – a problem we refer to as energy sampling. For different regimes of measurement resolution and measurement errors, we provide complexity-theoretic arguments showing that the existence of efficient classical algorithms for energy sampling is unlikely. In particular, we describe a family of Hamiltonians with nearest-neighbour interactions on a 2D lattice that can be efficiently measured with high resolution using a quantum circuit of commuting gates (an IQP circuit), whereas an efficient classical simulation of this process should be impossible. In this high-resolution regime, which can only be achieved for Hamiltonians that can be exponentially fast-forwarded, it is possible to use current theoretical tools tying quantum advantage statements to a polynomial-hierarchy collapse, whereas for lower-resolution measurements such arguments fail. Nevertheless, we show that efficient classical algorithms for low-resolution energy sampling can still be ruled out if we assume that quantum computers are strictly more powerful than classical ones. We believe our work brings a new perspective to the problem of demonstrating quantum advantage and leads to interesting new questions in Hamiltonian complexity.
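For intuition about what "energy sampling" means, the toy below samples energy-measurement outcomes for a deliberately easy instance: a nearest-neighbour Ising Hamiltonian that is diagonal in the computational basis, measured on the product state |+⟩^n. Because the Hamiltonian is diagonal, the ideal measurement reduces to drawing a uniform bit string and evaluating its classical energy; the hard instances discussed above involve Hamiltonians measured via IQP circuits, not this classically trivial case. The Hamiltonian choice and function name are assumptions made for the example.

```python
import numpy as np

def sample_ising_energies(n, shots=5, rng=np.random.default_rng(0)):
    """Energy sampling for H = sum_i Z_i Z_{i+1} on a line of n qubits, measured on
    the state |+>^n. Since H is diagonal, sample a uniform bit string and return
    its classical energy (the corresponding eigenvalue of H)."""
    energies = []
    for _ in range(shots):
        bits = rng.integers(0, 2, size=n)
        spins = 1 - 2 * bits                    # bit 0 -> spin +1, bit 1 -> spin -1
        energies.append(int(np.sum(spins[:-1] * spins[1:])))
    return energies

print(sample_ising_energies(n=6))
```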


Quantum, 2020, Vol 4, pp. 264
Author(s): Alexander M. Dalzell, Aram W. Harrow, Dax Enshan Koh, Rolando L. La Placa

Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P≠NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n×n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c∈{a,b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a=1/2, b=0.999 and a′=1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
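The counting problem behind poly3-NSETH(a) can be stated very concretely; the brute force below counts the zeros of a degree-3 polynomial over F_2 in 2^n time, which is exactly the kind of exhaustive scaling the conjecture asserts cannot be beaten by more than a constant factor in the exponent. The encoding of the polynomial and the function name are choices made for this example.

```python
from itertools import product

def count_zeros_deg3_gf2(monomials, n):
    """Brute-force count of zeros of a degree-3 polynomial over F_2 in n variables.
    `monomials` is a list of tuples of variable indices (each of size at most 3);
    the polynomial is the XOR (sum mod 2) of the corresponding products."""
    zeros = 0
    for assignment in product((0, 1), repeat=n):
        value = 0
        for mono in monomials:
            term = 1
            for i in mono:
                term &= assignment[i]
            value ^= term
        zeros += (value == 0)
    return zeros

# Example: f(x0, x1, x2) = x0*x1*x2 + x0 over F_2 has 5 zeros among the 8 assignments
print(count_zeros_deg3_gf2([(0, 1, 2), (0,)], n=3))
```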


2021, Vol 20 (7)
Author(s): Ismail Ghodsollahee, Zohreh Davarzani, Mariam Zomorodi, Paweł Pławiak, Monireh Houshmand, ...

As quantum computation grows, the number of qubits involved in a given quantum computer increases. But due to physical limitations on the number of qubits of a single quantum device, the computation should be performed in a distributed system. In this paper, a new model of quantum computation based on the matrix representation of quantum circuits is proposed. Then, using this model, we propose a novel approach for reducing the number of teleportations in a distributed quantum circuit. The proposed method consists of two phases: the pre-processing phase and the optimization phase. The pre-processing phase bi-partitions the quantum circuit with the Non-Dominated Sorting Genetic Algorithm (NSGA-III) so as to distribute it into two balanced parts with an equal number of qubits and a minimum number of global gates. In the optimization phase, two heuristics, named Heuristic I and Heuristic II, are proposed to optimize the number of teleportations according to the partitioning obtained from the pre-processing phase. Finally, the proposed approach is evaluated on many benchmark quantum circuits. These evaluations show an average improvement of 22.16% in teleportation cost compared to existing works in the literature.
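The quantity minimized in the pre-processing phase can be illustrated with a small helper that counts "global" gates for a candidate bipartition of the qubits: a two-qubit gate is global when its qubits lie in different parts, so it would require a teleportation (or other remote interaction) in a two-node distributed execution. This is a toy cost function of the kind a partitioning heuristic would score, not the paper's exact objective, and the names used are assumptions.

```python
def count_global_gates(two_qubit_gates, part_a):
    """Count two-qubit gates that cross a bipartition.
    two_qubit_gates: list of (q1, q2) qubit-index pairs.
    part_a: set of qubit indices assigned to the first partition."""
    return sum((q1 in part_a) != (q2 in part_a) for q1, q2 in two_qubit_gates)

# Example: a 4-qubit circuit split into {0, 1} and {2, 3}
gates = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(count_global_gates(gates, part_a={0, 1}))   # 2 global gates
```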


2021, Vol 2021 (4)
Author(s): A. Ramesh Chandra, Jan de Boer, Mario Flory, Michal P. Heller, Sergio Hörtner, ...

We propose that finite cutoff regions of holographic spacetimes represent quantum circuits that map between boundary states at different times and Wilsonian cutoffs, and that the complexity of those quantum circuits is given by the gravitational action. The optimal circuit minimizes the gravitational action. This is a generalization of both the “complexity equals volume” conjecture to unoptimized circuits, and path integral optimization to finite cutoffs. Using tools from holographic $T\overline{T}$, we find that surfaces of constant scalar curvature play a special role in optimizing quantum circuits. We also find an interesting connection of our proposal to kinematic space, and discuss possible circuit representations and gate counting interpretations of the gravitational action.

