Quantum advantage of unitary Clifford circuits with magic state inputs

Author(s):  
Mithuna Yoganathan ◽  
Richard Jozsa ◽  
Sergii Strelchuk

We study the computational power of unitary Clifford circuits with solely magic state inputs (CM circuits), supplemented by classical efficient computation. We show that CM circuits are hard to classically simulate up to multiplicative error (assuming polynomial hierarchy non-collapse), and also up to additive error under plausible average-case hardness conjectures. Unlike other such known classes, a broad variety of possible conjectures apply. Along the way, we give an extension of the Gottesman–Knill theorem that applies to universal computation, showing that for Clifford circuits with joint stabilizer and non-stabilizer inputs, the stabilizer part can be eliminated in favour of classical simulation, leaving a Clifford circuit on only the non-stabilizer part. Finally, we discuss implementational advantages of CM circuits.
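The Gottesman–Knill theorem underlying this line of work says Clifford circuits on stabilizer inputs can be tracked classically. As a minimal illustrative sketch (not the authors' construction), the following tracks stabilizer generators as signed Paulis in binary (x, z) form, using the standard Aaronson–Gottesman conjugation updates for H, S and CNOT:

```python
# Minimal stabilizer-tracking sketch: each generator is a signed Pauli
# stored as binary vectors (x, z) plus a sign bit, updated by
# conjugation rules for the Clifford gates H, S and CNOT.

class Pauli:
    def __init__(self, x, z, sign=0):
        self.x, self.z, self.sign = list(x), list(z), sign  # sign: 0 -> +, 1 -> -

    def h(self, a):  # H: X <-> Z, Y picks up a minus sign
        self.sign ^= self.x[a] & self.z[a]
        self.x[a], self.z[a] = self.z[a], self.x[a]

    def s(self, a):  # S: X -> Y, Y -> -X, Z -> Z
        self.sign ^= self.x[a] & self.z[a]
        self.z[a] ^= self.x[a]

    def cnot(self, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        self.sign ^= self.x[c] & self.z[t] & (self.x[t] ^ self.z[c] ^ 1)
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def __str__(self):
        label = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        body = ''.join(label[(xa, za)] for xa, za in zip(self.x, self.z))
        return ('-' if self.sign else '+') + body

# |00> is stabilized by <+ZI, +IZ>; conjugate through H(0); CNOT(0,1)
gens = [Pauli([0, 0], [1, 0]), Pauli([0, 0], [0, 1])]
for g in gens:
    g.h(0)
    g.cnot(0, 1)
print([str(g) for g in gens])  # Bell-state stabilizers: ['+XX', '+ZZ']
```

Each update costs O(1) per qubit per gate, which is the source of efficient classical simulability; it is the non-stabilizer (magic state) inputs that break this bookkeeping and, per the abstract, carry the quantum advantage.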

2017 ◽  
Vol 17 (3&4) ◽  
pp. 262-282
Author(s):  
Dax E. Koh

Extended Clifford circuits straddle the boundary between classical and quantum computational power. Whether such circuits are efficiently classically simulable seems to depend delicately on the ingredients of the circuits. While some combinations of ingredients lead to efficiently classically simulable circuits, other combinations, which might just be slightly different, lead to circuits which are likely not. We extend the results of Jozsa and Van den Nest [Quant. Info. Comput. 14, 633 (2014)] by studying two further extensions of Clifford circuits. First, we consider how the classical simulation complexity changes when we allow for more general measurements. Second, we investigate different notions of what it means to ‘classically simulate’ a quantum circuit. These further extensions give us 24 new combinations of ingredients compared to Jozsa and Van den Nest, and we give a complete classification of their classical simulation complexities. Our results provide more examples where seemingly modest changes to the ingredients of Clifford circuits lead to “large” changes in the classical simulation complexities of the circuits, and also include new examples of extended Clifford circuits that exhibit “quantum supremacy”, in the sense that it is not possible to efficiently classically sample from the output distributions of such circuits, unless the polynomial hierarchy collapses.


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 264 ◽  
Author(s):  
Alexander M. Dalzell ◽  
Aram W. Harrow ◽  
Dax Enshan Koh ◽  
Rolando L. La Placa

Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P≠NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n×n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^(cn) time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999 and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
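To make the counting problem behind poly3-NSETH(a) concrete, here is an illustrative brute-force baseline (not from the paper): counting the zeros of a degree-3 polynomial over F_2 by enumerating all 2^n assignments. The conjecture asserts that no non-deterministic algorithm improves substantially on this exponential scaling.

```python
# Brute-force zero counting for a polynomial over F_2, represented as an
# XOR-sum of monomials (each a tuple of at most 3 distinct variable
# indices). Runtime is O(2^n * |monomials|) -- the baseline that
# poly3-NSETH(a) conjectures cannot be beaten by more than a constant
# factor in the exponent.
from itertools import product

def count_zeros(monomials, n):
    zeros = 0
    for assignment in product((0, 1), repeat=n):
        value = 0
        for mono in monomials:
            term = 1
            for i in mono:
                term &= assignment[i]
            value ^= term
        zeros += value == 0
    return zeros

# f(x0, x1, x2) = x0*x1*x2 + x0 + x2 over F_2
print(count_zeros([(0, 1, 2), (0,), (2,)], 3))  # 3 of the 8 assignments are zeros
```

The gap between the number of zeros and non-zeros of such polynomials is what connects this counting problem to IQP output probabilities in the paper's argument.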


Author(s):  
Matthias Wölfel

The way we store, distribute, and access textual information has undergone a dramatic change, starting with the introduction of movable type around the 1450s. The way we present and perceive written information, however, has not changed much since then. But why is that? Technology has been a key driver of what is now called digital media, and it provides a broad variety of possibilities for presenting written information. To this day, these possibilities remain nearly untouched, and the current textual representation is taken for granted as unalterable. In this article, the authors argue that, for real progress and innovation, people have to rethink text and accept textual representations in digital media as an independent and alterable medium. The authors summarize different approaches to augmenting text in order to foster discussion and drive further developments.


2014 ◽  
Vol 72 (1) ◽  
pp. 130-136 ◽  
Author(s):  
Saang-Yoon Hyun ◽  
Mark N. Maunder ◽  
Brian J. Rothschild

Abstract Many fish stock assessments use a survey index and assume a stochastic error in the index on which a likelihood function of associated parameters is built and optimized for the parameter estimation. The purpose of this paper is to evaluate the assumption that the standard deviation for the difference in the log-transformed index is approximately equal to the coefficient of variation of the index, and also to examine the homo- and heteroscedasticity of the errors. The traditional practice is to assume a common variance of the index errors over time for estimation convenience. However, if additional information is available about year-to-year variability in the errors, such as year-to-year coefficient of variation, then we suggest that the heteroscedasticity assumption should be considered. We examined five methods with the assumption of a multiplicative error in the survey index and two methods with that of an additive error in the index: M1, homoscedasticity in the multiplicative error model; M2, heteroscedasticity in the multiplicative error model; M3, M2 with approximate weighting and an additional parameter for scaling variance; M4–M5, pragmatic practices; M6, homoscedasticity in the additive error model; M7, heteroscedasticity in the additive error model. M1–M2 and M6–M7 are strictly based on statistical theories, whereas M3–M5 are not. Heteroscedasticity methods M2, M3, and M7 consistently outperformed the other methods. However, we select M2 as the best method. M3 requires one more parameter than M2. M7 has problems arising from the use of the raw scale as opposed to the logarithm transformation. Furthermore, the fitted survey index in M7 can be negative although its domain is positive.
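The approximation this paper evaluates can be checked directly. Under an assumed lognormal (multiplicative) error, the standard deviation of the log-transformed index is sd(log I) = sqrt(ln(1 + CV^2)), which is close to the coefficient of variation CV when CV is small. A quick numerical sketch (illustrative, not the authors' code):

```python
# sd(log I) vs CV for a lognormal index: sd(log I) = sqrt(ln(1 + CV^2)).
# The relative gap between the two grows as CV grows, which is why the
# approximation sd(log I) ~= CV is only safe for small CV.
import math

for cv in (0.1, 0.3, 0.5, 1.0):
    sd_log = math.sqrt(math.log(1.0 + cv * cv))
    gap = abs(sd_log - cv) / cv
    print(f"CV = {cv:.1f}  sd(log I) = {sd_log:.4f}  relative gap = {gap:.3f}")
```

At CV = 0.1 the two quantities agree to within about 0.25%, while at CV = 1.0 they differ by roughly 17%, illustrating when the equality assumed in the traditional practice breaks down.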


2021 ◽  
Vol 43 (suppl 1) ◽  
Author(s):  
Daniel Jost Brod

Recent years have seen a flurry of activity in the fields of quantum computing and quantum complexity theory, which aim to understand the computational capabilities of quantum systems by applying the toolbox of computational complexity theory. This paper explores the conceptually rich and technologically useful connection between the dynamics of free quantum particles and complexity theory. I review results on the computational power of two simple quantum systems, built out of noninteracting bosons (linear optics) or noninteracting fermions. These rudimentary quantum computers display radically different capabilities—while free fermions are easy to simulate on a classical computer, and therefore devoid of nontrivial computational power, a free-boson computer can perform tasks expected to be classically intractable. To build the argument for these results, I introduce concepts from computational complexity theory. I describe some complexity classes, starting with P and NP and building up to the less common #P and polynomial hierarchy, and the relations between them. I identify how probabilities in free-bosonic and free-fermionic systems fit within this classification, which then underpins their difference in computational power. This paper is aimed at graduate or advanced undergraduate students with a Physics background, hopefully serving as a soft introduction to this exciting and highly evolving field.
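The computational gap described above comes down to determinants versus permanents: free-fermion amplitudes reduce to determinants (computable in polynomial time), while free-boson amplitudes reduce to permanents, which are #P-hard in general. As a small illustrative sketch (not from the paper), Ryser's inclusion-exclusion formula is the classic exponential-time algorithm for the permanent:

```python
# Ryser's formula: perm(M) = (-1)^n * sum over nonempty column subsets S
# of (-1)^|S| * prod_i sum_{j in S} M[i][j].  Runtime O(2^n * n^2) --
# exponential, in contrast with the determinant's polynomial cost.
from itertools import combinations

def permanent(M):
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

M = [[1, 2], [3, 4]]
print(permanent(M))                              # 1*4 + 2*3 = 10
print(M[0][0] * M[1][1] - M[0][1] * M[1][0])     # determinant: 1*4 - 2*3 = -2
```

The two quantities differ only in the sign pattern over permutations, yet that sign is exactly what makes the determinant efficiently computable and the permanent (conjecturally) not, mirroring the fermion/boson divide.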


2020 ◽  
Vol 31 (01) ◽  
pp. 117-132
Author(s):  
Andrei Păun ◽  
Florin-Daniel Bîlbîe

We investigate spiking neural P systems with communication on request (SNQ P systems), devices in the area of neural-like P systems that abstract the way neurons work and process information. We discuss SNQ P systems under the rule application strategy defined by Linqiang Pan and collaborators, and we improve their universality result for such systems, which used two types of spikes. In the present work, we prove that a single type of spike suffices for these devices to reach the computational power of Turing machines, bringing such a device closer to implementation. The result holds both under maximally parallel application of the rules and under maximum-sequentiality application of the rules.


Author(s):  
Richard Jozsa ◽  
Akimasa Miyake

Let G(A, B) denote the two-qubit gate that acts as the one-qubit SU(2) gates A and B in the even and odd parity subspaces, respectively, of two qubits. Using a Clifford algebra formalism, we show that arbitrary uniform families of circuits of these gates, restricted to act only on nearest neighbour (n.n.) qubit lines, can be classically efficiently simulated. This reproduces a result originally proved by Valiant using his matchgate formalism, and subsequently related by others to free fermionic physics. We further show that if the n.n. condition is slightly relaxed, to allow the same gates to act only on n.n. and next n.n. qubit lines, then the resulting circuits can efficiently perform universal quantum computation. From this point of view, the gap between efficient classical and quantum computational power is bridged by a very modest use of a seemingly innocuous resource (qubit swapping). We also extend the simulation result above in various ways. In particular, by exploiting properties of Clifford operations in conjunction with the Jordan–Wigner representation of a Clifford algebra, we show how one may generalize the simulation result above to provide further classes of classically efficiently simulatable quantum circuits, which we call Gaussian quantum circuits.
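The gate G(A, B) defined above is easy to write down explicitly. A sketch under our own basis-ordering assumption (|00>, |01>, |10>, |11>): A is embedded in the even-parity span {|00>, |11>} and B in the odd-parity span {|01>, |10>}. Valiant's matchgate condition for classical simulability requires det A = det B.

```python
# Explicit 4x4 matrix for G(A, B): A acts on (|00>, |11>), B on
# (|01>, |10>); all cross-parity entries are zero.

def G(A, B):
    return [
        [A[0][0], 0,       0,       A[0][1]],
        [0,       B[0][0], B[0][1], 0      ],
        [0,       B[1][0], B[1][1], 0      ],
        [A[1][0], 0,       0,       A[1][1]],
    ]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
# G(I, X) swaps |01> <-> |10> while fixing |00> and |11>: the SWAP gate
print(G(I2, X))
```

Note that det I = 1 while det X = -1, so SWAP violates the matchgate condition det A = det B. This is consistent with the abstract's point that qubit swapping is precisely the innocuous-looking resource bridging the classical/quantum gap.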


2009 ◽  
Vol 147-149 ◽  
pp. 67-73
Author(s):  
Andrzej Grono ◽  
Mariusz Dąbkowski ◽  
Piotr Niklas ◽  
Grzegorz Redlarski

Robotics is currently one of the most important technologies and has been developing very fast in recent years. It is interdisciplinary, combining mechanics, automation, electrical engineering, and computer science. This paper presents the navigation problems that must be solved by anyone who wants to build such educational mobile robots [1, 3, 5, 7]. To this end, the mechanical structure, communication module, digital sonar, and main board of an autonomous mobile robot for laboratory tasks are described [2, 4]. The operating principle is then given and some of the tests performed are presented, followed by possibilities for further development of the robot. Moreover, we describe a way to satisfy the need for large computational power on the typically small platforms of mobile robots, and which kinds of power supply are most advantageous for a robot realizing specific functions. Conclusions are drawn at the end of the paper. Extensive laboratory tests of the robot presented in this paper fully confirm all guidelines. Although the device is very simple and very cheap, it successfully meets its teaching tasks and enables students to engage with the fascinating subject of mobile robots. This subject can be extended further during the realization of other programs.


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 318 ◽  
Author(s):  
Kyungjoo Noh ◽  
Liang Jiang ◽  
Bill Fefferman

Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (or 1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. In particular, we simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system by using a metric we call MPO entanglement entropy. The latter metric is chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about an exponential growth of the cost of classical MPO simulation of 1D noisy systems. Specifically, we show that above the characteristic system size, there is an optimal circuit depth, independent of the system size, where the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis to get the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above a certain characteristic system size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically not feasible even with state-of-the-art supercomputers.
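The MPO entanglement entropy used as the cost metric above can be illustrated with a small sketch (our own definition following the description, not the authors' code): across a bipartition of the operator, it is the Shannon entropy of the normalized squared singular values, and the bond dimension a classical MPO simulation needs grows roughly like exp(S).

```python
# Entanglement entropy across an MPO bond from its singular values:
# normalize the squared singular values to a probability distribution
# p_k and take the Shannon entropy (in bits here).
import math

def mpo_entanglement_entropy(singular_values):
    norm = sum(s * s for s in singular_values)
    ps = [s * s / norm for s in singular_values]
    return -sum(p * math.log2(p) for p in ps if p > 0)

# a flat spectrum across a bond of dimension 4 gives the maximal 2 bits
print(mpo_entanglement_entropy([0.5, 0.5, 0.5, 0.5]))  # 2.0
```

The paper's key finding in these terms: with gate noise, this entropy saturates at a constant set by the error rate rather than growing with system size, so the required bond dimension, and hence the classical simulation cost, stops growing exponentially in the number of qubits.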

