Stabilizer extent is not multiplicative

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 400
Author(s):  
Arne Heimendahl ◽  
Felipe Montealegre-Mora ◽  
Frank Vallentin ◽  
David Gross

The Gottesman-Knill theorem states that a Clifford circuit acting on stabilizer states can be simulated efficiently on a classical computer. Recently, this result has been generalized to cover inputs that are close to a coherent superposition of logarithmically many stabilizer states. The runtime of the classical simulation is governed by the stabilizer extent, which roughly measures how many stabilizer states are needed to approximate the state. An important open problem is to decide whether the extent is multiplicative under tensor products. An affirmative answer would yield an efficient algorithm for computing the extent of product inputs, while a negative result implies the existence of more efficient classical algorithms for simulating large-scale quantum circuits. Here, we answer this question in the negative. Our result follows from very general properties of the set of stabilizer states, such as having a size that scales subexponentially in the dimension, and can thus be readily adapted to similar constructions for other resource theories.
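For orientation, the stabilizer extent is usually defined as follows (a standard formulation from the magic-state simulation literature; the notation here is an assumption, not taken from this abstract):

```latex
\xi(\psi) \;=\; \min\Big\{ \lVert c \rVert_1^2 \;:\; |\psi\rangle = \sum_j c_j |\phi_j\rangle,\ \ |\phi_j\rangle \text{ stabilizer states} \Big\}
```

Combining optimal decompositions of the factors always gives $\xi(\psi \otimes \varphi) \le \xi(\psi)\,\xi(\varphi)$; multiplicativity asks whether equality holds, and the negative answer means product states can admit strictly cheaper joint decompositions.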

2010 ◽  
Vol 10 (1&2) ◽  
pp. 97-108
Author(s):  
Z.-F. Ji ◽  
J.-X. Chen ◽  
Z.-H. Wei ◽  
M.-S. Ying

The LU-LC conjecture is an important open problem concerning the structure of entanglement of states described in the stabilizer formalism. It states that any two stabilizer states that are local-unitary equivalent are also local-Clifford equivalent. If this conjecture were true, the local equivalence of stabilizer states would be extremely easy to characterize. Unfortunately, however, building on the recent progress made by Gross and Van den Nest, we find that the conjecture is false.
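In symbols, the conjecture (as usually stated; the notation is assumed, not taken from this abstract) reads: for $n$-qubit stabilizer states $|\psi\rangle$ and $|\varphi\rangle$,

```latex
\exists\, U_1,\dots,U_n \in \mathrm{U}(2):\ (U_1 \otimes \cdots \otimes U_n)|\psi\rangle = |\varphi\rangle
\;\Longrightarrow\;
\exists\, C_1,\dots,C_n \in \mathcal{C}_1:\ (C_1 \otimes \cdots \otimes C_n)|\psi\rangle = |\varphi\rangle
```

where $\mathcal{C}_1$ denotes the single-qubit Clifford group. The counterexample shows this implication can fail.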


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 223 ◽  
Author(s):  
Hakop Pashayan ◽  
Stephen D. Bartlett ◽  
David Gross

Investigating the classical simulability of quantum circuits provides a promising avenue towards understanding the computational power of quantum systems. Whether a class of quantum circuits can be efficiently simulated with a probabilistic classical computer, or is provably hard to simulate, depends quite critically on the precise notion of "classical simulation" and in particular on the required accuracy. We argue that a notion of classical simulation, which we call EPSILON-simulation (or ϵ-simulation for short), captures the essence of possessing "equivalent computational power" as the quantum system it simulates: It is statistically impossible to distinguish an agent with access to an ϵ-simulator from one possessing the simulated quantum system. We relate ϵ-simulation to various alternative notions of simulation, predominantly focusing on a simulator we call a poly-box. A poly-box outputs 1/poly-precision additive estimates of Born probabilities and marginals. This notion of simulation has gained prominence through a number of recent simulability results. Accepting some plausible complexity-theoretic assumptions, we show that ϵ-simulation is strictly stronger than a poly-box by showing that IQP circuits and unconditioned magic-state-injected Clifford circuits are both hard to ϵ-simulate and yet admit a poly-box. In contrast, we also show that these two notions are equivalent under an additional assumption on the sparsity of the output distribution (poly-sparsity).
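As a toy illustration of the poly-box idea (a hypothetical sketch, not the paper's construction): given sample access to a circuit's output distribution, any single Born probability can be estimated to additive error ϵ using poly(1/ϵ) samples. The distribution, the function names, and the Hoeffding-based sample count below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in output distribution of a hypothetical 2-qubit circuit
# (in reality these samples would come from quantum hardware).
probs = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
outcomes = list(probs)
p = np.array([probs[x] for x in outcomes])

def sample(n):
    """Draw n outcome strings from the toy distribution."""
    return rng.choice(outcomes, size=n, p=p)

def poly_box_estimate(x, eps, delta=0.05):
    """Additive-error estimate of the Born probability p(x).
    Hoeffding's inequality gives |p_hat - p(x)| <= eps with
    probability >= 1 - delta using n = ln(2/delta) / (2 eps^2)
    samples, i.e. polynomially many in 1/eps."""
    n = int(np.ceil(np.log(2 / delta) / (2 * eps**2)))
    hits = np.sum(sample(n) == x)
    return hits / n

est = poly_box_estimate("00", eps=0.02)
```

The point of the contrast drawn in the abstract is that such additive estimates, although efficiently obtainable, need not suffice to sample from the distribution itself to the accuracy ϵ-simulation demands.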


2020 ◽  
Vol 2019 (4) ◽  
pp. 277-294
Author(s):  
Yong Huang

It has been widely observed that virtue ethics, regarded as an ethics of the ancients, in contrast to deontology and consequentialism, seen as ethics of the moderns (Larmore 1996: 19–23), has experienced an impressive revival over the last few decades and is becoming a strong rival to utilitarianism and deontology in the English-speaking world. Despite this, it has been perceived as having an obvious weakness in comparison with its two major rivals. While both utilitarianism and deontology can serve at the same time as an ethical theory, providing guidance for individual persons, and as a political philosophy, offering ways to structure social institutions, virtue ethics, concerned as it is with the character traits of individual persons, seems ill-equipped to be politically useful. In recent years, some attempts have been made to develop a so-called virtue politics, but most of them, including my own (see Huang 2014: Chapter 5), are limited to arguing for the perfectionist view that the state has an obligation to help its members develop their virtues, so the focus remains on the character traits of individual persons. However important those attempts are, such a notion of virtue politics is clearly too narrow, unless one thinks that the only job of the state is to cultivate its people's virtues. Yet the government obviously has many other jobs, such as making laws and social policies, many if not most of which are not aimed at making people virtuous. The question, then, is in what sense such laws and social policies are moral in general and just in particular. Utilitarianism and deontology have ready answers, in light of utility or moral principles respectively. Can virtue ethics provide its own answer? This paper argues for an affirmative answer to this question from the Confucian point of view, as represented by Mencius.
It does so with a focus on the virtue of justice, as it is a central concept in both virtue ethics and political philosophy.


1995 ◽  
Vol 06 (03) ◽  
pp. 509-538 ◽  
Author(s):  
BERNHARD M. RIESS ◽  
ANDREAS A. SCHOENE

A new layout design system for multichip modules (MCMs), consisting of three components, is described. It includes a k-way partitioning approach, an algorithm for pin assignment, and a placement package. For partitioning, we propose an analytical technique combined with a problem-specific multi-way ratio-cut method. This method considers fixed module-level pad positions and assigns the cells to regularly arranged chips on the MCM substrate. In the subsequent pin assignment step, the chip-level pads resulting from cut nets are positioned on the chip borders. Pin assignment is performed by an efficient algorithm that exploits the cell coordinates generated by the analytical technique. Global and final placement for each chip is computed by the state-of-the-art placement tools GORDIANL and DOMINO. For the first time, results for MCM layout designs of benchmark circuits with up to 100,000 cells are presented. They show a small number of required chip-level pads, the most constrained resource in MCM design, and short total wire lengths.
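To make the partitioning objective concrete, here is a minimal sketch of the 2-way ratio-cut metric that this family of methods generalizes (the paper's multi-way variant, fixed-pad handling, and analytical placement are not reproduced; the netlist and function name are illustrative assumptions):

```python
import numpy as np

def ratio_cut(adj, part):
    """2-way ratio-cut cost: cut(A, B) / (|A| * |B|).
    Dividing the cut weight by the product of partition sizes
    penalizes unbalanced splits, unlike a plain min-cut."""
    part = np.asarray(part, dtype=bool)
    # With a symmetric adjacency matrix, this sums each crossing
    # edge exactly once (row in A, column in B).
    cut = adj[np.ix_(part, ~part)].sum()
    return cut / (part.sum() * (~part).sum())

# Toy 4-cell netlist: cells 0-1 are tightly connected, as are 2-3.
adj = np.array([
    [0, 3, 1, 0],
    [3, 0, 0, 1],
    [1, 0, 0, 3],
    [0, 1, 3, 0],
])
cost_good = ratio_cut(adj, [True, True, False, False])  # cut = 2
cost_bad = ratio_cut(adj, [True, False, True, False])   # cut = 6
```

The natural split {0,1} vs {2,3} cuts only the two weight-1 nets, so it scores lower than a split that severs the heavy weight-3 connections.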


2002 ◽  
Vol 13 (07) ◽  
pp. 931-945 ◽  
Author(s):  
KURT FISCHER ◽  
HANS-GEORG MATUTTIS ◽  
NOBUYASU ITO ◽  
MASAMICHI ISHIKAWA

Using a Hubbard–Stratonovich-like decomposition technique, we implemented classical simulations of the quantum circuits of Simon's algorithm, which detects the period of a function, and Shor's algorithm, which factors integers into primes. Our approach has the advantage that the dimension of the problem does not grow exponentially with the number of qubits.


2022 ◽  
Vol 18 (1) ◽  
pp. 1-26
Author(s):  
Mario Simoni ◽  
Giovanni Amedeo Cirillo ◽  
Giovanna Turvani ◽  
Mariagrazia Graziano ◽  
Maurizio Zamboni

Classical simulation of Noisy Intermediate-Scale Quantum computers is a crucial task for testing the expected performance of real hardware. The standard approach, based on solving the Schrödinger and Lindblad equations, becomes demanding in both execution time and memory as the number of qubits grows. In this article, compact models for the simulation of quantum hardware are proposed, ensuring results close to those obtained with the standard formalism. Molecular Nuclear Magnetic Resonance quantum hardware is the target technology, where three non-ideality phenomena, common to other quantum technologies, are taken into account: decoherence, off-resonance qubit evolution, and undesired qubit-qubit residual interaction. A model for each non-ideality phenomenon is embedded into a MATLAB simulation infrastructure of noisy quantum computers. The accuracy of the models is tested on a benchmark of quantum circuits, in the expected operating ranges of quantum hardware. The corresponding outcomes are compared with those obtained via numeric integration of the Schrödinger equation and with Qiskit's QASMSimulator. The results give evidence that this work is a step towards the definition of compact models able to provide fast results close to those obtained with traditional physical simulation strategies, thus paving the way for their integration into a classical simulator of quantum computers.
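To illustrate the first of the three non-idealities, here is a minimal sketch of textbook single-qubit T1/T2 decoherence applied to a density matrix (a generic model, not the article's MATLAB implementation; the function name and parameter values are assumptions):

```python
import numpy as np

def decohere(rho, t, T1, T2):
    """Textbook single-qubit T1/T2 decay over time t:
    the excited-state population relaxes toward |0><0| with
    rate 1/T1, while off-diagonal coherences decay with rate
    1/T2 (physical channels require T2 <= 2*T1)."""
    g1 = 1 - np.exp(-t / T1)   # probability of |1> -> |0> relaxation
    g2 = np.exp(-t / T2)       # surviving fraction of the coherence
    return np.array([
        [rho[0, 0] + g1 * rho[1, 1], g2 * rho[0, 1]],
        [g2 * rho[1, 0],             (1 - g1) * rho[1, 1]],
    ], dtype=complex)

# |+><+| loses coherence while remaining a valid (trace-1) state.
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
rho_t = decohere(plus, t=10e-6, T1=100e-6, T2=50e-6)
```

A compact model of this kind replaces the full Lindblad integration with closed-form decay factors, which is what keeps the simulation cost from scaling with the integration step count.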


2019 ◽  
Vol 17 (05) ◽  
pp. 1950043
Author(s):  
Panchi Li ◽  
Jiahui Guo ◽  
Bing Wang ◽  
Mengqi Hao

In this paper, we propose a quantum circuit for calculating the squared inner product of quantum states. The circuit is built from multi-qubit controlled-SWAP gates, in which each control qubit is initialized to |0⟩ and placed in a uniform superposition by Hadamard gates. Then, according to the control rules, each basis state in the superposition controls the swapping of the corresponding pair of quantum states. Finally, Hadamard gates are applied to the control qubits again, and the squared inner products of many pairs of quantum states can be obtained simultaneously by measuring only one control qubit. We investigate the application of this method to quantum image matching on a classical computer, and the experimental results verify the correctness of the proposed method.
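The single-pair building block here is the standard swap test, which can be checked directly by statevector simulation (a minimal sketch of the textbook circuit, not the paper's multi-pair construction; the function name is an assumption):

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Statevector simulation of the swap test (H on the control,
    controlled-SWAP of the two registers, H on the control).
    Returns the probability of measuring the control in |0>,
    which equals (1 + |<psi|phi>|^2) / 2."""
    n = len(psi)
    dim = n * n
    # Control qubit |0> tensored with the two state registers.
    state = np.kron(np.array([1.0, 0.0], dtype=complex), np.kron(psi, phi))
    state = state.reshape(2, dim)
    # First Hadamard on the control qubit.
    state = np.array([state[0] + state[1], state[0] - state[1]]) / np.sqrt(2)
    # Controlled-SWAP: exchange the two registers on the control=1 branch.
    swapped = state[1].reshape(n, n).T.reshape(dim)
    state = np.array([state[0], swapped])
    # Second Hadamard on the control qubit.
    state = np.array([state[0] + state[1], state[0] - state[1]]) / np.sqrt(2)
    return float(np.sum(np.abs(state[0]) ** 2))

psi = np.array([1.0, 0.0], dtype=complex)               # |0>
phi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # |+>
p0 = swap_test_p0(psi, phi)  # (1 + |<0|+>|^2) / 2 = 0.75
```

Inverting the relation, the squared inner product is recovered from the measurement statistics as |⟨ψ|φ⟩|² = 2·P(0) − 1.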


2020 ◽  
Vol 34 (04) ◽  
pp. 6664-6671 ◽  
Author(s):  
Quanming Yao ◽  
Ju Xu ◽  
Wei-Wei Tu ◽  
Zhanxing Zhu

Neural architecture search (NAS) attracts much research attention because of its ability to identify better architectures than handcrafted ones. Recently, differentiable search methods have become the state of the art in NAS, as they can obtain high-performance architectures in several days. However, they still suffer from huge computation costs and inferior performance due to the construction of the supernet. In this paper, we propose an efficient NAS method based on proximal iterations (denoted NASP). Different from previous works, NASP reformulates the search process as an optimization problem with a discrete constraint on architectures and a regularizer on model complexity. As the new objective is hard to solve, we further propose an efficient algorithm inspired by proximal iterations for optimization. In this way, NASP is not only much faster than existing differentiable search methods, but can also find better architectures and balance model complexity. Finally, extensive experiments on various tasks demonstrate that NASP can obtain high-performance architectures with more than 10 times speedup over state-of-the-art methods.
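The core idea of a proximal step under a discrete architecture constraint can be sketched as follows (a simplified illustration, not the full NASP algorithm, which alternates this step with network-weight training and a complexity regularizer; the weights and gradient below are hypothetical):

```python
import numpy as np

def project_one_hot(alpha):
    """Euclidean projection onto the set of one-hot vectors:
    the closest one-hot vector keeps the largest entry and
    zeroes out the rest."""
    out = np.zeros_like(alpha)
    out[np.argmax(alpha)] = 1.0
    return out

def proximal_search_step(alpha, grad, lr=0.1):
    """One illustrative iteration: a gradient step on the relaxed
    (continuous) architecture weights, followed by projection back
    onto a discrete architecture choice."""
    return project_one_hot(alpha - lr * grad)

alpha = np.array([0.2, 0.5, 0.3])   # relaxed weights over 3 candidate ops
grad = np.array([0.1, -0.2, 0.4])   # hypothetical loss gradient
discrete = proximal_search_step(alpha, grad)
```

Keeping the iterate discrete is what avoids evaluating the full supernet, which is where the claimed speedup over purely continuous relaxations comes from.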

