Polyestimate: a library for near-instantaneous surface code analysis

2015 ◽  
Vol 15 (1&2) ◽  
pp. 1034-1444
Author(s):  
Austin G. Fowler

The surface code is highly practical, enabling arbitrarily reliable quantum computation given a 2-D nearest-neighbor coupled array of qubits with gate error rates below approximately 1%. We describe an open source library, Polyestimate, enabling a user with no knowledge of the surface code to specify realistic physical quantum gate error models and obtain logical error rate estimates. Functions allowing the user to specify simple depolarizing error rates for each gate have also been included. Every effort has been made to make this library user-friendly. Polyestimate provides data essentially instantaneously that previously required hundreds to thousands of hours of simulation, statements which we discuss and make precise. This advance has been made possible through careful analysis of the error structure of the surface code and extensive pre-simulation.
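As a rough illustration of the kind of quantity such a tool estimates (this is not Polyestimate's API, and the prefactor and threshold below are assumed placeholder values), the logical error rate of a distance-d surface code under depolarizing noise is commonly approximated by the scaling ansatz P_L ≈ A (p/p_th)^((d+1)/2):

```python
# Illustrative sketch only: NOT Polyestimate's API. Evaluates the commonly
# used surface-code scaling ansatz P_L ~ A * (p / p_th)^((d + 1) / 2).
# `p_th` and `prefactor` are assumed placeholder values, not fitted data.

def logical_error_rate(p, d, p_th=0.01, prefactor=0.1):
    """Heuristic logical error rate of a distance-d surface code
    with physical error rate p (below-threshold regime)."""
    return prefactor * (p / p_th) ** ((d + 1) / 2)

for d in (3, 5, 7):
    print(d, logical_error_rate(1e-3, d))
```

The exponential suppression in d is the point: each increase of the code distance multiplies the logical error rate by roughly (p/p_th), so modest lattices already reach very low failure rates when p is well below threshold.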

2010 ◽  
Vol 10 (9&10) ◽  
pp. 780-802
Author(s):  
David S. Wang ◽  
Austin G. Fowler ◽  
Charles D. Hill ◽  
Lloyd C.L. Hollenberg

Recent work on fault-tolerant quantum computation making use of topological error correction shows great potential, with the 2d surface code possessing a threshold error rate approaching 1%. However, the 2d surface code requires the use of a complex state distillation procedure to achieve universal quantum computation. The color code is a related scheme partially solving the problem, providing a means to perform all Clifford group gates transversally. We review the color code and its error correcting methodology, discussing one approximate technique based on graph matching. We derive an analytic lower bound to the threshold error rate of 6.25% under error-free syndrome extraction, while numerical simulations indicate it may be as high as 13.3%. Inclusion of faulty syndrome extraction circuits drops the threshold to approximately 0.10 ± 0.01%.


2021 ◽  
Vol 20 (7) ◽  
Author(s):  
Jonghyun Lee ◽  
Jooyoun Park ◽  
Jun Heo

To date, the surface code has become a promising candidate for quantum error correcting codes because it achieves a high threshold and is composed of only nearest-neighbor gate operations and low-weight stabilizers. Here, we show that the logical failure rate can be improved by tuning the lattice size of surface codes, yielding a substantial reduction in the number of physical qubits required for a noise model in which dephasing errors dominate over relaxation errors. We estimated the logical error rate in terms of the lattice size and physical error rate. When the physical error rate was high, a parameter estimation method was applied; when it was low, the most frequently occurring logical error cases were considered. By using the minimum weight perfect matching decoding algorithm, we obtained the optimal lattice size by minimizing the number of qubits needed to achieve the required failure rates when physical error rates and bias are provided.


2011 ◽  
Vol 11 (1&2) ◽  
pp. 8-18
Author(s):  
Austin G. Fowler ◽  
David S. Wang ◽  
Lloyd C. L. Hollenberg

The surface code is a powerful quantum error correcting code that can be defined on a 2-D square lattice of qubits with only nearest neighbor interactions. Syndrome and data qubits form a checkerboard pattern. Information about errors is obtained by repeatedly measuring each syndrome qubit after appropriate interaction with its four nearest neighbor data qubits. Changes in the measurement value indicate the presence of chains of errors in space and time. The standard method of determining operations likely to return the code to its error-free state is to use the minimum weight matching algorithm to connect pairs of measurement changes with chains of corrections such that the minimum total number of corrections is used. Prior work has not taken into account the propagation of errors in space and time by the two-qubit interactions. We show that taking this into account leads to a quadratic improvement of the logical error rate.
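To make the matching step concrete, here is a toy illustration (not the authors' decoder): pairing up syndrome measurement changes, viewed as points in (x, y, t) space-time, so that the total correction-chain length is minimized. Production decoders use Edmonds' blossom algorithm; this brute-force version is exponential and only shows what the matching computes.

```python
# Toy minimum-weight perfect matching of syndrome "detection events" in
# (x, y, t) space-time, by brute force. Illustrative only: real decoders
# use Edmonds' blossom algorithm, which scales polynomially.

def weight(a, b):
    # Manhattan distance in space and time ~ length of the shortest
    # error chain connecting the two detection events.
    return sum(abs(u - v) for u, v in zip(a, b))

def min_weight_matching(events):
    if not events:
        return 0, []
    first, rest = events[0], events[1:]
    best = (float("inf"), [])
    for i, partner in enumerate(rest):
        w, pairs = min_weight_matching(rest[:i] + rest[i + 1:])
        total = weight(first, partner) + w
        if total < best[0]:
            best = (total, [(first, partner)] + pairs)
    return best

# Four detection events: two nearby pairs should be matched locally.
events = [(0, 0, 0), (0, 2, 0), (3, 3, 1), (3, 4, 1)]
total, pairs = min_weight_matching(events)
print(total, pairs)
```

The quadratic improvement described above comes not from changing this matching step but from reweighting the graph edges to reflect how two-qubit interactions propagate errors in space and time.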


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 352
Author(s):  
Rui Chao ◽  
Michael E. Beverland ◽  
Nicolas Delfosse ◽  
Jeongwan Haah

The surface code is a prominent topological error-correcting code exhibiting high fault-tolerance accuracy thresholds. Conventional schemes for error correction with the surface code place qubits on a planar grid and assume native CNOT gates between the data qubits with nearest-neighbor ancilla qubits. Here, we present surface code error-correction schemes using only Pauli measurements on single qubits and on pairs of nearest-neighbor qubits. In particular, we provide several qubit layouts that offer favorable trade-offs between qubit overhead, circuit depth and connectivity degree. We also develop minimized measurement sequences for syndrome extraction, enabling reduced logical error rates and improved fault-tolerance thresholds. Our work applies to topologically protected qubits realized with Majorana zero modes and to similar systems in which multi-qubit Pauli measurements rather than CNOT gates are the native operations.


2008 ◽  
Vol 8 (3&4) ◽  
pp. 330-344
Author(s):  
A.M. Stephens ◽  
A.G. Fowler ◽  
L.C.L. Hollenberg

Assuming an array that consists of two parallel lines of qubits and that permits only nearest neighbor interactions, we construct physical and logical circuitry to enable universal fault tolerant quantum computation under the $[[7,1,3]]$ quantum code. A rigorous lower bound to the fault tolerant threshold for this array is determined in a number of physical settings. Adversarial memory errors, two-qubit gate errors and readout errors are included in our analysis. In the setting where the physical memory failure rate is equal to one-tenth of the physical gate error rate, the physical readout error rate is equal to the physical gate error rate, and the duration of physical readout is ten times the duration of a physical gate, we obtain a lower bound to the asymptotic threshold of $1.96\times10^{-6}$.


2020 ◽  
Vol 8 (6) ◽  
Author(s):  
Alan Tran ◽  
Alex Bocharov ◽  
Bela Bauer ◽  
Parsa Bonderson

One of the main challenges for quantum computation is that while the number of gates required to perform a non-trivial quantum computation may be very large, decoherence and errors in realistic quantum architectures limit the number of physical gate operations that can be performed coherently. Therefore, an optimal mapping of the quantum algorithm into the physically available set of operations is of crucial importance. We examine this problem for a measurement-only topological quantum computer based on Majorana zero modes, where gates are performed through sequences of measurements. Such a scheme has been proposed as a practical, scalable approach to process quantum information in an array of topological qubits built using Majorana zero modes. Building on previous work that has shown that multi-qubit Clifford gates can be enacted in a topologically protected fashion in such qubit networks, we discuss methods to obtain the optimal measurement sequence for a given Clifford gate under the constraints imposed by the physical architecture, such as layout and the relative difficulty of implementing different types of measurements. Our methods also provide tools for comparative analysis of different architectures and strategies, given experimental characterizations of particular aspects of the systems under consideration. As a further non-trivial demonstration, we discuss an implementation of the surface code in Majorana-based topological qubits. We use the techniques developed here to obtain an optimized measurement sequence that implements the stabilizer measurements using only fermionic parity measurements on nearest-neighbor topological qubit islands.


2019 ◽  
Vol 28 (4) ◽  
pp. 1411-1431 ◽  
Author(s):  
Lauren Bislick ◽  
William D. Hula

Purpose: This retrospective analysis examined group differences in error rate across 4 contextual variables (clusters vs. singletons, syllable position, number of syllables, and articulatory phonetic features) in adults with apraxia of speech (AOS) and adults with aphasia only. Group differences in the distribution of error type across contextual variables were also examined. Method: Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia participated in this study. In the context of a 2-group experimental design, the influence of 4 contextual variables on error rate and error type distribution was examined via repetition of 29 multisyllabic words. Error rates were analyzed using Bayesian methods, whereas distribution of error type was examined via descriptive statistics. Results: There were 4 findings of robust differences between the 2 groups. These differences were found for syllable position, number of syllables, manner of articulation, and voicing. Group differences were less robust for clusters versus singletons and place of articulation. Results of error type distribution show a high proportion of distortion and substitution errors in speakers with AOS and a high proportion of substitution and omission errors in speakers with aphasia. Conclusion: Findings add to the continued effort to improve the understanding and assessment of AOS and aphasia. Several contextual variables more consistently influenced breakdown in participants with AOS compared to participants with aphasia and should be considered during the diagnostic process. Supplemental Material https://doi.org/10.23641/asha.9701690


Author(s):  
S. Vijaya Rani ◽  
G. N. K. Suresh Babu

Illegal hackers penetrate the servers and networks of corporate and financial institutions to gain money and extract vital information. Hacking varies from attacks on a single computing system to attacks on many systems. Hackers gain access by sending malicious packets into the network through viruses, worms, Trojan horses, etc., and scan a network with various tools to collect information about the network and its hosts. Hence it is essential to detect attacks as they enter a network. Methods available for intrusion detection include Naive Bayes, decision trees, support vector machines, k-nearest neighbors, and artificial neural networks. A neural network consists of processing units connected in a complex manner, able to store information and make it available for use. It acts like a human brain, acquiring knowledge from the environment through training and learning, and many algorithms are available for this learning process. This work carries out research on the analysis of malicious packets and on predicting the error rate in the detection of injected packets using artificial neural network algorithms.
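As a purely illustrative sketch of the simplest neural approach mentioned above (the features and data here are synthetic inventions, not from the paper; real intrusion-detection work trains on labeled traffic datasets), a minimal perceptron can classify toy "packet feature" vectors and report its training error rate:

```python
import random

# Purely illustrative: a minimal perceptron on synthetic "packet feature"
# vectors (size, rate, flag count -- invented here, not from the paper).

random.seed(0)

def make_sample():
    malicious = random.random() < 0.5
    base = 1.0 if malicious else 0.0   # toy assumption: malicious = larger features
    x = [base + random.gauss(0, 0.3) for _ in range(3)]
    return x, 1 if malicious else 0

data = [make_sample() for _ in range(200)]
w, b, lr = [0.0] * 3, 0.0, 0.1

for _ in range(20):                    # training epochs
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred                 # perceptron update on misclassification
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

errors = sum(1 for x, y in data
             if (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) != y)
print("training error rate:", errors / len(data))
```

Multi-layer networks and the other listed classifiers follow the same train-then-measure-error-rate workflow, differing only in the model being fitted.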


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical type I error rates for the cut-offs 2/3 and 4/7 of Table 4, Table 5 and Table 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases the kind of representation of numeric values in SAS has resulted in wrong categorization due to a numeric representation error of differences. We corrected the simulation by using the round function of SAS in the calculation process with the same seeds as before. For Table 4 the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5 the value for the cut-off 4/7 changes from 0.144729 to 0.139626 and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6 the value for the cut-off 4/7 changes from 0.125528 to 0.122144 and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141 “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).” has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. There were only minor changes smaller than 0.03. These changes do not affect the interpretation of the results or our recommendations.
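The numeric-representation error described above is the standard floating-point pitfall: a difference that is mathematically zero can carry a tiny binary residue and fall on the wrong side of a cut-off. A short Python illustration (not the original SAS code) shows the effect and the rounding fix:

```python
# Illustration of the numeric-representation error described above, in
# Python rather than the original SAS code: differences of floats can land
# on the wrong side of a cut-off unless rounded first.

diff = 0.1 + 0.2 - 0.3           # mathematically zero
print(diff)                      # tiny nonzero binary residue, not 0.0

cutoff = 0.0
print(diff > cutoff)             # True: miscategorized without rounding
print(round(diff, 12) > cutoff)  # False: rounding restores the intended category
```

This is exactly why re-running the simulation with SAS's round function in the categorization step shifted a small number of cases, and only by small amounts.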

