Optimization of the surface code design for Majorana-based qubits

Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 352
Author(s):  
Rui Chao ◽  
Michael E. Beverland ◽  
Nicolas Delfosse ◽  
Jeongwan Haah

The surface code is a prominent topological error-correcting code exhibiting high fault-tolerance accuracy thresholds. Conventional schemes for error correction with the surface code place qubits on a planar grid and assume native CNOT gates between data qubits and their nearest-neighbor ancilla qubits. Here, we present surface code error-correction schemes using only Pauli measurements on single qubits and on pairs of nearest-neighbor qubits. In particular, we provide several qubit layouts that offer favorable trade-offs between qubit overhead, circuit depth and connectivity degree. We also develop minimized measurement sequences for syndrome extraction, enabling reduced logical error rates and improved fault-tolerance thresholds. Our work applies to topologically protected qubits realized with Majorana zero modes and to similar systems in which multi-qubit Pauli measurements rather than CNOT gates are the native operations.
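To illustrate the kind of substitution the abstract describes, here is a minimal numerical sketch of the textbook measurement-based CNOT gadget: a CNOT replaced by a sequence of Pauli measurements on an ancilla plus Pauli fix-ups. This is the standard construction, not the paper's optimized layouts or syndrome-extraction sequences; qubit ordering and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def op(c, t, a):
    """Tensor product over three qubits, ordered (control, target, ancilla)."""
    return np.kron(np.kron(c, t), a)

def measure_pauli(psi, P):
    """Projectively measure a +/-1 Pauli observable P; return outcome and state."""
    proj = (np.eye(8) + P) / 2
    p_plus = np.linalg.norm(proj @ psi) ** 2
    if rng.random() < p_plus:
        return +1, (proj @ psi) / np.sqrt(p_plus)
    proj = (np.eye(8) - P) / 2
    return -1, (proj @ psi) / np.sqrt(1 - p_plus)

# Random two-qubit input state; ancilla prepared in |+>.
in2 = rng.normal(size=4) + 1j * rng.normal(size=4)
in2 /= np.linalg.norm(in2)
psi = np.kron(in2, np.array([1, 1], dtype=complex) / np.sqrt(2))

# Measurement sequence replacing CNOT(control -> target):
m1, psi = measure_pauli(psi, op(Z, I2, Z))   # measure Z_c Z_a
m2, psi = measure_pauli(psi, op(I2, X, X))   # measure X_t X_a
m3, psi = measure_pauli(psi, op(I2, I2, Z))  # measure Z_a (decouple ancilla)
if m2 < 0:
    psi = op(Z, I2, I2) @ psi                # Pauli fix-up Z_c
if m1 * m3 < 0:
    psi = op(I2, X, I2) @ psi                # Pauli fix-up X_t

# Ancilla is now in a Z eigenstate; slice it off and compare with a true CNOT.
out2 = psi.reshape(4, 2)[:, 0 if m3 > 0 else 1]
out2 /= np.linalg.norm(out2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
print(f"|<CNOT psi_in | psi_out>| = {abs(np.vdot(CNOT @ in2, out2)):.6f}")  # ~1.0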

2011 ◽  
Vol 11 (1&2) ◽  
pp. 8-18
Author(s):  
Austin G. Fowler ◽  
David S. Wang ◽  
Lloyd C. L. Hollenberg

The surface code is a powerful quantum error correcting code that can be defined on a 2-D square lattice of qubits with only nearest neighbor interactions. Syndrome and data qubits form a checkerboard pattern. Information about errors is obtained by repeatedly measuring each syndrome qubit after appropriate interaction with its four nearest neighbor data qubits. Changes in the measurement value indicate the presence of chains of errors in space and time. The standard method of determining operations likely to return the code to its error-free state is to use the minimum weight matching algorithm to connect pairs of measurement changes with chains of corrections such that the minimum total number of corrections is used. Prior work has not taken into account the propagation of errors in space and time by the two-qubit interactions. We show that taking this into account leads to a quadratic improvement of the logical error rate.
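As a baseline, the matching step the abstract refers to can be sketched with an off-the-shelf graph library. The sketch below uses uniform space-time distance as the edge weight, with no boundary nodes and no reweighting for error propagation through the two-qubit gates, which is exactly the simplification this paper improves on. The detection-event coordinates are made up for illustration.

```python
import itertools
import networkx as nx

# Detection events: (row, col, round) where a syndrome measurement changed.
events = [(0, 1, 0), (0, 3, 0), (2, 2, 1), (2, 2, 3)]

G = nx.Graph()
for (i, a), (j, b) in itertools.combinations(enumerate(events), 2):
    # Weight = space-time (Manhattan) distance, i.e. uniform error rates.
    w = sum(abs(x - y) for x, y in zip(a, b))
    G.add_edge(i, j, weight=-w)  # negate so max-weight matching minimizes cost

# Maximum-cardinality, minimum-total-weight pairing of detection events.
matching = nx.max_weight_matching(G, maxcardinality=True)
for i, j in matching:
    print(events[i], "<->", events[j])
```

A production decoder also adds boundary nodes so an odd set of events can match to the code's edge, and (per this paper) adjusts edge weights to account for how the syndrome-extraction circuit itself spreads errors.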


2015 ◽  
Vol 15 (1&2) ◽  
pp. 1034-1444
Author(s):  
Austin G. Fowler

The surface code is highly practical, enabling arbitrarily reliable quantum computation given a 2-D nearest-neighbor coupled array of qubits with gate error rates below approximately 1%. We describe an open source library, Polyestimate, enabling a user with no knowledge of the surface code to specify realistic physical quantum gate error models and obtain logical error rate estimates. Functions allowing the user to specify simple depolarizing error rates for each gate have also been included. Every effort has been made to make this library user-friendly. Polyestimate provides, essentially instantaneously, data that previously required hundreds to thousands of hours of simulation, a statement which we discuss and make precise. This advance has been made possible through careful analysis of the error structure of the surface code and extensive pre-simulation.
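Polyestimate's fitted models are not reproduced here, but the flavor of an instantaneous estimate can be conveyed with the common below-threshold scaling ansatz for the surface code. The constants `A` and `p_th` below are illustrative placeholders, not the library's fitted values.

```python
def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Common empirical surface-code scaling below threshold:
    p_L ~ A * (p / p_th) ** ((d + 1) / 2) for physical error rate p
    and code distance d. A and p_th are illustrative constants."""
    return A * (p / p_th) ** ((d + 1) / 2)

# Example: how the logical error rate falls as distance grows at p = 1e-3.
for d in (3, 5, 7):
    print(d, f"{logical_error_rate(1e-3, d):.2e}")
```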


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Christopher Chamberland ◽  
Kyungjoo Noh

Fault-tolerant quantum computing promises significant computational speedup over classical computing for a variety of important problems. One of the biggest challenges for realizing fault-tolerant quantum computing is preparing magic states with sufficiently low error rates. Magic state distillation is one of the most efficient schemes for preparing high-quality magic states. However, since magic state distillation circuits are not fault-tolerant, all the operations in the distillation circuits must be encoded in a large-distance error-correcting code, resulting in a significant resource overhead. Here, we propose a fault-tolerant scheme for directly preparing high-quality magic states, which makes magic state distillation unnecessary. In particular, we introduce a concept that we call redundant ancilla encoding. Combined with flag qubits, redundant ancilla encoding allows circuits both to measure the stabilizer generators of a code and to measure the global operators needed to fault-tolerantly prepare magic states, all using nearest-neighbor interactions. We apply such schemes to a planar architecture of the triangular color code family and demonstrate that our scheme requires at least an order of magnitude fewer qubits and less space–time overhead than the most competitive magic state distillation schemes. Since our scheme requires only nearest-neighbor interactions in a planar architecture, it is suitable for various quantum computing platforms currently under development.
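For background on the flag-qubit ingredient, here is a sketch of a standard flagged syndrome-extraction sequence for a weight-4 Z stabilizer, in the spirit of the flag schemes the abstract builds on. This is the generic textbook construction, not this paper's redundant-ancilla circuits, and the qubit labels are hypothetical.

```python
# Flagged measurement of Z1 Z2 Z3 Z4 with one syndrome ancilla 'a' (|0>)
# and one flag qubit 'f' (|+>). The two flag CNOTs bracket the middle data
# CNOTs, so a single ancilla fault that would spread to two or more data
# qubits also flips the flag's X-basis outcome.
sequence = [
    ("prep",    "a",  "|0>"),
    ("prep",    "f",  "|+>"),
    ("CNOT",    "d1", "a"),
    ("CNOT",    "f",  "a"),   # couple flag (control f, target a)
    ("CNOT",    "d2", "a"),
    ("CNOT",    "d3", "a"),
    ("CNOT",    "f",  "a"),   # decouple flag
    ("CNOT",    "d4", "a"),
    ("measure", "a",  "Z"),   # stabilizer (syndrome) outcome
    ("measure", "f",  "X"),   # flag outcome: -1 signals a dangerous fault
]
for step in sequence:
    print(*step)
```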


Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 102 ◽  
Author(s):  
Ben Criger ◽  
Imran Ashraf

Fault tolerance is a prerequisite for scalable quantum computing. Architectures based on 2D topological codes are effective for near-term implementations of fault tolerance. To obtain high performance with these architectures, we require a decoder which can adapt to the wide variety of error models present in experiments. The typical approach to the problem of decoding the surface code is to reduce it to minimum-weight perfect matching in a way that provides a suboptimal threshold error rate, and is specialized to correct a specific error model. Recently, optimal threshold error rates for a variety of error models have been obtained by methods which do not use minimum-weight perfect matching, showing that such thresholds can be achieved in polynomial time. It is an open question whether these results can also be achieved by minimum-weight perfect matching. In this work, we use belief propagation and a novel algorithm for producing edge weights to increase the utility of minimum-weight perfect matching for decoding surface codes. This allows us to correct depolarizing errors using the rotated surface code, obtaining a threshold of 17.76±0.02%. This is larger than the threshold achieved by previous matching-based decoders (14.88±0.02%), though still below the known upper bound of ∼18.9%.
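The reweighting idea at the core of this approach can be sketched in a few lines: per-edge error probabilities (for example, marginals produced by a belief-propagation pass over the observed syndrome) are converted into log-likelihood weights for the matching graph. The probabilities below are illustrative stand-ins, not BP output for a real code.

```python
import math

def edge_weight(p_edge):
    """Log-likelihood weight for minimum-weight matching: minimizing the
    total weight maximizes the likelihood of the chosen error set,
    assuming independent edge flips with probability p_edge."""
    return -math.log(p_edge / (1 - p_edge))

# Hypothetical per-edge marginals after a belief-propagation pass.
bp_marginals = {("v0", "v1"): 0.02, ("v1", "v2"): 0.30, ("v0", "v2"): 0.001}
weights = {e: edge_weight(p) for e, p in bp_marginals.items()}
print(weights)  # low-probability edges become expensive to match across
```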


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
M. McEwen ◽  
D. Kafri ◽  
Z. Chen ◽  
J. Atalaya ◽  
K. J. Satzinger ◽  
...  

Quantum computing can become scalable through error correction, but logical error rates only decrease with system size when physical errors are sufficiently uncorrelated. During computation, unused high energy levels of the qubits can become excited, creating leakage states that are long-lived and mobile. Particularly for superconducting transmon qubits, this leakage opens a path to errors that are correlated in space and time. Here, we report a reset protocol that returns a qubit to the ground state from all relevant higher level states. We test its performance with the bit-flip stabilizer code, a simplified version of the surface code for quantum error correction. We investigate the accumulation and dynamics of leakage during error correction. Using this protocol, we find lower rates of logical errors and an improved scaling and stability of error suppression with increasing qubit number. This demonstration provides a key step on the path towards scalable quantum computing.


Nature ◽  
2021 ◽  
Vol 595 (7867) ◽  
pp. 383-387
Author(s):  
Zijun Chen ◽  
Kevin J. Satzinger ◽  
Juan Atalaya ◽  
Alexander N. Korotkov ◽  
...  

Realizing the potential of quantum computing requires sufficiently low logical error rates [1]. Many applications call for error rates as low as 10⁻¹⁵ (refs. 2–9), but state-of-the-art quantum platforms typically have physical error rates near 10⁻³ (refs. 10–14). Quantum error correction [15–17] promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected. Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation. Here we implement one-dimensional repetition codes embedded in a two-dimensional grid of superconducting qubits that demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round more than 100-fold when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analysing error correlations with high precision, allowing us to characterize error locality while performing quantum error correction. Finally, we perform error detection with a small logical qubit using the 2D surface code on the same device [18,19] and show that the results from both one- and two-dimensional codes agree with numerical simulations that use a simple depolarizing error model. These experimental demonstrations provide a foundation for building a scalable fault-tolerant quantum computer with superconducting qubits.
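The reported scaling can be illustrated with the standard suppression-factor model: a distance-d repetition code uses n = 2d − 1 qubits, and the logical error per round falls by a factor Λ each time d grows by 2. The values of Λ and the base error rate below are illustrative, not the measured device values.

```python
# Hedged model of exponential error suppression in a repetition code:
# eps_d ~ eps_1 / Lambda**((d + 1) / 2), with n = 2d - 1 physical qubits.
LAMBDA = 3.2   # illustrative suppression factor
EPS_1 = 3e-2   # illustrative base logical error per round

def eps_per_round(n_qubits):
    d = (n_qubits + 1) // 2          # code distance for n = 2d - 1 qubits
    return EPS_1 / LAMBDA ** ((d + 1) / 2)

for n in (5, 9, 13, 17, 21):
    print(n, f"{eps_per_round(n):.2e}")

ratio = eps_per_round(5) / eps_per_round(21)
print(f"suppression from 5 to 21 qubits: {ratio:.0f}x")  # ~Lambda**4 ~ 100x
```

With Λ ≈ 3.2 the model reproduces the more-than-100-fold reduction quoted in the abstract, since going from 5 to 21 qubits (d = 3 to d = 11) gains four factors of Λ.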


2021 ◽  
Vol 14 (1) ◽  
pp. 10-16
Author(s):  
Aleksandr Kozyukov ◽  
Vladimir Zolnikov ◽  
Svetlana Evdokimova ◽  
Oleg Kvasov ◽  
Konstantin Yakovlev ◽  
...  

The article discusses algorithmic methods for ensuring the fault tolerance of the electronic component base (ECB). The protection methods used in regular and irregular structures are described. The principles of Hamming codes, composite codes, and error detection and correction codes are explained. The advantages and disadvantages of arithmetic residue codes and of redundancy at the level of program-code fragments are shown.
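As a concrete instance of the Hamming codes discussed, here is a minimal sketch of Hamming(7,4) encoding and single-error correction. The parity-check matrix is the standard one in which column j is the binary representation of j, so the syndrome of a single bit flip directly spells out its position.

```python
import numpy as np

# Hamming(7,4) parity-check matrix: column j (1-indexed) is j in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(data4):
    """Place 4 data bits at positions 3, 5, 6, 7; fill parity bits 1, 2, 4."""
    c = np.zeros(7, dtype=int)
    c[[2, 4, 5, 6]] = data4
    c[0] = c[2] ^ c[4] ^ c[6]   # parity over positions with binary bit 1 set
    c[1] = c[2] ^ c[5] ^ c[6]   # ... bit 2 set
    c[3] = c[4] ^ c[5] ^ c[6]   # ... bit 4 set
    return c

def correct(received):
    """Compute the syndrome and flip the indicated bit (0 means no error)."""
    s = H @ received % 2
    pos = 4 * s[0] + 2 * s[1] + s[2]
    if pos:
        received = received.copy()
        received[pos - 1] ^= 1
    return received

word = encode([1, 0, 1, 1])
word[4] ^= 1                                  # inject a single bit flip
assert (correct(word) == encode([1, 0, 1, 1])).all()
print("single bit-flip corrected")
```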


AI Magazine ◽  
2009 ◽  
Vol 30 (4) ◽  
pp. 85 ◽  
Author(s):  
Per Ola Kristensson

For text entry methods to be useful they have to deliver high entry rates and low error rates. At the same time they need to be easy to learn and provide effective means of correcting mistakes. Intelligent text entry methods combine AI techniques with HCI theory to enable users to enter text as efficiently and effortlessly as possible. Here I sample a selection of such techniques from the research literature and set them into their historical context. I then highlight five challenges for text entry methods that aspire to make an impact in our society: localization, error correction, editor support, feedback, and context of use.


F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 233
Author(s):  
Jonathan Z.L. Zhao ◽  
Eliseos J. Mucaki ◽  
Peter K. Rogan

Background: Gene signatures derived from transcriptomic data using machine learning methods have shown promise for biodosimetry testing. These signatures may not be sufficiently robust for large-scale testing, as their performance has not been adequately validated on external, independent datasets. The present study develops human and murine signatures with biochemically-inspired machine learning that are strictly validated using k-fold and traditional approaches.

Methods: Gene Expression Omnibus (GEO) datasets of exposed human and murine lymphocytes were preprocessed via nearest-neighbor imputation, and the expression of genes implicated in the literature as responsive to radiation exposure (n=998) was ranked by Minimum Redundancy Maximum Relevance (mRMR). Optimal signatures were derived by backward, complete, and forward sequential feature selection using Support Vector Machines (SVM), and validated using k-fold or traditional validation on independent datasets.

Results: The best human signatures we derived exhibit k-fold validation accuracies of up to 98% (DDB2, PRKDC, TPP2, PTPRE, and GADD45A) when validated over 209 samples, and traditional validation accuracies of up to 92% (DDB2, CD8A, TALDO1, PCNA, EIF4G2, LCN2, CDKN1A, PRKCH, ENO1, and PPM1D) when validated over 85 samples. Some human signatures are specific enough to differentiate between chemotherapy and radiotherapy. Certain multi-class murine signatures have sufficient granularity in dose estimation to inform eligibility for cytokine therapy (assuming these signatures could be translated to humans). We compiled a list of the most frequently appearing genes in the top 20 human and mouse signatures; more frequent appearance among an ensemble of signatures may indicate a greater impact of these genes on the performance of individual signatures. Several genes in the signatures we derived are present in previously proposed signatures.

Conclusions: Gene signatures for ionizing radiation exposure derived by machine learning have low error rates on externally validated, independent datasets, and exhibit high specificity and granularity for dose estimation.
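A minimal sketch of the described pipeline, using scikit-learn. mRMR ranking is not part of scikit-learn, so a univariate filter stands in for it here; the data are synthetic, and the feature counts and parameters are illustrative rather than the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       f_classif)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the expression matrix: 209 samples x 998 genes.
X, y = make_classification(n_samples=209, n_features=998, n_informative=20,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),                    # stand-in for mRMR ranking
    SequentialFeatureSelector(SVC(kernel="linear"),  # forward feature selection
                              n_features_to_select=5,
                              direction="forward"),
    SVC(kernel="linear"),                            # final SVM classifier
)

# k-fold validation of the whole selection + classification pipeline.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean k-fold accuracy: {scores.mean():.3f}")
```

Running feature selection inside the cross-validation loop, as here, is what keeps the reported accuracies honest: selecting genes on the full dataset before splitting would leak information into the held-out folds.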

