Optimal local unitary encoding circuits for the surface code

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 517
Author(s):  
Oscar Higgott ◽  
Matthew Wilson ◽  
James Hefford ◽  
James Dborin ◽  
Farhan Hanif ◽  
...  

The surface code is a leading candidate quantum error correcting code, owing to its high threshold and compatibility with existing experimental architectures. Bravyi et al. (2006) showed that encoding a state in the surface code using local unitary operations requires time at least linear in the lattice size L; however, the most efficient known method for encoding an unknown state, introduced by Dennis et al. (2002), has O(L²) time complexity. Here, we present an optimal local unitary encoding circuit for the planar surface code that uses exactly 2L time steps to encode an unknown state in a distance-L planar code. We further show how an O(L)-complexity local unitary encoder for the toric code can be found by enforcing locality in the O(log L)-depth non-local renormalisation encoder. We relate these techniques by providing an O(L) local unitary circuit to convert between a toric code and a planar code, and also provide optimal encoders for the rectangular, rotated and 3D surface codes. Furthermore, we show how our encoding circuit for the planar code can be used to prepare fermionic states in the compact mapping, a recently introduced fermion-to-qubit mapping that has a stabiliser structure similar to that of the surface code and is particularly efficient for simulating the Fermi-Hubbard model.
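The object this encoder acts on can be made concrete. The sketch below is a minimal Python illustration (not the authors' circuit, and the coordinate convention is chosen for convenience): it enumerates the X- and Z-type stabilisers of a distance-L planar code, checks that they commute, and confirms the standard counting of L² + (L−1)² physical qubits supporting one logical qubit.

```python
# Minimal sketch (not the paper's encoding circuit): enumerate the stabilisers of
# a distance-L planar surface code and check the basic bookkeeping behind it.
from itertools import product

def planar_code(L):
    """Return (data_qubits, x_checks, z_checks) for a distance-L planar code.

    Sites live on a (2L-1) x (2L-1) grid: data qubits where r+c is even,
    X-type checks where r is even and c is odd, Z-type checks where r is odd
    and c is even.  Bulk checks have weight 4, boundary checks weight 3.
    """
    size = 2 * L - 1
    data = {(r, c) for r, c in product(range(size), repeat=2) if (r + c) % 2 == 0}

    def support(r, c):
        nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        return frozenset(q for q in nbrs if q in data)

    x_checks = {p: support(*p) for p in product(range(size), repeat=2)
                if p[0] % 2 == 0 and p[1] % 2 == 1}
    z_checks = {p: support(*p) for p in product(range(size), repeat=2)
                if p[0] % 2 == 1 and p[1] % 2 == 0}
    return data, x_checks, z_checks

L = 5
data, x_checks, z_checks = planar_code(L)
# X- and Z-type checks commute iff their supports overlap on an even number of qubits.
assert all(len(sx & sz) % 2 == 0 for sx in x_checks.values() for sz in z_checks.values())
# n - (#X + #Z) = 1 encoded logical qubit, using L^2 + (L-1)^2 physical qubits.
print(len(data), len(x_checks), len(z_checks), len(data) - len(x_checks) - len(z_checks))
```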

2011 ◽  
Vol 11 (1&2) ◽  
pp. 8-18
Author(s):  
Austin G. Fowler ◽  
David S. Wang ◽  
Lloyd C. L. Hollenberg

The surface code is a powerful quantum error correcting code that can be defined on a 2-D square lattice of qubits with only nearest-neighbor interactions. Syndrome and data qubits form a checkerboard pattern. Information about errors is obtained by repeatedly measuring each syndrome qubit after appropriate interaction with its four nearest-neighbor data qubits. Changes in the measurement value indicate the presence of chains of errors in space and time. The standard method of determining operations likely to return the code to its error-free state is to use the minimum-weight matching algorithm to connect pairs of measurement changes with chains of corrections such that the minimum total number of corrections is used. Prior work has not taken into account the propagation of errors in space and time by the two-qubit interactions. We show that taking this into account leads to a quadratic improvement of the logical error rate.
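The matching step described above can be illustrated with a toy decoder. This is not the authors' decoder (which additionally reweights the matching graph to account for errors propagated by the two-qubit syndrome-extraction gates); it simply pairs syndrome "defects" in space-time so that the summed pairing weight is minimal, using brute force where a real decoder would use a Blossom-style minimum-weight perfect matching.

```python
# Toy minimum-weight pairing of syndrome defects (illustrative, brute force).
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def min_weight_pairing(defects, dist=manhattan):
    """Return (total_weight, pairs) for the cheapest perfect pairing of defects."""
    if not defects:
        return 0, []
    first, rest = defects[0], defects[1:]
    best = (float("inf"), [])
    for i, partner in enumerate(rest):
        w, pairs = min_weight_pairing(rest[:i] + rest[i + 1:], dist)
        cand = (w + dist(first, partner), [(first, partner)] + pairs)
        best = min(best, cand, key=lambda t: t[0])
    return best

# Defects in (row, column, time); an odd number of defects is handled in
# practice by matching to the boundary, which this toy version omits.
defects = [(1, 2, 0), (1, 3, 0), (6, 6, 2), (9, 6, 3)]
print(min_weight_pairing(defects))   # pairs (1,2,0)-(1,3,0) and (6,6,2)-(9,6,3)
```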


Author(s):  
Shiroman Prakash

The ternary Golay code—one of the first and most beautiful classical error-correcting codes discovered—naturally gives rise to an 11-qutrit quantum error correcting code. We apply this code to magic state distillation, a leading approach to fault-tolerant quantum computing. We find that the 11-qutrit Golay code can distil the ‘most magic’ qutrit state—an eigenstate of the qutrit Fourier transform known as the strange state—with cubic error suppression and a remarkably high threshold. It also distils the ‘second-most magic’ qutrit state, the Norrell state, with quadratic error suppression and an equally high threshold to depolarizing noise.
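The eigenstate property quoted above is easy to verify numerically. The explicit form |S⟩ = (|1⟩ − |2⟩)/√2 used below is the convention common in the qutrit magic-state literature; it is an assumption of this sketch rather than something stated in the abstract.

```python
# Numerical check: the "strange state" is an eigenstate of the qutrit Fourier
# transform.  The vector (|1> - |2>)/sqrt(2) is an assumed convention here.
import numpy as np

omega = np.exp(2j * np.pi / 3)
F = np.array([[omega ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

strange = np.array([0, 1, -1]) / np.sqrt(2)
out = F @ strange                       # should equal lambda * strange
lam = out[1] / strange[1]
assert np.allclose(out, lam * strange)
print("eigenvalue:", np.round(lam, 6))  # -> 1j, up to numerical precision
```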


2021 ◽  
Vol 20 (7) ◽  
Author(s):  
Jonghyun Lee ◽  
Jooyoun Park ◽  
Jun Heo

To date, the surface code has become a promising candidate among quantum error correcting codes because it achieves a high threshold and requires only nearest-neighbor gate operations and low-weight stabilizers. Here, we show that the logical failure rate can be improved by adjusting the lattice size of the surface code, yielding a substantial reduction in the number of physical qubits for a noise model in which dephasing errors dominate over relaxation errors. We estimate the logical error rate as a function of the lattice size and physical error rate: when the physical error rate is high, a parameter estimation method is applied, and when it is low, the most frequently occurring logical error cases are considered. Using the minimum-weight perfect matching decoding algorithm, we obtain the optimal lattice size by minimizing the number of qubits needed to achieve the required failure rate for a given physical error rate and bias.
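The paper's own estimates come from fitting simulation data; as a rough stand-in, the widely used heuristic p_L ≈ A·(p/p_th)^((d+1)/2) conveys the flavour of picking the smallest lattice (and hence qubit count) that meets a target failure rate. The constants A and p_th and the d² qubit count below are illustrative assumptions, not values from the paper.

```python
# Illustrative distance/qubit-count selection using a standard scaling heuristic.
def logical_rate(p, d, p_th=1e-2, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

def smallest_distance(p, target, max_d=101):
    for d in range(3, max_d, 2):            # surface-code distances are odd
        if logical_rate(p, d) < target:
            return d, d * d                  # rotated layout: ~d^2 data qubits
    raise ValueError("physical error rate too close to threshold")

for p in (1e-3, 2e-3, 5e-3):
    d, n = smallest_distance(p, target=1e-12)
    print(f"p = {p:.0e}:  distance {d}, ~{n} data qubits")
```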


Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 88 ◽  
Author(s):  
Christina Knapp ◽  
Michael Beverland ◽  
Dmitry I. Pikulin ◽  
Torsten Karzig

Majorana-based quantum computing seeks to use the non-local nature of Majorana zero modes to store and manipulate quantum information in a topologically protected way. While noise is anticipated to be significantly suppressed in such systems, finite temperature and system size result in residual errors. In this work, we connect the underlying physical error processes in Majorana-based systems to the noise models used in a fault tolerance analysis. Standard qubit-based noise models built from Pauli operators do not capture leading order noise processes arising from quasiparticle poisoning events, thus it is not obvious a priori that such noise models can be usefully applied to a Majorana-based system. We develop stochastic Majorana noise models that are generalizations of the standard qubit-based models and connect the error probabilities defining these models to parameters of the physical system. Using these models, we compute pseudo-thresholds for the d=5 Bacon-Shor subsystem code. Our results emphasize the importance of correlated errors induced in multi-qubit measurements. Moreover, we find that for sufficiently fast quasiparticle relaxation the errors are well described by Pauli operators. This work bridges the divide between physical errors in Majorana-based quantum computing architectures and the significance of these errors in a quantum error correcting code.
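The point that Pauli-only models miss correlated, measurement-induced events can be made concrete with a generic sampler. The sketch below is not the paper's noise model; the event types and the parameters p_flip and p_corr are illustrative placeholders.

```python
# Generic stochastic noise attached to a multi-qubit measurement: the reported
# outcome may be flipped, and a correlated Pauli error may be left on the
# measured qubits.  Probabilities and event names are illustrative only.
import random

PAULIS = ("X", "Y", "Z")

def noisy_measurement(qubits, p_flip=1e-3, p_corr=1e-4, rng=random):
    """Return (outcome_flipped, residual_errors) for one measurement of `qubits`."""
    outcome_flipped = rng.random() < p_flip
    residual = {}
    if rng.random() < p_corr:
        # Correlated event: every qubit involved in the measurement picks up
        # a (possibly different) Pauli error.
        residual = {q: rng.choice(PAULIS) for q in qubits}
    return outcome_flipped, residual

random.seed(1)
for shot in range(3):
    print(noisy_measurement(("q0", "q1"), p_flip=0.3, p_corr=0.3))
```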


2014 ◽  
Vol 12 (01) ◽  
pp. 1430001 ◽  
Author(s):  
Martin Leslie

We introduce a new type of sparse CSS quantum error correcting code based on the homology of hypermaps. Sparse quantum error correcting codes are of interest in the building of quantum computers due to their ease of implementation and the possibility of developing fast decoders for them. Codes based on the homology of embeddings of graphs, such as Kitaev's toric code, have been discussed widely in the literature, and our class of codes generalizes these. We use embedded hypergraphs, which are a generalization of graphs that can have edges connected to more than two vertices. We develop theorems and examples of our hypermap-homology codes, especially in the case that we choose a special type of basis in our homology chain complex. In particular, the most straightforward generalization of the m × m toric code to hypermap-homology codes gives us a [(3/2)m², 2, m] code, as compared to the toric code, which is a [2m², 2, m] code. Thus we can protect the same amount of quantum information, with the same error-correcting capability, using fewer physical qubits.
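The saving implied by those parameters is easy to make concrete with a few lattice sizes, using only the qubit counts quoted in the abstract.

```python
# Worked comparison of the qubit counts quoted above: (3/2) m^2 physical qubits
# for the hypermap-homology code versus 2 m^2 for the toric code, at the same
# k = 2 logical qubits and distance m, i.e. a 25% saving.
for m in (4, 8, 16, 32):
    toric, hypermap = 2 * m * m, (3 * m * m) // 2
    print(f"m = {m:2d}:  toric {toric:4d} qubits,  hypermap {hypermap:4d} qubits "
          f"({100 * (toric - hypermap) / toric:.0f}% fewer)")
```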


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 564
Author(s):  
Matthew B. Hastings ◽  
Jeongwan Haah

We present a quantum error correcting code with dynamically generated logical qubits. When viewed as a subsystem code, the code has no logical qubits. Nevertheless, our measurement patterns generate logical qubits, allowing the code to act as a fault-tolerant quantum memory. Our particular code gives a model very similar to the two-dimensional toric code, but each measurement is a two-qubit Pauli measurement.


2018 ◽  
Vol 57 (10) ◽  
pp. 3190-3199
Author(s):  
Cheng-Yang Zhang ◽  
Zhi-Hua Guo ◽  
Huai-Xin Cao ◽  
Ling Lu

2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Savvas Varsamopoulos ◽  
Koen Bertels ◽  
Carmen G. Almudever

There has been a rise in decoding quantum error correction codes with neural network–based decoders, due to the good decoding performance achieved and adaptability to any noise model. However, the main challenge is scalability to larger code distances due to an exponential increase of the error syndrome space. Note that successfully decoding the surface code under realistic noise assumptions will limit the size of the code to fewer than 100 qubits with current neural network–based decoders. Such a problem can be tackled by a distributed way of decoding, similar to the renormalization group (RG) decoders. In this paper, we introduce a decoding algorithm that combines the concept of RG decoding and neural network–based decoders. We tested the decoding performance under depolarizing noise with noiseless error syndrome measurements for the rotated surface code and compared against the blossom algorithm and a neural network–based decoder. We show that a similar level of decoding performance can be achieved between all tested decoders while providing a solution to the scalability issues of neural network–based decoders.
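A structural sketch of the distributed idea (not the authors' decoder): tile the syndrome lattice into small patches, let a local decoder, here a stub standing in for the per-patch neural network, resolve what it can, and pass leftover defects up to the next level.

```python
# Structural sketch of RG-style distributed decoding; `local_decode` is a stub.
import numpy as np

def local_decode(patch):
    """Stub local decoder: greedily pairs defects inside the patch and reports
    any leftover defect that must be passed to the next level."""
    defects = list(zip(*np.nonzero(patch)))
    cut = len(defects) // 2 * 2
    return defects[:cut], defects[cut:]

def rg_decode(syndrome, tile=4):
    resolved, unresolved = [], []
    for r in range(0, syndrome.shape[0], tile):
        for c in range(0, syndrome.shape[1], tile):
            done, leftover = local_decode(syndrome[r:r + tile, c:c + tile])
            resolved += [(r + dr, c + dc) for dr, dc in done]
            unresolved += [(r + dr, c + dc) for dr, dc in leftover]
    return resolved, unresolved   # `unresolved` would feed the next RG level

rng = np.random.default_rng(0)
syndrome = (rng.random((8, 8)) < 0.1).astype(int)
resolved, unresolved = rg_decode(syndrome)
print(len(resolved), "defects handled locally;", len(unresolved), "passed up")
```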

