Algorithmic Error Mitigation Scheme for Current Quantum Processors

Quantum ◽ 2021 ◽ Vol 5 ◽ pp. 492
Author(s): Philippe Suchsland, Francesco Tacchino, Mark H. Fischer, Titus Neupert, Panagiotis Kl. Barkoutsos, ...

We present a hardware-agnostic error mitigation algorithm for near-term quantum processors inspired by the classical Lanczos method. This technique can reduce the impact of different sources of noise at the sole cost of an increase in the number of measurements to be performed on the target quantum circuit, without additional experimental overhead. We demonstrate through numerical simulations and experiments on IBM Quantum hardware that the proposed scheme significantly increases the accuracy of cost function evaluations within the framework of variational quantum algorithms, thus leading to improved ground-state calculations for quantum chemistry and physics problems beyond state-of-the-art results.
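
To make the Lanczos connection concrete, here is a minimal numpy/scipy sketch of moment-based mitigation: noisy moments m_k = ⟨H^k⟩ define a small generalized eigenvalue problem whose lowest eigenvalue refines the bare energy estimate. The toy depolarizing-noise model and all variable names are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
dim, order, p = 8, 3, 0.05                      # Hilbert-space dim, Krylov order, noise rate
H = rng.normal(size=(dim, dim)); H = (H + H.T) / 2
evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0] + 0.1 * rng.normal(size=dim)  # stand-in for an optimized VQE state
psi /= np.linalg.norm(psi)

def noisy_moment(k):                            # Tr(rho H^k) for a depolarized state rho
    Hk = np.linalg.matrix_power(H, k)
    return (1 - p) * (psi @ Hk @ psi) + p * np.trace(Hk) / dim

m = [noisy_moment(k) for k in range(2 * order)]
T = np.array([[m[i + j + 1] for j in range(order)] for i in range(order)])
S = np.array([[m[i + j] for j in range(order)] for i in range(order)])

# Lowest eigenvalue of the generalized problem T v = E S v is the mitigated estimate.
E = np.real(eig(T, S)[0])
print("mitigated:", E.min(), "| bare noisy:", m[1], "| exact:", evals[0])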

Quantum ◽ 2021 ◽ Vol 5 ◽ pp. 592
Author(s): Piotr Czarnik, Andrew Arrasmith, Patrick J. Coles, Lukasz Cincio

Achieving near-term quantum advantage will require accurate estimation of quantum observables despite significant hardware noise. For this purpose, we propose a novel, scalable error-mitigation method that applies to gate-based quantum computers. The method generates training data {X_i^noisy, X_i^exact} via quantum circuits composed largely of Clifford gates, which can be efficiently simulated classically, where X_i^noisy and X_i^exact are noisy and noiseless observables, respectively. Fitting a linear ansatz to this data then allows for the prediction of noise-free observables for arbitrary circuits. We analyze the performance of our method versus the number of qubits, circuit depth, and number of non-Clifford gates. We obtain an order-of-magnitude error reduction for a ground-state energy problem on 16 qubits on an IBMQ quantum computer and on a 64-qubit noisy simulator.
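
A minimal sketch of the regression step, with synthetic numbers standing in for the near-Clifford training circuits (the linear noise model and all values below are illustrative assumptions, not the authors' implementation):

import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins: exact values from classical (Clifford) simulation and
# noisy values from hardware, related here by an assumed linear noise model.
x_exact = rng.uniform(-1, 1, size=50)
x_noisy = 0.7 * x_exact - 0.05 + rng.normal(0, 0.01, size=50)

# Fit the linear ansatz  X^exact ≈ a * X^noisy + b  by least squares.
A = np.column_stack([x_noisy, np.ones_like(x_noisy)])
a, b = np.linalg.lstsq(A, x_exact, rcond=None)[0]

x_new = 0.7 * 0.42 - 0.05                       # noisy observable from a circuit of interest
print("mitigated prediction:", a * x_new + b)   # recovers ~0.42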


Quantum ◽ 2021 ◽ Vol 5 ◽ pp. 380
Author(s): Kianna Wan

We present a simple but general framework for constructing quantum circuits that implement the multiply-controlled unitary Select(H) := ∑_ℓ |ℓ⟩⟨ℓ| ⊗ H_ℓ, where H = ∑_ℓ H_ℓ is the Jordan-Wigner transform of an arbitrary second-quantised fermionic Hamiltonian. Select(H) is one of the main subroutines of several quantum algorithms, including state-of-the-art techniques for Hamiltonian simulation. If each term in the second-quantised Hamiltonian involves at most k spin-orbitals and k is a constant independent of the total number of spin-orbitals n (as is the case for the majority of quantum chemistry and condensed matter models considered in the literature, for which k is typically 2 or 4), our implementation of Select(H) requires no ancilla qubits and uses O(n) Clifford+T gates, with the Clifford gates applied in O(log² n) layers and the T gates in O(log n) layers. This achieves an exponential improvement in both Clifford- and T-depth over previous work, while maintaining linear gate count and reducing the number of ancillae to zero.
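
The defining equation can be checked directly for small instances; the sketch below builds Select(H) as an explicit block-diagonal matrix for a handful of (unitary) Pauli-string terms. It illustrates only the definition, not the paper's circuit construction:

import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
terms = [np.kron(X, X), np.kron(Z, I), np.kron(I, Z), np.kron(Z, Z)]  # the H_l, all unitary

L, d = len(terms), terms[0].shape[0]
select = np.zeros((L * d, L * d), dtype=complex)
for l, H_l in enumerate(terms):
    ket_l = np.zeros((L, 1)); ket_l[l] = 1
    select += np.kron(ket_l @ ket_l.T, H_l)         # |l><l| (x) H_l

# Select(H) is unitary exactly when every H_l is (true for Pauli strings).
assert np.allclose(select.conj().T @ select, np.eye(L * d))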


Quantum ◽ 2019 ◽ Vol 3 ◽ pp. 156
Author(s): Oscar Higgott, Daochen Wang, Stephen Brierley

The calculation of excited state energies of electronic structure Hamiltonians has many important applications, such as the calculation of optical spectra and reaction rates. While low-depth quantum algorithms, such as the variational quantum eigensolver (VQE), have been used to determine ground state energies, methods for calculating excited states currently involve the implementation of high-depth controlled unitaries or a large number of additional samples. Here we show how overlap estimation can be used to deflate eigenstates once they are found, enabling the calculation of excited state energies and their degeneracies. We propose an implementation that requires the same number of qubits as VQE and at most twice the circuit depth. Our method is robust to control errors, is compatible with error-mitigation strategies and can be implemented on near-term quantum computers.
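
A schematic of the deflation idea, with overlaps computed exactly in numpy rather than estimated on hardware, and a crude random search standing in for the variational optimizer (the penalty choice and all names are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4)); H = (H + H.T) / 2
evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]                                  # assume VQE already found the ground state
beta = 2 * (evals[-1] - evals[0])                   # penalty exceeding the spectral range

def deflated_energy(psi):
    # <H> plus a penalty on overlap with the already-found state, so the
    # minimum of this cost sits at the first excited state.
    psi = psi / np.linalg.norm(psi)
    return psi @ H @ psi + beta * abs(psi @ psi0) ** 2

best = min((rng.normal(size=4) for _ in range(20000)), key=deflated_energy)
print("estimate:", deflated_energy(best), "| exact first excited:", evals[1])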


2015 ◽ Vol 8 (3) ◽ pp. 2807-2845
Author(s): T. Sauter, F. Obleitner

Abstract. State-of-the-art numerical snow models essentially rely on observational data for initialization, forcing, parametrization and validation. Such data are available in increasing amounts, but the inherent propagation of the related uncertainties into the simulation results has received rather limited attention so far. Depending on model complexity, even small errors can have a profound effect on simulations, which dilutes our confidence in the results. This paper quantifies the fractional contributions of some archetypal measurement uncertainties to key simulation results in a high-Arctic environment. The contribution of individual factors to the model variance, either alone or through interaction, is decomposed using Global Sensitivity Analysis. The work focuses on the temporal evolution of the fractional contributions of the different sources to the model uncertainty, which provides a more detailed understanding of the model's sensitivity pattern. The decompositions demonstrate that the impact of measurement errors on the calculated snow depth and the surface energy balance components varies significantly throughout the year. Some factors show episodically strong impacts even though their overall mean contribution is low, while others affect the results constantly. These results cannot yet be generalized, however, which imposes the need to investigate the issue further in, e.g., other glaciological and meteorological settings.
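
As a sketch of the variance decomposition involved, the snippet below estimates first-order Sobol indices with a standard pick-and-freeze Monte Carlo estimator, on a toy two-input model standing in for the snow simulation (the model and ranges are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(3)

def model(x):                                       # toy stand-in for the snow model
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2

N = 100_000
A = rng.uniform(-np.pi, np.pi, size=(N, 2))         # inputs drawn from their error ranges
B = rng.uniform(-np.pi, np.pi, size=(N, 2))
fA, fB = model(A), model(B)
var = np.concatenate([fA, fB]).var()

for i in range(2):
    ABi = A.copy(); ABi[:, i] = B[:, i]             # resample only input i
    S_i = np.mean(fB * (model(ABi) - fA)) / var     # Saltelli-style first-order index
    print(f"first-order Sobol index of input {i}: {S_i:.3f}")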


2019 ◽ Vol 5 (1)
Author(s): Alexander J. McCaskey, Zachary P. Parks, Jacek Jakowski, Shirley V. Moore, Titus D. Morris, ...

Abstract. We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active-space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark using 4 of the available qubits on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with the accuracy of the computed ground-state energy serving as the primary benchmark metric. We further parameterize this benchmark suite by trial circuit type, level of symmetry reduction, and error-mitigation strategy. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post-processing with specific error-mitigation techniques. In particular, the adaptation of McWeeny purification to noisy density matrices dramatically improves the accuracy of quantum computations, which, along with an adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings and a selected range of problems, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
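
The purification step itself is compact; a minimal numpy sketch, assuming a noisy, ideally-idempotent density matrix as input (the toy matrix and noise level are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(4)
P_exact = np.diag([1.0, 1.0, 0.0, 0.0])   # 2 electrons in 4 orbitals (idempotent)
noise = rng.normal(0, 0.05, size=(4, 4))
P = P_exact + (noise + noise.T) / 2       # noisy measured density matrix

for _ in range(10):
    P = 3 * (P @ P) - 2 * (P @ P @ P)     # McWeeny iteration: drives P^2 -> P

print("idempotency error:", np.linalg.norm(P @ P - P))
print("trace (electron count):", np.trace(P).round(3))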


Quantum ◽ 2020 ◽ Vol 4 ◽ pp. 257
Author(s): Filip B. Maciejewski, Zoltán Zimborás, Michał Oszmaniec

We propose a simple scheme to reduce readout errors in experiments on quantum systems with a finite number of measurement outcomes. Our method relies on classical post-processing preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM's and Rigetti's quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme to IBM's 5-qubit device, we observe a significant improvement in the results of a number of single- and two-qubit tasks, including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover's search and the Bernstein-Vazirani algorithm). Finally, we present results showing improvement in the implementation of certain probability distributions in the case of five qubits.
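
A minimal sketch of the classical correction step for one qubit, assuming detector tomography has already produced the stochastic assignment matrix A (the numbers below are illustrative assumptions):

import numpy as np

A = np.array([[0.95, 0.08],               # P(report 0 | true 0), P(report 0 | true 1)
              [0.05, 0.92]])              # P(report 1 | true 0), P(report 1 | true 1)
p_noisy = np.array([0.62, 0.38])          # observed outcome frequencies

p_mitigated = np.linalg.solve(A, p_noisy) # invert the classical noise channel
p_mitigated = np.clip(p_mitigated, 0, None)  # finite statistics can push entries
p_mitigated /= p_mitigated.sum()             # slightly negative; re-normalize
print(p_mitigated)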


Quantum ◽ 2019 ◽ Vol 3 ◽ pp. 170
Author(s): Hammam Qassim, Joel J. Wallman, Joseph Emerson

Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor ℓ/m, where ℓ and m are, respectively, the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of non-Clifford gates, which can further reduce the runtime in certain cases by reducing the 1-norm of the expansion vector, ‖a‖₁. It may additionally lead to a framework for optimizing circuit implementations over a gate set, reducing the overhead for state injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification, and describe how to use it to estimate circuit amplitudes in a way that can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices, provided that the target probability is not exponentially small.
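
The core Monte Carlo step can be sketched generically: approximate a long linear combination by sampling terms in proportion to |a_i|/‖a‖₁. Generic random vectors stand in for stabilizer states here (an illustrative simplification); the sampling logic is the same:

import numpy as np

rng = np.random.default_rng(5)
n_terms, dim, k = 200, 64, 30
phis = rng.normal(size=(n_terms, dim))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)   # the |phi_i> (generic stand-ins)
a = rng.normal(size=n_terms) * np.exp(-np.arange(n_terms) / 30.0)
psi = a @ phis                                        # exact linear combination

p = np.abs(a) / np.abs(a).sum()                       # sample term i with prob |a_i|/||a||_1
idx = rng.choice(n_terms, size=k, p=p)
# Each sample contributes sign(a_i) * ||a||_1 / k * |phi_i>: an unbiased estimator of psi.
psi_sparse = (np.sign(a[idx]) * np.abs(a).sum() / k) @ phis[idx]

print("relative error with", k, "of", n_terms, "terms:",
      np.linalg.norm(psi_sparse - psi) / np.linalg.norm(psi))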


2021 ◽ Vol 7 (1)
Author(s): William J. Huggins, Jarrod R. McClean, Nicholas C. Rubin, Zhang Jiang, Nathan Wiebe, ...

Abstract. Variational algorithms are a promising paradigm for utilizing near-term quantum devices to model the electronic states of molecular systems. However, previous bounds on the required measurement time have suggested that applying these techniques to larger molecules might be infeasible. We present a measurement strategy based on a low-rank factorization of the two-electron integral tensor. Our approach provides a cubic reduction in term groupings over the prior state of the art and enables measurement times three orders of magnitude smaller than those suggested by commonly referenced bounds for the largest systems we consider. Although our technique requires execution of a linear-depth circuit prior to measurement, this is compensated for by eliminating challenges associated with sampling nonlocal Jordan–Wigner-transformed operators in the presence of measurement error, while enabling a powerful form of error mitigation based on efficient postselection. We numerically characterize these benefits with noisy quantum circuit simulations for ground-state energies of strongly correlated electronic systems.
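
A minimal sketch of the factorization step, assuming a reshaped two-electron tensor dominated by a few factors (the toy construction below is an illustrative assumption; real tensors get this structure from molecular integrals):

import numpy as np

rng = np.random.default_rng(6)
n = 4                                          # toy number of orbitals
rank = 3                                       # dominant factors in the toy tensor
Ls = rng.normal(size=(rank, n * n))
# "Two-electron tensor", reshaped to an n^2 x n^2 matrix and made symmetric.
M = sum(np.outer(L, L) for L in Ls) + 1e-3 * rng.normal(size=(n * n, n * n))
M = (M + M.T) / 2

w, v = np.linalg.eigh(M)
keep = np.abs(w) > 1e-2 * np.abs(w).max()      # truncate small eigenvalues
M_lr = (v[:, keep] * w[keep]) @ v[:, keep].T   # low-rank reconstruction
print(f"kept {keep.sum()} of {len(w)} factors,",
      "relative error:", np.linalg.norm(M_lr - M) / np.linalg.norm(M))

Each retained factor corresponds to one group of terms that can be measured together after an efficient basis rotation, which is where the reduction in term groupings comes from.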


2020 ◽ Vol 20 (9&10) ◽ pp. 787-806
Author(s): Steven Herbert

This paper addresses the problem of finding the depth overhead that will be incurred when running quantum circuits on near-term quantum computers. Specifically, it is envisaged that near-term quantum computers will have low qubit connectivity: each qubit will only be able to interact with a subset of the other qubits, a reality typically represented by a qubit interaction graph in which a vertex represents a qubit and an edge represents a possible direct two-qubit interaction (gate). A depth overhead is thus unavoidably incurred by introducing swap gates into the quantum circuit to enable general qubit interactions. This paper proves that there exist n-qubit quantum circuits for which a depth overhead of Ω(log n) must necessarily be incurred when running them on quantum computers whose qubit interaction graph has finite degree, but also that such a logarithmic depth overhead is achievable. The latter is shown by the construction of a 4-regular qubit interaction graph and an associated compilation algorithm that can execute any quantum circuit with only a logarithmic depth overhead.
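
The source of the overhead is easy to sketch on the simplest interaction graph, a path: a two-qubit gate between distant qubits must first be enabled by swaps. The toy cost function below is illustrative only; the paper's actual construction is a 4-regular graph with a dedicated compiler.

def line_gate_depth(i: int, j: int) -> int:
    """Depth cost of one 2-qubit gate between positions i and j on a line
    interaction graph: |i - j| - 1 serial swap layers to bring the qubits
    adjacent, plus one layer for the gate itself."""
    return abs(i - j)

print(line_gate_depth(0, 5))   # 4 swap layers + 1 gate layer = 5

On a path the worst-case gate thus needs Θ(n) extra depth; the paper's point is that a well-chosen constant-degree graph plus compiler brings the worst case down to Θ(log n), and that no finite-degree graph can do asymptotically better.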


2021 ◽ Vol 12 (1)
Author(s): Laura Clinton, Johannes Bausch, Toby Cubitt

Abstract. The quantum circuit model is the de facto way of designing quantum algorithms. Yet any level of abstraction away from the underlying hardware incurs overhead. In this work, we develop quantum algorithms for Hamiltonian simulation "one level below" the circuit model, exploiting the underlying control over qubit interactions available in most quantum hardware and deriving analytic circuit identities for synthesising multi-qubit evolutions from two-qubit interactions. We then analyse the impact of these techniques under the standard error model, where errors occur per gate, and under an error model with a constant error rate per unit time. To quantify the benefits of this approach, we apply it to time-dynamics simulation of the 2D spin Fermi-Hubbard model. Combined with new error bounds for Trotter product formulas tailored to the non-asymptotic regime and an analysis of error propagation, we find that, e.g., for a 5 × 5 Fermi-Hubbard lattice we reduce the circuit depth from 1,243,586, using the best previous fermion encoding and error bounds in the literature, to 3,209 in the per-gate error model, or to a circuit-depth equivalent of 259 in the per-time error model. This brings Hamiltonian simulation, previously beyond the reach of current hardware for non-trivial examples, significantly closer to being feasible in the NISQ era.
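
The Trotter product formulas underlying these depth counts can be sketched on a toy two-term Hamiltonian; the first-order formula's error falls off as 1/r with the number of steps r (the matrices and parameters below are illustrative assumptions, not the paper's tailored bounds):

import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
A = np.kron(X, X)                                   # "hopping"-like term
B = np.kron(Z, np.eye(2)) + np.kron(np.eye(2), Z)   # "interaction"-like term
t = 1.0
exact = expm(-1j * (A + B) * t)

for r in (1, 4, 16, 64):
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    err = np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)
    print(f"r = {r:3d}, first-order Trotter error = {err:.2e}")

Tighter, non-asymptotic error bounds let one pick a smaller r for a target accuracy, which translates directly into the shallower circuits the paper reports.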

