Demonstrating a Continuous Set of Two-qubit Gates for Near-term Quantum Algorithms

2020 ◽  
Vol 125 (12) ◽  
Author(s):  
B. Foxen ◽  
C. Neill ◽  
A. Dunsworth ◽  
P. Roushan ◽  
B. Chiaro ◽  
...  
Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1690
Author(s):  
Teague Tomesh ◽  
Pranav Gokhale ◽  
Eric R. Anschuetz ◽  
Frederic T. Chong

Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data-loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We found data sets where coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well, which is necessary for a quantum advantage over k-means on the entire data set, appears to be challenging.
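
A minimal classical sketch may help make the coreset step concrete: draw a small weighted sample of the data and cluster only that sample. The sampling rule, helper names, and toy data below are illustrative assumptions, not the construction or the QAOA encoding used in the paper, where the clustering of the coreset itself would be delegated to the quantum optimizer.

# Minimal sketch of coreset-based k-means (classical part of the hybrid scheme).
# The sampling rule, helper names, and toy data are illustrative assumptions.
import numpy as np

def lightweight_coreset(X, m, rng):
    """Sample m weighted points; probability ~ squared distance to the data mean."""
    mean = X.mean(axis=0)
    d2 = np.sum((X - mean) ** 2, axis=1)
    q = 0.5 / len(X) + 0.5 * d2 / d2.sum()
    idx = rng.choice(len(X), size=m, replace=False, p=q)
    weights = 1.0 / (m * q[idx])          # importance weights
    return X[idx], weights

def weighted_kmeans(X, w, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(X[mask], axis=0, weights=w[mask])
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
C, w = lightweight_coreset(X, m=20, rng=rng)   # tiny weighted summary of the data
print(weighted_kmeans(C, w, k=2))              # cluster the coreset only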


2020 ◽  
Vol 8 ◽  
Author(s):  
Hai-Ping Cheng ◽  
Erik Deumens ◽  
James K. Freericks ◽  
Chenglong Li ◽  
Beverly A. Sanders

Chemistry is considered one of the more promising scientific applications of near-term quantum computing. Recent work on transitioning classical algorithms to quantum computers has led to great strides in improving quantum algorithms and illustrating their quantum advantage. Because of the limitations of near-term quantum computers, the most effective strategies split the work between classical and quantum computers. Computational chemistry and materials physics have a proven set of methods built on the same idea: splitting a complex physical system into parts that are treated at different levels of theory, in order to obtain solutions for a complete system that no single method could handle by brute force. These methods are variously known as embedding, multi-scale, and fragment techniques. We review these methods and then propose the embedding approach as a way of describing complex biochemical systems, with the parts not only treated at different levels of theory but also computed with hybrid classical and quantum algorithms. Such strategies are critical if one wants to expand the focus to biochemical molecules containing active regions that cannot be properly described with traditional algorithms on classical computers. While we do not solve this problem here, we provide an overview of where the field is headed so that such problems can be tackled in the future.


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 492
Author(s):  
Philippe Suchsland ◽  
Francesco Tacchino ◽  
Mark H. Fischer ◽  
Titus Neupert ◽  
Panagiotis Kl. Barkoutsos ◽  
...  

We present a hardware-agnostic error mitigation algorithm for near-term quantum processors inspired by the classical Lanczos method. This technique can reduce the impact of different sources of noise at the sole cost of an increase in the number of measurements to be performed on the target quantum circuit, without additional experimental overhead. We demonstrate through numerical simulations and experiments on IBM Quantum hardware that the proposed scheme significantly increases the accuracy of cost-function evaluations within the framework of variational quantum algorithms, leading to improved ground-state calculations for quantum chemistry and physics problems beyond state-of-the-art results.
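
A toy numerical illustration of the underlying Lanczos (moment) idea, assuming a small two-qubit Hamiltonian and global depolarizing noise: the noisy moments Tr(ρH^k) define a small generalized eigenvalue problem whose lowest root is never worse than the raw expectation value. The Hamiltonian, noise model, and Krylov order are illustrative assumptions, not the experimental procedure of the paper.

# Sketch of moment-based (Lanczos-inspired) error mitigation on a toy model.
# Hamiltonian, depolarizing-noise model, and Krylov order are illustrative
# assumptions; on hardware the moments <H^k> come from extra measurements.
import numpy as np
from scipy.linalg import eigh

Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I = np.eye(2)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))   # toy 2-qubit Hamiltonian

E, V = np.linalg.eigh(H)
psi = V[:, 0]                                    # ideal ground state
rho = np.outer(psi, psi)
p = 0.2                                          # depolarizing noise strength
rho_noisy = (1 - p) * rho + p * np.eye(4) / 4

def moments(rho, H, order):
    return [np.trace(rho @ np.linalg.matrix_power(H, k)).real for k in range(2 * order)]

def lanczos_mitigated_energy(rho, H, order=2):
    mu = moments(rho, H, order)
    S = np.array([[mu[i + j] for j in range(order)] for i in range(order)])
    T = np.array([[mu[i + j + 1] for j in range(order)] for i in range(order)])
    return eigh(T, S, eigvals_only=True)[0]      # lowest generalized eigenvalue

print("exact     :", E[0])
print("noisy <H> :", np.trace(rho_noisy @ H).real)
print("mitigated :", lanczos_mitigated_energy(rho_noisy, H))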


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 496
Author(s):  
Ulysse Chabaud ◽  
Damian Markham ◽  
Adel Sohbi

We study supervised learning algorithms in which a quantum device is used to perform a computational subroutine, either for prediction via probability estimation or to compute a kernel via estimation of the overlap of quantum states. We design implementations of these quantum subroutines using Boson Sampling architectures in linear optics, supplemented by adaptive measurements. We then challenge these quantum algorithms by deriving classical simulation algorithms for the tasks of output probability estimation and overlap estimation. We obtain different classical simulability regimes for these two computational tasks in terms of the number of adaptive measurements and input photons. In both cases, our results set explicit limits on the range of parameters for which a quantum advantage can be envisaged with adaptive linear optics compared to classical machine learning algorithms: we show that the number of input photons and the number of adaptive measurements cannot be simultaneously small compared to the number of modes. Interestingly, our analysis leaves open the possibility of a near-term quantum advantage with a single adaptive measurement.
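
A minimal sketch of the classical kernel-method side, where the kernel entry is the squared overlap of two feature states. The single-qubit feature map, toy data, and labels are illustrative assumptions standing in for the adaptive linear-optics circuits studied in the paper.

# Sketch of a kernel classifier whose Gram matrix is built from state overlaps
# |<phi(x)|phi(y)>|^2.  Feature map and data are illustrative assumptions; in the
# paper the overlaps would be estimated with adaptive linear-optics circuits.
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy feature map: encode a scalar as a single-qubit state."""
    return np.array([np.cos(x / 2), np.exp(1j * x) * np.sin(x / 2)])

def overlap_kernel(A, B):
    SA = [feature_state(a) for a in A]
    SB = [feature_state(b) for b in B]
    return np.array([[abs(np.vdot(sa, sb)) ** 2 for sb in SB] for sa in SA])

rng = np.random.default_rng(0)
x_train = rng.uniform(0, np.pi, 40)
y_train = (x_train > np.pi / 2).astype(int)     # simple synthetic labels
x_test = rng.uniform(0, np.pi, 10)

K_train = overlap_kernel(x_train, x_train)      # Gram matrix from overlaps
K_test = overlap_kernel(x_test, x_train)        # rows: test points, cols: train points

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))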


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 156 ◽  
Author(s):  
Oscar Higgott ◽  
Daochen Wang ◽  
Stephen Brierley

The calculation of excited-state energies of electronic structure Hamiltonians has many important applications, such as the calculation of optical spectra and reaction rates. While low-depth quantum algorithms, such as the variational quantum eigensolver (VQE), have been used to determine ground-state energies, methods for calculating excited states currently involve the implementation of high-depth controlled unitaries or a large number of additional samples. Here we show how overlap estimation can be used to deflate eigenstates once they are found, enabling the calculation of excited-state energies and their degeneracies. We propose an implementation that requires the same number of qubits as VQE and at most twice the circuit depth. Our method is robust to control errors, is compatible with error-mitigation strategies, and can be implemented on near-term quantum computers.
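
A small statevector sketch of the deflation idea: once an approximate ground state is found, the next optimization minimizes the energy plus an overlap penalty beta*|<psi(theta)|psi_0>|^2. The toy Hamiltonian, the generic normalized-vector ansatz, and the penalty weight are illustrative assumptions, not the circuits proposed in the paper.

# Statevector sketch of overlap-based deflation for excited states.
# Hamiltonian, ansatz, and penalty weight are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I = np.eye(2)
H = np.kron(Z, Z) + 0.3 * np.kron(X, I) + 0.3 * np.kron(I, X)

def state(params):
    v = np.asarray(params, dtype=float)
    return v / np.linalg.norm(v)                 # generic real normalized ansatz

def energy(params):
    psi = state(params)
    return psi @ H @ psi

def deflated_energy(params, lower_states, beta=5.0):
    psi = state(params)
    penalty = sum(abs(psi @ phi) ** 2 for phi in lower_states)
    return psi @ H @ psi + beta * penalty        # push away from found eigenstates

rng = np.random.default_rng(0)
res0 = minimize(energy, rng.normal(size=4), method="Nelder-Mead")
psi0 = state(res0.x)                             # approximate ground state

res1 = minimize(deflated_energy, rng.normal(size=4),
                args=([psi0],), method="Nelder-Mead")
print("E0 ~", energy(res0.x), " E1 ~", energy(res1.x))
print("exact:", np.linalg.eigvalsh(H)[:2])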


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Max Wilson ◽  
Rachel Stromswold ◽  
Filip Wudarski ◽  
Stuart Hadfield ◽  
Norm M. Tubman ◽  
...  

Variational quantum algorithms, a class of quantum heuristics, are promising candidates for the demonstration of useful quantum computation. Finding the best way to amplify the performance of these methods on hardware is an important task. Here, we evaluate the optimization of quantum heuristics with an existing class of techniques called "meta-learners." We compare the performance of a meta-learner to evolutionary strategies, L-BFGS-B, and Nelder-Mead approaches for two quantum heuristics (the quantum alternating operator ansatz and the variational quantum eigensolver), on three problems, in three simulation environments. We show that the meta-learner comes near to the global optima more frequently than all other optimizers we tested in a noisy parameter-setting environment. We also find that the meta-learner is generally more resistant to noise, for example seeing a smaller reduction in performance in the Noisy and Sampling environments, and that it performs better on average by a "gain" metric than its closest comparable competitor, L-BFGS-B. Finally, we present evidence indicating that a meta-learner trained on small problems will generalize to larger problems. These results are an important indication that meta-learning and associated machine learning methods will be integral to the useful application of noisy near-term quantum computers.
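
A minimal sketch of the kind of comparison described, restricted to two of the classical baselines on a toy single-parameter VQE-style cost evaluated with simulated shot noise. The cost function, shot count, and starting point are illustrative assumptions, and the meta-learner itself is not reproduced here.

# Sketch comparing two classical baseline optimizers on a toy VQE-style cost
# with simulated sampling ("shot") noise.  Cost, shot count, and start point
# are illustrative assumptions; the meta-learner is not reproduced.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
SHOTS = 200

def noisy_cost(theta):
    """<Z> of a single qubit after Ry(theta)|0>, estimated from SHOTS samples."""
    p1 = np.sin(theta[0] / 2) ** 2               # probability of measuring 1
    counts = rng.binomial(SHOTS, p1)
    return 1.0 - 2.0 * counts / SHOTS            # sampled estimate of <Z>

x0 = np.array([0.3])
for method in ("L-BFGS-B", "Nelder-Mead"):
    res = minimize(noisy_cost, x0, method=method)
    print(f"{method:12s} theta = {res.x[0]: .3f}  cost = {res.fun: .3f}")
# The exact minimum is theta = pi with <Z> = -1; with few shots the
# finite-difference gradients used by L-BFGS-B typically suffer more
# from the sampling noise than the gradient-free Nelder-Mead search.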


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 539
Author(s):  
Johannes Jakob Meyer

The recent advent of noisy intermediate-scale quantum devices, especially near-term quantum computers, has sparked extensive research efforts concerned with their possible applications. At the forefront of the considered approaches are variational methods that use parametrized quantum circuits. The classical and quantum Fisher information are firmly rooted in the field of quantum sensing and have proven to be versatile tools to study such parametrized quantum systems. Their utility in the study of other applications of noisy intermediate-scale quantum devices, however, has only been discovered recently. Hoping to stimulate more such applications, this article aims to further popularize classical and quantum Fisher information as useful tools for near-term applications beyond quantum sensing. We start with a tutorial that builds an intuitive understanding of classical and quantum Fisher information and outlines how both quantities can be calculated on near-term devices. We also elucidate their relationship and how they are influenced by noise processes. Next, we give an overview of the core results of the quantum sensing literature and proceed to a comprehensive review of recent applications in variational quantum algorithms and quantum machine learning.
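
A short numerical sketch of both quantities for a single parametrized qubit, assuming pure states and finite-difference derivatives: the classical Fisher information of the measurement-outcome distribution, and the quantum Fisher information 4(<dψ|dψ> - |<ψ|dψ>|^2). The Ry-rotation example is an illustration, not a circuit from the article.

# Numerical sketch of classical vs. quantum Fisher information for one
# parametrized qubit, via finite differences.  The Ry example is an
# illustrative assumption, not a circuit from the article.
import numpy as np

def psi(theta):
    """|psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def probs(theta):
    return np.abs(psi(theta)) ** 2               # computational-basis distribution

def classical_fisher(theta, eps=1e-6):
    p = probs(theta)
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)
    return np.sum(dp ** 2 / p)

def quantum_fisher(theta, eps=1e-6):
    dpsi = (psi(theta + eps) - psi(theta - eps)) / (2 * eps)
    return 4 * (np.vdot(dpsi, dpsi) - np.abs(np.vdot(psi(theta), dpsi)) ** 2).real

theta = 0.7
print("CFI:", classical_fisher(theta))   # equals 1 for this measurement
print("QFI:", quantum_fisher(theta))     # equals 1 for a pure Ry rotation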


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 257 ◽  
Author(s):  
Filip B. Maciejewski ◽  
Zoltán Zimborás ◽  
Michał Oszmaniec

We propose a simple scheme to reduce readout errors in experiments on quantum systems with a finite number of measurement outcomes. Our method relies on classical post-processing preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM's and Rigetti's quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme to IBM's 5-qubit device, we observe a significant improvement in the results of a number of single- and two-qubit tasks, including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover's search and the Bernstein-Vazirani algorithm). Finally, we present results showing improved implementation of certain probability distributions in the case of five qubits.
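
A single-qubit numpy sketch of the classical post-processing step, assuming the detector characterization is well described by a stochastic assignment (confusion) matrix whose inverse is applied to the measured statistics. The error rates and target distribution are illustrative assumptions.

# Sketch of readout-error correction by inverting a classical assignment
# (confusion) matrix for one qubit.  The error rates are illustrative
# assumptions; in the paper the matrix comes from Quantum Detector Tomography.
import numpy as np

p01, p10 = 0.03, 0.08                   # assumed assignment-error probabilities
A = np.array([[1 - p01, p10],
              [p01,     1 - p10]])      # columns: prepared state, rows: observed outcome

true_probs = np.array([0.7, 0.3])       # ideal outcome distribution of some circuit
observed = A @ true_probs               # what the noisy readout reports

corrected = np.linalg.solve(A, observed)   # apply A^{-1} to the statistics
corrected = np.clip(corrected, 0, None)
corrected /= corrected.sum()               # re-project onto valid probabilities

print("observed :", observed)
print("corrected:", corrected)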


Science ◽  
2018 ◽  
Vol 362 (6412) ◽  
pp. 308-311 ◽  
Author(s):  
Sergey Bravyi ◽  
David Gosset ◽  
Robert König

Quantum effects can enhance information-processing capabilities and speed up the solution of certain computational problems. Whether a quantum advantage can be rigorously proven in some setting or demonstrated experimentally using near-term devices is the subject of active debate. We show that parallel quantum algorithms running in a constant time period are strictly more powerful than their classical counterparts; they are provably better at solving certain linear algebra problems associated with binary quadratic forms. Our work gives an unconditional proof of a computational quantum advantage and simultaneously pinpoints its origin: It is a consequence of quantum nonlocality. The proposed quantum algorithm is a suitable candidate for near-future experimental realizations, as it requires only constant-depth quantum circuits with nearest-neighbor gates on a two-dimensional grid of qubits (quantum bits).

