Variational quantum Boltzmann machines

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Christa Zoufal ◽  
Aurélien Lucchi ◽  
Stefan Woerner

Abstract: This work presents a novel realization approach to quantum Boltzmann machines (QBMs). The preparation of the required Gibbs states, as well as the evaluation of the loss function’s analytic gradient, is based on variational quantum imaginary time evolution, a technique typically used for ground-state computation. In contrast to existing methods, this implementation enables near-term compatible QBM training with gradients of the actual loss function for arbitrary parameterized Hamiltonians, which need not be fully visible but may also include hidden units. The variational Gibbs state approximation is demonstrated with numerical simulations and experiments run on real quantum hardware provided by IBM Quantum. Furthermore, we illustrate the application of this variational QBM approach to generative and discriminative learning tasks using numerical simulation.
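As a classical illustration of the core idea, the sketch below prepares the Gibbs state ρ = e^(−βH)/Z of a toy two-qubit Hamiltonian by propagating the maximally mixed state in imaginary time. This is a minimal numpy simulation under assumed values (β, the toy Hamiltonian), not the paper's variational circuit implementation, which approximates the same evolution on quantum hardware.

```python
import numpy as np

# Toy 2-qubit transverse-field Ising Hamiltonian (assumed model)
beta = 1.0  # inverse temperature (assumed value)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
I = np.eye(2)
H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I) + np.kron(I, X))

# Exact Gibbs state, for reference
w, v = np.linalg.eigh(H)
rho_exact = v @ np.diag(np.exp(-beta * w)) @ v.T
rho_exact /= np.trace(rho_exact)

# Imaginary-time propagation of the maximally mixed state:
# rho(tau) ∝ e^{-tau H/2} rho(0) e^{-tau H/2}, integrated to tau = beta
n_steps = 1000
dtau = beta / n_steps
rho = np.eye(4) / 4.0
step = v @ np.diag(np.exp(-0.5 * dtau * w)) @ v.T  # e^{-dtau H/2}
for _ in range(n_steps):
    rho = step @ rho @ step     # symmetric update keeps rho Hermitian
    rho /= np.trace(rho)        # renormalization plays the role of Z

print(np.max(np.abs(rho - rho_exact)))  # small: states agree
```

On hardware, the density matrix cannot be stored or exponentiated directly; the paper's contribution is to track this imaginary-time trajectory with a parameterized circuit instead.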

2007 ◽  
Vol 85 (4) ◽  
pp. 393-399
Author(s):  
V S Kulhar

Cross sections for antihydrogen formation in the ground state for the process p̄ + Ps(nlm) → H̄(1s) + e− have been calculated using charge conjugation and time-reversal invariance. The calculations are based on a two-state approximation method, used by the author earlier for the positron–hydrogen charge-exchange process e+ + H → Ps(nlm) + p. Cross-section results are reported in the intermediate- and high-energy region (20–500 keV). PACS No.: 36.10.Dr


1978 ◽  
Vol 56 (5) ◽  
pp. 565-570 ◽  
Author(s):  
V. S. Kulhar ◽  
C. S. Shastry

The two-state approximation method for the study of rearrangement collisions is applied to positronium formation in excited states in positron–hydrogen charge-exchange collisions. Differential and integrated cross sections are computed for positronium formation in the 2S, 2P, and 3S excited states. The results obtained in the energy region 2 to 10 Ry are compared with positronium formation cross sections in the ground state. Total positronium formation cross sections, including the contributions of capture into all higher excited states of positronium, are also computed in the first Born approximation and the two-state approximation in the energy region considered.


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 492
Author(s):  
Philippe Suchsland ◽  
Francesco Tacchino ◽  
Mark H. Fischer ◽  
Titus Neupert ◽  
Panagiotis Kl. Barkoutsos ◽  
...  

We present a hardware-agnostic error mitigation algorithm for near-term quantum processors inspired by the classical Lanczos method. This technique can reduce the impact of different noise sources at the sole cost of an increase in the number of measurements performed on the target quantum circuit, without additional experimental overhead. We demonstrate through numerical simulations and experiments on IBM Quantum hardware that the proposed scheme significantly increases the accuracy of cost function evaluations within the framework of variational quantum algorithms, leading to improved ground-state calculations for quantum chemistry and physics problems beyond state-of-the-art results.
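A minimal numpy sketch of the Lanczos-inspired idea, under assumptions (a toy Hamiltonian and a global depolarizing noise model, both illustrative, not from the paper): from estimates of the moments m_k = Tr(ρHᵏ), which cost extra measurements but no extra circuitry, a small generalized eigenvalue problem yields a ground-state energy estimate that improves on the raw expectation value m₁.

```python
import numpy as np

# Toy Hamiltonian (assumed): 2-site transverse-field Ising model
Zp = np.array([[1, 0], [0, -1]], dtype=float)
Xp = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)
H = -np.kron(Zp, Zp) - 0.7 * (np.kron(Xp, I2) + np.kron(I2, Xp))
w, v = np.linalg.eigh(H)

# Assumed noise model: global depolarization of the true ground state
p = 0.3
rho = (1 - p) * np.outer(v[:, 0], v[:, 0]) + p * np.eye(4) / 4

# Moments m_k = Tr(rho H^k), k = 0..3 (on hardware: extra measurements)
m = [np.trace(rho @ np.linalg.matrix_power(H, k)).real for k in range(4)]

# Order-2 Krylov pencil: S[i,j] = m_{i+j}, T[i,j] = m_{i+j+1};
# the roots are the nodes of a 2-point quadrature for the spectrum
S = np.array([[m[0], m[1]], [m[1], m[2]]])
T = np.array([[m[1], m[2]], [m[2], m[3]]])
roots = np.sort(np.linalg.eigvals(np.linalg.solve(S, T)).real)

e_raw = m[1]            # plain noisy expectation value
e_mitigated = roots[0]  # mitigated estimate, closer to the true w[0]
print(w[0], e_mitigated, e_raw)
```

The lowest root always lies between the true ground energy and the noisy expectation value, so the mitigated estimate can only improve on the raw one in this setting.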


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 314 ◽  
Author(s):  
Ryan Sweke ◽  
Frederik Wilde ◽  
Johannes Jakob Meyer ◽  
Maria Schuld ◽  
Paul K. Fährmann ◽  
...  

Within the context of hybrid quantum-classical optimization, gradient-descent-based optimizers typically require the evaluation of expectation values with respect to the outcome of parameterized quantum circuits. In this work, we explore the consequences of the prior observation that estimating these quantities on quantum hardware results in a form of stochastic gradient descent optimization. We formalize this notion, which allows us to show that in many relevant cases, including VQE, QAOA, and certain quantum classifiers, estimating expectation values with k measurement outcomes results in optimization algorithms whose convergence properties can be rigorously understood, for any value of k; in fact, even single measurement outcomes suffice for the estimation of expectation values. Moreover, in many settings the required gradients can be expressed as linear combinations of expectation values (originating, e.g., from a sum over local terms of a Hamiltonian, a parameter-shift rule, or a sum over data-set instances), and we show that in these cases k-shot expectation value estimation can be combined with sampling over the terms of the linear combination to obtain "doubly stochastic" gradient descent optimizers. For all algorithms we prove convergence guarantees, providing a framework for the derivation of rigorous optimization results in the context of near-term quantum devices. Additionally, we numerically explore these methods on benchmark VQE, QAOA, and quantum-enhanced machine learning tasks and show that treating the stochastic settings as hyperparameters allows for state-of-the-art results with significantly fewer circuit executions and measurements.
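The "doubly stochastic" recipe can be illustrated on a toy single-qubit cost C(θ) = c₁⟨Z⟩ + c₂⟨X⟩ with ⟨Z⟩ = cos θ and ⟨X⟩ = sin θ for the state R_y(θ)|0⟩. The sketch below (illustrative assumptions throughout: the model, coefficients, and learning-rate schedule are not from the paper) samples one Hamiltonian term per step and uses a single ±1 shot per parameter-shift evaluation, yet still drives the cost toward its minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([1.0, 0.5])          # term coefficients (assumed)
L = np.abs(c).sum()

def expval(theta, j):
    # exact expectation of term j: j=0 -> <Z>, j=1 -> <X>
    return np.cos(theta) if j == 0 else np.sin(theta)

def shot(theta, j):
    # one projective +/-1 measurement outcome of term j (k = 1 shot)
    return 1.0 if rng.random() < (1 + expval(theta, j)) / 2 else -1.0

def cost(theta):
    return c[0] * np.cos(theta) + c[1] * np.sin(theta)

theta = 0.0
for t in range(5000):
    j = rng.choice(2, p=np.abs(c) / L)   # sample ONE Hamiltonian term
    # parameter-shift rule, one shot per shifted circuit (unbiased)
    g = L * np.sign(c[j]) * (shot(theta + np.pi / 2, j)
                             - shot(theta - np.pi / 2, j)) / 2
    theta -= 0.5 / np.sqrt(t + 1) * g    # decaying learning rate

print(cost(theta))  # approaches the minimum -sqrt(c1^2 + c2^2) ~ -1.118
```

Each per-step gradient estimate is crude (three values are possible: −L, 0, +L), but it is unbiased, which is what the paper's convergence guarantees rely on.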


2018 ◽  
Vol 18 (1&2) ◽  
pp. 51-74 ◽  
Author(s):  
Daniel Crawford ◽  
Anna Levit ◽  
Navid Ghadermarzy ◽  
Jaspreet S. Oberoi ◽  
Pooya Ronagh

We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse-field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the sets of visible nodes representing the states and actions of an optimal policy are the first and last layers of the deep network. In the absence of a transverse field, our simulations show that DBMs are trained more effectively than restricted Boltzmann machines (RBMs) with the same number of nodes. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This method also outperforms the reinforcement learning method that uses RBMs.
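In the zero-transverse-field (classical) limit of this family of methods, the Q-function is approximated by the negative free energy of a Boltzmann machine whose visible layer is clamped to a state-action pair. The sketch below shows that construction for an RBM with random, untrained weights; the network sizes, encodings, and weights are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_action, n_hidden = 4, 2, 8   # assumed toy sizes
W = 0.1 * rng.standard_normal((n_state + n_action, n_hidden))
b = np.zeros(n_hidden)                  # hidden biases

def free_energy(s, a):
    # Clamped RBM free energy (no visible bias, for brevity):
    # F(v) = -sum_h log(1 + exp(b_h + v . W[:, h])), v = (s, a)
    v = np.concatenate([s, a])
    pre = b + v @ W
    return -np.sum(np.logaddexp(0.0, pre))  # stable log(1 + e^x)

def q_value(s, a):
    # Q(s, a) is approximated by the negative clamped free energy
    return -free_energy(s, a)

s = np.array([1.0, 0.0, 1.0, 0.0])      # toy binary state encoding
a0, a1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
greedy = max((a0, a1), key=lambda a: q_value(s, a))  # action selection
print(q_value(s, a0), q_value(s, a1))
```

In the quantum (QBM) case studied in the paper, the free energy of the clamped transverse-field Hamiltonian replaces this closed-form sum and is estimated from SQA samples instead.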


2019 ◽  
Vol 123 (13) ◽  
Author(s):  
Guglielmo Mazzola ◽  
Pauline J. Ollitrault ◽  
Panagiotis Kl. Barkoutsos ◽  
Ivano Tavernelli
