Quantum-enhanced analysis of discrete stochastic processes

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Carsten Blank ◽  
Daniel K. Park ◽  
Francesco Petruccione

Abstract: Discrete stochastic processes (DSPs) are instrumental for modeling the dynamics of probabilistic systems and have a wide spectrum of applications in science and engineering. DSPs are usually analyzed via Monte-Carlo methods, since the number of realizations increases exponentially with the number of time steps, and importance sampling is often required to reduce the variance. We propose a quantum algorithm for calculating the characteristic function of a DSP, which completely defines its probability distribution, using a number of quantum circuit elements that grows only linearly with the number of time steps. The quantum algorithm reduces the Monte-Carlo sampling to a Bernoulli trial while taking all stochastic trajectories into account. This approach guarantees the optimal variance without the need for importance sampling. The algorithm can be further furnished with the quantum amplitude estimation algorithm to provide a quadratic speed-up in sampling. The Fourier approximation can be used to estimate an expectation value of any integrable function of the random variable. Applications in finance and correlated random walks are presented. Proof-of-principle experiments are performed using the IBM quantum cloud platform.
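For reference, the classical Monte-Carlo baseline described in the abstract can be sketched for a simple ±1 random walk: the characteristic function φ(t) = E[exp(itX_T)] is estimated by sampling trajectories rather than enumerating all 2^T realizations. The setup below is our own toy illustration in NumPy, not the authors' quantum circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Monte-Carlo baseline: estimate the characteristic function
# phi(t) = E[exp(i t X_T)] of a +/-1 random walk after T steps.  The number
# of distinct realizations (2^T) grows exponentially; sampling avoids
# enumerating them.
T, n_samples = 10, 100_000
steps = rng.choice([-1, 1], size=(n_samples, T))
X_T = steps.sum(axis=1)                  # sampled end points of the walk

t = 0.3
phi_mc = np.exp(1j * t * X_T).mean()     # Monte-Carlo estimate of phi(t)
phi_exact = np.cos(t) ** T               # each step contributes E[e^{it s}] = cos t
```

Since the steps are independent, the exact value factorizes into cos(t) per step, which gives a direct check on the sampled estimate.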

2010 ◽  
Vol 104 (25) ◽  
Author(s):  
Nicolas Destainville ◽  
Bertrand Georgeot ◽  
Olivier Giraud

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 559
Author(s):  
Yasunari Suzuki ◽  
Yoshiaki Kawase ◽  
Yuya Masumura ◽  
Yuria Hiraga ◽  
Masahiro Nakadai ◽  
...  

To explore the possibilities of near-term intermediate-scale quantum algorithms and long-term fault-tolerant quantum computing, a fast and versatile quantum circuit simulator is needed. Here, we introduce Qulacs, a fast simulator for quantum circuits intended for research purposes. We present the main concepts of Qulacs, explain how to use its features via examples, describe the numerical techniques used to speed up simulation, and demonstrate its performance with numerical benchmarks.
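As a library-free illustration of the kind of computation a statevector simulator such as Qulacs performs, here is a minimal two-qubit update in plain NumPy (the matrices, variable names, and ordering conventions below are ours, not the Qulacs API).

```python
import numpy as np

# Minimal statevector simulation of a 2-qubit Bell-pair circuit:
# H on qubit 0, then CNOT with qubit 0 as control.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                     # start in |00>
state = np.kron(H, I) @ state      # apply H to qubit 0 (first tensor factor)
state = CNOT @ state               # entangle: |10> <-> |11>

# state is now the Bell state (|00> + |11>) / sqrt(2)
```

A full simulator applies such gate matrices without ever materializing the exponentially large full-circuit unitary, which is where the speed-up techniques benchmarked in the paper come in.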


Author(s):  
Phillip Kaye ◽  
Raymond Laflamme ◽  
Michele Mosca

In this chapter we examine one of two main classes of algorithms: quantum algorithms that solve problems with a complexity that is superpolynomially less than the complexity of the best-known classical algorithm for the same problem. That is, the complexity of the best-known classical algorithm cannot be bounded above by any polynomial in the complexity of the quantum algorithm. The algorithms we will detail all make use of the quantum Fourier transform (QFT). We start off the chapter by studying the problem of quantum phase estimation, which leads us naturally to the QFT. Section 7.1 also looks at using the QFT to find the period of periodic states, and introduces some elementary number theory that is needed in order to post-process the quantum algorithm. In Section 7.2, we apply phase estimation in order to estimate eigenvalues of unitary operators. Then in Section 7.3, we apply the eigenvalue estimation algorithm in order to derive the quantum factoring algorithm, and in Section 7.4 to solve the discrete logarithm problem. In Section 7.5, we introduce the hidden subgroup problem which encompasses both the order finding and discrete logarithm problem as well as many others. This chapter by no means exhaustively covers the quantum algorithms that are superpolynomially faster than any known classical algorithm, but it does cover the most well-known such algorithms. In Section 7.6, we briefly discuss other quantum algorithms that appear to provide a superpolynomial advantage. To introduce the idea of phase estimation, we begin by noting that the final Hadamard gate in the Deutsch algorithm, and the Deutsch–Jozsa algorithm, was used to get at information encoded in the relative phases of a state. The Hadamard gate is self-inverse and thus does the opposite as well, namely it can be used to encode information into the phases. To make this concrete, first consider H acting on the basis state |x⟩ (where x ∊ {0, 1}). 
It is easy to see that H|x⟩ = (1/√2)(|0⟩ + (−1)^x|1⟩). You can think of the Hadamard gate as having encoded information about the value of x into the relative phase between the basis states |0⟩ and |1⟩.
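The phase-encoding behaviour of the Hadamard gate, H|x⟩ = (|0⟩ + (−1)^x|1⟩)/√2, and its self-inverse property can be checked numerically; this is a small sketch in NumPy rather than anything from the chapter itself.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
basis = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

# H|x> = (|0> + (-1)^x |1>)/sqrt(2): the value of x ends up in the
# relative phase between |0> and |1>.
for x in (0, 1):
    expected = (basis[0] + (-1) ** x * basis[1]) / np.sqrt(2)
    assert np.allclose(H @ basis[x], expected)

# H is self-inverse, so applying it again decodes the phase back
# into the computational-basis value.
assert np.allclose(H @ H, np.eye(2))
```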


Open Physics ◽  
2019 ◽  
Vol 17 (1) ◽  
pp. 839-849
Author(s):  
Theerapat Tansuwannont ◽  
Surachate Limkumnerd ◽  
Sujin Suwanna ◽  
Pruet Kalasuwan

Abstract: A quantum algorithm is an algorithm for solving mathematical problems using quantum systems in which information is encoded, and it is found to outperform classical algorithms in some specific cases. The objective of this study is to develop a quantum algorithm for finding the roots of nth-degree polynomials, where n is any positive integer. In classical algorithms, the resources required to solve this problem increase drastically as n grows, making the problem practically unsolvable for large n. It was found that any polynomial can be rearranged into a corresponding companion matrix, whose eigenvalues are the roots of the polynomial. This opens the possibility of a quantum algorithm in which the required computational resources increase only polynomially in n. In this study, we construct a quantum circuit representing the companion matrix and use the eigenvalue estimation technique to find the roots of the polynomial.
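The classical half of this construction, mapping a monic polynomial to its companion matrix and reading the roots off the eigenvalues, can be sketched as follows (plain NumPy; the quantum eigenvalue-estimation step is not reproduced here).

```python
import numpy as np

def companion_roots(a):
    """Roots of the monic polynomial x^n + a[n-1] x^(n-1) + ... + a[1] x + a[0],
    obtained as the eigenvalues of its companion matrix."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)              # ones on the subdiagonal
    C[:, -1] = -np.asarray(a, dtype=float)  # last column holds -a[0] ... -a[n-1]
    return np.linalg.eigvals(C)

# Example: x^2 - 3x + 2 = (x - 1)(x - 2), i.e. a[0] = 2, a[1] = -3
roots = np.sort(companion_roots([2.0, -3.0]).real)
```

The characteristic polynomial of the companion matrix is the original polynomial, so diagonalization (or, in the paper's setting, quantum eigenvalue estimation applied to a circuit encoding this matrix) recovers the roots.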


Author(s):  
K. PALVANNAN ◽  
YAACOB IBRAHIM

Tolerances in component values affect a product's manufacturing yield. The yield can be maximized by selecting component nominal values judiciously, and several yield optimization routines have been developed. A simple algorithm known as the center of gravity (CoG) method uses simple Monte Carlo sampling to estimate the yield and to generate a search direction toward the optimal nominal values; it is known to identify the region of high yield in a small number of iterations. Here, the use of the importance sampling technique is investigated, with the objective of reducing the number of samples needed to reach the optimal region. A uniform distribution centered at the mean is studied as the importance sampling density. The results show that savings of about 40% compared to Monte Carlo sampling can be achieved using importance sampling when the starting yield is low. The importance sampling density also helped the search process identify the high-yield region quickly, and the region identified is generally better than that found with Monte Carlo sampling.
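The weighting at the heart of this comparison can be sketched on a one-dimensional toy problem: estimate the yield of a Normally distributed component value against a spec window, once with plain Monte Carlo and once with a uniform importance-sampling density centered at the mean. All numbers below are illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 5.0, 0.2     # nominal component value and tolerance spread (illustrative)
lo, hi = 4.7, 5.1        # spec window: the part passes if lo < x < hi

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

N = 50_000

# Plain Monte Carlo: sample from the true component distribution
x_mc = rng.normal(mu, sigma, N)
yield_mc = np.mean((x_mc > lo) & (x_mc < hi))

# Importance sampling with a uniform density centered at the mean,
# as studied in the paper; each sample is reweighted by p(x)/q(x).
a, b = mu - 3 * sigma, mu + 3 * sigma
x_is = rng.uniform(a, b, N)
w = normal_pdf(x_is, mu, sigma) * (b - a)    # p(x) / q(x), with q = 1/(b - a)
yield_is = np.mean(((x_is > lo) & (x_is < hi)) * w)
```

Both estimators target the same yield; the uniform density spreads samples evenly over the tolerance box, which is what lets the CoG search see the pass/fail boundary with fewer samples.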


Proceedings ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 26
Author(s):  
Gianluca Passarelli ◽  
Giulio Filippis ◽  
Vittorio Cataudella ◽  
Procolo Lucignano

We discuss the quantum annealing of the fully connected ferromagnetic p-spin model in a dissipative environment at low temperature. In the large-p limit, this model encodes in its ground state the solution to Grover's problem of searching in unsorted databases. In the framework of the quantum circuit model, a quantum algorithm is known for this task, providing a quadratic speed-up with respect to its best classical counterpart. This improvement is not recovered in adiabatic quantum computation for an isolated quantum processor. We analyze the same problem in the presence of a low-temperature reservoir, using a Markovian quantum master equation in Lindblad form, and we show that a thermal enhancement is achieved in the presence of a zero-temperature environment moderately coupled to the quantum annealer.
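For context on the circuit-model quadratic speed-up mentioned above: classical unstructured search over N items needs O(N) queries, while Grover's algorithm needs about (π/4)√N iterations. The statevector sketch below (NumPy, with toy sizes of our choosing) demonstrates this for N = 64.

```python
import numpy as np

n = 6
N = 2 ** n          # 64 database entries
marked = 42         # index of the sought item (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))                  # uniform superposition
iterations = int(np.round(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) sqrt(N) = 6
for _ in range(iterations):
    state[marked] *= -1              # oracle: phase-flip the marked amplitude
    state = 2 * state.mean() - state # diffusion: inversion about the mean

success = state[marked] ** 2         # probability of measuring the marked item
```

After only 6 iterations (versus ~32 expected classical queries), the marked item is measured with probability close to 1.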


2021 ◽  
Author(s):  
Josiah P. Hanna ◽  
Scott Niekum ◽  
Peter Stone

Abstract: In reinforcement learning, importance sampling is a widely used method for evaluating an expectation under the distribution of data of one policy when the data has in fact been generated by a different policy. Importance sampling requires computing the likelihood ratio between the action probabilities of a target policy and those of the data-producing behavior policy. In this article, we study importance sampling where the behavior policy action probabilities are replaced by their maximum likelihood estimates under the observed data. We show that this general technique reduces variance due to sampling error in Monte Carlo-style estimators. We introduce two novel estimators that use this technique to estimate expected values that arise in the RL literature. We find that these general estimators reduce the variance of Monte Carlo sampling methods, leading to faster learning for policy gradient algorithms and more accurate off-policy policy evaluation. We also provide theoretical analysis showing that our new estimators are consistent and have asymptotically lower variance than Monte Carlo estimators.
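The effect described here can be seen on a one-step bandit toy problem (our own setup, not the article's estimators): replacing the known behavior probabilities with their empirical maximum-likelihood estimates removes the sampling error from the ordinary importance-sampling estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

pi_b = np.array([0.5, 0.5])      # behavior policy (known, generates the data)
pi_e = np.array([0.2, 0.8])      # target policy to evaluate
reward = np.array([1.0, 0.0])    # deterministic reward per action (illustrative)

actions = rng.choice(2, size=10_000, p=pi_b)
r = reward[actions]

# Ordinary importance sampling: weight each sample by pi_e / pi_b
is_est = np.mean(pi_e[actions] / pi_b[actions] * r)

# Same estimator, but with pi_b replaced by its maximum-likelihood estimate,
# i.e. the empirical action frequencies in the observed data.
pi_b_mle = np.bincount(actions, minlength=2) / len(actions)
mle_est = np.mean(pi_e[actions] / pi_b_mle[actions] * r)

true_value = float(pi_e @ reward)    # 0.2
```

With deterministic per-action rewards, the MLE-weighted estimate is exact for any sample: the empirical frequencies in the numerator and denominator cancel, leaving the target expectation, while the ordinary estimate still carries Monte Carlo sampling noise. This is the variance reduction the article studies.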


Author(s):  
Harry R. Millwater ◽  
Graham G. Chell ◽  
David S. Riha

This paper describes a computer program that determines the probability of failure of gas turbine structures as a function of time due to creep fatigue crack growth. The probability of failure is computed by combining stress analysis and creep fatigue analysis with probabilistic analysis methods. The creep fatigue analysis is based on a reference stress approach, which provides a simple, accurate, and efficient method for determining the steady-state component, C*, of the time-dependent fracture mechanics parameter C(t). Stress intensity factors are computed from stress distributions derived from a linear elastic finite element analysis of the uncracked structure and weight functions. Several probabilistic methods are available, such as efficient approximate methods, importance sampling, and Monte Carlo sampling. Efficient approximate methods and importance sampling methods are typically one to two orders of magnitude more efficient than Monte Carlo sampling. Probabilistic sensitivity measures are generated as a byproduct of the probabilistic analysis and indicate the importance of the random variables to the reliability of the structure. The theoretical background, the computer code, and an example problem are presented.
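The efficiency gap between importance sampling and plain Monte Carlo quoted above is easy to reproduce on a toy reliability problem; the sketch below estimates a small normal-tail failure probability both ways (the threshold and sample sizes are illustrative choices of ours, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy reliability problem: failure when a standard-normal "load" exceeds 3.5.
# The true failure probability is ~2.3e-4, so plain Monte Carlo sees only a
# handful of failures even with many samples.
beta = 3.5
N = 20_000

# Plain Monte Carlo
x = rng.normal(size=N)
p_mc = np.mean(x > beta)

# Importance sampling: shift the sampling density into the failure region
# and reweight each sample by phi(x) / phi(x - beta).
x_is = rng.normal(loc=beta, size=N)
w = np.exp(-0.5 * x_is ** 2 + 0.5 * (x_is - beta) ** 2)
p_is = np.mean((x_is > beta) * w)
```

With the shifted density, roughly half the samples land in the failure region, so the importance-sampling estimate is accurate at sample sizes where plain Monte Carlo barely sees any failures, consistent with the one-to-two-orders-of-magnitude efficiency gain cited.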


1998 ◽  
Vol 37 (03) ◽  
pp. 235-238 ◽  
Author(s):  
M. El-Taha ◽  
D. E. Clark

Abstract: A Logistic-Normal random variable Y is obtained from a Normal random variable X by the relation Y = e^X/(1 + e^X). In Monte-Carlo analysis of decision trees, Logistic-Normal random variates may be used to model the branching probabilities. In some cases, the probabilities to be modeled may not be independent, and a method for generating correlated Logistic-Normal random variates would be useful. A technique for generating correlated Normal random variates has been previously described. Using Taylor series approximations and the algebraic definitions of variance and covariance, we describe methods for estimating the means, variances, and covariances of Normal random variates which, after translation using the above formula, will result in Logistic-Normal random variates having approximately the desired means, variances, and covariances. Multiple simulations of the method using the Mathematica computer algebra system show satisfactory agreement with the theoretical results.
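The forward direction of this construction, generating correlated Normal variates via a Cholesky factor and then applying the logistic transform, can be sketched as follows; the target moments below are chosen arbitrarily for illustration, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target moments for the underlying correlated Normal variates
mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
L = np.linalg.cholesky(cov)

# Correlated Normal draws: x = z L^T + mean has covariance L L^T = cov
z = rng.standard_normal((100_000, 2))
x = z @ L.T + mean

# Logistic transform Y = e^X / (1 + e^X): each component now lies in (0, 1),
# suitable for modeling correlated branching probabilities.
y = np.exp(x) / (1 + np.exp(x))
```

The article's contribution is the reverse problem: choosing the Normal means, variances, and covariances so that, after this transform, the Logistic-Normal variates have approximately prescribed moments.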

