Weakly measured while loops: peeking at quantum states

Author(s):  
Pablo Andres-Martinez ◽  
Chris Heunen

Abstract A while loop tests a termination condition on every iteration. On a quantum computer, such measurements perturb the evolution of the algorithm. We define a while loop primitive using weak measurements, offering a trade-off between the perturbation caused and the amount of information gained per iteration. This trade-off is adjusted with a parameter set by the programmer. We provide sufficient conditions that let us determine, with arbitrarily high probability, a worst-case estimate of the number of iterations the loop will run for. As an example, we solve Grover's search problem using a while loop and prove the quadratic quantum speed-up is maintained.
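
The trade-off described above can be illustrated with a toy single-qubit simulation (a minimal Python/NumPy sketch, not the paper's primitive): a weak measurement of adjustable strength epsilon serves as the loop's termination test, so a small epsilon disturbs the state little but also yields little information per iteration. The Kraus operators and the rotation used as the loop body are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def weak_measure(state, epsilon):
        """Weakly measure a qubit in the computational basis.

        The Kraus operators M0, M1 satisfy M0^dag M0 + M1^dag M1 = I; epsilon in
        [0, 1] interpolates between no measurement (0) and a projective one (1).
        """
        M0 = np.diag([np.sqrt((1 + epsilon) / 2), np.sqrt((1 - epsilon) / 2)])
        M1 = np.diag([np.sqrt((1 - epsilon) / 2), np.sqrt((1 + epsilon) / 2)])
        p0 = np.linalg.norm(M0 @ state) ** 2
        if rng.random() < p0:
            return 0, (M0 @ state) / np.sqrt(p0)
        return 1, (M1 @ state) / np.linalg.norm(M1 @ state)

    def rotation(theta):
        """Small rotation driving |0> toward |1> (the loop body's unitary)."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    # While loop: keep iterating until the weak measurement reports outcome 1.
    epsilon = 0.3                 # measurement strength set by the programmer
    state = np.array([1.0, 0.0])
    U = rotation(np.pi / 20)
    iterations = 0
    outcome = 0
    while outcome == 0 and iterations < 1000:
        state = U @ state                              # evolve
        outcome, state = weak_measure(state, epsilon)  # peek at the state
        iterations += 1
    print(f"stopped after {iterations} iterations")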

Author(s):  
Olga Ivancova ◽  
Vladimir Korenkov ◽  
Olga Tyatyushkina ◽  
Sergey Ulyanov ◽  
Toshio Fukuda

Several paradigms of quantum computing are considered and quantum computer simulators are described. Models for learning quantum systems from experiments are reviewed. The limits on quantum speed-up in two-level systems (qubits) are discussed, and approaches to constructing a quantum variational eigensolver are considered.


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 101
Author(s):  
Luca Oneto ◽  
Sandro Ridella

In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk R(h) of a hypothesis h chosen from a set H of m hypotheses. The Union Bound (UB) allows one to state that P{L(R̂(h), δ q_h) ≤ R(h) ≤ U(R̂(h), δ p_h)} ≥ 1 − δ, where R̂(h) is the empirical error, provided that P{R(h) ≥ L(R̂(h), δ)} ≥ 1 − δ and P{R(h) ≤ U(R̂(h), δ)} ≥ 1 − δ, and that h, q_h, and p_h are chosen before seeing the data such that q_h, p_h ∈ [0, 1] and ∑_{h∈H}(q_h + p_h) = 1. If no a priori information is available, q_h and p_h are set to 1/(2m), namely equally distributed. This approach gives poor results since, as a matter of fact, a learning procedure targets only particular hypotheses, namely those with small empirical error, disregarding the others. In this work we set q_h and p_h in a distribution-dependent way, increasing the probability of being chosen for hypotheses with small true risk. We call this proposal the Distribution-Dependent Weighted UB (DDWUB), and we derive sufficient conditions on the choice of q_h and p_h under which DDWUB outperforms or, in the worst case, degenerates into UB. Furthermore, theoretical and numerical results show the applicability, validity, and potential of DDWUB.
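
The effect of the weighting can be illustrated numerically (a sketch only: it uses a plain Hoeffding-type upper bound rather than the specific bounds L and U studied in the paper, and the weights below are arbitrary assumptions). It compares the uniform allocation p_h = 1/(2m) with a non-uniform allocation that concentrates the confidence budget on low-error hypotheses:

    import numpy as np

    def hoeffding_upper(emp_risk, delta_h, n):
        """One-sided Hoeffding bound: R(h) <= R_hat(h) + sqrt(ln(1/delta_h) / (2n))."""
        return emp_risk + np.sqrt(np.log(1.0 / delta_h) / (2 * n))

    n = 1000                 # number of samples
    delta = 0.05             # overall confidence budget
    emp_risk = np.array([0.02, 0.05, 0.30, 0.45, 0.48])   # empirical errors of m hypotheses
    m = len(emp_risk)

    # Uniform union bound: every hypothesis receives delta * 1/(2m).
    p_uniform = np.full(m, 1.0 / (2 * m))
    ub_uniform = hoeffding_upper(emp_risk, delta * p_uniform, n)

    # Weighted union bound: give more budget to hypotheses with small error.
    # Note: these weights are computed from the empirical errors purely for
    # illustration; the UB requires weights fixed before seeing the data, which
    # is exactly the difficulty the paper's distribution-dependent choice addresses.
    w = np.exp(-20 * emp_risk)
    p_weighted = 0.5 * w / w.sum()        # so that sum of (q_h + p_h) = 1 with q_h = p_h
    ub_weighted = hoeffding_upper(emp_risk, delta * p_weighted, n)

    best = np.argmin(emp_risk)
    print("uniform bound on best hypothesis :", round(ub_uniform[best], 4))
    print("weighted bound on best hypothesis:", round(ub_weighted[best], 4))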


Author(s):  
Kho Hie Kwee ◽  
Hardiansyah .

This paper addresses the design of a robust H2 output feedback controller for damping power system oscillations. Sufficient conditions for the existence of output feedback controllers with norm-bounded parameter uncertainties are given in terms of linear matrix inequalities (LMIs). Furthermore, a convex optimization problem with LMI constraints is formulated to design the output feedback controller which minimizes an upper bound on the worst-case H2 norm for a range of admissible plant perturbations. The technique is illustrated with an application to the design of a stabilizer for a single-machine infinite-bus (SMIB) power system. The LMI-based control ensures adequate damping over widely varying system operating conditions.
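
As a minimal sketch of solving an LMI feasibility problem numerically (a plain Lyapunov-stability LMI for an arbitrary example system, assuming the cvxpy package with an SDP-capable solver such as SCS; this is not the paper's H2 output-feedback synthesis):

    import cvxpy as cp
    import numpy as np

    # Example state matrix of a stable second-order system (illustrative values only).
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    n = A.shape[0]

    # LMI feasibility: find P = P^T > 0 with A^T P + P A < 0 (Lyapunov inequality).
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
    prob.solve()

    print("status:", prob.status)
    print("P =\n", P.value)

The controller synthesis in the paper follows the same pattern: the design conditions are expressed as LMIs in the unknown matrices and handed to a semidefinite-programming solver.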


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Valentin Gebhart ◽  
Luca Pezzè ◽  
Augusto Smerzi

Abstract Despite intensive research, the physical origin of the speed-up offered by quantum algorithms remains mysterious. No general physical quantity, like, for instance, entanglement, can be singled out as the essential useful resource. Here we report a close connection between the trace speed and the quantum speed-up in Grover’s search algorithm implemented with pure and pseudo-pure states. For a noiseless algorithm, we find a one-to-one correspondence between the quantum speed-up and the polarization of the pseudo-pure state, which can be connected to a wide class of quantum statistical speeds. For time-dependent partial depolarization and for interrupted Grover searches, the speed-up is specifically bounded by the maximal trace speed that occurs during the algorithm operations. Our results quantify the quantum speed-up with a physical resource that is experimentally measurable and related to multipartite entanglement and quantum coherence.
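
For reference, a pseudo-pure state of N qubits with polarization p is conventionally parametrized as (standard notation, assumed here rather than taken from the paper):

    \rho_p \;=\; p\,|\psi\rangle\langle\psi| \;+\; (1 - p)\,\frac{\mathbb{1}}{2^N}, \qquad 0 \le p \le 1,

so that p = 1 recovers the pure state |ψ⟩ and p = 0 the maximally mixed state; the abstract's one-to-one correspondence is between the quantum speed-up and this polarization p.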


Processes ◽  
2018 ◽  
Vol 6 (8) ◽  
pp. 130 ◽  
Author(s):  
Pavel Praks ◽  
Dejan Brkić

The Colebrook equation is implicitly given with respect to the unknown flow friction factor λ; λ = ζ(Re, ε*, λ), which cannot be expressed explicitly in an exact way without simplifications and the use of approximate calculus. A common approach to solving it is the Newton–Raphson iterative procedure or the fixed-point iterative procedure. Both require, in some cases, up to seven iterations. On the other hand, numerous more powerful iterative methods, such as three-point or two-point methods, are available. The purpose is to choose the optimal iterative method for solving the implicit Colebrook equation for flow friction accurately using the least possible number of iterations. The methods are thoroughly tested and those which require the least possible number of iterations to reach the accurate solution are identified. The most powerful three-point methods require, in the worst case, only two iterations to reach the final solution. The recommended representatives are the Sharma–Guha–Gupta, Sharma–Sharma, Sharma–Arora, and Džunić–Petković–Petković methods; the Bi–Ren–Wu and Chun–Neta methods based on Kung–Traub; Neta's method; and the Jain method based on the Steffensen scheme. The recommended iterative methods can reach the final accurate solution with the least possible number of iterations. The approach is a hybrid between the iterative procedure and one-step explicit approximations, and can be used in engineering design for initial rough calculations but also for final fine calculations.
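
For comparison with the multipoint methods studied in the paper, a basic Newton–Raphson iteration for the Colebrook equation can be sketched as follows (Python; 3.7 and 2.51 are the usual Colebrook coefficients, while the starting guess, tolerance, and test values are arbitrary assumptions):

    import math

    def colebrook_newton(Re, eps_rel, x0=6.0, tol=1e-12, max_iter=20):
        """Solve 1/sqrt(lambda) = -2*log10(eps_rel/3.7 + 2.51/(Re*sqrt(lambda)))
        by Newton-Raphson in the substituted variable x = 1/sqrt(lambda)."""
        a = eps_rel / 3.7
        b = 2.51 / Re
        x = x0
        for i in range(max_iter):
            f = x + 2.0 * math.log10(a + b * x)
            df = 1.0 + 2.0 * b / ((a + b * x) * math.log(10.0))
            x_new = x - f / df
            if abs(x_new - x) < tol:
                return 1.0 / x_new**2, i + 1     # friction factor, iterations used
            x = x_new
        return 1.0 / x**2, max_iter

    lam, iters = colebrook_newton(Re=1e6, eps_rel=1e-4)
    print(f"lambda = {lam:.6f} after {iters} Newton iterations")

The multipoint (two- and three-point) schemes recommended in the paper follow the same root-finding pattern but evaluate the residual at several intermediate points per iteration, which is how they reach the solution in at most two iterations.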


2021 ◽  
Vol 67 (1) ◽  
pp. 241-252
Author(s):  
Wenbin Yu ◽  
Hao Feng ◽  
Yinsong Xu ◽  
Na Yin ◽  
Yadang Chen ◽  
...  

Electronics ◽  
2018 ◽  
Vol 7 (10) ◽  
pp. 224 ◽  
Author(s):  
Zhensen Tang ◽  
Yao Wang ◽  
Yaqing Chi ◽  
Liang Fang

In this paper, the dependence of sensing currents on various device parameters is comprehensively studied by simulating the complete crossbar array rather than its equivalent analytical model. The worst-case scenario for the read operation is rigorously analyzed and defined in terms of the selected-cell location and the data pattern, based on the effects of parasitic sneak paths and interconnection resistance. It is shown that the worst-case data pattern depends on the trade-off between the shunting effect of the parasitic sneak paths and the current-injection effect of the parasitic sneak leakage, and thus requires specific analysis in practical simulations. To deal with this, we propose the concept of a threshold array size that incorporates this trade-off to define the parameter-dependent worst-case data pattern. This figure of merit provides guidelines for worst-case analysis of crossbar array read operations.
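
By way of contrast with the full-array simulations used in the paper, the widely used simplified analytical estimate lumps all sneak paths into three series groups of parallel unselected cells. The sketch below assumes a one-resistor (1R) array with uniform cell resistances and neglects interconnection resistance, which is exactly the simplification the paper argues is insufficient; the resistance and voltage values are illustrative only.

    def read_currents(rows, cols, r_sel, r_unsel, v_read=0.2):
        """Simplified 1R crossbar read model (no line resistance).

        Sneak paths are lumped into three series groups of parallel unselected
        cells: (rows - 1) cells on the selected column, (rows - 1)*(cols - 1)
        cells in the middle, and (cols - 1) cells on the selected row.
        """
        r_sneak = (r_unsel / (rows - 1)
                   + r_unsel / ((rows - 1) * (cols - 1))
                   + r_unsel / (cols - 1))
        # Selected cell and the lumped sneak network appear in parallel at the sense node.
        r_total = 1.0 / (1.0 / r_sel + 1.0 / r_sneak)
        i_cell = v_read / r_sel          # current through the selected cell alone
        i_total = v_read / r_total       # current actually seen by the sense amplifier
        return i_cell, i_total

    # The read margin between a low-resistance (LRS) and a high-resistance (HRS)
    # selected cell shrinks as the array grows, because sneak current dominates.
    for n in (16, 64, 256):
        _, i_lrs = read_currents(n, n, r_sel=1e4, r_unsel=1e4)   # worst case: all unselected cells LRS
        _, i_hrs = read_currents(n, n, r_sel=1e6, r_unsel=1e4)
        print(f"{n}x{n}: I(LRS)/I(HRS) = {i_lrs / i_hrs:.2f}")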


2020 ◽  
Author(s):  
B Espen Eckbo ◽  
Michael Kisser

Abstract We test whether high-frequency net-debt issuers (HFIs)—public industrial companies with relatively low issuance costs and high debt-financing benefits—manage leverage toward long-run targets. Our answer is they do not: (1) the leverage–profitability correlation is negative even in quarters with leverage rebalancing; (2) the speed-of-adjustment to target leverage deviations is no higher for HFIs than for low-frequency net-debt issuers; and (3) under-leveraged HFIs do not speed up rebalancing activity in significant investment periods. Thus, even in the subset of firms most likely to follow dynamic trade-off theory, the theory does not appear to hold.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
G. G. Guerreschi ◽  
A. Y. Matsuura
Keyword(s):  
Speed Up ◽  

2005 ◽  
Vol 15 (02) ◽  
pp. 151-166
Author(s):  
TAKESHI KANDA ◽  
KOKICHI SUGIHARA

This paper studies the two-dimensional range search problem and constructs a simple and efficient algorithm based on the Voronoi diagram. In this problem, a set of points and a query range are given, and we want to enumerate all the points inside the query range as quickly as possible. In most previous research on this problem, the shape of the query range is restricted to particular shapes such as circles, rectangles, and triangles, and improvements in worst-case performance have been pursued. On the other hand, the algorithm proposed in this paper is designed for a general shape of the query range in the two-dimensional plane and is intended to achieve good average-case performance, which is confirmed by numerical experiments. In these experiments, we compare the execution time of the proposed algorithm with those of other representative algorithms, such as those based on the bucketing technique and the k-d tree. Our algorithm shows better performance in almost all cases.
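
Although the paper's algorithm is built on the Voronoi diagram, the kind of general-shape query it targets can be illustrated with a simpler two-stage filter (a sketch assuming SciPy: a k-d tree prefilter on a bounding disc of the query polygon, followed by an exact point-in-polygon test; the query polygon and point set below are arbitrary):

    import numpy as np
    from scipy.spatial import cKDTree

    def point_in_polygon(px, py, poly):
        """Ray-casting test for a simple polygon given as a list of (x, y) vertices."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    def range_search(points, tree, poly):
        """Enumerate the points lying inside a general polygonal query range."""
        poly = np.asarray(poly, dtype=float)
        center = poly.mean(axis=0)
        radius = np.max(np.linalg.norm(poly - center, axis=1))
        # Stage 1: cheap prefilter with the k-d tree on a disc containing the polygon.
        candidates = tree.query_ball_point(center, radius)
        # Stage 2: exact test on the (hopefully few) remaining candidates.
        return [i for i in candidates
                if point_in_polygon(points[i, 0], points[i, 1], poly)]

    rng = np.random.default_rng(1)
    points = rng.random((10_000, 2))
    tree = cKDTree(points)
    query = [(0.2, 0.2), (0.8, 0.3), (0.6, 0.7), (0.3, 0.6)]   # arbitrary query polygon
    hits = range_search(points, tree, query)
    print(len(hits), "points inside the query range")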

