Pseudo-dimension of quantum circuits

2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Matthias C. Caro ◽  
Ishaun Datta

Abstract: We characterize the expressive power of quantum circuits with the pseudo-dimension, a measure of complexity for probabilistic concept classes. We prove pseudo-dimension bounds on the output probability distributions of quantum circuits; the upper bounds are polynomial in circuit depth and number of gates. Using these bounds, we exhibit a class of circuit output states at least one of which has exponential state-preparation gate complexity, and moreover demonstrate that quantum circuits of known polynomial size and depth are PAC-learnable.
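As an illustration of the complexity measure used above (not taken from the paper), the following sketch brute-forces the standard pseudo-shattering condition for a small, finite class of real-valued functions; the toy threshold class and sample points are arbitrary choices.

```python
def pseudo_shatters(functions, points, thresholds):
    """True iff `functions` pseudo-shatters `points` w.r.t. `thresholds`,
    i.e. every sign pattern (f(x_i) > t_i) is realised by some f."""
    patterns = {tuple(f(x) > t for x, t in zip(points, thresholds))
                for f in functions}
    return len(patterns) == 2 ** len(points)

# Toy class of shifted identities f_c(x) = x - c, which has pseudo-dimension 1.
functions = [lambda x, c=c: x - c for c in (0.5, 1.5, 2.5, 3.5)]
print(pseudo_shatters(functions, [2.0], [0.0]))                # True: one point is shattered
print(pseudo_shatters(functions, [1.0, 2.0, 3.0], [0.0] * 3))  # False: three points are not
```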

Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 106 ◽  
Author(s):  
Tomoyuki Morimae ◽  
Yuki Takeuchi ◽  
Harumichi Nishimura

We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched by two layers of Hadamard gates, and therefore it is in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be classically efficiently sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique is different from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and therefore the technique itself might be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call the probability distribution distinguishability with maximum norm (PDD-Max). It is the promise problem of deciding whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to some types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with quantum polynomial-time Merlin and classical probabilistic polynomial-time Arthur.
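The layered structure described above can be sketched in a few lines. The following Qiskit toy (the library and the gates in the middle section are our own arbitrary choices, not the paper's construction) sandwiches a classical reversible circuit of X/CNOT/Toffoli gates between two Hadamard layers; the exact HC1Q definition, e.g. which qubits receive Hadamards and how acceptance is defined, should be taken from the paper itself.

```python
from qiskit import QuantumCircuit

n = 4
qc = QuantumCircuit(n)
qc.h(range(n))      # first Hadamard layer
qc.cx(0, 1)         # arbitrary classical reversible circuit (X/CNOT/Toffoli)
qc.ccx(1, 2, 3)
qc.x(2)
qc.h(range(n))      # second Hadamard layer
qc.measure_all()
print(qc.draw())
```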


2014 ◽  
Vol 14 (13&14) ◽  
pp. 1149-1164
Author(s):  
Yasuhiro Takahashi ◽  
Takeshi Yamazaki ◽  
Kazuyuki Tanaka

We study the classical simulatability of constant-depth polynomial-size quantum circuits followed by only one single-qubit measurement, where the circuits consist of universal gates on at most two qubits and additional gates on an unbounded number of qubits. First, we consider unbounded Toffoli gates as additional gates and deal with the weak simulation, i.e., sampling the output probability distribution. We show that there exists a constant-depth quantum circuit with only one unbounded Toffoli gate that is not weakly simulatable, unless BQP ⊆ PostBPP ∩ AM. Then, we consider unbounded fan-out gates as additional gates and deal with the strong simulation, i.e., computing the output probability. We show that there exists a constant-depth quantum circuit with only two unbounded fan-out gates that is not strongly simulatable, unless P = PP. These results are in contrast to the fact that any constant-depth quantum circuit without additional gates on an unbounded number of qubits is strongly and weakly simulatable.
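For readers unfamiliar with the weak/strong distinction invoked above, here is a toy numpy illustration, unrelated to the paper's circuit constructions: strong simulation means computing an output probability, weak simulation means sampling from the output distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# A Haar-like random 3-qubit unitary via QR of a complex Gaussian matrix.
A = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
U, _ = np.linalg.qr(A)

state = U[:, 0]              # the "circuit" applied to |000>
probs = np.abs(state) ** 2   # output probability distribution

print("strong: Pr[output = 101] =", probs[0b101])               # compute one probability
print("weak  : samples =", rng.choice(2**n, size=5, p=probs))   # sample outputs
```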


2016 ◽  
Vol 12 (S325) ◽  
pp. 39-45 ◽  
Author(s):  
Maria Süveges ◽  
Sotiria Fotopoulou ◽  
Jean Coupon ◽  
Stéphane Paltani ◽  
Laurent Eyer ◽  
...  

Abstract: Throughout the processing and analysis of survey data, a ubiquitous issue nowadays is that we are spoilt for choice when we need to select a methodology for some of its steps. The alternative methods usually fail and excel in different data regions and have various advantages and drawbacks, so a combination that unites the strengths of all while suppressing the weaknesses is desirable. We propose to use a two-level hierarchy of learners. Its first level consists of training and applying the possible base methods on the first part of a known set. At the second level, we feed the output probability distributions from all base methods to a second learner trained on the remaining known objects. Using classification of variable stars and photometric redshift estimation as examples, we show that the hierarchical combination achieves a general improvement over averaging-type combination methods, corrects systematics present in all base methods, and is easy to train and apply, making it a promising tool in the astronomical “Big Data” era.
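The two-level scheme is essentially stacked generalization, so a minimal scikit-learn sketch conveys the idea; the data set and the base methods below are placeholders rather than the ones used in the paper.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("forest", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
# stack_method="predict_proba" feeds the base methods' output probability
# distributions to the second-level learner.
clf = StackingClassifier(estimators=base_learners,
                         final_estimator=LogisticRegression(max_iter=1000),
                         stack_method="predict_proba")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```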


Author(s):  
MONICA KRISTIANSEN ◽  
RUNE WINTHER ◽  
BENT NATVIG

Predicting the reliability of software systems based on a component-based approach is inherently difficult, in particular due to failure dependencies between software components. One possible way to assess and include dependency aspects in software reliability models is to find upper bounds for probabilities that software components fail simultaneously and then include these into the reliability models. In earlier research, it has been shown that including partial dependency information may give substantial improvements in predicting the reliability of compound software compared to assuming independence between all software components. Furthermore, it has been shown that including dependencies between pairs of data-parallel components may give predictions close to the system's true reliability. In this paper, a Bayesian hypothesis testing approach for finding upper bounds for probabilities that pairs of software components fail simultaneously is described. This approach consists of two main steps: (1) establishing prior probability distributions for probabilities that pairs of software components fail simultaneously and (2) updating these prior probability distributions by performing statistical testing. In this paper, the focus is on the first step in the Bayesian hypothesis testing approach, and two possible procedures for establishing a prior probability distribution for the probability that a pair of software components fails simultaneously are proposed.
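The second step (updating a prior by test outcomes) has a standard conjugate-Bayesian form; the sketch below is only a generic illustration with a Beta prior on the simultaneous-failure probability and hypothetical test counts, not the paper's specific prior-construction procedures.

```python
from scipy import stats

alpha, beta = 1.0, 99.0          # assumed Beta prior (mean 0.01) for the probability q
tests, joint_failures = 500, 2   # hypothetical outcomes of statistical testing

# Beta-Binomial conjugate update of the prior on q, the probability that a
# pair of components fails simultaneously.
posterior = stats.beta(alpha + joint_failures, beta + tests - joint_failures)
print("posterior mean of q:", posterior.mean())
print("95% upper bound on q:", posterior.ppf(0.95))
```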


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 162 ◽  
Author(s):  
Ryan L. Mann ◽  
Michael J. Bremner

We study the problem of approximating the Ising model partition function with complex parameters on bounded degree graphs. We establish a deterministic polynomial-time approximation scheme for the partition function when the interactions and external fields are absolutely bounded close to zero. Furthermore, we prove that for this class of Ising models the partition function does not vanish. Our algorithm is based on an approach due to Barvinok for approximating evaluations of a polynomial based on the location of the complex zeros and a technique due to Patel and Regts for efficiently computing the leading coefficients of graph polynomials on bounded degree graphs. Finally, we show how our algorithm can be extended to approximate certain output probability amplitudes of quantum circuits.
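For orientation, the quantity being approximated can be written down directly. The following brute-force, exponential-time sketch evaluates the complex-parameter Ising partition function on a small graph with made-up couplings; it is emphatically not the paper's polynomial-time scheme, which relies on Barvinok's zero-free-region method and the Patel-Regts coefficient computation.

```python
import cmath
from itertools import product

# Z = sum over spin assignments s in {-1,+1}^n of
#     exp( sum_{(i,j) in E} J_ij s_i s_j + sum_i h_i s_i ), with complex J, h.
edges = {(0, 1): 0.1 + 0.05j, (1, 2): 0.1 - 0.02j, (2, 3): 0.08j, (3, 0): 0.05}
fields = [0.01j, 0.02, -0.01, 0.03 + 0.01j]
n = len(fields)

Z = 0
for spins in product((-1, 1), repeat=n):
    energy = sum(J * spins[i] * spins[j] for (i, j), J in edges.items())
    energy += sum(h * s for h, s in zip(fields, spins))
    Z += cmath.exp(energy)

print("Z =", Z)
```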


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 422
Author(s):  
Lena Funcke ◽  
Tobias Hartung ◽  
Karl Jansen ◽  
Stefan Kühn ◽  
Paolo Stornati

Parametric quantum circuits play a crucial role in the performance of many variational quantum algorithms. To successfully implement such algorithms, one must design efficient quantum circuits that sufficiently approximate the solution space while maintaining a low parameter count and circuit depth. In this paper, we develop a method to analyze the dimensional expressivity of parametric quantum circuits. Our technique allows for identifying superfluous parameters in the circuit layout and for obtaining a maximally expressive ansatz with a minimum number of parameters. Using a hybrid quantum-classical approach, we show how to efficiently implement the expressivity analysis using quantum hardware, and we provide a proof-of-principle demonstration of this procedure on IBM's quantum hardware. We also discuss the effect of symmetries and demonstrate how to incorporate or remove symmetries from the parametrized ansatz.
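A simplified, purely classical sketch of the rank-based idea: build the state of a tiny parametric circuit, form the Jacobian of the state vector with respect to the parameters by finite differences, and count independent directions via its rank; a rank deficit flags a superfluous parameter. The circuit, the deliberately redundant pair of rotations, and the neglect of global phase and normalization are simplifications of ours, not the paper's hybrid quantum-classical procedure.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def state(params):
    t1, t2, t3 = params
    psi = np.zeros(4); psi[0] = 1.0
    # Ry(t1) then Ry(t2) on qubit 0 (deliberately redundant), Ry(t3) on qubit 1.
    return np.kron(ry(t2) @ ry(t1), ry(t3)) @ psi

p0 = np.array([0.3, 0.7, 1.1])
eps = 1e-6
# Central-difference Jacobian of the state vector w.r.t. the three parameters.
jac = np.column_stack([(state(p0 + eps * e) - state(p0 - eps * e)) / (2 * eps)
                       for e in np.eye(3)])
# Expect 2 independent directions for 3 parameters: the two consecutive Ry
# rotations on qubit 0 only contribute through their sum.
print("parameters:", 3, "independent directions:",
      np.linalg.matrix_rank(jac, tol=1e-6))
```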


1988 ◽  
Vol 17 (239) ◽  
Author(s):  
Joan Boyar ◽  
Gudmund Skovbjerg Frandsen ◽  
Carl Sturtivant

We define a new structured and general model of computation: circuits using arbitrary fan-in arithmetic gates over the finite fields of characteristic two (F_{2^n}). These circuits have only one input and one output. We show how they correspond naturally to Boolean computations with n inputs and n outputs. We show that if circuit sizes are polynomially related, then the arithmetic circuit depth and the threshold circuit depth needed to compute a given function differ by at most a constant factor. We use threshold circuits that allow arbitrary integer weights; however, we show that when compared to the usual threshold model, the depth measure of this generalised model differs by at most a constant factor (at polynomial size). The fan-in of our arithmetic model is also unbounded in the most generous sense: circuit size is measured as the number of Sum and Product gates; there is no bound on the number of "wires". We show that these results hold for any "reasonable" correspondence between bit strings of n bits and elements of F_{2^n}, and we find two distinct characterizations of "reasonable". Thus, we have shown that arbitrary fan-in arithmetic computations over F_{2^n} constitute a precise abstraction of Boolean threshold computations, with the pleasant property that various algebraic laws have been recovered.
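To make the gate set concrete, here is a toy Python rendering of the field arithmetic the model counts: Sum is bitwise XOR and Product is carry-less polynomial multiplication reduced modulo an irreducible polynomial. The choice of F_{2^8} and of the AES modulus x^8 + x^4 + x^3 + x + 1 is our own illustrative representation, not one prescribed by the report.

```python
def gf_add(a, b):
    """Sum gate over F_{2^n}: addition of polynomials over F_2 is XOR."""
    return a ^ b

def gf_mul(a, b, modulus=0x11B, degree=8):
    """Product gate over F_{2^8}: shift-and-xor multiplication, reducing
    modulo the irreducible polynomial whenever the degree overflows."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << degree):
            a ^= modulus
    return result

x, y = 0x57, 0x83
print(hex(gf_add(x, y)), hex(gf_mul(x, y)))  # 0x57 * 0x83 = 0xc1 in this field
```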


2019 ◽  
Vol 19 (13&14) ◽  
pp. 1089-1115
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

Abstract: Output probability distributions of several sub-universal quantum computing models cannot be classically efficiently sampled unless some unlikely consequences occur in classical complexity theory, such as the collapse of the polynomial-time hierarchy. These results, so-called quantum supremacy, however, do not rule out possibilities of super-polynomial-time classical simulations. In this paper, we study a "fine-grained" version of quantum supremacy that excludes some exponential-time classical simulations. First, we focus on two sub-universal models, namely, the one-clean-qubit model (or the DQC1 model) and the HC1Q model. Assuming certain conjectures in fine-grained complexity theory, we show that for any a > 0, output probability distributions of these models cannot be classically sampled within a constant multiplicative error in 2^{(1-a)N+o(N)} time, where N is the number of qubits. Next, we consider universal quantum computing. For example, we consider quantum computing over Clifford and T gates, and show that under another fine-grained complexity conjecture, output probability distributions of Clifford-T quantum computing cannot be classically sampled in 2^{o(t)} time within a constant multiplicative error, where t is the number of T gates.


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 322
Author(s):  
Ewout van den Berg ◽  
Kristan Temme

Many applications of practical interest rely on time evolution of Hamiltonians that are given by a sum of Pauli operators. Quantum circuits for exact time evolution of single Pauli operators are well known, and can be extended trivially to sums of commuting Paulis by concatenating the circuits of individual terms. In this paper we reduce the circuit complexity of Hamiltonian simulation by partitioning the Pauli operators into mutually commuting clusters and exponentiating the elements within each cluster after applying simultaneous diagonalization. We provide a practical algorithm for partitioning sets of Paulis into commuting subsets, and show that the proposed approach can help to significantly reduce both the number of CNOT operations and circuit depth for Hamiltonians arising in quantum chemistry. The algorithms for simultaneous diagonalization are also applicable in the context of stabilizer states; in particular we provide novel four- and five-stage representations, each containing only a single stage of conditional gates.
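One generic greedy strategy for the partitioning step can be sketched as follows (the paper's actual partitioning and diagonalization algorithms are more refined than this): two Pauli strings commute iff they anticommute on an even number of qubit positions, and each Hamiltonian term is placed into the first cluster whose members it commutes with.

```python
def commutes(p, q):
    """Pauli strings like 'XYZI' commute iff they differ, with both letters
    non-identity, on an even number of positions."""
    anti = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

def greedy_commuting_clusters(paulis):
    clusters = []
    for p in paulis:
        for cluster in clusters:
            if all(commutes(p, q) for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Hypothetical Hamiltonian terms, just to exercise the clustering.
hamiltonian_terms = ["XXII", "YYII", "ZZII", "IZZI", "ZIIZ", "XIXI"]
print(greedy_commuting_clusters(hamiltonian_terms))
# -> [['XXII', 'YYII', 'ZZII'], ['IZZI', 'ZIIZ'], ['XIXI']]
```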

