Additive-error fine-grained quantum supremacy

Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 329
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson Sampling model, the one-clean-qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions, where even exponential-time classical simulations are excluded assuming certain classical fine-grained complexity conjectures. All these fine-grained results, however, concern the hardness of strong simulation or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result could be shown for a more realistic setup, namely, additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.
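As a concrete illustration of the IQP model discussed here, the following is a minimal brute-force sketch, assuming the standard form H^{⊗n} D H^{⊗n} |0…0⟩ with D diagonal in the computational basis. The specific phases (random Z and ZZ rotation angles) are illustrative choices, not taken from the paper.

```python
# Brute-force sampling from a toy IQP circuit: H^{⊗n} · D · H^{⊗n} |0...0>.
# State-vector simulation costs 2^n, so n is kept small.
import itertools
import numpy as np

n = 3
rng = np.random.default_rng(seed=1)

# Hadamard layer on all n qubits as a dense 2^n x 2^n matrix.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)

# Diagonal unitary D generated by Z-type gates: entries exp(i*f(x)) with
# f(x) = sum_j theta1[j] x_j + sum_{j<k} theta2[j,k] x_j x_k.
theta1 = rng.uniform(0, 2 * np.pi, size=n)
theta2 = rng.uniform(0, 2 * np.pi, size=(n, n))
phases = []
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    f = theta1 @ x + sum(theta2[j, k] * x[j] * x[k]
                         for j in range(n) for k in range(j + 1, n))
    phases.append(np.exp(1j * f))
D = np.diag(phases)

# Output distribution p(z) = |<z| H D H |0^n>|^2, then draw samples from it.
state = H @ D @ H @ np.eye(2 ** n)[:, 0]
p = np.abs(state) ** 2
samples = rng.choice(2 ** n, size=5, p=p)
print("output probabilities:", np.round(p, 3))
print("samples (integers encoding bit strings):", samples)
```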

2019 ◽  
Vol 19 (13&14) ◽  
pp. 1089-1115
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

doi: https://doi.org/10.26421/QIC19.13-14-2

Output probability distributions of several sub-universal quantum computing models cannot be classically efficiently sampled unless some unlikely consequences occur in classical complexity theory, such as a collapse of the polynomial-time hierarchy. These results, so-called quantum supremacy, do not, however, rule out super-polynomial-time classical simulations. In this paper, we study a "fine-grained" version of quantum supremacy that excludes some exponential-time classical simulations. First, we focus on two sub-universal models, namely, the one-clean-qubit model (or the DQC1 model) and the HC1Q model. Assuming certain conjectures in fine-grained complexity theory, we show that for any a > 0, output probability distributions of these models cannot be classically sampled within a constant multiplicative error in 2^{(1-a)N+o(N)} time, where N is the number of qubits. Next, we consider universal quantum computing. For example, we consider quantum computing over Clifford and T gates, and show that, under another fine-grained complexity conjecture, output probability distributions of Clifford+T quantum computing cannot be classically sampled in 2^{o(t)} time within a constant multiplicative error, where t is the number of T gates.
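For readers unfamiliar with the one-clean-qubit model, here is a minimal density-matrix sketch of the textbook DQC1 trace-estimation circuit: one pure qubit plus n maximally mixed qubits, with a Hadamard, a controlled-U, and a second Hadamard on the clean qubit. The random unitary and circuit choice are illustrative, not the paper's construction; the known identity P(0) = 1/2 + Re Tr(U)/2^{n+1} is checked numerically.

```python
# Toy DQC1 circuit: clean qubit |0><0| tensored with I/2^n, then H, C-U, H.
import numpy as np

n = 3
rng = np.random.default_rng(seed=0)
A = rng.normal(size=(2 ** n, 2 ** n)) + 1j * rng.normal(size=(2 ** n, 2 ** n))
U, _ = np.linalg.qr(A)                      # a random unitary via QR

# Initial state: pure |0> on the clean qubit, maximally mixed on the rest.
rho = np.kron(np.array([[1.0, 0.0], [0.0, 0.0]]), np.eye(2 ** n) / 2 ** n)

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = np.kron(H1, np.eye(2 ** n))             # Hadamard on the clean qubit only
CU = np.block([[np.eye(2 ** n), np.zeros((2 ** n, 2 ** n))],
               [np.zeros((2 ** n, 2 ** n)), U]])   # controlled-U

circ = H @ CU @ H
rho_out = circ @ rho @ circ.conj().T

# Probability of measuring the clean qubit as 0 = trace of the top-left block.
P0 = np.trace(rho_out[: 2 ** n, : 2 ** n]).real
print("P(0) measured            :", P0)
print("1/2 + Re Tr(U) / 2^(n+1) :", 0.5 + np.trace(U).real / 2 ** (n + 1))
```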


Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 106 ◽  
Author(s):  
Tomoyuki Morimae ◽  
Yuki Takeuchi ◽  
Harumichi Nishimura

We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched between two layers of Hadamard gates, and therefore it sits in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be classically efficiently sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique differs from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and the technique itself might therefore be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call probability distribution distinguishability with maximum norm (PDD-Max): decide whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to certain types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with a quantum polynomial-time Merlin and a classical probabilistic polynomial-time Arthur.
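A toy sketch of the sandwich structure described above: a classical reversible circuit, represented as a permutation of basis states, between two Hadamard layers. For brevity this simplified version applies Hadamards to every qubit and uses a CNOT-chain as the reversible circuit; the exact HC1Q wiring may differ, so treat this only as an illustration of the second level of the Fourier hierarchy.

```python
# H^{⊗n} · P · H^{⊗n} |0...0>, with P the permutation of a reversible circuit.
import numpy as np

n = 3
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)

def C(x):
    """Reversible CNOT chain: bit i flips if bit i-1 (post-update) is 1."""
    bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
    for i in range(1, n):
        bits[i] ^= bits[i - 1]
    return sum(b << (n - 1 - i) for i, b in enumerate(bits))

P = np.zeros((2 ** n, 2 ** n))
for x in range(2 ** n):
    P[C(x), x] = 1.0                     # permutation matrix of C

state = H @ P @ H @ np.eye(2 ** n)[:, 0]
print("output distribution:", np.round(np.abs(state) ** 2, 3))
```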


2019 ◽  
Vol 19 (9&10) ◽  
pp. 793-806
Author(s):  
Tomoyuki Morimae ◽  
Harumichi Nishimura ◽  
Yuki Takeuchi ◽  
Seiichiro Tani

Blind quantum computing enables a client who can only generate or measure single-qubit states to delegate quantum computing to a remote quantum server in such a way that the input, output, and program are hidden from the server. It is an open problem whether a completely classical client can delegate quantum computing blindly (in the information-theoretic sense). In this paper, we show that if a completely classical client can blindly delegate sampling of sub-universal models, such as the DQC1 model and the IQP model, then the polynomial-time hierarchy collapses to the third level. Our delegation protocol is one in which the client first sends a polynomial-length bit string to the server and the server then returns a single bit to the client. Generalizing the no-go result to more general setups remains open.
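A schematic of the one-round interaction pattern this no-go result addresses: one polynomial-length message from client to server, one bit back. The server below is a trivial stub illustrating only the message flow, not an actual sampler or an actual protocol from the paper.

```python
# One-round delegation shape: poly(n)-bit string out, a single bit back.
import secrets

def client_message(n: int) -> str:
    # In a hypothetical protocol this string would encode the (hidden) circuit.
    return format(secrets.randbits(4 * n), f"0{4 * n}b")   # poly(n) bits

def server_reply(message: str) -> int:
    # Placeholder server: returns one bit, as the protocol shape requires.
    return secrets.randbits(1)

msg = client_message(n=8)
bit = server_reply(msg)
print(f"client sent {len(msg)} bits; server returned {bit}")
```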


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Yuki Takeuchi ◽  
Tomoyuki Morimae ◽  
Masahito Hayashi

Measurement-based quantum computing is one of the most promising quantum computing models. Although various universal resource states have been proposed so far, it was open whether two Pauli bases alone suffice for both universal measurement-based quantum computing and its verification. In this paper, we construct a universal hypergraph state that requires only X- and Z-basis measurements for universal measurement-based quantum computing. We also show that universal measurement-based quantum computing on our hypergraph state can be verified in polynomial time using only X- and Z-basis measurements. Furthermore, to demonstrate an advantage of our hypergraph state, we construct a verifiable blind quantum computing protocol that requires only X- and Z-basis measurements for the client.
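For concreteness, a minimal sketch of a hypergraph state, assuming the standard construction: generalized-CZ gates (a phase of -1 when all qubits of a hyperedge are 1) applied to |+⟩^{⊗n} along each hyperedge. The hyperedges below are illustrative, not the paper's universal family; the X- and Z-basis statistics at the end mirror the two measurement bases the paper uses.

```python
# Build a 3-qubit hypergraph state and print its Z- and X-basis distributions.
import numpy as np

n = 3
hyperedges = [(0, 1), (0, 1, 2)]                 # a CZ edge and a CCZ hyperedge

state = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)   # |+>^{⊗n}
for x in range(2 ** n):
    bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
    for e in hyperedges:
        if all(bits[q] for q in e):              # generalized CZ: phase flip
            state[x] *= -1

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                           # rotate into the X basis

print("Z-basis distribution:", np.round(np.abs(state) ** 2, 3))
print("X-basis distribution:", np.round(np.abs(H @ state) ** 2, 3))
```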


2014 ◽  
Vol 79 (3) ◽  
pp. 859-881 ◽  
Author(s):  
EGOR IANOVSKI ◽  
RUSSELL MILLER ◽  
KENG MENG NG ◽  
ANDRÉ NIES

We study the relative complexity of equivalence relations and preorders from computability theory and complexity theory. Given binary relations R, S, a componentwise reducibility is defined by $R \le S \Leftrightarrow \exists f\, \forall x, y\, [x\,R\,y \leftrightarrow f(x)\,S\,f(y)]$. Here, f is taken from a suitable class of effective functions. For us the relations will be on natural numbers, and f must be computable. We show that there is a $\Pi^0_1$-complete equivalence relation, but no $\Pi^0_k$-complete one for k ≥ 2. We show that $\Sigma^0_k$ preorders arising naturally in the above-mentioned areas are $\Sigma^0_k$-complete. This includes polynomial-time m-reducibility on exponential-time sets, which is $\Sigma^0_2$; almost inclusion on r.e. sets, which is $\Sigma^0_3$; and Turing reducibility on r.e. sets, which is $\Sigma^0_4$.
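On a finite domain, the componentwise reducibility defined above can be checked directly: a candidate f witnesses R ≤ S exactly when x R y ↔ f(x) S f(y) for all pairs. The toy relations and witness f below are made up for illustration only.

```python
# Verify a finite witness f for the componentwise reducibility R <= S.
from itertools import product

dom = range(4)
R = {(x, y) for x, y in product(dom, repeat=2) if x % 2 == y % 2}    # same parity
S = {(x, y) for x, y in product(dom, repeat=2) if x // 2 == y // 2}  # same half

# f maps the parity classes {0,2} and {1,3} into the S-classes {0,1} and {2,3}.
f = {0: 0, 1: 2, 2: 1, 3: 3}

ok = all(((x, y) in R) == ((f[x], f[y]) in S)
         for x, y in product(dom, repeat=2))
print("f witnesses R <= S on this domain:", ok)   # True
```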


2021 ◽  
Vol 64 (5) ◽  
pp. 98-105
Author(s):  
Martin Grohe ◽  
Daniel Neuen

We investigate the interplay between the graph isomorphism problem, logical definability, and structural graph theory on a rich family of dense graph classes: graph classes of bounded rank width. We prove that the combinatorial Weisfeiler-Leman algorithm of dimension (3k + 4) is a complete isomorphism test for the class of all graphs of rank width at most k. A consequence of our result is the first polynomial-time canonization algorithm for graphs of bounded rank width. Our second main result addresses an open problem in descriptive complexity theory: we show that fixed-point logic with counting expresses precisely the polynomial-time properties of graphs of bounded rank width.
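For intuition, here is a sketch of the 1-dimensional Weisfeiler-Leman algorithm (color refinement). The paper's isomorphism test uses the (3k + 4)-dimensional variant, which colors tuples of vertices; this simple 1-WL version only illustrates the refinement idea. The example graphs are arbitrary.

```python
# Color refinement (1-WL): iteratively refine vertex colors by the multiset
# of neighbour colors until the coloring stabilizes.
def color_refinement(adj):
    """adj: dict mapping each vertex to a set of neighbours."""
    colors = {v: 0 for v in adj}                      # start with one color
    while True:
        # New color = (old color, sorted multiset of neighbour colors).
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colors = {v: palette[sig[v]] for v in adj}
        if new_colors == colors:
            return colors                             # stable coloring reached
        colors = new_colors

# A path and a star on 4 vertices get different stable color histograms,
# so 1-WL already distinguishes them.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(sorted(color_refinement(path).values()))   # [0, 0, 1, 1]
print(sorted(color_refinement(star).values()))   # [0, 0, 0, 1]
```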


Minerals ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 693
Author(s):  
Argyrios Papadopoulos ◽  
Stylianos Lazaridis ◽  
Afroditi Kipourou-Panagiotou ◽  
Nikolaos Kantiranis ◽  
Antonios Koroneos ◽  
...  

Beach sands from the Aggelochori coastline are investigated for their geochemistry, REE content, mineralogy, and provenance. These fluvial sands bear heavy-mineral-enriched horizons (containing minerals such as magnetite, zircon, ilmenite, hematite, rutile, and titanite) that can be distinguished by their black color and usually form through the action of sea waves, which deposit the heavy minerals and remove the lighter ones. After suitable processing of the samples (washing, sieving, drying, and magnetic separation), the mineral constituents and their abundances (wt.%) were estimated by XRD. Among the samples, the one that was both the finest grained and the most zircon-enriched (as suggested by XRPD data and optical microscopy) was selected for further geochemical analyses. The major- and trace-element contents were compared with those of previously studied REE-enriched beach sands from Kavala and Sithonia. Beach sands from the Aggelochori area appear to have relatively low REE contents. Considering the provenance of these sediments, we suggest that these sands are a product of the erosion of multiple sources, including the nearby Monopigado granite as well as metamorphic rocks, as indicated by the presence of rutile and of both ilmenite and magnetite in some samples. There are therefore indications of a complex flow pattern in the paleo-catchment area of the deposition.


2020 ◽  
Vol 20 (9&10) ◽  
pp. 747-765
Author(s):  
F. Orts ◽  
G. Ortega ◽  
E.M. Garzon

Despite the great interest of the scientific community in quantum computing, the scarcity and high cost of resources hinder progress in this field. Specifically, qubits are very expensive to build, so the few available quantum computers are severely limited in their number of qubits, which delays progress. This work presents new reversible circuits that optimize the resources needed to convert an N-digit signed binary number into two's complement representation. The benefits of our work are twofold: on the one hand, the proposed two's complement converters are fault-tolerant circuits and are also more efficient in terms of resources (essentially, quantum cost, number of qubits, and T-count) than those described in the literature. On the other hand, valuable information about the available converters and, moreover, about quantum adders is summarized in tables for interested researchers. The converters have been measured using robust metrics and compared with state-of-the-art circuits. The code to build them on a real quantum computer is given.
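As a classical reference point for what such converters compute, here is a sketch of N-digit sign-magnitude to two's complement conversion: when the sign bit is set, invert the magnitude bits and add one. The bit ordering and the handling of negative zero are generic assumptions, not the paper's circuit conventions.

```python
# Sign-magnitude -> two's complement, MSB-first bit lists.
def sign_magnitude_to_twos_complement(bits):
    """bits[0] is the sign bit; bits[1:] is the magnitude, MSB first."""
    sign, mag = bits[0], bits[1:]
    if sign == 0 or all(b == 0 for b in mag):
        return [0] + mag                     # non-negative (and -0): unchanged
    out = [1 - b for b in mag]               # bitwise NOT of the magnitude
    carry = 1                                # ripple-carry "+1", LSB first
    for i in reversed(range(len(out))):
        s = out[i] + carry
        out[i], carry = s % 2, s // 2
    return [1] + out

print(sign_magnitude_to_twos_complement([0, 1, 0, 1]))   # +5 -> [0, 1, 0, 1]
print(sign_magnitude_to_twos_complement([1, 1, 0, 1]))   # -5 -> [1, 0, 1, 1]
```

A reversible-circuit version of this conversion must realize the same mapping with invertible gates only, which is where the qubit, quantum-cost, and T-count trade-offs studied in the paper arise.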


2006 ◽  
Vol 503-504 ◽  
pp. 865-870 ◽  
Author(s):  
Yongjun Chen ◽  
Qu Dong Wang ◽  
Jianguo Peng ◽  
Chun Quan Zhai

Experiments were conducted to evaluate the potential for grain refinement and the resulting room-temperature mechanical properties in samples of AZ31 Mg alloy, and to investigate the relationship between one-step and two-step high-ratio extrusion (HRE). The one-step HRE was performed with a high extrusion ratio of 70:1 at 250, 300, and 350 °C. The two-step HRE used an extrusion ratio of 7 for the first step at 250, 300, and 350 °C, followed by a second-step extrusion with a ratio of 10 at the same temperatures (so the total ratio, 7 × 10 = 70, matches the one-step process). The initial grain size in the AZ31 ingot was 100 μm; after one-step HRE it was refined to about 5 μm, while two-step HRE at 250, 300, and 350 °C yielded 2, 4, and 7 μm, respectively, resulting in superior mechanical properties at ambient temperature. The microstructure after two-step HRE was finer and more uniform than after one-step HRE; the strengths after one-step and two-step HRE were similar, while the elongation after one-step HRE was markedly better than after two-step HRE. Dynamic recrystallization and the breaking of adjacent grains during HRE are introduced to explain the effects of one-step and two-step HRE on the microstructure and mechanical properties of AZ31 Mg alloy. The current results imply that the simple HRE method might be a feasible process for industrial applications, and that multi-step extrusion is effective for fabricating high-strength fine-grained hcp metals.

