Quantum Computational Advantage via 60-Qubit 24-Cycle Random Circuit Sampling

2021 ◽  
Author(s):  
Qingling Zhu ◽  
Sirui Cao ◽  
Fusheng Chen ◽  
Ming-Cheng Chen ◽  
Xiawei Chen ◽  
...  


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Valentin Gebhart ◽  
Luca Pezzè ◽  
Augusto Smerzi

Despite intensive research, the physical origin of the speed-up offered by quantum algorithms remains mysterious. No general physical quantity, such as entanglement, can be singled out as the essential useful resource. Here we report a close connection between the trace speed and the quantum speed-up in Grover's search algorithm implemented with pure and pseudo-pure states. For a noiseless algorithm, we find a one-to-one correspondence between the quantum speed-up and the polarization of the pseudo-pure state, which can be connected to a wide class of quantum statistical speeds. For time-dependent partial depolarization and for interrupted Grover searches, the speed-up is specifically bounded by the maximal trace speed that occurs during the algorithm operations. Our results quantify the quantum speed-up with a physical resource that is experimentally measurable and related to multipartite entanglement and quantum coherence.
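
A minimal numerical sketch of the pseudo-pure-state setting described above (all parameters are illustrative, not the paper's): the maximally mixed component of ρ = ε|s⟩⟨s| + (1 − ε)I/N is invariant under the Grover unitary, so the measured success probability is the pure-state result scaled by the polarization ε, plus a flat 1/N.

```python
import numpy as np

def grover_success_prob(n_qubits, marked, iterations, polarization):
    """Probability of measuring the marked item after Grover iterations,
    starting from rho = eps|s><s| + (1 - eps) I/N (eps = polarization)."""
    N = 2 ** n_qubits
    s = np.full(N, 1 / np.sqrt(N))              # uniform superposition |s>
    oracle = np.ones(N)
    oracle[marked] = -1.0                       # phase flip on the marked item
    psi = s.copy()
    for _ in range(iterations):
        psi = oracle * psi                      # oracle reflection
        psi = 2 * s * (s @ psi) - psi           # inversion about the mean
    p_pure = psi[marked] ** 2
    # The maximally mixed component is unchanged by any unitary, so it
    # contributes a flat 1/N to the measurement outcome.
    return polarization * p_pure + (1 - polarization) / N

for eps in (1.0, 0.5, 0.1):
    p = grover_success_prob(n_qubits=6, marked=5, iterations=6, polarization=eps)
    print(f"polarization {eps:.1f}: success probability {p:.3f}")
```

Running this for decreasing ε shows the success probability, and hence the advantage over the classical 1/N baseline, degrading in direct proportion to the polarization.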


2020 ◽  
Vol 34 (03) ◽  
pp. 2518-2526
Author(s):  
Sarath Sreedharan ◽  
Tathagata Chakraborti ◽  
Christian Muise ◽  
Subbarao Kambhampati

In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop, where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents not only to leverage existing strategies for handling model differences, such as explanations (Chakraborti et al. 2017) and explicability (Kulkarni et al. 2019), but also to exhibit novel behaviors generated by combining these strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how classical planning compilations for epistemic planning can be leveraged to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to planning with diverging user expectations that is amenable to a classical planning compilation while successfully combining previous work on explanation and explicability. We empirically show how our approach provides a computational advantage over our earlier approaches that rely on search in the space of models.
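
The following toy sketch (all names, costs, and weights are hypothetical, not the paper's compilation) illustrates the kind of trade-off such a formulation captures: an agent can act explicably, staying close to what the human's model predicts, or pay a communication cost to explain a model difference, and it selects the cheapest combination.

```python
# Toy expectation-aware plan selection: score each (plan, explain?) pair by
# the agent's own cost, an explicability penalty under the human's model,
# and an optional explanation cost that aligns the human's model.

def plan_cost(plan, model):
    return sum(model[action] for action in plan)

def expectation_aware_score(plan, agent_model, human_model,
                            explained, explanation_cost, w_explicability):
    effective_human_model = agent_model if explained else human_model
    # Explicability penalty: how much worse the plan looks to the human.
    penalty = max(0.0, plan_cost(plan, effective_human_model)
                  - plan_cost(plan, agent_model))
    return (plan_cost(plan, agent_model)
            + w_explicability * penalty
            + (explanation_cost if explained else 0.0))

agent_model = {"a": 1.0, "b": 1.0, "c": 5.0}
human_model = {"a": 1.0, "b": 4.0, "c": 1.0}   # human overestimates b's cost
candidates = [["a", "b"], ["a", "c"]]

best = min((expectation_aware_score(p, agent_model, human_model,
                                    explained, 1.5, 1.0), p, explained)
           for p in candidates for explained in (False, True))
print(best)   # cheapest combination of plan choice and explanation
```

Here the agent finds that explaining the cheap cost of "b" and then executing ["a", "b"] beats both silently acting explicably and executing the plan the human expects.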


Science ◽  
2020 ◽  
Vol 370 (6523) ◽  
pp. 1460-1463 ◽  
Author(s):  
Han-Sen Zhong ◽  
Hui Wang ◽  
Yu-Hao Deng ◽  
Ming-Cheng Chen ◽  
Li-Chao Peng ◽  
...  

1985 ◽  
Vol 107 (2) ◽  
pp. 245-255 ◽  
Author(s):  
M. Cwiakala ◽  
T. W. Lee

This paper presents an algorithm that uses an optimization technique to outline the boundary profile of a manipulator workspace and to evaluate the workspace volume quantitatively. The algorithm is applicable to general N-link manipulators with not only revolute joints but also joints of other types, such as prismatic and cylindrical joints. It is a partial-scanning technique that significantly reduces the number of scanning points needed to generate the workspace, and the method is particularly efficient for complicated manipulator geometry. The [3 × 3] dual-number matrix method is used as the basis for the analytical formulations, and, consequently, a computational advantage is gained. A comparative study against a previously used algorithm is given. Several examples involving industrial robots of various kinds demonstrate the capability of the algorithm.
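
A minimal sketch of the partial-scanning idea for a planar 2R arm (link lengths and joint limits are assumed, and this brute-force radial scan stands in for the paper's optimization-based boundary search and dual-number formulation): only the farthest reachable point in each scan direction is kept, so the boundary is traced without filling the interior.

```python
import numpy as np

L1, L2 = 1.0, 0.6                       # link lengths (assumed)
T1 = np.linspace(-np.pi, np.pi, 181)    # joint 1 range
T2 = np.linspace(-2.0, 2.0, 121)        # joint 2 range (limited)

t1, t2 = np.meshgrid(T1, T2, indexing="ij")
x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)   # forward kinematics
y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
r, phi = np.hypot(x, y), np.arctan2(y, x)

# Partial scan: keep only the maximal radius in each angular bin, so the
# number of retained points grows with the bin count, not the grid size.
bins = np.linspace(-np.pi, np.pi, 91)
idx = np.digitize(phi.ravel(), bins)
boundary = [r.ravel()[idx == k].max() for k in np.unique(idx)]
print(f"outer boundary radius by direction: "
      f"min {min(boundary):.2f}, max {max(boundary):.2f}")   # both ~ L1 + L2
```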


Geophysics ◽  
1998 ◽  
Vol 63 (5) ◽  
pp. 1532-1541 ◽  
Author(s):  
Jon Claerbout

Wind a wire onto a cylinder to create a helix. I show that a filter on the 1-D space of the wire mimics a 2-D filter on the cylindrical surface. Thus 2-D convolution can be done with a 1-D convolution program. I show some examples of 2-D recursive filtering (also called 2-D deconvolution or 2-D polynomial division). In 2-D as in 1-D, the computational advantage of recursive filters is the speed with which they propagate information over long distances. We can estimate 2-D prediction‐error filters (PEFs) that are assured of being stable for 2-D recursion. Such 2-D and 3-D recursions are general‐purpose preconditioners that vastly speed the solution of a wide class of geophysical estimation problems. The helix transformation also enables use of the partial‐differential equation of wave extrapolation as though it were an ordinary‐differential equation.
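
A minimal numpy sketch of the helix idea (the filter coefficients and grid size are arbitrary): each 2-D filter offset (dy, dx) becomes the single 1-D lag dy·nx + dx on the unwound plane, so one 1-D loop reproduces 2-D convolution everywhere except at the seam where rows wrap.

```python
import numpy as np
from scipy.signal import convolve2d

def helix_convolve(data, filt):
    """2-D convolution done as 1-D convolution on the helix: the plane is
    unwound row by row, and each 2-D offset maps to a single 1-D lag."""
    ny, nx = data.shape
    d1 = data.ravel().astype(float)        # wind the plane onto the wire
    out = np.zeros_like(d1)
    for (dy, dx), c in filt.items():       # causal filter: lags >= 0
        lag = dy * nx + dx
        out[lag:] += c * d1[:d1.size - lag]
    return out.reshape(ny, nx)

rng = np.random.default_rng(0)
d = rng.standard_normal((6, 8))
f = {(0, 0): 1.0, (0, 1): -0.5, (1, 0): -0.5, (1, 1): 0.25}

h = helix_convolve(d, f)
k = np.array([[1.0, -0.5], [-0.5, 0.25]])
ref = convolve2d(d, k, mode="full")[:6, :8]

# Agreement everywhere except column 0, where the helix seam wraps a
# sample in from the end of the previous row.
print(np.allclose(h[:, 1:], ref[:, 1:]))   # True
```

The same trick is what makes 2-D recursive filtering possible: a stable 1-D recursion on the helix is a stable 2-D recursion on the plane.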


2009 ◽  
Vol 26 (1) ◽  
pp. 187-230 ◽  
Author(s):  
Lung-fei Lee ◽  
Xiaodong Liu

In this paper, we extend the GMM framework for the estimation of the mixed-regressive spatial autoregressive model of Lee (2007a) to a high-order mixed-regressive spatial autoregressive model with spatial autoregressive disturbances. Identification of such a general model is considered. The GMM approach has a computational advantage over the conventional ML method. The proposed GMM estimators are shown to be consistent and asymptotically normal. The best GMM estimator is derived within the class of GMM estimators based on linear and quadratic moment conditions of the disturbances. The best GMM estimator is asymptotically as efficient as the ML estimator under normality, more efficient than the QML estimator otherwise, and efficient relative to the G2SLS estimator.
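
A minimal sketch of the simplest member of this estimator family: a first-order SAR model estimated with linear moment conditions only (the instrument set [X, WX, W²X] and the random sparse W are illustrative, and this is a G2SLS-style baseline rather than the paper's best GMM estimator with quadratic moments). The computational advantage is visible in the code: no log-determinant of (I − λW) is ever evaluated, unlike ML.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Random sparse row-normalized spatial weights (purely illustrative).
W = rng.random((n, n)) * (rng.random((n, n)) < 0.02)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True).clip(min=1e-12)

lam_true, beta_true = 0.4, np.array([1.0, -0.5])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = np.linalg.solve(np.eye(n) - lam_true * W,
                    X @ beta_true + rng.standard_normal(n))

# Linear-moment estimator: instrument the endogenous Wy with [X, WX, W^2 X].
D = np.column_stack([W @ y, X])                       # regressors [Wy, X]
H = np.column_stack([X, W @ X[:, 1:], W @ (W @ X[:, 1:])])
P = H @ np.linalg.solve(H.T @ H, H.T)                 # projection on instruments
theta = np.linalg.solve(D.T @ P @ D, D.T @ (P @ y))
print("estimates (lambda, beta0, beta1):", np.round(theta, 3))
```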


2019 ◽  
Author(s):  
David R. Mandel ◽  
Mandeep K. Dhami ◽  
Serena Tran ◽  
Daniel Irwin

Probability information is regularly communicated to experts who must fuse multiple estimates to support decision-making. Such information is often communicated verbally (e.g., “likely”) rather than with precise numeric (point) values (e.g., “.75”), yet people are not taught to perform arithmetic on verbal probabilities. We hypothesized that the accuracy and logical coherence of averaging and multiplying probabilities would be poorer when individuals receive probability information in verbal rather than numeric point format. In four experiments (N = 213, 201, 26, and 343, respectively), we manipulated probability communication format between-subjects. Participants averaged and multiplied sets of four probabilities. Across experiments, arithmetic accuracy and coherence were significantly better with point than with verbal probabilities. These findings generalized between expert (intelligence analysts) and non-expert samples and when controlling for calculator use. Experiment 4 revealed an important qualification: whereas accuracy and coherence were better among participants presented with point probabilities than with verbal probabilities, imprecise numeric probability ranges (e.g., “.70 to .80”) afforded no computational advantage over verbal probabilities. Experiment 4 also revealed that the advantage of the point over the verbal format is partially mediated by strategy use. Participants presented with point estimates are more likely to use mental computation than guesswork, and mental computation was found to be associated with better accuracy. Our findings suggest that where computation is important, probability information should be communicated to end users with precise numeric probabilities.
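
A small worked example of the two arithmetic tasks (the probabilities are invented): because averaging and multiplication are monotone in each argument, arithmetic on numeric ranges reduces to applying the operation to the lower and upper endpoints.

```python
import math

probs = [0.75, 0.60, 0.80, 0.50]          # point estimates
ranges = [(0.70, 0.80), (0.55, 0.65), (0.75, 0.85), (0.45, 0.55)]

avg = sum(probs) / len(probs)
prod = math.prod(probs)

# zip(*ranges) yields the tuple of lower bounds, then the tuple of upper
# bounds; monotonicity means these endpoints bound the true result.
avg_range = tuple(sum(e) / len(e) for e in zip(*ranges))
prod_range = tuple(math.prod(e) for e in zip(*ranges))

print(f"point:  average {avg:.3f}, product {prod:.3f}")
print(f"range:  average {avg_range}, "
      f"product {tuple(round(v, 3) for v in prod_range)}")
```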


2001 ◽  
Vol 7 (S2) ◽  
pp. 1034-1035
Author(s):  
J.F. Hainfeld ◽  
F.R. Furuya ◽  
R.D. Powell ◽  
W. Liu

Current computer chip technology is based on lithographic methods that limit components to ∼0.3 microns in size, due to the wavelength of light and the photoresist/coating/etching processes. The size directly determines computer speed, complexity, and cost, and advances in computers over the years have mostly been due to reductions in component size. It is here proposed to construct nanowires that are approximately 2 nm in diameter, or 150 times smaller than currently available. In 2 dimensions, this translates into a 150² = 22,500-fold computational advantage. Additionally, 3-dimensional construction is proposed, bringing the potential improvement factor to 150³ = 3,375,000. While it is probably unrealistic that this packing density can be fully achieved, even several orders of magnitude of improvement over current technology would be significant. A wire width of 2 nm may be achieved by placing gold quantum dots along a DNA template. The ends of the DNA nanowire may be designed with sequences that attach by hybridization to complementary sequences on target connection pads, so that the two ends will seek out and automatically wire correctly in solution. This strategy is easily adaptable to 3-dimensional wiring. Conduction between gold quantum dots may be studied as a function of spacing, size, and coatings.
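
The stated scaling factors follow directly from the linear shrink; a three-line check:

```python
shrink = 0.3e-6 / 2e-9       # 0.3-micron lithography vs. 2-nm nanowires
print(shrink)                # 150.0x linear
print(shrink ** 2)           # 22500.0x packing advantage in 2-D
print(shrink ** 3)           # 3375000.0x with 3-D construction
```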


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Wei Wang ◽  
Shanghua Li ◽  
Jingjing Gao

For the constrained minimization of maximum eigenvalue functions, since the objective is nonsmooth, the approximate inexact accelerated proximal gradient (AIAPG) method (Wang et al., 2013) can be used to solve a smooth approximation of the problem. We consider the problem min{λ_max(X) + g(X) : X ∈ Sⁿ}, where λ_max(X) is the maximum eigenvalue function and g(X) is a proper lower semicontinuous convex function (possibly nonsmooth), taking g(X) = δ_Ω(X), the indicator function of Ω := {X ∈ Sⁿ : F(X) = b, X ⪰ 0}. The approximate minimizer generated by the AIAPG method must be contained in Ω; otherwise the method is invalid. In this paper, we consider the case where the approximate minimizer cannot be guaranteed to lie in Ω and propose two strategies: constructing a feasible solution, and designing a new method, the relaxed inexact accelerated proximal gradient (RIAPG) method. Compared with the former, the latter strategy overcomes the drawback that the required conditions are too strict. Furthermore, the RIAPG method inherits the global iteration complexity and attractive computational advantage of the AIAPG method.
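
A generic FISTA-style sketch of the accelerated-proximal-gradient machinery involved (this is not the paper's AIAPG or RIAPG: the smoothing f_μ(X) = μ log Σᵢ exp(λᵢ(X)/μ) and the simpler feasible set Ω = {X ⪰ 0, tr X = 1}, whose Frobenius projection is spectral, are chosen so the prox step is computable in a few lines).

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def smoothed_lmax_grad(X, mu):
    """Gradient of f_mu(X) = mu*logsumexp(eig(X)/mu), a smooth surrogate
    for lambda_max; the gradient is U diag(softmax(eig/mu)) U^T."""
    lam, U = np.linalg.eigh(X)
    w = np.exp((lam - lam.max()) / mu)
    w /= w.sum()
    return (U * w) @ U.T

def apg(X0, mu=0.05, step=0.05, iters=300):
    """FISTA-style accelerated proximal gradient; the prox of the indicator
    of Omega = {X psd, tr X = 1} is spectral projection onto the simplex."""
    X, Y, t = X0.copy(), X0.copy(), 1.0
    for _ in range(iters):
        G = smoothed_lmax_grad(Y, mu)
        lam, U = np.linalg.eigh(Y - step * G)
        X_new = (U * proj_simplex(lam)) @ U.T          # prox / projection step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = X_new + ((t - 1) / t_new) * (X_new - X)    # momentum extrapolation
        X, t = X_new, t_new
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
S = M @ M.T
X = apg(S / np.trace(S))
print("lambda_max at solution:", round(np.linalg.eigvalsh(X).max(), 3))  # ~1/8
```

With this Ω every prox output is feasible by construction; the paper's concern is precisely the settings where the (approximate) prox step cannot guarantee that.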


2021 ◽  
Vol 2 (4) ◽  
pp. 448-461
Author(s):  
Teresa Alcamo ◽  
Alfredo Cuzzocrea ◽  
Giovanni Pilato ◽  
Daniele Schicchi

We analyze and compare five deep-learning neural architectures for irony and sarcasm detection in Italian. We briefly analyze the model architectures to choose the best compromise between performance and complexity. The results show the effectiveness of such systems, which achieve an F1-score of 93% in the best case. As a case study, we also illustrate a possible embedding of the neural systems in a cloud computing infrastructure to exploit the computational advantage of such an approach in tackling big data.
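
As a rough illustration of the kind of neural architecture such comparisons involve (the paper's five architectures are not specified here; this BiLSTM skeleton and all hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

class IronyClassifier(nn.Module):
    """Minimal bidirectional-LSTM text classifier skeleton."""
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # binary: ironic or not

    def forward(self, token_ids):
        x = self.emb(token_ids)
        _, (h, _) = self.lstm(x)
        h = torch.cat([h[0], h[1]], dim=-1)    # final states, both directions
        return self.head(h).squeeze(-1)        # logits

model = IronyClassifier()
logits = model(torch.randint(1, 30000, (4, 20)))   # batch of 4 toy sequences
print(torch.sigmoid(logits))                       # irony probabilities
```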

