Classical and Quantum Algorithms for Generic Syndrome Decoding Problems and Applications to the Lee Metric

Author(s):  
André Chailloux ◽  
Thomas Debris-Alazard ◽  
Simona Etinski
2018 ◽  
Author(s):  
Rajendra K. Bera

It now appears that quantum computers are poised to enter the world of computing and establish their dominance, especially in the cloud. Turing machines (classical computers), tied to the laws of classical physics, will not vanish from our lives but will begin to play a subordinate role to quantum computers, which are tied to the enigmatic laws of quantum physics that deal with such non-intuitive phenomena as superposition, entanglement, collapse of the wave function, and teleportation, all occurring in Hilbert space. The aim of this 3-part paper is to introduce readers to a core set of quantum algorithms based on the postulates of quantum mechanics, and to reveal the amazing power of quantum computing.
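The two phenomena most quantum algorithms rely on, superposition and entanglement, can be illustrated with plain state-vector arithmetic. The following minimal NumPy sketch (an illustration, not from the paper) prepares a Bell state by applying a Hadamard gate followed by a CNOT:

```python
import numpy as np

# Single-qubit basis state |0> and the standard gate matrices.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Superposition: H|0> = (|0> + |1>)/sqrt(2).
plus = H @ ket0

# Entanglement: CNOT on (H|0>) (x) |0> yields the Bell state
# (|00> + |11>)/sqrt(2), which cannot be factored into two
# independent single-qubit states.
bell = CNOT @ np.kron(plus, ket0)
print(bell.round(3))  # [0.707+0.j 0.+0.j 0.+0.j 0.707+0.j]
```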


Author(s):  
Lee Braine ◽  
Daniel Egger ◽  
Jennifer Glick ◽  
Stefan Woerner

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Davide Pastorello ◽  
Enrico Blanzieri ◽  
Valter Cavecchia

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Zhikuan Zhao ◽  
Jack K. Fitzsimons ◽  
Patrick Rebentrost ◽  
Vedran Dunjko ◽  
Joseph F. Fitzsimons

Abstract
Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum-accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove using smoothed analysis that if the data analysis algorithm is robust against small entry-wise input perturbations, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data is subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in low-rank cases. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model, with quantum algorithms or with quantum-inspired classical algorithms in the low-rank cases.
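The robustness criterion at the heart of this result can be seen concretely: amplitude encoding maps a data vector to the amplitudes of a normalized state, and a small entry-wise perturbation barely moves that state. A minimal NumPy sketch (illustrative only; the vector size and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def amplitude_encode(x):
    # Map a classical vector to the amplitude vector of a normalized state.
    return x / np.linalg.norm(x)

# A high-dimensional data point and a small entry-wise perturbation,
# as in the smoothed-analysis setting described in the abstract.
x = rng.normal(size=1024)
x_noisy = x + 1e-3 * rng.normal(size=x.shape)

psi = amplitude_encode(x)
psi_noisy = amplitude_encode(x_noisy)

# A perturbation-robust algorithm cannot distinguish the two encodings
# beyond the noise scale: their fidelity is essentially 1.
fidelity = abs(np.dot(psi, psi_noisy)) ** 2
print(f"fidelity between clean and perturbed encodings: {fidelity:.8f}")
```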


Author(s):  
Kai Li ◽  
Qing-yu Cai

Abstract
Quantum algorithms can greatly speed up computation in solving some classical problems, but the computational power of quantum computers is also restricted by the laws of physics. Due to the quantum time-energy uncertainty relation, there is a lower limit on the evolution time of a given quantum operation, and therefore time complexity must be considered when the number of serial quantum operations is particularly large. When the key length is on the order of kilobytes (encryption and decryption can still be completed in a few minutes using standard programs), it will take at least 50-100 years for NTC (Neighbor-only, Two-qubit gate, Concurrent) architecture ion-trap quantum computers to execute Shor's algorithm. For NTC-architecture superconducting quantum computers with a code distance of 27 for error correction, when the key length is increased to 16 KB, the cracking time also increases to 100 years, which far exceeds the coherence time. This shows the robustness of the updated RSA against practical quantum computing attacks.
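The shape of this estimate is easy to reproduce. The sketch below is a back-of-the-envelope version only: the cubic serial-depth scaling for Shor's algorithm on an NTC architecture and the effective time per serial logical gate are assumed round numbers, not the paper's exact figures.

```python
# Back-of-the-envelope estimate of Shor's algorithm runtime vs. RSA key length.
# Assumptions (not the paper's exact figures): serial circuit depth ~ n^3 on an
# NTC architecture, and an effective 100 microseconds per serial logical gate
# once error-correction overhead is included.

GATE_TIME_S = 1e-4
SECONDS_PER_YEAR = 3.15e7

def shor_runtime_years(key_bits, depth_exponent=3):
    serial_gates = key_bits ** depth_exponent
    return serial_gates * GATE_TIME_S / SECONDS_PER_YEAR

for kb in (2, 4, 8, 16):          # key length in kilobytes
    n = kb * 1024 * 8             # key length in bits
    print(f"{kb:>2} KB key -> ~{shor_runtime_years(n):,.0f} years")
```

Under these assumptions a 4 KB key already requires on the order of a century of uninterrupted serial gate time, which is the qualitative point the abstract makes.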


2021 ◽  
Vol 2 (1) ◽  
pp. 1-35
Author(s):  
Adrien Suau ◽  
Gabriel Staffelbach ◽  
Henri Calandra

In the last few years, several quantum algorithms that try to address the problem of solving partial differential equations have been devised: on the one hand, "direct" quantum algorithms that aim at encoding the solution of the PDE by executing one large quantum circuit; on the other hand, variational algorithms that approximate the solution of the PDE by executing several small quantum circuits and taking advantage of classical optimisers. In this work, we propose an experimental study of the costs (in terms of gate number and execution time on idealised hardware created from realistic gate data) associated with one of the "direct" quantum algorithms: the wave equation solver devised in [32]. We show that our implementation of the quantum wave equation solver agrees with the theoretical big-O complexity of the algorithm. We also explain the implementation steps in great detail and discuss possible improvements. Finally, our implementation proves experimentally that some PDEs can be solved on a quantum computer, even though the chosen direct quantum algorithm requires error-corrected quantum chips, which are not believed to be available in the short term.
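Checking an implementation against its theoretical big-O complexity typically amounts to measuring resource counts at several problem sizes and fitting a power law. A minimal sketch of that procedure (the gate counts below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Fit log(gate_count) = slope * log(n) + c; the slope estimates the
# empirical complexity exponent, to be compared with the theoretical one.
qubits = np.array([4, 6, 8, 10, 12])
gate_counts = np.array([1.2e3, 4.1e3, 9.8e3, 1.9e4, 3.3e4])  # made-up data

slope, _ = np.polyfit(np.log(qubits), np.log(gate_counts), 1)
print(f"empirical scaling exponent: {slope:.2f}")  # ~3 for these numbers
```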


Author(s):  
Giovanni Acampora ◽  
Roberto Schiattarella

Abstract
Quantum computers have become a reality thanks to the efforts of several major companies in developing innovative technologies that enable the use of quantum effects in computation, paving the way towards the design of efficient quantum algorithms for different application domains, from finance and chemistry to artificial and computational intelligence. However, there are still technological limitations that prevent the correct design of quantum algorithms, compromising the achievement of the so-called quantum advantage. Specifically, a major limitation in the design of a quantum algorithm is its proper mapping to a specific quantum processor so that the underlying physical constraints are satisfied. This hard problem, known as circuit mapping, is a critical task in the quantum world, and it needs to be addressed efficiently for quantum computers to work correctly and productively. In order to bridge this gap, this paper introduces a first circuit-mapping approach based on deep neural networks, which opens a completely new scenario in which the correct execution of quantum algorithms is supported by classical machine learning techniques. As shown in the experimental section, the proposed approach speeds up current state-of-the-art mapping algorithms when used on 5-qubit IBM Q processors, while maintaining suitable mapping accuracy.
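The constraint a circuit mapper must satisfy can be stated compactly: assign logical qubits to physical qubits so that every two-qubit gate acts on a pair connected in the device's coupling map (otherwise SWAP gates must be inserted). The brute-force toy below only illustrates that constraint; a learned mapper such as the paper's neural-network approach replaces the exhaustive search with a fast prediction. The line topology and gate list are assumptions for the example.

```python
from itertools import permutations

# A 5-qubit line topology, like the smallest IBM Q devices.
coupling = {(0, 1), (1, 2), (2, 3), (3, 4)}
edges = coupling | {(b, a) for a, b in coupling}   # undirected connectivity

# Logical two-qubit gates in the circuit to be mapped.
gates = [(0, 2), (1, 2)]

def valid(mapping):
    # A mapping is valid if every gate acts on physically adjacent qubits.
    return all((mapping[a], mapping[b]) in edges for a, b in gates)

# Exhaustive search over assignments of logical qubits 0..2 to physical qubits.
for perm in permutations(range(5), 3):
    if valid(perm):
        print("valid mapping (logical -> physical):", dict(enumerate(perm)))
        break
```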


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1690
Author(s):  
Teague Tomesh ◽  
Pranav Gokhale ◽  
Eric R. Anschuetz ◽  
Frederic T. Chong

Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set into superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data-loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets where coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well (which is necessary for a quantum advantage over k-means on the entire data set) appears to be challenging.
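The coreset idea itself is classical: replace a large data set with a small weighted sample whose weighted k-means cost approximates the cost on the full data. A minimal sketch, with sensitivity-based sampling simplified to a crude distance-proportional importance score (the data, coreset size, and importance proxy are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two well-separated Gaussian blobs.
X = rng.normal(size=(10_000, 2)) + rng.choice([-5.0, 5.0], size=(10_000, 1))

# Crude sensitivity proxy: distance to the overall mean.
imp = np.linalg.norm(X - X.mean(axis=0), axis=1)
p = imp / imp.sum()

m = 50                                            # coreset size
idx = rng.choice(len(X), size=m, replace=True, p=p)
coreset, weights = X[idx], 1.0 / (m * p[idx])     # unbiased reweighting

def kmeans_cost(points, centers, w=None):
    # Sum of (weighted) squared distances to the nearest center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
    return (d2 if w is None else w * d2).sum()

centers = np.array([[-5.0, -5.0], [5.0, 5.0]])
print("full-data cost:", round(kmeans_cost(X, centers)))
print("coreset cost:  ", round(kmeans_cost(coreset, centers, weights)))
```

The two printed costs agree to within sampling error, which is what lets the small weighted instance stand in for the full data set inside the QAOA formulation.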

