A hardware based technique to reduce the timing complexity of the number factoring problem

2021 ◽  
Author(s):  
Pirouz Pourdowlat

Most digital circuits developed to implement algorithms can benefit from an increase in clock speed, but they do not completely map the problem onto all available silicon resources. We introduce a hardware-based scheme capable of effectively exploiting technology, specifically the increase in silicon area, to improve the computational time of complex applications. In this thesis, we apply this scheme to the factoring problem, which requires exponential time (with respect to the number of bits of n) on conventional computers and can only be solved in polynomial time on quantum computers. The scheme successfully maps the problem onto most of the silicon area of an Altera Stratix FPGA. The results show that the scheme reduces the time complexity to a polynomial rate with respect to the number of bits of n, at the cost of an exponential rate of silicon usage with respect to the number of bits of n. Our analysis shows that the new scheme scales with technology speed and available space and could be applied to other applications to overcome the performance limitations of conventional systems.


2007 ◽  
Vol 18 (04) ◽  
pp. 715-725
Author(s):  
CÉDRIC BASTIEN ◽  
JUREK CZYZOWICZ ◽  
WOJCIECH FRACZAK ◽  
WOJCIECH RYTTER

Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one processes compressed strings without decompression, while the other processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario despite its worst-case exponential time complexity (the first is polynomial). The study has been conducted in the context of network packet classification, where simple grammars represent the classification policies.


2011 ◽  
Vol 22 (02) ◽  
pp. 395-409 ◽  
Author(s):  
HOLGER PETERSEN

We investigate the efficiency of simulations of storages by several counters. A simulation of a pushdown store is described which is optimal in the sense that reducing the number of counters of a simulator leads to an increase in time complexity. The lower bound also establishes a tight counter hierarchy in exponential time. Then we turn to simulations of a set of counters by a different number of counters. We improve and generalize a known simulation in polynomial time. Greibach has shown that adding s + 1 counters increases the power of machines working in time n^s. Using a new family of languages, we show here a tight hierarchy result for machines with the same polynomial time-bound. We also prove hierarchies for machines with a fixed number of counters and with growing polynomial time-bounds. For machines with one counter and an additional "store zero" instruction we establish the equivalence of real-time and linear time. If at least two counters are available, the classes of languages accepted in real-time and linear time can be separated.
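As a concrete illustration of how a pushdown store can be simulated by counters, here is the textbook encoding (not the paper's optimal construction): the stack contents are held as a single number in base k, and push/pop become multiplication and division by k, each realized using only unit increments and decrements together with an auxiliary counter. Stack symbols are taken from 1..k-1 so that 0 encodes the empty stack.

```python
def push(counter, symbol, base=4):
    """Push `symbol` (1 <= symbol < base) onto the stack encoded in `counter`:
    multiply by `base` using only unit decrements/increments and an auxiliary
    counter, then add the symbol with unit increments."""
    aux = 0
    while counter > 0:          # transfer counter to aux, scaled by base
        counter -= 1
        for _ in range(base):
            aux += 1
    for _ in range(symbol):
        aux += 1
    return aux

def pop(counter, base=4):
    """Pop the top symbol: divide by `base` by counting down one unit at a
    time, carrying into the quotient every `base` steps."""
    quotient, remainder = 0, 0
    while counter > 0:
        counter -= 1
        remainder += 1
        if remainder == base:
            remainder = 0
            quotient += 1
    return quotient, remainder
```

Because every push or pop walks the entire counter value, a single stack operation costs time proportional to the encoded number, which is the kind of time/counter trade-off the simulations above quantify precisely.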


2016 ◽  
Vol 14 (07) ◽  
pp. 1650036 ◽  
Author(s):  
Suzhen Yuan ◽  
Xia Mao ◽  
Lijiang Chen ◽  
Xiaofa Wang

To reduce the time complexity of quantum morphology operations, two improved quantum dilation and erosion operations are proposed. Quantum parallelism is exploited in the design of these operations; consequently, the time complexity is greatly reduced compared with the previous quantum dilation and erosion operations. Designing the quantum dilation and erosion operations requires the neighborhood information of each pixel. To obtain it, a quantum position shifting transformation is utilized, which stores the neighborhood information in a quantum image set: the neighborhood information of the pixel at location (x, y) is stored at the same location (x, y) of the other images in the set. All pixels are processed simultaneously, which is a manifestation of quantum parallelism. The time complexity analysis shows that these quantum operations have polynomial-time complexity, much lower than the exponential-time complexity of the previous version.
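The shift-and-stack idea has a direct classical analogue, sketched below with NumPy: each cyclically shifted copy of the image places one neighbor of pixel (x, y) at location (x, y), so the whole 3x3 neighborhood of every pixel is available "at the same address" across the stack, and dilation is the pixelwise maximum over the nine copies. The quantum operations realize the shifts as unitary position transformations and process all pixels in superposition, which this classical sketch cannot show; the cyclic boundary handling is an assumption of the sketch.

```python
import numpy as np

def dilate3x3(img):
    """3x3 binary dilation via shifted copies: np.roll plays the role of
    the quantum position shifting transformation."""
    shifted = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.max(np.stack(shifted), axis=0)

def erode3x3(img):
    """Erosion is the dual: pixelwise minimum over the shifted copies."""
    shifted = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.min(np.stack(shifted), axis=0)
```

Dilating a single foreground pixel grows it into a 3x3 block, and eroding that block shrinks it back to the single pixel whose entire neighborhood is foreground.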


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Israel F. Araujo ◽  
Daniel K. Park ◽  
Francesco Petruccione ◽  
Adenilton J. da Silva

Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms to create arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy that exchanges computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy allows the quantum speedup of tasks that require loading a significant volume of information into quantum devices.
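To make the loading cost concrete, the standard divide-and-conquer recursion for amplitude encoding can be sketched classically: split the amplitude vector into halves, record the Ry angle that distributes norm between them, and recurse. The sketch below assumes a real, nonnegative input vector; the paper's contribution, executing the levels of this angle tree in polylogarithmic circuit depth using entangled ancillary qubits, is a circuit-level property that a classical sketch does not capture.

```python
import numpy as np

def state_prep_angles(amplitudes):
    """Binary tree of Ry rotation angles for loading a real, nonnegative
    vector of length N = 2^n; returns one array of angles per tree level,
    root first. The tree has O(N) nodes in total, matching the O(N)
    sequential depth of the naive loading circuit."""
    level = np.asarray(amplitudes, dtype=float)
    tree = []
    while len(level) > 1:
        pairs = level.reshape(-1, 2)
        # |v> = cos(t/2)|left> + sin(t/2)|right>  =>  t = 2 * atan2(right, left)
        tree.append(2 * np.arctan2(pairs[:, 1], pairs[:, 0]))
        level = np.linalg.norm(pairs, axis=1)
    return tree[::-1]
```

For the uniform 4-dimensional vector, every node of the tree is a pi/2 rotation (an equal split of norm at each level).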


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Daniel Vert ◽  
Renaud Sirdey ◽  
Stéphane Louise

This paper experimentally investigates the behavior of analog quantum computers, as commercialized by D-Wave, when confronted with instances of the maximum cardinality matching problem that are specifically designed to be hard to solve by means of simulated annealing. We benchmark a D-Wave "Washington" (2X) with 1098 operational qubits on various sizes of such instances and observe that for all but the most trivially small of these it fails to obtain an optimal solution. Thus, our results suggest that quantum annealing, at least as implemented in a D-Wave device, falls into the same pitfalls as simulated annealing, and they hence provide additional evidence that there exist polynomial-time problems that such a machine cannot solve efficiently to optimality. Additionally, we investigate the extent to which the qubit interconnection topologies explain these experimental results. In particular, we provide evidence that the sparsity of these topologies, which leads to QUBO problems of artificially inflated size, can partly explain the aforementioned disappointing observations. Therefore, this paper hints that denser interconnection topologies are necessary to unleash the potential of the quantum annealing approach.
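For reference, a standard QUBO encoding of maximum cardinality matching, of the kind an annealer would be handed, looks like this: one binary variable per edge, a reward of -1 for each selected edge, and a penalty whenever two selected edges share a vertex. The penalty weight here is a hypothetical choice for the sketch, and the exhaustive minimizer stands in for the annealer.

```python
from itertools import combinations, product

def qubo_energy(x, edges, penalty=2.0):
    """QUBO objective for maximum-cardinality matching: -1 per selected
    edge, +penalty per selected pair of edges sharing a vertex."""
    energy = -sum(x)
    for (i, e1), (j, e2) in combinations(enumerate(edges), 2):
        if x[i] and x[j] and set(e1) & set(e2):
            energy += penalty
    return energy

def ground_state(edges):
    """Exhaustive minimization over all bitstrings (what the annealer
    approximates)."""
    return min(product((0, 1), repeat=len(edges)),
               key=lambda x: qubo_energy(x, edges))
```

On the 4-vertex path with edges (0,1), (1,2), (2,3), the ground state selects the two disjoint end edges, i.e. a maximum matching of size 2. Embedding such a problem onto a sparse qubit topology requires chains of physical qubits per variable, which is the size inflation discussed above.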


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 329
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions where even exponential-time classical simulations are excluded assuming certain classical fine-grained complexity conjectures. All these fine-grained results are, however, about the hardness of strong simulations or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result can be shown for a more realistic setup, namely, additive-error sampling. In this paper, we show the additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.


2011 ◽  
Vol 21 (07) ◽  
pp. 1217-1235 ◽  
Author(s):  
VÍCTOR BLANCO ◽  
PEDRO A. GARCÍA-SÁNCHEZ ◽  
JUSTO PUERTO

This paper presents a new methodology to compute the number of numerical semigroups of given genus or Frobenius number. We apply generating function tools to the bounded polyhedron that classifies the semigroups with given genus (or Frobenius number) and multiplicity. First, we give theoretical results about the polynomial-time complexity of counting these semigroups. We also illustrate the methodology analyzing the cases of multiplicity 3 and 4 where some formulas for the number of numerical semigroups for any genus and Frobenius number are obtained.
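For intuition on the objects being counted, here is a naive brute-force counter (exponential-time, unlike the generating-function approach above): a numerical semigroup of genus g has all of its gaps in {1, ..., 2g-1}, so one can enumerate candidate gap sets of size g and keep those whose complement is closed under addition.

```python
from itertools import combinations

def count_semigroups_by_genus(genus):
    """Count numerical semigroups with exactly `genus` gaps by brute force."""
    if genus == 0:
        return 1  # the semigroup N itself
    count = 0
    for gaps in combinations(range(1, 2 * genus), genus):
        gapset = set(gaps)
        frobenius = max(gaps)
        elements = [n for n in range(1, frobenius) if n not in gapset]
        # Closure under addition only needs checking below the Frobenius
        # number; every larger integer is in the semigroup by construction.
        if all(a + b not in gapset
               for i, a in enumerate(elements)
               for b in elements[i:]):
            count += 1
    return count
```

The first few counts by genus are 1, 1, 2, 4, 7, 12, ...; for example, the two semigroups of genus 2 have gap sets {1, 2} and {1, 3}.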


2013 ◽  
Vol 23 (06) ◽  
pp. 1521-1531 ◽  
Author(s):  
JONAH HOROWITZ

This paper examines the computational complexity of determining whether or not an algebra satisfies a certain Mal'Cev condition. First, we define a class of Mal'Cev conditions whose satisfaction can be determined in polynomial time (special cube term satisfying the DCP) when the algebra in question is idempotent and provide an algorithm through which this determination may be made. The aforementioned class notably includes near unanimity terms and edge terms of fixed arity. Second, we define a different class of Mal'Cev conditions whose satisfaction, in general, requires exponential time to determine (Mal'Cev conditions satisfiable by CPB0 operations).

