A divide-and-conquer algorithm for quantum state preparation

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Israel F. Araujo ◽  
Daniel K. Park ◽  
Francesco Petruccione ◽  
Adenilton J. da Silva

Abstract Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms for creating arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage, using a quantum circuit with polylogarithmic depth and information entangled in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy that exchanges computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy will enable quantum speedups of tasks that require loading a significant volume of information into quantum devices.
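The core of the divide-and-conquer idea can be illustrated classically. Below is a minimal NumPy sketch (not the authors' circuit construction, which also involves the ancilla layout that yields polylogarithmic depth) that builds the binary tree of RY rotation angles used to load a real, non-negative amplitude vector; each tree level halves the problem, which is the divide step being exploited.

```python
import numpy as np

def angle_tree(amplitudes):
    """Build the binary tree of RY rotation angles that prepares a real,
    non-negative amplitude vector. Each tree level halves the problem;
    the circuit construction itself is omitted in this sketch."""
    a = np.asarray(amplitudes, dtype=float)
    assert len(a) & (len(a) - 1) == 0, "length must be a power of two"
    levels = []
    while len(a) > 1:
        left, right = a[0::2], a[1::2]
        parent = np.sqrt(left**2 + right**2)
        # RY(theta) maps a parent amplitude onto its two children:
        # child_L = parent*cos(theta/2), child_R = parent*sin(theta/2)
        theta = 2 * np.arctan2(right, left)
        levels.append(theta)
        a = parent
    return levels[::-1]  # root-level angles first

if __name__ == "__main__":
    v = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
    v = v / np.linalg.norm(v)
    for depth, thetas in enumerate(angle_tree(v)):
        print(f"level {depth}: {np.round(thetas, 3)}")
```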

2021 ◽  
Vol 2099 (1) ◽  
pp. 012062
Author(s):  
Andrew V Terekhov

Abstract An algorithm for the Laguerre transform, used to approximate functions on large intervals, is proposed. The idea of the approach is that the calculation of improper integrals of rapidly oscillating functions is replaced by the solution of an initial boundary value problem for the one-dimensional transport equation. This avoids the problems associated with a stable implementation of the Laguerre transform. A divide-and-conquer algorithm based on shift operations significantly reduces the computational cost of the proposed method. Numerical experiments show that the methods are economical in the number of operations, stable, and of satisfactory accuracy for seismic data approximation.
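For orientation, here is a minimal sketch of the forward Laguerre transform evaluated directly by Gauss-Laguerre quadrature with NumPy. This direct evaluation of the oscillatory integrals is the step the paper replaces with a transport-equation formulation to gain stability; the sketch shows only the naive baseline.

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_coeffs(f, n_terms, quad_deg=200):
    """Forward Laguerre transform c_n = int_0^inf f(x) L_n(x) e^{-x} dx
    via Gauss-Laguerre quadrature (the weights absorb the e^{-x} factor).
    This direct evaluation is what becomes unstable on large intervals."""
    x, w = laguerre.laggauss(quad_deg)
    fx = f(x)
    coeffs = np.empty(n_terms)
    for n in range(n_terms):
        basis = laguerre.lagval(x, np.eye(n_terms)[n])  # L_n(x)
        coeffs[n] = np.sum(w * fx * basis)
    return coeffs

def laguerre_eval(coeffs, x):
    """Evaluate the truncated Laguerre series sum_n c_n L_n(x)."""
    return laguerre.lagval(x, coeffs)

if __name__ == "__main__":
    f = lambda x: np.exp(-0.5 * x) * np.sin(2 * x)
    c = laguerre_coeffs(f, 40)
    xs = np.linspace(0, 10, 5)
    print(np.round(laguerre_eval(c, xs) - f(xs), 6))  # truncation error
```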


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
D. T. Lennon ◽  
H. Moon ◽  
L. C. Camenzind ◽  
Liuqi Yu ◽  
D. M. Zumbühl ◽  
...  

Abstract Scalable quantum technologies such as quantum computers will require very large numbers of quantum devices to be characterised and tuned. As the number of devices on a chip increases, this task becomes ever more time-consuming, and will be intractable on a large scale without efficient automation. We present measurements on a quantum dot device performed by a machine learning algorithm in real time. The algorithm selects the most informative measurements to perform next by combining information theory with a probabilistic deep generative model that can generate full-resolution reconstructions from scattered partial measurements. We demonstrate, for two different current map configurations, that the algorithm outperforms standard grid scan techniques, reducing the number of measurements required by up to 4 times and the measurement time by 3.7 times. Our contribution goes beyond the use of machine learning for data search and analysis, and instead demonstrates the use of algorithms to automate measurements. This work lays the foundation for learning-based automated measurement of quantum devices.
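A minimal sketch of the greedy acquisition loop, assuming per-pixel ensemble variance as a stand-in for the information-theoretic criterion computed from the deep generative model in the paper (the model itself is not reproduced here):

```python
import numpy as np

def next_measurement(reconstructions, measured_mask):
    """Pick the most informative pixel to measure next.
    `reconstructions`: (n_samples, H, W) full-resolution current maps
    drawn from a generative model conditioned on the data so far.
    The per-pixel spread of the ensemble is a simple stand-in for the
    information-theoretic acquisition function used in the paper."""
    variance = reconstructions.var(axis=0)
    variance[measured_mask] = -np.inf      # never re-measure a pixel
    return np.unravel_index(np.argmax(variance), variance.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = W = 32
    samples = rng.normal(size=(64, H, W))        # fake model posterior
    measured = np.zeros((H, W), dtype=bool)
    for step in range(5):                        # greedy acquisition loop
        i, j = next_measurement(samples, measured)
        measured[i, j] = True
        print(f"step {step}: measure pixel ({i}, {j})")
```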


Author(s):  
Mohammad Poursina ◽  
Imad Khan ◽  
Kurt S. Anderson

This paper presents an efficient algorithm for the simulation of multi-flexible-body systems undergoing discontinuous changes in model definition. The equations governing the dynamics of the transitions from a higher- to a lower-fidelity model, and vice versa, are formulated by imposing or removing certain constraints on the system. Furthermore, the non-uniqueness of the results associated with the transition from a lower- to a higher-fidelity model is treated as an optimization problem, subject to satisfaction of the impulse-momentum equations. The divide-and-conquer algorithm (DCA) is applied to formulate the dynamics of the transition. The DCA formulation in its basic form is time-optimal, with linear and logarithmic complexity when implemented in serial and parallel, respectively. As such, it reduces the computational cost of formulating and solving the optimization problem in the transitions to the finer models. The mathematics necessary for implementing the algorithm is developed, and a numerical example is given to validate the method.
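The optimization at a lower-to-higher fidelity transition reduces to a constrained least-squares problem. A minimal NumPy sketch (a dense KKT solve, not the DCA assembly used in the paper) that picks post-transition velocities closest to the pre-transition ones in the mass-matrix metric, subject to impulse-momentum constraints:

```python
import numpy as np

def transition_velocities(M, v0, J, b):
    """Resolve the non-uniqueness of a coarse-to-fine model transition:
    choose post-transition velocities v minimizing the kinetic-energy
    distance (v - v0)^T M (v - v0) subject to the impulse-momentum
    constraints J v = b. Solved here via the dense KKT system."""
    n, m = M.shape[0], J.shape[0]
    K = np.block([[M, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([M @ v0, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = np.diag(rng.uniform(1.0, 3.0, size=6))   # toy mass matrix
    v0 = rng.normal(size=6)                      # pre-transition rates
    J = rng.normal(size=(2, 6))                  # momentum constraints
    b = J @ v0                                   # consistent target
    v = transition_velocities(M, v0, J, b)
    print("constraint residual:", np.abs(J @ v - b).max())
```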


Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 207
Author(s):  
Charlie Nation ◽  
Diego Porras

Quantum devices, such as quantum simulators, quantum annealers, and quantum computers, may be exploited to solve problems beyond what is tractable with classical computers. This may be achieved because the Hilbert space available for such 'calculations' is far larger than what can be simulated classically. In practice, however, quantum devices have imperfections that may limit access to the whole Hilbert space. We thus argue that the dimension of the space of quantum states available to a quantum device is a meaningful measure of its functionality, though unfortunately this quantity cannot be determined directly by experiment. Here we outline an experimentally realisable approach to obtaining the Hilbert space dimension a quantum device requires to compute its time evolution, by exploiting the thermalization dynamics of a probe qubit. This is achieved by deriving a fluctuation-dissipation theorem for high-temperature chaotic quantum systems, which facilitates the extraction of information on the Hilbert space dimension via measurements of the decay rate and time fluctuations.
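A minimal numerical illustration of the underlying idea, assuming a GOE random-matrix model as a stand-in for a chaotic quantum system: the long-time fluctuations of a probe observable shrink roughly as the inverse of the Hilbert space dimension, which is the handle the paper's fluctuation-dissipation relation exploits. This is not the authors' protocol, only a sanity-check simulation.

```python
import numpy as np

def time_fluctuations(dim, rng, n_times=200):
    """Long-time fluctuations of a probe observable evolving under a
    random (GOE) Hamiltonian; these shrink roughly as 1/dim."""
    A = rng.normal(size=(dim, dim))
    H = (A + A.T) / np.sqrt(2 * dim)             # GOE Hamiltonian
    E, V = np.linalg.eigh(H)
    O = np.diag(np.sign(rng.normal(size=dim)))   # probe observable (+/-1)
    psi0 = rng.normal(size=dim)
    psi0 /= np.linalg.norm(psi0)
    c = V.T @ psi0                               # eigenbasis amplitudes
    O_eig = V.T @ O @ V
    ts = np.linspace(50, 500, n_times)           # late times only
    vals = []
    for t in ts:
        psi_t = c * np.exp(-1j * E * t)
        vals.append(np.real(np.conj(psi_t) @ O_eig @ psi_t))
    return np.var(vals)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    for d in (32, 64, 128, 256):
        print(f"D={d:4d}  var={time_fluctuations(d, rng):.2e}")  # ~1/D
```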


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research develops a new method to detect image forgery by combining the wavelet transform with modified Zernike moments (MZMs), whose features are defined from more pixels than traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and the modified Zernike moments of each block are calculated as feature vectors. The more pixels are considered, the more sufficient the extracted features. Lexicographic sorting and correlation-coefficient computation on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a Euclidean distance constraint. Comparisons between the proposed method and related ones prove the feasibility and efficiency of the proposed algorithm.
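A skeleton of the pipeline in Python, using PyWavelets for the one-level DWT. Plain intensity moments stand in for the modified Zernike moments (which are not reproduced here), an exact duplicate is planted to exercise the sorting and distance checks, and the thresholds are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def detect_copy_move(gray, block=8, step=4, dist_thresh=1.0, min_offset=16):
    """Skeleton of the pipeline: one-level DWT, overlapping blocks on
    the LL sub-band, per-block feature vectors, lexicographic sorting,
    and a distance test between neighbouring feature rows. Simple
    moments stand in for the modified Zernike moments."""
    LL, _ = pywt.dwt2(gray.astype(float), "haar")   # halves both sides
    feats, coords = [], []
    for i in range(0, LL.shape[0] - block + 1, step):
        for j in range(0, LL.shape[1] - block + 1, step):
            b = LL[i:i + block, j:j + block]
            b = b - b.mean()
            feats.append([b.std(), np.abs(b).mean(),
                          (b**3).mean(), (b**4).mean()])
            coords.append((i, j))
    feats, coords = np.array(feats), np.array(coords)
    order = np.lexsort(feats.T[::-1])                # lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):          # compare neighbours
        if np.linalg.norm(feats[a] - feats[b]) < dist_thresh:
            # spatial offset constraint rejects trivially-close blocks
            if np.linalg.norm(coords[a] - coords[b]) > min_offset:
                matches.append((tuple(coords[a]), tuple(coords[b])))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.uniform(0, 255, size=(128, 128))
    img[74:104, 74:104] = img[10:40, 10:40]          # planted copy-move pair
    print(len(detect_copy_move(img)), "candidate block pairs found")
```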


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach combined with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. First, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the minimum test error, whereas Euclidean, Canberra, and the average of (L1, L∞) lead to a low computational cost. Second, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance to ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
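A minimal sketch of ex-kNN as described: take the sample tau-expectile of the k nearest neighbours' responses, computed by the standard asymmetric-least-squares fixed-point iteration. The Minkowski distance below is a placeholder; the paper evaluates many metrics (Canberra, Lorentzian, Soergel, and others).

```python
import numpy as np

def expectile(y, tau, tol=1e-9, max_iter=100):
    """Sample tau-expectile via the asymmetric least squares fixed
    point: mu = sum(w_i y_i) / sum(w_i), with w_i = tau if y_i > mu
    and (1 - tau) otherwise."""
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1.0 - tau)
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def ex_knn_predict(X_train, y_train, x, k=10, tau=0.9, p=2):
    """Predict the conditional tau-expectile at x: find the k nearest
    training points and take the expectile of their responses."""
    d = np.sum(np.abs(X_train - x) ** p, axis=1) ** (1.0 / p)
    idx = np.argpartition(d, k)[:k]
    return expectile(y_train[idx], tau)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(500, 2))
    y = X[:, 0] + 0.5 * rng.normal(size=500)
    print(ex_knn_predict(X, y, np.array([0.5, 0.0]), tau=0.9))
```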


2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 with 11 feature extractors, providing a basis for fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series of networks, have significant potential for crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for ‘resnet18’, ‘alexnet’, and ‘vgg16’, respectively, while ‘googlenet’ (AP = 0.84) and ‘mobilenetv2’ (AP = 0.87) demonstrate comparable AP values. In terms of computing speed, ‘alexnet’ takes the least computational time, with ‘squeezenet’ and ‘resnet18’ ranked second and third, respectively; therefore, ‘resnet18’ is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study (of the influence of training epoch, feature extraction layer, and testing image size on the detection results) shows that these parameters indeed have an impact. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector, in which an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size play an important role.
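The speed side of such a comparison is straightforward to approximate. Below is a hedged PyTorch sketch (torchvision >= 0.13 API assumed; the detection head of YOLO_v2 is omitted) that times one forward pass of each candidate backbone, the component that dominates a YOLO-style detector's cost.

```python
import time
import torch
import torchvision.models as models

# Backbones matching those compared in the paper, as named in torchvision.
BACKBONES = {
    "resnet18": models.resnet18,
    "alexnet": models.alexnet,
    "vgg16": models.vgg16,
    "googlenet": models.googlenet,
    "mobilenet_v2": models.mobilenet_v2,
    "squeezenet": models.squeezenet1_0,
}

@torch.no_grad()
def time_model(ctor, reps=10, size=224):
    """Average time of one forward pass on a random image, as a rough
    proxy for the feature extractor's share of detection cost."""
    net = ctor(weights=None).eval()
    x = torch.randn(1, 3, size, size)
    net(x)                                   # warm-up pass
    t0 = time.perf_counter()
    for _ in range(reps):
        net(x)
    return (time.perf_counter() - t0) / reps

if __name__ == "__main__":
    for name, ctor in BACKBONES.items():
        print(f"{name:14s} {1000 * time_model(ctor):7.1f} ms/image")
```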


Author(s):  
Giovanni Acampora ◽  
Roberto Schiattarella

Abstract Quantum computers have become reality thanks to the efforts of several major companies in developing innovative technologies that enable the use of quantum effects in computation, paving the way towards the design of efficient quantum algorithms for different application domains, from finance and chemistry to artificial and computational intelligence. However, there are still technological limitations that prevent the correct design of quantum algorithms, compromising the achievement of the so-called quantum advantage. Specifically, a major limitation in the design of a quantum algorithm is its proper mapping to a specific quantum processor so that the underlying physical constraints are satisfied. This hard problem, known as circuit mapping, is a critical task in the quantum world, and it needs to be addressed efficiently for quantum computers to work correctly and productively. To bridge this gap, this paper introduces a first circuit mapping approach based on deep neural networks, opening a completely new scenario in which the correct execution of quantum algorithms is supported by classical machine learning techniques. As shown in the experimental section, the proposed approach speeds up current state-of-the-art mapping algorithms when used on 5-qubit IBM Q processors, while maintaining suitable mapping accuracy.
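As a toy illustration of the idea (not the authors' network or features), the sketch below trains a small MLP to predict the SWAP-inducing cost of a candidate initial layout on a 5-qubit coupling map, then uses it to rank layouts. The coupling map, featurization, and all names are invented for this example.

```python
from itertools import permutations
import numpy as np
import torch
import torch.nn as nn

COUPLING = {(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)}   # toy 5-qubit device
PAIRS = [(a, b) for a in range(5) for b in range(a + 1, 5)]

def layout_cost(counts, layout):
    """Count two-qubit gates that land on non-adjacent physical qubits
    under a candidate layout; each such gate forces SWAP insertions."""
    cost = 0
    for (a, b), n in zip(PAIRS, counts):
        pa, pb = sorted((layout[a], layout[b]))
        if (pa, pb) not in COUPLING:
            cost += n
    return cost

def featurize(counts, layout):
    """Features: per-pair interaction counts plus adjacency flags."""
    adj = [1.0 if tuple(sorted((layout[a], layout[b]))) in COUPLING else 0.0
           for (a, b) in PAIRS]
    return np.concatenate([counts, adj]).astype(np.float32)

# Train a small MLP to predict the routing cost of a candidate layout.
rng = np.random.default_rng(5)
layouts = list(permutations(range(5)))
X, y = [], []
for _ in range(2000):
    counts = rng.integers(0, 5, size=len(PAIRS)).astype(float)
    L = layouts[rng.integers(len(layouts))]
    X.append(featurize(counts, L))
    y.append(layout_cost(counts, L))
X = torch.tensor(np.array(X))
y = torch.tensor(np.array(y, dtype=np.float32))
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X).squeeze(1), y)
    loss.backward()
    opt.step()

# Use the trained network to rank all 120 layouts for a new circuit.
counts = rng.integers(0, 5, size=len(PAIRS)).astype(float)
scores = [net(torch.tensor(featurize(counts, L)).unsqueeze(0)).item()
          for L in layouts]
best = layouts[int(np.argmin(scores))]
print("predicted-best layout:", best, "true cost:", layout_cost(counts, best))
```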

