Energy Cost of Quantum Circuit Optimisation: Predicting That Optimising Shor’s Algorithm Circuit Uses 1 GWh

2022 ◽  
Vol 3 (1) ◽  
pp. 1-14
Author(s):  
Alexandru Paler ◽  
Robert Basmadjian

Quantum circuits are difficult to simulate, and their automated optimisation is similarly complex. Significant optimisations have so far been achieved manually (pen and paper) rather than by software. This is the first in-depth study of the cost of compiling and optimising large-scale quantum circuits with state-of-the-art quantum software. We propose a hierarchy of cost metrics covering the quantum software stack and use energy as the long-term cost of operating hardware. We quantify optimisation costs by estimating the energy consumed by a CPU performing the quantum circuit optimisation. We use QUANTIFY, a tool based on Google Cirq, to optimise bucket brigade QRAM and multiplication circuits with between 32 and 8,192 qubits. Although our classical optimisation methods have polynomial complexity, we observe that their energy cost grows extremely fast with the number of qubits. We profile the methods and software and provide evidence that there are high constant costs associated with the operations performed during optimisation. These costs result from dynamically typed programming languages and the generic data structures used in the background. We conclude that state-of-the-art quantum software frameworks must massively improve their scalability to be practical for large circuits.
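The energy accounting can be illustrated with a minimal sketch that times an optimisation pass and converts the wall-time into joules under an assumed constant CPU power draw. This is not the authors' QUANTIFY tooling; the 65 W figure and the choice of Cirq transformer are illustrative assumptions.

```python
# Minimal sketch (not the authors' QUANTIFY tooling): estimate the energy of a
# classical optimisation pass as wall-time multiplied by an assumed average
# CPU power draw. The 65 W figure is an assumption for illustration only.
import time

import cirq


def estimate_energy_joules(optimise, circuit, cpu_watts=65.0):
    """Time one optimisation pass and convert the wall-time into joules."""
    start = time.perf_counter()
    optimised = optimise(circuit)
    seconds = time.perf_counter() - start
    return optimised, seconds * cpu_watts  # joules = watts * seconds


# Toy circuit and a stock Cirq transformer standing in for the real passes.
qubits = cirq.LineQubit.range(8)
circuit = cirq.Circuit(
    cirq.H.on_each(qubits),
    [cirq.CNOT(qubits[i], qubits[i + 1]) for i in range(7)],
)
_, joules = estimate_energy_joules(cirq.drop_negligible_operations, circuit)
print(f"estimated energy: {joules:.6f} J")
```

A real measurement would read hardware energy counters (e.g. RAPL) or a power meter rather than assume a fixed power draw.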

Quantum ◽  
2019 ◽  
Vol 3 ◽  
pp. 170
Author(s):  
Hammam Qassim ◽  
Joel J. Wallman ◽  
Joseph Emerson

Simulating quantum circuits classically is an important area of research in quantum information, with applications in computational complexity and validation of quantum devices. One of the state-of-the-art simulators, that of Bravyi et al., utilizes a randomized sparsification technique to approximate the output state of a quantum circuit by a stabilizer sum with a reduced number of terms. In this paper, we describe an improved Monte Carlo algorithm for performing randomized sparsification. This algorithm reduces the runtime of computing the approximate state by the factor ℓ/m, where ℓ and m are, respectively, the total and non-Clifford gate counts. The main technique is a circuit recompilation routine based on manipulating exponentiated Pauli operators. The recompilation routine also facilitates numerical search for Clifford decompositions of products of non-Clifford gates, which can further reduce the runtime in certain cases by reducing the 1-norm of the expansion vector, ‖a‖₁. It may additionally lead to a framework for optimizing circuit implementations over a gate set, reducing the overhead for state injection in fault-tolerant implementations. We provide a concise exposition of randomized sparsification and describe how to use it to estimate circuit amplitudes in a way that can be generalized to a broader class of gates and states. This latter method can be used to obtain additive-error estimates of circuit probabilities with a faster runtime than the full techniques of Bravyi et al. Such estimates are useful for validating near-term quantum devices provided that the target probability is not exponentially small.
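To convey the core idea, here is a generic Monte Carlo sparsification sketch on dense vectors (an illustration of the technique, not the paper's stabilizer-sum implementation): terms of an expansion Σ_j a_j φ_j are sampled with probability |a_j|/‖a‖₁, and the number of kept terms required for a given accuracy grows with ‖a‖₁², which is why reducing ‖a‖₁ pays off.

```python
# Generic randomized-sparsification sketch on dense vectors; stabilizer states
# would replace the basis vectors in an actual stabilizer-sum simulator.
import numpy as np


def sparsify(amplitudes, terms, num_samples, rng=None):
    """Unbiased Monte Carlo approximation of sum_j a_j * terms[j]."""
    rng = rng or np.random.default_rng(0)
    a = np.asarray(amplitudes, dtype=complex)
    one_norm = np.sum(np.abs(a))
    picks = rng.choice(len(a), size=num_samples, p=np.abs(a) / one_norm)
    # Each sample contributes only the phase of its amplitude, rescaled by the
    # 1-norm, so the average reproduces the original sum in expectation.
    return sum((one_norm / num_samples) * (a[j] / abs(a[j])) * terms[j]
               for j in picks)


# Toy usage: 8 computational basis vectors stand in for stabilizer states.
terms = list(np.eye(8))
amps = np.array([0.6, 0.3, 0.2, 0.1, 0.05, 0.05, 0.02, 0.01])
exact = amps @ np.array(terms)
print(np.linalg.norm(sparsify(amps, terms, 2000) - exact))  # small for many samples
```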


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Geng-Li Zhang ◽  
Di Liu ◽  
Man-Hong Yung

Abstract Exceptional points (EPs), the degeneracy points of non-Hermitian systems, have recently attracted great attention because of their potential for enhancing the sensitivity of quantum sensors. Unlike the usual degeneracies in Hermitian systems, at EPs both the eigenenergies and the eigenvectors coalesce. Although EPs have been widely explored, the range of EPs studied is largely limited by the underlying systems; for instance, higher-order EPs are hard to achieve. Here we propose an extendable method to simulate non-Hermitian systems and study EPs with quantum circuits. The system is inherently parity-time (PT) broken due to the non-symmetric controlling effects of the circuit. Inspired by the quantum Zeno effect, the circuit structure guarantees the success rate of the post-selection. A sample circuit is implemented in a quantum programming framework, and the phase transition at the EP is demonstrated. Given the scalable and flexible nature of quantum circuits, our model is capable of simulating large-scale systems with higher-order EPs.
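For orientation, a minimal textbook example of a second-order EP (a standard PT-symmetric two-level model, not the specific circuit construction of the paper) is:

```latex
H = \begin{pmatrix} i\gamma & g \\ g & -i\gamma \end{pmatrix},
\qquad
E_{\pm} = \pm\sqrt{g^{2} - \gamma^{2}} .
```

At g = γ the two eigenvalues and their eigenvectors coalesce and the matrix becomes non-diagonalizable; for g > γ the spectrum is real (PT-unbroken phase), while for g < γ it becomes a complex-conjugate pair (PT-broken phase).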


Author(s):  
Noboru Kunihiro

Abstract It is known that Shor’s algorithm can break many cryptosystems, such as RSA encryption, provided that large-scale quantum computers are realized. Thus far, several experiments on the factorization of small composites such as 15 and 21 have been conducted using small-scale quantum computers. In this study, we investigate the details of the quantum circuits used in several factoring experiments. We then indicate that some of the circuits were constructed under the condition that the order of an element modulo the target composite is known in advance. Because the order must be unknown in such experiments, these circuits are inappropriate as designs for the quantum circuit of Shor’s factoring algorithm. We also indicate that the circuits used in the other experiments rely considerably on the specific composite number to be factorized.
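To make the point concrete, once the order r of a modulo N is known, the remaining work in Shor’s algorithm is purely classical; a short sketch of the standard post-processing step for the textbook case N = 15, a = 7, r = 4:

```python
# Classical post-processing of Shor's algorithm: if the (even) order r of
# a mod N is already known, two gcd computations split N without any
# quantum hardware, which is why such circuits demonstrate little about factoring.
from math import gcd


def factors_from_order(a, r, N):
    if r % 2 != 0:
        return None                      # need an even order
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                      # trivial square root of 1 mod N
    return gcd(x - 1, N), gcd(x + 1, N)


print(factors_from_order(7, 4, 15))      # (3, 5)
```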


2018 ◽  
Vol 18 (13&14) ◽  
pp. 1095-1114
Author(s):  
Zongyuan Zhang ◽  
Zhijin Guan ◽  
Hong Zhang ◽  
Haiying Ma ◽  
Weiping Ding

In order to realize the linear nearest neighbor (LNN) architecture of quantum circuits and reduce the quantum cost of linear reversible quantum circuits, a method for synthesizing and optimizing linear reversible quantum circuits based on matrix multiplication over the structure of the quantum circuit is proposed. This method obtains the matrix representation of a linear quantum circuit by multiplying the matrices of the different parts of the whole circuit. An LNN realization obtained by adding SWAP gates is proposed, and the equivalence of two ways of adding the SWAP gates is proved. Elimination rules for the SWAP gates between two overlapping adjacent quantum gates are given for the different cases, which reduce the quantum cost of quantum circuits after the LNN architecture is realized. We also propose an algorithm based on parallel processing to effectively reduce the time consumption for large-scale quantum circuits. Experiments show that the quantum cost can be improved by 34.31% on average and that the speed-up ratio of the GPU-based algorithm can reach 4 times that of the CPU-based algorithm. The average time optimization ratio for the benchmark large-scale circuits in RevLib processed by the parallel algorithm is 95.57% compared with the serial algorithm.
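The SWAP-insertion idea can be illustrated with a small sketch (an illustrative routine, not the paper's matrix-based synthesis): a two-qubit gate between distant line qubits is made nearest-neighbor by walking one operand toward the other with SWAP gates and then undoing the moves.

```python
# Illustrative LNN realization by SWAP insertion (not the paper's matrix-based
# method): move the control next to the target, apply the gate, move it back.
def lnn_cnot(control, target):
    """Return a nearest-neighbour gate list realizing CNOT(control, target)."""
    ops = []
    step = 1 if target > control else -1
    pos = control
    while abs(target - pos) > 1:              # walk the control toward the target
        ops.append(("SWAP", pos, pos + step))
        pos += step
    ops.append(("CNOT", pos, target))
    while pos != control:                     # undo the SWAPs to restore ordering
        ops.append(("SWAP", pos, pos - step))
        pos -= step
    return ops


print(lnn_cnot(0, 3))
# [('SWAP', 0, 1), ('SWAP', 1, 2), ('CNOT', 2, 3), ('SWAP', 2, 1), ('SWAP', 1, 0)]
```

Roughly speaking, the elimination rules target the cases where the SWAP chains of overlapping adjacent gates cancel each other, which is what lowers the quantum cost after LNN realization.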


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
M. Cerezo ◽  
Akira Sone ◽  
Tyler Volkoff ◽  
Lukasz Cincio ◽  
Patrick J. Coles

Abstract Variational quantum algorithms (VQAs) optimize the parameters θ of a parametrized quantum circuit V(θ) to minimize a cost function C. While VQAs may enable practical applications of noisy quantum computers, they are nevertheless heuristic methods with unproven scaling. Here, we rigorously prove two results, assuming V(θ) is an alternating layered ansatz composed of blocks forming local 2-designs. Our first result states that defining C in terms of global observables leads to exponentially vanishing gradients (i.e., barren plateaus) even when V(θ) is shallow. Hence, several VQAs in the literature must revise their proposed costs. On the other hand, our second result states that defining C with local observables leads to at worst a polynomially vanishing gradient, so long as the depth of V(θ) is $\mathcal{O}(\log n)$. Our results establish a connection between locality and trainability. We illustrate these ideas with large-scale simulations, up to 100 qubits, of a quantum autoencoder implementation.
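As an illustration of the global/local distinction (the notation here is representative; see the paper for the exact definitions), a typical global cost measures overlap with a full n-qubit projector, while its local counterpart averages single-qubit projectors:

```latex
C_{G}(\theta) = \operatorname{Tr}\!\left[ O_{G}\, V(\theta)\, \rho\, V^{\dagger}(\theta) \right],
\qquad O_{G} = \mathbb{1} - |\mathbf{0}\rangle\langle\mathbf{0}| ,
\\[4pt]
C_{L}(\theta) = \operatorname{Tr}\!\left[ O_{L}\, V(\theta)\, \rho\, V^{\dagger}(\theta) \right],
\qquad O_{L} = \mathbb{1} - \frac{1}{n}\sum_{j=1}^{n} |0\rangle\langle 0|_{j} \otimes \mathbb{1}_{\bar{j}} .
```

Both costs vanish on the same target state, but only the local form is guaranteed an at worst polynomially vanishing gradient at logarithmic depth.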


2018 ◽  
Vol 14 (12) ◽  
pp. 1915-1960 ◽  
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


2021 ◽  
Vol 20 (7) ◽  
Author(s):  
Ismail Ghodsollahee ◽  
Zohreh Davarzani ◽  
Mariam Zomorodi ◽  
Paweł Pławiak ◽  
Monireh Houshmand ◽  
...  

Abstract As quantum computation grows, the number of qubits involved in a given quantum computer increases. However, due to physical limitations on the number of qubits of a single quantum device, the computation may have to be performed in a distributed system. In this paper, a new model of quantum computation based on the matrix representation of quantum circuits is proposed. Using this model, we then propose a novel approach for reducing the number of teleportations in a distributed quantum circuit. The proposed method consists of two phases: a pre-processing phase and an optimization phase. In the pre-processing phase, the quantum circuit is bi-partitioned using the Non-Dominated Sorting Genetic Algorithm (NSGA-III) so as to minimize the number of global gates and to distribute the circuit into two balanced parts with an equal number of qubits and a minimum number of global gates. In the optimization phase, two heuristics, named Heuristic I and Heuristic II, are proposed to optimize the number of teleportations according to the partitioning obtained in the pre-processing phase. Finally, the proposed approach is evaluated on many benchmark quantum circuits. The results show an average improvement of 22.16% in teleportation cost compared to the existing works in the literature.
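The quantity minimized during bi-partitioning can be illustrated with a small sketch (an assumption about the general setup, not the paper's NSGA-III implementation): count the two-qubit gates whose operands fall on opposite sides of the cut, since each such global gate requires teleportation between the two devices.

```python
# Count "global" gates for a candidate bipartition: a two-qubit gate is global
# when its operands sit in different partitions and therefore needs teleportation.
def count_global_gates(two_qubit_gates, partition_a):
    """two_qubit_gates: iterable of (q1, q2) pairs; partition_a: set of qubit ids."""
    return sum((q1 in partition_a) != (q2 in partition_a)
               for q1, q2 in two_qubit_gates)


gates = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(count_global_gates(gates, partition_a={0, 1}))   # (1,2) and (0,3) cross the cut -> 2
```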


2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the points sampled along the input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels, i.e., point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale and challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
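The graph construction described above can be sketched as follows (illustrative; the actual SketchGNN edge set and node features may differ): every sampled point becomes a node, and consecutive points within the same stroke are connected.

```python
# Build a point graph from a stroke-based sketch: nodes are sampled points,
# edges connect consecutive points along each stroke.
def sketch_to_graph(strokes):
    """strokes: list of strokes, each a list of (x, y) points."""
    nodes, edges = [], []
    for stroke in strokes:
        start = len(nodes)
        nodes.extend(stroke)
        edges.extend((start + i, start + i + 1) for i in range(len(stroke) - 1))
    return nodes, edges


nodes, edges = sketch_to_graph([[(0, 0), (1, 0), (2, 1)], [(0, 2), (1, 2)]])
print(len(nodes), edges)   # 5 [(0, 1), (1, 2), (3, 4)]
```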


Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract In general, intrinsic image decomposition algorithms interpret shading as one unified component that includes all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct shading (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects in order to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
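One schematic way to write the fine-grained decomposition (an illustrative additive split of the shading term; the paper's exact formulation may differ) is:

```latex
I = A \odot \left( S_{\mathrm{dir}} + S_{\mathrm{ind}} \right),
```

where I is the observed image, A the reflectance (albedo), S_dir the direct shading due to the illuminant, and S_ind the indirect shading from ambient light and shadows.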

