Classical variational simulation of the Quantum Approximate Optimization Algorithm

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Matija Medvidović ◽  
Giuseppe Carleo

Abstract
A key open question in quantum computing is whether quantum algorithms can potentially offer a significant advantage over classical algorithms for tasks of practical interest. Understanding the limits of classical computing in simulating quantum systems is an important component of addressing this question. We introduce a method to simulate layered quantum circuits consisting of parametrized gates, an architecture behind many variational quantum algorithms suitable for near-term quantum computers. A neural-network parametrization of the many-qubit wavefunction is used, focusing on states relevant for the Quantum Approximate Optimization Algorithm (QAOA). For the largest circuits simulated, we reach 54 qubits at 4 QAOA layers, approximately implementing 324 RZZ gates and 216 RX gates without requiring large-scale computational resources. For larger systems, our approach can be used to provide accurate QAOA simulations at previously unexplored parameter values and to benchmark the next generation of experiments in the Noisy Intermediate-Scale Quantum (NISQ) era.
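To make the layered architecture concrete, the following is a minimal dense-statevector sketch of a depth-p QAOA circuit in NumPy, alternating an RZZ (cost) layer with an RX (mixer) layer. It is illustrative only: it scales exponentially in qubit count and is not the paper's neural-network method; the ring graph, angles, and gate conventions (RZZ as e^{-i gamma Z_i Z_j}, RX as e^{-i beta X}) are assumptions of this sketch.

    import numpy as np

    def qaoa_state(n, edges, gammas, betas):
        # Evolve |+>^n through p alternating layers: one RZZ gate per edge
        # (cost layer), then one RX gate per qubit (mixer layer). A depth-p
        # circuit thus uses p*len(edges) RZZ and p*n RX gates, matching the
        # 324 RZZ / 216 RX count quoted above for 54 qubits at p = 4 on a
        # 3-regular graph (4 * 81 edges, 4 * 54 qubits).
        psi = np.full(2**n, 2**(-n / 2), dtype=complex)          # |+>^n
        bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1    # bit table
        z = 1 - 2 * bits                                         # Z eigenvalues
        cost = sum(z[:, i] * z[:, j] for i, j in edges)          # diag of sum(ZiZj)
        for gamma, beta in zip(gammas, betas):
            psi = psi.reshape(-1) * np.exp(-1j * gamma * cost)   # cost layer
            c, s = np.cos(beta), -1j * np.sin(beta)
            for q in range(n):                                   # RX on qubit q
                psi = psi.reshape(2**(n - q - 1), 2, 2**q)
                psi = np.stack([c * psi[:, 0] + s * psi[:, 1],
                                s * psi[:, 0] + c * psi[:, 1]], axis=1)
        return psi.reshape(-1)

    # Example: 10-qubit ring at p = 4 (40 RZZ and 40 RX gates).
    n, p = 10, 4
    edges = [(i, (i + 1) % n) for i in range(n)]
    psi = qaoa_state(n, edges, gammas=[0.1] * p, betas=[0.2] * p)
    print(np.linalg.norm(psi))   # unitarity check: prints 1.0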

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1690
Author(s):  
Teague Tomesh ◽  
Pranav Gokhale ◽  
Eric R. Anschuetz ◽  
Frederic T. Chong

Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well—which is necessary for a quantum advantage over k-means on the entire data set—appears to be challenging.
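As a rough illustration of the coreset idea (not the authors' construction or their QAOA formulation), the sketch below draws a small weighted subset by importance sampling, with probabilities mixing a uniform term and squared distances to a rough initial solution, then runs weighted k-means on the subset; the sampling rule and all names are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(m, 0.5, size=(500, 2)) for m in (0, 4, 8)])

    # Importance sampling: points far from a rough initial solution get
    # higher sampling probability; weights 1/(m*p) keep the cost estimate
    # unbiased, so k-means on the coreset approximates k-means on X.
    k, m = 3, 30
    centers = KMeans(n_clusters=k, n_init=1, random_state=0).fit(X).cluster_centers_
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1).min(1)
    p = 0.5 / len(X) + 0.5 * d2 / d2.sum()        # mix uniform + distance term
    idx = rng.choice(len(X), size=m, replace=False, p=p)
    coreset, w = X[idx], 1.0 / (m * p[idx])

    full = KMeans(n_clusters=k, random_state=0).fit(X)
    core = KMeans(n_clusters=k, random_state=0).fit(coreset, sample_weight=w)
    print(full.cluster_centers_.round(2))
    print(core.cluster_centers_.round(2))         # should roughly agree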


2020 ◽  
Vol 19 (9) ◽  
Author(s):  
M. E. S. Morales ◽  
J. D. Biamonte ◽  
Z. Zimborás

Abstract
The quantum approximate optimization algorithm (QAOA) is considered to be one of the most promising approaches towards using near-term quantum computers for practical applications. In its original form, the algorithm applies two different Hamiltonians, called the mixer and the cost Hamiltonian, in alternation with the goal being to approach the ground state of the cost Hamiltonian. Recently, it has been suggested that one might use such a set-up as a parametric quantum circuit with possibly some other goal than reaching ground states. From this perspective, a recent work (Lloyd, arXiv:1812.11075) argued that for one-dimensional local cost Hamiltonians, composed of nearest-neighbour ZZ terms, this set-up is quantum computationally universal and provides a universal gate set, i.e. all unitaries can be reached up to arbitrary precision. In the present paper, we complement this work by giving a complete proof and the precise conditions under which such a one-dimensional QAOA might produce a universal gate set. We further generalize this type of gate-set universality to certain cost Hamiltonians with ZZ and ZZZ terms arranged according to the adjacency structure of certain graphs and hypergraphs.
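For concreteness, the alternating set-up described above is the standard QAOA circuit; writing the mixer as B and the cost Hamiltonian as C (here with the nearest-neighbour ZZ terms from the abstract), the state prepared after p alternations is

\[
|\boldsymbol{\gamma},\boldsymbol{\beta}\rangle
  = e^{-i\beta_p B}\, e^{-i\gamma_p C} \cdots e^{-i\beta_1 B}\, e^{-i\gamma_1 C}\,|+\rangle^{\otimes n},
\qquad
B=\sum_{j=1}^{n} X_j,\quad
C=\sum_{j=1}^{n-1} Z_j Z_{j+1},
\]

and gate-set universality means that, for suitable angle sequences, products of this form can approximate any n-qubit unitary to arbitrary precision.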


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Leo Zhou ◽  
Sheng-Tao Wang ◽  
Soonwon Choi ◽  
Hannes Pichler ◽  
Mikhail D. Lukin

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 491
Author(s):  
Stefan H. Sack ◽  
Maksym Serbyn

The quantum approximate optimization algorithm (QAOA) is a prospective near-term quantum algorithm due to its modest circuit depth and promising benchmarks. However, the external parameter optimization required by QAOA could become a performance bottleneck. This motivates studies of the optimization landscape and the search for heuristic parameter-initialization strategies. In this work we visualize the optimization landscape of the QAOA applied to the MaxCut problem on random graphs, demonstrating that random initialization of the QAOA is prone to converging to local minima with sub-optimal performance. We introduce an initialization of the QAOA parameters based on the Trotterized quantum annealing (TQA) protocol, parameterized by the Trotter time step. We find that the TQA initialization circumvents the issue of false minima for a broad range of time steps, yielding the same performance as the best result out of an exponentially scaling number of random initializations. Moreover, we demonstrate that the optimal value of the time step coincides with the point of proliferation of Trotter errors in quantum annealing. Our results suggest practical ways of initializing QAOA protocols on near-term quantum devices and reveal new connections between QAOA and quantum annealing.
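A sketch of the TQA initialization described above, under the assumption of a linear-ramp form: for depth p and Trotter time step dt, the cost angles ramp up as gamma_k = (k/p)*dt while the mixer angles ramp down as beta_k = (1 - k/p)*dt, mimicking an annealing schedule discretized into p Trotter steps. Endpoint conventions may differ from the paper's.

    import numpy as np

    def tqa_init(p, dt):
        # Trotterized-quantum-annealing (TQA) initial QAOA angles:
        # gamma grows from ~0 to dt (cost term switched on) while beta
        # shrinks from ~dt to 0 (mixer switched off). The exact endpoint
        # convention here is an assumption of this sketch.
        k = np.arange(1, p + 1)
        return (k / p) * dt, (1 - k / p) * dt

    # Example: depth p = 4 with time step dt = 0.75; feed these angles to
    # a local optimizer instead of drawing random starting points.
    gammas, betas = tqa_init(4, 0.75)
    print(gammas, betas)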


1984 ◽  
Vol 16 (1-2) ◽  
pp. 281-295 ◽  
Author(s):  
Donald C Gordon

Large-scale tidal power development in the Bay of Fundy has been given serious consideration for over 60 years. There has been a long history of productive interaction between environmental scientists and engineers during the many feasibility studies undertaken. Until recently, tidal power proposals were dropped on economic grounds. However, large-scale development in the upper reaches of the Bay of Fundy now appears to be economically viable, and a pre-commitment design program is highly likely in the near future. A large number of basic scientific research studies have been and are being conducted by government and university scientists. Likely environmental impacts have been examined by scientists and engineers together in a preliminary fashion on several occasions. A full environmental assessment will be conducted before a final decision is made, and the results will definitely influence the outcome.


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly helpful to the healthcare and biomedical sectors for disease prediction. For minor symptoms, it is often difficult to consult a doctor in the hospital at any time; big data can thus provide essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. Conversely, the conventional medical care model offers structured input and requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets for "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, each dataset is normalized so that all attributes lie within a common range. Next, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to create larger-scale deviation among the features. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning models, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation against existing models confirms the effectiveness of the proposed approach across various performance measures.
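A minimal sketch of phases (a) and (b) under stated assumptions: min-max normalization (the abstract does not specify the scheme) and a per-attribute multiplicative weight vector that, in the paper, would be tuned by JA-MVO but is a random placeholder here.

    import numpy as np

    def normalize(X):
        # (a) Min-max normalization: rescale every attribute to [0, 1].
        lo, hi = X.min(0), X.max(0)
        return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

    def weighted_features(X_norm, w):
        # (b) Weighted feature extraction: one multiplicative weight per
        # attribute. In the paper these weights are optimized by the
        # JA-MVO meta-heuristic; here w is just a placeholder vector.
        return X_norm * w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))        # stand-in for a UCI dataset
    w = rng.uniform(0.5, 2.0, size=8)    # would come from JA-MVO
    features = weighted_features(normalize(X), w)
    print(features.shape, features.min().round(3), features.max().round(3))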


2021 ◽  
Vol 13 (3) ◽  
pp. 355
Author(s):  
Weixian Tan ◽  
Borong Sun ◽  
Chenyu Xiao ◽  
Pingping Huang ◽  
Wei Xu ◽  
...  

Classification based on polarimetric synthetic aperture radar (PolSAR) images is an emerging technology, and recent years have seen the introduction of various classification methods that have proven effective at identifying typical features of many terrain types. Among the many regions studied, the Hunshandake Sandy Land in Inner Mongolia, China, stands out for its vast area of sandy land, variety of ground objects, and intricate structure, with more irregular characteristics than conventional land cover. Accounting for the particular surface features of the Hunshandake Sandy Land, an unsupervised classification method based on new decomposition and large-scale spectral clustering with superpixels (ND-LSC) is proposed in this study. Firstly, the polarization scattering parameters are extracted through a new decomposition, rather than other decomposition approaches, which gives rise to a more accurate feature vector estimate. Secondly, large-scale spectral clustering is applied to handle the massive area and complex terrain. More specifically, this involves an initial sub-step of superpixel generation via the Adaptive Simple Linear Iterative Clustering (ASLIC) algorithm, in which the feature vectors combined with spatial coordinate information serve as input, followed by a sub-step of representative-point selection and bipartite graph formation, after which the spectral clustering algorithm completes the classification task. Finally, testing and analysis are conducted on the RADARSAT-2 fully PolSAR dataset acquired over the Hunshandake Sandy Land in 2016. Both qualitative and quantitative comparisons with several classification methods show that the proposed method significantly improves classification performance.
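A rough sketch of the two-stage pipeline under stated assumptions: standard SLIC (scikit-image >= 0.19 API) stands in for the adaptive ASLIC variant, and plain spectral clustering on superpixel mean features (scikit-learn) stands in for the bipartite-graph, representative-point acceleration; the data and parameters are placeholders.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.cluster import SpectralClustering

    # Stand-in for a 3-channel map of polarimetric decomposition parameters.
    rng = np.random.default_rng(0)
    feat = rng.random((128, 128, 3))

    # Sub-step 1: superpixel generation. Standard SLIC is used here as a
    # stand-in for the paper's adaptive variant (ASLIC); SLIC already
    # combines feature values with spatial (x, y) coordinates internally.
    labels = slic(feat, n_segments=400, compactness=10, channel_axis=-1)

    # Summarize each superpixel by its mean feature vector.
    sp_ids = np.unique(labels)
    means = np.array([feat[labels == i].mean(axis=0) for i in sp_ids])

    # Sub-step 2: spectral clustering of superpixels. Plain spectral
    # clustering on a k-NN affinity stands in for the bipartite-graph
    # acceleration described in the abstract.
    sc = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0)
    classes = sc.fit_predict(means)
    class_map = classes[np.searchsorted(sp_ids, labels)]  # back to pixels
    print(class_map.shape, np.unique(classes))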


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract
Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second and third generation whole-genome sequencing technologies. However, the genotyping of these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to only specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, utilizes changes in k-mer counts to predict the genotypes of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants but also comparably accurate to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
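A toy sketch of the signal Nebula exploits, under simplifying assumptions: counting occurrences of a variant-diagnostic k-mer in the reads and thresholding that count against the expected depth to call a genotype. The k-mer choice, normalization, and decision rule here are illustrative, not Nebula's actual model.

    from collections import Counter

    def count_kmers(reads, k):
        # Count every k-mer across a collection of reads.
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts

    def call_genotype(kmer_count, expected_depth):
        # Toy decision rule: compare the diagnostic k-mer's count to the
        # depth expected from 0, 1, or 2 copies of the variant haplotype.
        # Nebula's actual statistical model is more involved.
        ratio = kmer_count / expected_depth
        if ratio < 0.25:
            return "0/0"
        return "0/1" if ratio < 0.75 else "1/1"

    reads = ["ACGTACGTGGA", "CGTACGTGGAT", "TTGACCAGTAA"]
    counts = count_kmers(reads, k=5)
    print(call_genotype(counts.get("ACGTG", 0), expected_depth=2))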


2021 ◽  
Vol 20 (2) ◽  
Author(s):  
Rebekah Herrman ◽  
James Ostrowski ◽  
Travis S. Humble ◽  
George Siopsis

Morphology ◽  
2021 ◽  
Author(s):  
Rossella Varvara ◽  
Gabriella Lapesa ◽  
Sebastian Padó

Abstract
We present the results of a large-scale corpus-based comparison of two German event nominalization patterns: deverbal nouns in -ung (e.g., die Evaluierung, ‘the evaluation’) and nominal infinitives (e.g., das Evaluieren, ‘the evaluating’). Among the many available event nominalization patterns for German, we selected these two because they are both highly productive and challenging from the semantic point of view. Both patterns are known to keep a tight relation with the event denoted by the base verb, but with different nuances. Our study targets a better understanding of the differences in their semantic import.

The key notion of our comparison is that of semantic transparency, and we propose a usage-based characterization of the relationship between derived nominals and their bases. Using methods from distributional semantics, we bring to bear two concrete measures of transparency which highlight different nuances: the first one, cosine, detects nominalizations which are semantically similar to their bases; the second one, distributional inclusion, detects nominalizations which are used in a subset of the contexts of the base verb. We find that only the inclusion measure helps in characterizing the difference between the two types of nominalizations, in relation to the traditionally considered variable of relative frequency (Hay, 2001). Finally, the distributional analysis allows us to frame our comparison in the broader coordinates of the inflection vs. derivation cline.
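A small sketch of the two transparency measures named above, under assumptions about their exact form: cosine similarity between the distributional vectors of a nominalization and its base verb, and a distributional-inclusion score giving the fraction of the noun's context mass that falls in contexts also used by the verb (the study's precise weighting may differ).

    import numpy as np

    def cosine(u, v):
        # Semantic similarity between derived nominal and base verb vectors.
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def distributional_inclusion(noun_vec, verb_vec):
        # Fraction of the noun's context weight falling in contexts also
        # used by the verb: high when the noun occurs in a subset of the
        # verb's contexts. One common variant; the exact formulation in
        # the study may differ.
        return noun_vec[verb_vec > 0].sum() / noun_vec.sum()

    # Toy count vectors over 6 shared context dimensions,
    # e.g. das Evaluieren vs. evaluieren.
    noun = np.array([3.0, 0.0, 2.0, 1.0, 0.0, 0.0])
    verb = np.array([5.0, 2.0, 4.0, 3.0, 1.0, 0.0])
    print(round(cosine(noun, verb), 3),
          round(distributional_inclusion(noun, verb), 3))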

