Using Machine Learning for Quantum Annealing Accuracy Prediction

Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, and on annealing parameters, such as the D-Wave chain strength, we are able to rank certain features in the order of their contribution to the solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by D-Wave.
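To make the QUBO form concrete, the following is a minimal sketch of the standard penalty formulation of the maximum clique problem (the textbook construction, not the authors' code), with a brute-force search standing in for the annealer:

```python
import itertools
import numpy as np

def max_clique_qubo(n, edges, A=1.0, B=2.0):
    """Standard penalty QUBO for maximum clique:
    minimize -A*sum_i x_i + B*sum_{(i,j) not in E} x_i x_j.
    With B > A, including a non-adjacent pair always costs more than
    the reward for an extra vertex, so minima are maximum cliques."""
    E = {frozenset(e) for e in edges}
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -A                          # reward for selecting vertex i
    for i, j in itertools.combinations(range(n), 2):
        if frozenset((i, j)) not in E:
            Q[i, j] = B                       # penalty for a non-edge pair
    return Q

def brute_force_minimum(Q):
    """Exhaustively evaluate x^T Q x over all binary vectors (tiny n only);
    a quantum annealer would instead sample low-energy states of the same Q."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
x, e = brute_force_minimum(max_clique_qubo(4, edges))
print("clique indicator:", x)                 # expected: vertices 0, 1, 2
```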


2022 ◽  
Vol 13 (2) ◽  
pp. 1-22
Author(s):  
Sarab Almuhaideb ◽  
Najwa Altwaijry ◽  
Shahad AlMansour ◽  
Ashwaq AlMklafi ◽  
AlBandery Khalid AlMojel ◽  
...  

The Maximum Clique Problem (MCP) is a classical NP-hard problem that has gained considerable attention due to its numerous real-world applications and theoretical complexity. It is inherently computationally complex, so exact methods may require prohibitive computing time. Nature-inspired meta-heuristics have proven their utility in solving many NP-hard problems. In this research, we propose a simulated annealing-based algorithm, which we call the Clique Finder algorithm, to solve the MCP. Our algorithm uses a logarithmic cooling schedule and two moves that are selected in an adaptive manner. The objective (error) function is the total number of missing links in the clique, which is to be minimized. The proposed algorithm was evaluated using benchmark graphs from the open-source DIMACS library, and results show that it had a high success rate.
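As an illustration of the approach, here is a minimal simulated-annealing sketch for the MCP with a logarithmic cooling schedule. It uses a single swap move and a fixed target clique size k, so it is a simplified stand-in rather than the Clique Finder algorithm itself, which selects between two moves adaptively:

```python
import math
import random

def sa_clique_search(adj, k, steps=20000, c=1.0, seed=0):
    """Search for a clique of size k by minimizing the number of missing
    links among the k selected vertices (the error function above), with
    a logarithmic cooling schedule T_t = c / log(2 + t)."""
    rng = random.Random(seed)
    n = len(adj)
    current = rng.sample(range(n), k)

    def missing_links(vs):
        return sum(1 for i in range(k) for j in range(i + 1, k)
                   if vs[j] not in adj[vs[i]])

    err = missing_links(current)
    for t in range(steps):
        if err == 0:                           # zero error: a k-clique found
            return current
        T = c / math.log(2 + t)                # logarithmic cooling
        cand = current[:]                      # move: swap one vertex out
        cand[rng.randrange(k)] = rng.choice(
            [v for v in range(n) if v not in current])
        cand_err = missing_links(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if cand_err <= err or rng.random() < math.exp((err - cand_err) / T):
            current, err = cand, cand_err
    return None

# Toy usage: adjacency as a list of sets; the graph contains the 3-clique {0, 1, 2}.
adj = [{1, 2}, {0, 2}, {0, 1, 3}, {2}]
print(sa_clique_search(adj, k=3))
```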


2021 ◽  
Vol 13 (11) ◽  
pp. 6376
Author(s):  
Junseo Bae ◽  
Sang-Guk Yum ◽  
Ji-Myong Kim

Given their highly visible nature, transportation infrastructure construction projects are often exposed to numerous unexpected events compared to other types of construction projects. Despite the importance of predicting financial losses caused by risk, it is still difficult to determine which risk factors are generally critical and when these risks tend to occur, without benchmarkable references. Most existing methods are prediction-focused and project-type-specific, and they ignore the timing aspect of risk. This study filled these knowledge gaps by developing a neural network-driven machine-learning classification model that can categorize causes of financial losses depending on insurance claim payout proportions and risk occurrence timing, drawing on 625 transportation infrastructure construction projects including bridges, roads, and tunnels. The developed network model showed acceptable classification accuracy of 74.1%, 69.4%, and 71.8% on the training, cross-validation, and test sets, respectively. This study is the first of its kind to provide benchmarkable classification references for economic damage trends in transportation infrastructure projects. The proposed holistic approach will help construction practitioners proactively consider the uncertainty of project management and the potential impact of natural hazards, together with risk occurrence timing trends. This study will also assist insurance companies in developing sustainable financial management plans for transportation infrastructure projects.
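The paper's dataset and exact network architecture are not public, but the training/cross-validation/test workflow it reports can be sketched with scikit-learn; the features and class labels below are synthetic stand-ins, purely for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: 625 projects, 8 hypothetical features (e.g., project
# type, payout proportion, timing indicators) and 4 hypothetical loss classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(625, 8))
y = rng.integers(0, 4, size=625)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```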


2020 ◽  
Author(s):  
Shalin Shah

A clique in a graph is a set of vertices that are all directly connected to each other, i.e., a complete subgraph. A clique of the largest size is called a maximum clique. Finding the maximum clique in a graph is an NP-hard problem, and it cannot be solved by an approximation algorithm that returns a solution within a constant factor of the optimum. In this work, we present a simple and very fast randomized algorithm for the maximum clique problem. We also provide Java code of the algorithm in our git repository. Results show that the algorithm is able to find reasonably good solutions to some randomly chosen DIMACS benchmark graphs. Rather than aiming for optimality, we aim to find good solutions very fast.
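The author's Java implementation lives in their repository; as a hedged illustration of the general idea only (not the paper's algorithm), a repeated randomized greedy pass is one simple way to trade optimality for speed:

```python
import random

def randomized_max_clique(adj, iters=1000, seed=0):
    """Repeat a randomized greedy pass many times and keep the largest
    clique found; fast, with no optimality guarantee."""
    rng = random.Random(seed)
    n = len(adj)
    best = []
    for _ in range(iters):
        order = list(range(n))
        rng.shuffle(order)                  # fresh random vertex order
        clique = []
        for v in order:
            if all(v in adj[u] for u in clique):
                clique.append(v)            # v is adjacent to all chosen so far
        if len(clique) > len(best):
            best = clique
    return best

# Toy usage: adjacency as a list of sets; maximum cliques here have size 3.
adj = [{1, 2, 3}, {0, 2}, {0, 1, 3}, {0, 2}]
print(randomized_max_clique(adj))
```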


BJGP Open ◽  
2018 ◽  
Vol 2 (2) ◽  
pp. bjgpopen18X101589 ◽  
Author(s):  
Emmanuel A Jammeh ◽  
Camille B Carroll ◽  
Stephen W Pearson ◽  
Javier Escudero ◽  
Athanasios Anastasiou ◽  
...  

Background: Up to half of patients with dementia may not receive a formal diagnosis, limiting access to appropriate services. It is hypothesised that it may be possible to identify undiagnosed dementia from a profile of symptoms recorded in routine clinical practice. Aim: The aim of this study is to develop a machine learning-based model that could be used in general practice to detect dementia from routinely collected NHS data. The model would be a useful tool for identifying people who may be living with dementia but have not been formally diagnosed. Design & setting: The study involved a case-control design and analysis of primary care data routinely collected over a 2-year period. Dementia diagnosed during the study period was compared to no diagnosis of dementia during the same period using pseudonymised routinely collected primary care clinical data. Method: Routinely collected Read-encoded data were obtained from 18 consenting GP surgeries across Devon, for 26 483 patients aged >65 years. The authors determined Read codes assigned to patients that may contribute to dementia risk. These codes were used as features to train a machine-learning classification model to identify patients that may have underlying dementia. Results: The model obtained sensitivity and specificity values of 84.47% and 86.67%, respectively. Conclusion: The results show that routinely collected primary care data may be used to identify undiagnosed dementia. The methodology is promising and, if successfully developed and deployed, may help to increase dementia diagnosis in primary care.
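For reference, the reported sensitivity and specificity are the true positive and true negative rates of a binary confusion matrix; a short sketch with made-up labels shows how the pair is computed:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Made-up labels, purely to illustrate the metric definitions.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = dementia diagnosed
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])   # model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # proportion of true cases detected
specificity = tn / (tn + fp)    # proportion of non-cases correctly cleared
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```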


Author(s):  
Lidong Wu

The No-Free-Lunch theorem is an interesting and important theoretical result in machine learning. Based on the philosophy of the No-Free-Lunch theorem, we discuss extensively the limitations of data-driven approaches to solving NP-hard problems.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
William Cruz-Santos ◽  
Salvador E. Venegas-Andraca ◽  
Marco Lanzagorta

Quantum annealing algorithms were introduced to solve combinatorial optimization problems by taking advantage of quantum fluctuations to escape local minima in the complex energy landscapes typical of NP-hard problems. In this work, we propose using quantum annealing for the theory of cuts, a field of paramount importance in theoretical computer science. We propose a method to formulate the Minimum Multicut Problem in QUBO representation and describe the technical difficulties faced when embedding and submitting a problem to the quantum annealer processor. We show two constructions of the quadratic unconstrained binary optimization functions for the Minimum Multicut Problem, review several tradeoffs between the two mappings, and provide numerical scaling analysis results from several classical approaches. Furthermore, we discuss some of the expected challenges and tradeoffs in the implementation of our mapping in the current generation of D-Wave machines.
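The paper's two multicut constructions are not reproduced here, but the single-pair special case (minimum s-t cut) shows the flavour of casting a cut problem as a QUBO with penalty terms; this is a textbook-style sketch under that simplifying assumption:

```python
import itertools
import numpy as np

def st_min_cut_qubo(n, weighted_edges, s, t, P=10.0):
    """Minimum s-t cut as a QUBO: x_v gives the side of the cut, and an
    edge (u, v) is cut iff x_u != x_v, i.e. x_u + x_v - 2*x_u*x_v.
    P pins s and t to opposite sides; it must exceed the maximum cut value."""
    Q = np.zeros((n, n))
    for u, v, w in weighted_edges:
        Q[u, u] += w
        Q[v, v] += w
        Q[min(u, v), max(u, v)] -= 2 * w
    Q[t, t] -= P       # -P * x_t: lower energy when t is on side 1
    Q[s, s] += P       # +P * x_s: higher energy when s is on side 1
    return Q

def brute_force_minimum(Q):
    """Exhaustive minimization of x^T Q x; an annealer would sample this."""
    n = Q.shape[0]
    return min((np.array(b) for b in itertools.product((0, 1), repeat=n)),
               key=lambda x: x @ Q @ x)

# Toy path graph 0-1-2-3: cutting the weight-1 edge (1, 2) separates 0 from 3.
edges = [(0, 1, 3.0), (1, 2, 1.0), (2, 3, 3.0)]
print(brute_force_minimum(st_min_cut_qubo(4, edges, s=0, t=3)))  # [0 0 1 1]
```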


2020 ◽  
Vol 9 (10) ◽  
pp. 580 ◽  
Author(s):  
Maria Antonia Brovelli ◽  
Yaru Sun ◽  
Vasil Yordanov

Deforestation causes diverse and profound consequences for the environment and species. Direct or indirect effects can be related to climate change, biodiversity loss, soil erosion, floods, landslides, etc. Given the significance of this process, timely and continuous monitoring of forest dynamics is important in order to follow existing policies and develop new mitigation measures. The present work aimed to map and monitor forest change from 2000 to 2019 and to simulate the future forest development of a rainforest region located in the Pará state, Brazil. The land cover dynamics were mapped at five-year intervals based on a supervised classification model deployed on the cloud processing platform Google Earth Engine. Besides the benefit of reduced computational time, the service is coupled with a vast data catalogue providing useful access to global products, such as multispectral images of the Landsat 5, 7, and 8 and Sentinel-2 missions. The validation procedures were done through photointerpretation of high-resolution panchromatic images obtained from CBERS (China–Brazil Earth Resources Satellite). The more than satisfactory results allowed estimation of the deforestation rates: a peak in 2000–2006, a significant decrease and stabilization in 2006–2015, and a slight increase up to 2019. Based on the derived trends, forest dynamics were simulated for the period 2019–2028, estimating a decrease in the deforestation rate. These results demonstrate that such a fusion of satellite observations, machine learning, and cloud processing benefits the analysis of forest dynamics and can provide useful information for the development of forest policies.
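A supervised classification of this kind can be sketched with the Google Earth Engine Python API; the training asset ID, label property, and point location below are illustrative placeholders rather than the authors' actual inputs:

```python
import ee
ee.Initialize()

# Median Landsat 8 surface-reflectance composite for 2019 over a point in Pará.
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']
image = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
         .filterDate('2019-01-01', '2019-12-31')
         .filterBounds(ee.Geometry.Point(-52.0, -3.0))   # placeholder location
         .median()
         .select(bands))

# Placeholder FeatureCollection of labelled sample points.
training_points = ee.FeatureCollection('users/example/para_training')
samples = image.sampleRegions(collection=training_points,
                              properties=['landcover'], scale=30)

# Random forest classifier trained on the samples, applied per pixel.
classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty='landcover', inputProperties=bands)
classified = image.classify(classifier)
```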

