Adiabatic quantum linear regression

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Prasanna Date ◽  
Thomas Potok

A major challenge in machine learning is the computational expense of training these models. Model training can be viewed as a form of optimization used to fit a machine learning model to a set of data, which can take a significant amount of time on classical computers. Adiabatic quantum computers have been shown to excel at solving optimization problems and, we believe, therefore present a promising alternative for improving machine learning training times. In this paper, we present an adiabatic quantum computing approach for training a linear regression model. To do this, we formulate the regression problem as a quadratic unconstrained binary optimization (QUBO) problem. We analyze our quantum approach theoretically, test it on the D-Wave adiabatic quantum computer, and compare its performance to a classical approach that uses the Scikit-learn library in Python. Our analysis shows that the quantum approach attains up to a 2.8× speedup over the classical approach on larger datasets, and performs on par with the classical approach on the regression error metric. The quantum approach used the D-Wave 2000Q adiabatic quantum computer, whereas the classical approach used a desktop workstation with an 8-core Intel i9 processor. As such, the results obtained in this work must be interpreted within the context of the specific hardware and software implementations of these machines.
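
As an illustration of the QUBO formulation described above, the sketch below encodes each regression weight with a small fixed-point binary expansion and minimizes the resulting quadratic form by brute force on a toy dataset. The precision values and data are assumptions, and the exhaustive search merely stands in for the annealer.

```python
# Minimal sketch (assumption): least-squares regression recast as a QUBO by
# encoding each weight with a fixed-point binary expansion, then solved here
# by brute-force enumeration instead of a quantum annealer.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 1.5 * x + 0.5, with a bias column appended to the features.
x = rng.uniform(-1, 1, size=20)
X = np.column_stack([x, np.ones_like(x)])          # features: [x, bias]
y = 1.5 * x + 0.5

# Fixed-point encoding: each weight w_j = sum_k p[k] * b_{jk}, with b in {0, 1}.
p = np.array([2.0, 1.0, 0.5, -0.5])                 # precision values (assumed)
d, K = X.shape[1], len(p)
B = np.kron(np.eye(d), p.reshape(1, -1))            # maps the binary vector to weights

# ||X w - y||^2 with w = B b gives the QUBO matrix Q; linear terms go on the
# diagonal because b_i^2 = b_i for binary variables.
Q = B.T @ X.T @ X @ B
Q += np.diag(-2.0 * (B.T @ X.T @ y))

# Brute-force minimisation stands in for the annealer on this tiny instance.
best_b, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=d * K):
    b = np.array(bits, dtype=float)
    e = b @ Q @ b
    if e < best_e:
        best_b, best_e = b, e

w = B @ best_b
print("recovered weights (slope, bias):", w)        # close to (1.5, 0.5)
```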

Author(s):  
Vyacheslav Korolyov ◽  
Oleksandr Khodzinskyi

Introduction. Quantum computers solve several NP-hard combinatorial optimization problems several times faster than computing clusters. The yearly doubling of the number of qubits in quantum computers suggests an analog of Moore's law for quantum computers, meaning that they will soon also provide significant speedups for many large-scale applied problems. The purpose of the article is to review methods for creating algorithms of quantum computer mathematics for combinatorial optimization problems and to analyze the influence of qubit-to-qubit coupling and connection strength on the performance of quantum data processing. Results. The article offers approaches to classifying algorithms for solving these problems from the perspective of quantum computer mathematics. It is shown that the number and strength of connections between qubits affect the dimensionality of the problems that algorithms of quantum computer mathematics can solve. Two approaches to computing combinatorial optimization problems on quantum computers are considered: a universal one, using quantum gates, and a specialized one, based on the parameterization of physical processes. Examples are given of constructing a half-adder on two qubits of an IBM quantum processor and of solving the maximum independent set problem on IBM and D-Wave quantum computers. Conclusions. Today, quantum computers are available online through cloud services for research and commercial use. At present, quantum processors do not have enough qubits to replace semiconductor computers in universal computing. A solution to a combinatorial optimization problem is found by reaching the minimum energy of the system of coupled qubits onto which the task is mapped, with the data serving as the initial conditions. Approaches to solving combinatorial optimization problems on quantum computers are considered, and results are given for the maximum independent set problem on IBM and D-Wave quantum computers. Keywords: quantum computer, quantum computer mathematics, qubit, maximum independent set of a graph.
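
To make the mapping onto coupled qubits concrete, the following sketch writes the maximum independent set problem as a QUBO with a penalty on every edge and solves a toy instance by exhaustive search. The graph and penalty weight are assumptions; an annealer would minimize the same objective.

```python
# Minimal sketch (assumption): maximum independent set as a QUBO,
# minimising  -sum_i x_i + P * sum_{(i,j) in E} x_i x_j,
# solved here by exhaustive search rather than on quantum hardware.
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]    # small example graph
n, P = 4, 2.0                                        # penalty P > 1 enforces independence

def qubo_energy(x):
    return -sum(x) + P * sum(x[i] * x[j] for i, j in edges)

best = min(itertools.product([0, 1], repeat=n), key=qubo_energy)
print("independent set:", [i for i, v in enumerate(best) if v])
```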


2021 ◽  
Vol 14 (6) ◽  
pp. 957-969
Author(s):  
Jinfei Liu ◽  
Jian Lou ◽  
Junxu Liu ◽  
Li Xiong ◽  
Jian Pei ◽  
...  

Data-driven machine learning has become ubiquitous. A marketplace for machine learning models connects data owners and model buyers, and can dramatically facilitate data-driven machine learning applications. In this paper, we take a formal data marketplace perspective and propose the first end-to-end model marketplace with differential privacy (Dealer), towards answering the following questions: How to formulate data owners' compensation functions and model buyers' price functions? How can the broker determine prices for a set of models to maximize revenue with an arbitrage-free guarantee, and train a set of models with maximum Shapley coverage given a manufacturing budget to remain competitive? For the former, we propose a compensation function for each data owner based on Shapley value and privacy sensitivity, and a price function for each model buyer based on Shapley coverage sensitivity and noise sensitivity. Both privacy sensitivity and noise sensitivity are measured by the level of differential privacy. For the latter, we formulate two optimization problems for model pricing and model training, and propose efficient dynamic programming algorithms. Experimental results on a real chess dataset and synthetic datasets justify the design of Dealer and verify the efficiency and effectiveness of the proposed algorithms.
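
As a rough illustration of the Shapley-value-based compensation idea, the sketch below estimates Shapley values by permutation sampling for an arbitrary utility function. The toy utility is an assumption and stands in for Dealer's accuracy- and privacy-aware utilities.

```python
# Minimal sketch (assumption): permutation-sampling estimator of each data
# owner's Shapley value for a generic utility function v(S); Dealer would plug
# in a model-quality-based utility and apply differential-privacy scaling.
import random

def shapley_estimate(owners, v, n_samples=2000, seed=0):
    rng = random.Random(seed)
    phi = {o: 0.0 for o in owners}
    for _ in range(n_samples):
        perm = owners[:]
        rng.shuffle(perm)
        coalition, prev = set(), v(frozenset())
        for o in perm:
            coalition.add(o)
            cur = v(frozenset(coalition))
            phi[o] += cur - prev                     # marginal contribution of owner o
            prev = cur
    return {o: s / n_samples for o, s in phi.items()}

# Toy utility: diminishing returns in the number of contributing owners.
value = lambda S: len(S) ** 0.5
print(shapley_estimate(["A", "B", "C"], value))
```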


2019 ◽  
Vol 1 (2) ◽  
Author(s):  
Feng Hu ◽  
Ban‐Nan Wang ◽  
Ning Wang ◽  
Chao Wang

Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of current-generation quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, or annealing parameters, such as the D-Wave's chain strength, we are able to rank certain features in order of their contribution to the solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by the D-Wave.
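
The sketch below mirrors this setup with a scikit-learn decision tree trained on synthetic problem features (edge count, density, chain strength) and a hypothetical solvability rule; the actual study trains on measured D-Wave 2000Q outcomes rather than this toy data.

```python
# Minimal sketch (assumption): a decision tree predicting whether an instance
# is solved to optimality, trained on synthetic features in place of measured
# D-Wave 2000Q results.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
edges = rng.integers(10, 2000, size=n)
density = rng.uniform(0.05, 0.95, size=n)
chain_strength = rng.uniform(0.5, 4.0, size=n)
X = np.column_stack([edges, density, chain_strength])
# Hypothetical rule standing in for measured outcomes: small, sparse instances solve to optimality.
y = ((edges < 600) & (density < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print(export_text(clf, feature_names=["edges", "density", "chain_strength"]))
```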


Soil Systems ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 41
Author(s):  
Tulsi P. Kharel ◽  
Amanda J. Ashworth ◽  
Phillip R. Owens ◽  
Dirk Philipp ◽  
Andrew L. Thomas ◽  
...  

Silvopasture systems combine tree and livestock production to minimize market risk and enhance ecological services. Our objective was to explore and develop a method for identifying driving factors linked to productivity in a silvopastoral system using machine learning. A multi-variable approach was used to detect factors that affect system-level output (i.e., plant production (tree and forage), soil factors, and animal response based on grazing preference). Variables from a three-year (2017–2019) grazing study, including forage, tree, soil, and terrain attribute parameters, were analyzed. Hierarchical variable clustering and a random forest model selected 10 important variables for each of four major clusters. A stepwise multiple linear regression and regression tree approach was used to predict cattle grazing hours per animal unit (h ha⁻¹ AU⁻¹) using 40 variables (10 per cluster) selected from 130 total variables. Overall, the variable ranking method selected more weighted variables for systems-level analysis. The regression tree performed better than stepwise linear regression for interpreting factor-level effects on animal grazing preference. Cattle were more likely to graze forage on soils with Cd levels <0.04 mg kg⁻¹ (126% greater grazing hours per AU), soil Cr <0.098 mg kg⁻¹ (108%), and a SAGA wetness index of <2.7 (57%). Cattle also preferred grazing (88%) native grasses compared to orchardgrass (Dactylis glomerata L.). The results show that water flow within the landscape (wetness index) and the associated distribution of metals may be used as indicators of animal grazing preference. Overall, soil nutrient distribution patterns drove grazing response, although animal grazing preference was also influenced by aboveground (forage and tree), soil, and landscape attributes. Machine learning approaches helped explain pasture use and overall drivers of grazing preference in a multifunctional system.
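
As a rough sketch of the variable-ranking step, the code below ranks synthetic stand-in variables with random-forest importances and fits a shallow regression tree to a toy grazing response. The variable names and data are assumptions, not the study's measurements.

```python
# Minimal sketch (assumption): random-forest importance ranking followed by a
# shallow regression tree, mirroring the paper's variable-selection idea on
# synthetic stand-in data (the real study draws from 130 forage/tree/soil/terrain variables).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
names = ["soil_Cd", "soil_Cr", "wetness_index", "forage_mass", "slope"]
X = rng.uniform(0, 1, size=(300, len(names)))
grazing_hours = 5 - 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 0.2, 300)  # toy response

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, grazing_hours)
ranking = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
print("importance ranking:", ranking)

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, grazing_hours)
print(export_text(tree, feature_names=names))
```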


2021 ◽  
Vol 11 (9) ◽  
pp. 3866
Author(s):  
Jun-Ryeol Park ◽  
Hye-Jin Lee ◽  
Keun-Hyeok Yang ◽  
Jung-Keun Kook ◽  
Sanghee Kim

This study aims to predict the compressive strength of concrete using a machine-learning algorithm with linear regression analysis and to evaluate its accuracy. The open-source software library TensorFlow was used to develop the machine-learning algorithm. In the machine-learning algorithm, a total of seven variables were set: water, cement, fly ash, blast furnace slag, sand, coarse aggregate, and coarse aggregate size. A total of 4297 concrete mixtures with measured compressive strengths were employed to train and test the machine-learning algorithm. Of these, 70% were used for training, and 30% were utilized for verification. For verification, the research was conducted by classifying the mixtures into three cases: the case where the machine-learning algorithm was trained using all the data (Case-1), the case where the machine-learning algorithm was trained while maintaining the same number of training data for each strength range (Case-2), and the case where the machine-learning algorithm was trained after dividing each strength range into subcases (Case-3). The results indicated that the error percentages of Case-1 and Case-2 did not differ significantly. The error percentage of Case-3 was far smaller than those of Case-1 and Case-2. Therefore, it was concluded that the range of the training data for concrete compressive strength is as important as the amount of training data for accurately predicting concrete compressive strength using the machine-learning algorithm.
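
The sketch below sets up a single-output linear model in TensorFlow/Keras over the seven mixture variables with a 70/30 split. The synthetic data and coefficients are assumptions standing in for the 4297 measured mixtures.

```python
# Minimal sketch (assumption): a linear regression model in TensorFlow/Keras
# over the paper's seven mixture variables, fit on synthetic stand-in data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
# water, cement, fly ash, blast furnace slag, sand, coarse aggregate, coarse aggregate size
X = rng.uniform(0, 1, size=(1000, 7)).astype("float32")
y = (X @ np.array([-1.0, 3.0, 0.5, 0.8, 0.2, 0.3, -0.1], dtype="float32")
     + rng.normal(0, 0.05, 1000).astype("float32"))

model = tf.keras.Sequential([tf.keras.Input(shape=(7,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, validation_split=0.3, verbose=0)   # 70% training, 30% verification
print("verification MSE:", model.evaluate(X[700:], y[700:], verbose=0))
```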


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1055
Author(s):  
Qian Sun ◽  
William Ampomah ◽  
Junyu You ◽  
Martha Cather ◽  
Robert Balch

Machine-learning technologies have exhibited robust capabilities in solving many petroleum engineering problems. Their predictive accuracy and computational speed enable large volumes of time-consuming engineering processes such as history matching and field development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which suit the strengths of machine-learning approaches. Although machine-learning proxy models are trained and validated before being applied to practical problems, their error margins inevitably introduce uncertainties into the results. In this paper, a hybrid numerical machine-learning workflow for solving various optimization problems is presented. By coupling the expert machine-learning proxies with a global optimizer, the workflow successfully solves the history-matching and CO2 water-alternating-gas (WAG) design problems with low computational overhead. The history-matching work considers the heterogeneities of multiphase relative characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multi-layer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow includes revisiting the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guiding insights for similar CO2 enhanced oil recovery (EOR) projects.
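
A minimal sketch of the proxy-plus-optimizer coupling is given below, using an sklearn MLP surrogate and SciPy's differential evolution on a synthetic objective. It illustrates the structure of such a workflow, not the SWP models or objective functions.

```python
# Minimal sketch (assumption): couple a trained surrogate (here an sklearn MLP)
# with a global optimizer, standing in for the paper's proxy-assisted
# history-matching / WAG-design loop; the objective and data are synthetic.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(400, 2))               # e.g. two injection control variables
y = (X[:, 0] - 0.5) ** 2 + (X[:, 1] + 1.0) ** 2     # stand-in for an expensive simulator

proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

# Global search over the cheap proxy instead of the full-physics simulator.
result = differential_evolution(lambda v: proxy.predict(v.reshape(1, -1))[0],
                                bounds=[(-2, 2), (-2, 2)], seed=0)
print("proxy optimum near (0.5, -1.0):", result.x)
# The workflow would then re-run the high-fidelity simulator here for validation.
```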


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1840
Author(s):  
Nicolás Caselli ◽  
Ricardo Soto ◽  
Broderick Crawford ◽  
Sergio Valdivia ◽  
Rodrigo Olivares

Metaheuristics are intelligent problem-solvers that have been very efficient at solving huge optimization problems for more than two decades. However, the main drawback of these solvers is the need for problem-dependent and complex parameter tuning in order to reach good results. This paper presents a new cuckoo search algorithm able to self-adapt its configuration, particularly its population size and abandonment probability. The self-tuning process is governed by machine learning, where cluster analysis is employed to autonomously and properly compute the number of agents needed at each step of the solving process. The goal is to efficiently explore the space of possible solutions while alleviating the human effort of parameter configuration. We illustrate interesting experimental results on the well-known set covering problem, where the proposed approach competes against various state-of-the-art algorithms, achieving better results in a single run than 20 different configurations. In addition, the results obtained are compared with those of similar hybrid bio-inspired algorithms, illustrating favorable outcomes for this proposal.
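
The sketch below illustrates the cluster-analysis idea: k-means on the current population decides how many agents carry over to the next step. The keep-one-agent-per-cluster rule is an assumption for illustration, not the paper's exact update.

```python
# Minimal sketch (assumption): use k-means cluster analysis on the current
# population to set the number of agents for the next iteration, keeping the
# member closest to each cluster centre.
import numpy as np
from sklearn.cluster import KMeans

def adapt_population(population, k_max=8, seed=0):
    """Return a reduced population: the member closest to each cluster centre."""
    k = min(k_max, len(population))
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(population)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(population[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(d)])
    return population[keep]

rng = np.random.default_rng(5)
pop = rng.uniform(0, 1, size=(40, 5))               # 40 candidate solutions, 5 decision variables
print("adapted population size:", len(adapt_population(pop)))
```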


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Hayssam Dahrouj ◽  
Rawan Alghamdi ◽  
Hibatallah Alwazani ◽  
Sarah Bahanshal ◽  
Alaa Alameer Ahmad ◽  
...  
