A 64 bit quantum dragon data-set for machine learning

2021 ◽  
Vol 2122 (1) ◽  
pp. 012005
Author(s):  
M.A. Novotný ◽  
Yaroslav Koshka ◽  
G. Inkoom ◽  
Vivek Dixit

Abstract Design and examples of a 64 bit quantum dragon data-set are presented. A quantum dragon is a tight-binding model of a strongly disordered nanodevice that, when connected to appropriate semi-infinite leads, has complete electron transmission over a finite interval of energies. The labeled data-set contains records that are quantum dragons, records that are not, and records that are indeterminate. The data-set is designed so that labeling a nanodevice with respect to its quantum dragon property is difficult for both trained humans and machines. The 64 bit record length allows the data-set to be used with restricted Boltzmann machines that map well onto the D-Wave 2000Q quantum annealer architecture.
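A 64 bit record maps naturally onto the 64 binary visible units of a restricted Boltzmann machine. As a minimal sketch (the helper name `record_to_bits` and the least-significant-bit-first ordering are assumptions, not details from the data-set itself):

```python
import numpy as np

def record_to_bits(record: int) -> np.ndarray:
    """Unpack a 64 bit integer record into a length-64 binary vector,
    suitable as the visible layer of a restricted Boltzmann machine.
    Bit 0 is taken to be the least significant bit (an assumption)."""
    return np.array([(record >> i) & 1 for i in range(64)], dtype=np.uint8)

# Example: a record with only the lowest and highest bits set.
bits = record_to_bits((1 << 63) | 1)
```

Each such vector can then be clamped to the visible units during RBM training, with one unit per bit of the record.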

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guanglei Xu ◽  
William S. Oates

Abstract Restricted Boltzmann machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient-based optimization. Quantum annealing has been proposed as a means to search this space more efficiently, and this has been investigated experimentally on D-Wave hardware. The D-Wave implementation requires selecting an effective inverse temperature, or hyperparameter (β), within the Boltzmann distribution, which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find that both methods improve the training of RBMs, based on experimental validation on D-Wave hardware for an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis, which shows more than an order of magnitude lower image reconstruction error for the maximum likelihood method than for manual optimization of the hyperparameter. The maximum likelihood method is also shown to outperform minimizing the Shannon entropy for image reconstruction.
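The core idea of fitting β by maximum likelihood can be illustrated classically on a toy energy spectrum: under the β-scaled Boltzmann distribution, log p(E) = −βE − log Z, so β can be chosen to maximize the mean log-likelihood of observed sample energies. This is a minimal sketch under stated assumptions (a small enumerable state set and a simple grid search), not the paper's D-Wave pipeline; `boltzmann_probs` and `log_likelihood` are illustrative helpers:

```python
import numpy as np

def boltzmann_probs(energies, beta):
    # Boltzmann distribution p_i ∝ exp(-beta * E_i) over a finite state set;
    # energies are shifted by their minimum for numerical stability.
    w = np.exp(-beta * (energies - energies.min()))
    return w / w.sum()

def log_likelihood(sample_energies, state_energies, beta):
    # Mean log-likelihood of observed sample energies under the beta-scaled
    # Boltzmann distribution: log p(E) = -beta*E - log Z.
    shifted = -beta * state_energies
    log_z = shifted.max() + np.log(np.exp(shifted - shifted.max()).sum())
    return float(np.mean(-beta * sample_energies - log_z))

# Toy spectrum standing in for the energies of an annealer's sampled states.
state_energies = np.linspace(0.0, 3.0, 8)
true_beta = 1.0
rng = np.random.default_rng(0)
samples = rng.choice(state_energies, size=5000,
                     p=boltzmann_probs(state_energies, true_beta))

# Grid search: pick the beta that maximizes the likelihood of the samples.
betas = np.linspace(0.1, 3.0, 60)
beta_hat = betas[int(np.argmax([log_likelihood(samples, state_energies, b)
                                for b in betas]))]
```

With enough samples the recovered `beta_hat` lies close to the generating value; in the paper's setting the sample energies would instead come from D-Wave hardware reads, and β would be updated during RBM training.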


AIP Advances ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 015127
Author(s):  
Qiuyuan Chen ◽  
Jiawei Chang ◽  
Lin Ma ◽  
Chenghan Li ◽  
Liangfei Duan ◽  
...  

2021 ◽  
Vol 154 (16) ◽  
pp. 164115
Author(s):  
Rebecca K. Lindsey ◽  
Sorin Bastea ◽  
Nir Goldman ◽  
Laurence E. Fried

2005 ◽  
Vol 31 (8) ◽  
pp. 585-595 ◽  
Author(s):  
D. A. Areshkin ◽  
O. A. Shenderova ◽  
J. D. Schall ◽  
D. W. Brenner
