
2021 ◽  
Vol 9 ◽  
Author(s):  
Vivek Dixit ◽  
Raja Selvarajan ◽  
Muhammad A. Alam ◽  
Travis S. Humble ◽  
Sabre Kais

A Restricted Boltzmann Machine (RBM) is an energy-based, undirected graphical model commonly used for unsupervised and supervised machine learning. RBMs are typically trained using contrastive divergence (CD); however, CD training is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation in the gradient of the RBM log-likelihood is estimated using a quantum annealer (D-Wave 2000Q), where obtaining samples is faster than with the Markov chain Monte Carlo (MCMC) sampling used in CD. Training and classification results of an RBM trained with quantum annealing are compared with the CD-based method with respect to classification accuracy, image reconstruction, and log-likelihood. The classification accuracies of the two methods are comparable, while the image reconstruction and log-likelihood results favor the CD-based method. It is shown that samples obtained from the quantum annealer can be used to train an RBM on a 64-bit "bars and stripes" dataset with classification performance similar to that of an RBM trained with CD. Although CD-based training showed better learning performance, training with a quantum annealer could still be useful, as it eliminates the computationally expensive MCMC steps of CD.
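To make the CD baseline concrete, the following is a minimal sketch (not the authors' code) of one-step contrastive divergence (CD-1) for a binary RBM on a 64-bit bars-and-stripes dataset. The sizes (64 visible units, 16 hidden units), learning rate, and the data generator are illustrative assumptions; the negative-phase Gibbs reconstruction inside `cd1_update` is the step a quantum annealer's samples could replace.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def bars_and_stripes():
    """Random 8x8 binary image whose rows (or columns) are uniformly
    on/off, flattened to a 64-bit vector (illustrative generator)."""
    pattern = rng.integers(0, 2, 8)
    if rng.random() < 0.5:
        img = np.tile(pattern[:, None], (1, 8))  # stripes
    else:
        img = np.tile(pattern[None, :], (8, 1))  # bars
    return img.reshape(64).astype(float)

# Hypothetical model sizes: 64 visible units, 16 hidden units.
n_visible, n_hidden = 64, 16
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
b = np.zeros(n_visible)   # visible biases
c = np.zeros(n_hidden)    # hidden biases

def cd1_update(v0, W, b, c, lr=0.05):
    """One CD-1 step: positive phase from the data, negative phase from
    a single Gibbs reconstruction. The gradient estimate is
    <v h>_data - <v h>_reconstruction; exact gradient learning would
    instead need samples from the model distribution (e.g. via MCMC,
    or a quantum annealer as in the abstract above)."""
    ph0 = sigmoid(v0 @ W + c)             # P(h=1 | v0), positive phase
    h0 = sample_bernoulli(ph0)
    pv1 = sigmoid(h0 @ W.T + b)           # reconstruction
    v1 = sample_bernoulli(pv1)
    ph1 = sigmoid(v1 @ W + c)             # negative phase
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # in-place updates
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

for step in range(2000):
    cd1_update(bars_and_stripes(), W, b, c)
```

Replacing the `v1`/`ph1` pair with annealer samples of the model distribution would turn this approximate CD update into the sampled-gradient scheme the abstract describes.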

