Learning Entry Profiles of Children with Autism from Multivariate Treatment Information Using Restricted Boltzmann Machines

Author(s):  
Pratibha Vellanki ◽  
Dinh Phung ◽  
Thi Duong ◽  
Svetha Venkatesh
Author(s):  
Abeer M Mahmoud ◽  
Hanen Karamti

Recent intelligent learning approaches based on neural networks have raised researchers' expectations in medical diagnosis. The literature has demonstrated a direct relationship between the exact clinical needs addressed and the results achieved, encouraging promising directions for applying these approaches to save time and effort. This paper proposes a novel hybrid deep learning framework based on Restricted Boltzmann Machines (RBM) and the contractive autoencoder (CA) to classify brain-disorder and healthy-control cases in children under 12 years of age. The RBM focuses on obtaining a discriminative set of features to guide the classification process, while the CA provides regularization and feature robustness toward the optimization objective. The proposed framework diagnosed children with autism with an accuracy of 91.14%, an improvement over results reported in the literature.
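
For readers unfamiliar with the two components, the sketch below illustrates the general idea on synthetic data: an RBM used as an unsupervised feature extractor feeding a linear classifier, plus a small helper computing the contractive penalty that a contractive autoencoder adds to its reconstruction loss. The dataset, network sizes, and hyperparameters are placeholders, not the authors' actual configuration.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (unavailable) screening dataset:
# 200 subjects, 64 features scaled to [0, 1], binary label.
rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = (X[:, :8].sum(axis=1) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBM as an unsupervised feature extractor feeding a linear classifier.
clf = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))


def contractive_penalty(W, h):
    """Squared Frobenius norm of the encoder Jacobian for a sigmoid encoder h = sigmoid(Wx + b).

    For sigmoid units the penalty factorises as
    sum_j (h_j * (1 - h_j))**2 * sum_i W[j, i]**2,
    which is the regulariser a contractive autoencoder adds to its
    reconstruction loss during training.
    """
    return np.sum((h * (1.0 - h)) ** 2 @ np.sum(W ** 2, axis=1))


# Evaluated here on the RBM encoder purely for illustration; in the CA it is
# added to the reconstruction loss and minimised jointly with it.
rbm = clf.named_steps["rbm"]
H = rbm.transform(X_train)  # sigmoid hidden activations P(h=1|v)
print("contractive penalty of encoder:", contractive_penalty(rbm.components_, H))
```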


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Wenhao Lu ◽  
Chi-Sing Leung ◽  
John Sum

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guanglei Xu ◽  
William S. Oates

Abstract: Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient-based optimization. Quantum annealing has been proposed as a means to search this space more efficiently, and has been experimentally investigated on D-Wave hardware. The D-Wave implementation requires selecting an effective inverse temperature, or hyperparameter (β), within the Boltzmann distribution, which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find that both methods improve RBM training, based on experimental validation on D-Wave hardware for an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis, which shows more than an order of magnitude lower image reconstruction error when the hyperparameter is set by maximum likelihood rather than tuned manually. The maximum likelihood method is also shown to outperform minimizing the Shannon entropy for image reconstruction.
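
A minimal sketch of the maximum-likelihood estimation of the effective inverse temperature described above, on a toy Ising system small enough that the partition function can be enumerated exactly. The couplings, sample counts, and the "true" β are hypothetical stand-ins for hardware samples; this is not the D-Wave workflow itself.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy Ising energy on n spins; J and h are hypothetical couplings standing in
# for the embedded RBM problem that would be sent to the annealer.
rng = np.random.default_rng(1)
n = 8
J = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)  # upper-triangular couplings
h = rng.normal(scale=0.5, size=n)

def energy(s):
    # E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
    return s @ J @ s + h @ s

# Enumerate all 2^n spin configurations so the partition function is exact.
states = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)]
                   for k in range(2 ** n)], dtype=float)
energies = np.array([energy(s) for s in states])

# Pretend these are hardware samples drawn at an unknown effective beta_true.
beta_true = 2.0
p = np.exp(-beta_true * energies)
p /= p.sum()
sample_E = energies[rng.choice(len(states), size=5000, p=p)]

def neg_log_likelihood(beta):
    # Average negative log-likelihood: beta * mean(E) + log Z(beta),
    # with log Z computed via logsumexp for numerical stability.
    log_Z = np.logaddexp.reduce(-beta * energies)
    return beta * sample_E.mean() + log_Z

res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10.0), method="bounded")
print("estimated beta:", res.x)  # should recover roughly beta_true = 2.0
```

On real hardware the partition function cannot be enumerated, so the same objective has to be approximated, but the estimator itself is this one-dimensional likelihood maximization over β.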


2013 ◽  
Vol 29 (13) ◽  
pp. i126-i134 ◽  
Author(s):  
Yuhao Wang ◽  
Jianyang Zeng
