A deep learning scheme for mental workload classification based on restricted Boltzmann machines

2017 · Vol 19 (4) · pp. 607-631 · Author(s): Jianhua Zhang, Sunan Li

Deep Restricted Kernel Machines Using Conjugate Feature Duality

2017 · Vol 29 (8) · pp. 2123-2163 · Author(s): Johan A. K. Suykens

The aim of this letter is to propose a theory of deep restricted kernel machines, offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form of restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
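For orientation, the conjugate feature duality referred to above can be sketched as follows (a minimal reconstruction in our own notation, assuming the standard quadratic conjugation; it is not quoted from the paper). A least squares error term admits the Fenchel-type bound

    \frac{1}{2\lambda}\, e^\top e \;\ge\; e^\top h - \frac{\lambda}{2}\, h^\top h
    \quad \text{for all } h, \qquad \text{with equality at } h = e/\lambda,

so each quadratic error term can be replaced by an expression that is linear in a new vector h of hidden features. The resulting bilinear visible-hidden interaction terms are what make the objective resemble the energy of a restricted Boltzmann machine, here with continuous variables and no probabilistic interpretation.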


Protein Function Prediction Using Deep Restricted Boltzmann Machines

2017 · Vol 2017 · pp. 1-9 · Author(s): Xianchun Zou, Guijun Wang, Guoxian Yu

Accurately annotating the biological functions of proteins is one of the key tasks of the postgenome era. Many machine learning based methods have been applied to predict functional annotations of proteins, but this task is rarely addressed with deep learning techniques, which have recently been applied with great success to a wide range of problems, such as video, image, and natural language processing. Inspired by these successes, we investigate deep restricted Boltzmann machines (DRBM), a representative deep learning technique, to predict the missing functional annotations of partially annotated proteins. Experimental results on Homo sapiens, Saccharomyces cerevisiae, Mus musculus, and Drosophila show that DRBM achieves better performance than other related methods across different evaluation metrics, and it also runs faster than these competing methods.
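As a concrete reference point, the building block of such a DRBM is the Bernoulli RBM trained with contrastive divergence. Below is a minimal NumPy sketch of one CD-1 training step; this illustrates the standard technique only, and the authors' architecture, loss, and hyperparameters are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class BernoulliRBM:
        def __init__(self, n_visible, n_hidden, lr=0.05):
            self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
            self.b = np.zeros(n_visible)  # visible biases
            self.c = np.zeros(n_hidden)   # hidden biases
            self.lr = lr

        def cd1_step(self, v0):
            # positive phase: hidden probabilities and samples given the data
            ph0 = sigmoid(v0 @ self.W + self.c)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            # negative phase: one Gibbs step back to visible, then to hidden
            pv1 = sigmoid(h0 @ self.W.T + self.b)
            ph1 = sigmoid(pv1 @ self.W + self.c)
            # CD-1 update: difference of data-driven and model-driven statistics
            self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
            self.b += self.lr * (v0 - pv1).mean(axis=0)
            self.c += self.lr * (ph0 - ph1).mean(axis=0)

    # toy usage: binary annotation vectors as visible units
    data = (rng.random((64, 20)) < 0.3).astype(float)
    rbm = BernoulliRBM(n_visible=20, n_hidden=8)
    for _ in range(100):
        rbm.cd1_step(data)

Stacking several such layers, each trained on the hidden activations of the previous one, yields a deep RBM of the kind used here for annotation prediction.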


Author(s): Yiping Cheng

Restricted Boltzmann machines (RBMs) are the building blocks of some deep learning networks. Despite their importance, it is our perception that some very important derivations about the RBM are missing from the literature, and a beginner may find RBMs very hard to understand. We provide these missing derivations here. We cover the classic Bernoulli-Bernoulli RBM and the Gaussian-Bernoulli RBM, but leave out the "continuous" RBM, as it is regarded as less mature than the former two. This tutorial can be used as a companion or complement to the well-known RBM paper "Training restricted Boltzmann machines: An introduction" by Fischer and Igel.
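For context, the two model families the tutorial derives are fixed by their energy functions. The standard forms, in one common convention (our notation, not quoted from the tutorial; \sigma(\cdot) denotes the logistic function), are:

    % Bernoulli-Bernoulli RBM
    E(v, h) = -b^\top v - c^\top h - v^\top W h,
    p(h_j = 1 \mid v) = \sigma\Big(c_j + \sum_i W_{ij} v_i\Big), \qquad
    p(v_i = 1 \mid h) = \sigma\Big(b_i + \sum_j W_{ij} h_j\Big).

    % Gaussian-Bernoulli RBM (real-valued visible units with per-unit variance \sigma_i^2)
    E(v, h) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - c^\top h
              - \sum_{i,j} \frac{v_i}{\sigma_i}\, W_{ij}\, h_j,
    p(v_i \mid h) = \mathcal{N}\Big(b_i + \sigma_i \sum_j W_{ij} h_j,\; \sigma_i^2\Big).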


Adaptive hyperparameter updating for training restricted Boltzmann machines on quantum annealers

2021 · Vol 11 (1) · Author(s): Guanglei Xu, William S. Oates

Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient-based optimization. Quantum annealing has been proposed as a means to search this space more efficiently, and this has been experimentally investigated on D-Wave hardware. The D-Wave implementation requires selecting an effective inverse temperature, or hyperparameter β, within the Boltzmann distribution, which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find that both methods improve the training of RBMs, based on D-Wave hardware experiments on an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis, which shows more than an order of magnitude lower image reconstruction error using the maximum likelihood method compared with manually optimizing the hyperparameter. The maximum likelihood method is also shown to outperform minimizing the Shannon entropy for image reconstruction.
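To make the maximum likelihood route concrete, here is a toy NumPy sketch of estimating the inverse temperature β by gradient ascent on the data log-likelihood under p_β(s) ∝ exp(-βE(s)). The state space is kept small enough to enumerate exactly; none of this reproduces the D-Wave workflow from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # toy model: 8 binary units with a random symmetric quadratic energy
    n = 8
    J = rng.normal(0.0, 0.1, (n, n))
    J = (J + J.T) / 2.0
    states = ((np.arange(2**n)[:, None] >> np.arange(n)) & 1).astype(float)
    energies = np.einsum("si,ij,sj->s", states, J, states)

    def avg_model_energy(beta):
        # <E>_beta under p_beta(s) = exp(-beta E(s)) / Z(beta), computed stably
        logw = -beta * energies
        w = np.exp(logw - logw.max())
        w /= w.sum()
        return w @ energies

    # "data": samples drawn at an unknown true inverse temperature
    true_beta = 0.7
    p = np.exp(-true_beta * energies - (-true_beta * energies).max())
    p /= p.sum()
    data_E = energies[rng.choice(len(states), size=5000, p=p)].mean()

    # d/dbeta of the mean log-likelihood equals <E>_model(beta) - <E>_data,
    # so gradient ascent drives the model's average energy toward the data's
    beta, lr = 1.0, 0.1
    for _ in range(500):
        beta += lr * (avg_model_energy(beta) - data_E)
    print(f"estimated beta = {beta:.3f}")  # should land near true_beta

Because the model's average energy decreases monotonically in β, this fixed-point iteration converges for a small enough step size; on annealing hardware the exact expectation would be replaced by a sampled estimate.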

