Representational Power of Restricted Boltzmann Machines and Deep Belief Networks

2008 ◽  
Vol 20 (6) ◽  
pp. 1631-1649 ◽  
Author(s):  
Nicolas Le Roux ◽  
Yoshua Bengio

Deep belief networks (DBN) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton, Osindero, and Teh (2006) along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a restricted Boltzmann machine (RBM), used to represent one layer of the model. Restricted Boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs.
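
Since the abstract hinges on the claim that "inference is easy" in an RBM, a minimal NumPy sketch of the factorized conditionals may help: given the visible units, the hidden units are conditionally independent, so P(h | v) is computed in a single matrix-vector product. All names here (W, b, c, n_vis, n_hid, gibbs_step) are illustrative and not from the paper.

```python
# Minimal binary-RBM conditionals: hiddens are independent given visibles,
# so inference and block Gibbs sampling are cheap.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4                      # layer sizes (arbitrary for the demo)
W = rng.normal(0, 0.1, (n_vis, n_hid))   # visible-hidden weights
b = np.zeros(n_vis)                      # visible biases
c = np.zeros(n_hid)                      # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v):
    # P(h_j = 1 | v) = sigmoid(c_j + v . W[:, j]) -- one factor per hidden unit
    return sigmoid(c + v @ W)

def p_v_given_h(h):
    # Symmetrically, P(v_i = 1 | h) = sigmoid(b_i + W[i, :] . h)
    return sigmoid(b + W @ h)

def gibbs_step(v):
    # One block-Gibbs sweep: sample all hiddens at once, then all visibles.
    h = (rng.random(n_hid) < p_h_given_v(v)).astype(float)
    v_new = (rng.random(n_vis) < p_v_given_h(h)).astype(float)
    return v_new, h

v = rng.integers(0, 2, n_vis).astype(float)
v, h = gibbs_step(v)
print("sampled hidden state:", h)
```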

2011 ◽  
Vol 23 (5) ◽  
pp. 1306-1319 ◽  
Author(s):  
Guido Montufar ◽  
Nihat Ay

We improve recently published results about the resources of restricted Boltzmann machines (RBM) and deep belief networks (DBN) required to make them universal approximators. We show that any distribution p on the set {0,1}^n of binary vectors of length n can be arbitrarily well approximated by an RBM with k−1 hidden units, where k is the minimal number of pairs of binary vectors differing in only one entry such that their union contains the support set of p. In important cases this number is half the cardinality of the support set of p (the bound given in Le Roux & Bengio, 2008). We construct a DBN with 2^n/(2(n−b)), b ∼ log n, hidden layers of width n that is capable of approximating any distribution on {0,1}^n arbitrarily well. This confirms a conjecture presented in Le Roux and Bengio (2010).
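
To make the quantity k concrete, below is a small sketch (the helper name minimal_pair_cover is hypothetical; networkx is assumed available) that computes it for a given support set. It relies on the standard minimum-edge-cover identity k = |S| − |maximum matching|: pairs covering two support vectors at once come from a maximum matching in the graph induced on the support, and each leftover support vector is paired with one of its n neighbors, possibly outside the support.

```python
# Compute k: the minimal number of Hamming-distance-1 pairs of binary
# vectors whose union covers the support set S of a distribution p.
import itertools
import networkx as nx

def minimal_pair_cover(support):
    """support: iterable of equal-length 0/1 tuples; returns k."""
    S = set(support)
    G = nx.Graph()
    G.add_nodes_from(S)
    for u, v in itertools.combinations(S, 2):
        if sum(a != b for a, b in zip(u, v)) == 1:   # Hamming distance 1
            G.add_edge(u, v)
    # Maximum-cardinality matching on the induced support graph.
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return len(S) - len(matching)

# Example on {0,1}^3: the support {000, 001, 110, 111} splits into the
# pairs (000, 001) and (110, 111), so k = 2 and, by the theorem above,
# an RBM with k - 1 = 1 hidden unit already suffices for this support.
support = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
k = minimal_pair_cover(support)
print(f"k = {k}, hidden units needed: {k - 1}")
```

Note that in this example k is indeed half the cardinality of the support set, matching the "important cases" mentioned in the abstract.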


Author(s):  
Yong Hu ◽  
Hai-Feng Zhao ◽  
Zhi-Gang Wang

In a sign language fingerspelling scheme, each letter of the alphabet is represented by a distinctive finger shape or movement. The presented work addresses the automatic translation of fingerspelling signs to text. A recognition framework using intensity and depth information is proposed and compared with several notable prior works. Histogram of Oriented Gradients (HOG) descriptors and Zernike moments are used as discriminative features due to their simplicity and good performance. A Deep Belief Network (DBN) composed of three stacked Restricted Boltzmann Machines (RBMs) is used as the classifier. Experiments are conducted on a challenging database consisting of 120,000 pictures representing 24 alphabet letters performed by five different users. The proposed approach obtained the highest average accuracy, outperforming all compared methods, which indicates the effectiveness of the proposed framework.
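
As a rough illustration of the described pipeline, and not the authors' implementation, the sketch below chains scikit-learn BernoulliRBM layers behind skimage HOG features. scikit-learn only ships single-layer RBMs, so the three-RBM DBN is approximated by stacking three of them ahead of a logistic-regression output layer; the depth channel, Zernike moments, and the real 120,000-image database are replaced by toy stand-ins.

```python
# HOG features -> stack of three RBMs -> linear classifier (loose sketch).
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def hog_features(image):
    # image: 2-D grayscale array in [0, 1]; parameters are typical defaults.
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Toy stand-in data: 100 random 64x64 "hand" images over 24 letter classes.
rng = np.random.default_rng(0)
X = np.stack([hog_features(rng.random((64, 64))) for _ in range(100)])
y = rng.integers(0, 24, 100)

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=10)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10)),
    ("rbm3", BernoulliRBM(n_components=64,  learning_rate=0.05, n_iter=10)),
    ("clf",  LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

Chaining BernoulliRBMs works here because each layer's transform outputs sigmoid activations in [0, 1], which is the input range the next RBM expects.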


2016 ◽  
Vol 7 (3) ◽  
pp. 395-406 ◽  
Author(s):  
Kodai Ueyoshi ◽  
Takao Marukame ◽  
Tetsuya Asai ◽  
Masato Motomura ◽  
Alexandre Schmid
