Sign Language Fingerspelling Recognition Using Depth Information and Deep Belief Networks

Author(s):  
Yong Hu ◽  
Hai-Feng Zhao ◽  
Zhi-Gang Wang

In sign language fingerspelling, each letter of the alphabet is represented by a distinctive finger shape or movement. The presented work addresses the automatic translation of fingerspelling signs into text. A recognition framework using both intensity and depth information is proposed and compared with several well-known works. Histogram of Oriented Gradients (HOG) and Zernike moments are used as discriminative features due to their simplicity and good performance. A Deep Belief Network (DBN) composed of three Restricted Boltzmann Machines (RBMs) is used as the classifier. Experiments are carried out on a challenging database consisting of 120,000 images that represent 24 alphabet letters performed by five different users. The proposed approach obtains a higher average accuracy than all of the other methods, which indicates the effectiveness of the proposed framework.
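
A minimal sketch of the kind of pipeline the abstract describes, assuming grayscale intensity/depth image pairs; the layer sizes, HOG parameters, and the stacked scikit-learn RBMs standing in for the DBN are illustrative assumptions, not the authors' exact configuration (Zernike moments, which the paper also uses, would be concatenated to the feature vector in the same way):

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def extract_features(image_pairs):
    """Concatenate HOG descriptors from the intensity and depth channels.
    Zernike moments would be appended here as well."""
    feats = []
    for intensity, depth in image_pairs:
        f_i = hog(intensity, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
        f_d = hog(depth, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
        feats.append(np.concatenate([f_i, f_d]))
    return np.asarray(feats)

# Three stacked RBMs feeding a softmax-style classifier stand in for the
# DBN; component counts and learning rates are placeholders.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=512, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ("rbm3", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Usage (hypothetical variables): 24 letter classes, five signers.
# X_train = extract_features(train_pairs); dbn.fit(X_train, y_train)
# y_pred = dbn.predict(extract_features(test_pairs))
```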

2020 ◽  
Vol 31 (10) ◽  
pp. 4217-4228 ◽  
Author(s):  
Gongming Wang ◽  
Junfei Qiao ◽  
Jing Bi ◽  
Qing-Shan Jia ◽  
MengChu Zhou

2011 ◽  
Vol 23 (5) ◽  
pp. 1306-1319 ◽  
Author(s):  
Guido Montufar ◽  
Nihat Ay

We improve recently published results about the resources of restricted Boltzmann machines (RBM) and deep belief networks (DBN) required to make them universal approximators. We show that any distribution p on the set {0,1}^n of binary vectors of length n can be arbitrarily well approximated by an RBM with k−1 hidden units, where k is the minimal number of pairs of binary vectors differing in only one entry such that their union contains the support set of p. In important cases this number is half the cardinality of the support set of p (the bound given in Le Roux & Bengio, 2008). We construct a DBN with 2^n/(2(n−b)), b ∼ log n, hidden layers of width n that is capable of approximating any distribution on {0,1}^n arbitrarily well. This confirms a conjecture presented in Le Roux and Bengio (2010).
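
Restated compactly in the abstract's own notation (with the formulas as reconstructed above), the two bounds are:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Compact restatement of the two approximation bounds discussed above.
\begin{itemize}
  \item RBM: any distribution $p$ on $\{0,1\}^n$ can be approximated
        arbitrarily well with $k-1$ hidden units, where $k$ is the minimal
        number of pairs of binary vectors differing in one entry whose union
        contains $\operatorname{supp}(p)$; in important cases
        $k = \tfrac{1}{2}\,|\operatorname{supp}(p)|$.
  \item DBN: $\dfrac{2^{n}}{2(n-b)}$ hidden layers of width $n$, with
        $b \sim \log n$, suffice to approximate any distribution on
        $\{0,1\}^n$ arbitrarily well.
\end{itemize}
\end{document}
```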


2020 ◽  
Vol 13 (3) ◽  
pp. 508-518
Author(s):  
Abderrazak Khediri ◽  
Mohamed Ridda Laouar ◽  
Sean B. Eom

Background: Enhancing the resiliency of electric power grids has become a crucial issue due to the outages that have occurred in recent years. One solution is to predict imminent failures caused by line contingencies or grid disturbances, and a number of researchers have therefore investigated techniques for predicting outages. However, extended blackouts can still occur because of the fragility of distribution power grids. Objective: This paper implements a proactive prediction model based on deep belief networks that predicts imminent outages from historical blackout records, triggers alarms, and suggests restoration actions. These actions can prevent outages, stop cascading failures and diminish the resulting economic losses. Methods: The proposed model is divided into three phases. Phase A collects and extracts data and trains the deep belief network on the collected data. Phase B defines the power outage threshold and determines whether the grid is in a normal state. Phase C detects potentially unsafe events, triggers alarms and proposes emergency action plans for restoration. Results: Several machine learning and deep learning algorithms, such as random forests and Bayesian networks, are used in the experiments to validate the proposition. The deep belief network achieves 97.30% accuracy and 97.06% precision. Conclusion: The findings demonstrate that the proposed model is well suited to blackout prediction and that the deep belief network is a powerful deep learning tool that offers plausible results.
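
A minimal sketch of the three-phase flow described above, assuming historical grid snapshots scaled to [0, 1]; the feature layout, the 0.5 alarm threshold, and the stacked scikit-learn RBMs standing in for the deep belief network are illustrative assumptions rather than the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Phase A: collect historical data and train the DBN-style model
# (rows = grid snapshots, y = 1 when the snapshot preceded an outage).
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=32, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# model.fit(X_history, y_history)

# Phase B: define the power outage threshold separating the normal state
# from an at-risk state (0.5 is a placeholder value).
OUTAGE_THRESHOLD = 0.5

def assess_grid(snapshot: np.ndarray) -> str:
    """Phase C: score a live snapshot and trigger an alarm with an action
    plan when the predicted outage probability exceeds the threshold."""
    prob = model.predict_proba(snapshot.reshape(1, -1))[0, 1]
    if prob > OUTAGE_THRESHOLD:
        return f"ALARM: outage probability {prob:.2f}, launch restoration plan"
    return f"Normal state (outage probability {prob:.2f})"
```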


2016 ◽  
Vol 7 (3) ◽  
pp. 395-406 ◽  
Author(s):  
Kodai Ueyoshi ◽  
Takao Marukame ◽  
Tetsuya Asai ◽  
Masato Motomura ◽  
Alexandre Schmid
