Sign Language Fingerspelling Recognition Using Depth Information and Deep Belief Networks
In the sign language fingerspelling scheme, each letter of the alphabet is represented by a distinctive finger shape or movement. The presented work addresses the automatic translation of fingerspelling signs to text. A recognition framework using intensity and depth information is proposed and compared with several notable prior works. Histogram of Oriented Gradients (HOG) and Zernike moments are used as discriminative features owing to their simplicity and good performance. A Deep Belief Network (DBN) composed of three stacked Restricted Boltzmann Machines (RBMs) is used as the classifier. Experiments are conducted on a challenging database consisting of 120,000 images representing 24 alphabet letters performed by five different users. The proposed approach achieves a higher average accuracy than all compared methods, demonstrating the effectiveness of the proposed framework.
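To make the classification stage concrete, the sketch below stacks three RBMs followed by a softmax head, mirroring the three-RBM DBN described above. This is a hedged illustration, not the authors' implementation: it uses scikit-learn's `BernoulliRBM` (greedy layer-wise pre-training only, no joint fine-tuning), and the random feature vectors stand in for the HOG and Zernike descriptors extracted from the intensity and depth images.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical stand-in data: in the real framework, each row would be a
# HOG + Zernike feature vector computed from an intensity/depth image pair.
rng = np.random.RandomState(0)
X = rng.rand(200, 64)          # 200 samples, 64-dim features scaled to [0, 1]
y = rng.randint(0, 24, 200)    # 24 static alphabet letters

# Three stacked RBMs approximate the DBN's greedy layer-wise pre-training;
# a logistic-regression head performs the final letter classification.
# Layer sizes and learning rates here are illustrative assumptions.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=5, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=5, random_state=0)),
    ("rbm3", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=5, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),
])
dbn.fit(X, y)
preds = dbn.predict(X)
```

A full reproduction would additionally fine-tune the stacked network end-to-end with backpropagation, which `BernoulliRBM` alone does not provide.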