A study on the classification of idiomatic phrases and polysemous signs in Korean Sign Language – Focused on [GANGHADA]

2021 ◽  
Vol 74 ◽  
pp. 1-25
Author(s):  
Ki-Hyun Nam
2014 ◽  
Vol 16 (2) ◽  
Author(s):  
Kelly Lais Wiggers ◽  
Angelita Maria de Ré ◽  
Andres Jessé Porfírio

2018 ◽  
Vol 22 (1) ◽  
Author(s):  
Griselda Saldaña González ◽  
Jorge Cerezo Sánchez ◽  
Mario Mauricio Bustillo Díaz ◽  
Apolonio Ata Pérez

2013 ◽  
Vol 8 (1) ◽  
pp. 53 ◽  
Author(s):  
Robert Fulford ◽  
Jane Ginsborg

The first part of this paper reviews literature on the use of gesture in musical contexts and reports an investigation of the gestures (spontaneous gesticulation) made by musicians with different levels of hearing impairment in rehearsal talk. Profoundly deaf musicians, who were also users of British Sign Language, were found to produce significantly more gestures than moderately deaf and hearing musicians. Analysis also revealed the presence of underlying spatial and cross-modal associations in the gestural representations produced by all the musicians. The second part of the paper discusses the results of the study and addresses some wider theoretical questions. First, a classification of ‘musical shaping gestures’ (MSGs) according to existing taxonomies is attempted. Second, the question of how a standardised ‘sign language of music’ could be formed is addressed and, finally, the potential uses of such a system are considered.


Baby Sign Language is used by hearing parents with hearing infants as a form of preverbal communication: it reduces parental frustration, accelerates learning in babies, increases parent-child bonding, and lets babies communicate vital information, such as whether they are hurt or hungry. In the current research work, a literature study of existing sign language datasets was carried out; after finding that no dataset was available for Baby Sign Language, we created a static dataset of 311 baby signs, which were classified using MobileNet V1, a pretrained Convolutional Neural Network (CNN). The focus of the paper is to analyze the effect of gradient descent-based optimizers (Adam and its variants, and RMSProp) on the fine-tuned pretrained MobileNet V1 model trained on the customized dataset. Each optimizer was used to train and test MobileNet for 100 epochs on the dataset of 311 baby signs. Ten optimizers, including Adadelta, Adam, Adamax, SGD, Adagrad, and RMSProp, were compared based on their processing time.
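As a minimal illustration of what the abstract compares (and not the paper's actual MobileNet pipeline), the update rules of three of the named optimizers can be sketched in plain Python on a toy one-dimensional objective f(x) = x^2; the learning rates and step counts here are illustrative assumptions:

```python
import math

def optimize(update, steps=500, x0=5.0):
    """Minimize f(x) = x^2 using a given per-step update rule."""
    x, state = x0, {}
    for t in range(1, steps + 1):
        grad = 2.0 * x  # gradient of x^2
        x = update(x, grad, state, t)
    return x

def sgd(x, g, state, t, lr=0.1):
    # Plain gradient descent: step against the raw gradient.
    return x - lr * g

def rmsprop(x, g, state, t, lr=0.05, rho=0.9, eps=1e-8):
    # Scale the step by a running average of squared gradients.
    state["v"] = rho * state.get("v", 0.0) + (1 - rho) * g * g
    return x - lr * g / (math.sqrt(state["v"]) + eps)

def adam(x, g, state, t, lr=0.02, b1=0.9, b2=0.999, eps=1e-8):
    # Combine momentum (m) and RMSProp-style scaling (v), with
    # bias correction for the zero-initialized running averages.
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * g
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** t)
    v_hat = state["v"] / (1 - b2 ** t)
    return x - lr * m_hat / (math.sqrt(v_hat) + eps)

for name, rule in [("SGD", sgd), ("RMSProp", rmsprop), ("Adam", adam)]:
    print(f"{name}: final x = {optimize(rule):.4f}")
```

All three rules drive x from 5.0 toward the minimum at 0; the adaptive methods (RMSProp, Adam) take roughly constant-magnitude steps because they normalize by the gradient's running scale, which is why their learning rates are set smaller here.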


2021 ◽  
Author(s):  
Qing Han ◽  
Zhanlu Huangfu ◽  
Weidong Min ◽  
Yanqiu Liao

Abstract Most existing deep learning-based dynamic sign language recognition methods directly use either video sequences based on RGB information, or whole sequences rather than only the sub-sequence that represents the change of gesture. These characteristics lead to inaccurate extraction of hand-gesture features and poor recognition accuracy for complex gestures. To solve these problems, this paper proposes a new method of dynamic hand-gesture recognition based on key skeleton information, which combines a residual convolutional neural network with a long short-term memory recurrent network and is called the KLSTM-3D residual network (K3D ResNet). In K3D ResNet, the spatiotemporal complexity of network computation is reduced by extracting the representative skeleton frames of gesture change. Spatiotemporal features are then extracted from the skeleton keyframe sequence, and an intermediate score corresponding to each action in the video sequence is established after feature analysis. Finally, classification of the video sequences accurately identifies the sign language. Experiments were performed on the DHG14/28 and SHREC'17 Track datasets. Verification accuracy on the DEVISIGN-D dataset reached 88.6%, and the combination of RGB and skeleton information reached 93.2%.
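The keyframe-extraction step described in the abstract (keeping only the skeleton frames that represent gesture change) can be sketched as a simple change-thresholding pass. This is an illustrative assumption about how representative frames might be selected, not the paper's actual K3D ResNet selection rule; `frame_change`, `select_keyframes`, and the threshold are hypothetical names and values:

```python
def frame_change(a, b):
    """Total joint displacement between two skeleton frames,
    each given as a list of (x, y) joint coordinates."""
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in zip(a, b))

def select_keyframes(frames, threshold=0.5):
    """Keep the first frame, then any frame whose displacement from
    the last kept frame exceeds the threshold, so near-static frames
    (no gesture change) are discarded. Returns kept frame indices."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if frame_change(frames[kept[-1]], frames[i]) > threshold:
            kept.append(i)
    return kept

# Four 2-joint frames: frames 1 and 3 are near-duplicates and are dropped.
frames = [[(0, 0), (1, 1)],
          [(0, 0), (1, 1.01)],
          [(1, 0), (2, 1)],
          [(1, 0), (2, 1)]]
print(select_keyframes(frames))  # [0, 2]
```

Feeding only the kept indices into the downstream spatiotemporal feature extractor is what reduces the computational cost the abstract mentions: static spans of the video contribute nothing to gesture classification.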

