Machine learning methods for sign language recognition: A critical review and analysis

2021 · Vol. 12 · pp. 200056
Author(s): I.A. Adeyanju, O.O. Bello, M.A. Adegboye

2017
Author(s): Zeshan Peng

With the advancement of machine learning methods, audio sentiment analysis has become an active research area in recent years. For example, business organizations are interested in persuasion tactics from vocal cues and acoustic measures in speech. A typical approach is to find a set of acoustic features from audio data that can indicate or predict a customer's attitude, opinion, or emotional state. For audio signals, acoustic features have been widely used in many machine learning applications, such as music classification, language recognition, and emotion recognition. For emotion recognition, previous work shows that pitch and speech-rate features are important. This thesis work focuses on determining sentiment from call center audio records, each containing a conversation between a sales representative and a customer. The sentiment of an audio record is considered positive if the conversation ended with an appointment being made, and negative otherwise. In this project, a data processing and machine learning pipeline for this problem has been developed. It consists of three major steps: 1) an audio record is split into segments by speaker turns; 2) acoustic features are extracted from each segment; and 3) classification models are trained on the acoustic features to predict sentiment. Different sets of features have been used, and different machine learning methods, including classical machine learning algorithms and deep neural networks, have been implemented in the pipeline. In our deep neural network method, the feature vectors of audio segments are stacked in temporal order into a feature matrix, which is fed into deep convolutional neural networks as input. Experimental results based on real data show that acoustic features, such as Mel-frequency cepstral coefficients, timbre, and chroma features, are good indicators of sentiment. Temporal information in an audio record can be captured by deep convolutional neural networks for improved prediction accuracy.
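The abstract outlines a three-step pipeline (speaker-turn segmentation, per-segment acoustic features, a convolutional classifier over the stacked feature matrix) but gives no implementation details. Below is a minimal sketch of such a pipeline, assuming librosa for MFCC/chroma extraction and PyTorch for the classifier. The speaker-turn boundaries, the file name call_0001.wav, and the helper names segment_features, record_matrix, and SentimentCNN are illustrative assumptions, not the author's code.

```python
# Sketch of the three-step pipeline described in the abstract (assumed libraries: librosa, PyTorch).
# Speaker-turn segmentation is treated as given: a list of (start, end) times per record.
import librosa
import numpy as np
import torch
import torch.nn as nn


def segment_features(y, sr, start, end, n_mfcc=13):
    """Fixed-length acoustic feature vector for one speaker-turn segment (MFCC + chroma)."""
    seg = y[int(start * sr):int(end * sr)]
    mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    chroma = librosa.feature.chroma_stft(y=seg, sr=sr)         # (12, frames)
    # Average over time so every segment yields a vector of the same length.
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])


def record_matrix(path, turns, max_segments=32):
    """Stack per-segment feature vectors in temporal order into a fixed-size matrix."""
    y, sr = librosa.load(path, sr=16000)
    rows = [segment_features(y, sr, s, e) for s, e in turns[:max_segments]]
    mat = np.zeros((max_segments, len(rows[0])), dtype=np.float32)
    mat[:len(rows)] = rows                                     # zero-pad short records
    return mat


class SentimentCNN(nn.Module):
    """Small 2-D CNN over the (segments x features) matrix; binary sentiment output."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 2)  # positive (appointment made) vs. negative

    def forward(self, x):           # x: (batch, 1, segments, features)
        return self.fc(self.conv(x).flatten(1))


# Hypothetical usage with one call record and its speaker-turn boundaries.
turns = [(0.0, 4.2), (4.2, 9.8), (9.8, 15.1)]
m = record_matrix("call_0001.wav", turns)                  # (32, 25)
logits = SentimentCNN()(torch.from_numpy(m)[None, None])   # (1, 2)
```

Zero-padding the matrix to a fixed number of segments is one simple way to let records with different numbers of speaker turns share the same network input shape; the convolutions over the segment axis are what capture the temporal information mentioned in the abstract.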


2020 · Vol. 22 · pp. 145-160
Author(s): Darío Tilves Santiago, Carmen García Mateo, Soledad Torres Guijarro, Laura Docío Fernández, José Luis Alba Castro

Automatic sign language recognition (ASLR) is quite a complex task, not only because of the difficulty of dealing with highly dynamic video information, but also because almost every sign language (SL) can be considered an under-resourced language when it comes to language technology. Spanish Sign Language (LSE) is one of those under-resourced languages. Developing technology for LSE implies a number of technical challenges that must be tackled in a structured and sequential manner. In this paper, some problems of machine-learning-based ASLR are addressed. A review of publicly available datasets is given and a new one is presented. Current annotation methods and annotation programs are also discussed. The main conclusion of our review of existing datasets is that there is a need for more datasets with high-quality data and annotations.


Author(s): Wael Suliman, Mohamed Deriche, Hamzah Luqman, Mohamed Mohandes

Author(s): Paul D. Rosero-Montalvo, Pamela Godoy-Trujillo, Edison Flores-Bosmediano, Jorge Carrascal-Garcia, Santiago Otero-Potosi, ...
