Comparative Analysis of Convolution Neural Network Models for Continuous Indian Sign Language Classification

2020 · Vol 171 · pp. 1542-1550
Author(s): Rinki Gupta, Sreeraman Rajan
IEEE Access · 2021 · Vol 9 · pp. 45993-45999
Author(s): Ung Yang, Seungwon Oh, Seung Gon Wi, Bok-Rye Lee, Sang-Hyun Lee, ...

2019 · Vol 9 (13) · pp. 2683
Author(s): Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, Choongsang Cho

We propose a sign language translation system based on human keypoint estimation. It is well known that many problems in computer vision require massive datasets to train deep neural network models. The situation is even worse for sign language translation, as high-quality training data are far more difficult to collect. In this paper, we introduce the KETI (Korea Electronics Technology Institute) sign language dataset, which consists of 14,672 videos of high resolution and quality. Considering that each country has its own unique sign language, the KETI dataset can serve as a starting point for further research on Korean sign language translation. Using this dataset, we develop a neural network model that translates sign videos into natural language sentences by utilizing human keypoints extracted from the face, hands, and body. The resulting keypoint vector is normalized by the mean and standard deviation of the keypoints and used as input to our translation model, which is based on the sequence-to-sequence architecture. We show that our approach is robust even when the size of the training data is limited. Our translation model achieves 93.28% translation accuracy on the validation set and 55.28% on the test set for 105 sentences that can be used in emergency situations. We also compare several variants of our neural sign translation model based on different attention mechanisms in terms of classical machine translation metrics.
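The pipeline the abstract describes lends itself to a compact sketch. Below is a minimal, illustrative PyTorch implementation of the two core ideas: normalizing the keypoint vector by its mean and standard deviation, and a sequence-to-sequence model with additive attention over encoder states. The GRU choice, layer sizes, and normalization axis are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

def normalize_keypoints(kp):
    """Normalize each frame's keypoint vector by its mean and standard
    deviation (per-frame z-score; the exact axis is an assumption).
    kp: tensor of shape (seq_len, num_keypoints * 2) holding (x, y) pairs."""
    mean = kp.mean(dim=-1, keepdim=True)
    std = kp.std(dim=-1, keepdim=True)
    return (kp - mean) / (std + 1e-8)

class KeypointSeq2Seq(nn.Module):
    """Sequence-to-sequence translator: GRU encoder over keypoint frames,
    GRU decoder with additive attention over the encoder states."""
    def __init__(self, kp_dim, vocab_size, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(kp_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.attn = nn.Linear(hidden * 2, 1)      # additive attention score
        self.decoder = nn.GRUCell(hidden * 2, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, kp_seq, tgt_tokens):
        # kp_seq: (B, T, kp_dim); tgt_tokens: (B, T_out) for teacher forcing
        enc_states, h = self.encoder(kp_seq)       # enc_states: (B, T, H)
        h = h.squeeze(0)                           # decoder state: (B, H)
        logits = []
        for t in range(tgt_tokens.size(1)):
            # score every encoder state against the current decoder state
            q = h.unsqueeze(1).expand_as(enc_states)
            scores = self.attn(torch.cat([enc_states, q], dim=-1)).softmax(dim=1)
            context = (scores * enc_states).sum(dim=1)          # (B, H)
            x = torch.cat([self.embed(tgt_tokens[:, t]), context], dim=-1)
            h = self.decoder(x, h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)          # (B, T_out, vocab_size)
```

The sketch uses teacher forcing, so it applies to training; at inference time the decoder would instead be unrolled with greedy or beam-search decoding, feeding back its own predictions.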


Recently, stock market prediction has become one of the essential application areas of time-series forecasting research. Successful prediction of the stock market can better guide investors to maximize their profit and minimize their investment risk. Stock market data are highly complex, non-linear, and dynamic, which keeps the prediction task challenging. In recent times, deep learning has become one of the most popular machine learning approaches for time-series forecasting due to its temporal feature extraction capabilities. In this paper, we propose a novel Deep Learning-based Integrated Stacked Model (DISM) that integrates a 1D convolutional neural network and an LSTM recurrent neural network to extract the spatial and temporal features of stock market data. The proposed DISM is applied to forecast the stock market. We also compare DISM with single-structured stacked LSTM and 1D convolutional neural network models, as well as with some statistical models, and observe that DISM produces better results in terms of accuracy and stability.
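As a rough illustration of the integration the abstract describes, the sketch below stacks a 1D convolution (local, "spatial" patterns across a window) in front of an LSTM (temporal dependence) in PyTorch. The window length, feature count, channel widths, and single-step regression head are all assumptions; the abstract does not specify the authors' configuration.

```python
import torch
import torch.nn as nn

class DISMSketch(nn.Module):
    """Illustrative stand-in for an integrated 1D-CNN + LSTM stacked model.
    Layer sizes are assumptions, not the paper's configuration."""
    def __init__(self, n_features=5, conv_ch=32, lstm_hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),                 # halves the time axis
        )
        self.lstm = nn.LSTM(conv_ch, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)   # e.g. next-day closing price

    def forward(self, x):
        # x: (batch, window, n_features), e.g. windows of OHLCV data
        z = self.conv(x.transpose(1, 2))     # (batch, conv_ch, window // 2)
        z, _ = self.lstm(z.transpose(1, 2))  # (batch, window // 2, hidden)
        return self.head(z[:, -1])           # last time step -> prediction

model = DISMSketch()
dummy = torch.randn(8, 30, 5)                # 8 windows of 30 days, 5 features
print(model(dummy).shape)                    # torch.Size([8, 1])
```

The design point is the ordering: the convolution compresses each window into local feature maps first, so the LSTM models a shorter, denser sequence than the raw prices.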


2021 · Vol 6 (2) · pp. 128-133
Author(s): Ihor Koval

The problem of detecting objects in images using modern computer vision algorithms is considered. The main types of algorithms and methods for object detection based on convolutional neural networks are described. A comparative analysis and modeling of neural network algorithms for object detection in images is conducted. The results of testing neural network models with different architectures on the VOC2012 and COCO datasets are presented. The recognition accuracy is analyzed as a function of different training hyperparameters, and the object localization time is investigated across the different neural network architectures.
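A comparison of this kind can be reproduced in outline with pretrained detectors from torchvision (assuming torchvision ≥ 0.13 for the `weights` API). The sketch below times two COCO-pretrained architectures at opposite ends of the accuracy/latency trade-off on a dummy input; the model selection and timing protocol are illustrative assumptions, not the setup used in the article.

```python
import time
import torch
from torchvision.models import detection

# Two detectors spanning the accuracy/latency trade-off the article examines,
# with COCO-pretrained weights shipped by torchvision.
models = {
    "fasterrcnn_resnet50_fpn": detection.fasterrcnn_resnet50_fpn(weights="DEFAULT"),
    "ssdlite320_mobilenet_v3_large": detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT"),
}

image = [torch.rand(3, 480, 640)]    # stand-in for a VOC/COCO test image

for name, model in models.items():
    model.eval()
    with torch.no_grad():
        model(image)                 # warm-up pass
        start = time.perf_counter()
        for _ in range(10):
            model(image)
        ms = (time.perf_counter() - start) / 10 * 1000
    print(f"{name}: {ms:.1f} ms/image (CPU)")
```

On a GPU the same loop applies after moving the model and inputs with `.to("cuda")`; for a comparison like the article's, the relative ordering of the architectures matters more than the absolute timings.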

