Multi-View Gait Recognition Based on a Spatial-Temporal Deep Neural Network

IEEE Access, 2018, Vol. 6, pp. 57583-57596
Author(s): Suibing Tong, Yuzhuo Fu, Xinwei Yue, Hefei Ling

Author(s): A. Sokolova, A. Konushin

In this work we investigate the problem of recognizing people by their gait. For this task, we implement a deep learning approach that uses optical flow as the main source of motion information and combines neural feature extraction with an additional embedding of the descriptors to improve the representation. To find the best heuristics, we compare several deep neural network architectures as well as learning and classification strategies. The experiments were conducted on two popular gait recognition datasets, allowing us to examine their advantages and disadvantages and the transferability of the considered methods.
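The abstract does not specify the exact architectures or datasets, but the overall pipeline can be illustrated. Below is a minimal Python sketch, assuming OpenCV for dense optical flow and PyTorch for the embedding network; the layer sizes, the averaging of per-frame embeddings into a sequence descriptor, and the nearest-neighbour matcher are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: optical-flow-based gait descriptors with a small CNN embedder.
# Architecture, sizes and the nearest-neighbour matcher are illustrative choices.
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowEmbedder(nn.Module):
    """Maps a 2-channel optical-flow map to a fixed-size descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)   # L2-normalised per-frame embedding

def gait_descriptor(frames, model):
    """frames: list of consecutive grayscale uint8 images from one walking sequence."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        # Dense Farneback optical flow between consecutive frames (H x W x 2)
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(torch.from_numpy(flow).permute(2, 0, 1).float())
    batch = torch.stack(flows)                  # (T-1, 2, H, W)
    with torch.no_grad():
        emb = model(batch)                      # per-frame embeddings
    return F.normalize(emb.mean(0), dim=0)      # average into one sequence descriptor

def identify(probe, gallery):
    """Nearest-neighbour identification; gallery: {subject_id: descriptor}."""
    return max(gallery, key=lambda sid: torch.dot(probe, gallery[sid]).item())
```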


Author(s): Chaoran Liu, Wei Qi Yan

Gait recognition uses the distinctive postures of each individual to perform identity authentication. Existing methods use full-cycle gait images for feature extraction, but in real scenes problems such as occlusion and frame loss make a full-cycle gait sequence difficult to obtain. How to construct an efficient gait recognition framework from only a small number of gait images, and thereby improve both the efficiency and the accuracy of recognition, has therefore become a focus of gait recognition research. In this chapter, a deep neural network, CRBM+FC, is created. Based on the fusion of Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features, a method is proposed that learns gait recognition from the Gait Energy Image (GEI) to the output, together with a brand-new gait recognition algorithm based on the layered fusion of LBP and HOG. The chapter also proposes a feature learning network that uses an unsupervised convolutional restricted Boltzmann machine to train on the GEIs.
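As a rough illustration of the handcrafted-feature side of this pipeline, the sketch below builds a GEI from aligned silhouettes and fuses LBP and HOG descriptors by concatenation, assuming scikit-image and scikit-learn. The chapter's unsupervised CRBM+FC network is not reproduced here; a linear SVM stands in as the classifier purely for illustration.

```python
# Minimal sketch: GEI construction plus LBP/HOG feature-level fusion.
# The CRBM+FC network from the chapter is replaced by an SVM for illustration.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def gait_energy_image(silhouettes):
    """Average a list of aligned binary silhouettes (H x W, values 0/1) over one cycle."""
    return np.mean(np.stack(silhouettes, axis=0), axis=0)

def lbp_hog_features(gei, P=8, R=1.0):
    """Fuse a uniform-LBP histogram with a HOG descriptor computed on the GEI."""
    lbp = local_binary_pattern(gei, P, R, method="uniform")
    n_bins = P + 2                                # uniform LBP has P+2 distinct codes
    lbp_hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    hog_vec = hog(gei, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])    # feature-level (layered) fusion

# Example usage (train_sequences / train_labels / probe_seq are placeholders):
# X = np.stack([lbp_hog_features(gait_energy_image(seq)) for seq in train_sequences])
# clf = SVC(kernel="linear").fit(X, train_labels)
# pred = clf.predict(lbp_hog_features(gait_energy_image(probe_seq))[None, :])
```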


Author(s): David T. Wang, Brady Williamson, Thomas Eluvathingal, Bruce Mahoney, Jennifer Scheler

Author(s): P.L. Nikolaev

This article deals with a method for binary classification of images that contain small text. The classification is based on the fact that the text can have one of two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, in which case the image must be rotated before the text can be read. Such text is found on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing it directly. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
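A minimal sketch of this idea is given below, assuming PIL for rendering synthetic text samples and PyTorch for the classifier; the rendering details, network layers, and training loop are illustrative assumptions rather than the article's actual configuration.

```python
# Minimal sketch: render synthetic text images, rotate half of them by 180 degrees,
# and train a small CNN to predict the orientation (0 = upright, 1 = rotated).
import random, string
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def synth_sample(size=64):
    """Return (1 x H x W float tensor, label): 0 = upright text, 1 = rotated 180 degrees."""
    img = Image.new("L", (size, size), color=255)
    text = "".join(random.choices(string.ascii_letters, k=random.randint(3, 8)))
    ImageDraw.Draw(img).text((5, size // 2 - 5), text, fill=0)  # default PIL font
    label = random.randint(0, 1)
    if label == 1:
        img = img.rotate(180)
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0).unsqueeze(0)
    return x, label

class OrientationNet(nn.Module):
    """Tiny CNN for the binary upright-vs-rotated decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )
    def forward(self, x):
        return self.net(x)

model = OrientationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):                          # short demo training loop
    batch = [synth_sample() for _ in range(32)]
    x = torch.stack([b[0] for b in batch])
    y = torch.tensor([b[1] for b in batch])
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```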


2020
Author(s): Ala Supriya, Chiluka Venkat, Aliketti Deepak, GV Hari Prasad
