A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition

2020
Author(s):  
Habiba Arshad ◽  
Muhammad Attique Khan ◽  
Muhammad Irfan Sharif ◽  
Mussarat Yasmin ◽  
João Manuel R. S. Tavares ◽  
...  
Author(s):  
G. Merlin Linda ◽  
G. Themozhi ◽  
Sudheer Reddy Bandi

In recent decades, gait recognition has garnered considerable attention from researchers. Gait recognition means verifying or identifying individuals by their walking style. Gait supports surveillance systems by identifying people at a distance from the camera, and it can be used in numerous computer vision and surveillance applications. This paper proposes a Color-mapped Contour Gait Image (CCGI) to handle the varying factors of Cross-View Gait Recognition (CVGR). First, the contour in each gait image sequence is extracted using the Combination of Receptive Fields (CORF) contour tracing algorithm, which computes the contour image using a Difference of Gaussians (DoG) filter and hysteresis thresholding. Hysteresis thresholding retains weak edges from the total pixel information only where they connect to strong ones, providing smoother, better-balanced features than a single absolute threshold. Second, CCGI encodes spatial and temporal information via color mapping to obtain regularized contour images with fewer outliers. Based on the front view of a human walking pattern, the appearance variation across views decreases drastically as the view angle changes. The proposed work evaluates the performance of CVGR using a deep Convolutional Neural Network (CNN) framework, with CCGI serving as the gait feature for comparing and evaluating the robustness of the proposed model. Experiments conducted on the CASIA-B database compare the proposed method with previous methods; it achieves 94.65% accuracy with a better recognition rate.
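The DoG-plus-hysteresis step described above can be sketched as follows. This is a minimal illustration, not the authors' CORF implementation: the function name, sigma values, and thresholds are assumptions chosen for the example.

```python
import numpy as np
from scipy import ndimage

def dog_contour(image, sigma1=1.0, sigma2=1.6, low=0.02, high=0.08):
    """Difference-of-Gaussians edge response followed by hysteresis
    thresholding (illustrative sketch, not the CORF algorithm itself)."""
    g1 = ndimage.gaussian_filter(image.astype(float), sigma1)
    g2 = ndimage.gaussian_filter(image.astype(float), sigma2)
    response = np.abs(g1 - g2)       # band-pass edge response

    strong = response >= high        # confident edge pixels
    weak = response >= low           # candidates, kept only if linked to strong pixels

    # Hysteresis: keep each connected weak component that touches a strong pixel
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep) & weak
```

On a binary silhouette, this yields a closed contour band around the figure while suppressing isolated low-response noise, which is the "well-balanced" behavior the abstract attributes to hysteresis thresholding.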


2020
Vol 2020 (4)
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses by their symmetric beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) was trained by transfer learning from the pre-trained GoogLeNet. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight, and the corresponding type of lens with its technical parameters, using this new CNN-based model.

