surveillance videos
Recently Published Documents

TOTAL DOCUMENTS: 521 (FIVE YEARS 200)
H-INDEX: 23 (FIVE YEARS 6)

Sensors ◽ 2021 ◽ Vol 21 (24) ◽ pp. 8291
Author(s): Shabana Habib, Altaf Hussain, Waleed Albattah, Muhammad Islam, Sheroz Khan, ...

Background and motivation: Every year, millions of Muslims from around the world travel to Mecca to perform the Hajj. To maintain the security of the pilgrims, the Saudi government has installed about 5000 closed-circuit television (CCTV) cameras to monitor crowd activity efficiently.
Problem: These cameras generate an enormous amount of visual data that is monitored manually or offline, which requires substantial human resources for effective tracking. There is therefore an urgent need for an intelligent, automatic system that can monitor crowds efficiently and identify abnormal activity.
Method: Existing methods are unable to extract discriminative features from surveillance videos because they rely on pre-trained weights of different architectures. This paper develops a lightweight approach for accurately identifying violent activity in surveillance environments. In the first step of the proposed framework, a lightweight CNN model trained on our own pilgrim dataset detects pilgrims in the surveillance footage. In the second step, the preprocessed salient frames are passed to a lightweight CNN model for spatial feature extraction. In the third step, a Long Short-Term Memory (LSTM) network extracts temporal features. In the final step, when violent activity or an accident is detected, the proposed system generates a real-time alarm to inform law enforcement agencies so that appropriate action can be taken, helping to avoid accidents and stampedes.
Results: We conducted multiple experiments on two publicly available violent-activity datasets, the Surveillance Fight and Hockey Fight datasets; the proposed model achieved accuracies of 81.05% and 98.00%, respectively.
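The pipeline described in this abstract (per-frame detection, CNN spatial features, LSTM temporal features, then an alarm) can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' implementation: every layer size, module name, clip shape and the alarm threshold are assumptions made for the sketch.

```python
# Minimal sketch (PyTorch) of a lightweight CNN + LSTM violence-detection
# pipeline of the kind the abstract describes. All layer sizes, names and the
# alarm threshold are illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn


class LightweightCNN(nn.Module):
    """Per-frame spatial feature extractor (step 2 of the described pipeline)."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feature_dim)

    def forward(self, x):                        # x: (batch, 3, H, W)
        return self.fc(self.features(x).flatten(1))


class ViolenceDetector(nn.Module):
    """CNN features per frame -> LSTM over time -> violent/non-violent score."""

    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.cnn = LightweightCNN(feature_dim)
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)  # step 3: temporal features
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, clip):                     # clip: (batch, T, 3, H, W)
        b, t = clip.shape[:2]
        frame_feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(frame_feats)
        return torch.sigmoid(self.classifier(h_n[-1]))  # probability of violent activity


if __name__ == "__main__":
    model = ViolenceDetector()
    clip = torch.randn(1, 16, 3, 112, 112)       # 16 salient frames from the detection stage
    if model(clip).item() > 0.5:                 # step 4: raise an alarm above a chosen threshold
        print("ALERT: possible violent activity")
```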


2021 ◽ pp. 113-125
Author(s): Roa’a M. Alairaji, Ibtisam A. Aljazaery, Haider TH. Salim ALRikabi

2021
Author(s): Chun-Lung Yang, Tsung-Hsuan Wu, Shang-Hong Lai

2021 ◽ Vol ahead-of-print (ahead-of-print)
Author(s): G. Merlin Linda, N.V.S. Sree Rathna Lakshmi, N. Senthil Murugan, Rajendra Prasad Mahapatra, V. Muthukumaran, ...

Purpose: The paper aims to introduce an intelligent recognition system for viewpoint variations of gait and speech. It proposes a convolutional neural network-based capsule network (CNN-CapsNet) model and outlines the performance of the system in recognizing gait and speech variations. The proposed intelligent system focuses mainly on the relative spatial hierarchies between gait features in the entities of the image, which are weakened by translational invariance in sub-sampling, and on speech variations.
Design/methodology/approach: The proposed CNN-CapsNet automatically learns feature representations with a CNN and uses capsule vectors as neurons to encode all the spatial information of an image, remaining equivariant to changes in viewpoint. The proposed study resolves the discrepancies caused by cofactors and by viewpoint differences in gait recognition using the CNN-CapsNet model.
Findings: This research work provides signal recognition, biometric gait recognition and sound/speech analysis. Empirical evaluations are conducted on three scenarios, namely fixed-view, cross-view and multi-view conditions. The main parameters for gait recognition are walking speed, change of clothes, subjects walking while carrying objects, and light intensity.
Research limitations/implications: The proposed CNN-CapsNet has some limitations when considered for detecting walking targets from surveillance videos with multimodal fusion approaches that use hardware sensor devices.
Practical implications: This research work includes detecting walking targets from surveillance videos with multimodal fusion approaches that use hardware sensor devices. It can also act as a prerequisite tool to analyze, identify, detect and verify malware practices.
Originality/value: The proposed work performs better for the recognition of gait and speech when compared with other techniques.
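For readers unfamiliar with capsule networks, the CNN-plus-capsule idea this abstract refers to can be sketched as below, again in PyTorch. The layer sizes, the class names and the crude length-based read-out are assumptions for illustration; dynamic routing to class capsules, which a full CapsNet would use, is omitted, and this is not the authors' CNN-CapsNet.

```python
# Minimal sketch (PyTorch) of a CNN backbone feeding capsule vectors.
# Layer sizes are illustrative; routing between capsule layers is omitted.
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    """Capsule nonlinearity: keeps vector orientation, maps length into [0, 1)."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)


class PrimaryCapsules(nn.Module):
    """Convolutional features regrouped into capsule vectors (pose-aware units)."""

    def __init__(self, in_channels=64, caps_dim=8, caps_channels=16):
        super().__init__()
        self.caps_dim = caps_dim
        self.conv = nn.Conv2d(in_channels, caps_channels * caps_dim,
                              kernel_size=3, stride=2, padding=1)

    def forward(self, x):                        # x: (batch, C, H, W)
        u = self.conv(x)                         # (batch, caps_channels*caps_dim, H', W')
        b, _, h, w = u.shape
        u = u.view(b, -1, self.caps_dim, h, w)   # split channels into capsule groups
        u = u.permute(0, 1, 3, 4, 2).reshape(b, -1, self.caps_dim)
        return squash(u)                         # (batch, num_capsules, caps_dim)


class CNNCapsSketch(nn.Module):
    """CNN backbone -> primary capsules -> class scores from capsule activations."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.primary = PrimaryCapsules(in_channels=64)
        # Crude read-out over capsule lengths; assumes 64x64 RGB input.
        # A real CapsNet would route primary capsules to class capsules instead.
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        caps = self.primary(self.backbone(x))    # (batch, num_capsules, 8)
        lengths = caps.norm(dim=-1)              # capsule activation = vector length
        return self.classifier(lengths)


if __name__ == "__main__":
    model = CNNCapsSketch()
    scores = model(torch.randn(2, 3, 64, 64))
    print(scores.shape)                          # torch.Size([2, 10])
```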

