An optimal video summarization of surveillance systems using LFOB-COA with deep features and RBLSTM

Author(s):  
D. Minola Davids ◽  
C. Seldev Christopher

The visual data obtained from single-camera and multi-view surveillance camera networks is growing exponentially every day. The major task in video summarization is identifying the important shots that faithfully represent the original video. For efficient video summarization of surveillance systems, this paper proposes the LFOB-COA optimization algorithm. The proposed method consists of five steps: data collection, pre-processing, deep feature extraction (FE), shot segmentation using JSFCM, and classification using a Rectified Linear Unit activated BLSTM together with LFOB-COA. Finally, a post-processing step is applied. To demonstrate the proposed method's effectiveness, the results are compared with existing methods.
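A minimal sketch of the classification stage described above, assuming shot-level deep features are already extracted; the layer sizes, optimizer, and decision threshold are illustrative assumptions rather than the authors' settings, and the LFOB-COA optimization and JSFCM segmentation steps are not reproduced here:

```python
# Sketch of a ReLU-activated bidirectional LSTM (BLSTM) shot-importance classifier.
# Input: a sequence of per-shot deep feature vectors; output: keep/discard scores.
import numpy as np
import tensorflow as tf

num_shots, feature_dim = 200, 4096                      # assumed shapes for demonstration
features = np.random.rand(1, num_shots, feature_dim).astype("float32")
labels = np.random.randint(0, 2, size=(1, num_shots, 1))  # 1 = shot belongs in summary

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, feature_dim)),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(128, activation="relu", return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=2, verbose=0)

# Shots whose predicted importance exceeds a threshold form the summary.
scores = model.predict(features, verbose=0)[0, :, 0]
summary_idx = np.where(scores > 0.5)[0]
```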

2021 ◽  
Vol 11 (11) ◽  
pp. 5260
Author(s):  
Theodoros Psallidas ◽  
Panagiotis Koromilas ◽  
Theodoros Giannakopoulos ◽  
Evaggelos Spyrou

The exponential growth of user-generated content has increased the need for efficient video summarization schemes. However, most approaches underestimate the power of aural features and are designed to work mainly on commercial/professional videos. In this work, we present an approach that uses both aural and visual features to create video summaries from user-generated videos. Our approach produces dynamic video summaries, that is, summaries comprising the most “important” parts of the original video, arranged so as to preserve their temporal order. We use supervised knowledge from both of the aforementioned modalities and train a binary classifier that learns to recognize the important parts of videos. Moreover, we present a novel user-generated dataset containing videos from several categories. Every one-second segment of each video in our dataset has been annotated by more than three annotators as important or not. We evaluate our approach using several classification strategies based on audio, video, and fused features. Our experimental results illustrate the potential of our approach.
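An illustrative sketch of the fused-feature classification idea: per-second audio and visual feature vectors are concatenated (early fusion) and fed to a binary "important / not important" classifier. The feature dimensions, the SVM classifier, and the synthetic data are assumptions for demonstration only:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

n_segments, audio_dim, visual_dim = 1000, 68, 88         # assumed sizes
audio_feats = np.random.rand(n_segments, audio_dim)
visual_feats = np.random.rand(n_segments, visual_dim)
labels = np.random.randint(0, 2, size=n_segments)        # per-second importance annotations

fused = np.hstack([audio_feats, visual_feats])            # feature-level (early) fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("F1 on held-out segments:", f1_score(y_te, clf.predict(X_te)))

# Consecutive segments predicted as "important" are concatenated in temporal
# order to produce the dynamic summary (video skim).
```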


Author(s):  
Erik Carlbaum ◽  
Sina Sharif Mansouri ◽  
Christoforos Kanellakis ◽  
Anton Koval ◽  
George Nikolakopoulos

2021 ◽  
pp. 1063293X2198894
Author(s):  
Prabira Kumar Sethy ◽  
Santi Kumari Behera ◽  
Nithiyakanthan Kannan ◽  
Sridevi Narayanan ◽  
Chanki Pandey

Rice is an essential staple food worldwide, providing 21% of global human per capita energy and 15% of per capita protein. Asia accounts for 60% of the world's population, about 92% of the world's rice production, and 90% of global rice consumption. With the growing population, the demand for rice has increased, so farming productivity needs to be enhanced by introducing new technology. Deep learning and IoT are active research topics in various fields. This paper proposes a setup combining deep learning and IoT for remote monitoring of paddy fields. The VGG16 pre-trained network is used for identifying paddy leaf diseases and estimating nitrogen status. Two strategies are applied to identify images: transfer learning and deep feature extraction. The deep feature extraction approach is combined with a support vector machine (SVM) to classify images. The transfer learning approach with VGG16 achieves 79.86% and 84.88% accuracy for identifying four types of leaf disease and predicting nitrogen status, respectively. The deep features of VGG16 combined with an SVM achieve accuracies of 97.31% and 99.02%, respectively, on the same two tasks. In addition, a framework is proposed for remote monitoring of paddy fields based on IoT and deep learning. The advantage of the proposed prototype is that it controls temperature and humidity like the state of the art and can additionally monitor two further aspects: nitrogen status and disease detection.
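A short sketch of the deep-feature-extraction strategy described above: VGG16 with ImageNet weights and the classification head removed produces one feature vector per leaf image, and an SVM classifies the result. The class count, image sizes, and synthetic data are assumptions standing in for the paddy-leaf dataset:

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# VGG16 as a fixed feature extractor: global-average-pooled conv features (512-d).
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return base.predict(x, verbose=0)

# Dummy data standing in for the leaf-image set (4 disease classes assumed).
train_imgs = np.random.rand(40, 224, 224, 3) * 255
train_labels = np.random.randint(0, 4, size=40)

svm = SVC(kernel="linear").fit(extract_features(train_imgs), train_labels)
pred = svm.predict(extract_features(np.random.rand(4, 224, 224, 3) * 255))
```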


A video surveillance system uses video cameras to capture images and videos that can be compressed, stored, and sent to a location with a limited set of monitors. Nowadays, public places such as banks, educational institutions, offices, and hospitals are equipped with multiple surveillance cameras with overlapping fields of view for security and environment monitoring. Video summarization is a technique for generating a summary of the entire video content, either as still images or as a video skim. The summarized video should be shorter than the original video while covering the maximum amount of information from it. Video summarization studies concentrating on monocular videos cannot be applied directly to multi-view videos because of the redundancy across views. Generating summaries for surveillance videos is more challenging because videos captured by surveillance cameras are long, contain uninteresting events, record the same scene from different views (leading to inter-view dependencies), and exhibit variations in illumination. In this paper, we present a survey of the research on video summarization techniques for videos captured from multiple views. The generated summaries can be used for analyzing post-accident scenarios and identifying suspicious events or theft in public places, supporting crime departments in their investigations.
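A minimal illustration of still-image (keyframe) summarization for a single surveillance clip: frames whose grayscale histogram differs strongly from the previously kept frame become keyframes. The threshold and the input file name are assumptions, and multi-view methods surveyed here must additionally remove redundancy across cameras:

```python
import cv2

def extract_keyframes(path, threshold=0.4):
    """Keep a frame as a keyframe when it differs enough from the last kept frame."""
    cap = cv2.VideoCapture(path)
    keyframes, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or cv2.compareHist(prev_hist, hist,
                                                cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keyframes.append(frame)
            prev_hist = hist
    cap.release()
    return keyframes

frames = extract_keyframes("camera1.mp4")   # hypothetical input clip from one camera
```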

