Detection of the Periodicity of Human Actions for Efficient Video Summarization

Author(s):  
Okan Candir ◽  
M. Elif Karsligil
2013 ◽  
Vol 4 ◽  
pp. 78-84
Author(s):  
Walid Barhoumi ◽  
Ezzeddine Zagrouba

Author(s):  
Hesham Farouk ◽  
Kamal ElDahshan ◽  
Amr Abd Elawed Abozeid

In the context of mobile computing and multimedia processing, video summarization plays an important role in video browsing, streaming, indexing, and storage. In this paper, an effective and efficient video summarization approach for mobile devices is proposed. The goal of this approach is to generate a video summary (static and dynamic) based on a Visual Attention Model (VAM) and a new Fast Directional Motion Intensity Estimation (FDMIE) algorithm for mobile devices. The VAM simulates the Human Vision System (HVS) to extract the salient areas of the video content that attract the most attention. The evaluation results demonstrate an effectiveness rate of up to 87% with respect to manually generated summaries and state-of-the-art approaches. Moreover, the efficiency of the proposed approach makes it suitable for online and mobile applications.
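The static-summary side of such an approach can be illustrated with a short sketch. The scoring itself is faked here: the abstract derives per-frame attention values from the VAM and the FDMIE motion estimator, whereas this hypothetical `select_keyframes` helper only shows the downstream step of picking high-attention, temporally spread keyframes.

```python
# Hypothetical sketch of attention-driven static summarization.
# Per-frame attention scores are assumed to come from a Visual
# Attention Model; here they are supplied directly as a list.

def select_keyframes(attention_scores, num_keyframes=3, min_gap=2):
    """Pick the highest-attention frames, enforcing a minimum
    temporal gap so one scene does not dominate the summary."""
    ranked = sorted(range(len(attention_scores)),
                    key=lambda i: attention_scores[i], reverse=True)
    chosen = []
    for idx in ranked:
        if all(abs(idx - c) >= min_gap for c in chosen):
            chosen.append(idx)
        if len(chosen) == num_keyframes:
            break
    return sorted(chosen)

scores = [0.1, 0.9, 0.85, 0.2, 0.7, 0.95, 0.3]
print(select_keyframes(scores))  # [1, 3, 5]
```

The gap constraint is one simple way to trade off salience against temporal coverage; a real system would tune it to the video's frame rate.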


2021 ◽  
Vol 11 (11) ◽  
pp. 5260
Author(s):  
Theodoros Psallidas ◽  
Panagiotis Koromilas ◽  
Theodoros Giannakopoulos ◽  
Evaggelos Spyrou

The exponential growth of user-generated content has increased the need for efficient video summarization schemes. However, most approaches underestimate the power of aural features, and they are designed to work mainly on commercial/professional videos. In this work, we present an approach that uses both aural and visual features in order to create video summaries from user-generated videos. Our approach produces dynamic video summaries, that is, summaries comprising the most "important" parts of the original video, arranged so as to preserve their temporal order. We use supervised knowledge from both of the aforementioned modalities and train a binary classifier, which learns to recognize the important parts of videos. Moreover, we present a novel user-generated dataset which contains videos from several categories. Every one-second segment of each video in our dataset has been annotated by more than three annotators as being important or not. We evaluate our approach using several classification strategies based on audio, video, and fused features. Our experimental results illustrate the potential of our approach.
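The final assembly step described above can be sketched briefly. The per-second importance labels are faked here: in the paper they come from a supervised binary classifier over fused audio-visual features, whereas this hypothetical `summary_segments` helper only shows how consecutive important seconds are merged into an order-preserving dynamic summary.

```python
# Sketch of building a dynamic summary from per-second importance
# labels (1 = important, 0 = not). Classifier outputs are assumed
# given; only the merging of consecutive seconds is shown.

def summary_segments(labels):
    """Merge runs of 1-labeled seconds into (start, end) segments,
    half-open in seconds, preserving temporal order."""
    segments, start = [], None
    for t, lab in enumerate(labels):
        if lab == 1 and start is None:
            start = t
        elif lab == 0 and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:  # close a run that reaches the video end
        segments.append((start, len(labels)))
    return segments

labels = [0, 1, 1, 0, 0, 1, 1, 1, 0]
print(summary_segments(labels))  # [(1, 3), (5, 8)]
```

Because segments are emitted in scan order, the summary automatically preserves the temporal ordering the abstract calls for.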


Author(s):  
D. Minola Davids ◽  
C. Seldev Christopher

The visual data attained from surveillance single-camera or multi-view camera networks is increasing exponentially every day. Identifying the important shots in a given video that faithfully represent the original video is the major task in video summarization. For efficient video summarization of surveillance systems, an optimization algorithm, LFOB-COA, is proposed in this paper. The proposed method comprises five steps: data collection, pre-processing, deep feature extraction (FE), shot segmentation using JSFCM, and classification using a Rectified Linear Unit-activated BLSTM together with LFOB-COA. Finally, a post-processing step is applied. To demonstrate the proposed method's effectiveness, the results are contrasted with existing methods.
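The shot-segmentation step in a pipeline like this can be illustrated with a minimal stand-in. The abstract's method clusters deep features with JSFCM; the hypothetical `shot_boundaries` helper below instead uses a plain frame-difference threshold, which is only an assumption made to show where shot boundaries enter the pipeline.

```python
# Hypothetical shot-boundary sketch. A simple Euclidean distance
# between consecutive frame feature vectors stands in for the
# JSFCM clustering described in the abstract.

def shot_boundaries(frame_features, threshold=0.5):
    """Return indices where consecutive frame features differ
    by more than the threshold, marking a new shot."""
    boundaries = []
    for i in range(1, len(frame_features)):
        dist = sum((a - b) ** 2
                   for a, b in zip(frame_features[i - 1],
                                   frame_features[i])) ** 0.5
        if dist > threshold:
            boundaries.append(i)
    return boundaries

# Four toy 2-D feature vectors: a jump between frames 1 and 2.
features = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0)]
print(shot_boundaries(features))  # [2]
```

Each detected boundary would then delimit a shot passed on to the classification and optimization stages.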

