The impact of colour visual attention for video summarization

2021 ◽  
Author(s):  
Yiming Qian

A High Definition visual-attention-based video summarization algorithm is proposed to extract feature frames and create a video summary. Specifically, the proposed framework is used to establish whether incorporating visual attention mechanisms into the processing pipeline has a measurable impact on the constructed summaries. The algorithm was assessed against manually generated key-frame summaries for test datasets from the Open Video Dataset (www.open-video.org). Up to 68.1% of the frames selected by the algorithm agreed with the manual frame summaries, depending on the category and length of the video. In particular, including colour-attention models in the summarization framework has a clear impact on the agreement rate with the ground truth, and the proposed colour-attention model achieves stronger agreement with human-selected summaries than other models from the literature.
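The abstract does not specify the attention model itself, so the following is only a minimal sketch of the general idea: score each frame by a simple colour-contrast measure and keep frames where the score changes sharply. The scoring function and threshold are hypothetical stand-ins, not the paper's method.

```python
# Toy colour-attention key-frame selector. Frames are flat lists of
# (r, g, b) pixels; the contrast score and threshold are illustrative.

def colour_attention_score(frame):
    """Average colour contrast: how far each pixel's colour deviates
    from the frame's mean colour, summed over the three channels."""
    n = len(frame)
    mean = [sum(p[c] for p in frame) / n for c in range(3)]
    return sum(
        sum(abs(p[c] - mean[c]) for c in range(3)) for p in frame
    ) / n

def select_key_frames(frames, threshold=30.0):
    """Keep a frame whenever its attention score jumps by more than
    `threshold` relative to the previously kept frame."""
    keys, last = [], None
    for i, frame in enumerate(frames):
        score = colour_attention_score(frame)
        if last is None or abs(score - last) > threshold:
            keys.append(i)
            last = score
    return keys

# Two flat grey frames, then a high-contrast chequer pattern, then grey.
grey = [(128, 128, 128)] * 4
chequer = [(0, 0, 0), (255, 255, 255), (0, 0, 0), (255, 255, 255)]
print(select_key_frames([grey, grey, chequer, chequer, grey]))  # [0, 2, 4]
```

A real pipeline would operate on decoded HD frames and a learned or biologically motivated saliency map rather than this single scalar, but the select-on-change structure is the same.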



Author(s):  
Yu Zhang ◽  
Ju Liu ◽  
Xiaoxi Liu ◽  
Xuesong Gao

In this manuscript, the authors present a keyshot-based supervised video summarization method in which feature fusion and LSTM networks are used for summarization. The framework comprises three parts: 1) video summarization is formulated as a sequence-to-sequence problem, predicting the importance score of video content from the video feature sequence; 2) visual and textual features are considered simultaneously, and the deeply fused multimodal features are summarized with a recurrent encoder-decoder architecture built on bi-directional LSTMs; 3) most importantly, to train the supervised summarization framework, the number of users who selected the current video clip for their final summary is adopted as the importance score and ground truth. Comparisons are performed with state-of-the-art methods and different variants of FLSum and T-FLSum. The F-scores and rank correlation coefficients on TVSum and SumMe show the outstanding performance of the proposed method.
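Part 3 of the framework, turning per-clip user-selection counts into training targets, can be sketched in a few lines. The normalization and the greedy keyshot selection below are illustrative choices, not the paper's exact procedure:

```python
# Hedged sketch: a clip's importance score is the fraction of users
# who kept it in their summary; keyshots are then chosen greedily
# under a length budget (a simple stand-in for knapsack selection).

def importance_scores(selection_counts, n_users):
    """Normalize per-clip user-selection counts into [0, 1] scores."""
    return [c / n_users for c in selection_counts]

def pick_keyshots(scores, durations, budget):
    """Greedily pick clips by score-per-second until the summary
    length budget is reached; return indices in temporal order."""
    order = sorted(range(len(scores)),
                   key=lambda i: scores[i] / durations[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + durations[i] <= budget:
            chosen.append(i)
            used += durations[i]
    return sorted(chosen)

# 15 of 20 annotators kept clip 1; clips are 5 s each, budget is 10 s.
scores = importance_scores([4, 15, 9, 2], n_users=20)
print(scores)                                          # [0.2, 0.75, 0.45, 0.1]
print(pick_keyshots(scores, [5, 5, 5, 5], budget=10))  # [1, 2]
```

In the paper the scores are predicted by the BiLSTM encoder-decoder at test time; the user counts serve only as supervision.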


Author(s):  
Tao Wang ◽  
Yue Gao ◽  
Patricia Wang ◽  
Wei Hu ◽  
Jianguo Li ◽  
...  

A video summary is very important for users to grasp a whole video’s content quickly for efficient browsing and editing. In this chapter, we propose a novel video summarization approach based on redundancy removal and content ranking. First, through video parsing and cast indexing, the approach constructs a storyboard to let users know the main scenes and the main actors in the video. It then removes redundant frames to generate a “story-constraint summary” by key-frame clustering and repetitive-segment detection. To shorten the summary to a target length, a “time-constraint summary” is constructed by importance-factor-based content ranking. Extensive experiments are carried out on TV series, movies, and cartoons, and the good results demonstrate the effectiveness of the proposed method.
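The two-stage idea above can be sketched as follows. The histogram distance, the threshold, and the importance values are hypothetical placeholders for the chapter's actual clustering and ranking features:

```python
# Stage 1 ("story-constraint"): drop near-duplicate key frames.
# Stage 2 ("time-constraint"): rank survivors and cut to a target count.

def l1_distance(h1, h2):
    """L1 distance between two colour histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def remove_redundant(frames, hist, min_dist=0.5):
    """Keep a frame only if its histogram differs enough from every
    frame already kept (a simple clustering stand-in)."""
    kept = []
    for i in frames:
        if all(l1_distance(hist[i], hist[j]) >= min_dist for j in kept):
            kept.append(i)
    return kept

def time_constraint(frames, importance, target_len):
    """Rank kept frames by importance, keep the top target_len,
    and restore temporal order."""
    top = sorted(frames, key=lambda i: importance[i], reverse=True)[:target_len]
    return sorted(top)

hist = {0: [1, 0], 1: [1, 0.1], 2: [0, 1], 3: [0.5, 0.5]}
importance = {0: 0.9, 2: 0.6, 3: 0.4}
story = remove_redundant([0, 1, 2, 3], hist)   # frame 1 duplicates frame 0
print(story)                                   # [0, 2, 3]
print(time_constraint(story, importance, 2))   # [0, 2]
```

The chapter additionally uses repetitive-segment detection (e.g. recurring openings in TV series) before clustering, which this sketch omits.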


Author(s):  
Hesham Farouk ◽  
Kamal ElDahshan ◽  
Amr Abd Elawed Abozeid

In the context of mobile computing and multimedia processing, video summarization plays an important role in video browsing, streaming, indexing, and storage. In this paper, an effective and efficient video summarization approach for mobile devices is proposed. The goal of this approach is to generate a video summary (static and dynamic) based on a Visual Attention Model (VAM) and a new Fast Directional Motion Intensity Estimation (FDMIE) algorithm for mobile devices. The VAM simulates the Human Vision System (HVS) to extract the salient areas with the highest attention values from the video content. The evaluation results demonstrate an effectiveness rate of up to 87% with respect to the manually generated summary and the state-of-the-art approaches. Moreover, the efficiency of the proposed approach makes it suitable for online and mobile applications.
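The FDMIE algorithm itself is not specified in this abstract, so the sketch below is only a hedged stand-in: plain frame-difference motion intensity, the simplest motion signal an attention model could threshold to pick segments for the dynamic summary on a constrained mobile device:

```python
# Toy motion-intensity estimator. Frames are flat lists of luminance
# values; the threshold is an illustrative choice, not the paper's.

def motion_intensity(prev, curr):
    """Mean absolute luminance difference between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def dynamic_segments(frames, threshold=10.0):
    """Indices of frames whose motion intensity versus the previous
    frame exceeds the threshold (candidate dynamic-summary frames)."""
    return [i for i in range(1, len(frames))
            if motion_intensity(frames[i - 1], frames[i]) > threshold]

static = [50] * 8    # flat grey frame
bright = [200] * 8   # sudden bright frame
print(dynamic_segments([static, static, bright, bright, static]))  # [2, 4]
```

A directional estimator such as FDMIE would additionally search a small set of motion directions per block, trading some accuracy for the speed needed on mobile hardware.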


2009 ◽  
Vol 20 (12) ◽  
pp. 3240-3253 ◽  
Author(s):  
Guo-Min ZHANG ◽  
Jian-Ping YIN ◽  
En ZHU ◽  
Ling MAO
