Static Video Summarization
Recently Published Documents

TOTAL DOCUMENTS: 21 (FIVE YEARS: 9)
H-INDEX: 5 (FIVE YEARS: 2)

2021
Author(s): Yunyun Sun, Peng Li, Yutong Liu, Zhaohui Jiang

Abstract: The numerous limitations of shot-based and content-based key-frame extraction approaches have encouraged the development of cluster-based methods. This work proposes OTMW (Optimal Threshold and Maximum Weight clustering), a novel cluster-based key-frame extraction method. A video feature dataset is constructed by computing color, texture, and information-complexity features of the frame images. An optimization function, constrained by fidelity and ratio measure parameters, is developed to compute the optimal clustering threshold. We evaluate the proposed method empirically on multi-type video key-frame extraction tasks and compare it with popular cluster-based methods, including Mean-shift, DBSCAN, GMM, and K-means. OTMW achieves an average fidelity of 96.12 and an average ratio of 97.13. Experimental results demonstrate that OTMW delivers higher fidelity and ratio while remaining competitive with the other cluster-based methods, and can accurately extract key frames from multi-type videos.
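The abstract does not spell out the OTMW threshold optimization, but the general cluster-based pipeline it builds on can be sketched: compute a feature vector per frame, cluster the vectors, and take the frame nearest each cluster centroid as a key frame. The sketch below uses plain k-means over synthetic features as a stand-in; the function name, feature dimensions, and toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def kmeans_keyframes(features, k, iters=50):
    """Cluster frame feature vectors with plain k-means and return one
    keyframe index per cluster: the frame closest to its centroid."""
    n = len(features)
    # deterministic init: evenly spaced frames as starting centroids
    centroids = features[np.linspace(0, n - 1, k).astype(int)]
    for _ in range(iters):
        # assign each frame to the nearest centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its member frames
        for c in range(k):
            members = features[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    # keyframe per cluster = frame nearest the final centroid
    keyframes = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            d = np.linalg.norm(features[idx] - centroids[c], axis=1)
            keyframes.append(int(idx[d.argmin()]))
    return sorted(keyframes)

# toy "video": 60 frames whose 8-dim features fall into three distinct scenes
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(loc=m, scale=0.1, size=(20, 8)) for m in (0.0, 1.0, 2.0)])
print(kmeans_keyframes(feats, k=3))  # one representative index per scene
```

With three well-separated scenes, the method returns one frame index from each; OTMW's contribution is choosing the clustering threshold (and hence the number of key frames) automatically rather than fixing k by hand.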


2020
Vol 9 (2), pp. 1030-1032

Video summarization plays an important role in many fields, such as video indexing, browsing, compression, and analysis. Key-frame extraction is one of the fundamental steps in video structure analysis: key frames are the meaningful frames of a video, and the set of key frames supports the summary. In the proposed model, we present an approach to key-frame extraction and static video summarization based on a Convolutional Neural Network (CNN). First, the video is converted to frames. Redundancy-elimination techniques then reduce duplication among the frames. Next, key frames are extracted using the CNN model, and the extracted key frames form the video summary.
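The redundancy-elimination step can be illustrated with a simple scheme: keep a frame only when its grayscale histogram differs sufficiently from the last kept frame. The abstract does not specify the technique, so the L1 distance, threshold, and bin count below are hypothetical choices for a minimal sketch.

```python
import numpy as np

def drop_redundant(frames, threshold=0.2, bins=16):
    """Keep a frame only if its grayscale histogram differs from the
    last kept frame by more than `threshold` (L1 distance)."""
    kept = [0]
    ref = np.histogram(frames[0], bins=bins, range=(0, 256))[0]
    ref = ref / ref.sum()
    for i in range(1, len(frames)):
        h = np.histogram(frames[i], bins=bins, range=(0, 256))[0]
        h = h / h.sum()
        if np.abs(h - ref).sum() > threshold:  # visually distinct frame
            kept.append(i)
            ref = h
    return kept

# toy frames: five near-identical dark frames, then five bright ones
dark = [np.full((8, 8), 20 + i) for i in range(5)]
bright = [np.full((8, 8), 200 + i) for i in range(5)]
print(drop_redundant(dark + bright))  # → [0, 5]
```

Only the first frame of each visually similar run survives, so the downstream CNN scores far fewer candidates than the raw frame count.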


2019
pp. 1627-1638
Author(s): Abdul Amir Abdullah Karim, Rafal Ali Sameer

A video is represented by a large number of frames synchronized with audio, so storing video requires substantial space, delivery is slow, and computation is expensive. Video summarization conveys the information of an entire video in a minimal amount of time. This paper proposes both static and dynamic video summarization methods. The proposed static method comprises several steps: extracting frames from the video, selecting key frames, extracting and describing features, matching the feature descriptors against a bag of visual words, and saving the frames whose features match. The proposed dynamic method extracts the audio track from the video, computes audio features as the average of the samples within fixed windows, and finds the highest average, which corresponds to the portion of the video with the loudest sound. Experimental results for the static method show no redundancy among the selected representative key frames, and subjective evaluation confirms the importance of the selected key frames. Experimental results for the dynamic method show that all goal segments are extracted into the video summary. Both methods were applied to football (soccer) videos.
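The dynamic method's audio step, averaging samples within fixed windows and picking the window with the highest average, can be sketched directly. The window size and the synthetic signal below are illustrative; a real pipeline would operate on the decoded audio track of the match footage.

```python
import numpy as np

def loudest_window(samples, window):
    """Average absolute amplitude over fixed-size windows and return the
    sample index where the loudest window starts."""
    n = len(samples) // window                      # full windows only
    avgs = np.abs(samples[: n * window]).reshape(n, window).mean(axis=1)
    return int(avgs.argmax()) * window

# toy audio: quiet noise with a loud burst (e.g. crowd noise at a goal)
rng = np.random.default_rng(0)
audio = rng.normal(0.0, 0.05, 1000)
audio[400:500] += np.sin(np.linspace(0.0, 40.0, 100))
print(loudest_window(audio, window=100))  # → 400
```

In a soccer broadcast the loudest windows tend to coincide with goals and near-misses, which is why this simple statistic suffices to locate the segments worth keeping.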

