The development of a video browsing and video summary review tool

Author(s):  
Chiman Kwan ◽  
Jin Zhou ◽  
Jenson Yin
Author(s):  
Hesham Farouk ◽  
Kamal ElDahshan ◽  
Amr Abd Elawed Abozeid

In the context of mobile computing and multimedia processing, video summarization plays an important role in video browsing, streaming, indexing, and storage. In this paper, an effective and efficient video summarization approach for mobile devices is proposed. The goal of this approach is to generate a video summary (static and dynamic) based on a Visual Attention Model (VAM) and a new Fast Directional Motion Intensity Estimation (FDMIE) algorithm for mobile devices. The VAM simulates the Human Vision System (HVS) to extract the salient areas of the video content that attract the most attention. The evaluation results demonstrate an effectiveness rate of up to 87% with respect to the manually generated summary and state-of-the-art approaches. Moreover, the efficiency of the proposed approach makes it suitable for online and mobile applications.
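Since the abstract names the VAM and FDMIE components without detailing them, the following is a minimal sketch of how attention-driven frame scoring might look, assuming OpenCV and NumPy are available. The per-frame score here is a plain motion-intensity measure (mean absolute frame difference), a hypothetical stand-in for the paper's saliency and motion models; the function name score_frames is invented for illustration.

```python
# Minimal sketch: rank frames by a simple motion-intensity "attention" score.
# This is an illustrative stand-in, not the paper's VAM or FDMIE algorithm.
import cv2
import numpy as np

def score_frames(video_path, top_k=10):
    """Return indices of the top_k frames with the highest motion intensity."""
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    scores = []  # (frame_index, motion intensity)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray)     # pixel-wise change
            scores.append((idx, float(np.mean(diff))))
        prev_gray = gray
        idx += 1
    cap.release()
    scores.sort(key=lambda s: s[1], reverse=True)
    return [i for i, _ in scores[:top_k]]
```

Under this simplification, a static summary would be the top-scoring frames, while a dynamic summary would keep short segments around them.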


2011 ◽  
Vol 403-408 ◽  
pp. 516-521 ◽  
Author(s):  
Sanjay Singh ◽  
Srinivasa Murali Dunga ◽  
AS Mandal ◽  
Chandra Shekhar ◽  
Santanu Chaudhury

In any remote surveillance scenario, smart cameras have to take intelligent decisions to generate summary frames in order to minimize communication and processing overhead. Video summary generation, in the context of a smart camera, is the process of merging the information from multiple frames. A summary generation scheme based on a clustering-based change detection algorithm has been implemented in our smart camera system to generate frames that deliver the requisite information. In this paper, we propose an embedded-platform-based framework for implementing the summary generation scheme using a HW-SW co-design methodology. The complete system is implemented on a Xilinx XUP Virtex-II Pro FPGA board. The overall algorithm runs on the PowerPC405, while the computationally intensive and frequently called blocks are implemented in hardware using VHDL. The system is designed using the Xilinx Embedded Development Kit (EDK).
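The abstract does not spell out the clustering-based change detection, so the sketch below shows one plausible software analogue, assuming grayscale NumPy frames: each image block keeps a small set of intensity cluster centres, and a block whose current mean matches no centre is reported as changed. The class name BlockClusterDetector and all thresholds are hypothetical; this is not the authors' FPGA/VHDL implementation.

```python
# Minimal sketch of clustering-based change detection on grayscale frames.
# Illustrative simplification, not the hardware scheme described in the paper.
import numpy as np

class BlockClusterDetector:
    def __init__(self, block=16, max_clusters=4, tol=15.0, lr=0.1):
        self.block = block            # block size in pixels
        self.max_clusters = max_clusters
        self.tol = tol                # match tolerance (grey levels)
        self.lr = lr                  # learning rate for centre updates
        self.centres = None           # (rows, cols, max_clusters) centres

    def _block_means(self, gray):
        h, w = gray.shape
        b = self.block
        g = gray[: h - h % b, : w - w % b]
        return g.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

    def update(self, gray):
        """Return a boolean map of changed blocks for one grayscale frame."""
        means = self._block_means(gray.astype(np.float32))
        if self.centres is None:
            # First frame: every cluster centre starts at the block mean.
            self.centres = np.repeat(means[:, :, None], self.max_clusters, axis=2)
            return np.zeros(means.shape, dtype=bool)
        dist = np.abs(self.centres - means[:, :, None])
        best = dist.argmin(axis=2)
        matched = dist.min(axis=2) < self.tol
        rows, cols = np.indices(means.shape)
        # Matched blocks: adapt the closest centre towards the new block mean.
        self.centres[rows, cols, best] = np.where(
            matched,
            (1 - self.lr) * self.centres[rows, cols, best] + self.lr * means,
            self.centres[rows, cols, best],
        )
        # Unmatched blocks: replace the farthest centre and report a change.
        worst = dist.argmax(axis=2)
        self.centres[rows, cols, worst] = np.where(
            matched, self.centres[rows, cols, worst], means
        )
        return ~matched
```

Frames whose changed-block count exceeds a threshold would then be kept as summary frames and transmitted; unchanged frames would be dropped at the camera.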


Author(s):  
Jun He ◽  
Hanwang Zhang ◽  
Ling Shen ◽  
Richang Hong ◽  
Tat-Seng Chua

2011 ◽  
Vol 10 (03) ◽  
pp. 247-259 ◽  
Author(s):  
Dianting Liu ◽  
Mei-Ling Shyu ◽  
Chao Chen ◽  
Shu-Ching Chen

With the popularity of home video recorders and the surge of Web 2.0, the growing amount of video has made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in video browsing, searching, categorisation, and indexing. An effective set of key frames should include the major objects and events of the video sequence and contain minimal content redundancy. In this paper, an innovative key frame extraction method is proposed to select representative key frames for a video. By analysing the differences between frames and utilising a clustering technique, a set of key frame candidates (KFCs) is first selected at the shot level; the information within a video shot and between video shots is then used to filter the candidate set and generate the final set of key frames. Experimental results on the TRECVID 2007 video dataset demonstrate the effectiveness of the proposed key frame extraction method in terms of the percentage of extracted key frames and the retrieval precision.
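To make the "frame differences plus clustering" idea concrete, here is a minimal sketch of selecting representative frames within a single shot, assuming decoded BGR frames and the availability of OpenCV, NumPy, and scikit-learn. The colour-histogram features, the k-means step, and the function name key_frames_for_shot are illustrative assumptions, not the paper's exact KFC selection and filtering stages.

```python
# Minimal sketch: pick representative frames in one shot by clustering
# colour-histogram features and keeping the frame closest to each centre.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def key_frames_for_shot(frames, n_key=3):
    """Return indices of up to n_key representative frames in the shot."""
    feats = []
    for f in frames:
        hist = cv2.calcHist([f], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        feats.append(cv2.normalize(hist, hist).flatten())
    feats = np.array(feats)
    k = min(n_key, len(frames))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    key_idx = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        key_idx.append(int(members[dists.argmin()]))
    return sorted(key_idx)
```

A between-shot filtering pass, as described in the abstract, would then discard candidates that are near-duplicates of key frames already chosen from neighbouring shots.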

