Hybrid Method of Video Shot Segmentation Based on YCbCr Space Color Model

Author(s):  
Sasmita Kumari Nayak ◽  
Jharna Majumdar

In this digital world, video analysis is an important and useful task. Considerable work has been done in video analysis, such as video compression, video retrieval, and video database indexing. A common step in all of these tasks is segmenting the video into shots, referred to as Video Shot Segmentation (VSS). Video shot segmentation divides an input video into sequences of frames delimited by scene changes, called shots. In this article, a hybrid procedure for segmenting video shots is introduced that combines color moments, distance metrics, and threshold techniques. Before these steps are applied, the input video is converted into a specific color model, YCbCr. Color moments are then applied to extract a feature vector from each frame, so that frames can be differentiated by their color features. For every pair of consecutive frames, distance metrics are computed to measure their similarity or dissimilarity, and a threshold technique is applied to the dissimilarity values to obtain the shots from the video. In this paper, an adaptive threshold technique is used to segment the video into shots, yielding the true number of shots. In the experimental results, the proposed methodology is evaluated on a sequence of videos using standard evaluation metrics.
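
The pipeline described above (YCbCr conversion, color moments per channel, consecutive-frame distances, adaptive threshold) can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' code: the BT.601 conversion constants, the Euclidean distance between moment vectors, and the mean-plus-k-sigma threshold are all illustrative choices.

```python
import numpy as np

def rgb_to_ycbcr(frame):
    """Convert an RGB frame (H x W x 3, floats in [0, 255]) to YCbCr (ITU-R BT.601)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def color_moments(frame):
    """First three color moments (mean, std, skewness) per channel -> 9-D feature vector."""
    feats = []
    for c in range(3):
        ch = frame[..., c].ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())  # cube root keeps the sign
        feats.extend([mu, sigma, skew])
    return np.array(feats)

def segment_shots(frames, k=1.0):
    """Declare a boundary where the consecutive-frame moment distance
    exceeds an adaptive threshold (mean + k * std of all distances)."""
    feats = [color_moments(rgb_to_ycbcr(f)) for f in frames]
    dists = np.array([np.linalg.norm(feats[i + 1] - feats[i])
                      for i in range(len(feats) - 1)])
    threshold = dists.mean() + k * dists.std()
    return [i + 1 for i, d in enumerate(dists) if d > threshold]
```

For example, a synthetic video whose frames jump from dark to bright mid-sequence yields a single detected boundary at the jump.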

Author(s):  
Nandini H. M. ◽  
Chethan H. K. ◽  
Rashmi B. S.

Shot boundary detection (SBD) in videos is one of the most fundamental tasks in content-based video retrieval and analysis. To this end, an efficient approach to detect abrupt and gradual transitions in videos is presented. The proposed method detects shot boundaries by extracting a block-based mean probability binary weight (MPBW) histogram from the normalized Kirsch magnitude frames as an amalgamation of local and global features. Abrupt transitions are detected by utilizing a distance measure between consecutive MPBW histograms and employing an adaptive threshold. In the subsequent step, a statistical measure based on the coefficient of mean deviation and variance is applied to the MPBW histograms to detect gradual transitions in the video. Experiments were conducted on the TRECVID 2001 and 2007 datasets to analyse and validate the proposed method. Experimental results show a significant improvement of the proposed SBD approach over some state-of-the-art algorithms in terms of recall, precision, and F1-score.


2011 ◽  
Vol 10 (03) ◽  
pp. 247-259 ◽  
Author(s):  
Dianting Liu ◽  
Mei-Ling Shyu ◽  
Chao Chen ◽  
Shu-Ching Chen

As a consequence of the popularity of home video recorders and the surge of Web 2.0, increasing amounts of video have made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in video browsing, searching, categorisation, and indexing. An effective set of key frames should include the major objects and events of the video sequence and contain minimal content redundancy. In this paper, an innovative key frame extraction method is proposed to select representative key frames for a video. By analysing the differences between frames and utilising a clustering technique, a set of key frame candidates (KFCs) is first selected at the shot level; the information within a video shot and between video shots is then used to filter the candidate set and generate the final set of key frames. Experimental results on the TRECVID 2007 video dataset demonstrate the effectiveness of the proposed key frame extraction method in terms of the percentage of extracted key frames and the retrieval precision.
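
The candidate-selection idea, analysing frame differences and clustering at the shot level, can be sketched with a simple sequential clustering over coarse histogram features. All specifics below (the grayscale histogram feature, the L1 distance, the medoid rule, the 0.2 threshold) are our own illustrative choices, not the paper's definitions.

```python
import numpy as np

def frame_feature(frame, bins=8):
    """Coarse grayscale histogram as the per-frame feature (a simplification)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def keyframe_candidates(frames, dist_threshold=0.2):
    """Sequentially cluster similar frames; emit the medoid of each cluster
    as a key frame candidate (KFC)."""
    feats = [frame_feature(f) for f in frames]
    clusters = [[0]]
    centroids = [feats[0]]
    for i in range(1, len(feats)):
        d = np.abs(feats[i] - centroids[-1]).sum() / 2  # L1 distance in [0, 1]
        if d > dist_threshold:
            clusters.append([i])                         # start a new cluster
            centroids.append(feats[i])
        else:
            clusters[-1].append(i)                       # absorb and update centroid
            centroids[-1] = np.mean([feats[j] for j in clusters[-1]], axis=0)
    candidates = []
    for members, c in zip(clusters, centroids):
        best = min(members, key=lambda j: np.abs(feats[j] - c).sum())
        candidates.append(best)
    return candidates
```

Two visually distinct runs of frames produce one candidate per run; a second filtering pass across shots (as in the paper) would then prune this candidate set.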


Author(s):  
Rashmi B S ◽  
Nagendraswamy H S

The amount of video data generated and made publicly available has increased tremendously in today's digital era. Analyzing these huge video repositories requires effective and efficient content-based video analysis systems. Shot boundary detection and keyframe extraction are the two major tasks in video analysis. In this direction, a method for detecting abrupt shot boundaries and extracting a representative keyframe from each video shot is proposed. These objectives are achieved by incorporating the concepts of fuzzy sets and intuitionistic fuzzy sets. Shot boundaries are detected using the coefficient of correlation on fuzzified frames. Further, probabilistic entropy measures are computed over the fuzzified frames of a shot to extract its keyframe: the representative keyframe of a shot is the frame with the highest entropy value. To show the efficacy of the proposed methods, two benchmark datasets are used (TRECVID and the Open Video Project). The proposed methods outperform some state-of-the-art shot boundary detection and keyframe extraction methods.
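
The keyframe rule, pick the frame of a shot with the highest entropy, can be sketched as below. The fuzzification and intuitionistic fuzzy sets of the paper are omitted; plain Shannon entropy over gray levels stands in for the paper's probabilistic entropy measure, so this is a simplified sketch rather than the actual method.

```python
import math

def frame_entropy(pixels, levels=256):
    """Shannon entropy (bits) of the gray-level distribution of one frame,
    given as a flat list of integer pixel values."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def select_keyframe(shot):
    """Keyframe = index of the frame in the shot with maximum entropy."""
    return max(range(len(shot)), key=lambda i: frame_entropy(shot[i]))
```

A uniform frame has zero entropy, so a frame with richer gray-level variety is preferred as the shot's representative.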


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Sujuan Hou ◽  
Shangbo Zhou

Query-by-example video retrieval aims at automatically retrieving, from a video database, samples that are similar to a user-provided example. Considering that much prior work on video analysis supports retrieval using only visual features, in this paper a two-step method for query by example is proposed in which both audio and visual features are used. In the proposed method, sets of audio and visual features are extracted at the shot level and key frame level, respectively. Among these features, audio features are employed for coarse retrieval, while visual features are applied to refine the results. The experimental results demonstrate the good performance of the proposed approach.
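
The coarse-to-fine structure, audio first, visual second, can be sketched as below. The feature representations, the Euclidean distance, and the top-k coarse cutoff are illustrative assumptions, not the paper's definitions.

```python
def two_step_retrieval(query, database, coarse_k=3):
    """Step 1: rank database items by audio-feature distance, keep the top k.
    Step 2: re-rank the survivors by visual-feature distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    coarse = sorted(database, key=lambda item: dist(item["audio"], query["audio"]))[:coarse_k]
    return sorted(coarse, key=lambda item: dist(item["visual"], query["visual"]))
```

The cheap audio pass prunes most of the database, so the more discriminative visual comparison only runs on a few candidates.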


Author(s):  
Jianping Fan ◽  
Xingquan Zhu ◽  
Jing Xiao

Recent advances in digital video compression and networks have made videos more accessible than ever. Several content-based video retrieval systems have been proposed in the past. In this chapter, we first review these existing content-based video retrieval systems and then propose a new framework, called ClassView, to make some advances towards more efficient content-based video retrieval. This framework includes: (a) an efficient video content analysis and representation scheme to support high-level visual concept characterization; (b) a hierarchical video classification technique to bridge the semantic gap between low-level visual features and high-level semantic visual concepts; and (c) a hierarchical video database indexing structure to enable video access over large-scale databases. Integrating video access with efficient database indexing tree structures provides a great opportunity for supporting more powerful video search engines.
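
The hierarchical indexing idea, descending a tree of semantic concepts to reach the relevant videos, can be sketched minimally. The node structure and lookup below are hypothetical illustrations of a concept hierarchy, not ClassView's actual data structures.

```python
class ConceptNode:
    """Node of a hierarchical concept index: inner nodes are semantic
    concepts; each node can also hold the ids of videos indexed under it."""
    def __init__(self, name):
        self.name = name
        self.children = []
        self.videos = []

def find(node, path):
    """Descend the concept hierarchy along `path` (a list of concept names)
    and return the videos stored at the destination node, or [] if absent."""
    if not path:
        return node.videos
    for child in node.children:
        if child.name == path[0]:
            return find(child, path[1:])
    return []
```

Because each query touches only one root-to-leaf path, lookup cost grows with the depth of the hierarchy rather than the size of the database.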


Author(s):  
LIANG-HUA CHEN ◽  
KUO-HAO CHIN ◽  
HONG-YUAN MARK LIAO

The usefulness of a video database depends on whether the video of interest can be easily located. In this paper, we propose a video retrieval algorithm based on the integration of several visual cues. In contrast to key-frame-based shot representation, our approach analyzes all frames within a shot to construct a compact representation of the video shot. In the video matching step, a similarity measure integrating color and motion features is defined to locate occurrences of similar video clips in the database. Our approach is therefore able to fully exploit the spatio-temporal information contained in video. Experimental results indicate that the proposed approach is effective and outperforms some existing techniques.
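
The integrated similarity measure can be sketched as a weighted combination of a color term and a motion term. The histogram-intersection color term, the scalar motion feature, and the 0.6/0.4 weights below are illustrative assumptions, not the measure defined in the paper.

```python
def shot_similarity(a, b, w_color=0.6, w_motion=0.4):
    """Weighted combination of color-histogram intersection (assumes the
    histograms are normalized to sum to 1) and a motion-feature agreement
    term; both components lie in [0, 1], so the result does too."""
    color_sim = sum(min(x, y) for x, y in zip(a["color_hist"], b["color_hist"]))
    motion_sim = 1.0 - min(1.0, abs(a["motion"] - b["motion"]))
    return w_color * color_sim + w_motion * motion_sim
```

Identical shots score 1.0; shots differing in either color distribution or motion score strictly lower, which is what lets the matcher rank clips in the database.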


2020 ◽  
Vol 8 (5) ◽  
pp. 4763-4769

Nowadays, with the progress of digital image technology, the number of video files is rising fast, and there is great demand for automatic video semantic analysis in many scenarios, such as video semantic understanding, content-based analysis, and video retrieval. Shot boundary detection is an elementary step in video analysis. However, recent methods are time consuming and perform poorly in gradual transition detection. In this paper we propose a novel feature-extraction-based approach to video shot boundary detection (VSBD) using a convolutional neural network (CNN). The method is implemented in two steps. First, features are extracted from the H, S, and V channels using a mean log difference, along with a histogram distribution function. These features are given as input to the CNN, which detects shots based on a probability function. The CNN is implemented using convolution and rectified linear unit (ReLU) activation layers, with filtering and zero padding; after downsizing, the resulting matrix is passed to a fully connected layer that indicates shot boundaries. Comparing the proposed method with a GPU-based CNN method, the results are encouraging, with substantially high precision, recall, and F1 measures. The CNN method performs moderately better on animated videos, while it excels on complex videos, as observed in the results.
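
The first step, a mean log difference over the H, S, and V channels of consecutive frames, can be sketched as follows. The exact feature definition, the histogram stage, and the CNN itself are not reproduced; the pixel-tuple representation and the log1p form below are our own assumptions.

```python
import math

def hsv_mean_log_diff(prev_hsv, curr_hsv):
    """Per-channel absolute log-difference of the mean H, S, V values
    between two consecutive frames, each given as a list of (h, s, v)
    tuples. Small values suggest continuity; spikes suggest a boundary."""
    feats = []
    for c in range(3):
        m_prev = sum(p[c] for p in prev_hsv) / len(prev_hsv)
        m_curr = sum(p[c] for p in curr_hsv) / len(curr_hsv)
        feats.append(abs(math.log1p(m_curr) - math.log1p(m_prev)))
    return feats
```

Such a three-value signal per frame pair would then be histogrammed and fed to the CNN classifier described above.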

