Video content analysis using the video time density function and statistical models

2021 ◽  
Author(s):  
Junfeng Jiang

Video content analysis, the discovery of meaningful structure and patterns in visual data for efficient video indexing and mining, is an interesting, meaningful, and challenging topic. In this thesis, a new theoretical framework for video content analysis using the video time density function (VTDF) and statistical models is proposed. The framework tackles video content analysis based on semantic information from three perspectives: video summarization, video similarity measurement, and video event detection. The main research problems are first formulated mathematically. Two video data modeling tools are then presented to explore the spatiotemporal characteristics of video data: independent component analysis (ICA)-based feature extraction and the VTDF. Video summarization is categorized into two types, static and dynamic, and two new methods are proposed to generate static video summaries. The first builds a hierarchical key frame tree to summarize video content at multiple levels; the second is a vector quantization-based method using Gaussian mixture (GM) and ICA mixture (ICAM) models that exploits the spatial-domain characteristics of video data to produce a compact summary. The VTDF is then applied to develop several approaches to content-based video analysis: VTDF-based temporal quantization and statistical models summarize video content dynamically, a VTDF-based similarity model measures the similarity between two video sequences, and a VTDF-based event detection method classifies a video into pre-defined events. Video players with content-based fast-forward playback support are designed, developed, and implemented to demonstrate the feasibility of the proposed methods.
Given the richness of the literature on effective and efficient information coding and representation using probability density functions (PDFs), the VTDF is expected to serve as a foundation for video content representation, and more video content analysis methods are expected to be developed within the VTDF framework.
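The vector quantization-based static summarization described above can be illustrated with a minimal sketch. The feature representation, cluster count, initialization, and plain k-means loop below are illustrative assumptions, not the thesis's actual GM/ICAM formulation: frame feature vectors are clustered, and the frame nearest each centroid becomes a key frame.

```python
# Illustrative sketch (not the thesis's GM/ICAM method): vector-quantization
# key-frame selection. Frame feature vectors are clustered with a plain
# k-means loop, and the frame nearest each centroid is kept as a key frame.
import numpy as np

def vq_key_frames(features, k, iters=20):
    """Return one representative frame index per cluster of feature vectors."""
    n = len(features)
    # Deterministic init: evenly spaced frames as starting centroids.
    centroids = features[np.linspace(0, n - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each frame to its nearest centroid, then recompute means.
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    return sorted(set(d.argmin(axis=0)))  # frame closest to each centroid

# Toy input: three synthetic "scenes" with well-separated mean features.
rng = np.random.default_rng(1)
frames = np.vstack([rng.normal(m, 0.1, size=(30, 8)) for m in (0.0, 1.0, 2.0)])
summary = vq_key_frames(frames, k=3)
```

A full GM or ICAM formulation would replace the hard k-means assignment with soft component responsibilities, but the summarization principle, one representative frame per mixture component, is the same.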



The immense growth in video content retrieval and video content analysis has motivated practitioners to migrate video content and analytic applications to the cloud. The cloud computing platform provides scalability for applications and data, enabling application owners to run the complex algorithms needed for video content analysis and retrieval. The primary concern for video data retrieval on cloud services is the weak security of data while it migrates from one VM to another. In addition, standard encryption algorithms perform poorly when encrypting large files. Hence, recent research demands reduced performance overhead for video content encryption on cloud services. This work proposes an adaptive encryption and decryption algorithm for large video data on the cloud, delivered as Encryption as A Service (EAAS). It introduces a novel key generation algorithm based on quartic polynomial randomization. The quartic function used in the proposed algorithm can produce many turning points, which makes its output hard to predict, and the polynomial randomization further increases the randomness of those turning points. Likewise, the large volume of video data must be reduced without degrading the information and without compromising security. Accordingly, this work proposes a novel key frame similarity extraction technique using adaptive motion. The similarity regions in the key frames contain similar information and can therefore be encrypted as a group, which reduces the time complexity considerably. Along the same line of development, this work also proposes time-bounded encryption and decryption algorithms, which can distinguish between similar and distinct regions and reduce the time complexity further.
The proposed algorithm demonstrates nearly 40% improvement over the standard encryption algorithms.
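The abstract does not spell out the quartic polynomial randomization construction, so the following sketch assumes one plausible reading: randomized coefficients define a degree-4 polynomial whose values at successive points are hashed into key material. Every name and parameter here is hypothetical, not the paper's actual scheme.

```python
# Hypothetical sketch of key generation via quartic polynomial randomization.
# Random coefficients a4..a0 define a degree-4 polynomial; its values at
# successive points are hashed to produce a key stream. A quartic has up to
# three turning points, making its shape hard to infer from a few samples.
import hashlib
import secrets

def quartic_keystream(nbytes, coeffs=None, modulus=2**61 - 1):
    """Generate nbytes of key material from a randomized quartic polynomial."""
    if coeffs is None:
        coeffs = [secrets.randbelow(modulus) for _ in range(5)]  # a4..a0
    out = bytearray()
    x = 1
    while len(out) < nbytes:
        # Evaluate a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 (mod p), Horner form.
        v = 0
        for c in coeffs:
            v = (v * x + c) % modulus
        out += hashlib.sha256(v.to_bytes(8, "big")).digest()
        x += 1
    return bytes(out[:nbytes])

key = quartic_keystream(32)
```

With fixed coefficients the stream is reproducible, so sender and receiver sharing the five coefficients can derive identical key material; fresh random coefficients give a new stream per session.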


2006 ◽  
Vol 52 (3) ◽  
pp. 870-878 ◽  
Author(s):  
Jungong Han ◽  
D. Farin ◽  
P.H.N. de With ◽  
Weilun Lao

Author(s):  
Min Chen

The fast proliferation of video data archives has increased the need for automatic video content analysis and semantic video retrieval. Since temporal information is critical in conveying video content, this chapter proposes an effective temporal-based event detection framework to support high-level video indexing and retrieval. Its core is a temporal association mining process that systematically captures characteristic temporal patterns to help identify and define interesting events. The framework effectively tackles the challenges posed by loose video structure and class imbalance. One of its unique characteristics is strong generality and extensibility: it can discover representative event patterns with little human interference. The temporal information and event detection results can then be fed into our proposed distributed video retrieval system to support high-level semantic querying, selective video browsing, and event-based video retrieval.
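The temporal association mining idea can be sketched minimally, under assumed simplifications that are not the chapter's actual process: shot labels as input, pairwise patterns within a fixed window, and a relative support threshold to keep characteristic patterns.

```python
# Illustrative sketch of temporal association mining over a shot-label
# sequence: ordered label pairs within a temporal window are counted, and
# pairs whose relative support clears a threshold are kept as patterns.
from collections import Counter

def temporal_patterns(labels, window=2, min_support=0.2):
    """Return ordered label pairs (within `window` shots) with support
    at or above min_support, mapped to their relative support."""
    pairs = Counter()
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, min(i + 1 + window, n)):
            pairs[(labels[i], labels[j])] += 1
    total = max(sum(pairs.values()), 1)
    return {p: c / total for p, c in pairs.items() if c / total >= min_support}

# Toy shot sequence from a hypothetical soccer video.
shots = ["crowd", "pitch", "goal", "replay", "crowd", "pitch", "goal", "replay"]
patterns = temporal_patterns(shots, window=1, min_support=0.2)
```

Recurring transitions such as ("goal", "replay") survive the threshold and could then serve as candidate event signatures; a real mining process would also handle longer patterns and the class imbalance the chapter discusses.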

