A Novel Framework for Video Retrieval Algorithm Evaluation and Methods for Effective Context-Aware Video Content Retrieval on Cloud

Author(s):  
T. Naga Raja ◽  
V. V. Venkata Ramana ◽  
A. Damodaram
Author(s):  
Min Chen

The fast proliferation of video data archives has increased the need for automatic video content analysis and semantic video retrieval. Since temporal information is critical in conveying video content, in this chapter an effective temporal-based event detection framework is proposed to support high-level video indexing and retrieval. The core is a temporal association mining process that systematically captures characteristic temporal patterns to help identify and define interesting events. This framework effectively tackles the challenges caused by loose video structure and class imbalance issues. One of the unique characteristics of this framework is that it offers strong generality and extensibility, with the capability of exploring representative event patterns with little human interference. The temporal information and event detection results can then be input into our proposed distributed video retrieval system to support high-level semantic querying, selective video browsing, and event-based video retrieval.
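The temporal association mining step can be illustrated with a toy sketch. The contiguous-pattern counting, the support threshold, and the shot-label inputs below are illustrative assumptions for exposition, not the chapter's actual mining procedure:

```python
from collections import Counter

def mine_temporal_patterns(shot_sequences, max_len=3, min_support=0.5):
    """Toy sketch of temporal association mining: count contiguous
    label patterns across shot-label sequences and keep those whose
    support (fraction of sequences containing the pattern) clears a
    threshold. Names and thresholds are illustrative."""
    n = len(shot_sequences)
    frequent = {}
    for length in range(1, max_len + 1):
        counts = Counter()
        for seq in shot_sequences:
            # count each pattern at most once per sequence
            seen = {tuple(seq[i:i + length]) for i in range(len(seq) - length + 1)}
            counts.update(seen)
        for pattern, c in counts.items():
            if c / n >= min_support:
                frequent[pattern] = c / n
    return frequent
```

A frequent pattern such as ("corner", "goal") surviving the support threshold would then be a candidate characteristic temporal pattern for defining an event.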


2009 ◽  
pp. 1080-1095
Author(s):  
Janne Lahti ◽  
Utz Westermann ◽  
Marko Palola ◽  
Johannes Peltola

Video management research has been neglecting the increased attractiveness of using camera-equipped mobile phones for the production of short home video clips. But specific capabilities of modern phones, especially the availability of rich context data, open up new approaches to traditional video management problems, such as the notorious lack of annotated metadata for home video content. In this chapter, we present MobiCon, a mobile, context-aware home video production tool. MobiCon allows users to capture video clips with their camera phones, to semi-automatically create MPEG-7-conformant annotations by exploiting available context data at capture time, to upload both clips and annotations to the users' video collections, and to share these clips with friends using OMA DRM. Thereby, MobiCon enables mobile users to effortlessly create richly annotated home video clips with their camera phones, paving the way to a more effective organization of their home video collections.



2018 ◽  
Vol 10 (4) ◽  
pp. 52-61
Author(s):  
Xiaoxi Liu ◽  
Ju Liu ◽  
Lingchen Gu ◽  
Yannan Ren

Due to the diversification of electronic equipment in public security forensics, vehicle surveillance video has attracted growing attention as a burgeoning source of evidence. Vehicle surveillance videos contain useful evidence, and video retrieval can help locate it. To retrieve evidence videos accurately and effectively, convolutional neural networks (CNNs) are widely applied to improve surveillance video retrieval performance. This article proposes a vehicle surveillance video retrieval method that combines deep features derived from a CNN with iterative quantization (ITQ) encoding: given any frame of a video, it can generate a short video applicable to public security forensics. Experiments show that the retrieved video describes the video content before and after the key frame directly and efficiently, and that the final short video of an accident scene in the surveillance footage can be regarded as forensic evidence.
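The ITQ encoding step can be sketched as follows: PCA-project the deep features, then alternate between binarizing and solving an orthogonal Procrustes problem for a rotation that minimizes quantization loss, and finally retrieve by Hamming distance. This is a minimal sketch of standard ITQ under assumed feature dimensions, not the article's exact pipeline:

```python
import numpy as np

def itq_encode(features, n_bits=16, n_iter=50, seed=0):
    """Learn ITQ binary codes: PCA projection plus a learned rotation
    that minimizes the quantization loss ||sign(VR) - VR||."""
    rng = np.random.default_rng(seed)
    X = features - features.mean(axis=0)           # zero-center
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_bits].T                              # top PCA directions
    V = X @ W                                      # projected data
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iter):
        B = np.sign(V @ R)                         # fix R, update codes
        U, _, Vh = np.linalg.svd(V.T @ B)          # fix B, solve Procrustes
        R = U @ Vh
    codes = (np.sign(V @ R) > 0).astype(np.uint8)  # final binary codes
    return codes, W, R

def hamming_search(query_code, db_codes, top_k=5):
    """Rank database frames by Hamming distance to the query code."""
    d = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(d)[:top_k]
```

Given any query frame, its nearest key frames under Hamming distance locate the relevant segment, from which the short evidence clip around the key frame could be assembled.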


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2125
Author(s):  
Xiaoyu Wu ◽  
Tiantian Wang ◽  
Shengjin Wang

Text-video retrieval tasks face a great challenge in the semantic gap between cross-modal information. Some existing methods transform the text or video into the same subspace to measure their similarity. However, such methods do not add a semantic consistency constraint when associating the semantic encodings of the two modalities, and the association result is poor. In this paper, we propose a multi-modal retrieval algorithm based on semantic association and multi-task learning. First, multi-level features of video and text are extracted with multiple deep learning networks, so that the information of the two modalities is fully encoded. Then, in the common feature space into which the two modalities are mapped, we propose a multi-task learning framework combining semantic similarity measurement and semantic consistency classification based on text-video features. The semantic consistency classification task constrains the learning of the semantic association task, so multi-task learning guides better feature mapping of the two modalities and optimizes the construction of the unified feature subspace. Finally, the experimental results of the proposed algorithm on the Microsoft Video Description (MSVD) and MSR-Video to Text (MSR-VTT) datasets surpass existing work, which proves that our algorithm can improve cross-modal retrieval performance.


Author(s):  
Sri Wahyuni

ABSTRACT

Introduction. One of the efforts to provide the best service for users is the development of innovative library services, among them a video content-based library collection. The MMTC Yogyakarta Multi Media College Library has developed a video content-based information retrieval system (STKI). By utilizing this video content-based STKI, users are expected to be helped and to find the material they need faster, especially when searching within video files.

Data Collection Method. The author uses qualitative research with a library research approach, while the data analysis uses content analysis techniques. This method is used to observe and analyze an information system.

Results and Discussion. The Content Based Video Retrieval development strategy at the MMTC Yogyakarta Multi Media College Library begins with identifying user needs, creating a system design, evaluating the system design, implementing the design in a programming language, testing the system, evaluating the system, and putting it into use. The author also provides an overview of STKI development through a SWOT analysis: the macro analysis formulates the opportunity and threat variables, while the internal analysis formulates the strength and weakness variables. The last stage is the STKI analysis, whose stages are: complete definition, problem analysis, needs analysis, logic design, and needs analysis.

Conclusions. In the Content Based Video Retrieval development strategy at the MMTC Yogyakarta Multi Media College Library, several things need to be considered in the development of an information retrieval system, including: user needs, the development budget, human resources, support from leadership, facilities (software and hardware), and IT infrastructure (internet network). STKI development should begin with identifying user needs and conducting a SWOT analysis to determine the strengths and weaknesses of the system, as well as its goals, so that the system can be optimally used by its users.

Keywords: Library, Information Retrieval System, Video Content

