Keyframe-Based Vehicle Surveillance Video Retrieval

2018 ◽  
Vol 10 (4) ◽  
pp. 52-61
Author(s):  
Xiaoxi Liu ◽  
Ju Liu ◽  
Lingchen Gu ◽  
Yannan Ren

This article describes how, owing to the diversification of electronic equipment in public security forensics, vehicle surveillance video has attracted growing attention as a burgeoning source of evidence. Vehicle surveillance videos contain useful evidence, and video retrieval can help locate it. To obtain evidence videos accurately and efficiently, convolutional neural networks (CNNs) are widely applied to improve surveillance video retrieval performance. This article proposes a vehicle surveillance video retrieval method that combines deep features derived from a CNN with iterative quantization (ITQ) encoding: given any frame of a video, it generates a short video that can be applied to public security forensics. Experiments show that the retrieved video describes the video content before and after the keyframe directly and efficiently, and that the resulting short video of an accident scene in the surveillance footage can serve as forensic evidence.
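The method pairs CNN deep features with ITQ binary codes, so a query keyframe can be matched against archived frames by Hamming distance. Below is a minimal NumPy sketch of the standard ITQ training and encoding steps (Gong and Lazebnik's formulation); the 64-bit code length, iteration count, and helper names are illustrative assumptions, not details taken from the article.

```python
# Minimal ITQ sketch: PCA-project zero-centered deep features, then
# alternate between binarizing and solving an orthogonal Procrustes
# problem for the rotation. Feature extraction by a CNN is assumed
# to have happened already.
import numpy as np

def itq_train(features, n_bits=64, n_iter=50, seed=0):
    """Learn a PCA projection and ITQ rotation from a feature matrix."""
    rng = np.random.default_rng(seed)
    mean = features.mean(axis=0)
    X = features - mean                      # zero-center the features
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_bits].T                        # (dim, n_bits) PCA projection
    V = X @ W
    R, _ = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))  # random rotation
    for _ in range(n_iter):
        B = np.sign(V @ R)                   # fix R, update binary codes
        U, _, St = np.linalg.svd(V.T @ B)    # fix B, solve Procrustes for R
        R = U @ St
    return mean, W, R

def itq_encode(features, mean, W, R):
    """Binarize features into +1/-1 ITQ codes."""
    return np.sign((features - mean) @ W @ R)

def hamming_rank(query_code, db_codes):
    """Rank database frames by Hamming distance to the query keyframe."""
    return np.argsort((db_codes != query_code).sum(axis=1))
```

Frames ranked nearest to the query keyframe, together with their temporal neighbors, would then be assembled into the short evidence clip.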



2019 ◽  
Vol 66 (2) ◽  
pp. 136-143
Author(s):  
Richard Arinitwe ◽  
Alice Willson ◽  
Sean Batenhorst ◽  
Peter T Cartledge

Abstract

Introduction: In resource-limited settings, the ratio of trained health care professionals to admitted neonates is low. Parents therefore frequently need to provide primary neonatal care, and to do so safely they require effective education and confidence. The evolution and availability of technology mean that video education is becoming more readily available in this setting.

Aim: This study aimed to investigate whether showing a short video on a specific neonatal topic could change the knowledge and confidence of mothers of admitted neonates.

Methods: A prospective interventional study was conducted in two hospitals in Kigali, Rwanda. Mothers of admitted neonates at a teaching hospital and a district hospital were invited to participate; fifty-nine mothers met the inclusion criteria. Participants were shown ‘Increasing Your Milk Supply, for mothers’, a seven-minute Global Health Media Project video in the local language (Kinyarwanda). Before and after watching the video, mothers completed a Likert-based questionnaire which assessed confidence and knowledge on the subject.

Results: Composite Likert scores showed a statistically significant increase in knowledge (pre = 27.2, post = 33.2, p < 0.001) and confidence (pre = 5.9, post = 14.2, p < 0.001). Satisfaction levels were high regarding the video's content, language and quality. However, only 10% of mothers owned a smartphone.

Discussion: We have shown that maternal confidence and knowledge on a specific neonatal topic can be increased through the use of a short video, and these videos have the potential to improve the quality of care provided to admitted neonates by their parents in low-resource settings.
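The comparison reported in the Results is a standard paired pre/post design on composite Likert scores. The abstract does not state which statistical test was used; the sketch below shows one common choice for paired ordinal data (a Wilcoxon signed-rank test), with invented sample scores purely for illustration.

```python
# Hypothetical pre/post composite Likert comparison; the scores below
# are fabricated for illustration and do not reproduce the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.integers(20, 32, size=59)                        # before the video
post = np.clip(pre + rng.integers(2, 10, 59), None, 40)    # after the video

# Composite score = sum of per-item Likert responses; compare paired samples.
stat, p = stats.wilcoxon(pre, post)
print(f"pre mean={pre.mean():.1f}, post mean={post.mean():.1f}, p={p:.2e}")
```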


2021 ◽  
Vol 13 (5) ◽  
pp. 869
Author(s):  
Zheng Zhuo ◽  
Zhong Zhou

In recent years, the amount of remote sensing imagery data has increased exponentially. The ability to quickly and effectively find the required images in massive remote sensing archives is key to organizing, managing, and sharing remote sensing image information. This paper proposes a high-resolution remote sensing image retrieval method based on Gabor-CA-ResNet and a split-based deep feature transform network. The main contributions are twofold. (1) To handle the complex textures, diverse scales, and special viewing angles of remote sensing images, a Gabor-CA-ResNet network is proposed that takes ResNet as the backbone, uses Gabor filters to represent the spatial-frequency structure of images, and applies a channel attention (CA) mechanism to obtain more representative and discriminative deep features. (2) A split-based deep feature transform network is designed that divides the features extracted by the Gabor-CA-ResNet network into several segments and transforms them separately, significantly reducing the dimensionality and storage space of the deep features. The experimental results on the UCM, WHU-RS, RSSCN7, and AID datasets show that, compared with state-of-the-art methods, our method obtains competitive performance, especially for remote sensing images with rare targets and complex textures.
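Contribution (2) amounts to compressing each segment of the deep feature with its own small transform rather than one large projection. Below is a minimal PyTorch sketch of that split-based idea; the segment count, layer sizes, and class name are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a split-based feature transform: divide a deep feature
# vector into equal segments and compress each segment independently.
import torch
import torch.nn as nn

class SplitFeatureTransform(nn.Module):
    def __init__(self, in_dim=2048, n_segments=8, seg_out=16):
        super().__init__()
        assert in_dim % n_segments == 0
        self.n_segments = n_segments
        seg_in = in_dim // n_segments
        # One independent projection per segment: far fewer parameters
        # than a single in_dim x out_dim matrix, and a much smaller
        # stored descriptor per image.
        self.proj = nn.ModuleList(
            nn.Linear(seg_in, seg_out) for _ in range(n_segments)
        )

    def forward(self, feats):                   # feats: (batch, in_dim)
        segments = feats.chunk(self.n_segments, dim=1)
        compressed = [p(s) for p, s in zip(self.proj, segments)]
        return torch.cat(compressed, dim=1)     # (batch, n_segments*seg_out)

# Example: compress a 2048-d backbone feature to a 128-d descriptor.
x = torch.randn(4, 2048)
print(SplitFeatureTransform()(x).shape)         # torch.Size([4, 128])
```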


Author(s):  
Min Chen

The fast proliferation of video data archives has increased the need for automatic video content analysis and semantic video retrieval. Since temporal information is critical in conveying video content, in this chapter an effective temporal-based event detection framework is proposed to support high-level video indexing and retrieval. The core is a temporal association mining process that systematically captures characteristic temporal patterns to help identify and define interesting events. This framework effectively tackles the challenges caused by loose video structure and class imbalance issues. One of its unique characteristics is that it offers strong generality and extensibility, with the capability of exploring representative event patterns with little human interference. The temporal information and event detection results can then be fed into our proposed distributed video retrieval system to support high-level semantic querying, selective video browsing, and event-based video retrieval.
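At its simplest, temporal association mining of this kind scans a sequence of shot-level labels and keeps the ordered patterns that recur within a temporal window. The sketch below illustrates that idea only; the window size, support threshold, and example labels are assumptions, and the chapter's actual mining algorithm is more elaborate.

```python
# Toy temporal-association miner: count ordered label pairs that
# co-occur within a sliding window over a shot sequence, keeping
# those seen at least `min_support` times.
from collections import Counter

def mine_temporal_patterns(shot_labels, window=5, min_support=2):
    counts = Counter()
    for i in range(len(shot_labels)):
        for j in range(i + 1, min(i + window, len(shot_labels))):
            counts[(shot_labels[i], shot_labels[j])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Example: in sports video, an interesting event is often preceded by
# characteristic shot types; frequent ordered pairs hint at such events.
shots = ["play", "close-up", "crowd", "replay", "play",
         "close-up", "crowd", "replay", "play"]
print(mine_temporal_patterns(shots))
```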


