video browsing
Recently Published Documents


TOTAL DOCUMENTS: 160 (FIVE YEARS: 5)

H-INDEX: 18 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Xiaomeng Wang ◽  
Alan F. Blackwell ◽  
Richard Jones ◽  
Hieu T. Nguyen

Abstract: Scene Walk is a video viewing technique suited to first-person video recorded from wearable cameras. It integrates a 2D video player and a visualisation of the camera trajectory into a non-photorealistic partial rendering of the 3D environment as reconstructed from image content. Applications include forensic analysis of first-person video archives, for example as recorded by emergency response teams. The Scene Walk method is designed to support the viewer’s construction and application of a cognitive map of the context in which first-person video was captured. We use methods from wayfinding research to assess the effectiveness of this non-photorealistic approach in comparison to actual physical experience of the scene. We find that Scene Walk does allow viewers to create a more accurate and effective cognitive map of first-person video than is achieved using a conventional video browsing interface, and that this cognitive map is comparable to the one formed by actually walking through the original environment.
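The abstract does not give implementation detail, but the core bookkeeping Scene Walk implies, keeping the 2D player and the 3D trajectory view in sync, can be sketched. In the minimal Python sketch below, the CameraPose type and pose_at_time helper are hypothetical names, and linear interpolation between reconstructed poses is an assumption, not the paper's method.

```python
# Minimal sketch: map the 2D player's playback time to a camera pose on
# the trajectory reconstructed from image content, so the 3D view can
# follow the scrubber. All names here are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPose:
    t: float                 # timestamp of the video frame (seconds)
    position: np.ndarray     # 3D camera position from the reconstruction
    forward: np.ndarray      # unit viewing direction

def pose_at_time(trajectory: list[CameraPose], t: float) -> CameraPose:
    """Linearly interpolate the camera pose for playback time t."""
    ts = np.array([p.t for p in trajectory])
    i = int(np.clip(np.searchsorted(ts, t), 1, len(ts) - 1))
    a, b = trajectory[i - 1], trajectory[i]
    w = (t - a.t) / max(b.t - a.t, 1e-9)
    pos = (1 - w) * a.position + w * b.position
    fwd = (1 - w) * a.forward + w * b.forward
    return CameraPose(t, pos, fwd / np.linalg.norm(fwd))

# Example: a straight 10-second walk; the viewer scrubs to t = 3.25 s.
trajectory = [CameraPose(float(t), np.array([t, 0.0, 0.0]),
                         np.array([1.0, 0.0, 0.0])) for t in range(11)]
print(pose_at_time(trajectory, 3.25).position)   # -> [3.25 0. 0.]
```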


2020 ◽  
Vol 9 (2) ◽  
pp. 1030-1032

Video summarization plays an important role in many fields, such as video indexing, video browsing, video compression, and video analysis. Keyframe extraction is one of the fundamental steps in video structure analysis: keyframes are the meaningful frames of a video and form the basis of a video summary. In this proposed model, we present an approach to keyframe extraction and static video summarization based on a Convolutional Neural Network (CNN). First, the video is converted to frames. Redundancy-elimination techniques are then applied to remove redundant frames. Finally, keyframes are extracted with a CNN model, and the video summary is formed from the extracted keyframes.
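As an illustration of the pipeline the abstract outlines, the Python sketch below decodes frames, drops near-duplicates by histogram correlation, embeds the survivors with a pretrained CNN, and clusters the embeddings to pick keyframes. The ResNet-18 backbone, the correlation threshold, and the k-means selection are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the described pipeline: frames -> redundancy elimination ->
# CNN keyframe extraction -> summary. Backbone and thresholds are
# illustrative choices, not the paper's reported setup.
import cv2, numpy as np, torch, torchvision
from sklearn.cluster import KMeans

def decode_frames(path, step=10):
    """Sample every `step`-th frame from the video."""
    cap, frames, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def drop_redundant(frames, thresh=0.95):
    """Redundancy elimination: discard a frame whose grey-level histogram
    correlates too strongly with the previously kept frame."""
    kept, last_hist = [], None
    for f in frames:
        h = cv2.calcHist([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)],
                         [0], None, [64], [0, 256])
        h = cv2.normalize(h, h).flatten()
        if last_hist is None or cv2.compareHist(
                last_hist, h, cv2.HISTCMP_CORREL) < thresh:
            kept.append(f)
            last_hist = h
    return kept

def select_keyframes(frames, k=5):
    """Embed frames with a pretrained CNN; keep the frame closest to
    each k-means cluster centre as a keyframe."""
    weights = torchvision.models.ResNet18_Weights.DEFAULT
    backbone = torchvision.models.resnet18(weights=weights)
    backbone.fc = torch.nn.Identity()
    backbone.eval()
    pre = weights.transforms()
    with torch.no_grad():
        feats = torch.stack([backbone(pre(torch.from_numpy(
            cv2.cvtColor(f, cv2.COLOR_BGR2RGB)).permute(2, 0, 1)
        ).unsqueeze(0)).squeeze(0) for f in frames]).numpy()
    km = KMeans(n_clusters=min(k, len(frames)), n_init=10).fit(feats)
    idx = [int(np.argmin(np.linalg.norm(feats - c, axis=1)))
           for c in km.cluster_centers_]
    return [frames[i] for i in sorted(set(idx))]

summary = select_keyframes(drop_redundant(decode_frames("input.mp4")))
```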


Author(s):  
Miroslav Kratochvíl ◽  
Patrik Veselý ◽  
František Mejzlík ◽  
Jakub Lokoč
Keyword(s):  

Videos are recorded and uploaded daily to sites such as YouTube and Facebook from devices such as mobile phones and digital cameras, often with little or no metadata (semantic tags) associated with them. This makes it extremely difficult to retrieve similar videos from metadata alone, without content-based semantic search. Content-based video retrieval is the problem of retrieving the videos most similar to a given query video, and it has a wide range of applications such as video browsing, content filtering, and video indexing. Traditional video-level features are built on hand-engineered keyframe-level features that do not exploit the rich dynamics present in a video. In this paper we propose a fast content-based video retrieval framework using compact spatio-temporal features learned by deep learning. Specifically, a deep CNN combined with an LSTM is deployed to learn spatio-temporal representations of videos. For fast retrieval, a hash-learning component in the framework generates binary codes. To learn the hash codes quickly and effectively, the framework is trained in two stages: the first stage learns the video dynamics, and the second learns a compact code from the temporal variation captured in the first stage. The proposed method is evaluated on the UCF101 dataset and compared with other hashing methods. The results show that our approach improves on the performance of existing methods.
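A rough PyTorch sketch of the kind of model the abstract describes follows: per-frame CNN features fed to an LSTM, with a hashing head whose tanh output is binarised into a compact code for Hamming-distance retrieval. The layer sizes, the sign-based binarisation, and the stage comments are assumptions; the paper's exact architecture and training losses are not given here.

```python
# Sketch of a CNN+LSTM video hashing network. Frame features are assumed
# to be pre-extracted (e.g. pooled CNN activations); sizes are illustrative.
import torch
import torch.nn as nn

class VideoHashNet(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, code_bits=64):
        super().__init__()
        # Stage 1 learns video dynamics: an LSTM over per-frame CNN features.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Stage 2 learns the compact code from the temporal summary.
        self.hash_head = nn.Sequential(nn.Linear(hidden, code_bits),
                                       nn.Tanh())

    def forward(self, frame_feats):          # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(frame_feats)
        return self.hash_head(h_n[-1])       # (B, code_bits) in (-1, 1)

    @torch.no_grad()
    def hash(self, frame_feats):
        """Binarise the relaxed code for fast Hamming-distance retrieval."""
        return self.forward(frame_feats) > 0  # boolean code bits

# Example: hash a batch of 4 videos of 16 frames each.
net = VideoHashNet()
codes = net.hash(torch.randn(4, 16, 2048))
print(codes.shape)   # torch.Size([4, 64])
```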


Author(s):  
Nobuyuki Kitamura ◽  
Hidehiko Shishido ◽  
Takuya Enomoto ◽  
Yoshinari Kameda ◽  
Jun-ichi Yamamoto ◽  
...  
Keyword(s):  
