Video Searching
Recently Published Documents

TOTAL DOCUMENTS: 28 (five years: 6)
H-INDEX: 4 (five years: 0)

Author(s): Duaa Mohammad, Inad Aljarrah, Moath Jarrah

Manual video inspection, searching, and analysis are exhausting and inefficient. This paper presents an intelligent system for searching surveillance video content using deep learning. The proposed system reduces the work needed for video searching and improves both speed and accuracy. A pre-trained VGG-16 CNN model is used for training on the dataset. In addition, key frames are extracted from the videos to save storage space, reduce the workload, and shorten execution time. The extracted key frames are processed with the Sobel edge detector and max-pooling to eliminate redundancy; this increases compactness and avoids near-duplicate frames. The system produces a text file containing each key frame's index, time of occurrence, and VGG-16 classification, which lets a human easily search for objects of interest. The VIRAT and IVY LAB datasets were used in the experiments, in which 128 classes representing objects of interest for surveillance systems were identified; users can also define other classes and apply the same methodology. Experiments and evaluation showed that the proposed system outperforms existing methods by an order of magnitude: it achieved the best results in speed while providing high classification accuracy.
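A minimal sketch of the key-frame pipeline described above, assuming OpenCV is available: frames are read, run through the Sobel edge detector, max-pooled, and kept only when the pooled edge signature differs enough from the previously kept frame. The pooling size and difference threshold are illustrative assumptions, not values from the paper.

```python
# Key-frame extraction sketch: Sobel edges + max-pooling as a redundancy filter.
import cv2
import numpy as np

def pooled_edge_signature(frame, pool=8):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)
    h, w = edges.shape
    h, w = h - h % pool, w - w % pool            # crop to a multiple of the pool size
    blocks = edges[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))               # max-pooling over pool x pool blocks

def extract_key_frames(path, diff_thresh=0.25):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    key_frames, last_sig, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sig = pooled_edge_signature(frame)
        # Keep the frame only if its pooled edge map changed enough (redundancy filter).
        if last_sig is None or np.mean(np.abs(sig - last_sig)) / (sig.max() + 1e-6) > diff_thresh:
            key_frames.append((idx, idx / fps, frame))   # (frame index, time in seconds, image)
            last_sig = sig
        idx += 1
    cap.release()
    return key_frames
```

Each kept triple maps directly onto the text file the paper describes: the frame index, its time of occurrence, and (after classification) the VGG-16 label.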


2021, Vol. 58 (2), pp. 4574-4586
Author(s): Aniek Juliarini, Jamila Lestyowati

One of the activities in knowledge management is the production of tacit-knowledge videos. This study aims to: (1) analyze the process of producing tacit-knowledge videos; (2) analyze the obstacles faced in making them; and (3) find solutions to those obstacles. The method used is action research, which the authors conducted themselves and then analyzed in a descriptive qualitative manner. The study uses primary and secondary data in the form of the processes and experiences of the researchers, a literature study, and video searching on the Kemenkeu Learning Center (KLC). The results show that video production passes through a series of stages: pre-production, production, post-production, evaluation, and uploading to KLC. The main obstacles are the absence of a clear deadline for completing the quality-control stage and the limited video-production skills of the Widyaiswara (training instructors). The results of this action research can be replicated by others conducting similar studies. Another important finding is the need for employees with specialized expertise in video production.


Author(s): Jaimon Jacob, M. Sudheep Elayidom, V. P. Devassia

Videos are often used to communicate ideas, concepts, experiences, and situations, thanks to the significant advances made in video communication technology, and social media platforms have accelerated video usage further. At present, videos are identified using metadata such as the title, description, and thumbnail. In many situations, however, a searcher needs only a short clip on a specific topic from a long video. This paper proposes a novel methodology that analyzes video content and uses video storytelling and indexing techniques to retrieve the intended clip from a long-duration video. The video storytelling technique analyzes the video content and produces a textual description of the video. This description is then indexed using the wormhole algorithm, guaranteeing that a keyword of fixed length L can be found in minimal worst-case time. A video searching algorithm can use this index to retrieve the relevant part of the video based on the frequency of the query keywords in the index. Instead of downloading or transferring a whole video, the user obtains only the clip that is actually needed, which considerably reduces the network cost associated with video transfer.
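The wormhole index itself is specific to the paper and is not reproduced here; as a hedged illustration only, the sketch below builds a plain inverted index over hypothetical (start_sec, end_sec, text) caption triples and ranks clips by query-word frequency, which is the retrieval behavior the abstract describes.

```python
# Stand-in for the indexing step: a plain inverted index over storytelling captions.
from collections import Counter, defaultdict

def build_index(captions):
    """captions: list of hypothetical (start_sec, end_sec, text) triples."""
    index = defaultdict(Counter)             # word -> {clip id -> frequency}
    for clip_id, (_, _, text) in enumerate(captions):
        for word in text.lower().split():
            index[word][clip_id] += 1
    return index

def search(index, captions, query):
    """Rank clips by total frequency of the query words; return time ranges."""
    scores = Counter()
    for word in query.lower().split():
        scores.update(index.get(word, Counter()))
    return [(captions[cid][0], captions[cid][1], score)
            for cid, score in scores.most_common()]

captions = [(0, 40, "a man opens the car door"),
            (40, 90, "the man drives the car through traffic")]
idx = build_index(captions)
print(search(idx, captions, "car traffic"))   # -> [(40, 90, 2), (0, 40, 1)]
```

The returned (start, end) pair is what lets a client download only the matching clip instead of the whole video.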


2020, Vol. 10 (4), pp. 39-48
Author(s): Sadia Anayat, Arfa Sikandar, Sheeza Abdul Rasheed, Saher Butt

Symmetry, 2020, Vol. 12 (6), p. 992
Author(s): Akshay Aggarwal, Aniruddha Chauhan, Deepika Kumar, Mamta Mittal, Sudipta Roy, ...

Traditionally, searching for videos on popular streaming sites like YouTube relies on the keywords, titles, and descriptions already tagged along with the video. The video content itself is not used to answer the user's query, because encoding the events in a video and comparing them to the search query is difficult. One solution is to encode the events in a video and compare them to the query in the same space. One way to encode a video's meaning is video captioning: the captioned events can be compared to the user's query, yielding an effective search space for the videos. There have been many developments over the past few years in video-caption generators and sentence embeddings. In this paper, we exploit an end-to-end video captioning model together with various sentence embedding techniques, which collectively form the proposed video-searching method. The YouCook2 dataset was used for the experiments. Seven sentence embedding techniques were evaluated, of which the Universal Sentence Encoder outperformed the other six, with a median percentile score of 99.51. This method of searching, when integrated with traditional methods, can help improve the quality of search results.
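A hedged sketch of the core idea, comparing captions and query in one embedding space: the paper's best performer was the Universal Sentence Encoder, but this example swaps in the sentence-transformers library (the model name is an assumption) to stay short and self-contained, ranking captions by cosine similarity to the query.

```python
# Caption/query matching in a shared sentence-embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model, not the paper's USE

captions = ["pour the oil into the pan",
            "whisk the eggs with salt",
            "slice the onions thinly"]
cap_vecs = model.encode(captions, normalize_embeddings=True)

def search(query, top_k=2):
    q = model.encode([query], normalize_embeddings=True)[0]
    sims = cap_vecs @ q                            # cosine similarity (unit vectors)
    best = np.argsort(-sims)[:top_k]
    return [(captions[i], float(sims[i])) for i in best]

print(search("beat eggs"))   # the whisking caption should rank first
```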


2018, Vol. 8 (10), pp. 1735
Author(s): DaYou Jiang, Jongweon Kim

This work presents a novel shot boundary detection (SBD) method based on the place-centric deep network (PlaceNet), with the aim of using video shots and image queries for video searching (VS) and fingerprint detection. The SBD method has three stages. In the first stage, Local Binary Pattern-Singular Value Decomposition (LBP-SVD) features are used for candidate shot-boundary selection. In the second stage, PlaceNet selects shot boundaries by semantic label. In the third stage, the Scale-Invariant Feature Transform (SIFT) descriptor eliminates falsely detected boundaries. Experimental results show that the SBD method is effective on a series of SBD datasets. In addition, video searching experiments were conducted using a single query image instead of a video sequence; the results under several image transformations, using shot fingerprints, show good precision.
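A sketch of the first stage only, under stated assumptions: candidate boundaries are flagged where the LBP histogram distance between consecutive frames spikes. The SVD step of LBP-SVD and the later PlaceNet and SIFT filtering stages are omitted, and the threshold is illustrative.

```python
# Candidate shot-boundary selection from LBP histogram changes.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, points=8, radius=1):
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def candidate_boundaries(path, thresh=0.35):
    cap = cv2.VideoCapture(path)
    candidates, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = lbp_hist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if prev_hist is not None:
            # Chi-square-style distance between consecutive LBP histograms.
            d = 0.5 * np.sum((hist - prev_hist) ** 2 / (hist + prev_hist + 1e-9))
            if d > thresh:
                candidates.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return candidates
```

In the paper's pipeline, these candidates would then be confirmed by PlaceNet's semantic labels and pruned by SIFT matching.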


IJARCCE, 2017, Vol. 6 (3), pp. 477-480
Author(s): Prof. Aparna S. Kalaskar, Kaustubh Karanjkar, Harish Gwalani, Naman Jain, Shridhar Pawar

2017, Vol. 2017, pp. 1-11
Author(s): Yingsheng Ye, Xingming Zhang, Wing W. Y. Ng

As surveillance infrastructure grows, surveillance IP cameras multiply rapidly, flooding the Internet of Things (IoT) with countless surveillance frames and increasing the need for person re-identification (Re-ID) in video searching for surveillance and forensics. In real scenarios, the performance of current Re-ID methods suffers under pose and viewpoint variations, because feature extraction includes background pixels and because a fixed feature-selection strategy cannot adapt to those variations. To deal with pose and viewpoint variations, we propose the color distribution pattern metric (CDPM) method, which uses a color distribution pattern (CDP) for feature representation and an SVM for classification. Unlike other methods, CDP does not extract features over a fixed number of dense blocks, so it is unaffected by varying pedestrian image resolutions and resizing distortion. Moreover, it provides more precise features with less background influence across different body types and under severe pose and viewpoint variations. Experimental results show that the CDPM method achieves state-of-the-art performance on both the 3DPeS dataset and the ImageLab Pedestrian Recognition dataset, with 68.8% and 79.8% rank-1 accuracy, respectively, under the single-shot experimental setting.
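The exact CDP feature is defined in the paper and is not reproduced here; the sketch below only illustrates the general shape of such a pipeline, assuming a generic per-stripe HSV color histogram as the pedestrian descriptor and a linear SVM classifier. The stripe count, bin count, and toy data are hypothetical.

```python
# Generic color-descriptor + SVM Re-ID pipeline (not the paper's CDP feature).
import cv2
import numpy as np
from sklearn.svm import SVC

def color_descriptor(image_bgr, stripes=6, bins=16):
    """Stack per-stripe HSV hue/saturation histograms into one feature vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h = hsv.shape[0]
    feats = []
    for i in range(stripes):
        stripe = hsv[i * h // stripes:(i + 1) * h // stripes]
        for channel, upper in ((0, 180), (1, 256)):   # hue, saturation
            hist = cv2.calcHist([stripe], [channel], None, [bins], [0, upper])
            feats.append(cv2.normalize(hist, None).flatten())
    return np.concatenate(feats)

# Hypothetical stand-in data: random "pedestrian crops" with two identities.
rng = np.random.default_rng(0)
gallery = [rng.integers(0, 256, (128, 48, 3), dtype=np.uint8) for _ in range(8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

X = np.array([color_descriptor(img) for img in gallery])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict([color_descriptor(gallery[5])]))   # predicted identity for a probe
```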

