shot segmentation
Recently Published Documents


TOTAL DOCUMENTS: 119 (five years: 46)

H-INDEX: 8 (five years: 3)

Author(s): D. Minola Davids, C. Seldev Christopher

The visual data captured by single-camera and multi-view surveillance camera networks grows exponentially every day. The central task in video summarization is identifying the important shots that faithfully represent the original video. For efficient summarization of surveillance video, this paper proposes an optimization algorithm, LFOB-COA. The proposed method comprises five steps: data collection, pre-processing, deep feature extraction (FE), shot segmentation with JSFCM, and classification using a Rectified Linear Unit-activated BLSTM together with LFOB-COA; a post-processing step is then applied. To demonstrate the method's effectiveness, the results are compared with existing methods.
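As a rough sketch of the classification step only, the snippet below runs per-shot deep features through a ReLU-activated bidirectional LSTM; the feature dimension, hidden size, and binary important/non-important shot labels are assumptions, and the JSFCM shot segmentation and LFOB-COA optimization stages are not reproduced here.

```python
# Hypothetical sketch of a ReLU-activated BLSTM classifier over per-shot deep
# features; sizes and the two-class output are illustrative assumptions.
import torch
import torch.nn as nn

class ShotBLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256, num_classes=2):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.relu = nn.ReLU()                            # ReLU on BLSTM outputs
        self.head = nn.Linear(2 * hidden, num_classes)   # important vs. non-important shot

    def forward(self, shot_feats):
        # shot_feats: (batch, num_shots, feat_dim), one deep feature vector per shot
        out, _ = self.blstm(shot_feats)
        return self.head(self.relu(out))                 # per-shot class scores

feats = torch.randn(1, 12, 2048)                         # e.g. 12 shots from one video
scores = ShotBLSTMClassifier()(feats)                    # -> (1, 12, 2)
```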


2021, Vol 12 (5), pp. 1-19
Author(s): Yuan Cheng, Yuchao Yang, Hai-Bao Chen, Ngai Wong, Hao Yu

Real-time segmentation and understanding of driving scenes are crucial in autonomous driving. Traditional pixel-wise approaches extract scene information by segmenting every pixel in a frame and are therefore inefficient and slow. Proposal-wise approaches learn only from proposed object candidates but still require multiple passes through expensive proposal methods. Instead, this work presents a fast single-shot segmentation strategy for video scene understanding. The proposed network, called S3-Net, quickly locates and segments target sub-scenes while extracting attention-aware time-series sub-scene features (ats-features) as inputs to an attention-aware spatio-temporal model (ASM). Utilizing tensorization and quantization techniques, S3-Net is designed to be lightweight for edge computing. Experimental results on the CityScapes, UCF11, HMDB51, and MOMENTS datasets demonstrate that S3-Net achieves an 8.1% accuracy improvement over the 3D-CNN-based approach on UCF11, a 6.9× storage reduction, and an inference speed of 22.8 FPS on CityScapes with a GTX 1080 Ti GPU.
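As a loose illustration of the attention-aware time-series idea, rather than the authors' S3-Net or ASM, the sketch below attention-pools a sequence of sub-scene feature vectors and then applies dynamic int8 quantization in the spirit of the lightweight edge-deployment step; all dimensions are assumptions.

```python
# Minimal sketch (not S3-Net itself): attention-weighted pooling over a time
# series of sub-scene feature vectors, followed by dynamic quantization.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)              # one attention score per time step

    def forward(self, feats):
        # feats: (batch, time, feat_dim) sub-scene features from the segmentation branch
        attn = torch.softmax(self.score(feats), dim=1)   # (batch, time, 1) weights over time
        return (attn * feats).sum(dim=1)                 # (batch, feat_dim) pooled feature

model = TemporalAttentionPool()
pooled = model(torch.randn(2, 16, 512))                  # 16 time steps -> (2, 512)

# Dynamic int8 quantization of the linear layers, in the spirit of the
# quantization used to keep the network lightweight for edge devices.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```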


2021
Author(s): Kaiqi Dong, Wei Yang, Zhenbo Xu, Liusheng Huang, Zhidong Yu

Information, 2021, Vol 12 (10), pp. 406
Author(s): Jingyao Li, Lianglun Cheng, Zewen Zheng, Jiahong Chen, Genping Zhao, ...

Datasets for state-of-the-art semantic segmentation models typically require per-pixel manual labeling, which is time-consuming and labor-intensive. For novel categories never seen during training, general models predict far less reliably than the recently emerged few-shot segmentation approaches. However, few-shot segmentation still faces two challenges: the inadequate exploration of the semantic information conveyed in high-level features, and the inconsistency of segmenting objects at different scales. To address these two problems, we propose a prior feature matching network (PFMNet) with two novel modules: (1) a query feature enhancement module (QFEM), which makes full use of the high-level semantic information in the support set to enhance the query features, and (2) a multi-scale feature matching module (MSFMM), which increases the matching probability of objects at multiple scales. Our method achieves a mean intersection-over-union of 61.3% for one-shot segmentation and 63.4% for five-shot segmentation, surpassing the state-of-the-art results by 0.5% and 1.5%, respectively.
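As a hedged sketch of the generic prototype-matching idea behind few-shot segmentation, not the actual QFEM/MSFMM implementation, the snippet below pools a class prototype from support features under the support mask and matches it against query features by cosine similarity at two assumed scales.

```python
# Generic few-shot matching sketch (assumed baseline, not PFMNet): masked
# average pooling of support features, then multi-scale cosine matching.
import torch
import torch.nn.functional as F

def prototype_match(support_feat, support_mask, query_feat, scales=(1.0, 0.5)):
    # support_feat, query_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}
    proto = (support_feat * support_mask).sum(dim=(2, 3)) / \
            support_mask.sum(dim=(2, 3)).clamp(min=1e-6)              # (B, C) prototype
    sims = []
    for s in scales:
        q = query_feat if s == 1.0 else F.interpolate(
            query_feat, scale_factor=s, mode='bilinear', align_corners=False)
        sim = F.cosine_similarity(q, proto[:, :, None, None], dim=1)  # (B, h, w)
        sims.append(F.interpolate(sim[:, None], size=query_feat.shape[-2:],
                                  mode='bilinear', align_corners=False))
    return torch.stack(sims).mean(dim=0)      # averaged multi-scale similarity map

sf, qf = torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.5).float()
sim_map = prototype_match(sf, mask, qf)       # (1, 1, 32, 32) foreground likelihood
```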


2021 ◽  
Author(s):  
XiaoLiu Luo ◽  
Taiping Zhang ◽  
Zhao Duan ◽  
Jin Tan

2021, pp. 102170
Author(s): Saidi Guo, Lin Xu, Cheng Feng, Huahua Xiong, Zhifan Gao, ...

2021
Author(s): Bingfeng Zhang, Jimin Xiao, Terry Qin
