video saliency
Recently Published Documents

TOTAL DOCUMENTS: 154 (68 in the last five years)
H-INDEX: 16 (7 over the last five years)

2021
Author(s): Jiazhong Chen, Jie Chen, Yuan Dong, Dakai Ren, Shiqi Zhang, ...

2021
Author(s): Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, Masahiro Murakawa, Ryosuke Nakamura

Author(s): G. Bellitto, F. Proietto Salanitri, S. Palazzo, F. Rundo, D. Giordano, ...

Abstract: In this work, we propose a 3D fully convolutional architecture for video saliency prediction that applies hierarchical supervision to intermediate maps (referred to as conspicuity maps) generated from features extracted at different abstraction levels. We extend this base hierarchical learning mechanism with two techniques: one for domain adaptation and one for domain-specific learning. For the former, we encourage the model to learn hierarchical general features in an unsupervised manner, using gradient reversal at multiple scales, to improve generalization on datasets for which no annotations are available during training. For domain specialization, we apply domain-specific operations (namely priors, smoothing and batch normalization) that specialize the learned features to individual datasets in order to maximize performance. Our experiments show that the proposed model achieves state-of-the-art accuracy in supervised saliency prediction. When the base hierarchical model is extended with the domain-specific modules, performance improves further, surpassing state-of-the-art models on three of the five metrics of the DHF1K benchmark and placing second on the other two. When we instead test it in an unsupervised domain adaptation setting, enabling the hierarchical gradient reversal layers, we obtain performance comparable to the supervised state of the art. Source code, trained models and example outputs are publicly available at https://github.com/perceivelab/hd2s.
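The domain-adaptation component described above rests on gradient reversal, a standard technique for learning domain-invariant features; the authors' actual implementation is at the GitHub link above. As a minimal sketch, assuming PyTorch and hypothetical module names (`encoder`, `domain_head`) and a hypothetical scaling factor `lambd`, a gradient reversal layer can be written like this:

```python
import torch
from torch.autograd import Function


class GradientReversal(Function):
    """Identity in the forward pass; scales gradients by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate (and scale) the gradient flowing back to the feature
        # extractor, so it learns features that confuse the domain classifier.
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradientReversal.apply(x, lambd)


# Illustrative usage (encoder and domain_head are hypothetical modules):
#   feats = encoder(video_clip)
#   domain_logits = domain_head(grad_reverse(feats, lambd=0.5))
# Minimizing the domain loss then pushes the encoder toward
# domain-invariant features, while the saliency head trains as usual.
```

In the paper's setting this reversal is applied hierarchically, at multiple feature scales, rather than at a single point as sketched here.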


2021
Vol 576, pp. 819-830
Author(s): Muwei Jian, Jiaojin Wang, Hui Yu, Gai-Ge Wang

Sensors, 2021, Vol 21 (19), pp. 6429
Author(s): Liqun Lin, Jing Yang, Zheng Wang, Liping Zhou, Weiling Chen, ...

Video coding reduces the bitrate of a video stream, lowering the storage and transmission bandwidth that video services require. However, compressed video signals may suffer perceivable information loss, especially when the video is over-compressed. In such cases, viewers observe visually annoying artifacts, namely Perceivable Encoding Artifacts (PEAs), which degrade their perceived video quality. To monitor and measure these PEAs (including blurring, blocking, ringing and color bleeding), we propose an objective, no-reference video quality metric named Saliency-Aware Artifact Measurement (SAAM). The SAAM metric first applies video saliency detection to extract regions of interest and then splits these regions into image patches. For each patch, a data-driven model estimates the intensity of each PEA. Finally, these intensities are fused into an overall quality score using Support Vector Regression (SVR). In our experiments, we compared the SAAM metric with other popular video quality metrics on four publicly available databases: LIVE, CSIQ, IVP and FERIT-RTRK. The results show promising quality prediction performance for the SAAM metric, superior to most popular compressed-video quality evaluation models.
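The final fusion step maps the per-patch PEA intensities to a single quality score with SVR. Below is a minimal sketch using scikit-learn; the four-feature layout (one intensity per artifact type, averaged over salient patches), the RBF kernel, and the sample scores are all illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: one row per video, holding the intensities
# of the four PEAs (blurring, blocking, ringing, color bleeding) averaged
# over salient patches; y holds subjective quality scores (e.g., MOS).
X_train = np.array([[0.12, 0.30, 0.05, 0.08],
                    [0.45, 0.10, 0.22, 0.31],
                    [0.02, 0.04, 0.01, 0.03]])
y_train = np.array([55.0, 38.0, 82.0])

# Standardize features, then fit an RBF-kernel SVR as the fusion model.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X_train, y_train)

# Predict an overall quality score for a new video's PEA intensities.
X_test = np.array([[0.20, 0.15, 0.10, 0.12]])
print(model.predict(X_test))
```

In practice the regressor would be trained on a full subjective-quality database (e.g., one of the four listed above) rather than the toy rows shown here.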


2021
Author(s): Xiangwei Lu, Muwei Jian, Rui Wang, Zhichao Yun, Peiguang Lin, ...

2021
Author(s): Jiazhong Chen, Zongyi Li, Yi Jin, Dakai Ren, Hefei Ling

2021
Author(s): Mert Cokelek, Nevrez Imamoglu, Cagri Ozcinar, Erkut Erdem, Aykut Erdem
