Lighter and Faster Cross-Concatenated Multi-Scale Residual Block Based Network for Visual Saliency Prediction

Author(s):  
Sai Phani Kumar Malladi ◽  
Jayanta Mukhopadhyay ◽  
Chaker Larabi ◽  
Santanu Chaudhury

Author(s):  
Yubao Sun ◽  
Mengyang Zhao ◽  
Kai Hu ◽  
Shaojing Fan

Author(s):  
Marcella Cornia ◽  
Lorenzo Baraldi ◽  
Giuseppe Serra ◽  
Rita Cucchiara

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 121330-121343
Author(s):  
Alessandro Bruno ◽  
Francesco Gugliuzza ◽  
Roberto Pirrone ◽  
Edoardo Ardizzone

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yildiz Aydin ◽  
Bekir Dizdaroğlu

Degradations frequently occur in archive films, which represent a nation's historical and cultural heritage. This study addresses the problem of detecting blotches, a defect commonly encountered in archive films. A block-based blotch detection method is proposed based on a visual saliency map. The visual saliency map reveals prominent areas in an input frame and thus enables more accurate blotch detection. A simple and effective visual saliency map method is adopted in order to reduce the computational complexity of the detection phase. After the visual saliency maps of the given frames are obtained, blotch regions are estimated by considering spatiotemporal patches around the salient pixels, which are subjected to a pre-thresholding process, without requiring motion estimation. Experimental results show that the proposed block-based blotch detection method significantly reduces false alarm rates compared with the HOG-feature (Yous and Serir, 2017), LBP-feature (Yous and Serir, 2017), and region-matching (Yous and Serir, 2016) methods presented in recent years.
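The pipeline described in the abstract (saliency map, pre-thresholding, patch comparison against temporal neighbours without motion estimation) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the saliency measure here is a simple global-contrast stand-in, and the thresholds and patch size are arbitrary placeholders.

```python
import numpy as np

def simple_saliency(frame):
    # Stand-in saliency map: distance of each pixel from the frame mean
    # (a lightweight proxy for the paper's visual saliency map).
    return np.abs(frame - frame.mean())

def detect_blotches(prev_f, cur_f, next_f,
                    sal_thresh=0.3, patch=3, diff_thresh=0.4):
    """Flag pixels that are salient in the current frame AND differ strongly
    from both temporal neighbours over a small patch (no motion estimation).
    All threshold values are illustrative assumptions."""
    sal = simple_saliency(cur_f)
    sal = sal / (sal.max() + 1e-8)
    candidates = sal > sal_thresh          # pre-thresholding on saliency
    h, w = cur_f.shape
    r = patch // 2
    mask = np.zeros((h, w), dtype=bool)
    for y, x in zip(*np.nonzero(candidates)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        cur_p = cur_f[y0:y1, x0:x1]
        d_prev = np.abs(cur_p - prev_f[y0:y1, x0:x1]).mean()
        d_next = np.abs(cur_p - next_f[y0:y1, x0:x1]).mean()
        # A blotch is impulsive: present now, absent in both neighbours.
        if d_prev > diff_thresh and d_next > diff_thresh:
            mask[y, x] = True
    return mask
```

Requiring a large difference against both the previous and the next frame is what distinguishes an impulsive blotch from a moving object, which would typically match at least one neighbour.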


Author(s):  
Bo Dai ◽  
Weijing Ye ◽  
Jing Zheng ◽  
Qianyi Chai ◽  
Yiyang Yao

Author(s):  
Xiong Zhang ◽  
Congli Feng ◽  
Anhong Wang ◽  
Linlin Yang ◽  
Yawen Hao

2016 ◽  
Vol 25 (1) ◽  
pp. 013008
Author(s):  
Amin Banitalebi-Dehkordi ◽  
Eleni Nasiopoulos ◽  
Mahsa T. Pourazad ◽  
Panos Nasiopoulos

2014 ◽  
Vol 602-605 ◽  
pp. 2238-2241
Author(s):  
Jian Kun Chen ◽  
Zhi Wei Kang

In this paper, we present a new visual saliency model based on the wavelet transform and simple priors. First, we create multi-scale feature maps in the wavelet domain to represent different features, from edges to textures. Then, we compute the local saliency at each location as well as the global saliency, and combine the two into a new saliency map. Finally, the final saliency map is generated by combining this map with two simple priors (a color prior and a location prior). Experimental evaluation shows that the proposed model achieves state-of-the-art results and outperforms other models on a publicly available benchmark dataset.
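The multi-scale wavelet idea can be sketched with a hand-rolled 2-D Haar decomposition: detail-band energy at each level serves as a per-scale feature map, and a centre-biased Gaussian stands in for the location prior. This is a grayscale simplification under stated assumptions; it omits the paper's color prior and its specific local/global saliency modulation.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar transform: approximation plus three detail bands."""
    tl, bl = img[0::2, 0::2], img[1::2, 0::2]
    tr, br = img[0::2, 1::2], img[1::2, 1::2]
    a = (tl + bl + tr + br) / 4.0          # approximation (low-pass)
    h = (tl + tr - bl - br) / 4.0          # horizontal detail
    v = (tl + bl - tr - br) / 4.0          # vertical detail
    d = (tl + br - tr - bl) / 4.0          # diagonal detail
    return a, h, v, d

def wavelet_saliency(img, levels=3):
    """Sum detail-band energy across scales (edges at fine scales, textures at
    coarse ones), then modulate by a centre-bias location prior.
    Assumes image sides are divisible by 2**levels."""
    h0, w0 = img.shape
    sal = np.zeros((h0, w0))
    a = img.astype(float)
    for _ in range(levels):
        a, h, v, d = haar_level(a)
        feat = np.sqrt(h**2 + v**2 + d**2)          # per-scale feature map
        # Upsample the coarse map back to full resolution by pixel repetition.
        up = np.kron(feat, np.ones((h0 // feat.shape[0], w0 // feat.shape[1])))
        sal += up
    # Location prior: Gaussian centred on the frame.
    yy, xx = np.mgrid[0:h0, 0:w0]
    prior = np.exp(-((yy - h0 / 2)**2 + (xx - w0 / 2)**2) / (2 * (h0 / 3)**2))
    sal = sal * prior
    return sal / (sal.max() + 1e-8)
```

Because the Haar detail bands respond to local intensity transitions, a textured region accumulates energy across several scales, while smooth regions contribute nothing; the Gaussian prior then favours centrally located structure, as the location prior in the abstract suggests.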

