Visual Saliency via Multiscale Analysis in Frequency Domain and Its Applications to Ship Detection in Optical Satellite Images

2022, Vol 15
Author(s): Ying Yu, Jun Qian, Qinglong Wu

This article proposes a bottom-up visual saliency model that uses the wavelet transform to conduct multiscale analysis and computation in the frequency domain. First, we compute multiscale magnitude spectra by performing a wavelet transform to decompose the magnitude spectrum of the discrete cosine coefficients of an input image. Next, we obtain multiple saliency maps at different spatial scales through an inverse transformation from the frequency domain to the spatial domain, which utilizes the discrete cosine magnitude spectra after multiscale wavelet decomposition. Then, we employ an evaluation function to automatically select the two best multiscale saliency maps. A final saliency map is generated via an adaptive integration of the two selected multiscale saliency maps. The proposed model is fast, efficient, and can simultaneously detect salient regions or objects of different sizes. It outperforms state-of-the-art bottom-up saliency approaches in experiments on psychophysical consistency, eye-fixation prediction, and saliency detection for natural images. In addition, the proposed model is applied to automatic ship detection in optical satellite images. Ship-detection tests on visible-spectrum optical satellite data not only demonstrate the model's effectiveness in detecting both small and large salient targets but also verify its robustness against various sea-background disturbances.
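
A minimal sketch of this kind of frequency-domain pipeline is given below, assuming a 2-D DCT from SciPy, PyWavelets for the multiscale decomposition, and Gaussian post-smoothing. The abstract's evaluation function for selecting the two best maps and the adaptive fusion step are not reproduced here, and the specific wavelet, the number of levels, and the choice of keeping only the approximation band are illustrative assumptions.

import numpy as np
import cv2
import pywt
from scipy.fftpack import dct, idct

def dct2(x):
    return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(x):
    return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def multiscale_saliency_maps(gray, wavelet='haar', levels=3, sigma=5):
    """Return one saliency map per wavelet scale of the DCT magnitude spectrum."""
    coeffs = dct2(gray.astype(np.float64))
    sign, magnitude = np.sign(coeffs), np.abs(coeffs)
    maps = []
    for level in range(1, levels + 1):
        dec = pywt.wavedec2(magnitude, wavelet, level=level)
        # Keep only the approximation band: a low-pass version of the magnitude spectrum
        smooth = pywt.waverec2(
            [dec[0]] + [tuple(np.zeros_like(c) for c in d) for d in dec[1:]], wavelet)
        smooth = smooth[:magnitude.shape[0], :magnitude.shape[1]]
        # Back to the spatial domain with the original signs, then square and smooth
        sal = cv2.GaussianBlur(idct2(sign * smooth) ** 2, (0, 0), sigma)
        maps.append(sal / (sal.max() + 1e-12))
    return maps

In the described model, the two returned maps that score best under the evaluation function would then be adaptively integrated into the final saliency map.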

2018, Vol 10 (12), pp. 1863
Author(s): Zhenhui Sun, Qingyan Meng, Weifeng Zhai

Built-up area extraction from satellite images is important for urban planning and land-use analysis; however, it remains a challenging task with optical satellite images, where existing methods are often limited by complex backgrounds. In this paper, an improved boosting-learning saliency method for built-up area extraction from Sentinel-2 images is proposed. First, the optimal band combination for extracting built-up areas from Sentinel-2 data is determined. Then, a coarse saliency map is generated based on multiple cues and the geodesic weighted Bayesian (GWB) model; this map provides training samples for a strong model, from which a refined saliency map is obtained. Cuboid cellular automata (CCA) are further used to integrate multiscale saliency maps and improve the refined saliency map. The coarse and refined saliency maps are then synthesized into a final saliency map. Finally, the fractional-order Darwinian particle swarm optimization (FODPSO) algorithm is employed to extract the built-up areas from the final saliency result. Cities in five different types of ecosystems in China (desert, coastal, riverside, valley, and plain) are used to evaluate the proposed method. Analysis of the results and comparisons with other methods suggest that the proposed method is robust and accurate.
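
The last two steps of this pipeline (synthesizing the coarse and refined maps and extracting the built-up mask) can be sketched as follows; the equal-weight blend and the use of Otsu thresholding as a stand-in for FODPSO are assumptions, not the paper's formulation.

import numpy as np
import cv2

def fuse_and_extract(coarse, refined, alpha=0.5):
    """Blend coarse and refined saliency maps, then threshold to a built-up mask."""
    coarse = cv2.normalize(coarse.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    refined = cv2.normalize(refined.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    final = alpha * coarse + (1 - alpha) * refined          # synthesized saliency map
    final_u8 = (final * 255).astype(np.uint8)
    # Otsu thresholding stands in here for the paper's FODPSO-based extraction
    _, mask = cv2.threshold(final_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return final, mask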


2014, Vol 602-605, pp. 2238-2241
Author(s): Jian Kun Chen, Zhi Wei Kang

In this paper, we present a new visual saliency model based on the wavelet transform and simple priors. First, we create multiscale feature maps in the wavelet domain to represent different features, from edges to textures. We then modulate the local saliency at each location by its global saliency and combine the two to generate a new saliency map. Finally, the final saliency map is produced by combining this new saliency map with two simple priors (a color prior and a location prior). Experimental evaluation shows that the proposed model achieves state-of-the-art results and outperforms the other models on a publicly available benchmark dataset.
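
A rough illustration of combining multiscale wavelet feature maps with a color prior and a location prior is sketched below, assuming an 8-bit BGR input converted to CIELab; the exact local/global saliency modulation and the precise form of both priors are assumptions.

import numpy as np
import cv2
import pywt

def wavelet_prior_saliency(bgr, wavelet='db2', levels=4):
    """bgr: 8-bit BGR image. Returns a saliency map weighted by simple priors."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    h, w = lab.shape[:2]
    sal = np.zeros((h, w), np.float64)

    for ch in range(3):                         # per-channel multiscale feature maps
        dec = pywt.wavedec2(lab[:, :, ch], wavelet, level=levels)
        for cH, cV, cD in dec[1:]:
            detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)   # edge-to-texture energy
            sal += cv2.resize(detail, (w, h), interpolation=cv2.INTER_LINEAR)

    sal = cv2.GaussianBlur(sal, (0, 0), 0.03 * min(h, w))
    sal /= sal.max() + 1e-12

    # Location prior: centre-biased Gaussian (an assumed form)
    yy, xx = np.mgrid[0:h, 0:w]
    loc = np.exp(-(((xx - w / 2) ** 2) / (2 * (0.3 * w) ** 2) +
                   ((yy - h / 2) ** 2) / (2 * (0.3 * h) ** 2)))

    # Color prior: emphasize chromatically distinctive pixels (a/b distance from the mean)
    a, b = lab[:, :, 1], lab[:, :, 2]
    col = np.sqrt((a - a.mean()) ** 2 + (b - b.mean()) ** 2)
    col /= col.max() + 1e-12

    return sal * loc * col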


2019, Vol 11 (18), pp. 2173
Author(s): Jinlei Ma, Zhiqiang Zhou, Bo Wang, Hua Zong, Fei Wu

To accurately detect ships of arbitrary orientation in optical remote sensing images, we propose a two-stage CNN-based ship-detection method built on ship-center and orientation prediction. A center-region prediction network and a ship-orientation classification network are constructed to generate rotated region proposals, from which rotated bounding boxes are predicted to locate arbitrarily oriented ships more accurately. The two networks share the same deconvolutional layers and perform semantic segmentation to predict the center regions and orientations of ships, respectively. The predicted center points help determine more confident locations for the region proposals, while the orientation information enables a more reliable predetermination of the rotated region proposals. Classification and regression are then performed for the final ship localization. Compared with typical object-detection methods for natural images and other ship-detection methods, our method more accurately detects multiple ships in high-resolution remote sensing images, irrespective of ship orientation and even when ships are docked very close together. Experiments demonstrate a promising improvement in ship-detection performance.
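
As a schematic example of the proposal stage, the sketch below turns predicted ship centers and quantized orientation bins into rotated region proposals; the bin count, the per-center size hypotheses, and the use of OpenCV rotated rectangles are assumptions rather than the paper's exact formulation.

import numpy as np
import cv2

def rotated_proposals(centers, angle_bins, num_bins=18,
                      sizes=((64, 16), (128, 32), (256, 48))):
    """centers: (N, 2) array of (x, y); angle_bins: (N,) predicted orientation classes."""
    proposals = []
    for (cx, cy), k in zip(centers, angle_bins):
        angle = 180.0 * k / num_bins          # orientation bin index -> angle in degrees
        for w, h in sizes:                    # a few length/width hypotheses per center
            rect = ((float(cx), float(cy)), (float(w), float(h)), angle)
            corners = cv2.boxPoints(rect)     # 4x2 corner coordinates of the rotated box
            proposals.append((rect, corners))
    return proposals

# Example: one predicted center at (300, 220) assigned to orientation bin 5
props = rotated_proposals(np.array([[300, 220]]), np.array([5]))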


IEEE Access, 2018, Vol 6, pp. 71122-71131
Author(s): Ye Yu, Hua Ai, Xiaojun He, Shuhai Yu, Xing Zhong, ...

2014, Vol 11 (3), pp. 641-645
Author(s): Guang Yang, Bo Li, Shufan Ji, Feng Gao, Qizhi Xu
