Ship Detection From Optical Satellite Images Based on Sea Surface Analysis

2014 ◽  
Vol 11 (3) ◽  
pp. 641-645 ◽  
Author(s):  
Guang Yang ◽  
Bo Li ◽  
Shufan Ji ◽  
Feng Gao ◽  
Qizhi Xu
2019 ◽  
Vol 11 (18) ◽  
pp. 2173 ◽  
Author(s):  
Jinlei Ma ◽  
Zhiqiang Zhou ◽  
Bo Wang ◽  
Hua Zong ◽  
Fei Wu

To accurately detect ships of arbitrary orientation in optical remote sensing images, we propose a two-stage CNN-based ship-detection method built on ship center and orientation prediction. A center-region prediction network and a ship-orientation classification network are constructed to generate rotated region proposals, from which rotated bounding boxes are predicted to locate arbitrarily oriented ships more accurately. The two networks share the same deconvolutional layers and perform semantic segmentation to predict the center regions and orientations of ships, respectively. They provide the potential center points of ships, which help determine more confident locations for the region proposals, as well as ship orientation information, which supports a more reliable predetermination of rotated region proposals. Classification and regression are then performed for the final ship localization. Compared with typical object-detection methods for natural images and with other ship-detection methods, our method more accurately detects multiple ships in high-resolution remote sensing images, regardless of ship orientation and even when ships are docked very close together. Experiments demonstrate a promising improvement in ship-detection performance.
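The core geometric step described above, turning a predicted center, size, and orientation into a rotated bounding box, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the convention of a counterclockwise angle in radians are assumptions.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, theta):
    """Return the 4 corners of a box centered at (cx, cy) with
    width w and height h, rotated by theta radians counterclockwise."""
    # Corner offsets of the axis-aligned box, relative to the center.
    offsets = np.array([[-w / 2, -h / 2],
                        [ w / 2, -h / 2],
                        [ w / 2,  h / 2],
                        [-w / 2,  h / 2]])
    # Standard 2-D rotation matrix.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Rotate each offset, then translate to the predicted center.
    return offsets @ rot.T + np.array([cx, cy])
```

For example, a 2x1 box at the origin rotated by 90 degrees maps its top-left corner (-1, -0.5) to (0.5, -1), which is why a rotated box can fit a diagonally docked ship far more tightly than an axis-aligned one.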


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 71122-71131 ◽  
Author(s):  
Ye Yu ◽  
Hua Ai ◽  
Xiaojun He ◽  
Shuhai Yu ◽  
Xing Zhong ◽  
...  

2020 ◽  
Vol 12 (24) ◽  
pp. 4192 ◽
Author(s):  
Gang Tang ◽  
Shibo Liu ◽  
Iwao Fujino ◽  
Christophe Claramunt ◽  
Yide Wang ◽  
...  

Ship detection from high-resolution optical satellite images remains an important task that deserves optimal solutions. This paper introduces a novel high-resolution image network-based approach built on the preselection of regions of interest (RoIs). The preselection network first identifies and extracts regions of interest from the input images. To match ship candidates efficiently, the principle of our approach is to distinguish suspected areas from the rest of the image based on hue, saturation, value (HSV) differences between ships and the background. The whole approach is evaluated on a large ship dataset consisting of Google Earth images and the HRSC2016 dataset. The experiments show that the H-YOLO network, using the same weights trained on a set of remote sensing images, achieves a 19.01% higher recognition rate and a 16.19% higher accuracy than applying the you only look once (YOLO) network alone. After image preprocessing, the intersection over union (IoU) is also greatly improved.
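The HSV-difference preselection idea above can be sketched in a few lines: convert pixels to HSV, measure their distance from an estimated sea-background color, and keep pixels that differ strongly as ship candidates. This is a minimal stand-in for the paper's RoI preselection, assuming a known background color and an illustrative threshold; the function name and distance metric are not from the paper.

```python
import colorsys

def hsv_candidate_mask(image_rgb, background_rgb, thresh=0.2):
    """Flag pixels whose HSV distance from an estimated sea-background
    color exceeds a threshold. image_rgb is a nested list of (r, g, b)
    tuples in [0, 1]; returns a nested list of booleans."""
    bg_h, bg_s, bg_v = colorsys.rgb_to_hsv(*background_rgb)
    mask = []
    for row in image_rgb:
        mask_row = []
        for pixel in row:
            h, s, v = colorsys.rgb_to_hsv(*pixel)
            # Hue is circular: take the shorter way around the color wheel.
            dh = min(abs(h - bg_h), 1.0 - abs(h - bg_h))
            dist = dh + abs(s - bg_s) + abs(v - bg_v)
            mask_row.append(dist > thresh)
        mask.append(mask_row)
    return mask
```

A gray ship pixel such as (0.8, 0.8, 0.8) differs from sea blue (0.1, 0.3, 0.6) in all three HSV channels, so it is flagged while the sea background is suppressed; only the flagged regions would then be passed to the detection network.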


2022 ◽  
Vol 15 ◽  
Author(s):  
Ying Yu ◽  
Jun Qian ◽  
Qinglong Wu

This article proposes a bottom-up visual saliency model that uses the wavelet transform to conduct multiscale analysis and computation in the frequency domain. First, we compute the multiscale magnitude spectra by performing a wavelet transform to decompose the magnitude spectrum of the discrete cosine coefficients of an input image. Next, we obtain multiple saliency maps of different spatial scales through an inverse transformation from the frequency domain to the spatial domain, which utilizes the discrete cosine magnitude spectra after multiscale wavelet decomposition. Then, we employ an evaluation function to automatically select the two best multiscale saliency maps. A final saliency map is generated via an adaptive integration of the two selected maps. The proposed model is fast, efficient, and can simultaneously detect salient regions or objects of different sizes. It outperforms state-of-the-art bottom-up saliency approaches in experiments on psychophysical consistency, eye-fixation prediction, and saliency detection for natural images. In addition, the proposed model is applied to automatic ship detection in optical satellite images. Ship-detection tests on visible-spectrum optical satellite data not only demonstrate the saliency model's effectiveness in detecting both small and large salient targets but also verify its robustness against various sea-background disturbances.
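The frequency-domain saliency idea above, manipulating an image's magnitude spectrum so that spectrally unusual regions stand out after inverse transformation, can be illustrated with a much-simplified sketch. The version below keeps only the Fourier phase (a classic phase-spectrum saliency trick) as a stand-in for the paper's DCT-plus-wavelet pipeline; the smoothing kernel and all names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def spectral_saliency(image, sigma=3):
    """Simplified frequency-domain saliency: flatten the magnitude
    spectrum (keep phase only) and reconstruct, so regions that are
    spectrally 'unusual' dominate the result. FFT is used here as a
    stand-in for the paper's DCT + wavelet decomposition."""
    spectrum = np.fft.fft2(image)
    # Set every magnitude to ~1, preserving only the phase spectrum.
    phase_only = spectrum / (np.abs(spectrum) + 1e-8)
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2
    # Light smoothing with a separable box filter (Gaussian in the
    # usual formulations) to turn point responses into regions.
    kernel = np.ones(sigma) / sigma
    recon = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, recon)
    recon = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, recon)
    return recon / recon.max()
```

On a flat sea with one small bright block, the reconstruction concentrates its energy on the block, which is exactly the behavior the abstract exploits for detecting ships against a uniform sea background.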


Author(s):  
Sui Haigang ◽  
Song Zhina

Reliable ship detection in optical satellite images has wide applications in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically inspired visual features: a combined visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses directly on the suspected ship areas, avoiding a separate sea-land segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparse distribution of ships across images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms remain, such as small waves and small ribbon clouds, so simple shape and texture analyses are adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show that the proposed method is insensitive to waves, clouds, illumination, and ship size.
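The LBP texture signature mentioned above, the per-chip feature that gets fed to the SVM, can be sketched concisely. The snippet below computes basic 8-neighbor LBP codes and their normalized 256-bin histogram; it is an illustrative sketch of standard LBP, not the authors' CVLBP implementation, and the function name is assumed.

```python
import numpy as np

def lbp_histogram(patch):
    """Compute basic 8-neighbor local binary pattern codes for an
    image patch and return their normalized 256-bin histogram, the
    kind of texture signature an SVM can classify as ship / non-ship."""
    patch = np.asarray(patch, dtype=float)
    center = patch[1:-1, 1:-1]
    # Eight neighbor offsets, enumerated clockwise from the top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        # Shifted view of the patch aligned with the center pixels.
        neighbor = patch[1 + dy: patch.shape[0] - 1 + dy,
                         1 + dx: patch.shape[1] - 1 + dx]
        # Set this bit wherever the neighbor is at least the center.
        codes += (neighbor >= center).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

A perfectly uniform chip (flat sea) puts all its mass in code 255, while a chip containing ship edges spreads mass across many codes, which is the separation the SVM stage relies on.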


2015 ◽  
Vol 12 (7) ◽  
pp. 1451-1455 ◽  
Author(s):  
Shengxiang Qi ◽  
Jie Ma ◽  
Jin Lin ◽  
Yansheng Li ◽  
Jinwen Tian
