Thermal Infrared Small Ship Detection in Sea Clutter Based on Morphological Reconstruction and Multi-Feature Analysis

2019 ◽  
Vol 9 (18) ◽  
pp. 3786 ◽  
Author(s):  
Yongsong Li ◽  
Zhengzhou Li ◽  
Yong Zhu ◽  
Bo Li ◽  
Weiqi Xiong ◽  
...  

The existing thermal infrared (TIR) ship detection methods may suffer serious performance degradation in heavy sea clutter. To cope with this problem, a novel ship detection method based on morphological reconstruction and multi-feature analysis is proposed in this paper. Firstly, the TIR image is processed by opening- or closing-based gray-level morphological reconstruction (GMR) to smooth intricate background clutter while maintaining the intensity, shape, and contour features of ship targets. Then, considering the intensity and contrast features, a fused saliency detection strategy combining an intensity foreground saliency map (IFSM) and a brightness contrast saliency map (BCSM) is presented to highlight potential ship targets and suppress sea clutter. After that, an effective contour descriptor, namely the average eigenvalue measure of the structure tensor (STAEM), is designed to characterize candidate ship targets, and statistical shape knowledge is introduced to distinguish true ship targets from residual non-ship targets. Finally, the dual method is adopted to simultaneously detect both bright and dark ship targets in the TIR image. Extensive experiments show that the proposed method outperforms the compared state-of-the-art methods, especially on infrared images with intricate sea clutter. Moreover, the proposed method works stably for ship targets of unknown brightness and varying number, size, and shape.
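As an illustration of the gray-level morphological reconstruction step described above, the following Python sketch shows opening- and closing-by-reconstruction using scikit-image; the structuring-element radius and the input file name are assumptions for demonstration, not the paper's settings.

```python
# A minimal sketch of GMR-based background smoothing, assuming scikit-image.
import numpy as np
from skimage import io
from skimage.morphology import erosion, dilation, disk, reconstruction

def opening_by_reconstruction(img, radius=3):
    """Smooth clutter while preserving bright ship structures (GMR by dilation)."""
    selem = disk(radius)            # radius is an illustrative assumption
    seed = erosion(img, selem)      # marker: eroded image (seed <= mask)
    return reconstruction(seed, img, method='dilation')

def closing_by_reconstruction(img, radius=3):
    """Dual operation for dark targets on a bright background (GMR by erosion)."""
    selem = disk(radius)
    seed = dilation(img, selem)     # marker: dilated image (seed >= mask)
    return reconstruction(seed, img, method='erosion')

tir = io.imread('tir_scene.png').astype(np.float64)  # hypothetical TIR frame
smoothed = opening_by_reconstruction(tir)
```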

2020 ◽  
Vol 12 (1) ◽  
pp. 152 ◽  
Author(s):  
Ting Nie ◽  
Xiyu Han ◽  
Bin He ◽  
Xiansheng Li ◽  
Hongxing Liu ◽  
...  

Ship detection in panchromatic optical remote sensing images faces two major challenges: quickly locating candidate regions against complex backgrounds, and describing ships effectively to reduce false alarms. Here, a practical method is proposed to solve these issues. Firstly, we construct a novel visual saliency detection method based on a hyper-complex (quaternion) Fourier transform to locate regions of interest (ROIs), which improves the accuracy of the subsequent discrimination stage for panchromatic images compared with the phase spectrum quaternion Fourier transform (PQFT) method. In addition, Gaussian filtering at different scales is performed on the transformed result to synthesize the best saliency map. An adaptive method based on GrabCut is then used for binary segmentation to extract candidate positions. In the discrimination stage, a rotation-invariant modified local binary pattern (LBP) descriptor is built by combining shape, texture, and moment-invariant features to describe the ship targets more powerfully. Finally, false alarms are eliminated through SVM training. Experimental results on panchromatic optical remote sensing images demonstrate that the presented saliency model is superior under various indicators, and that the proposed ship detection method is accurate, fast, and highly robust, based on detailed comparisons with existing efforts.
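The sketch below illustrates the general phase-spectrum idea for a single-channel (panchromatic) image, i.e., the scalar simplification of the hypercomplex transform described above; the Gaussian scales and the map-selection criterion are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of phase-spectrum saliency with multi-scale Gaussian smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_spectrum_saliency(img, sigmas=(2, 4, 8)):
    f = np.fft.fft2(img.astype(np.float64))
    phase_only = np.exp(1j * np.angle(f))           # keep only the phase spectrum
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2   # back-project to the image domain
    # Smooth at several scales and keep the map with the strongest peak contrast
    maps = [gaussian_filter(recon, s) for s in sigmas]
    best = max(maps, key=lambda m: m.max() / (m.mean() + 1e-9))
    return (best - best.min()) / (np.ptp(best) + 1e-9)
```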


Author(s):  
Ning-Min Shen ◽  
Jing Li ◽  
Pei-Yun Zhou ◽  
Ying Huo ◽  
Yi Zhuang

Co-saliency detection, an emerging research area within saliency detection, aims to extract the common saliency from multiple images. The extracted co-saliency map has been utilized in various applications, such as co-segmentation and co-recognition. With the rapid development of image acquisition technology, original digital images are becoming larger and clearer, and existing co-saliency detection methods require enormous memory and have high computational complexity when processing them. These limitations make it hard to satisfy the demand for real-time user interaction. This paper proposes a fast co-saliency detection method based on image block partition and sparse feature extraction (BSFCoS). Firstly, the images are divided into several uniform blocks, and low-level features are extracted in the Lab and RGB color spaces. In order to preserve the characteristics of the original images while reducing the number of feature points as far as possible, the truncated power method for sparse principal components is employed to extract sparse features. Furthermore, K-means is adopted to cluster the extracted sparse features and to calculate the three salient feature weights. Finally, the co-saliency map is acquired by fusing the single-image and multi-image saliency maps. The proposed method has been tested on two benchmark datasets: the Co-saliency Pairs and CMU Cornell iCoseg datasets. Compared with existing co-saliency methods, BSFCoS achieves a significant running-time improvement when processing multiple images while maintaining detection quality. Lastly, a co-segmentation method based on BSFCoS is also given and achieves better co-segmentation performance.
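To make the block-partition-and-cluster idea concrete, the sketch below extracts block-level RGB/Lab features, reduces them with sparse PCA, and clusters them with K-means; scikit-learn's SparsePCA stands in for the truncated power method, and the block size, component count, cluster count, and file names are all illustrative assumptions.

```python
# A rough sketch of the BSFCoS front end under the stated assumptions.
import numpy as np
from skimage import io, color
from sklearn.decomposition import SparsePCA
from sklearn.cluster import KMeans

def block_features(rgb, block=16):
    """Mean RGB + Lab color of each uniform image block."""
    lab = color.rgb2lab(rgb)
    h, w = rgb.shape[:2]
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            mean_rgb = rgb[y:y + block, x:x + block].reshape(-1, 3).mean(0)
            mean_lab = lab[y:y + block, x:x + block].reshape(-1, 3).mean(0)
            feats.append(np.concatenate([mean_rgb, mean_lab]))
    return np.asarray(feats)

imgs = [io.imread(p) / 255.0 for p in ['a.png', 'b.png']]       # hypothetical image pair
feats = np.vstack([block_features(im) for im in imgs])
sparse_feats = SparsePCA(n_components=3).fit_transform(feats)   # sparse feature extraction
labels = KMeans(n_clusters=6, n_init=10).fit_predict(sparse_feats)  # cluster the blocks
```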


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 457 ◽  
Author(s):  
Dandan Zhu ◽  
Lei Dai ◽  
Ye Luo ◽  
Guokai Zhang ◽  
Xuan Shao ◽  
...  

Previous saliency detection methods usually focused on extracting powerful discriminative features to describe images with a complex background. Recently, the generative adversarial network (GAN) has shown a great ability in feature learning for synthesizing high-quality natural images. Motivated by this superior feature learning ability, we present a new multi-scale adversarial feature learning (MAFL) model for image saliency detection. In particular, the model is composed of two convolutional neural network (CNN) modules: the multi-scale G-network takes natural images as inputs and generates the corresponding synthetic saliency maps, while the D-network contains a novel layer, namely a correlation layer, which is used to determine whether an image is a synthetic saliency map or a ground-truth saliency map. Quantitative and qualitative comparisons on several public datasets show the superiority of our approach.
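One plausible reading of a "correlation layer" in the D-network is a per-location inner product between image features and saliency-map features, appended as an extra channel before classification. The PyTorch sketch below follows that reading; the layer sizes and the exact correlation operation are assumptions, not the paper's architecture.

```python
# A hedged sketch of a discriminator with a correlation layer (assumed design).
import torch
import torch.nn as nn

class CorrelationDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.LeakyReLU(0.2))
        self.sal_enc = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.LeakyReLU(0.2))
        self.head = nn.Sequential(
            nn.Conv2d(65, 64, 3, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, image, saliency):
        fi, fs = self.img_enc(image), self.sal_enc(saliency)
        corr = (fi * fs).sum(dim=1, keepdim=True)   # "correlation layer": per-location dot product
        x = torch.cat([fi, fs, corr], dim=1)        # 32 + 32 + 1 = 65 channels
        return torch.sigmoid(self.head(x))          # probability that the map is ground truth
```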


2020 ◽  
Vol 12 (18) ◽  
pp. 2997 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang ◽  
Xiao Ke ◽  
Xu Zhan ◽  
Jun Shi ◽  
...  

Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has almost dominated the SAR ship detection community owing to its higher accuracy, faster speed, and reduced human intervention. However, there is still a lack of a reliable deep learning SAR ship detection dataset that supports the practical transfer of ship detection to large-scene space-borne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) built from Sentinel-1 data for small ship detection against large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths are accurately labeled by SAR experts with support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without bells and whistles, which also provides convenience for presenting detection results on large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous standardized research baselines. Last but not least, exploiting the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. Experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to conduct extensive research into SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
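The dataset construction relies on cutting each large scene into fixed-size sub-images; a minimal tiling sketch is shown below, where the 800x800 tile size, the non-overlapping grid, and the file names are illustrative assumptions rather than the dataset's exact recipe.

```python
# A minimal sketch of tiling a large-scene SAR image into sub-images.
import numpy as np
from skimage import io

def tile_image(path, tile=800):
    img = io.imread(path)
    h, w = img.shape[:2]
    tiles = []
    for r in range(0, h - tile + 1, tile):        # non-overlapping grid of sub-images
        for c in range(0, w - tile + 1, tile):
            tiles.append(((r, c), img[r:r + tile, c:c + tile]))
    return tiles

for (r, c), sub in tile_image('s1_scene.tif'):    # hypothetical Sentinel-1 scene
    io.imsave(f'sub_{r}_{c}.png', sub)            # one file per sub-image
```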


2020 ◽  
Vol 2020 (2) ◽  
pp. 98-1-98-6
Author(s):  
Yuzhong Jiao ◽  
Mark Ping Chan Mok ◽  
Kayton Wai Keung Cheung ◽  
Man Chi Chan ◽  
Tak Wai Shen ◽  
...  

The objective of this paper is to study dynamic computation of the Zero-Parallax-Setting (ZPS) for multi-view autostereoscopic displays in order to effectively alleviate blurry 3D vision for images with large disparity. Saliency detection techniques yield a saliency map, a topographic representation of visually dominant locations, which can be used to predict what attracts viewers' attention, i.e., the region of interest. Recently, deep learning techniques have been applied to saliency detection, and deep learning-based salient object detection methods have the advantage of highlighting most of the salient objects. With the help of a depth map, the spatial distribution of salient objects can be computed. In this paper, we compare two visual attention-based dynamic ZPS techniques: (1) maximum saliency computed by the Graph-Based Visual Saliency (GBVS) algorithm and (2) the spatial distribution of salient objects obtained from a convolutional neural network (CNN)-based model. Experiments show that both methods help improve the 3D effect of autostereoscopic displays. Moreover, the dynamic ZPS technique based on the spatial distribution of salient objects achieves better 3D performance than the maximum saliency-based method.
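A simple way to picture attention-driven ZPS is to shift the zero-parallax plane to the depth of the attended region. The sketch below uses a saliency-weighted mean of the disparity map for this purpose; the weighting scheme and the input files are assumptions for illustration, not the techniques compared in the paper.

```python
# A hedged sketch of picking a zero-parallax plane from saliency and disparity maps.
import numpy as np

def dynamic_zps(disparity, saliency):
    """Return the disparity shift that places the attended depth at zero parallax."""
    w = saliency / (saliency.sum() + 1e-9)
    attended_disparity = (disparity * w).sum()   # saliency-weighted mean disparity
    return -attended_disparity                   # shift views so this plane sits on the screen

shift = dynamic_zps(np.load('disparity.npy'), np.load('saliency.npy'))  # hypothetical maps
```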


2018 ◽  
Vol 8 (12) ◽  
pp. 2526 ◽  
Author(s):  
Huiyuan Luo ◽  
Guangliang Han ◽  
Peixun Liu ◽  
Yanfeng Wu

Diffusion-based salient region detection methods have gained great popularity. In most diffusion-based methods, saliency values are ranked on a two-layer neighborhood graph that connects each node to its neighboring nodes and to the nodes sharing common boundaries with those neighbors. However, because only the local relevance between neighbors is considered, a salient region may appear heterogeneous or even be wrongly suppressed, especially when the features of the salient object are diverse. To address this issue, we present an effective saliency detection method that diffuses saliency on a graph with nonlocal connections. First, a saliency-biased Gaussian model is used to refine the saliency map based on the compactness cue, and the compactness saliency information is then diffused on a two-layer sparse graph with nonlocal connections. Second, we obtain the contrast of each superpixel by restricting the reference region to the background; similarly, a saliency-biased Gaussian refinement model is generated and the saliency information based on the uniqueness cue is propagated on the two-layer sparse graph. We linearly integrate the initial saliency maps based on the compactness and uniqueness cues because the two cues complement each other. Finally, to obtain a highlighted and homogeneous saliency map, a single-layer updating and multi-layer integration scheme is presented. Comprehensive experiments on four benchmark datasets demonstrate that the proposed method performs better in terms of various evaluation metrics.
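To show what a diffusion step on such a graph looks like, the sketch below propagates seed saliency over a superpixel affinity graph with a closed-form manifold-ranking solve; which node pairs are connected (including the nonlocal ones), sigma, and alpha are assumptions, not the paper's settings.

```python
# A compact sketch of saliency diffusion by manifold ranking on a superpixel graph.
import numpy as np

def diffuse_saliency(features, edges, seeds, sigma=0.1, alpha=0.99):
    """features: (N, d) superpixel descriptors; edges: list of (i, j) connections
    (local and nonlocal); seeds: (N,) initial saliency. Returns diffused scores."""
    n = len(features)
    W = np.zeros((n, n))
    for i, j in edges:
        w = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2 / sigma)
        W[i, j] = W[j, i] = w                       # symmetric affinity
    D = np.diag(W.sum(1))
    sal = np.linalg.solve(D - alpha * W, seeds)     # (D - alpha * W)^-1 @ seeds
    return (sal - sal.min()) / (np.ptp(sal) + 1e-9)
```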


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Wenzhao Feng ◽  
Junguo Zhang ◽  
Chunhe Hu ◽  
Yuan Wang ◽  
Qiumin Xiang ◽  
...  

We propose a novel saliency detection method based on the histogram contrast algorithm for images captured with a wireless multimedia sensor network (WMSN), aimed at practical wild animal monitoring. Images in current wild animal monitoring studies typically have high resolution, complex backgrounds, and nonuniform illumination, and most existing visual saliency detection methods cannot handle them well. In our algorithm, we first smooth the image texture and reduce noise with a structure extraction method based on image total variation. After that, edge information of the salient target is obtained with the Canny edge detector and further refined by a position saliency map derived from a Hanning window. To verify the efficiency of the proposed algorithm, field-captured wild animal images were tested in terms of visual effect and detection efficiency. Compared with the histogram contrast algorithm, the results show that average precision, recall, and F-measure improved by 18.38%, 19.53%, and 19.06%, respectively, when processing the captured animal images.
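The sketch below illustrates the combination of a Canny edge map with a Hanning-window position prior mentioned above; the Canny thresholds, the simple multiplicative fusion, and the file name are illustrative assumptions.

```python
# A small sketch of position-weighted edge saliency using OpenCV.
import cv2
import numpy as np

def position_weighted_edges(gray):
    edges = cv2.Canny(gray, 50, 150).astype(np.float64) / 255.0   # edge saliency
    h, w = gray.shape
    hann = np.outer(np.hanning(h), np.hanning(w))                  # center-biased position prior
    return edges * hann                                            # position saliency map

gray = cv2.imread('animal.jpg', cv2.IMREAD_GRAYSCALE)              # hypothetical WMSN capture
sal = position_weighted_edges(gray)
```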


Author(s):  
M. N. Favorskaya ◽  
L. C. Jain

Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of convolutional neural networks (CNNs), and many original solutions using CNNs have been proposed for salient object detection and even event detection. Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted via human eye tracking and digital image processing. Results: The survey reflects recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos became possible with the recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. We also present a short description of public image and video datasets with annotated salient objects or events, as well as the metrics commonly used for evaluating results. Practical relevance: This survey is a contribution to the study of rapidly developing deep learning methods for saliency detection in images and videos.


2021 ◽  
Vol 11 (14) ◽  
pp. 6269
Author(s):  
Wang Jing ◽  
Wang Leqi ◽  
Han Yanling ◽  
Zhang Yun ◽  
Zhou Ruyan

For fast detection and recognition of apple fruit targets, this paper provides an algorithmic basis for the practical application and promotion of apple-picking robots, building on the real-time DeepSnake deep learning instance segmentation model. Since the initial detection results have an important impact on the subsequent edge prediction, we propose an automatic detection method for apple fruit targets in natural environments based on saliency detection and traditional color difference methods. Combined with the original image, the histogram backprojection algorithm is used to further optimize the saliency results. In view of possible overlapping fruit regions in the saliency map, a dynamic adaptive overlapping target separation algorithm is proposed to locate each single target fruit and to determine the initial contour for DeepSnake. Finally, the target fruit is labeled based on the instance segmentation results. In the experiment, 300 training samples were used to train the DeepSnake model, and a self-built dataset containing 1036 pictures of apples in various situations under natural environments was used for testing. The detection accuracies for non-overlapping shaded fruits, overlapping fruits, fruits shaded by branches and leaves, and poor illumination conditions were 99.12%, 94.78%, 90.71%, and 94.46%, respectively. The comprehensive detection accuracy was 95.66%, and the average processing time was 0.42 s over the 1036 test images, which shows that the proposed algorithm can effectively separate overlapping fruits with a relatively small number of training samples and realize rapid and accurate detection of apple targets.
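The histogram backprojection refinement mentioned above can be sketched with standard OpenCV calls; the HSV channels, bin counts, and file names below are illustrative assumptions, not the paper's exact settings.

```python
# A minimal sketch of refining a fruit saliency mask with histogram backprojection.
import cv2
import numpy as np

img = cv2.imread('orchard.jpg')                            # hypothetical scene
mask = cv2.imread('saliency_mask.png', 0)                  # rough saliency mask (0/255)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])  # hue-saturation model
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
refined = cv2.bitwise_and(backproj, backproj, mask=mask)   # keep back-projection inside the ROI
```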


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports maritime law enforcement, maritime traffic control, and the maintenance of national maritime security, so ship detection has long been a research hotspot. Since the shift from traditional detection methods to methods combined with deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while, in transplanting optical image detectors, it has ignored the low signal-to-noise ratio, low resolution, single-channel nature, and other characteristics that arise from the SAR imaging principle. By constantly pursuing detection accuracy while ignoring detection speed and ultimate deployment, almost all algorithms rely on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and of the network's ability to extract features; the modeling architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time implementation, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the loss of accuracy due to light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value in maritime safety monitoring and emergency rescue.
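One way to read "multi-channel fusion" for a single-channel SAR image is to stack complementary views of the same scene into a three-channel detector input. The sketch below uses the raw image, a despeckled version, and an edge map; this channel choice is an illustrative assumption, not the paper's exact recipe.

```python
# A hedged sketch of building a three-channel input from a single-channel SAR image.
import cv2
import numpy as np

def fuse_channels(sar_gray):
    despeckled = cv2.medianBlur(sar_gray, 5)              # crude speckle suppression
    edges = cv2.Canny(despeckled, 50, 150)                # structural information
    return np.dstack([sar_gray, despeckled, edges])       # H x W x 3 detector input

sar = cv2.imread('ssdd_sample.png', cv2.IMREAD_GRAYSCALE)  # hypothetical SSDD chip
fused = fuse_channels(sar)                                  # feed to the YOLO-style detector
```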

