Multiple Ship Tracking in Remote Sensing Images Using Deep Learning

2021 ◽  
Vol 13 (18) ◽  
pp. 3601
Author(s):  
Jin Wu ◽  
Changqing Cao ◽  
Yuedong Zhou ◽  
Xiaodong Zeng ◽  
Zhejun Feng ◽  
...  

In remote sensing images, small target sizes and diverse backgrounds make it difficult to locate targets accurately and quickly. To address the limited accuracy and poor real-time performance of existing tracking algorithms, this study proposes a multi-object tracking (MOT) algorithm for ships based on deep learning. Because the feature extraction capability of the target detector determines the performance of an MOT algorithm, the You Only Look Once version 3 (YOLOv3) model, which offers better accuracy and speed than comparable detectors, was selected as the detection framework. The high similarity between ship targets degrades tracking results; we therefore used the Multiple Granularity Network (MGN) to extract richer target appearance information and improve the ability to distinguish visually similar targets. We compared the proposed algorithm with other state-of-the-art multi-object tracking algorithms. The results show that tracking accuracy improves by 2.23%, while the average running speed is close to 21 frames per second, meeting the requirements of real-time tracking.
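
To make the tracking-by-detection pipeline described above concrete, the following is a minimal sketch (not the authors' code) of the data-association step, assuming per-frame bounding boxes come from a YOLOv3 detector and appearance embeddings from an MGN re-identification backbone; both are mocked here with synthetic data, and the cost weighting and threshold are illustrative assumptions.

```python
# Hedged sketch: associate existing tracks with new detections using a blend of
# appearance (cosine) distance and (1 - IoU), solved with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, track_feats, det_boxes, det_feats, w=0.5, thresh=0.7):
    """Match tracks to detections; w balances appearance vs. spatial overlap."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, (tb, tf) in enumerate(zip(track_boxes, track_feats)):
        for j, (db, df) in enumerate(zip(det_boxes, det_feats)):
            app = 1.0 - np.dot(tf, df) / (np.linalg.norm(tf) * np.linalg.norm(df) + 1e-9)
            cost[i, j] = w * app + (1 - w) * (1.0 - iou(tb, db))
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < thresh]

# Toy usage with random stand-ins for detector and re-ID outputs.
rng = np.random.default_rng(0)
tracks = rng.uniform(0, 100, (3, 4)); tracks[:, 2:] += tracks[:, :2]
dets = tracks + rng.normal(0, 2, tracks.shape)
t_feats = rng.normal(size=(3, 128)); d_feats = t_feats + rng.normal(0, 0.1, (3, 128))
print(associate(tracks, t_feats, dets, d_feats))
```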

Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 495
Author(s):  
Liang Jin ◽  
Guodong Liu

Compared with ordinary images, remote sensing images contain many kinds of objects over a wide range of scales and provide far more detail. Ship detection, a typical remote sensing task, plays an essential role in the field. With the rapid development of deep learning, remote sensing detection methods based on convolutional neural networks (CNNs) have become dominant. In remote sensing images, small-scale objects account for a large proportion of targets and are often closely arranged; in addition, the convolutional layers of a CNN lack sufficient context information, which leads to low detection accuracy. To improve detection accuracy while retaining real-time speed, this paper proposes an efficient ship detection algorithm for remote sensing images based on an improved SSD. First, we add a feature fusion module to the shallow feature layers to strengthen feature extraction for small objects. Then, we add a Squeeze-and-Excitation (SE) module to each feature layer, introducing an attention mechanism into the network. Experimental results on the Synthetic Aperture Radar Ship Detection Dataset (SSDD) show that the mAP reaches 94.41% and the average detection speed is 31 FPS. Compared with SSD and other representative object detection algorithms, the improved algorithm achieves better detection accuracy and supports real-time detection.
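
As an illustration of the attention mechanism the abstract attaches to each SSD feature layer, here is a minimal Squeeze-and-Excitation block sketch; it is not the paper's implementation, and the channel count and reduction ratio are assumptions.

```python
# Hedged sketch: a standard Squeeze-and-Excitation (SE) block that recalibrates
# feature channels, of the kind described as being added to each feature layer.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight feature channels

# Example: recalibrate a hypothetical shallow SSD feature map.
feat = torch.randn(1, 512, 38, 38)
print(SEBlock(512)(feat).shape)   # torch.Size([1, 512, 38, 38])
```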


2021 ◽  
Vol 13 (10) ◽  
pp. 1995
Author(s):  
Pan Xu ◽  
Qingyang Li ◽  
Bo Zhang ◽  
Fan Wu ◽  
Ke Zhao ◽  
...  

Synthetic aperture radar (SAR) satellites produce large quantities of remote sensing images that are unaffected by weather conditions and are therefore widely used in marine surveillance. However, because of the latency of satellite-ground communication and the massive volume of remote sensing images, rapid analysis is not possible and real-time information for emergency situations is limited. To solve this problem, this paper proposes an on-board ship detection scheme based on the traditional constant false alarm rate (CFAR) method and lightweight deep learning. The scheme can run on a SAR satellite's on-board computing platform to achieve near real-time image processing and data transmission. First, we use CFAR for initial ship detection and then apply the You Only Look Once version 4 (YOLOv4) method to obtain more accurate final results. We built a ground verification system to assess the feasibility of the scheme. With the help of a highly integrated embedded graphics processing unit (GPU), our method achieved 85.9% precision on the experimental data, and the experimental results showed that the processing time was nearly half that required by traditional methods.
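
The two-stage idea (CFAR pre-screening followed by a deep detector) can be sketched as below; this is not the authors' code, the cell-averaging CFAR window sizes and threshold factor are illustrative assumptions, and the YOLOv4 refinement stage is represented only by a placeholder that crops candidate chips.

```python
# Hedged sketch: cell-averaging CFAR pre-screen over a SAR amplitude image,
# with candidate chips that a real system would hand to a YOLOv4 detector.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=4, train=12, factor=3.0):
    """Boolean mask of pixels exceeding factor * local clutter mean (training cells only)."""
    big = uniform_filter(img, size=2 * (guard + train) + 1)
    small = uniform_filter(img, size=2 * guard + 1)
    n_big = (2 * (guard + train) + 1) ** 2
    n_small = (2 * guard + 1) ** 2
    clutter = (big * n_big - small * n_small) / (n_big - n_small)
    return img > factor * clutter

def refine_with_detector(img, mask, chip=64):
    """Crop chips around CFAR hits; a real pipeline would run YOLOv4 on each chip."""
    ys, xs = np.nonzero(mask)
    chips = []
    for y, x in zip(ys[:10], xs[:10]):                # cap chips for this toy example
        y0, x0 = max(0, y - chip // 2), max(0, x - chip // 2)
        chips.append(img[y0:y0 + chip, x0:x0 + chip])
    return chips

sar = np.abs(np.random.default_rng(1).normal(size=(512, 512)))
sar[200:204, 300:304] += 20.0                          # synthetic bright "ship"
hits = ca_cfar(sar)
print(hits.sum(), len(refine_with_detector(sar, hits)))
```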


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We adapt the encoder-decoder CNN structure SegNet, with index pooling, and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages when segmenting different objects. We therefore propose an integrated algorithm that combines the two models. Experimental results show that the integrated algorithm exploits the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
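
One simple way to combine two segmentation models that are each stronger on different classes is a per-class blend of their softmax outputs; the sketch below is an illustrative assumption, not the authors' exact integration rule, and the per-class trust weights are hypothetical.

```python
# Hedged sketch: fuse two semantic-segmentation models (e.g., SegNet and U-Net
# outputs) with a per-class convex blend, then take the argmax label map.
import torch

def fuse_predictions(prob_a, prob_b, class_weights_a):
    """prob_a, prob_b: (B, C, H, W) softmax outputs of the two models.
    class_weights_a: (C,) trust in model A per class, in [0, 1]."""
    w = class_weights_a.view(1, -1, 1, 1)
    fused = w * prob_a + (1.0 - w) * prob_b            # per-class convex blend
    return fused.argmax(dim=1)                          # final label map

# Toy usage: 4 classes, model A trusted more for classes 0 and 2.
prob_a = torch.softmax(torch.randn(1, 4, 64, 64), dim=1)
prob_b = torch.softmax(torch.randn(1, 4, 64, 64), dim=1)
labels = fuse_predictions(prob_a, prob_b, torch.tensor([0.8, 0.3, 0.7, 0.4]))
print(labels.shape)  # torch.Size([1, 64, 64])
```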


Author(s):  
Dimitrios Meimetis ◽  
Ioannis Daramouskas ◽  
Isidoros Perikos ◽  
Ioannis Hatzilygeroudis

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 28349-28360
Author(s):  
Jiali Cai ◽  
Chunjuan Liu ◽  
Haowen Yan ◽  
Xiaosuo Wu ◽  
Wanzhen Lu ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2524
Author(s):  
Ziyi Chen ◽  
Dilong Li ◽  
Wentao Fan ◽  
Haiyan Guan ◽  
Cheng Wang ◽  
...  

Deep learning models have brought great breakthroughs in building extraction from high-resolution optical remote sensing images. In recent research, the self-attention module has attracted considerable interest in many fields, including building extraction. However, most current deep learning models that incorporate a self-attention module still overlook the effectiveness of reconstruction bias. By tipping the balance between the encoding and decoding abilities, i.e., making the decoding network considerably more complex than the encoding network, semantic segmentation performance can be reinforced. To address the lack of research combining self-attention and reconstruction-bias modules for building extraction, this paper presents a U-Net architecture that combines both. In the encoding part, a self-attention module is added to learn attention weights over the inputs, so that the network pays more attention to positions that may contain salient regions. In the decoding part, multiple large convolutional up-sampling operations are used to increase the reconstruction ability. We test our model on two openly available datasets: the WHU and Massachusetts Building datasets, achieving IoU scores of 89.39% and 73.49%, respectively. Compared with several recent well-known semantic segmentation methods and representative building extraction methods, our method produces satisfactory results.
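
For readers unfamiliar with the kind of self-attention block the abstract adds to the encoder, here is a minimal 2-D self-attention sketch in the non-local style; it is an assumed design, not the paper's released code, and the channel reduction factor is an assumption.

```python
# Hedged sketch: a minimal 2-D self-attention block that lets salient positions
# reweight an encoder feature map before it is passed to the decoder.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))      # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # (B, HW, C/8)
        k = self.k(x).flatten(2)                        # (B, C/8, HW)
        v = self.v(x).flatten(2)                        # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) attention weights
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection

# Example: attend over a hypothetical encoder feature map.
feat = torch.randn(1, 64, 32, 32)
print(SelfAttention2d(64)(feat).shape)   # torch.Size([1, 64, 32, 32])
```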

