Aircraft detection in remote sensing images based on a deep residual network and Super-Vector coding

2017 ◽  
Vol 9 (3) ◽  
pp. 228-236 ◽  
Author(s):  
Jiachen Yang ◽  
Yinghao Zhu ◽  
Bin Jiang ◽  
Lei Gao ◽  
Liping Xiao ◽  
...  
2021 ◽  
Vol 13 (17) ◽  
pp. 3425
Author(s):  
Xin Zhao ◽  
Hui Li ◽  
Ping Wang ◽  
Linhai Jing

Accurate registration of multisource high-resolution remote sensing images is an essential step in many remote sensing applications. Due to the complexity of the feature and texture information in high-resolution remote sensing images, especially images covering earthquake disasters, feature-based registration methods need a more discriminative feature descriptor to improve accuracy. Traditional registration methods that rely only on low-level local features struggle to represent the features of the matching points. To improve the accuracy of feature matching for multisource high-resolution remote sensing images, an image registration method based on a deep residual network (ResNet) and the scale-invariant feature transform (SIFT) was proposed. Building on the traditional algorithm, it fuses SIFT features with ResNet features to perform registration. The proposed method consists of two parts: model construction and training, and image registration using a combination of SIFT and ResNet-34 features. First, a registration sample set constructed from high-resolution satellite remote sensing images was used to fine-tune the network and obtain the ResNet model. Then, for the images to be registered, the Shi-Tomasi corner detector and the combined SIFT and ResNet features were used for feature extraction to complete the registration. To account for differences in image size and scene, experiments on five image pairs were conducted to verify the effectiveness of the method in different practical applications. The results show that the proposed method achieves higher accuracy and more tie points than traditional feature-based methods.
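The feature fusion described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, descriptor dimensions (128-D SIFT-like, 512-D deep), and the ratio-test threshold are assumptions; in the actual method the deep descriptors would come from a fine-tuned ResNet-34 evaluated at Shi-Tomasi keypoints.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale each descriptor to unit length so that the two feature
    types contribute comparably after concatenation."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse_descriptors(sift_desc, deep_desc):
    """Concatenate L2-normalized SIFT descriptors (e.g. 128-D) with
    deep descriptors (e.g. 512-D) extracted at the same keypoints."""
    return np.concatenate(
        [l2_normalize(sift_desc), l2_normalize(deep_desc)], axis=1
    )

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test on Euclidean
    distance; returns (index_in_a, index_in_b) candidate tie points."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:  # best match clearly beats runner-up
            matches.append((i, j1))
    return matches
```

In practice the resulting matches would then be filtered (e.g. with RANSAC) to estimate the geometric transform between the image pair.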


2018 ◽  
Vol 38 (1) ◽  
pp. 0111005
Author(s):  
侯宇青阳 Hou Yuqingyang ◽  
全吉成 Quan Jicheng ◽  
魏湧明 Wei Yongming

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5270
Author(s):  
Yantian Wang ◽  
Haifeng Li ◽  
Peng Jia ◽  
Guo Zhang ◽  
Taoyang Wang ◽  
...  

Deep learning-based aircraft detection methods have been increasingly adopted in recent years. However, because of multi-resolution imaging modes, aircraft in different images vary widely in size, viewpoint, and other visual characteristics, which poses great challenges for detection. Although standard deep convolutional neural networks (DCNNs) can extract rich semantic features, they discard bottom-level location information, and the features of small targets may be submerged by redundant top-level features, resulting in poor detection. To address these problems, we propose a compact multi-scale dense convolutional neural network (MS-DenseNet) for aircraft detection in remote sensing images. DenseNet is used for feature extraction, enhancing the propagation and reuse of bottom-level high-resolution features. A feature pyramid network (FPN) is then combined with DenseNet to form MS-DenseNet, which learns multi-scale features, especially those of small objects. Finally, by compressing unnecessary convolution layers in each dense block, we design three compact architectures: MS-DenseNet-41, MS-DenseNet-65, and MS-DenseNet-77. Comparative experiments show that the compact MS-DenseNet-65 noticeably improves the detection of small aircraft and achieves state-of-the-art performance, with a recall of 94% and an F1-score of 92.7%, at a lower computational cost. Robustness experiments on the UCAS-AOD and RSOD datasets further indicate the good transferability of our method.
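The dense connectivity this abstract relies on can be illustrated with a minimal numpy sketch. Random 1×1 "convolutions" stand in for the real BN-ReLU-Conv layers, and the layer count and growth rate are arbitrary; the point is only how each layer sees all earlier feature maps, so bottom-level features are reused rather than overwritten.

```python
import numpy as np

def dense_layer(features, growth_rate, rng):
    """Stand-in for one BN-ReLU-Conv layer: a random 1x1 'convolution'
    mapping all accumulated channels to growth_rate new channels."""
    weights = rng.normal(size=(growth_rate, features.shape[0]))
    # einsum applies the 1x1 conv across all spatial positions; ReLU follows.
    return np.maximum(np.einsum("oc,chw->ohw", weights, features), 0.0)

def dense_block(x, num_layers, growth_rate, seed=0):
    """Each layer's input is the concatenation of the block input and
    every earlier layer's output, so the channel count grows as
    C0 + num_layers * growth_rate and early features are preserved."""
    rng = np.random.default_rng(seed)
    features = x  # shape (C0, H, W)
    for _ in range(num_layers):
        new = dense_layer(features, growth_rate, rng)
        features = np.concatenate([features, new], axis=0)
    return features
```

Compressing a dense block, as done for MS-DenseNet-41/65/77, amounts to reducing `num_layers`, which shrinks the accumulated channel count and hence the computational cost.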

