Multiple kernel relevance vector machine for geospatial objects detection in high-resolution remote sensing images

2012 · Vol 29 (5) · pp. 353-360
Author(s): Xiangjuan Li, Xian Sun, Hongqi Wang, Yu Li, Hao Sun
2012 · Vol 532-533 · pp. 1258-1262
Author(s): Xiang Juan Li, Hao Sun, Xin Wei Zheng, Xian Sun, Hong Qi Wang

The objective of this work is the detection of multiple objects in remote sensing images. Many classifiers have been proposed for detecting military objects. In this paper, we demonstrate that a linear combination of kernels achieves better classification precision than a product of kernels. Starting from a set of base kernels, we learn a different weight vector for each class. Experiments on the Caltech-101 dataset show that the learnt kernels yield superior classification results compared with a single-kernel SVM. When such a classifier is used as a sliding-window detector to search for planes in images collected from Google Earth, the results show the effectiveness of the MKL detector for locating military objects in remote sensing images.
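As a rough illustration of the kernel-combination idea, the sketch below builds a weighted sum of RBF base kernels and feeds the precomputed Gram matrix to an SVM. The bandwidths, weights, and toy feature vectors are illustrative assumptions rather than values from the paper, and the per-class weight learning step of MKL is omitted.

```python
# Minimal sketch of a linearly combined multi-kernel SVM.
# The gammas and weights below are placeholders; in the paper the
# kernel weights are learned per class rather than fixed by hand.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(X, Y, gammas, weights):
    """Weighted linear combination of RBF base kernels: K = sum_m w_m * K_m."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for gamma, w in zip(gammas, weights):
        K += w * rbf_kernel(X, Y, gamma=gamma)
    return K

# Toy data standing in for image feature vectors (hypothetical).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 16))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(20, 16))

gammas = [0.1, 1.0, 10.0]   # base kernel bandwidths (assumed)
weights = [0.5, 0.3, 0.2]   # kernel weights; learned by MKL in the paper

# Precompute the Gram matrices and train an SVM on the combined kernel.
K_train = combined_kernel(X_train, X_train, gammas, weights)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

K_test = combined_kernel(X_test, X_train, gammas, weights)
predictions = clf.predict(K_test)
```

In a sliding-window detector, each window's feature vector would take the place of a test sample and the classifier's decision would mark candidate object locations.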


2021 · Vol 13 (24) · pp. 4971
Author(s): Congcong Wang, Wenbin Sun, Deqin Fan, Xiaoding Liu, Zhi Zhang

The wide range of object scales and the complex texture features of high-resolution remote sensing images have made deep learning-based change detection methods the mainstream approach. However, existing deep learning methods suffer from spatial information loss and insufficient feature representation, which leads to unsatisfactory detection of small objects and poor boundary positioning in change detection for high-resolution remote sensing images. To address these problems, a network architecture based on the 2-dimensional discrete wavelet transform and adaptive feature weighted fusion is proposed. The proposed network takes a Siamese network and Nested U-Net as the backbone; the 2-dimensional discrete wavelet transform replaces the pooling layers, and the inverse transform replaces upsampling for image reconstruction, which reduces the loss of spatial information and fully retains the original image information. In this way, the proposed network can accurately detect changed objects of different scales and reconstruct change maps with clear boundaries. Furthermore, different feature fusion methods are proposed for different stages to fully integrate multi-scale and multi-level features and improve the comprehensive representation ability of the features, so as to achieve a more refined change detection result while reducing pseudo-changes. To verify the effectiveness and advancement of the proposed method, it is compared with seven state-of-the-art methods on two datasets, Lebedev and SenseTime, in terms of quantitative analysis, qualitative analysis, and efficiency analysis, and the effectiveness of the proposed modules is validated by an ablation study. The quantitative and efficiency analyses show that, while maintaining operational efficiency, our method improves recall without sacrificing detection precision, yielding an improvement in overall detection performance. Specifically, it shows average improvements of 37.9% and 12.35% in recall, and 34.76% and 11.88% in F1, on the Lebedev and SenseTime datasets, respectively, compared to the other methods. The qualitative analysis shows that our method performs better than the other methods on small-object detection and boundary positioning, and produces a more refined change map.
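As a rough sketch of how a pooling layer can be replaced by a 2-dimensional discrete wavelet transform, the module below applies a one-level Haar decomposition and stacks the four sub-bands along the channel dimension, so the spatial detail that pooling would discard is retained for later reconstruction. The class name and the choice of the Haar wavelet are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (PyTorch) of a Haar DWT downsampling module that could
# stand in for a pooling layer; the paper's actual implementation may
# differ in wavelet choice and normalization.
import torch
import torch.nn as nn

class HaarDWTDownsample(nn.Module):
    """Splits a feature map into LL/LH/HL/HH sub-bands at half resolution."""
    def forward(self, x):
        # Even/odd rows and columns of the input feature map.
        x00 = x[:, :, 0::2, 0::2]
        x01 = x[:, :, 0::2, 1::2]
        x10 = x[:, :, 1::2, 0::2]
        x11 = x[:, :, 1::2, 1::2]
        # Haar analysis: one low-frequency and three high-frequency sub-bands.
        ll = (x00 + x01 + x10 + x11) / 2
        lh = (-x00 - x01 + x10 + x11) / 2
        hl = (-x00 + x01 - x10 + x11) / 2
        hh = (x00 - x01 - x10 + x11) / 2
        # Stack sub-bands along channels so no spatial information is discarded.
        return torch.cat([ll, lh, hl, hh], dim=1)

# Example: a 64-channel feature map is halved in resolution while its
# detail is preserved in 4x the channels, unlike max pooling.
feat = torch.randn(1, 64, 128, 128)
down = HaarDWTDownsample()(feat)   # shape: (1, 256, 64, 64)
```

Because the transform is orthogonal, the corresponding inverse transform can recover the original resolution exactly, which is what allows it to replace upsampling in the decoder.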

