Spatial hierarchy perception and hard samples metric learning for high-resolution remote sensing image object detection

Author(s):  
Dongjun Zhu ◽  
Shixiong Xia ◽  
Jiaqi Zhao ◽  
Yong Zhou ◽  
Qiang Niu ◽  
...  
2021 ◽

Author(s):  
Wenhua Zhuang ◽  
Xiao-Gang Tang ◽  
Guangyu Yang ◽  
Guangming Yuan ◽  
Haoyuan Yu

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 118696-118707

Author(s):  
Pengfei Shi ◽  
Zhongxin Zhao ◽  
Xinnan Fan ◽  
Xijun Yan ◽  
Wei Yan ◽  
...  

2020 ◽  
Vol 79 (47-48) ◽  
pp. 34973-34992
Author(s):  
Dongjun Zhu ◽  
Shixiong Xia ◽  
Jiaqi Zhao ◽  
Yong Zhou ◽  
Qiang Niu ◽  
...  

2020 ◽  
Vol 9 (2) ◽  
pp. 61

Author(s):  
Hongwei Zhao ◽  
Lin Yuan ◽  
Haoyu Zhao

With the rapid growth in the number of remote sensing image datasets, effective image retrieval methods are urgently needed to manage and exploit such data. In this paper, we propose a deep metric learning strategy based on a Similarity Retention Loss (SRL) for content-based remote sensing image retrieval. We improve on current metric learning methods in three respects: sample mining, network architecture, and the metric loss function. After redefining hard and easy samples, we mine positive and negative samples according to the size and spatial distribution of the dataset's classes. The proposed Similarity Retention Loss uses the ratio of easy to hard samples within a class to assign dynamic weights to the mined hard samples, so that the intra-class sample structure is learned. For negative samples, weights are set according to the spatial distribution of the surrounding samples, preserving the consistency of similar structures across classes. Finally, we conduct extensive experiments on two remote sensing datasets with a fine-tuned network; the results show that our method achieves state-of-the-art retrieval performance.
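The abstract does not give the exact SRL formulation, but the idea of mining hard positives/negatives and weighting them by the per-class ratio of easy to hard samples can be sketched as follows. This is an illustrative simplification under assumed definitions (a distance margin separates "easy" from "hard" pairs), not the authors' loss.

```python
import numpy as np

def srl_style_loss(embeddings, labels, margin=0.5):
    """Illustrative hard-sample-weighted metric loss (NOT the paper's exact SRL).

    Positives farther than the margin and negatives closer than the margin
    are treated as "hard"; hard positives get a dynamic weight based on the
    easy-to-hard ratio for their anchor, echoing the abstract's weighting idea.
    """
    # Pairwise Euclidean distances between all embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)              # exclude self-pairs
    other = labels[:, None] != labels[None, :]

    # Hard positives: same class but far apart.
    # Hard negatives: different class but close together.
    hard_pos = same & (dist > margin)
    hard_neg = other & (dist < margin)

    # Dynamic weight per anchor: (easy positives + 1) / (hard positives + 1).
    easy_pos = same & ~hard_pos
    w = (easy_pos.sum(1) + 1) / (hard_pos.sum(1) + 1)

    pos_term = (w[:, None] * np.maximum(dist - margin, 0.0) * hard_pos).sum()
    neg_term = (np.maximum(margin - dist, 0.0) * hard_neg).sum()
    n_hard = max(hard_pos.sum() + hard_neg.sum(), 1)
    return (pos_term + neg_term) / n_hard
```

When classes are already compact and well separated, no pair is hard and the loss is zero; otherwise the hard pairs drive the gradient, with tighter classes (more easy positives) pushing their remaining hard positives harder.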


Author(s):  
C. K. Li ◽  
W. Fang ◽  
X. J. Dong

With the development of remote sensing technology, the spatial, spectral, and temporal resolution of remote sensing data has greatly improved. How to efficiently process and interpret massive volumes of high-resolution imagery, in which ground objects carry rich spatial geometry and texture information, has become a central challenge in remote sensing research. This paper presents an object-oriented, rule-based classification method for remote sensing data. By discovering and mining the rich spectral and spatial characteristics of high-resolution imagery, we build a multi-level network of image objects for segmentation and classification, enabling accurate and fast classification of ground targets together with an accuracy assessment. Using WorldView-2 imagery of the Zangnan area as the study data, we verified the object-oriented rule-based classification experimentally, combining the mean-variance method, the maximum-area method, and accuracy comparison to select three optimal segmentation scales and to establish a multi-level hierarchy of image objects for the classification experiments. The results show that the object-oriented rule-based method classifies high-resolution images with results close to visual interpretation and with higher classification accuracy: its overall accuracy and Kappa coefficient are 97.38% and 0.9673, exceeding the object-oriented SVM method by 6.23 percentage points and 0.078, and the object-oriented KNN method by 7.96 percentage points and 0.0996. For buildings, its extraction precision and user accuracy exceed the object-oriented SVM method by 18.39 and 3.98 percentage points, and the object-oriented KNN method by 21.27 and 14.97 percentage points.
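The overall accuracy and Kappa coefficient reported above are the standard accuracy-assessment metrics derived from a classification confusion matrix. A minimal sketch of how they are computed (standard definitions, not code from the paper):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = count of samples of true class i assigned to class j.
    Kappa corrects the observed agreement for the agreement expected
    by chance given the row and column totals.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                       # overall accuracy
    expected = (cm.sum(0) * cm.sum(1)).sum() / total**2   # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```

A perfectly diagonal confusion matrix yields accuracy 1.0 and kappa 1.0; off-diagonal confusion lowers both, with kappa penalized further when class totals are imbalanced.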

