A Method of Extracting Inter-Object Spatial Relationship in Remote Sensing Image Objects

2014 ◽  
Vol 519-520 ◽  
pp. 537-540
Author(s):  
Xiao Li Liu ◽  
Wen Tao Yang ◽  
Guo Bin Zhu ◽  
Jing Gang LI ◽  
Xue Li

Spatial relation extraction is very important for remote sensing applications, but most existing topological models can hardly describe the order property of transformations among topological relations. This paper proposes a new method to extract inclusion relations between image objects based on a topology map in the process of multi-level image classification.
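The inclusion relation between segmented image objects can be illustrated with a minimal raster sketch (an illustrative assumption only; the paper's topology-map construction is not reproduced here): one object includes another when every pixel of the inner region also belongs to the outer region.

```python
import numpy as np

def contains(outer: np.ndarray, inner: np.ndarray) -> bool:
    """True if every pixel of `inner` also belongs to `outer`."""
    return bool(np.all(outer[inner]))

# Two toy binary masks on a 6x6 grid: a large region and a region nested inside it.
big = np.zeros((6, 6), dtype=bool)
big[1:5, 1:5] = True
small = np.zeros((6, 6), dtype=bool)
small[2:4, 2:4] = True

print(contains(big, small))  # → True
print(contains(small, big))  # → False
```

In a multi-level classification, running this test between objects of adjacent levels yields the parent-child edges of an inclusion hierarchy.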

2014 ◽  
Vol 519-520 ◽  
pp. 585-589
Author(s):  
Zhi Ge Jia ◽  
Xue Jun Qiao ◽  
Zheng Song Chen ◽  
Xiao Li Liu

For remote sensing imagery, spatial relationships are extracted by segmentation and classification. Unlike the traditional vector case, spatial relationship reasoning over image objects can be treated as a constraint satisfaction problem in image processing. Based on the 9-intersection model, this paper discusses spatial topological relationships between image objects to provide theoretical and technical support for low-level image segmentation and high-level analysis.
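The 9-intersection model classifies the topological relationship between two regions by testing whether their interiors, boundaries, and exteriors intersect. A minimal raster sketch, using a 4-neighbour erosion as a stand-in for the topological interior (an assumption for illustration, not the paper's implementation):

```python
import numpy as np

def erode(m: np.ndarray) -> np.ndarray:
    """4-neighbour binary erosion with a zero-padded border."""
    p = np.pad(m, 1)
    return m & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def nine_intersection(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """3x3 flag matrix: rows = interior/boundary/exterior of `a`,
    columns = same for `b`; 1 means the intersection is non-empty."""
    parts = lambda m: (erode(m), m & ~erode(m), ~m)
    return np.array([[int((pa & pb).any()) for pb in parts(b)]
                     for pa in parts(a)])

a = np.zeros((9, 9), dtype=bool); a[1:8, 1:8] = True  # large region
b = np.zeros((9, 9), dtype=bool); b[3:6, 3:6] = True  # region nested inside a
print(nine_intersection(a, b))  # the "contains" pattern: [[1,1,1],[0,0,1],[0,0,1]]
```

Distinct flag patterns of this matrix correspond to distinct topological relations (disjoint, meet, contains, inside, and so on) between the two image objects.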


Author(s):  
Y. Yao ◽  
H. Zhao ◽  
D. Huang ◽  
Q. Tan

<p><strong>Abstract.</strong> Remote sensing image scene classification has gained remarkable attention due to its versatile use in applications such as geospatial object detection, ground object information extraction, and environmental monitoring. A scene contains not only the information of the ground objects but also the spatial relationships between the ground objects and the environment. With the rapid growth of remote sensing image data, the need for automatic annotation methods for image scenes is increasingly urgent. This paper proposes a new framework for high-resolution remote sensing image scene classification based on a convolutional neural network. To remove the requirement of a fixed-size input image, a multiple pyramid pooling strategy is inserted between the convolutional layers and the fully connected layers. The fixed-size features generated by the multiple pyramid pooling layer are then flattened into a one-dimensional fixed-length vector and fed into the fully connected layers. Our method generates a fixed-length representation regardless of image size while achieving higher classification accuracy. On the UC-Merced and NWPU-RESISC45 datasets, our framework achieves satisfactory accuracies of 93.24% and 88.62%, respectively.</p>
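The pyramid pooling idea can be sketched in a few lines: pool the feature map over grids of several sizes and concatenate the results, so the output length depends only on the channel count and the pyramid levels, never on the input size. A simplified NumPy illustration (the levels (1, 2, 4) and max-pooling are assumptions, not the paper's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(fmap: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """Max-pool a C x H x W feature map over an n x n grid for each pyramid
    level and concatenate, giving a fixed-length vector for any H, W."""
    c, h, w = fmap.shape
    out = []
    for n in levels:
        # cell boundaries that cover the map even when H, W are not divisible by n
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                out.append(cell.max(axis=(1, 2)))
    return np.concatenate(out)  # length = C * sum(n*n for n in levels)

# Different input sizes yield the same output length: 8 * (1 + 4 + 16) = 168.
print(spatial_pyramid_pool(np.random.rand(8, 17, 23)).shape)  # (168,)
print(spatial_pyramid_pool(np.random.rand(8, 32, 32)).shape)  # (168,)
```

Because the vector length is fixed, the fully connected layers that follow can be trained on images of arbitrary resolution.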


Author(s):  
C. K. Li ◽  
W. Fang ◽  
X. J. Dong

With the development of remote sensing technology, the spatial, spectral, and temporal resolution of remote sensing data has greatly improved. How to efficiently process and interpret massive high-resolution remote sensing image data for ground objects, which carry rich spatial geometry and texture information, has become a focus and a difficulty in remote sensing research. This paper presents an object-oriented, rule-based classification method for remote sensing data. By discovering and mining the rich spectral and spatial characteristics of high-resolution remote sensing imagery, a multi-level network structure of image object segmentation and classification is established to achieve accurate and fast classification of ground targets and accuracy assessment. Taking WorldView-2 image data of the Zangnan area as the study object, the object-oriented, rule-based classification method is verified experimentally by combining the mean-variance method, the maximum-area method, and accuracy comparison analysis; three optimal segmentation scales are selected and a multi-level image object network hierarchy is established for the classification experiments. The results show that object-oriented rule-based classification of high-resolution images yields results close to visual interpretation and achieves higher classification accuracy. The overall accuracy and Kappa coefficient of the object-oriented rule-based method are 97.38% and 0.9673, which are 6.23 percentage points and 0.078 higher than the object-oriented SVM method, and 7.96 percentage points and 0.0996 higher than the object-oriented KNN method.
The extraction precision and user accuracy for buildings are 18.39 and 3.98 percentage points higher than the object-oriented SVM method, and 21.27 and 14.97 percentage points higher than the object-oriented KNN method.
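The reported overall accuracy and Kappa coefficient follow the standard confusion-matrix definitions. A short sketch (the 3-class confusion matrix below is a toy example, not the paper's data):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                # observed agreement (overall accuracy)
    pe = (cm.sum(0) @ cm.sum(1)) / n**2  # agreement expected by chance
    return po, (po - pe) / (1 - pe)

# Toy confusion matrix for three land-cover classes.
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
oa, kappa = accuracy_and_kappa(cm)
print(oa, kappa)  # 0.9, ~0.849
```

Kappa discounts agreement that would occur by chance, which is why it drops faster than overall accuracy when class confusions grow.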


2019 ◽  
Vol 11 (9) ◽  
pp. 1044 ◽  
Author(s):  
Wei Cui ◽  
Fei Wang ◽  
Xin He ◽  
Dongyou Zhang ◽  
Xuxiang Xu ◽  
...  

A comprehensive interpretation of remote sensing images involves not only remote sensing object recognition but also the recognition of spatial relations between objects. Especially in the case of different objects with the same spectrum, the spatial relationship can help interpret remote sensing objects more accurately. Compared with traditional remote sensing object recognition methods, deep learning has the advantages of high accuracy and strong generalizability regarding scene classification and semantic segmentation. However, it is difficult to simultaneously recognize remote sensing objects and their spatial relationships end-to-end relying only on existing deep learning networks. To address this problem, we propose a multi-scale remote sensing image interpretation network, called the MSRIN. The architecture of the MSRIN is a parallel deep neural network based on a fully convolutional network (FCN), a U-Net, and a long short-term memory network (LSTM). The MSRIN recognizes remote sensing objects and their spatial relationships through three processes. First, the MSRIN defines a multi-scale remote sensing image caption strategy and simultaneously segments the same image using the FCN and U-Net on different spatial scales so that a two-scale hierarchy is formed. The outputs of the FCN and U-Net are masked to obtain the locations and boundaries of remote sensing objects. Second, using an attention-based LSTM, the network generates remote sensing image captions that include the remote sensing objects (nouns) and their spatial relationships described in natural language. Finally, we designed a remote sensing object recognition and correction mechanism that builds the relationship between nouns in captions and object mask graphs using an attention weight matrix, transferring the spatial relationships from captions to object mask graphs.
In other words, the MSRIN simultaneously realizes the semantic segmentation of remote sensing objects and the identification of their spatial relationships end-to-end. Experimental results demonstrate that the matching rate between samples and the mask graph increased by 67.37 percentage points, and the matching rate between nouns and the mask graph increased by 41.78 percentage points, compared to before correction. The proposed MSRIN has achieved remarkable results.
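The noun-to-mask matching step can be illustrated schematically: given an attention weight matrix over mask regions for each caption time step, each noun in the caption is linked to the region receiving the highest attention weight. This is a simplified sketch of the idea only, not the MSRIN implementation; the matrix values and function name are invented for illustration.

```python
import numpy as np

def match_nouns_to_masks(attention: np.ndarray, noun_positions) -> dict:
    """For each noun position in the caption, pick the mask region with the
    highest attention weight (rows = caption steps, cols = mask regions)."""
    return {t: int(np.argmax(attention[t])) for t in noun_positions}

# 5 caption steps attending over 3 mask regions; nouns occur at steps 1 and 3.
att = np.array([[.2, .5, .3],
                [.7, .2, .1],
                [.3, .3, .4],
                [.1, .1, .8],
                [.4, .3, .3]])
print(match_nouns_to_masks(att, [1, 3]))  # {1: 0, 3: 2}
```

Once each noun is bound to a mask, the spatial relation phrases between nouns in the caption can be carried over to the corresponding pairs of masks.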

