Classification Method of High-Resolution Remote Sensing Scenes Based on Fusion of Global and Local Deep Features

2019 ◽  
Vol 39 (3) ◽  
pp. 0301002
Author(s):  
龚希 Gong Xi ◽  
吴亮 Wu Liang ◽  
谢忠 Xie Zhong ◽  
陈占龙 Chen Zhanlong ◽  
刘袁缘 Liu Yuanyuan ◽  
...  
2020 ◽  
Vol 12 (22) ◽  
pp. 3845
Author(s):  
Zhiyu Xu ◽  
Yi Zhou ◽  
Shixin Wang ◽  
Litao Wang ◽  
Feng Li ◽  
...  

The real-time, accurate, and refined monitoring of urban green space is of great significance for the construction of the urban ecological environment and the improvement of urban ecological benefits. High-resolution imagery provides abundant information about ground objects, but it also makes the spectral and spatial signature of urban green space more complicated, and existing classification methods struggle to meet the accuracy and automation requirements of high-resolution images. This paper proposes a deep learning classification method for urban green space based on phenological-feature constraints, in order to make full use of the spectral and spatial information provided by high-resolution remote sensing images (GaoFen-2) acquired in different periods. Vegetation phenological features were added as auxiliary bands to the deep learning network for training and classification. We used HRNet (High-Resolution Network) as our model and introduced the Focal Tversky loss function to address the sample-imbalance problem. The experimental results show that introducing phenological features into HRNet training effectively improves urban green space classification accuracy by resolving the misclassification of evergreen and deciduous trees. The F1-scores of deciduous trees, evergreen trees, and grassland improved by 0.48%, 4.77%, and 3.93%, respectively, which demonstrates that combining vegetation phenology with high-resolution remote sensing imagery improves the results of deep-learning-based urban green space classification.
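The Focal Tversky loss mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the weights alpha = 0.7, beta = 0.3 and the focal exponent gamma = 4/3 are commonly cited defaults and are assumptions here, not values reported by this study:

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=4/3, eps=1e-7):
    """Focal Tversky loss for a single class.

    y_true : binary ground-truth mask (0/1), any shape.
    y_pred : predicted probabilities in [0, 1], same shape.
    alpha/beta weight false negatives/false positives respectively,
    letting the loss penalize missed pixels of a rare class more
    heavily; gamma > 1 focuses training on hard, poorly-segmented
    examples, which is how the loss counters class imbalance.
    """
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    tp = np.sum(y_true * y_pred)                 # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))         # soft false negatives
    fp = np.sum((1.0 - y_true) * y_pred)         # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

A perfect prediction drives the Tversky index to 1 and the loss to 0; with alpha > beta, under-segmenting a minority class (false negatives) costs more than over-segmenting it.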


Author(s):  
C. K. Li ◽  
W. Fang ◽  
X. J. Dong

With the development of remote sensing technology, the spatial, spectral, and temporal resolution of remote sensing data has greatly improved. How to efficiently process and interpret the massive volume of high-resolution remote sensing imagery of ground objects, rich in spatial geometry and texture information, has become a focus and a difficulty in remote sensing research. This paper presents an object-oriented, rule-based classification method for remote sensing data. By discovering and mining the rich spectral and spatial characteristics of high-resolution remote sensing images, a multi-level network of image objects is established for segmentation and classification, achieving accurate and fast classification of ground targets together with accuracy assessment. Taking WorldView-2 imagery of the Zangnan area as the study object, the object-oriented rule-based classification method was verified experimentally: combining the mean-variance method, the maximum-area method, and accuracy comparison, three optimal segmentation scales were selected and a multi-level image-object network hierarchy was established for the classification experiments. The results show that the object-oriented rule-based method yields high-resolution image classification results close to those of visual interpretation and achieves higher classification accuracy. Its overall accuracy and Kappa coefficient were 97.38% and 0.9673, respectively, which are 6.23% and 0.078 higher than the object-oriented SVM method, and 7.96% and 0.0996 higher than the object-oriented KNN method. For buildings, the extraction precision and user's accuracy were 18.39% and 3.98% higher than the object-oriented SVM method, and 21.27% and 14.97% higher than the object-oriented KNN method.
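The overall accuracy and Kappa coefficient reported above are standard confusion-matrix statistics. A minimal sketch of how they are computed (the function name is my own, not from the paper):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of samples whose reference class is i and
    whose mapped class is j.
    """
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside overall accuracy in classification studies like this one.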


2017 ◽  
Vol 9 (4) ◽  
pp. 329 ◽  
Author(s):  
Haiyan Gu ◽  
Haitao Li ◽  
Li Yan ◽  
Zhengjun Liu ◽  
Thomas Blaschke ◽  
...  

2020 ◽  
Vol 9 (4) ◽  
pp. 254
Author(s):  
Xingang Zhang ◽  
Haowen Yan ◽  
Liming Zhang ◽  
Hao Wang

Content integrity of high-resolution remote sensing (HRRS) images is a precondition of their usability. Existing HRRS image integrity authentication methods are mostly binary decision processes, which cannot provide further interpretable information (e.g., tamper localization or tamper-type determination). For this reason, a robust HRRS image integrity authentication algorithm using perceptual hashing, considering both global and local features, is proposed in this paper. It extracts global features using Zernike moments, exploiting their efficient recognition of texture information, while Features from Accelerated Segment Test (FAST) key points are applied to local feature construction and tamper localization. By applying multi-feature combination to the integrity authentication of HRRS images, the authentication process is more convincing than in existing algorithms, and an interpretable authentication result can be given. The experimental results show that the proposed algorithm is highly robust to content-preserving operations, strongly sensitive to content-changing operations, and localizes tampering more precisely than existing algorithms.
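The paper's hash construction (Zernike moments globally, FAST key points locally) is not reproduced here, but the two-stage decision such a global-plus-local scheme implies can be illustrated with a minimal sketch. Everything below is a hypothetical simplification: binary hash vectors compared by normalized Hamming distance, with an assumed threshold tau:

```python
import numpy as np

def hamming_distance(h1, h2):
    """Normalized Hamming distance between two binary hash vectors."""
    h1, h2 = np.asarray(h1), np.asarray(h2)
    return float(np.mean(h1 != h2))

def authenticate(global_orig, global_recv, local_orig, local_recv, tau=0.1):
    """Two-stage perceptual-hash authentication.

    Stage 1: the global hash decides whether the image content is
    intact (distance <= tau tolerates content-preserving operations
    such as lossless recompression).
    Stage 2: if not, per-block local hashes localize the tampering.
    Returns (is_authentic, indices_of_tampered_blocks).
    """
    if hamming_distance(global_orig, global_recv) <= tau:
        return True, []
    tampered = [i for i, (a, b) in enumerate(zip(local_orig, local_recv))
                if hamming_distance(a, b) > tau]
    return False, tampered
```

The interpretable output mentioned in the abstract corresponds to the second return value: instead of a bare accept/reject, the caller learns which regions changed.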


2019 ◽  
Vol 11 (1) ◽  
pp. 69 ◽  
Author(s):  
Zachary L. Langford ◽  
Jitendra Kumar ◽  
Forrest M. Hoffman ◽  
Amy L. Breen ◽  
Colleen M. Iversen

Land cover datasets are essential for modeling and analysis of Arctic ecosystem structure and function and for understanding land–atmosphere interactions at high spatial resolutions. However, most Arctic land cover products are generated at a coarse resolution, often limited by cloud cover, polar darkness, and the poor availability of high-resolution imagery. A multi-sensor remote sensing-based deep learning approach was developed for generating high-resolution (5 m) vegetation maps for the western Alaskan Arctic on the Seward Peninsula, Alaska. The fusion of hyperspectral, multispectral, and terrain datasets was performed using unsupervised and supervised classification techniques over a ∼343 km² area, and a high-resolution (5 m) vegetation classification map was generated. An unsupervised technique was developed to classify high-dimensional remote sensing datasets into cohesive clusters. We employed a quantitative method to add supervision to the unlabeled clusters, producing a fully labeled vegetation map. We then developed convolutional neural networks (CNNs) using the multi-sensor fusion datasets to map vegetation distributions using both the original classes and the classes produced by the unsupervised classification method. To validate the resulting CNN maps, vegetation observations were collected at 30 field plots during the summer of 2016, and the resulting vegetation products were evaluated against them for accuracy. Our analysis indicates that the CNN models based on the labels produced by the unsupervised classification method provided the most accurate mapping of vegetation types, increasing the validation score (i.e., precision) from 0.53 to 0.83 when evaluated against field vegetation observations.
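The abstract does not detail the "quantitative method to add supervision to the unlabeled clusters." One common way to do this, shown here purely as a hypothetical sketch, is a majority vote over field observations that fall inside each cluster:

```python
import numpy as np

def label_clusters_by_majority(cluster_ids, plot_labels):
    """Assign a vegetation class to each unsupervised cluster.

    cluster_ids : cluster index at each field plot, shape (n_plots,)
    plot_labels : observed vegetation class at each plot, shape (n_plots,)
    Each cluster takes the class most frequently observed at the
    field plots it contains. Returns {cluster_id: majority_label}.
    """
    mapping = {}
    for c in np.unique(cluster_ids):
        labels, counts = np.unique(plot_labels[cluster_ids == c],
                                   return_counts=True)
        mapping[int(c)] = labels[np.argmax(counts)]  # most frequent class
    return mapping
```

With a mapping like this in hand, the cluster map becomes a fully labeled vegetation map that can serve as training data for the CNNs described above.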

