High spatial-resolution classification of urban surfaces using a deep learning method

2021 · pp. 107949
Author(s): Yifan Fan, Xiaotian Ding, Jindong Wu, Jian Ge, Yuguo Li

2021 · Vol 13 (17) · pp. 3460
Author(s): Yuling Chen, Wentao Teng, Zhen Li, Qiqi Zhu, Qingfeng Guan

By labelling high spatial resolution (HSR) images with specific semantic classes according to their geographical properties, scene classification has proven to be an effective method for semantic interpretation of HSR remote sensing images. Deep learning is widely applied to HSR remote sensing scene classification. Most deep-learning-based scene classification methods assume that the training and test data come from the same dataset or follow similar feature distributions. In practical applications, however, this assumption is difficult to guarantee, and repeating data annotation and network design for every new dataset is time-consuming and labor-intensive. Neural architecture search (NAS) can automate the redesign of a baseline network, but traditional NAS lacks the ability to generalize to different settings and tasks. In this paper, a novel neural architecture search framework, the spatial generalization neural architecture search (SGNAS) framework, is proposed. It applies spatially generalized NAS to cross-domain scene classification of HSR images to bridge the domain gap. SGNAS can automatically search for an architecture suited to HSR image scene classification, and the resulting network follows design principles similar to those of manually designed networks, allowing it to migrate to different tasks. To obtain a simple, low-dimensional search space, the traditional NAS search space was optimized with a human-in-the-loop method, and the optimized search space was then generalized to extend it to different tasks. The experimental results demonstrate that the network found by the SGNAS framework generalizes well and is effective for cross-domain scene classification of HSR images in terms of both accuracy and time efficiency.
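The core NAS loop the abstract describes can be illustrated in miniature. This is a toy random-search sketch over a deliberately small, low-dimensional search space, loosely in the spirit of constraining the space before searching; the space, the candidate options, and the proxy scoring function are all hypothetical stand-ins, not the authors' implementation.

```python
import random

# Constrained, low-dimensional search space (hypothetical options).
SEARCH_SPACE = {
    "depth":  [8, 14, 20],              # number of blocks
    "width":  [16, 32, 64],             # base channel count
    "block":  ["residual", "bottleneck"],
    "kernel": [3, 5],
}

def sample_architecture(rng):
    """Sample one candidate architecture from the constrained space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Hypothetical proxy objective: stands in for the validation accuracy
    a candidate would obtain after brief training on the source domain."""
    score = 0.5
    score += 0.01 * arch["depth"]
    score += 0.002 * arch["width"]
    score += 0.05 if arch["block"] == "residual" else 0.0
    return score

def random_search(n_trials=50, seed=0):
    """Evaluate n_trials sampled candidates and keep the best one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        s = proxy_score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

best, score = random_search()
print(best, round(score, 3))
```

In a real NAS pipeline the proxy score is replaced by actual short training runs, and the search strategy is typically smarter than uniform random sampling; the sketch only shows the sample-evaluate-keep-best structure.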


2021 · Vol 13 (3) · pp. 364
Author(s): Han Gao, Jinhui Guo, Peng Guo, Xiuwan Chen

Recently, deep learning has become the most innovative trend in a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification with traditional convolutional neural networks (CNNs) and sliding windows is computationally expensive and produces coarse results. Moreover, although such supervised learning approaches perform well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised settings where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks of different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to trade off weak semantics against strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and to optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods, with an excellent inference time (11.3 s/ha).
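The GEOBIA boundary-optimization step described above can be sketched as a segment-wise majority vote: noisy per-pixel CNN predictions are aggregated within each precomputed segment, so the final class boundaries follow object boundaries. This is a minimal sketch assuming a segmentation is already available; the toy arrays and labels are stand-ins, not the paper's data.

```python
import numpy as np

def majority_vote_by_segment(pred, segments, n_classes):
    """pred, segments: 2-D integer arrays of equal shape.
    Assigns every pixel of a segment that segment's most frequent
    predicted class, snapping boundaries to the segmentation."""
    out = np.empty_like(pred)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        counts = np.bincount(pred[mask], minlength=n_classes)
        out[mask] = counts.argmax()
    return out

# Toy example: two segments, with one mislabelled pixel inside segment 0.
pred = np.array([[0, 0, 1],
                 [0, 1, 1],
                 [0, 1, 1]])
segments = np.array([[0, 0, 1],
                     [0, 0, 1],
                     [0, 1, 1]])
refined = majority_vote_by_segment(pred, segments, n_classes=2)
print(refined)  # the stray 1 inside segment 0 is voted away
```

In practice the segmentation would come from an algorithm such as multiresolution segmentation or SLIC, and the vote can be weighted by per-pixel class probabilities instead of hard labels.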


Sensors · 2021 · Vol 21 (13) · pp. 4447
Author(s): Jisun Shin, Young-Heon Jo, Joo-Hyung Ryu, Boo-Keun Khim, Soo Mee Kim

Red tides caused by Margalefidinium polykrikoides occur continually along the southern coast of Korea, where many aquaculture cages are located, so prompt monitoring of bloom waters is required to prevent considerable damage. Satellite-based ocean-color sensors are widely used for detecting red tide blooms, but their low spatial resolution restricts coastal observations. In contrast, terrestrial sensors with high spatial resolution are good candidates, despite lacking the spectral resolution and bands suited to red tide detection. In this study, we developed a U-Net deep learning model for detecting M. polykrikoides blooms along the southern coast of Korea from PlanetScope imagery with a high spatial resolution of 3 m. The U-Net model was trained on four different datasets constructed from randomly or non-randomly chosen patches with different ratios of red tide to non-red tide pixels. Qualitative and quantitative assessments of the conventional red tide index (RTI) and the four U-Net models suggest that the U-Net model trained on a dataset of non-randomly chosen patches including non-red tide patches outperformed the RTI in sensitivity, precision, and F-measure, with increases of 19.84%, 44.84%, and 28.52%, respectively. The M. polykrikoides map derived from this U-Net shows the most reasonable red tide patterns across all water areas. Combining high-spatial-resolution imagery with deep learning approaches is a good solution for monitoring red tides over coastal regions.
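The non-random patch-selection idea described above can be sketched as follows: tile a labelled scene into fixed-size patches and keep those whose red tide pixel fraction passes a threshold, while also retaining pure background patches as negative examples. The patch size, threshold, and toy scene are assumptions for illustration, not the paper's values.

```python
import numpy as np

def tile(mask, patch=4):
    """Split a 2-D binary mask (1 = red tide) into non-overlapping patches."""
    h, w = mask.shape
    return [mask[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def select_patches(mask, patch=4, min_ratio=0.1, keep_background=True):
    """Keep patches whose red tide fraction >= min_ratio; optionally also
    keep all-background patches so the model sees negative examples."""
    chosen = []
    for p in tile(mask, patch):
        ratio = p.mean()
        if ratio >= min_ratio or (keep_background and ratio == 0.0):
            chosen.append(p)
    return chosen

# Toy labelled scene: a small red tide streak in the upper-left corner.
scene = np.zeros((8, 8), dtype=int)
scene[0:2, 0:3] = 1
patches = select_patches(scene, patch=4, min_ratio=0.1)
print(len(patches))
```

Varying `min_ratio` and `keep_background` is one way to construct the four differently balanced training datasets the abstract compares.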


2016 · Vol 36 (4) · pp. 0428001
Author(s): Liu Dawei (刘大伟), Han Ling (韩玲), Han Xiaoyong (韩晓勇)

2020 · Vol 12 (21) · pp. 3608
Author(s): Kelsey Warkentin, Douglas Stow, Kellie Uyeda, John O’Leary, Julie Lambert, ...

The purpose of this study is to map shrub distributions and estimate shrub cover fractions by classifying high-spatial-resolution aerial orthoimagery and light detection and ranging (LiDAR) data for portions of the highly disturbed coastal sage scrub landscapes of San Clemente Island, California. We used nine multi-temporal aerial orthoimage sets from the 2010 to 2018 period to map shrub cover. Pixel-based and object-based image analysis (OBIA) approaches to classifying growth forms were tested. Shrub fractional cover was estimated for 10, 20 and 40 m grid sizes and assessed for accuracy. The most accurate estimates of shrub cover were generated with the OBIA method using both multispectral brightness values and canopy height estimates from a normalized digital surface model (nDSM). Fractional cover products derived from the 2015 and 2017 orthoimagery with nDSM data incorporated yielded the highest accuracies. Major factors influencing the accuracy of the shrub maps and fractional cover estimates include the time of year and spatial resolution of the imagery, the type of classifier, the feature inputs to the classifier, and the grid size used for fractional cover estimation. While tracking actual changes in shrub cover over time was not the purpose of this study, it illustrates the importance of consistent mapping approaches and high-quality inputs, including very-high-spatial-resolution imagery and an nDSM.
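The fractional-cover estimation step described above can be sketched as block-averaging a fine-resolution binary shrub map onto a coarser grid. The pixel size and grid sizes here are assumptions for illustration (e.g. 1 m pixels aggregated to a 10 m grid would mean 10x10 pixel blocks), not the study's exact configuration.

```python
import numpy as np

def fractional_cover(binary_map, block):
    """binary_map: 2-D 0/1 array whose sides are multiples of `block`.
    Returns the per-cell shrub fraction on the coarser grid."""
    h, w = binary_map.shape
    assert h % block == 0 and w % block == 0, "grid must divide the raster"
    return (binary_map
            .reshape(h // block, block, w // block, block)
            .mean(axis=(1, 3)))

# Toy 4x4 binary shrub map aggregated to a 2x2 grid of cover fractions.
shrubs = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(fractional_cover(shrubs, block=2))
```

Running the same aggregation with several block sizes is how fractions at 10, 20, and 40 m grids would be derived from one classified raster.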

