How to improve 3-m resolution land cover mapping from imperfect 10-m resolution land cover mapping product?

Author(s): Runmin Dong, Haohuan Fu

Land cover mapping has made rapid progress as the resolution of remote sensing images has improved in recent research. However, owing to the limitations of public land cover datasets, human effort for interpreting and labelling images still accounts for a significant part of the total cost. For example, it took 10 months and $1.3 million to label about 160,000 square kilometers in the Chesapeake Bay watershed in the northeastern United States. Therefore, the human interpretation cost of large-scale land cover mapping must be taken into account.

In this work, we explore a possible solution to achieve 3-m resolution land cover mapping without any human interpretation. This is made possible by a 10-m resolution global land cover map developed for the year 2017. We propose a complete workflow and a novel deep-learning-based network to transform the imperfect 10-m resolution land cover map into a preferable 3-m resolution land cover map, which lowers the entry barrier for research in this community and provides an example for similar studies. Because we train on imperfect labels, a well-designed and robust approach is essential. We integrate a deep high-resolution network with instance normalization, adaptive histogram equalization, and a pruning process for large-scale land cover mapping.

Our proposed approach achieves an overall accuracy (OA) of 86.83% on the test data set for China, improving on the OA of the previous state-of-the-art 10-m resolution land cover mapping product by 5.35%. Moreover, we present detailed results obtained over three megacities in China as examples and demonstrate the effectiveness of our proposed approach for 3-m resolution large-scale land cover mapping.
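The abstract does not give implementation details, but the ingredients it names (adaptive histogram equalization as a preprocessing step, and a high-resolution segmentation network that uses instance normalization for robustness to radiometric variation across a large area) can be sketched roughly as below. This is a minimal illustration assuming PyTorch and scikit-image; the function and class names (`preprocess_tile`, `TinySegmenter`) are hypothetical stand-ins, not the authors' code or architecture.

```python
# Hedged sketch (assumption: PyTorch + scikit-image) of the preprocessing and
# normalization choices described in the abstract; not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
from skimage import exposure


def preprocess_tile(img: np.ndarray) -> np.ndarray:
    """Adaptive histogram equalization (CLAHE) per band, as a robustness step."""
    out = np.empty_like(img, dtype=np.float32)
    for b in range(img.shape[-1]):
        band = img[..., b].astype(np.float32)
        band = (band - band.min()) / (band.max() - band.min() + 1e-6)  # scale to [0, 1]
        out[..., b] = exposure.equalize_adapthist(band, clip_limit=0.03)
    return out


class ConvINBlock(nn.Module):
    """3x3 conv + instance normalization + ReLU; instance norm replaces batch
    norm to cope with radiometric differences between tiles of a large region."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.InstanceNorm2d(out_ch, affine=True),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TinySegmenter(nn.Module):
    """Toy stand-in for the deep high-resolution network: keeps full spatial
    resolution throughout and ends with a per-pixel land cover classifier."""
    def __init__(self, in_ch=3, n_classes=10, width=32):
        super().__init__()
        self.body = nn.Sequential(
            ConvINBlock(in_ch, width),
            ConvINBlock(width, width),
            ConvINBlock(width, width),
        )
        self.head = nn.Conv2d(width, n_classes, 1)

    def forward(self, x):
        return self.head(self.body(x))


# Usage: a 3-m RGB tile of 512x512 pixels -> per-pixel class logits.
tile = preprocess_tile(np.random.rand(512, 512, 3))
x = torch.from_numpy(tile).permute(2, 0, 1)[None]     # (1, 3, 512, 512)
logits = TinySegmenter()(x)                           # (1, n_classes, 512, 512)
```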

Author(s): M. Schmitt, J. Prexl, P. Ebel, L. Liebel, X. X. Zhu

Abstract. Fully automatic large-scale land cover mapping is one of the core challenges addressed by the remote sensing community. Usually, the basis of this task is formed by (supervised) machine learning models. However, in spite of recent growth in the availability of satellite observations, accurate training data remain comparatively scarce. On the other hand, numerous global land cover products exist and can often be accessed free of charge. Unfortunately, these maps are typically of much lower resolution than modern-day satellite imagery. Moreover, they always come with a significant amount of noise, as they are not ground truth but the products of previous (semi-)automatic prediction tasks. Therefore, this paper seeks to make a case for the application of weakly supervised learning strategies to get the most out of available data sources and achieve progress in high-resolution large-scale land cover mapping. Challenges and opportunities are discussed based on the SEN12MS dataset, for which some baseline results are also shown. These baselines indicate that there is still a lot of potential for dedicated approaches designed to deal with remote sensing-specific forms of weak supervision.
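To make the notion of a weak-supervision baseline more concrete, the sketch below shows one plausible setup: a per-pixel classifier trained against a coarse land cover product upsampled to the image grid, with label smoothing and an ignore index to soften the effect of label noise. This is my own assumption of what such a baseline could look like, written in PyTorch; it is not the authors' baseline, and the class count and ignore value are hypothetical.

```python
# Hedged sketch (assumption: PyTorch) of a weak-supervision baseline:
# pixel-wise training against noisy, upsampled low-resolution land cover labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10   # hypothetical nomenclature size
IGNORE = 255       # hypothetical "no data" value in the coarse product


def upsample_coarse_labels(coarse: torch.Tensor, size: tuple) -> torch.Tensor:
    """Nearest-neighbour upsampling of a coarse label grid to the image resolution.
    Each fine pixel inherits the (possibly wrong) class of its coarse parent cell."""
    return F.interpolate(coarse[:, None].float(), size=size, mode="nearest").long()[:, 0]


# Label smoothing treats the upsampled labels as weak targets rather than ground truth.
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE, label_smoothing=0.1)


def training_step(model, images, coarse_labels, optimizer):
    """One optimization step against weak (noisy, low-resolution) supervision."""
    logits = model(images)                                     # (B, C, H, W)
    targets = upsample_coarse_labels(coarse_labels, logits.shape[-2:])
    loss = criterion(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```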


2021, Vol. 13 (6), pp. 1060
Author(s): Luc Baudoux, Jordi Inglada, Clément Mallet

CORINE Land-Cover (CLC) and its by-products are considered a reference baseline for land-cover mapping over Europe and subsequent applications. CLC is currently produced, tediously, every six years from both the visual interpretation and the automatic analysis of a large volume of remote sensing images. Observing that various European countries regularly produce, in parallel, their own country-scale land-cover maps with their own specifications, we propose to infer CORINE Land-Cover directly from an existing map, thereby substantially shortening the update time frame. No additional remote sensing imagery is required. In this paper, we focus more specifically on translating a country-scale remotely sensed map, OSO (France), into CORINE Land Cover in a supervised way. OSO and CLC differ not only in nomenclature but also in spatial resolution. We jointly harmonize both dimensions using a contextual and asymmetrical Convolutional Neural Network with positional encoding. We show, for various use cases, that our method outperforms the traditional semantics-based translation approach, achieving 81% accuracy over all of France, close to the 85% accuracy targeted by CLC.
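To illustrate the idea of a contextual translation network with positional encoding, here is a rough sketch, again an assumption in PyTorch rather than the published model: a one-hot-encoded window of source-map classes is combined with normalized geographic coordinates to predict the target-map class of the central, coarser output cell. The class counts and the `MapTranslator` name are hypothetical.

```python
# Hedged sketch (assumption: PyTorch) of contextual map-to-map translation:
# a window of source-map classes plus a positional encoding predicts the
# target-map class of the central, coarser cell.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_SRC = 17   # hypothetical number of source (OSO-like) classes
N_TGT = 44   # hypothetical number of target (CLC-like) classes


class MapTranslator(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        # +2 input channels carry the normalized (latitude, longitude) position maps
        self.encoder = nn.Sequential(
            nn.Conv2d(N_SRC + 2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(width, N_TGT)

    def forward(self, src_classes, latlon):
        # src_classes: (B, H, W) integer class map; latlon: (B, 2) normalized coordinates
        x = F.one_hot(src_classes, N_SRC).permute(0, 3, 1, 2).float()
        pos = latlon[:, :, None, None].expand(-1, -1, x.shape[-2], x.shape[-1])
        feats = self.encoder(torch.cat([x, pos], dim=1)).flatten(1)
        return self.classifier(feats)         # logits over target classes


# Usage: translate a 32x32 source-map window into one target-cell class.
src = torch.randint(0, N_SRC, (4, 32, 32))
latlon = torch.rand(4, 2)                     # coordinates normalized to [0, 1]
logits = MapTranslator()(src, latlon)         # shape: (4, N_TGT)
```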

