Integrating Recent Land Cover Mapping Efforts to Update the National Gap Analysis Program's Species Habitat Map

Author(s):  
A. J. McKerrow ◽  
A. Davidson ◽  
T. S. Earnhardt ◽  
A. L. Benson

Over the past decade, great progress has been made in developing national-extent land cover mapping products to address natural resource issues. One of the core products of the GAP Program is range-wide species distribution models for nearly 2000 terrestrial vertebrate species in the U.S. We rely on deductive modeling of habitat affinities using these products to create models of habitat availability. That approach requires a thematically rich and ecologically meaningful map legend to support the modeling effort. In this work, we tested the integration of the Multi-Resolution Land Characteristics Consortium's National Land Cover Database 2011 and LANDFIRE's Disturbance Products to update the 2001 National GAP Vegetation Dataset to reflect 2011 conditions. The revised product can then be used to update the species models.

We tested the update approach in three geographic areas (Northeast, Southeast, and Interior Northwest). We used the NLCD product to identify areas where the cover type mapped in 2011 differed from the 2001 land cover map. We used Google Earth and ArcGIS base maps as reference imagery to label areas identified as "changed" with the appropriate class from our map legend. Areas mapped as urban or water in the 2011 NLCD map that were mapped differently in the 2001 GAP map were accepted without further validation and recoded to the corresponding GAP class. We used LANDFIRE's Disturbance Products to identify changes resulting from recent disturbance and to inform the reassignment of areas to their updated thematic label. We ran habitat models for three species: Lewis's Woodpecker (<i>Melanerpes lewis</i>), the White-tailed Jackrabbit (<i>Lepus townsendii</i>), and the Brown-headed Nuthatch (<i>Sitta pusilla</i>). For each of the three species we found important differences in the amount and location of suitable habitat between the 2001 and 2011 habitat maps.
Specifically, Brown-headed Nuthatch habitat in 2011 was 14% lower than the 2001 modeled habitat, whereas Lewis's Woodpecker habitat increased by 4%. The White-tailed Jackrabbit had a net change of −1% (an 11% decline against a 10% gain). For that species, the locally important transitions were related to the opening of forest following burning and to shrub regeneration following harvest. In the Southeast, updates related to timber management and urbanization were locally important.
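The urban/water acceptance rule described above can be sketched as follows. This is a minimal illustration with hypothetical class codes (the real NLCD and GAP legends use different values), assuming the two maps are co-registered NumPy arrays:

```python
import numpy as np

# Hypothetical class codes for illustration; real NLCD/GAP legends differ.
NLCD_URBAN, NLCD_WATER = 24, 11
GAP_URBAN, GAP_WATER = 900, 910

def update_gap_map(gap_2001, nlcd_2011):
    """Accept NLCD 2011 urban/water pixels without further validation,
    recoding them to the corresponding GAP class."""
    updated = gap_2001.copy()
    to_urban = (nlcd_2011 == NLCD_URBAN) & (gap_2001 != GAP_URBAN)
    to_water = (nlcd_2011 == NLCD_WATER) & (gap_2001 != GAP_WATER)
    updated[to_urban] = GAP_URBAN
    updated[to_water] = GAP_WATER
    return updated

gap_2001 = np.array([[100, 100],
                     [900, 100]])
nlcd_2011 = np.array([[24, 50],
                      [24, 11]])
updated = update_gap_map(gap_2001, nlcd_2011)   # [[900, 100], [900, 910]]
```

Pixels flagged by the NLCD comparison that are neither urban nor water would instead go through the reference-imagery labeling step described in the abstract.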

2020 ◽  
Vol 12 (4) ◽  
pp. 602 ◽  
Author(s):  
Qingyu Li ◽  
Chunping Qiu ◽  
Lei Ma ◽  
Michael Schmitt ◽  
Xiao Zhu

The remote-sensing-based mapping of land cover at extensive scales, e.g., of whole continents, is still a challenging task because of the need for sophisticated pipelines that combine every step from data acquisition to land cover classification. Utilizing the Google Earth Engine (GEE), which provides a catalog of multi-source data and a cloud-based environment, this research generates a land cover map of the whole African continent at 10 m resolution. This land cover map could provide a large-scale base layer for more detailed local climate zone mapping of urban areas, which are the focus of many studies. In this regard, we provide a free download link for our land cover maps of African cities at the end of this paper. It is shown that our product achieves an overall accuracy of 81% for five classes, which is superior to the existing 10 m land cover product FROM-GLC10 in detecting the urban class in city areas and in identifying the boundaries between trees and low plants in rural areas. The best data input configurations were carefully selected based on a comparison of results from different input sources, which include Sentinel-2, Landsat-8, the Global Human Settlement Layer (GHSL), Night Time Light (NTL) data, the Shuttle Radar Topography Mission (SRTM), and MODIS Land Surface Temperature (LST). We provide a further investigation of the importance of individual features derived from a Random Forest (RF) classifier. To study the influence of sampling strategies on land cover mapping performance, we designed a transferability analysis experiment, which has not been adequately addressed in the current literature. In this experiment, we test whether models trained on several cities contain valuable information for classifying a different city. We found that samples of the urban class have better reusability than those of the natural land cover classes, i.e., trees, low plants, bare soil or sand, and water.
After experimental evaluation of different land cover classes across different cities, we conclude that continental land cover mapping results can be considerably improved when training samples of natural land cover classes are collected and combined from areas covering each Köppen climate zone.
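The transferability experiment reduces to leave-one-city-out evaluation: pool training samples from all cities except one, then evaluate on the held-out city. A minimal sketch, using a nearest-centroid classifier as a lightweight stand-in for the paper's Random Forest; the city names and features below are invented for illustration:

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Tiny stand-in classifier (the study itself uses a Random Forest)."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def transferability(cities, target):
    """Train on samples pooled from every city except `target`,
    then score on the held-out city (leave-one-city-out)."""
    X_train = np.vstack([X for name, (X, y) in cities.items() if name != target])
    y_train = np.hstack([y for name, (X, y) in cities.items() if name != target])
    X_test, y_test = cities[target]
    pred = nearest_centroid_predict(X_train, y_train, X_test)
    return (pred == y_test).mean()

cities = {
    "A": (np.array([[0., 0.], [1., 1.], [10., 10.], [11., 11.]]), np.array([0, 0, 1, 1])),
    "B": (np.array([[0., 1.], [1., 0.], [10., 11.], [11., 10.]]), np.array([0, 0, 1, 1])),
    "C": (np.array([[0.5, 0.5], [10.5, 10.5]]), np.array([0, 1])),
}
acc = transferability(cities, "C")   # 1.0 on this well-separated toy data
```

Repeating this for each class separately, as the paper does, reveals which classes (here, urban) transfer well between cities.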


2021 ◽  
Vol 13 (6) ◽  
pp. 1060
Author(s):  
Luc Baudoux ◽  
Jordi Inglada ◽  
Clément Mallet

CORINE Land Cover (CLC) and its by-products are considered a reference baseline for land-cover mapping over Europe and subsequent applications. CLC is currently produced, tediously, every six years from both the visual interpretation and the automatic analysis of a large amount of remote sensing imagery. Observing that various European countries regularly produce, in parallel, their own country-scale land-cover maps with their own specifications, we propose to infer CORINE Land Cover directly from an existing map, thereby steadily decreasing the updating time-frame. No additional remote sensing imagery is required. In this paper, we focus more specifically on translating a country-scale remotely sensed map, OSO (France), into CORINE Land Cover in a supervised way. OSO and CLC differ not only in nomenclature but also in spatial resolution. We jointly harmonize both dimensions using a contextual and asymmetrical Convolutional Neural Network with positional encoding. We show for various use cases that our method achieves superior performance to the traditional semantic-based translation approach, reaching 81% accuracy over all of France, close to the targeted 85% accuracy of CLC.
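For contrast, the semantic-based baseline amounts to a per-class correspondence table applied pixel by pixel, with no spatial context and no resolution change. A minimal sketch with invented class names (the real OSO and CLC nomenclatures are larger and do not align one-to-one):

```python
# Invented OSO -> CLC correspondences, for illustration only.
OSO_TO_CLC = {
    "dense_urban": "continuous_urban_fabric",
    "winter_crop": "non_irrigated_arable_land",
    "deciduous_forest": "broad_leaved_forest",
}

def semantic_translate(oso_map):
    """Per-pixel nomenclature lookup: no spatial context, no resampling."""
    return [[OSO_TO_CLC[label] for label in row] for row in oso_map]
```

A lookup of this kind cannot separate source classes whose target class depends on their surroundings, which is where a contextual network can gain.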


2018 ◽  
Vol 10 (8) ◽  
pp. 1212 ◽  
Author(s):  
Xiaohong Yang ◽  
Zhong Xie ◽  
Feng Ling ◽  
Xiaodong Li ◽  
Yihang Zhang ◽  
...  

Super-resolution land cover mapping (SRM) aims to generate land cover maps with fine spatial resolution from an original coarse-spatial-resolution remotely sensed image. The accuracy of land cover maps produced by existing SRM methods is often limited by errors in fraction images and by the uncertainty of spatial pattern models. To address these limitations, in this study we proposed a fuzzy c-means clustering (FCM)-based spatio-temporal SRM (FCM_STSRM) model that combines spectral, spatial, and temporal information into a single objective function. The spectral term is constructed with the FCM criterion, the spatial term with the maximal spatial dependence principle, and the temporal term is characterized by the land cover transition probabilities in the bitemporal land cover maps. The performance of the proposed FCM_STSRM method is assessed using data simulated from the National Land Cover Database and real Landsat images. Results of the two experiments show that the proposed FCM_STSRM method can decrease the influence of fraction errors by using the original images directly as input, and reduce spatial pattern uncertainty by inheriting land cover information from the existing fine-resolution land cover map. Compared with hard classification and the FCM_SRM method applied to mono-temporal images, the proposed FCM_STSRM method produced fine-resolution land cover maps with high accuracy, showing the efficiency and potential of the novel approach for producing fine-spatial-resolution maps from coarse-resolution remotely sensed images.
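The three terms of such an objective can be sketched as an energy over a candidate fine-resolution labeling. This is a deliberately simplified, hard-assignment version with scalar spectra and hypothetical weights; the actual FCM_STSRM model optimizes fuzzy memberships rather than hard labels:

```python
import numpy as np

def combined_energy(labels, image, centroids, prev_labels, trans_prob,
                    w_spatial=1.0, w_temporal=1.0):
    """Energy of a candidate labeling: lower is better. `labels` and
    `prev_labels` are integer class maps, `centroids` holds one (scalar)
    spectral centroid per class, `trans_prob` is a class-transition matrix."""
    # Spectral term: squared distance to the class centroid
    # (the FCM criterion, shown in hard-assignment form).
    spectral = np.sum((image - centroids[labels]) ** 2)
    # Spatial term: penalize neighbours with different labels
    # (maximal spatial dependence principle).
    spatial = (np.sum(labels[:, 1:] != labels[:, :-1])
               + np.sum(labels[1:, :] != labels[:-1, :]))
    # Temporal term: negative log-probability of each class transition
    # from the earlier land cover map.
    temporal = -np.sum(np.log(trans_prob[prev_labels, labels]))
    return spectral + w_spatial * spatial + w_temporal * temporal
```

On a toy scene, a labeling consistent with both the spectra and the previous map scores a lower energy than an inconsistent one, which is the behavior the combined objective is designed to reward.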


2019 ◽  
Vol 11 (24) ◽  
pp. 3023 ◽  
Author(s):  
Shuai Xie ◽  
Liangyun Liu ◽  
Xiao Zhang ◽  
Jiangning Yang ◽  
Xidong Chen ◽  
...  

The Google Earth Engine (GEE) has emerged as an essential cloud-based platform for land-cover classification, as it provides massive amounts of multi-source satellite data and a high-performance computation service. This paper proposes an automatic land-cover classification method using time-series Landsat data on the GEE platform. The Moderate Resolution Imaging Spectroradiometer (MODIS) land-cover products (MCD12Q1.006) with the International Geosphere-Biosphere Programme (IGBP) classification scheme were used to provide accurate training samples, selected using pixel-filtering and spectral-filtering rules, which resulted in samples with an overall accuracy (OA) of 99.2%. Two types of spectral-temporal features (percentile-composited features and median-composited monthly features) generated from all available Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) data from the year 2010 ± 1 were used as input features to a Random Forest (RF) classifier. The results showed that the monthly features outperformed the percentile features, giving an average OA of 80% against 77%. In addition, the monthly features composited using the median outperformed those composited using the maximum Normalized Difference Vegetation Index (NDVI), with an average OA of 80% against 78%. The proposed method is therefore able to generate accurate land-cover maps automatically on the GEE platform, which is promising for regional and global land-cover mapping.
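The two feature types can be sketched in a few lines, assuming a cloud-masked (time, height, width) array with NaNs marking masked observations; the percentile set below is illustrative, not the paper's exact configuration:

```python
import numpy as np

def percentile_features(stack, percentiles=(10, 25, 50, 75, 90)):
    """Per-pixel percentile composites over a (time, height, width) stack.
    NaNs (cloud-masked observations) are ignored."""
    return np.stack([np.nanpercentile(stack, p, axis=0) for p in percentiles])

def monthly_median_features(stack, months):
    """Median composite for each calendar month; `months` is an array of
    month numbers (1-12) aligned with the time axis."""
    return np.stack([np.nanmedian(stack[months == m], axis=0)
                     for m in np.unique(months)])
```

Stacking these per-pixel statistics for each spectral band yields the feature vectors fed to the RF classifier; the monthly medians preserve seasonal timing, which is one plausible reason they outperform the percentile composites.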


2020 ◽  
Vol 12 (8) ◽  
pp. 1235
Author(s):  
Jesús A. Anaya ◽  
Víctor H. Gutiérrez-Vélez ◽  
Ana M. Pacheco-Pascagaza ◽  
Sebastián Palomino-Ángel ◽  
Natasha Han ◽  
...  

Tropical forests are disappearing at unprecedented rates, but the drivers behind this transformation are not always clear. This limits decision-making processes and the effectiveness of forest management policies. In this paper, we address the extent and drivers of deforestation of the Chocó biodiversity hotspot, which has not received much scientific attention despite its high levels of plant diversity and endemism. The climate is characterized by persistent cloud cover, which is a challenge for land cover mapping from optical satellite imagery. By using Google Earth Engine to select pixels with minimal cloud content and applying a random forest classifier to Landsat and Sentinel data, we produced a wall-to-wall land cover map, enabling a diagnosis of the status and drivers of forest loss in the region. Analysis of these new maps together with information on illicit crops and alluvial mining uncovered the pressure on intact forests. According to Global Forest Change (GFC) data, 2324 km2 were deforested in this area from 2001 to 2018, reaching a maximum in 2016 and 2017. We found that 68% of the area is covered by broadleaf forests (67,473 km2) and 15% by shrublands (14,483 km2), the latter with enormous potential for restoration projects. This paper provides new insight into the conservation of this exceptional forest, with a discussion of the drivers of forest loss, where illicit crops and alluvial mining were found to be responsible for 60% of forest loss.
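Per-pixel selection of the least-cloudy observation, the compositing idea used to cope with persistent cloud cover, can be sketched as follows. This is a simplified array version; the actual implementation works on GEE image collections with per-pixel cloud scores:

```python
import numpy as np

def min_cloud_composite(stack, cloud_score):
    """For each pixel, keep the observation with the lowest cloud score.
    `stack` and `cloud_score` are (time, height, width) arrays."""
    best = cloud_score.argmin(axis=0)        # index of clearest date per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

The resulting composite mixes acquisition dates across the scene, trading temporal consistency for a usable cloud-minimal mosaic.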


2019 ◽  
Vol 11 (16) ◽  
pp. 1907 ◽  
Author(s):  
Mohammad Mardani ◽  
Hossein Mardani ◽  
Lorenzo De Simone ◽  
Samuel Varas ◽  
Naoki Kita ◽  
...  

Timely and accurate monitoring of land cover and land use is an essential tool for countries to achieve sustainable food production. However, many developing countries struggle to monitor land resources efficiently due to a lack of financial support and limited access to adequate technology. This study aims to fill that gap by developing a land cover solution that is free of cost. A fully automated framework for land cover mapping was developed for the African country of Lesotho using 10-m resolution open-access satellite images and machine learning (ML) techniques. Sentinel-2 satellite images were accessed through Google Earth Engine (GEE) for initial processing and feature extraction at a national level. The Food and Agriculture Organization's land cover of Lesotho (FAO LCL) data were used to train support vector machine (SVM) and bagged trees (BT) classifiers. The SVM classified urban and agricultural lands with 62% and 67% accuracy, respectively, while the BT classified the two categories with 81% and 65% accuracy. The trained models can provide precise LC maps in minutes or hours and can serve developing countries as a viable alternative to traditional geographic information system (GIS) methods, which are often labor-intensive and time-consuming, require the acquisition of very high-resolution commercial satellite imagery, and call for large budgets.
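Assuming the quoted class accuracies are per-class (producer's) accuracies, the figure for one class is computed from reference and predicted labels as follows; the labels below are invented for illustration:

```python
def producers_accuracy(y_true, y_pred, cls):
    """Of the reference samples of class `cls`, the fraction the
    classifier labelled correctly (the complement of omission error)."""
    hits = total = 0
    for t, p in zip(y_true, y_pred):
        if t == cls:
            total += 1
            hits += (t == p)
    return hits / total

y_true = ["urban", "urban", "agric", "agric", "agric"]
y_pred = ["urban", "agric", "agric", "agric", "urban"]
urban_acc = producers_accuracy(y_true, y_pred, "urban")   # 0.5
```

Reporting accuracy per class rather than overall is what makes the SVM/BT comparison above meaningful, since the two classifiers trade off differently between the urban and agricultural classes.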


2020 ◽  
Author(s):  
Laura Bindereif ◽  
Tobias Rentschler ◽  
Martin Batelheim ◽  
Marta Díaz-Zorita Bonilla ◽  
Philipp Gries ◽  
...  

Land cover information plays an essential role in resource development, environmental monitoring and protection. Amongst other natural resources, soils and soil properties are strongly affected by land cover and land cover change, which can lead to soil degradation. Remote sensing techniques are very suitable for spatio-temporal land cover mapping and change detection, and remote sensing programs have established vast data archives. Machine learning applications provide appropriate algorithms to analyse such amounts of data efficiently and with accurate results. However, machine learning methods require specific sampling techniques and are usually designed for balanced datasets with an even training sample frequency. Most real-world datasets are imbalanced, though, and methods are required to reduce this imbalance with synthetic sampling. Synthetic sampling methods increase the number of samples in the minority class and/or decrease the number in the majority class to achieve higher model accuracy. The Synthetic Minority Over-Sampling Technique (SMOTE), used in many machine learning applications, is a method to generate synthetic samples and balance a dataset. In the middle Guadalquivir basin, Andalusia, Spain, we used random forests with Landsat images from 1984 to 2018 as covariates to map land cover change with the Google Earth Engine. The sampling design was based on stratified random sampling according to the CORINE land cover classification of 2012. The land cover classes in our study were arable land, permanent crops (plantations), pastures/grassland, forest and shrub. Artificial surfaces and water bodies were excluded from modelling. However, the 130 training samples were imbalanced: the classes pasture (7 samples) and shrub (13 samples) had fewer samples than the other classes (48, 47 and 16 samples). This led to misclassifications and negatively affected the classification accuracy.
Therefore, we applied SMOTE to increase the number of samples and the classification accuracy of the model. Preliminary results are promising and show an increase in classification accuracy, especially for the previously underrepresented classes pasture and shrub. This corresponds to the results of studies with other objectives that also find synthetic sampling methods improve the performance of classification frameworks.
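SMOTE's core step can be sketched in a few lines: each synthetic minority sample is a random interpolation between an existing minority sample and one of its k nearest minority neighbours. The parameters below are illustrative:

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    a randomly chosen minority sample and one of its k nearest minority
    neighbours (minimal SMOTE sketch)."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.sum((X_min - X_min[i]) ** 2, axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation fraction in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

In practice, the balanced training set is the concatenation of the original minority samples and the synthetic ones, applied here to the underrepresented pasture and shrub classes.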


2020 ◽  
Author(s):  
Runmin Dong ◽  
Haohuan Fu

Land cover mapping has made drastic progress with the improvement of the resolution of remote sensing images in recent research. However, given the various limitations of public land cover datasets, human effort in interpreting and labelling images still accounts for a significant part of the total cost. For example, it took 10 months and $1.3 million to label about 160,000 square kilometers in the Chesapeake Bay watershed in the northeastern United States. It is therefore important to consider the human interpretation cost of large-scale land cover mapping.

In this work, we explore a possible solution for achieving 3-m resolution land cover mapping without any human interpretation. This is made possible by a 10-m resolution global land cover map developed for the year 2017. We propose a complete workflow and a novel deep-learning-based network to transform the imperfect 10-m resolution land cover map into a preferable 3-m resolution land cover map, which should lower the research threshold in this community and serve as an example for similar studies. As we use imperfect training labels, a well-designed and robust approach is strongly needed. We integrate a deep high-resolution network with instance normalization, adaptive histogram equalization, and a pruning process for large-scale land cover mapping.

Our proposed approach achieves an overall accuracy (OA) of 86.83% on the test data set for China, improving on the previous state-of-the-art 10-m resolution land cover mapping product by 5.35% in OA. Moreover, we present detailed results obtained over three megacities in China as examples and demonstrate the effectiveness of our proposed approach for 3-m resolution large-scale land cover mapping.
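The naive baseline for such a resolution transformation is nearest-neighbour upsampling, which a learned model must beat because it cannot redraw any boundary at the finer scale. A sketch for an integer zoom factor (the real 10 m to 3 m case additionally requires resampling onto a common grid):

```python
import numpy as np

def naive_upsample(labels, factor):
    """Nearest-neighbour label upsampling: every coarse cell becomes a
    factor x factor block of identical fine cells. A learned model improves
    on this only by using fine-resolution imagery to redraw boundaries."""
    return np.repeat(np.repeat(labels, factor, axis=0), factor, axis=1)
```

Comparing a learned 3-m product against this blocky baseline isolates the accuracy actually gained from the imagery rather than from the coarse labels alone.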


2014 ◽  
Vol 18 (3) ◽  
pp. 287
Author(s):  
Khil Ha Lee ◽  
Sung Wook Kim ◽  
Eun Kyeong Choi ◽  
Kyu Hwan Lee
