More Than Meets the Eye: Using Sentinel-2 to Map Small Plantations in Complex Forest Landscapes

2018, Vol 10 (11), pp. 1693
Author(s): Keiko Nomura, Edward Mitchard

Many tropical forest landscapes are now complex mosaics of intact forests, recovering forests, tree crops, agroforestry, pasture, and crops. The small patch size of each land cover type makes these mosaics difficult to separate using satellite remote sensing data. We used Sentinel-2 data to conduct supervised classifications covering seven classes, including oil palm, rubber, and betel nut plantations in Southern Myanmar, based on an extensive training dataset derived from expert interpretation of WorldView-3 and UAV data. We used a Random Forest classifier with all 13 Sentinel-2 bands, as well as vegetation and texture indices, over an area of 13,330 ha. The median overall accuracy of 1000 iterations was >95% (95.5%–96.0%) against independent test data, even though the tree crop classes appear visually very similar at 20 m resolution. We conclude that Sentinel-2 data, which are freely available with very frequent (five-day) revisits, are able to differentiate these similar tree crop types. We suspect that this is due to the large number of spectral bands in Sentinel-2 data, which indicates great potential for wider application of Sentinel-2 to the classification of small land parcels without needing to resort to object-based classification of higher-resolution data.
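As a rough illustration of this workflow, the sketch below (assuming scikit-learn and NumPy, with synthetic stand-in data) combines band values with a vegetation index and repeats a random forest classification to report a median overall accuracy. The array shapes, band positions, and number of iterations are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: Sentinel-2 bands + a vegetation index fed to a random forest,
# repeated over several train/test splits to obtain a median overall accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training table: one row per labelled pixel, 13 Sentinel-2 band values.
n_pixels = 5000
bands = rng.random((n_pixels, 13))
labels = rng.integers(0, 7, size=n_pixels)          # seven land cover classes

# Append a vegetation index (NDVI from assumed red and NIR columns 3 and 7).
ndvi = (bands[:, 7] - bands[:, 3]) / (bands[:, 7] + bands[:, 3] + 1e-9)
features = np.column_stack([bands, ndvi])

# Repeat the split and classification; the paper reports the median of 1000 iterations.
accuracies = []
for seed in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=seed)
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, rf.predict(X_te)))

print("median overall accuracy:", np.median(accuracies))
```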

Author(s): Christina Corbane, Vasileios Syrris, Filip Sabo, Panagiotis Politis, Michele Melchiorri, ...

Abstract. Spatially consistent and up-to-date maps of human settlements are crucial for addressing policies related to urbanization and sustainability, especially in the era of an increasingly urbanized world. The availability of open and free Sentinel-2 data of the Copernicus Earth Observation program offers a new opportunity for wall-to-wall mapping of human settlements at a global scale. This paper presents a deep-learning-based framework for fully automated extraction of built-up areas at a spatial resolution of 10 m from a global composite of Sentinel-2 imagery. A multi-neuro modeling methodology is developed, building on a simple Convolutional Neural Network (CNN) architecture for pixel-wise classification of built-up areas. The core features of the proposed model are an input image patch of 5 × 5 pixels, adequate for describing built-up areas in Sentinel-2 imagery, and a lightweight topology of four 2D convolutional layers and two flattened layers with a total of 1,448,578 trainable parameters. The deployment of the model on the global Sentinel-2 image composite provides the most detailed and complete map of built-up areas for the reference year 2018. Validation of the results against an independent reference dataset of building footprints covering 277 sites across the world establishes the reliability of the built-up layer produced by the proposed framework and the robustness of the model. The results of this study contribute to cutting-edge research in automated built-up area mapping from remote sensing data and establish a new reference layer for analyzing the spatial distribution of human settlements across the rural–urban continuum.
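A minimal sketch of such a patch-based CNN, assuming the tensorflow.keras API, is shown below. The filter counts, kernel sizes, and activations are illustrative assumptions; only the 5 × 5 input patch, the four 2D convolutional layers, and the per-pixel built-up output follow the abstract.

```python
# Patch-based CNN sketch: classify the centre pixel of a 5 x 5 Sentinel-2 patch as built-up or not.
import tensorflow as tf
from tensorflow.keras import layers, models

N_BANDS = 10  # assumed number of Sentinel-2 bands fed to the model

model = models.Sequential([
    layers.Input(shape=(5, 5, N_BANDS)),                     # 5 x 5 pixel patch around the target pixel
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.Flatten(),                                         # collapse the patch into a feature vector
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                    # probability of "built-up" for the centre pixel
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```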


2020, Vol 12 (15), pp. 2422
Author(s): Lisa Knopp, Marc Wieland, Michaela Rättich, Sandro Martinis

Wildfires have major ecological, social and economic consequences. Information about the extent of burned areas is essential to assess these consequences and can be derived from remote sensing data. In recent years, several methods have been developed to segment burned areas with satellite imagery. However, these methods mostly require extensive preprocessing, while deep learning techniques, which have been applied successfully to other segmentation tasks, have yet to be fully explored. In this work, we combine sensor-specific and methodological developments from recent years and propose an automatic, deep-learning-based processing chain for burned area segmentation using mono-temporal Sentinel-2 imagery. In particular, we created a new training and validation dataset, which is used to train a convolutional neural network based on a U-Net architecture. We performed several tests on the input data and reached optimal network performance using the spectral bands of the visible, near-infrared and shortwave-infrared domains. The final segmentation model achieved an overall accuracy of 0.98 and a kappa coefficient of 0.94.
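The sketch below gives a compact U-Net of the kind described, assuming the tensorflow.keras API; the depth, filter counts, tile size, and number of input bands are illustrative assumptions rather than the network configuration used in the paper.

```python
# Compact U-Net sketch for binary (burned / not burned) segmentation of multi-band Sentinel-2 tiles.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    """Two 3x3 convolutions, as in the standard U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(tile_size=256, n_bands=6):
    inputs = layers.Input(shape=(tile_size, tile_size, n_bands))

    # Encoder: learn increasingly abstract features while halving resolution.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and fuse with the skip connections from the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # One-channel sigmoid output: per-pixel probability of "burned".
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```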


2018, Vol 11 (1), pp. 43
Author(s): Masoud Mahdianpari, Bahram Salehi, Fariba Mohammadimanesh, Saeid Homayouni, Eric Gill

Wetlands are among the most important ecosystems, providing desirable habitat for a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO) data are essential for natural resource management at both regional and national levels. However, accurate wetland mapping is challenging, especially on a large scale, given the heterogeneous and fragmented landscape of wetlands and the spectral similarity of differing wetland classes. Currently, precise, consistent, and comprehensive wetland inventories at the national or provincial scale are lacking globally, with most studies focused on generating local-scale maps from limited remote sensing data. Leveraging the computational power of Google Earth Engine (GEE) and the availability of high-spatial-resolution remote sensing data collected by the Copernicus Sentinels, this study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1 and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland and three non-wetland classes on the Island of Newfoundland, covering an approximate area of 106,000 km². Both pixel-based and object-based random forest (RF) classifications were implemented on the GEE platform and evaluated. The results revealed the superiority of the object-based approach over the pixel-based classification for wetland mapping. Although the classification using multi-year optical data was more accurate than that using SAR, the inclusion of both data types significantly improved the classification accuracies of wetland classes. In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with the multi-year summer SAR/optical composite using an object-based RF classification, wherein all wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%, respectively. The results suggest a paradigm shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of "Geo Big Data." In addition, the resulting, much-in-demand inventory map of Newfoundland is of great interest to, and can be used by, many stakeholders, including federal and provincial governments, municipalities, NGOs, and environmental consultants, to name a few.
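The compositing-and-classification idea translates naturally to the Google Earth Engine Python API; the sketch below builds multi-year summer Sentinel-1/Sentinel-2 median composites and classifies them with a random forest. The asset IDs for the study area and training polygons are hypothetical placeholders, and the band, date, and class choices are illustrative assumptions, not the authors' script.

```python
# Sketch: multi-year summer SAR/optical composites classified with a random forest in GEE.
import ee

ee.Initialize()

aoi = ee.FeatureCollection('users/example/newfoundland_boundary')        # hypothetical asset
training = ee.FeatureCollection('users/example/wetland_training_points')  # hypothetical asset

# Multi-year summer Sentinel-2 composite (median of June-August scenes, 2016-2018).
s2 = (ee.ImageCollection('COPERNICUS/S2')
      .filterBounds(aoi)
      .filterDate('2016-06-01', '2018-08-31')
      .filter(ee.Filter.calendarRange(6, 8, 'month'))
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12'])
      .median())

# Multi-year summer Sentinel-1 composite (VV/VH backscatter).
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2016-06-01', '2018-08-31')
      .filter(ee.Filter.calendarRange(6, 8, 'month'))
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select(['VV', 'VH'])
      .median())

stack = s2.addBands(s1).clip(aoi)

# Sample the composite at the training locations and fit a random forest.
samples = stack.sampleRegions(collection=training, properties=['class'], scale=10)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=500).train(
    features=samples, classProperty='class', inputProperties=stack.bandNames())

wetland_map = stack.classify(classifier)
```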


Author(s): A. Tuzcu Kokal, A. F. Sunar, A. Dervisoglu, S. Berberoglu

Abstract. Turkey has favorable agricultural conditions (fertile soils, climate, and rainfall) and can grow almost any type of crop in many regions, making agriculture one of the leading sectors of the economy. For sustainable agricultural management, all factors affecting agricultural production should be analyzed on a spatio-temporal basis. Space technologies such as remote sensing are therefore important tools for accurate mapping of agricultural fields, offering timely monitoring with high repetition frequency and accuracy. In this study, an object-based classification method was applied to a 2017 Sentinel-2 Level-2A satellite image in order to map crop types in the Adana, Çukurova region of Turkey. A Support Vector Machine (SVM) was used as the classifier. Texture information was incorporated into the spectral bands of the Sentinel-2 image to increase the classification accuracy. In this context, all Gray-Level Co-occurrence Matrix (GLCM) textural features were tested, and the Entropy, Standard Deviation, and Mean features were found to be the most suitable among them. Multispectral and textural features were used as input separately and in combination to evaluate the potential of texture in differentiating crop types and the accuracy of the output thematic maps. As a result, with the addition of textural features, the Overall Accuracy and Kappa coefficient increased by 7% and 8%, respectively.
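The sketch below shows one way to derive the GLCM Mean, Standard Deviation, and Entropy features for a single band window, assuming scikit-image and NumPy; the window size, grey-level quantization, and offsets are illustrative assumptions, not the settings used in the study.

```python
# Sketch: GLCM Mean, Standard Deviation, and Entropy for one image window.
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(window, levels=32):
    """Return (mean, std, entropy) of the grey-level co-occurrence matrix of a window."""
    # Quantise the reflectance window to a small number of grey levels.
    q = np.digitize(window, np.linspace(window.min(), window.max() + 1e-6, levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                              # average over the two offsets
    i = np.arange(levels)[:, None]
    mean = float((i * p).sum())                             # GLCM mean
    std = float(np.sqrt(((i - mean) ** 2 * p).sum()))       # GLCM standard deviation
    entropy = float(-(p[p > 0] * np.log(p[p > 0])).sum())   # GLCM entropy
    return mean, std, entropy

# Example: texture of a 15 x 15 pixel window of a (hypothetical) red-band array.
rng = np.random.default_rng(0)
red_band = rng.random((15, 15))
print(glcm_features(red_band))
```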


Sensors, 2019, Vol 19 (10), pp. 2401
Author(s): Chuanliang Sun, Yan Bian, Tao Zhou, Jianjun Pan

Crop-type identification is very important in agricultural regions. Most research in this area has focused on exploring the ability of synthetic-aperture radar (SAR) sensors to identify crops. This paper uses multi-source (Sentinel-1, Sentinel-2, and Landsat-8) and multi-temporal data to identify crop types. A change detection method was used to analyze spectral and index information in the time series, and significant differences in crop growth status during the growing season were found. Three clearly differentiated temporal features were then extracted, and three machine learning algorithms (Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF)) were used to identify the crop types. The results showed that detecting changes in VV (vertical-vertical), VH (vertical-horizontal), and Cross Ratio (CR) backscatter was effective for identifying land cover. Moreover, red-edge changes differed clearly across crop growth periods, and Sentinel-2 and Landsat-8 also showed different normalized difference vegetation index (NDVI) changes. Among single-sensor classifications, Sentinel-2 produced the highest overall accuracy (0.91) and Kappa coefficient (0.89), while the combination of Sentinel-1, Sentinel-2, and Landsat-8 data provided the best overall accuracy (0.93) and Kappa coefficient (0.91). The RF method performed best among the three classifiers, and the index features dominated the classification results. Combining phenological information with multi-source remote sensing data makes it possible to explore crop area and status during the growing season, and the resulting crop classification can be used to analyze crop density and distribution. This study can also help determine crop growth status, improve the accuracy of crop yield estimation, and provide a basis for crop management.
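The sketch below illustrates how the change features described here could be assembled, assuming NumPy and synthetic stand-in values; only the definitions of CR (VH - VV in dB), NDVI, and the between-date differences follow the abstract, while the array names and dates are assumptions.

```python
# Sketch: build per-pixel change features (VV, VH, CR, NDVI) between two acquisition dates.
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical per-pixel values for two dates (dB for SAR, reflectance for optical).
rng = np.random.default_rng(1)
vv_t1, vh_t1 = rng.normal(-10, 2, 10_000), rng.normal(-16, 2, 10_000)
vv_t2, vh_t2 = rng.normal(-9, 2, 10_000), rng.normal(-15, 2, 10_000)
red_t1, nir_t1 = rng.random(10_000), rng.random(10_000)
red_t2, nir_t2 = rng.random(10_000), rng.random(10_000)

cr_t1, cr_t2 = vh_t1 - vv_t1, vh_t2 - vv_t2          # cross ratio in dB space

# Change features between the two dates; these columns would feed the SVM/ANN/RF classifiers.
change_features = np.column_stack([
    vv_t2 - vv_t1,
    vh_t2 - vh_t1,
    cr_t2 - cr_t1,
    ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1),
])
print(change_features.shape)
```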


2017, Vol 14 (5), pp. 778-782
Author(s): Nataliia Kussul, Mykola Lavreniuk, Sergii Skakun, Andrii Shelestov

2021, Vol 87 (7), pp. 503-511
Author(s): Lei Zhang, Hongchao Liu, Xiaosong Li, Xinyu Qian

Image segmentation is a critical procedure in object-based identification and classification of remote sensing data. However, selecting an optimal scale parameter is challenging, given the presence of complex landscapes and uncertain feature changes. This study proposes a local optimal segmentation approach that considers both intersegment heterogeneity and intrasegment homogeneity, uses the standard deviation and the local Moran's index to identify the optimal segments across different scale parameters, and combines the optimal segments into a single layer. The optimal segments are assessed against high-spatial-resolution images. The results show that our approach outperforms, and generates less error than, the global optimal segmentation approach. Because land cover types and intrasegment homogeneity vary, segments match geo-objects at different scales. Local optimal segmentation is sensitive to land cover discrepancies and performs well in cross-scale segmentation.
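The sketch below illustrates, under simplifying assumptions, how candidate scale parameters could be scored by combining an area-weighted standard deviation (intrasegment homogeneity) with Moran's I of segment means (intersegment heterogeneity). It uses a global Moran's I with a user-supplied adjacency matrix as a stand-in for the paper's local Moran's index and is not the authors' implementation.

```python
# Sketch: score a candidate segmentation scale from segment homogeneity and spatial autocorrelation.
import numpy as np

def weighted_std(values, segment_ids):
    """Area-weighted mean of per-segment standard deviations (intrasegment homogeneity)."""
    total, acc = 0, 0.0
    for seg in np.unique(segment_ids):
        v = values[segment_ids == seg]
        acc += v.size * v.std()
        total += v.size
    return acc / total

def morans_i(segment_means, weights):
    """Global Moran's I of segment mean values given a symmetric spatial weights matrix."""
    z = segment_means - segment_means.mean()
    num = (weights * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (len(z) / weights.sum()) * num / den

def scale_score(values, segment_ids, weights):
    """Lower is better: homogeneous segments that are dissimilar to their neighbours.
    In practice the two terms would be rescaled to a common range before summing."""
    means = np.array([values[segment_ids == s].mean() for s in np.unique(segment_ids)])
    return weighted_std(values, segment_ids) + morans_i(means, weights)

# Toy demonstration: 4 pixels in 2 adjacent segments.
values = np.array([0.2, 0.25, 0.8, 0.85])
segments = np.array([0, 0, 1, 1])
adjacency = np.array([[0.0, 1.0], [1.0, 0.0]])
print(scale_score(values, segments, adjacency))
```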

