An evaluation of Landsat, Sentinel-2, Sentinel-1 and MODIS data for crop type mapping

2021 · Vol 3 · pp. 100018
Author(s): Xiao-Peng Song, Wenli Huang, Matthew C. Hansen, Peter Potapov
2021 · pp. 161-180
Author(s): Pranay Panjala, Murali Krishna Gumma, Pardhasaradhi Teluguntla

2021 · Vol 13 (22) · pp. 4668
Author(s): Stella Ofori-Ampofo, Charlotte Pelletier, Stefan Lang

Crop maps are key inputs for crop inventory production and yield estimation and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that can be laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), temporal sequences of images over the same area. Generally, crop type mapping relies on single-sensor inputs and is solved with traditional learning algorithms such as random forests or support vector machines. Nowadays, deep learning techniques have brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to use them jointly; however, optimizing their synergy has been understudied with deep learning techniques. In this work, we analyze and compare three fusion strategies (input, layer, and decision levels) to identify the one that best optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder–temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments are carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-score, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved accuracy by up to 13% for minority classes. Against the single-sensor baselines, the multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed the Sentinel-2 and Sentinel-1 baselines by 3% and 9% in F-score, respectively. We also conducted experiments that showed the importance of fusion for early time series classification and under high cloud-cover conditions.
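
To make the three fusion levels concrete, the following minimal PyTorch sketch contrasts input-, layer-, and decision-level fusion of Sentinel-1 and Sentinel-2 time series. A plain GRU stands in for the PSE-TAE encoder, and all module names, channel counts, and dimensions are illustrative assumptions rather than the authors' implementation; input-level fusion additionally assumes both sensors are resampled to a common set of dates.

import torch
import torch.nn as nn


class SeqEncoder(nn.Module):
    """Encode a (batch, time, channels) series into a fixed-size vector."""
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_channels, hidden, batch_first=True)

    def forward(self, x):
        _, h = self.gru(x)          # h: (1, batch, hidden)
        return h.squeeze(0)         # (batch, hidden)


class InputFusion(nn.Module):
    """Concatenate S1 and S2 bands per date, then encode one joint series."""
    def __init__(self, s1_ch, s2_ch, n_classes, hidden=64):
        super().__init__()
        self.encoder = SeqEncoder(s1_ch + s2_ch, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, s1, s2):
        return self.head(self.encoder(torch.cat([s1, s2], dim=-1)))


class LayerFusion(nn.Module):
    """Encode each sensor separately, then concatenate the learned features."""
    def __init__(self, s1_ch, s2_ch, n_classes, hidden=64):
        super().__init__()
        self.enc_s1 = SeqEncoder(s1_ch, hidden)
        self.enc_s2 = SeqEncoder(s2_ch, hidden)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, s1, s2):
        feats = torch.cat([self.enc_s1(s1), self.enc_s2(s2)], dim=-1)
        return self.head(feats)


class DecisionFusion(nn.Module):
    """One classifier per sensor; average the per-class scores."""
    def __init__(self, s1_ch, s2_ch, n_classes, hidden=64):
        super().__init__()
        self.branch_s1 = nn.Sequential(SeqEncoder(s1_ch, hidden), nn.Linear(hidden, n_classes))
        self.branch_s2 = nn.Sequential(SeqEncoder(s2_ch, hidden), nn.Linear(hidden, n_classes))

    def forward(self, s1, s2):
        return 0.5 * (self.branch_s1(s1) + self.branch_s2(s2))


if __name__ == "__main__":
    s1 = torch.randn(8, 30, 2)    # 8 parcels, 30 dates, VV/VH backscatter
    s2 = torch.randn(8, 30, 10)   # 8 parcels, 30 dates, 10 optical bands
    for model in (InputFusion(2, 10, 9), LayerFusion(2, 10, 9), DecisionFusion(2, 10, 9)):
        print(model.__class__.__name__, model(s1, s2).shape)  # (8, 9) class logits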


Author(s): M. Rußwurm, C. Pelletier, M. Zollner, S. Lefèvre, M. Körner

Abstract. We present BreizhCrops, a novel benchmark dataset for the supervised classification of field crops from satellite time series. We aggregated label data and Sentinel-2 top-of-atmosphere as well as bottom-of-atmosphere time series for the region of Brittany (Breizh in the local language), northwest France. We compare seven recently proposed deep neural networks along with a Random Forest baseline. The dataset, model (re-)implementations, and pre-trained model weights are available in the associated GitHub repository (https://github.com/dl4sits/breizhcrops), which has been designed with applicability for practitioners in mind. We plan to maintain the repository with additional data and welcome contributions of novel methods to build a state-of-the-art benchmark of methods for crop type mapping.
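
As a hedged sketch of how the benchmark might be used, the snippet below loads one region with the breizhcrops package and fits a Random Forest baseline on simple temporal-mean features. The import, the "belle-ile" region name, and the (x, y, field_id) item layout follow the repository README as recalled here; they are assumptions that should be checked against https://github.com/dl4sits/breizhcrops before use.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from breizhcrops import BreizhCrops  # pip install breizhcrops (assumed API)

dataset = BreizhCrops(region="belle-ile")   # small demo region (assumed name)

X, y = [], []
for i in range(len(dataset)):
    x, label, _field_id = dataset[i]        # x: (T, bands) series, label: class id (assumed layout)
    X.append(np.asarray(x).mean(axis=0))    # crude feature: per-band temporal mean
    y.append(int(label))
X, y = np.stack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)
print("Random Forest baseline accuracy:", rf.score(X_te, y_te))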


Author(s): Artur Nowakowski, Dario Spiller, Noelle Cremer, Rogerio Bonifacio, Michael Marszalek, ...

2021 · Vol 13 (14) · pp. 2790
Author(s): Hongwei Zhao, Sibo Duan, Jia Liu, Liang Sun, Louis Reymondin

Accurate crop type maps play an important role in food security due to their widespread applicability. Optical time series data (TSD) have proven to be significant for crop type mapping. However, filling in information missing due to clouds in optical imagery is always needed, which increases the workload and the risk of error transmission, especially for imagery with high spatial resolution. The development of optical imagery with high temporal and spatial resolution and the emergence of deep learning algorithms provide solutions to this problem. Although the one-dimensional convolutional neural network (1D CNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models have been used to classify crop types in previous studies, their ability to identify crop types from optical TSD with missing information needs to be further explored due to their different mechanisms for handling invalid values in TSD. In this research, we designed two groups of experiments to explore the performance and characteristics of the 1D CNN, LSTM, GRU, LSTM-CNN, and GRU-CNN models for crop type mapping using unfilled Sentinel-2 (S2) TSD and to discover the differences between unfilled and filled S2 TSD under the same algorithm. A case study was conducted in Hengshui City, China, of which 70.3% is farmland. The results showed that the 1D CNN, LSTM-CNN, and GRU-CNN models achieved acceptable classification accuracies (above 85%) using unfilled TSD, even though the total missing rate of the sample values was 43.5%; these accuracies were higher and more stable than those obtained using filled TSD. Furthermore, the models recalled more samples of crop types with small parcels when using unfilled TSD. Although the LSTM and GRU models did not attain accuracies as high as the other three models using unfilled TSD, their results were close to those obtained with filled TSD. This research showed that crop types can be identified from deep learning features in dense Sentinel-2 time series with information missing at random due to clouds or cloud shadows, avoiding the considerable time otherwise spent on reconstructing missing information.
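
One common way to feed an unfilled time series to a 1D CNN is to zero out the invalid dates and supply an explicit validity mask as an extra channel, rather than gap-filling. The sketch below illustrates that idea in PyTorch; the layer sizes, band count, and missing rate are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class UnfilledTSDCNN(nn.Module):
    def __init__(self, n_bands, n_classes, hidden=64):
        super().__init__()
        # +1 input channel carries the per-date valid/missing flag
        self.net = nn.Sequential(
            nn.Conv1d(n_bands + 1, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the temporal axis
            nn.Flatten(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x, valid_mask):
        # x: (batch, bands, time); valid_mask: (batch, 1, time), 1 = observed, 0 = cloudy
        x = torch.cat([x * valid_mask, valid_mask], dim=1)
        return self.net(x)

if __name__ == "__main__":
    x = torch.randn(4, 10, 36)                      # 4 pixels, 10 bands, 36 dates
    mask = (torch.rand(4, 1, 36) > 0.435).float()   # roughly 43.5% of dates missing
    logits = UnfilledTSDCNN(n_bands=10, n_classes=6)(x, mask)
    print(logits.shape)                             # torch.Size([4, 6])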


2020 · Vol 12 (18) · pp. 2957
Author(s): Sherrie Wang, Stefania Di Tommaso, Joey Faulkner, Thomas Friedel, Alexander Kennepohl, ...

High-resolution satellite imagery and modern machine learning methods hold the potential to fill existing data gaps on where crops are grown around the world at a sub-field level. However, high-resolution crop type maps have remained challenging to create in developing regions due to a lack of ground truth labels for model development. In this work, we explore the use of crowdsourced data, Sentinel-2 and DigitalGlobe imagery, and convolutional neural networks (CNNs) for crop type mapping in India. Plantix, a free app that uses image recognition to help farmers diagnose crop diseases, logged 9 million geolocated photos from 2017–2019 in India, 2 million of which are in the states of Andhra Pradesh and Telangana. Crop type labels based on the farmer-submitted images were added by domain experts and deep CNNs. The resulting dataset of crop types at point coordinates is high in volume, but also high in noise due to location inaccuracies, out-of-field submissions, and labeling errors. We employed a number of steps to clean the dataset, including training a CNN on very high resolution DigitalGlobe imagery to filter for points that fall within a crop field. With this cleaned dataset, we extracted Sentinel-2 time series at each point and trained another CNN to predict the crop type at each pixel. When evaluated on the highest-quality subset of crowdsourced data, the CNN distinguishes rice, cotton, and “other” crops with 74% accuracy in a 3-way classification and outperforms a random forest trained on harmonic regression features. Furthermore, model performance remains stable when low-quality points are introduced into the training set. Our results illustrate the potential of non-traditional, high-volume/high-noise datasets for crop type mapping, the improvements that neural networks can achieve over random forests, and the robustness of such methods against moderate levels of training set noise. Lastly, we caution that obstacles like the lack of a good Sentinel-2 cloud mask, imperfect mobile device location accuracy, and the need to preserve privacy while improving data access will need to be addressed before crowdsourcing can be widely and reliably used to map crops in smallholder systems.
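
The random-forest baseline mentioned above rests on harmonic regression features: per-band Fourier coefficients fitted to each pixel's time series by least squares. The sketch below shows that feature construction on toy single-band data; the number of harmonics, the band setup, and the synthetic labels are assumptions for illustration, not the paper's exact baseline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def harmonic_features(values, doy, n_harmonics=2, period=365.0):
    """Least-squares fit of a constant plus sine/cosine terms.

    values: (T,) reflectance series; doy: (T,) day-of-year of each observation.
    Returns the fitted coefficients as a 1-D feature vector.
    """
    t = 2.0 * np.pi * np.asarray(doy) / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    design = np.stack(cols, axis=1)                     # (T, 1 + 2*n_harmonics)
    coef, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coef

# Toy example: 200 pixels, 30 observations each, a single band.
rng = np.random.default_rng(0)
doy = np.sort(rng.integers(1, 366, size=30))
labels = rng.integers(0, 3, size=200)                   # e.g. rice / cotton / other
series = np.sin(2 * np.pi * doy / 365)[None, :] * (labels[:, None] + 1) \
         + 0.1 * rng.standard_normal((200, 30))

X = np.stack([harmonic_features(s, doy) for s in series])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:150], labels[:150])
print("held-out accuracy:", rf.score(X[150:], labels[150:]))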


Author(s): D. Vijayasekaran

Abstract. Large-scale mapping and monitoring of agricultural land use are very important: they help forecast crop yields, assess the factors influencing crop stress, and estimate damage due to natural hazards. More essentially, they aid in calculating irrigation water demand at the farm level and in better water resource management. Recent developments in the spatial and temporal resolution, global coverage, and open access of remote sensing satellite sensors such as Sentinel-2 have created new possibilities for mapping and monitoring land use/land cover features. The present study investigated the performance and applicability of the Sen2-Agri system for operational crop type mapping at parcel resolution in a heterogeneous cropping system using time series Sentinel-2 multispectral satellite imagery. Parcel-level crop type information was collected in the field by systematic sampling and used to train and validate the random forest (RF) classification in the system. The classification accuracy varied from 57% to 86% for the different major crops. The overall classification accuracy was 70% with a kappa index of 61%. Very small agricultural field sizes and persistent cloud cover are the major constraints on improving the classification accuracy. Combining time series imagery from multiple earth observation satellites for the monsoon cropping season and developing a robust system for in-situ data collection will further increase the mapping accuracy. The Sen2-Agri system has the potential to handle a large amount of earth observation data and can be scaled up to the entire country, which will help in the efficient monitoring of crops.
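
For readers unfamiliar with the reported metrics, the short sketch below shows how overall accuracy and the kappa index are computed from reference and classified parcel labels with scikit-learn. The labels here are synthetic stand-ins, not the study's validation data.

import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(1)
reference = rng.integers(0, 5, size=500)                 # field-collected crop labels (synthetic)
noise = rng.integers(0, 5, size=500)
predicted = np.where(rng.random(500) < 0.7, reference, noise)  # corrupt ~30% with random classes

print("overall accuracy:", accuracy_score(reference, predicted))
print("kappa index:     ", cohen_kappa_score(reference, predicted))
print(confusion_matrix(reference, predicted))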


2020 · Vol 12 (1) · pp. 158
Author(s): Luyi Sun, Jinsong Chen, Shanxin Guo, Xinping Deng, Yu Han

Timely and accurate crop type mapping is a critical prerequisite for the estimation of water availability and environmental carrying capacity. This research proposed a method to integrate time series Sentinel-1 (S1) and Sentinel-2 (S2) data for crop type mapping over oasis agricultural areas through a case study in Northwest China. Previous studies using synthetic aperture radar (SAR) data alone often yield quite limited accuracy in crop type identification due to speckle. To improve the quality of the SAR features, we adopted a statistically homogeneous pixel (SHP) distributed scatterer interferometry (DSI) algorithm, originally proposed in the interferometric SAR (InSAR) community for distributed scatterer (DS) extraction, to identify statistically homogeneous pixel subsets. On the basis of this algorithm, the SAR backscatter intensity was de-speckled and the bias of the coherence was mitigated. In addition to backscatter intensity, several InSAR products were extracted for crop type classification, including the interferometric coherence, the master versus slave intensity ratio, and the amplitude dispersion derived from the SAR data. To explore the role of red-edge wavelengths in oasis crop type discrimination, we derived 11 red-edge indices and three red-edge bands from the Sentinel-2 images, together with the conventional optical features, to serve as input features for classification. To deal with the high dimensionality of the combined SAR and optical features, an automated feature selection method, i.e., recursive feature increment, was developed to obtain the combination of S1 and S2 features that achieves the highest mapping accuracy. Using a random forest classifier, a distribution map of five major crop types was produced with an overall accuracy of 83.22% and a kappa coefficient of 0.77. The contributions of the SAR and optical features were investigated. SAR intensity in VH polarization proved to be the most important of all the microwave and optical features employed in this study for crop type identification. Some of the InSAR products, i.e., the amplitude dispersion, master versus slave intensity ratio, and coherence, were found to be beneficial for oasis crop type mapping. The inclusion of red-edge wavelengths was shown to improve the overall accuracy (OA) of crop type mapping by 1.84% compared with using only conventional optical features. Overall, the synergistic use of time series Sentinel-1 and Sentinel-2 data achieved the best performance in oasis crop type discrimination.
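
One plausible reading of "recursive feature increment" is a greedy forward selection that repeatedly adds the feature giving the largest gain in cross-validated random-forest accuracy and stops when no candidate helps. The sketch below implements that interpretation on a synthetic stand-in for the combined S1/S2 feature table; it is an assumption about the method, not the authors' code.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def recursive_feature_increment(X, y, max_features=None):
    """Greedily grow a feature subset while cross-validated accuracy increases."""
    remaining = list(range(X.shape[1]))
    selected, best_score = [], 0.0
    while remaining and (max_features is None or len(selected) < max_features):
        scores = []
        for f in remaining:
            cols = selected + [f]
            rf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
            scores.append((cross_val_score(rf, X[:, cols], y, cv=3).mean(), f))
        score, f = max(scores)
        if score <= best_score:          # stop when accuracy no longer increases
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score

# Toy stand-in for the combined S1/S2 feature table (VH intensity, coherence,
# amplitude dispersion, red-edge indices, ...): 40 features, 12 informative.
X, y = make_classification(n_samples=600, n_features=40, n_informative=12, random_state=0)
features, score = recursive_feature_increment(X, y, max_features=15)
print("selected feature indices:", features, "CV accuracy:", round(score, 3))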

