Synergy of Sentinel-1 and Sentinel-2 Imagery for Early Seasonal Agricultural Crop Mapping

2021, Vol 13 (23), pp. 4891
Author(s): Silvia Valero, Ludovic Arnaud, Milena Planells, Eric Ceschia

The exploitation of the unprecedented capacity of Sentinel-1 (S1) and Sentinel-2 (S2) data offers new opportunities for crop mapping. In the framework of the SenSAgri project, this work studies the synergy of very high-resolution Sentinel time series to produce accurate early-season binary cropland masks and crop type maps. A crop classification processing chain is proposed to address (1) the high dimensionality arising from the explosive growth in available satellite observations and (2) the scarcity of training data. The two-fold methodology is based on an S1-S2 classification system that combines the soft output predictions of two individually trained classifiers. The performance of the SenSAgri processing chain was assessed over three European test sites characterized by different agricultural systems, with a large number of highly diverse and independent data sets used for validation. Different experiments confirmed the agreement between the independent classification algorithms applied to the Sentinel data. The results demonstrate the value of decision-level fusion strategies such as the product of experts: accurate crop map products were obtained over different countries early in the season with limited training data. They highlight the benefit of fusion for early crop mapping and the value of detecting cropland areas before identifying crop types.
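As a concrete illustration of decision-level fusion by a product of experts, the following minimal sketch multiplies and renormalizes the soft outputs of two classifiers; the class probabilities and array shapes are invented for illustration and this is not the SenSAgri implementation.

```python
import numpy as np

def product_of_experts(probs_s1, probs_s2, eps=1e-12):
    """Fuse per-class probabilities from two classifiers via a product of experts.

    probs_s1, probs_s2: arrays of shape (n_samples, n_classes) holding the
    soft outputs (class posteriors) of the S1-based and S2-based classifiers.
    """
    fused = probs_s1 * probs_s2 + eps          # element-wise product of expert opinions
    fused /= fused.sum(axis=1, keepdims=True)  # renormalize so each row sums to 1
    return fused

# Toy example with 2 samples and 3 hypothetical crop classes
p_s1 = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3]])
p_s2 = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.7, 0.2]])
print(product_of_experts(p_s1, p_s2).argmax(axis=1))  # fused class decisions
```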

Sensors, 2019, Vol 19 (10), pp. 2401
Author(s): Chuanliang Sun, Yan Bian, Tao Zhou, Jianjun Pan

Crop-type identification is very important in agricultural regions. Most researchers in this area have focused on exploring the ability of synthetic-aperture radar (SAR) sensors to identify crops. This paper uses multi-source (Sentinel-1, Sentinel-2, and Landsat-8) and multi-temporal data to identify crop types. A change detection method was used to analyze spectral and index information in the time series, and significant differences in crop growth status during the growing season were found. Three clearly differentiated temporal features were then extracted. Three machine learning algorithms (Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF)) were used to identify the crop types. The results showed that detecting changes in vertical-vertical (VV), vertical-horizontal (VH), and cross ratio (CR) backscatter was effective for identifying land cover. Moreover, red-edge changes differed clearly between crop growth periods, and Sentinel-2 and Landsat-8 also showed different normalized difference vegetation index (NDVI) changes. Using single-sensor data to classify crops, Sentinel-2 produced the highest overall accuracy (0.91) and Kappa coefficient (0.89). The combination of Sentinel-1, Sentinel-2, and Landsat-8 data provided the best overall accuracy (0.93) and Kappa coefficient (0.91). The RF method performed best at crop type identification, and the index features dominated the classification results. Combining phenological period information with multi-source remote sensing data can be used to explore a crop area and its status during the growing season, and the resulting crop classification can be used to analyze the density and distribution of crops. This study can also help determine crop growth status, improve the accuracy of crop yield estimation, and provide a basis for crop management.
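A minimal sketch of how such multi-source features (VV, VH, the VH/VV cross ratio, and NDVI) could be stacked and fed to a random forest; the arrays, band order, and classifier settings below are illustrative assumptions, not the study's processing chain.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_features(vv, vh, red, nir):
    """Stack SAR and optical time-series features per pixel.

    vv, vh   : linear-power backscatter time series, shape (n_pixels, n_dates)
    red, nir : surface reflectance time series, shape (n_pixels, n_dates)
    """
    cr = vh / (vv + 1e-6)                    # cross ratio (CR = VH/VV)
    ndvi = (nir - red) / (nir + red + 1e-6)  # normalized difference vegetation index
    return np.hstack([vv, vh, cr, ndvi])

# Hypothetical training data: 200 pixels, 12 acquisition dates, 4 crop classes
rng = np.random.default_rng(0)
vv, vh = rng.random((200, 12)), rng.random((200, 12))
red, nir = rng.random((200, 12)), rng.random((200, 12))
labels = rng.integers(0, 4, 200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(build_features(vv, vh, red, nir), labels)
```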


2021, Vol 13 (5), pp. 846
Author(s): Carole Planque, Richard Lucas, Suvarna Punalekar, Sebastien Chognard, Clive Hurford, ...

National-level mapping of crop types is important for monitoring food security, understanding environmental conditions, informing optimal use of the landscape, and contributing to agricultural policy. Countries and economic regions currently and increasingly use satellite sensor data for classifying crops over large areas. However, most methods have been based on machine learning algorithms, which often require large training datasets that are not always available and may be costly to produce or collect. Focusing on Wales (United Kingdom), this research demonstrates how the knowledge that the agricultural community has gathered over past decades can be used to develop algorithms for mapping different crop types. Specifically, we aimed to develop an alternative method for consistent and accurate crop type mapping where cloud cover is persistent and without the need for extensive in situ/ground datasets. The classification approach is parcel-based and informed by concomitant analysis of knowledge-based crop growth stages and Sentinel-1 C-band SAR time series. For 2018, crop type classifications were generated nationally for Wales, with regional overall accuracies ranging between 85.8% and 90.6%. The method was particularly successful in distinguishing barley from wheat, which is a major source of error in other crop products available for Wales. This study demonstrates that crops can be accurately identified and mapped across a large area (i.e., Wales) using Sentinel-1 C-band data by capitalizing on knowledge of crop growth stages. The developed algorithm is flexible and, compared to the other methods that allow crop mapping in Wales, provided more consistent discrimination and lower variability in accuracies between classes and regions.
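To make the idea of knowledge-informed, parcel-based classification concrete, here is a hedged sketch in which each crop is described by an expected backscatter change within a growth-stage window; the crops, windows, and expected changes are invented placeholders, not the calibrated rules used for Wales.

```python
import numpy as np

# Illustrative knowledge base: for each crop, the day-of-year window in which a
# characteristic VH backscatter rise or drop is expected. Placeholder values only.
CROP_RULES = {
    "winter wheat":  {"window": (120, 160), "expected_change": "drop"},
    "spring barley": {"window": (150, 190), "expected_change": "rise"},
}

def classify_parcel(doy, vh_db):
    """Assign a parcel to the crop whose expected change best matches its VH series.

    doy   : acquisition days of year, shape (n_dates,)
    vh_db : parcel-mean VH backscatter in dB, shape (n_dates,)
    """
    scores = {}
    for crop, rule in CROP_RULES.items():
        start, end = rule["window"]
        inside = (doy >= start) & (doy <= end)
        if inside.sum() < 2:          # not enough acquisitions in the stage window
            scores[crop] = -np.inf
            continue
        trend = vh_db[inside][-1] - vh_db[inside][0]   # change over the stage window
        scores[crop] = trend if rule["expected_change"] == "rise" else -trend
    return max(scores, key=scores.get)
```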


2021, Vol 13 (22), pp. 4668
Author(s): Stella Ofori-Ampofo, Charlotte Pelletier, Stefan Lang

Crop maps are key inputs for crop inventory production and yield estimation and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that can be laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), i.e., temporal sequences of images over the same area. Generally, crop type mapping relies on single-sensor inputs and is solved with traditional learning algorithms such as random forests or support vector machines. Nowadays, deep learning techniques have brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to use them jointly; however, optimizing their synergy with deep learning techniques has been understudied. In this work, we analyze and compare three fusion strategies (input, layer, and decision level) to identify the one that best optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder–temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments are carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-score, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved accuracy by up to 13% for minority classes. Against single-sensor baselines, multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed Sentinel-2 and Sentinel-1 by 3% and 9% in F-score, respectively. We also conducted experiments showing the importance of fusion for early time series classification and under high cloud cover conditions.
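The three fusion levels can be sketched with toy encoders as follows; this is a simplified PyTorch illustration under assumed feature dimensions and does not reproduce the PSE-TAE architecture or its attention blocks.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for a per-sensor encoder (the real model uses PSE-TAE blocks)."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def input_fusion(s1, s2, n_classes):
    # Fuse before encoding: concatenate S1 and S2 features into one input vector
    enc = ToyEncoder(s1.shape[1] + s2.shape[1])
    head = nn.Linear(64, n_classes)
    return head(enc(torch.cat([s1, s2], dim=1)))

def layer_fusion(s1, s2, n_classes):
    # Fuse learned features: encode each sensor separately, concatenate embeddings
    f1, f2 = ToyEncoder(s1.shape[1])(s1), ToyEncoder(s2.shape[1])(s2)
    head = nn.Linear(128, n_classes)
    return head(torch.cat([f1, f2], dim=1))

def decision_fusion(s1, s2, n_classes):
    # Fuse predictions: average the class probabilities of two independent branches
    logits1 = nn.Linear(64, n_classes)(ToyEncoder(s1.shape[1])(s1))
    logits2 = nn.Linear(64, n_classes)(ToyEncoder(s2.shape[1])(s2))
    return (logits1.softmax(dim=1) + logits2.softmax(dim=1)) / 2
```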


2021, Vol 13 (14), pp. 2790
Author(s): Hongwei Zhao, Sibo Duan, Jia Liu, Liang Sun, Louis Reymondin

Accurate crop type maps play an important role in food security due to their widespread applicability. Optical time series data (TSD) have proven to be significant for crop type mapping. However, filling in information missing due to clouds in optical imagery is usually required, which increases the workload and the risk of error propagation, especially for imagery with high spatial resolution. The development of optical imagery with high temporal and spatial resolution and the emergence of deep learning algorithms provide solutions to this problem. Although the one-dimensional convolutional neural network (1D CNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models have been used to classify crop types in previous studies, their ability to identify crop types from optical TSD with missing information needs to be further explored because of their different mechanisms for handling invalid values in TSD. In this research, we designed two groups of experiments to explore the performance and characteristics of the 1D CNN, LSTM, GRU, LSTM-CNN, and GRU-CNN models for crop type mapping using unfilled Sentinel-2 TSD and to discover the differences between unfilled and filled Sentinel-2 TSD under the same algorithm. A case study was conducted in Hengshui City, China, of which 70.3% is farmland. The results showed that the 1D CNN, LSTM-CNN, and GRU-CNN models achieved acceptable classification accuracies (above 85%) using unfilled TSD, even though the total missing rate of the sample values was 43.5%; these accuracies were higher and more stable than those obtained using filled TSD. Furthermore, the models recalled more samples of crop types with small parcels when using unfilled TSD. Although the LSTM and GRU models did not attain accuracies as high as the other three models using unfilled TSD, their results were close to those obtained with filled TSD. This research showed that crop types can be identified by deep learning from dense Sentinel-2 time series with information missing at random due to clouds or cloud shadows, avoiding the considerable effort of reconstructing the missing information.
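One simple way to let a 1D CNN consume unfilled time series is to flag cloud-masked observations with a constant fill value and let the network learn around them; the sketch below is an assumed setup of this kind, not the paper's exact models or fill strategy.

```python
import torch
import torch.nn as nn

FILL_VALUE = -1.0  # assumed flag for cloud-masked observations (not the paper's choice)

class Simple1DCNN(nn.Module):
    """Toy 1D CNN over an unfilled reflectance time series (bands x dates)."""
    def __init__(self, n_bands, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x, cloud_mask):
        # x: (batch, n_bands, n_dates); cloud_mask: 1 where the observation is invalid
        x = torch.where(cloud_mask.bool(), torch.full_like(x, FILL_VALUE), x)
        return self.classifier(self.features(x).squeeze(-1))
```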


2021, Vol 13 (4), pp. 700
Author(s): Daniel Kpienbaareh, Xiaoxuan Sun, Jinfei Wang, Isaac Luginaah, Rachel Bezner Kerr, ...

Mapping crop types and land cover in smallholder farming systems in sub-Saharan Africa remains a challenge due to data costs, high cloud cover, and the poor temporal resolution of satellite data. With improvements in satellite technology and image processing techniques, there is potential for integrating data from sensors with different spectral characteristics and temporal resolutions to effectively map crop types and land cover. In our Malawi study area, it is common that no cloud-free images are available for the entire crop growth season. The goal of this experiment is to produce detailed crop type and land cover maps in agricultural landscapes using Sentinel-1 (S-1) radar data, Sentinel-2 (S-2) optical data, fused S-2 and PlanetScope data, and the S-1 C2 covariance matrix and S-1 H/α polarimetric decomposition. We evaluated the ability of combinations of these data to map crop types and land cover in two smallholder farming locations. The random forest algorithm, trained with crop and land cover type data collected in the field and complemented with samples digitized from Google Earth Pro and DigitalGlobe imagery, was used for the classification experiments. The results show that the combination of the fused S-2 and PlanetScope image, the S-1 covariance (C2) matrix, and the H/α polarimetric decomposition (an entropy-based decomposition method) outperformed all other image combinations, producing the highest overall accuracies (OAs) (>85%) and Kappa coefficients (>0.80). These OAs represent a 13.53% and 11.7% improvement over the Sentinel-2-only experiment (OAs < 80%) for Thimalala and Edundu, respectively. The experiment also provided accurate insights into the distribution of crop and land cover types in the area. The findings suggest that in cloud-dense and resource-poor locations, fusing high-temporal-resolution radar data with available optical data presents an opportunity for operational mapping of crop types and land cover to support food security and environmental management decision-making.
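For reference, a hedged sketch of the dual-pol H/α (entropy/alpha) decomposition from a per-pixel C2 covariance matrix, using the standard eigen-decomposition formulation; the pre-processing applied in the study (multi-looking, speckle filtering, windowing) is omitted.

```python
import numpy as np

def h_alpha_dual_pol(c2):
    """Entropy (H) and mean alpha angle from a 2x2 dual-pol covariance matrix.

    c2 : complex Hermitian array of shape (2, 2), the per-pixel C2 matrix.
    Returns (entropy, mean alpha in degrees).
    """
    eigval, eigvec = np.linalg.eigh(c2)            # real eigenvalues, ascending order
    eigval = np.clip(eigval.real, 1e-12, None)
    p = eigval / eigval.sum()                      # pseudo-probabilities
    entropy = -np.sum(p * np.log(p) / np.log(2))   # base-2 log for two eigenvalues
    alphas = np.degrees(np.arccos(np.abs(eigvec[0, :])))  # alpha of each eigenvector
    return entropy, float(np.sum(p * alphas))

# Toy example on a synthetic covariance matrix
c2 = np.array([[1.0, 0.2 + 0.1j],
               [0.2 - 0.1j, 0.5]])
print(h_alpha_dual_pol(c2))
```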


Author(s): V. Pandey, K. K. Choudhary, C. S. Murthy, M. K. Poddar

Abstract. The classification of agricultural crop types is an important application of remote sensing. With the improvement in the spatial, temporal, and spectral resolution of satellite data, a complete seasonal crop growth profile and the separability between different crop classes can be studied using ensemble-learning techniques. This study compares the performance of Random Forest (RF), a decision-tree-based ensemble learning method, and Naïve Bayes (NB), a probabilistic learning technique, for crop classification of Lekoda gram panchayat, Ujjain district, using multi-temporal Sentinel-2 data from the Rabi season 2017–18. The study area contains seven different crop type classes, and in each class 65% of the ground data was used for training and 35% to test the classifier. The performance of the RF classifier was found to be better than that of the NB classifier; the Kappa coefficient of the RF classifier in the middle of the crop season (December–January) was 0.93. This result indicates that an accurate in-season crop map of the study area can be generated through the integrated use of Sentinel-2 temporal data and the RF classifier.
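A minimal sketch of such an RF versus Naïve Bayes comparison with a 65/35 split and the Kappa coefficient as the metric; the data here are synthetic stand-ins, not the Lekoda ground data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# Hypothetical multi-temporal Sentinel-2 features: 500 parcels x 40 band/date
# features, 7 crop classes, with a 65/35 train/test split as in the study.
rng = np.random.default_rng(42)
X = rng.random((500, 40))
y = rng.integers(0, 7, 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.35, random_state=42)

for name, model in [("RF", RandomForestClassifier(n_estimators=300, random_state=42)),
                    ("NB", GaussianNB())]:
    model.fit(X_train, y_train)
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"{name}: kappa = {kappa:.2f}")
```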


Author(s): C. Karakizi, Z. Kandylakis, A. D. Vaiopoulos, K. Karantzalos

Abstract. In this work, we elaborate on the insights gained from various classification experiments towards detailed land cover mapping over four representative regions of Greece with different environmental characteristics. In particular, the proposed methodology exploits Sentinel-2 data on an annual basis for the joint classification of 35 land cover and crop type classes. A number of pre-processing steps were applied to the satellite data to address atmospheric and geometric effects, as well as clouds and the associated shadows. Several classification set-ups were designed and performed using either time series of spectral features or temporal features. The latter consisted of statistical metrics derived from the spectral time series and were therefore significantly reduced in dimension. Experiments using the Random Forest algorithm were performed by building several per-tile models, as well as cross-regional models based on training data from all considered regions/tiles. Overall classification accuracy exceeded 90% for most experiments. Further analysis of the experimental results highlighted that crop types were classified more accurately when using the spectral time series features rather than the temporal ones. Classification accuracy for non-crop classes proved much less affected by the type of features employed. The inclusion of auxiliary data layers was beneficial in all cases, both for overall and per-class accuracy metrics. Qualitative evaluation of the predicted maps further affirmed the efficiency of the developed methodology.
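The temporal features described above, statistical metrics that collapse a spectral time series into a few values per band, can be sketched as follows; the particular metrics chosen here are illustrative, not necessarily those used in the study.

```python
import numpy as np

def temporal_metrics(ts):
    """Reduce a spectral time series to per-band statistical metrics.

    ts : array of shape (n_pixels, n_dates, n_bands) of surface reflectance.
    Returns an array of shape (n_pixels, 5 * n_bands): mean, std, min, max, and
    median per band, an illustrative (not exhaustive) choice of metrics.
    """
    stats = [ts.mean(axis=1), ts.std(axis=1), ts.min(axis=1),
             ts.max(axis=1), np.median(ts, axis=1)]
    return np.concatenate(stats, axis=1)

# A year of 36 acquisitions over 10 bands collapses to 50 features per pixel
ts = np.random.default_rng(1).random((100, 36, 10))
print(temporal_metrics(ts).shape)  # (100, 50)
```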


2021, Vol 10 (4), pp. 251
Author(s): Christina Ludwig, Robert Hecht, Sven Lautenbach, Martin Schorcht, Alexander Zipf

Public urban green spaces are important for urban quality of life. Still, comprehensive open data sets on urban green spaces are not available for most cities. Sentinel-2 satellite imagery and OpenStreetMap (OSM) data are open and globally available, so their potential for urban green space mapping is high, but it is limited by their respective uncertainties: Sentinel-2 imagery cannot distinguish public from private green spaces and its spatial resolution of 10 m fails to capture fine-grained urban structures, while OSM green spaces are not mapped consistently or with the same level of completeness everywhere. To address these limitations, we propose to fuse these data sets under explicit consideration of their uncertainties. The Sentinel-2-derived Normalized Difference Vegetation Index was fused with OSM data using Dempster–Shafer theory to enhance the detection of small vegetated areas. The distinction between public and private green spaces was achieved using a Bayesian hierarchical model and OSM data. The analysis was performed on land use parcels derived from OSM data and tested for the city of Dresden, Germany. The overall accuracy of the final map of public urban green spaces was 95% and was mainly influenced by the uncertainty of the public accessibility model.
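Dempster's rule of combination for two sources of evidence over a simple {green, not green} frame can be sketched as follows; the mass assignments are invented examples, not the NDVI- and OSM-derived masses used in the study.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'green', 'not_green'}.

    m1, m2 : dicts with masses for 'green', 'not_green', and 'either' (ignorance),
    each summing to 1. Implements Dempster's rule of combination.
    """
    conflict = m1["green"] * m2["not_green"] + m1["not_green"] * m2["green"]
    norm = 1.0 - conflict
    return {
        "green": (m1["green"] * m2["green"] + m1["green"] * m2["either"]
                  + m1["either"] * m2["green"]) / norm,
        "not_green": (m1["not_green"] * m2["not_green"] + m1["not_green"] * m2["either"]
                      + m1["either"] * m2["not_green"]) / norm,
        "either": (m1["either"] * m2["either"]) / norm,
    }

# NDVI suggests vegetation with some uncertainty; OSM tagging is incomplete
ndvi_evidence = {"green": 0.6, "not_green": 0.1, "either": 0.3}
osm_evidence = {"green": 0.5, "not_green": 0.2, "either": 0.3}
print(dempster_combine(ndvi_evidence, osm_evidence))
```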


2021, Vol 16 (1), pp. 1-24
Author(s): Yaojin Lin, Qinghua Hu, Jinghua Liu, Xingquan Zhu, Xindong Wu

In multi-label learning, label correlations commonly exist in the data. Such correlations not only provide useful information but also impose significant challenges for multi-label learning. Recently, label-specific feature embedding has been proposed to explore label-specific features from the training data and to use features highly customized to the multi-label set for learning. While such feature embedding methods have demonstrated good performance, the creation of the feature embedding space is based on a single label only, without considering the label correlations in the data. In this article, we propose to combine multiple label-specific feature spaces, using label correlation, for multi-label learning. The proposed algorithm, multi-label-specific feature space ensemble (MULFE), combines label-specific features, label correlation, and a weighted ensemble principle into a learning framework. By conducting clustering analysis on each label's negative and positive instances, MULFE first creates features customized to each label. After that, MULFE utilizes the label correlation to optimize the margin distribution of the base classifiers induced by the related label-specific feature spaces. By combining multiple label-specific features, label-correlation-based weighting, and ensemble learning, MULFE achieves the maximum-margin multi-label classification goal through the underlying optimization framework. Empirical studies on 10 public data sets demonstrate the effectiveness of MULFE.
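A hedged sketch of the first step only, building label-specific features by clustering each label's positive and negative instances (a LIFT-style construction); MULFE's correlation-based weighting and margin optimization are not reproduced here, and the data and classifier are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import euclidean_distances

def label_specific_features(X, y_label, n_clusters=3, random_state=0):
    """Features customized to one label: distances to cluster centres obtained
    by clustering that label's positive and negative instances separately."""
    pos, neg = X[y_label == 1], X[y_label == 0]
    centres = np.vstack([
        KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(pos).cluster_centers_,
        KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(neg).cluster_centers_,
    ])
    return euclidean_distances(X, centres)   # distance to each positive/negative centre

# Toy multi-label data: 200 instances, 10 features, 4 labels; one binary base
# classifier is trained per label on its own label-specific feature space.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
Y = rng.integers(0, 2, (200, 4))
classifiers = []
for k in range(Y.shape[1]):
    Z = label_specific_features(X, Y[:, k])
    classifiers.append(LogisticRegression(max_iter=1000).fit(Z, Y[:, k]))
```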

