Mapping Paddy Fields in Japan by Using a Sentinel-1 SAR Time Series Supplemented by Sentinel-2 Images on Google Earth Engine

2020 ◽  
Vol 12 (10) ◽  
pp. 1622 ◽  
Author(s):  
Shimpei Inoue ◽  
Akihiko Ito ◽  
Chinatsu Yonezawa

Paddy fields play important environmental roles in food security, water resource management, biodiversity conservation, and climate change. Reliable broad-scale paddy field maps are therefore essential for understanding these issues related to rice and paddy fields. Here, we propose a novel paddy field mapping method that uses Sentinel-1 synthetic aperture radar (SAR) time series, which are robust to cloud cover, supplemented by Sentinel-2 optical images, which are more reliable than SAR data for extracting irrigated paddy fields. Paddy fields were provisionally identified from the Sentinel-1 SAR data using a conventional decision tree method. An additional mask based on water and vegetation indices from Sentinel-2 optical images was then overlaid to remove non-paddy areas. We used the proposed method to develop a paddy field map for Japan in 2018 with a 30 m spatial resolution. The producer’s accuracy of this map (92.4%) for non-paddy reference agricultural fields was much higher than that of a map developed by the conventional method (57.0%) using only Sentinel-1 data. Our proposed method also reproduced paddy field areas at the prefecture scale better than existing paddy field maps developed by remote sensing approaches.
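As a rough illustration of this kind of workflow, the following Earth Engine (Python API) sketch derives a provisional paddy mask from a Sentinel-1 VH time series and refines it with Sentinel-2 water and vegetation indices. It is only a sketch: the area of interest, date windows, and thresholds are hypothetical placeholders, not the decision-tree rules or parameters used in the paper.

# Minimal Earth Engine (Python API) sketch of an S1 + S2 paddy workflow.
# The AOI, date windows, and thresholds are illustrative placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([140.0, 38.0, 141.0, 39.0])  # hypothetical area

# Sentinel-1 VH backscatter over the transplanting and growing seasons.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2018-04-01', '2018-09-30')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select('VH'))

# Paddies show low backscatter when flooded and a strong rise at peak growth.
flood_min = s1.filterDate('2018-04-01', '2018-06-15').min()
growth_max = s1.filterDate('2018-07-01', '2018-09-30').max()
paddy_candidate = flood_min.lt(-18).And(growth_max.subtract(flood_min).gt(6))

# Sentinel-2 mask: surface water (NDWI) during flooding, vegetation (NDVI) later.
def add_indices(img):
    ndvi = img.normalizedDifference(['B8', 'B4']).rename('NDVI')
    ndwi = img.normalizedDifference(['B3', 'B8']).rename('NDWI')
    return img.addBands(ndvi).addBands(ndwi)

s2 = (ee.ImageCollection('COPERNICUS/S2')
      .filterBounds(aoi)
      .filterDate('2018-04-01', '2018-09-30')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .map(add_indices))

ndwi_max = s2.filterDate('2018-04-01', '2018-06-15').select('NDWI').max()
ndvi_max = s2.filterDate('2018-07-01', '2018-09-30').select('NDVI').max()
paddy = paddy_candidate.And(ndwi_max.gt(0.0)).And(ndvi_max.gt(0.5))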

2019 ◽  
Vol 11 (15) ◽  
pp. 1836 ◽  
Author(s):  
Hassan Bazzi ◽  
Nicolas Baghdadi ◽  
Dino Ienco ◽  
Mohammad El Hajj ◽  
Mehrez Zribi ◽  
...  

Mapping irrigated plots is essential for better water resource management. Today, the free and open-access Sentinel-1 (S1) and Sentinel-2 (S2) data with high revisit frequency offer a powerful tool for irrigation mapping at plot scale. To date, few studies have used S1 and S2 data to provide approaches for mapping irrigated plots. This study proposes a method to map irrigated plots using S1 SAR (synthetic aperture radar) time series. First, a dense temporal series of S1 backscattering coefficients was obtained at plot scale in VV (Vertical-Vertical) and VH (Vertical-Horizontal) polarizations over a study site located in Catalonia, Spain. In order to remove the ambiguity between rainfall and irrigation events, the S1 signal obtained at plot scale was used in conjunction with the S1 signal obtained at a grid scale (10 km × 10 km). Next, two mathematical transformations, principal component analysis (PCA) and the wavelet transformation (WT), were applied to the SAR temporal series obtained in both VV and VH polarizations. Irrigated areas were then classified using the principal component (PC) dimensions and the WT coefficients in two different random forest (RF) classifiers. Another classification approach using a one-dimensional convolutional neural network (CNN) was also applied to the obtained S1 temporal series. The results derived from the RF classifiers with S1 data show high overall accuracy using the PC values (90.7%) and the WT coefficients (89.1%). By applying the CNN approach to the SAR data, a significant overall accuracy of 94.1% was obtained. The potential of optical images to map irrigated areas by means of a normalized difference vegetation index (NDVI) temporal series was also tested in this study in both the RF and the CNN approaches. The overall accuracy obtained using the NDVI in the RF classifier reached 89.5%, while that of the CNN reached 91.6%. The combined use of optical and radar data slightly enhanced the classification in the RF classifier but did not significantly change the accuracy obtained in the CNN approach using S1 data.
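For the random forest branch of such a pipeline, the steps reduce to a dimensionality reduction of the per-plot backscatter series followed by a supervised classifier. The scikit-learn sketch below shows the PCA + RF combination on a synthetic feature matrix; the array shapes and labels are placeholders for the real per-plot VV/VH series and irrigation ground truth.

# Minimal scikit-learn sketch of the PCA + random forest branch.
# X stands in for per-plot VV/VH backscatter time series; y for irrigation labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_plots, n_dates = 500, 60
X = rng.normal(size=(n_plots, 2 * n_dates))   # placeholder VV + VH series
y = rng.integers(0, 2, size=n_plots)          # placeholder irrigated / non-irrigated labels

pca = PCA(n_components=10)                    # compress the dense temporal series
X_pc = pca.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_pc, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print('overall accuracy:', accuracy_score(y_test, rf.predict(X_test)))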


2021 ◽  
Author(s):  
Luojia Hu ◽  
Wei Yao ◽  
Zhitong Yu ◽  
Yan Huang

A high-resolution (e.g., 10-m) mangrove map that can identify small mangrove patches (<1 ha) is a central component for quantifying ecosystem functions and for helping governments take effective steps to protect mangroves. Small mangrove patches, which are becoming more numerous because of artificial destruction and the planting of new mangrove trees, are vulnerable to climate change and sea-level rise; they are also important for estimating mangrove habitat connectivity with adjacent coastal ecosystems and for reducing the uncertainty of carbon storage estimation. However, the latest national-scale mangrove forest maps, mainly derived from 30-m Landsat imagery, are too coarse to accurately characterize the distribution of mangrove forests, especially small patches (area < 1 ha). Sentinel imagery with 10-m resolution provides the opportunity to identify these small mangrove patches and to generate high-resolution mangrove forest maps. Here, we used spectral/backscatter-temporal variability metrics (quantiles) derived from Sentinel-1 SAR (Synthetic Aperture Radar) and Sentinel-2 MSI (Multispectral Instrument) time-series imagery as input features for a random forest to classify mangroves in China. We found that Sentinel-2 imagery is more effective than Sentinel-1 in mangrove extraction, and that a combination of SAR and MSI imagery achieves better accuracy (F1-score of 0.94) than either used separately (F1-score of 0.88 using Sentinel-1 only and 0.895 using Sentinel-2 only). The 10-m mangrove map derived by combining SAR and MSI data identified 20,003 ha of mangroves in China, of which small mangrove patches (<1 ha) accounted for 1741 ha, or 8.7% of the total mangrove area. The largest area of small mangrove patches (819 ha) is located in Guangdong Province, and Fujian has the highest percentage of small mangrove patches in its total mangrove area (11.4%). A comparison with existing 30-m mangrove products showed noticeable disagreement, indicating the necessity of generating a mangrove extent product at 10-m resolution. This study demonstrates the significant potential of using Sentinel-1 and Sentinel-2 images to produce an accurate, high-resolution mangrove forest map with Google Earth Engine (GEE). The mangrove forest maps are expected to provide critical information to conservation managers, scientists, and other stakeholders in monitoring the dynamics of mangrove forests.
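The quantile-metric idea translates quite directly into Earth Engine. The Python API sketch below reduces Sentinel-1 and Sentinel-2 time series to per-band percentiles and trains a random forest on them; the area of interest, percentile set, band selection, and the training FeatureCollection are hypothetical placeholders, not the configuration used in the study.

# Minimal Earth Engine (Python API) sketch of percentile metrics + random forest.
# The AOI, percentiles, bands, and training asset are illustrative placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([108.0, 20.0, 112.0, 23.0])  # hypothetical coastal AOI
pct = ee.Reducer.percentile([10, 25, 50, 75, 90])

s1_metrics = (ee.ImageCollection('COPERNICUS/S1_GRD')
              .filterBounds(aoi)
              .filterDate('2020-01-01', '2021-01-01')
              .select(['VV', 'VH'])
              .reduce(pct))

def with_ndvi(img):
    return img.addBands(img.normalizedDifference(['B8', 'B4']).rename('NDVI'))

s2_metrics = (ee.ImageCollection('COPERNICUS/S2_SR')
              .filterBounds(aoi)
              .filterDate('2020-01-01', '2021-01-01')
              .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
              .map(with_ndvi)
              .select(['B3', 'B8', 'B11', 'NDVI'])
              .reduce(pct))

features = s1_metrics.addBands(s2_metrics)

# Hypothetical training points with a 'class' property (mangrove / non-mangrove).
training_points = ee.FeatureCollection('users/example/mangrove_training')
samples = features.sampleRegions(collection=training_points,
                                 properties=['class'], scale=10)
rf = ee.Classifier.smileRandomForest(200).train(samples, 'class', features.bandNames())
mangrove_map = features.classify(rf)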


2021 ◽  
Author(s):  
Iuliia Burdun ◽  
Michel Bechtold ◽  
Viacheslav Komisarenko ◽  
Annalea Lohila ◽  
Elyn Humphreys ◽  
...  

Fluctuations of water table depth (WTD) affect many processes in peatlands, such as vegetation development and emissions of greenhouse gases. Here, we present the OPtical TRApezoid Model (OPTRAM) as a new method for satellite-based monitoring of the temporal variation of WTD in peatlands. OPTRAM is based on the response of short-wave infrared reflectance to the vegetation water status. For five northern peatlands with long-term in-situ WTD records and with diverse vegetation cover and hydrological regimes, we generate a suite of OPTRAM index time series using (a) different procedures to parametrise OPTRAM (peatland-specific manual vs. globally applicable automatic parametrisation in Google Earth Engine), and (b) different satellite input data (Landsat vs. Sentinel-2). The results based on the manual parametrisation of OPTRAM indicate a high correlation with in-situ WTD time series for the pixels with vegetation most suitable for OPTRAM application (mean Pearson correlation of 0.7 across sites), and we will present the performance differences when moving from a manual to an automatic procedure. Furthermore, for the overlap period of Landsat and Sentinel-2, which have different ranges and widths of the short-wave infrared bands used for the OPTRAM calculation, the impact of the satellite input data on OPTRAM will be analysed. Finally, the challenge of merging different satellite missions in the derivation of OPTRAM time series will be explored as an important step towards a global application of OPTRAM for monitoring WTD dynamics in northern peatlands.
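For reference, the sketch below shows one common formulation of the OPTRAM index, in which the short-wave infrared transformed reflectance (STR) is normalized between a dry and a wet edge in STR-NDVI space. The edge parameters here are arbitrary placeholders standing in for values that would be fitted to the scene (manually or automatically); the abstract's exact parametrisation may differ.

# Minimal NumPy sketch of a common OPTRAM formulation. The dry/wet edge
# parameters (i_d, s_d, i_w, s_w) are placeholders; in practice they are fitted
# to the STR-NDVI scatter of the scene.
import numpy as np

def str_index(swir_reflectance):
    # Short-wave infrared transformed reflectance: STR = (1 - SWIR)^2 / (2 * SWIR)
    return (1.0 - swir_reflectance) ** 2 / (2.0 * swir_reflectance)

def optram(ndvi, swir_reflectance, i_d=0.2, s_d=1.0, i_w=2.0, s_w=6.0):
    # Normalized wetness: ~0 at the dry edge, ~1 at the wet edge.
    s = str_index(swir_reflectance)
    dry_edge = i_d + s_d * ndvi
    wet_edge = i_w + s_w * ndvi
    return (s - dry_edge) / (wet_edge - dry_edge)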


2020 ◽  
Vol 12 (19) ◽  
pp. 3120
Author(s):  
Luojia Hu ◽  
Nan Xu ◽  
Jian Liang ◽  
Zhichao Li ◽  
Luzhen Chen ◽  
...  

A high-resolution (e.g., 10-m) mangrove map that includes small mangrove patches is urgently needed for mangrove protection and ecosystem function estimation, because more and more small mangrove patches have disappeared under the influence of human disturbance and sea-level rise. However, recent national-scale mangrove forest maps are mainly derived from 30-m Landsat imagery, whose spatial resolution is too coarse to accurately characterize the extent of mangroves, especially those of small size. Sentinel imagery with 10-m resolution now provides an opportunity to generate high-resolution mangrove maps containing these small mangrove patches. Here, we used spectral/backscatter-temporal variability metrics (quantiles) derived from Sentinel-1 SAR (Synthetic Aperture Radar) and/or Sentinel-2 MSI (Multispectral Instrument) time-series imagery as input features of a random forest to classify mangroves in China. We found that Sentinel-2 (F1-score of 0.895) is more effective than Sentinel-1 (F1-score of 0.88) in mangrove extraction, and that a combination of SAR and MSI imagery achieves the best accuracy (F1-score of 0.94). The 10-m mangrove map derived by combining SAR and MSI data identified 20,003 ha of mangroves in China, and the area of small mangrove patches (<1 ha) is 1741 ha, occupying 8.7% of the whole mangrove area. At the province level, Guangdong has the largest area of small mangrove patches (819 ha), and Fujian has the highest percentage of small mangrove patches (11.4%). A comparison with existing 30-m mangrove products showed noticeable disagreement, indicating the necessity of generating a mangrove extent product at 10-m resolution. This study demonstrates the significant potential of using Sentinel-1 and Sentinel-2 images to produce an accurate and high-resolution mangrove forest map with Google Earth Engine (GEE). The mangrove forest map is expected to provide critical information to conservation managers, scientists, and other stakeholders in monitoring the dynamics of the mangrove forest.
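Once a 10-m binary mangrove map exists, the small-patch statistics reported above can be approximated in Earth Engine with connected-pixel counts, as in the sketch below. The mask asset is a hypothetical placeholder, and this is an illustrative approach rather than the authors' exact procedure.

# Illustrative Earth Engine (Python API) sketch: isolate mangrove patches
# smaller than 1 ha on a 10-m binary mask. 'users/example/mangrove_10m' is a
# hypothetical asset standing in for the classified map.
import ee
ee.Initialize()

mangrove = ee.Image('users/example/mangrove_10m').selfMask()

# At 10 m resolution, 1 ha corresponds to 100 pixels.
patch_size = mangrove.connectedPixelCount(maxSize=128, eightConnected=True)
small_patches = mangrove.updateMask(patch_size.lt(100))

# Summing pixel areas over a region of interest then gives the area in hectares.
small_patch_area_ha = small_patches.multiply(ee.Image.pixelArea()).divide(1e4)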


2019 ◽  
Vol 11 (13) ◽  
pp. 1619 ◽  
Author(s):  
Zhou Ya’nan ◽  
Luo Jiancheng ◽  
Feng Li ◽  
Zhou Xiaocheng

Spatial features retrieved from satellite data play an important role in improving crop classification. In this study, we proposed a deep-learning-based time-series analysis method to extract and organize spatial features to improve parcel-based crop classification using high-resolution optical images and multi-temporal synthetic aperture radar (SAR) data. Central to this method is the use of multiple deep convolutional networks (DCNs) to extract spatial features and of a long short-term memory (LSTM) network to organize them. First, a precise farmland parcel map was delineated from the optical images. Second, hundreds of spatial features were retrieved using multiple DCNs from the preprocessed SAR images and overlaid onto the parcel map to construct multivariate time series of crop growth for the parcels. Third, LSTM-based network structures for organizing these time-series features were constructed to produce a final parcel-based classification map. The method was applied to a dataset of high-resolution ZY-3 optical images and multi-temporal Sentinel-1A SAR data to classify crop types in Hunan Province, China. The classification results, showing an improvement of greater than 5.0% in overall accuracy relative to methods without spatial features, demonstrated the effectiveness of the proposed method in extracting and organizing spatial features for improving parcel-based crop classification.
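The temporal-organization stage of such a pipeline can be prototyped compactly. The PyTorch sketch below runs an LSTM over per-parcel multivariate time series of spatial features and maps the final hidden state to crop-class logits; all dimensions (number of DCN features, SAR dates, crop classes) are hypothetical placeholders rather than the configuration used in the paper.

# Minimal PyTorch sketch: LSTM over per-parcel time series of spatial features.
# Feature dimension, sequence length, and class count are illustrative.
import torch
import torch.nn as nn

class ParcelLSTMClassifier(nn.Module):
    def __init__(self, n_features=256, hidden=128, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, n_dates, n_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])    # per-parcel crop-class logits

model = ParcelLSTMClassifier()
dummy = torch.randn(4, 12, 256)      # 4 parcels, 12 SAR dates, 256 DCN features
logits = model(dummy)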


2020 ◽  
Vol 57 (8) ◽  
pp. 1005-1025
Author(s):  
Ya’nan Zhou ◽  
Xianzeng Yang ◽  
Li Feng ◽  
Wei Wu ◽  
Tianjun Wu ◽  
...  
Keyword(s):  

2019 ◽  
Vol 11 (7) ◽  
pp. 752 ◽  
Author(s):  
Zhongchang Sun ◽  
Ru Xu ◽  
Wenjie Du ◽  
Lei Wang ◽  
Dengsheng Lu

Accurate and timely urban land mapping is fundamental to supporting large-area environmental and socio-economic research. Most of the available large-area urban land products are limited to a spatial resolution of 30 m. The fusion of optical and synthetic aperture radar (SAR) data for large-area, high-resolution urban land mapping has not yet been widely explored. In this study, we propose a fast and effective urban land extraction method using ascending/descending orbits of Sentinel-1A SAR data and Sentinel-2 MSI (MultiSpectral Instrument, Level 1C) optical data acquired from 1 January 2015 to 30 June 2016. Potential urban land (PUL) was identified first through logical operations on yearly mean and standard deviation composites from a time series of ascending/descending orbits of SAR data. Yearly Normalized Difference Vegetation Index (NDVI) maximum and modified Normalized Difference Water Index (MNDWI) mean composites were generated from Sentinel-2 imagery. The slope image derived from SRTM DEM data was used to mask mountain pixels and reduce the false positives in SAR data over these regions. We applied a region-specific threshold to the PUL to extract the target urban land (TUL), and global thresholds to the MNDWI mean and slope images to extract water bodies and high-slope regions. A majority filter with a 3 × 3 window was applied to the previously extracted results, and the main processing was carried out on the Google Earth Engine (GEE) platform. China was chosen as the testing region to validate the accuracy and robustness of our proposed method through 224,000 validation points randomly selected from high-resolution Google Earth imagery. Additionally, a total of 735 blocks with a size of 900 × 900 m were randomly selected and used to compare our product’s accuracy with the global human settlement layer (GHSL, 2014), GlobeLand30 (2010), and Liu (2015) products. Our method demonstrated the effectiveness of using a fusion of optical and SAR data for large-area urban land extraction, especially in areas where optical data fail to distinguish urban land from spectrally similar objects. Results show that the average overall, producer’s, and user’s accuracies are 88.03%, 94.50%, and 82.22%, respectively.
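A compact Earth Engine prototype of the compositing and masking steps is shown below. The area of interest and all thresholds are illustrative placeholders; in particular, the thresholds in the paper are region-specific rather than the fixed values used here.

# Minimal Earth Engine (Python API) sketch of yearly SAR composites plus
# optical and terrain masks. AOI and all thresholds are illustrative.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([115.0, 39.0, 118.0, 41.0])  # hypothetical AOI
start, end = '2015-01-01', '2016-06-30'

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi).filterDate(start, end).select('VV'))
s1_mean = s1.mean()
s1_std = s1.reduce(ee.Reducer.stdDev())
potential_urban = s1_mean.gt(-8).And(s1_std.lt(3))      # illustrative thresholds

s2 = (ee.ImageCollection('COPERNICUS/S2')
      .filterBounds(aoi).filterDate(start, end)
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
ndvi_max = s2.map(lambda i: i.normalizedDifference(['B8', 'B4'])).max()
mndwi_mean = s2.map(lambda i: i.normalizedDifference(['B3', 'B11'])).mean()

slope = ee.Terrain.slope(ee.Image('USGS/SRTMGL1_003'))

urban = (potential_urban
         .And(ndvi_max.lt(0.6))      # drop densely vegetated pixels
         .And(mndwi_mean.lt(0.2))    # drop water bodies
         .And(slope.lt(15)))         # drop steep terrain
urban = urban.focal_mode(radius=1, kernelType='square')  # 3 x 3 majority filter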


2019 ◽  
Vol 11 (7) ◽  
pp. 820 ◽  
Author(s):  
Haifeng Tian ◽  
Ni Huang ◽  
Zheng Niu ◽  
Yuchu Qin ◽  
Jie Pei ◽  
...  

Timely and accurate mapping of winter crop planting areas in China is important for food security assessment at the national level. Time series of vegetation indices, such as the normalized difference vegetation index (NDVI), are widely used for crop mapping, as they can characterize the growth cycle of crops. However, with the moderate-spatial-resolution optical imagery acquired by Landsat and Sentinel-2, it is difficult to obtain complete time-series curves of vegetation indices, owing to the satellites' revisit cycles and weather conditions. Therefore, in this study, we propose a method for compositing the multi-temporal NDVI in order to map winter crop planting areas with Landsat-7 and -8 and Sentinel-2 optical images. The algorithm composites the multi-temporal NDVI into three key values according to two time windows for the winter crops: a period of low NDVI values and a period of high NDVI values. First, we identify the two time windows according to the NDVI time series obtained from daily Moderate Resolution Imaging Spectroradiometer observations. Second, the 30 m spatial resolution multi-temporal NDVI curve, derived from the Landsat-7 and -8 and Sentinel-2 optical images, is composited by selecting the maximal value in the high-NDVI period and the minimal and median values in the low-NDVI period, using an algorithm on Google Earth Engine. Third, a decision tree classification method is used to perform the winter crop classification at the pixel level. The results indicate that this method is effective for the large-scale mapping of winter crops. In the study area, the area of winter crops in 2018 was determined to be 207,641 km2, with an overall accuracy of 96.22% and a kappa coefficient of 0.93. The method proposed in this paper is expected to contribute to the rapid and accurate mapping of winter crops in large-scale applications and analyses.
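The three-value compositing step maps naturally onto Earth Engine reducers, as in the sketch below. Only Sentinel-2 is used here for brevity, and the area of interest, the two time windows, and the decision-tree thresholds are hypothetical placeholders (in the paper the windows come from MODIS NDVI time series and the composites also include Landsat-7/-8).

# Minimal Earth Engine (Python API) sketch of the three-value NDVI compositing.
# The AOI, time windows, and thresholds are illustrative placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([113.0, 33.0, 116.0, 36.0])  # hypothetical AOI

def ndvi_series(start, end):
    col = (ee.ImageCollection('COPERNICUS/S2')
           .filterBounds(aoi).filterDate(start, end)
           .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
    return col.map(lambda i: i.normalizedDifference(['B8', 'B4']).rename('NDVI'))

# Hypothetical windows: winter crops are green in spring, sparse in late autumn.
high_window = ndvi_series('2018-03-01', '2018-05-01')
low_window = ndvi_series('2017-10-15', '2017-12-15')

composite = ee.Image.cat([
    high_window.max().rename('ndvi_high_max'),
    low_window.min().rename('ndvi_low_min'),
    low_window.median().rename('ndvi_low_median'),
])

# A simple decision-tree style rule on the three key values (thresholds illustrative).
winter_crop = (composite.select('ndvi_high_max').gt(0.5)
               .And(composite.select('ndvi_low_min').lt(0.3)))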


2020 ◽  
Vol 10 (14) ◽  
pp. 4764 ◽  
Author(s):  
Athos Agapiou

Vegetation cover is an essential parameter for assessing various natural and anthropogenic hazards that occur in the vicinity of archaeological sites and landscapes. In this study, we used free and open-access Copernicus Earth Observation datasets. In particular, the proportion of vegetation cover is estimated from the analysis of Sentinel-1 radar and Sentinel-2 optical images, after their radiometric and geometric correction, based on the Radar Vegetation Index (RVI) and the Normalized Difference Vegetation Index (NDVI). Due to the medium resolution of these datasets (10 m), the crowdsourced OpenStreetMap service was used to identify fully vegetated and non-vegetated pixels. The case study focuses on the western part of Cyprus, where various open-air archaeological sites exist, such as the archaeological site of “Nea Paphos” and the “Tombs of the Kings”. A cross-comparison of the results between the optical and the radar images is presented, as well as a comparison with ready-made products derived from the Sentinel Hub service, such as the Sentinel-1 Synthetic Aperture Radar Urban and Sentinel-2 Scene classification data. Moreover, the proportion of vegetation cover was evaluated against free high-resolution Google Earth red-green-blue optical images, indicating that a good correlation between the RVI and NDVI can be obtained only over vegetated areas. The overall findings indicate that Sentinel-1 and -2 indices provide a similar pattern only over vegetated areas, which can be further elaborated to estimate temporal changes using integrated optical and radar Sentinel data. This study can support future investigations related to hazard analysis based on the combined use of optical and radar sensors, especially in areas with high cloud cover.
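For orientation, the sketch below shows the two indices and a simple linear scaling of an index to a vegetation proportion between bare and fully vegetated reference values. The dual-polarisation RVI form used here (4·VH / (VV + VH), on linear backscatter) is a common choice for Sentinel-1, and the scaling endpoints are placeholders for values taken, e.g., from OpenStreetMap-identified pixels; the study's exact formulations may differ.

# Minimal NumPy sketch of NDVI, a dual-pol RVI, and a linear vegetation-proportion
# scaling. Formulations and endpoints are illustrative assumptions.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def rvi_dual_pol(vv_linear, vh_linear):
    # A common dual-polarisation RVI for Sentinel-1 (linear backscatter units).
    return 4.0 * vh_linear / (vv_linear + vh_linear)

def vegetation_proportion(index, index_bare, index_full):
    # Scale an index between bare-soil and fully vegetated reference values.
    fv = (index - index_bare) / (index_full - index_bare)
    return np.clip(fv, 0.0, 1.0)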


Proceedings ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 19
Author(s):  
C. Dineshkumar ◽  
S. Nitheshnirmal ◽  
Ashutosh Bhardwaj ◽  
K. Nivedita Priyadarshini

Rice is an important staple food crop worldwide, especially in India. Accurate and timely prediction of rice phenology plays a significant role in the management of water resources, administrative planning, and food security. In addition to conventional methods, remotely sensed time-series data can provide the necessary estimation of rice phenological stages over a large region. Thus, the present study utilizes the 16-day composite Enhanced Vegetation Index (EVI) product, with a spatial resolution of 250 m, from the Moderate Resolution Imaging Spectroradiometer (MODIS) to monitor the rice phenological stages over Karur district of Tamil Nadu, India, using the Google Earth Engine (GEE) platform. The rice fields in the study area were classified using a machine learning algorithm in GEE. Ground truth obtained from the paddy fields during crop production was used for classifying the paddy-grown area. After the classification of the paddy fields, the local maxima and local minima present in the EVI time series of each pixel were used to determine the paddy growing stages in the study area. The results show that in the initial stage the EVI of the paddy fields exhibits a local minimum (0.23), whereas a local maximum (0.41) is reached during the peak vegetative stage. The results derived from the present study using MODIS data were cross-validated using the field data.
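Per-pixel extraction of those local extrema from a 16-day EVI series can be prototyped in a few lines, as in the SciPy sketch below; the synthetic EVI curve is purely illustrative and stands in for a real MODIS pixel time series.

# Minimal SciPy sketch: local minima/maxima of a per-pixel 16-day EVI series.
# The EVI curve below is synthetic and purely illustrative.
import numpy as np
from scipy.signal import find_peaks

t = np.arange(23)                                     # 23 MODIS 16-day composites per year
evi = 0.3 + 0.15 * np.sin(2 * np.pi * (t - 4) / 23)   # placeholder EVI curve

max_idx, _ = find_peaks(evi)    # local maxima, e.g., the peak vegetative stage
min_idx, _ = find_peaks(-evi)   # local minima, e.g., the initial/transplanting stage
print('peak-stage composites:', max_idx, 'initial-stage composites:', min_idx)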

