Deep learning-based framework for spatiotemporal data fusion: an instance of Landsat 8 and Sentinel 2 NDVI

2021 ◽  
Vol 15 (03) ◽  
Author(s):  
Bhogendra Mishra ◽  
Tej Bahadur Shahi


2019 ◽
Vol 11 (24) ◽  
pp. 2927
Author(s):  
Hongcan Guan ◽  
Yanjun Su ◽  
Tianyu Hu ◽  
Jin Chen ◽  
Qinghua Guo

Spatiotemporal data fusion is a key technique for generating unified time-series images from various satellite platforms to support the mapping and monitoring of vegetation. However, the high similarity of the reflectance spectra of different vegetation types poses an enormous challenge for the similar-pixel selection procedure of spatiotemporal data fusion, which may introduce considerable uncertainties into the fused result. Here, we propose an object-based spatiotemporal data-fusion framework that replaces the original similar-pixel selection procedure with an object-restricted method to address this issue. The proposed framework can be applied to any spatiotemporal data-fusion algorithm based on similar pixels. In this study, we modified the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), and the flexible spatiotemporal data-fusion model (FSDAF) using the proposed framework, and evaluated their performance in fusing Sentinel 2 and Landsat 8 images, Landsat 8 and Moderate-resolution Imaging Spectroradiometer (MODIS) images, and Sentinel 2 and MODIS images at a study site covered by grasslands, croplands, coniferous forests, and broadleaf forests. The results show that the proposed object-based framework significantly improves all three data-fusion algorithms by delineating vegetation boundaries more clearly; the improvement on FSDAF is the greatest of the three, with an average decrease of 2.8% in relative root-mean-square error (rRMSE) across all sensor combinations. Moreover, the improvement is most pronounced when fusing Sentinel 2 and Landsat 8 images (an average decrease of 2.5% in rRMSE). By using the fused images generated by the proposed object-based framework, we can improve vegetation mapping results by significantly reducing the "salt-and-pepper" effect. We believe that the proposed object-based framework has great potential for generating time-series high-resolution remote-sensing data for vegetation mapping applications.
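To make the object-restricted step concrete, here is a minimal Python/NumPy sketch of similar-pixel selection constrained to a segmentation object; the function name, the single-band input, and the thresholding rule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def select_similar_pixels(band, objects, row, col, threshold, max_pixels=20):
    """Select spectrally similar pixels for a target pixel, restricted
    to the image object (segment) that contains it.

    band      : 2-D array of fine-resolution reflectance values
    objects   : 2-D integer array of segmentation labels, same shape
    row, col  : indices of the target pixel
    threshold : maximum absolute reflectance difference
    """
    target_value = band[row, col]
    target_object = objects[row, col]

    # Candidate pixels must lie inside the same object ...
    same_object = objects == target_object
    # ... and be spectrally close to the target pixel.
    similar = np.abs(band - target_value) <= threshold
    candidates = np.argwhere(same_object & similar)

    # Keep only the spectrally closest candidates, mirroring what
    # conventional similar-pixel selection does in a moving window.
    diffs = np.abs(band[candidates[:, 0], candidates[:, 1]] - target_value)
    order = np.argsort(diffs)[:max_pixels]
    return candidates[order]
```

A routine like this can stand in for the moving-window similar-pixel search inside STARFM, ESTARFM, or FSDAF, so that only neighbours from the same vegetation object contribute to the prediction.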


2021 ◽  
Vol 13 (21) ◽  
pp. 4400
Author(s):  
Rongkun Zhao ◽  
Yuechen Li ◽  
Jin Chen ◽  
Mingguo Ma ◽  
Lei Fan ◽  
...  

Timely and accurate mapping of paddy rice is important for ensuring food security and protecting the environment for sustainable development. Existing paddy rice mapping methods are often remote sensing approaches based on optical images. However, the availability of high-quality remotely sensed data on paddy rice growing areas is limited by the frequent cloud cover and rain over southwest China. To overcome these limitations, we propose a paddy rice field mapping method that combines a spatiotemporal fusion algorithm with a phenology-based algorithm. First, a modified neighborhood similar pixel interpolator (MNSPI) time-series approach was used to remove clouds from Sentinel-2 and Landsat 8 OLI images acquired in 2020. A flexible spatiotemporal data fusion (FSDAF) model was then used to fuse Sentinel-2 and MODIS data to obtain multi-temporal Sentinel-2 images. The fused remote sensing data were used to construct time-series vegetation indices (NDVI/LSWI) with high spatiotemporal resolution (10 m and ≤16 days). On this basis, the unique physical characteristics of paddy rice during the transplanting period were combined with other auxiliary data to map paddy rice in Yongchuan District, Chongqing, China. Our results were validated against field survey data and show the high accuracy of the proposed method, with an overall accuracy of 93% and a Kappa coefficient of 0.85. The paddy rice planting area map is also consistent with the official data of the third national land survey; at the town level, the correlation between the official survey data and the mapped paddy rice area was 92.5%. These results show that the method can effectively map paddy rice fields in a cloudy and rainy area.
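The phenology-based part of such a method typically exploits the flooding signal during transplanting, when standing water raises LSWI to near or above NDVI. A minimal NumPy sketch of that criterion, assuming surface reflectance inputs; the tolerance value is an illustrative assumption, not one taken from this paper:

```python
import numpy as np

def lswi(nir, swir):
    """Land Surface Water Index from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def flooding_signal(nir, red, swir, tolerance=0.05):
    """Flag pixels showing the flooding/transplanting signal,
    i.e. LSWI + tolerance >= NDVI, following the widely used
    phenology-based rice mapping criterion."""
    return lswi(nir, swir) + tolerance >= ndvi(nir, red)

# A pixel would be mapped as paddy rice only if this signal occurs
# within the transplanting window of the fused time series.
```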


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to expound the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases, those with compact burn scars, while the ML methods seem more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance, with high kappa (around 0.9), on a heterogeneous Mediterranean fire site in Greece; Fast-SCNN performs better than the others, with kappa over 0.79, on a compact boreal forest fire with varied burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on the next generation of Earth observation satellites.
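The cross-sensor transfer described here amounts to stacking the Landsat-8 bands that best match the Sentinel-2 bands the network was trained on and running inference unchanged. A hedged PyTorch sketch of that step; the band pairing, the reflectance scale, and the two-class output are assumptions for illustration, and `model` stands for any of the trained segmentation networks:

```python
import numpy as np
import torch

# Illustrative pairing of fire-sensitive Sentinel-2 bands with their
# closest Landsat-8 OLI counterparts (an assumption, not necessarily
# the exact configuration used in the study).
BAND_PAIRS = {
    "B04 (red)":   "B4",
    "B08 (NIR)":   "B5",
    "B12 (SWIR2)": "B7",
}

def predict_burned_area(model, landsat_bands, scale=10000.0):
    """Apply a Sentinel-2-trained segmentation model to a stack of
    matched Landsat-8 bands (H x W x C surface reflectance integers)."""
    x = landsat_bands.astype(np.float32) / scale      # to reflectance
    x = torch.from_numpy(x).permute(2, 0, 1)[None]    # HWC -> NCHW
    model.eval()
    with torch.no_grad():
        logits = model(x)                             # N x 2 x H x W
    return logits.argmax(dim=1).squeeze(0).numpy()    # burned mask
```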


2021 ◽  
Vol 54 (1) ◽  
pp. 182-208
Author(s):  
Sani M. Isa ◽  
Suharjito ◽  
Gede Putera Kusuma ◽  
Tjeng Wawan Cenggoro

Author(s):  
O. Stocker ◽  
A. Le Bris

Abstract. Fine-grained, accurate, and up-to-date land cover (LC) data are needed for both societal and scientific purposes. Several automatic products have already been proposed, but they are mostly generated from satellite sensors such as Sentinel-2 (S2) or Landsat. Metric sensors, e.g. SPOT-6/7, have received less attention, even though they enable (at least annual) acquisitions at country scale and can now be processed efficiently thanks to deep learning (DL) approaches. This study thus aimed at assessing whether such a sensor can improve land cover products. A custom, simple yet effective DL architecture inspired by U-Net and DeconvNet is developed and applied to SPOT-6/7 and S2 for different LC nomenclatures, with the aim of comparing the relevance of their spatial/spectral configurations and investigating their complementarity. The proposed DL architecture is then extended to data fusion and applied to the previous sensors. Finally, the proposed fusion framework is used to enrich an existing S2-based LC product, as it is generic enough to cope with fusion at distinct levels.
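Feature-level fusion of a metric sensor with S2 can be pictured as a two-branch encoder whose feature maps are merged before classification. A minimal PyTorch sketch under that assumption; the channel counts, depths, and single merge point are simplifications, not the exact architecture of this study:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSegNet(nn.Module):
    """Toy two-branch encoder with feature-level fusion: each sensor
    is encoded separately, S2 features are upsampled to the metric
    SPOT grid, and the concatenated features are classified."""

    def __init__(self, spot_ch=4, s2_ch=10, classes=8):
        super().__init__()
        self.spot_enc = nn.Sequential(
            nn.Conv2d(spot_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.s2_enc = nn.Sequential(
            nn.Conv2d(s2_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(64, classes, 1)  # fuse, then classify

    def forward(self, spot, s2):
        f_spot = self.spot_enc(spot)
        # Bring the coarser S2 features onto the metric SPOT grid.
        f_s2 = F.interpolate(self.s2_enc(s2), size=spot.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.decoder(torch.cat([f_spot, f_s2], dim=1))
```

The same two-branch pattern can merge features earlier (stacked inputs) or later (averaged class scores), which is what makes fusion "at distinct levels" possible within one framework.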


2019 ◽  
Vol 235 ◽  
pp. 111425 ◽  
Author(s):  
Zhenfeng Shao ◽  
Jiajun Cai ◽  
Peng Fu ◽  
Leiqiu Hu ◽  
Tao Liu

2021 ◽  
Vol 13 (5) ◽  
pp. 992
Author(s):  
Dan López-Puigdollers ◽  
Gonzalo Mateo-García ◽  
Luis Gómez-Chova

The systematic monitoring of the Earth using optical satellites is limited by the presence of clouds. Accurately detecting these clouds is necessary to exploit satellite image archives in remote sensing applications. Despite many developments, cloud detection remains an unsolved problem with room for improvement, especially over bright surfaces and for thin clouds. Recently, advances in cloud masking using deep learning have shown significant boosts in cloud detection accuracy. However, these works are validated in heterogeneous ways, and comparisons with operational threshold-based schemes are inconsistent across many of them. In this work, we systematically compare deep learning models trained on Landsat-8 images across different publicly available Landsat-8 and Sentinel-2 datasets. Overall, we show that deep learning models exhibit high detection accuracy when trained and tested on independent images from the same Landsat-8 dataset (intra-dataset validation), outperforming operational algorithms. However, the performance of deep learning models is similar to that of operational threshold-based ones when they are tested on different datasets of Landsat-8 images (inter-dataset validation) or on datasets from a different sensor with similar radiometric characteristics, such as Sentinel-2 (cross-sensor validation). The results suggest that (i) the development of cloud detection methods for new satellites can be based on deep learning models trained on data from similar sensors and (ii) there is a strong dependence of deep learning models on the dataset used for training and testing, which highlights the need for standardized datasets and procedures for benchmarking cloud detection models in the future.
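The intra-/inter-dataset distinction drawn here reduces to how trained models and test data are paired. A small hedged Python sketch of such an evaluation grid, where diagonal entries correspond to intra-dataset validation and off-diagonal entries to inter-dataset or cross-sensor validation; the model `predict` interface and the metric callable are assumptions, not an API from this work:

```python
from itertools import product

def cross_dataset_report(models, datasets, metric):
    """Evaluate every model on every dataset.

    models   : dict mapping training-dataset name -> trained model
    datasets : dict mapping dataset name -> (images, cloud_masks)
    metric   : callable(pred_mask, true_mask) -> float, e.g. accuracy
    """
    scores = {}
    for train_name, test_name in product(models, datasets):
        images, masks = datasets[test_name]
        preds = [models[train_name].predict(img) for img in images]
        scores[(train_name, test_name)] = sum(
            metric(p, m) for p, m in zip(preds, masks)) / len(masks)
    return scores
```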


2021 ◽  
Author(s):  
Rostyslav-Mykola Tsenov

In recent years, many remote sensing problems have benefited from advances in deep learning. In particular, deep learning semantic segmentation algorithms have provided improved frameworks for automated land-use and land-cover (LULC) map generation. Automating LULC map production can significantly increase its frequency, which greatly benefits areas such as natural resource management, wildlife habitat protection, urban expansion, and damage delineation. In this thesis, many different convolutional neural networks (CNNs) were examined in combination with various state-of-the-art semantic segmentation methods and extensions to improve the accuracy of predicted LULC maps. Most of the experiments were carried out using Landsat 5/7 and Landsat 8 satellite images. Additionally, unsupervised domain adaptation (UDA) architectures were explored to transfer knowledge extracted from a labelled Landsat 8 dataset to unlabelled Sentinel-2 satellite images. The performance of the various CNN and extension combinations was carefully assessed, with VGGNet with an output stride of 4 and a modified U-Net architecture providing the best results. An expanded analysis of the generated LULC maps for the various sensors is also provided. The contributions of this thesis are accurate automated LULC map predictions that achieve ~92.4% accuracy using deep neural networks; a model trained on a larger area, six times the size of that in previous work, for both 8-bit Landsat 5/7 and 16-bit Landsat 8 sensors; and a network architecture that produces LULC maps for unlabelled 12-bit Sentinel-2 data using knowledge extracted from labelled Landsat 8 data.
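One practical prerequisite for the cross-sensor transfer described above is putting 8-bit Landsat 5/7, 16-bit Landsat 8, and 12-bit Sentinel-2 pixels on a common numeric range before they enter the same network. A small sketch, assuming normalization by the nominal bit-depth maximum; the thesis's actual preprocessing may differ:

```python
import numpy as np

# Nominal full-scale digital numbers per sensor (an assumption based
# on the stated bit depths, not the thesis's exact normalization).
SENSOR_FULL_SCALE = {
    "landsat5_7": 2**8 - 1,    # 8-bit
    "landsat8":   2**16 - 1,   # 16-bit
    "sentinel2":  2**12 - 1,   # 12-bit
}

def normalize(image, sensor):
    """Rescale raw digital numbers to [0, 1] so that images from
    different sensors become comparable inputs to one network."""
    return image.astype(np.float32) / SENSOR_FULL_SCALE[sensor]
```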

