Using 250-m MODIS Data for Enhancing Spatiotemporal Fusion by Sparse Representation

2020 ◽  
Vol 86 (6) ◽  
pp. 383-392
Author(s):  
Liguo Wang ◽  
Xiaoyi Wang ◽  
Qunming Wang

Spatiotemporal fusion is an important technique for resolving the incompatibility between the temporal and spatial resolution of remote sensing data. In this article, we studied the fusion of Landsat data with fine spatial resolution but coarse temporal resolution and Moderate Resolution Imaging Spectroradiometer (MODIS) data with coarse spatial resolution but fine temporal resolution. The goal of fusion is to produce time-series data with the fine spatial resolution of Landsat and the fine temporal resolution of MODIS. In recent years, learning-based spatiotemporal fusion methods, in particular the sparse representation-based spatiotemporal reflectance fusion model (SPSTFM), have gained increasing attention because of their strong restoration ability for heterogeneous landscapes. However, remote sensing data from different sensors differ greatly in spatial resolution, which limits the performance of spatiotemporal fusion methods (including SPSTFM) to some extent. To increase the accuracy of spatiotemporal fusion, in this article we used the existing 250-m MODIS bands (i.e., the red and near-infrared bands) to downscale the observed 500-m MODIS bands to 250 m before SPSTFM-based fusion of MODIS and Landsat data. The experimental results show that the fusion accuracy of SPSTFM is increased when using 250-m MODIS data, and that the accuracy of SPSTFM coupled with 250-m MODIS data is greater than that of the benchmark methods.
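
The abstract does not specify the downscaling algorithm, but as a rough illustration, the sketch below shows one regression-based way a 500-m MODIS band could be sharpened to 250 m using the 250-m red and near-infrared bands. The linear model, the residual correction, and the function names are assumptions made for illustration, not the procedure used in the article.

```python
import numpy as np
from numpy.linalg import lstsq

def aggregate_2x(band_250m):
    """Average 2x2 blocks of a 250-m band to synthesize its 500-m version."""
    h, w = band_250m.shape
    return band_250m.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downscale_500m_band(band_500m, red_250m, nir_250m):
    """Predict a 250-m version of a 500-m MODIS band from the 250-m red/NIR bands.

    A linear model band_500m ~ a*red + b*nir + c is fitted at 500 m (against the
    aggregated red/NIR bands) and then applied at 250 m; the 500-m residual is
    added back (nearest-neighbour upsampled) so the coarse radiometry is preserved.
    """
    red_500 = aggregate_2x(red_250m)
    nir_500 = aggregate_2x(nir_250m)

    X = np.column_stack([red_500.ravel(), nir_500.ravel(), np.ones(band_500m.size)])
    coef, *_ = lstsq(X, band_500m.ravel(), rcond=None)

    pred_250 = coef[0] * red_250m + coef[1] * nir_250m + coef[2]

    # Residual correction: whatever the regression misses at 500 m is injected
    # back so that re-aggregating the result reproduces the observed 500-m band.
    residual_500 = band_500m - (coef[0] * red_500 + coef[1] * nir_500 + coef[2])
    pred_250 += np.kron(residual_500, np.ones((2, 2)))
    return pred_250
```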

2021 ◽  
Vol 973 (7) ◽  
pp. 21-31
Author(s):  
E.A. Rasputina ◽  
A.S. Korepova

Mapping and analysis of the dates of onset and melting of stable snow cover in the Baikal region for 2000–2010 was carried out on the basis of eight-day MODIS snow-cover composites with a spatial resolution of 500 m, together with verification against the data of 17 meteorological stations. For each year of the decade under study and for each meteorological station, the difference between the dates determined from the MODIS data and those recorded at the weather stations was calculated. The absolute deviations vary from 0 to 36 days for the onset dates and from 0 to 47 days for the dates of stable snow-cover melting; the average of the absolute deviations over all meteorological stations and years is 9–10 days. Deviations of up to 16 days are considered admissible, which covers 83 % of the cases for the onset dates and 79 % for the melting dates. Possible causes of the deviations are analyzed. The largest deviations correspond to coastal meteorological stations and are associated with the inhomogeneity of the snow cover within pixels containing both water and land. The dates of onset and melting of stable snow cover derived from the images turned out to be about 10 days later than those of the weather stations. Snow is established first (from the end of August to the middle of September) on the summits of the Barguzinsky, Baikalsky, and Khamar-Daban ranges, while a stable cover appears later (in late November–December) in the Barguzin valley, the Selenga lowland, and Priolkhonye. The predominant part of the Baikal region is covered with snow in October and is free of it from the end of April to the middle of May.
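
As a minimal illustration of the validation arithmetic described above (date differences, their absolute values, and the 16-day admissibility threshold), the following sketch uses hypothetical MODIS-derived and station-observed onset dates; the numbers are invented for the example.

```python
from datetime import date

def onset_deviation_stats(modis_dates, station_dates, tolerance_days=16):
    """Absolute deviations (in days) between MODIS-derived and station-observed
    snow-onset dates, their mean, and the share within the given tolerance."""
    devs = [abs((m - s).days) for m, s in zip(modis_dates, station_dates)]
    mean_dev = sum(devs) / len(devs)
    share_admissible = sum(d <= tolerance_days for d in devs) / len(devs)
    return devs, mean_dev, share_admissible

# Hypothetical example for two stations in one year
modis = [date(2005, 10, 12), date(2005, 11, 2)]
stations = [date(2005, 10, 5), date(2005, 10, 20)]
print(onset_deviation_stats(modis, stations))
```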


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiaofei Wang ◽  
Xiaoyi Wang

High spatial and temporal resolution remote sensing data play an important role in monitoring rapid changes of the Earth's surface. However, there is an inherent trade-off between the spatial and temporal resolutions of remote sensing images acquired by the same sensor. Spatiotemporal fusion of remote sensing data is an effective way to resolve this contradiction. In this paper, we study a spatiotemporal fusion method based on a convolutional neural network, which fuses Landsat data with high spatial but low temporal resolution and MODIS data with low spatial but high temporal resolution to generate time-series data with high spatial resolution. To improve the accuracy of spatiotemporal fusion, a residual convolutional neural network is proposed. The MODIS image is used as the input to predict the residual image between MODIS and Landsat, and the sum of the predicted residual image and the MODIS data is taken as the predicted Landsat-like image. The residual design not only increases the depth of the super-resolution network but also avoids the vanishing-gradient problem associated with deep network structures. The experimental results show that the prediction accuracy of our method is greater than that of several mainstream methods.
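
To illustrate the residual idea (predicting the MODIS-to-Landsat residual and adding it back to the MODIS input), here is a minimal PyTorch sketch. The number of bands, the network depth, and the L1 loss are illustrative assumptions, not the architecture or training setup reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFusionNet(nn.Module):
    """Predicts the residual between (upsampled) MODIS and Landsat; the
    Landsat-like prediction is the MODIS input plus that residual."""

    def __init__(self, bands=6, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(bands, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, bands, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, modis_up):
        residual = self.body(modis_up)   # learned MODIS-to-Landsat difference
        return modis_up + residual       # skip connection gives the Landsat-like image

# One training step with an L1 loss between the prediction and a reference Landsat patch
net = ResidualFusionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
modis_up = torch.rand(4, 6, 64, 64)      # MODIS patches upsampled to the Landsat grid
landsat = torch.rand(4, 6, 64, 64)       # co-registered Landsat patches
loss = F.l1_loss(net(modis_up), landsat)
opt.zero_grad(); loss.backward(); opt.step()
```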


2019 ◽  
Vol 2 (2) ◽  
pp. 96-104
Author(s):  
Suresh Kumar ◽  
Vijay Bhagat

Satellite remote sensing offers a unique opportunity to derive various components of land information when integrated with ground-based observations. Currently, several remote sensing satellites provide multispectral, hyperspectral, and microwave data to cater to the needs of various land applications. Many older remote sensing satellites have been superseded by new-generation satellites offering high spatial, spectral, and temporal resolution. Microwave remote sensing data are now available at high spatial resolution and provide land information under cloudy conditions, strengthening the availability of remote sensing data on all days. Spatial resolution has improved significantly over the decades, and temporal resolution has improved from monthly to daily. The Indian Remote Sensing programme provides state-of-the-art satellite data in the optical and microwave wavelength regions to serve a wide range of land applications in the country. Today, much remote sensing data is available from open data sources. Upcoming satellite remote sensing data will support the precise characterization and quantification of land resources for sustainable land development planning to meet future challenges.


2019 ◽  
Vol 11 (22) ◽  
pp. 2701
Author(s):  
Yuhui Zheng ◽  
Huihui Song ◽  
Le Sun ◽  
Zebin Wu ◽  
Byeungwoo Jeon

Spatiotemporal fusion provides an effective way to fuse two types of remote sensing data characterized by complementary spatial and temporal properties (typical representatives are Landsat and MODIS images) to generate fused data with both high spatial and high temporal resolution. This paper presents a very deep convolutional neural network (VDCN) based spatiotemporal fusion approach to effectively handle massive remote sensing data in practical applications. Compared with existing shallow learning methods, especially the sparse-representation-based ones, the proposed VDCN-based model has the following merits: (1) explicitly correlating the MODIS and Landsat images by learning a non-linear mapping relationship; (2) automatically extracting effective image features; and (3) unifying the feature extraction, non-linear mapping, and image reconstruction into one optimization framework. In the training stage, we train a non-linear mapping between downsampled Landsat and MODIS data using a VDCN, and then we train a multi-scale super-resolution (MSSR) VDCN between the original Landsat and downsampled Landsat data. The prediction procedure contains three layers, where each layer consists of a VDCN-based prediction and a fusion model. These layers achieve, successively, the non-linear mapping from MODIS to downsampled Landsat data, the two-times super-resolution (SR) of the downsampled Landsat data, and the five-times SR of the downsampled Landsat data. Extensive evaluations are executed on two groups of commonly used Landsat–MODIS benchmark datasets. For the fusion results, the quantitative evaluations on all prediction dates and the visual effect on one key date demonstrate that the proposed approach achieves more accurate fusion results than the sparse-representation-based methods.
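
Purely as a structural sketch, the three-layer prediction cascade described above might be organized as follows. The interfaces of the VDCN predictors and of the per-layer fusion models are assumptions for illustration; the paper's exact fusion inputs are not spelled out in the abstract.

```python
def predict_landsat(modis_t2, landsat_t1, map_net, sr2_net, sr5_net,
                    fuse_coarse, fuse_mid, fuse_fine):
    """Structural sketch of the three-layer cascade: each layer pairs a VDCN
    prediction with a fusion step that injects information from the known
    fine-resolution image at the reference date t1.

    map_net, sr2_net, sr5_net are placeholders for trained VDCNs (non-linear
    mapping, 2x SR, 5x SR); fuse_* are placeholders for the fusion models.
    """
    # Layer 1: non-linear mapping from MODIS at t2 to the downsampled-Landsat scale
    coarse = fuse_coarse(map_net(modis_t2), landsat_t1)
    # Layer 2: two-times super-resolution of the layer-1 output
    mid = fuse_mid(sr2_net(coarse), landsat_t1)
    # Layer 3: five-times super-resolution up to the original Landsat grid
    fine = fuse_fine(sr5_net(mid), landsat_t1)
    return fine
```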


2020 ◽  
Vol 40 (10) ◽  
pp. 1028001
Author(s):  
陈世涵 Chen Shihan ◽  
李玲 Li Ling ◽  
蒋弘凡 Jiang Hongfan ◽  
居伟杰 Ju Weijie ◽  
张曼玉 Zhang Manyu ◽  
...  
