Spatiotemporal Data Fusion to Generate Synthetic High Spatial and Temporal Resolution Satellite Images

Author(s):  
Jin Chen ◽  
Yuhan Rao ◽  
Xiaolin Zhu
2020 ◽  
Author(s):  
Aojie Shen ◽  
Yanchen Bo ◽  
Duoduo Hu

<p>Scientific research on land surface dynamics in heterogeneous landscapes often requires remote sensing data with high resolution in both space and time. However, no single sensor can provide data at both high resolutions, and cloud contamination often leaves images incomplete. Spatiotemporal data fusion methods are a feasible solution to this data problem, but existing fusion methods struggle to construct regular, cloud-free, dense time series of images with high spatial resolution. To address these limitations of current spatiotemporal data fusion methods, in this paper we present a novel data fusion method for fusing multi-source satellite data to generate a high-resolution, regular, and cloud-free time series of satellite images.</p><p>We incorporate geostatistical theory into the fusion method and treat each pixel value as a random variable composed of a trend and a zero-mean second-order stationary residual. To fuse satellite images, we use the frequently observed coarse-resolution images to capture the trend in time, and use Kriging interpolation to obtain the residual at the fine-resolution scale, which provides the informative spatial detail. To avoid the smoothing effect caused by spatial interpolation, Kriging is performed only in the time dimension. For a given region, the temporal correlation between pixels is fixed once the data reach stationarity, so the weights for temporal Kriging can be derived from a temporal covariance model constructed from the residuals of the coarse-resolution images. The predicted fine-resolution image is obtained by adding each pixel's Kriged residual back to its trend value until every pixel has been predicted. The advantage of the algorithm is that it accurately predicts fine-resolution images in heterogeneous areas by integrating all available information in the fine-spatial-resolution time series.</p><p>We tested our method by fusing MODIS and Landsat NDVI over Bahia State, which has a heterogeneous landscape, and generated an 8-day NDVI time series at 30 m resolution for the whole of 2016. In cross-validation, the average R<sup>2</sup> and RMSE between NDVI from the fused images and from the observed images reached 0.95 and 0.0411, respectively. In addition, experiments demonstrated that our method can also capture correct texture patterns. These promising results demonstrate that this novel method provides an effective means to construct regular, cloud-free time series with high spatiotemporal resolution. In theory, the method can predict the fine-resolution data required on any given day. Such a capability is helpful for monitoring near-real-time land surface and ecological dynamics at the high-resolution scales most relevant to human activities.</p>
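The trend-plus-residual decomposition with Kriging restricted to the time dimension can be sketched as below. This is a minimal illustration, not the authors' implementation: the exponential covariance model, the date values, and the function names are all assumptions made for the example; the abstract only specifies that the covariance model is fitted to residuals of the coarse-resolution series.

```python
import numpy as np

def temporal_kriging_weights(obs_dates, target_date, cov):
    """Solve the ordinary-kriging system in the time dimension only.

    cov(dt) is a temporal covariance model, e.g. fitted to residuals of
    the coarse-resolution series (the exact model is an assumption here).
    """
    n = len(obs_dates)
    # Bordered system: [C 1; 1^T 0] [w; mu] = [c0; 1]
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = cov(abs(obs_dates[i] - obs_dates[j]))
    b = np.ones(n + 1)
    b[:n] = [cov(abs(t - target_date)) for t in obs_dates]
    w = np.linalg.solve(A, b)
    return w[:n]  # drop the Lagrange multiplier

def fuse_pixel(trend_target, residuals, obs_dates, target_date, cov):
    """Predicted fine pixel = coarse trend at the target date
    + temporally Kriged residual from the available fine observations."""
    w = temporal_kriging_weights(obs_dates, target_date, cov)
    return trend_target + float(w @ residuals)

# Hypothetical example: 8-day observation dates, exponential covariance
obs_dates = np.array([0.0, 8.0, 16.0, 24.0])
cov = lambda dt: np.exp(-dt / 10.0)
pred = fuse_pixel(0.5, np.array([0.01, -0.02, 0.03, 0.0]),
                  obs_dates, 12.0, cov)
```

Because the weights are constrained to sum to one, the predictor is unbiased, and with a nugget-free covariance it exactly reproduces a fine observation when the target date coincides with an observed date.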


2019 ◽  
Vol 11 (3) ◽  
pp. 324 ◽  
Author(s):  
Jie Xue ◽  
Yee Leung ◽  
Tung Fung

Studies of land surface dynamics in heterogeneous landscapes often require satellite images with a high resolution, both in time and space. However, the design of satellite sensors often inherently limits the availability of such images. Images with high spatial resolution tend to have relatively low temporal resolution, and vice versa. Therefore, fusion of the two types of images provides a useful way to generate data high in both spatial and temporal resolution. A Bayesian data fusion framework can produce the target high-resolution image on a rigorous statistical foundation. However, existing Bayesian data fusion algorithms, such as the spatio-temporal Bayesian data fusion models STBDF-I and STBDF-II, do not fully incorporate the mixed information contained in low-spatial-resolution pixels, which in turn might limit their fusion ability in heterogeneous landscapes. To enhance the capability of existing STBDF models in handling heterogeneous areas, this study proposes two improved Bayesian data fusion approaches, coined ISTBDF-I and ISTBDF-II, which incorporate an unmixing-based algorithm into the existing STBDF framework. The performance of the proposed algorithms is visually and quantitatively compared with STBDF-II using simulated data and real satellite images. Experimental results show that the proposed algorithms generate images with improved spatio-temporal resolution over STBDF-II, especially in heterogeneous areas. They shed light on the way to further enhance our fusion capability.
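The unmixing step that ISTBDF adds can be illustrated with the standard linear mixing model: each coarse pixel is modeled as the fraction-weighted average of per-class fine-scale reflectances, and the class reflectances are recovered by least squares. This is a generic sketch of unmixing, not the ISTBDF algorithm itself; the function name and the assumption that class fractions come from a fine-resolution classification are illustrative.

```python
import numpy as np

def unmix_coarse_pixels(coarse_vals, fractions):
    """Estimate mean reflectance per land-cover class from coarse pixels.

    coarse_vals: (n_coarse,) observed coarse-pixel reflectances.
    fractions:   (n_coarse, n_class) class fractions inside each coarse
                 pixel, e.g. derived from a fine-resolution land-cover
                 map (an assumption of this sketch).

    Solves the linear mixing model  fractions @ class_means ~= coarse_vals
    in the least-squares sense; needs n_coarse >= n_class and full rank.
    """
    class_means, *_ = np.linalg.lstsq(fractions, coarse_vals, rcond=None)
    return class_means

# Hypothetical example with two classes and four coarse pixels
F = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.3, 0.7]])
coarse = F @ np.array([0.2, 0.6])   # synthetic coarse observations
means = unmix_coarse_pixels(coarse, F)
```

A fusion method can then redistribute these class-level estimates within each coarse pixel, which is how unmixing helps in heterogeneous areas where a single coarse pixel covers several land-cover types.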


2015 ◽  
Vol 53 (11) ◽  
pp. 5853-5860 ◽  
Author(s):  
J. Malleswara Rao ◽  
C. V. Rao ◽  
A. Senthil Kumar ◽  
B. Lakshmi ◽  
V. K. Dadhwal

2020 ◽  
Vol 12 (23) ◽  
pp. 3900
Author(s):  
Bingxin Bai ◽  
Yumin Tan ◽  
Gennadii Donchyts ◽  
Arjen Haag ◽  
Albrecht Weerts

High spatio-temporal resolution remote sensing images are of great significance in the dynamic monitoring of the Earth's surface. However, due to cloud contamination and the hardware limitations of sensors, it is difficult to obtain image sequences with both high spatial and temporal resolution. Combining coarse-resolution images, such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), with fine-spatial-resolution images, such as Landsat or Sentinel-2, has become a popular means to solve this problem. In this paper, we propose a simple and efficient enhanced linear regression spatio-temporal fusion method (ELRFM), which uses fine-spatial-resolution images acquired at two reference dates to establish a linear regression model for each pixel and each band between the image reflectance and the acquisition date. The obtained regression coefficients are used to help allocate the residual error between the real coarse-resolution image and the simulated coarse-resolution image upscaled from the high-spatial-resolution result of the linear prediction. The developed method consists of four steps: (1) linear regression (LR), (2) residual calculation, (3) distribution of the residual and (4) singular value correction. The proposed method was tested in different areas and using different sensors. The results show that, compared to the spatial and temporal adaptive reflectance fusion model (STARFM) and the flexible spatio-temporal data fusion (FSDAF) method, the ELRFM performs better in capturing small feature changes at the fine image scale and has high prediction accuracy. For example, in the red band, the proposed method has the lowest root mean square error (RMSE) (ELRFM: 0.0123 vs. STARFM: 0.0217 vs. FSDAF: 0.0224 vs. LR: 0.0221). Furthermore, the lightweight algorithm design and calculations based on the Google Earth Engine make the proposed method computationally less expensive than STARFM and FSDAF.
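Steps (1)-(3) of the four-step procedure can be sketched as follows. This is a simplified illustration under stated assumptions: only two reference dates feed the per-pixel regression (so it reduces to a two-point line), the residual is distributed evenly within each coarse pixel rather than weighted by the regression coefficients as the paper describes, and step (4), the singular value correction, is omitted. All function and parameter names are hypothetical.

```python
import numpy as np

def elrfm_predict(fine_t1, fine_t2, t1, t2, tp, coarse_tp, scale):
    """Simplified sketch of ELRFM steps (1)-(3).

    fine_t1, fine_t2: fine-resolution images at reference dates t1, t2.
    coarse_tp:        real coarse-resolution image at prediction date tp,
                      assumed co-registered with the fine grid.
    scale:            fine-to-coarse pixel size ratio.
    """
    # (1) per-pixel linear regression of reflectance against date;
    # with two reference images this is the line through the two points
    slope = (fine_t2 - fine_t1) / (t2 - t1)
    pred = fine_t1 + slope * (tp - t1)          # linear prediction at tp
    # (2) residual between the real coarse image and the prediction
    # upscaled by block-averaging
    h, w = pred.shape
    up = pred.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    residual = coarse_tp - up
    # (3) distribute each coarse residual to its fine pixels (evenly here;
    # the paper weights this allocation by the regression coefficients)
    resid_fine = np.kron(residual, np.ones((scale, scale)))
    return pred + resid_fine

# Hypothetical example: 4x4 fine grid, 2x2 coarse grid (scale = 2)
fine_t1 = np.zeros((4, 4))
fine_t2 = np.full((4, 4), 0.4)
coarse_tp = np.full((2, 2), 0.25)
fused = elrfm_predict(fine_t1, fine_t2, 0.0, 10.0, 5.0, coarse_tp, 2)
```

By construction, block-averaging the fused result back to the coarse grid reproduces the real coarse image, which is the consistency property the residual distribution step enforces.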

