An Improved Spatiotemporal Data Fusion Method Using Surface Heterogeneity Information Based on ESTARFM

2020 ◽  
Vol 12 (21) ◽  
pp. 3673
Author(s):  
Mengxue Liu ◽  
Xiangnan Liu ◽  
Xiaobin Dong ◽  
Bingyu Zhao ◽  
Xinyu Zou ◽  
...  

The use of spatiotemporal data fusion as an effective data interpolation method has received extensive attention in remote sensing (RS) academia. The enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) is one of the best-known spatiotemporal data fusion methods and is widely used to generate synthetic data. However, the ESTARFM algorithm uses moving windows of a fixed size to gather the information around the central pixel, which limits the efficiency and precision of spatiotemporal data fusion. In this paper, a modified ESTARFM data fusion algorithm that integrates surface spatial information via a statistical method was developed. In the modified algorithm, the local variance of the pixels around the central one is used as an index to adaptively determine the window size. Satellite images from two regions were predicted with both ESTARFM and the modified algorithm. The results showed that the images predicted by the modified algorithm captured more detail than those from ESTARFM: the proportion of pixels whose absolute difference between the observed and predicted mean reflectance over six bands fell between 0 and 0.04 was 78% for ESTARFM and 85% for the modified algorithm. In addition, the modified algorithm was more efficient, and the verification test showed its robustness. These promising results demonstrate the superiority of the modified algorithm over ESTARFM for providing synthetic images. Our research enriches spatiotemporal data fusion methods, and the automatic moving-window selection strategy lays the foundation for automatic, large-scale processing of spatiotemporal data fusion.
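As a rough illustration of the window-selection idea, the sketch below picks a moving-window size for each central pixel from the variance of the reflectance in its neighbourhood, growing the window for more heterogeneous surroundings so that enough spectrally similar pixels can still be found. The variance breakpoints, window sizes, and the exact rule are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def local_variance(image, row, col, half=15):
    """Variance of reflectance in a (2*half+1)^2 neighbourhood of (row, col)."""
    r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
    return float(np.var(image[r0:r1, c0:c1]))

def adaptive_window_size(image, row, col,
                         var_breaks=(0.001, 0.005, 0.01),
                         sizes=(21, 31, 41, 51)):
    """Map local heterogeneity to a moving-window size.

    Illustrative rule: more heterogeneous neighbourhoods get a larger window
    so that enough spectrally similar pixels can still be found.  The
    breakpoints and window sizes are assumptions, not the paper's values.
    """
    v = local_variance(image, row, col)
    for threshold, size in zip(var_breaks, sizes):
        if v <= threshold:
            return size
    return sizes[-1]

# Toy usage on a single-band reflectance image
reflectance = np.random.rand(200, 200) * 0.3
window = adaptive_window_size(reflectance, 100, 100)
```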

2021 ◽  
Vol 14 (3) ◽  
pp. 2041-2053
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Bruno Carli ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Abstract. The new platforms for Earth observation from space are characterized by measurements made at high spatial and temporal resolution. While this abundance of information makes it possible to detect and study localized phenomena, it may be difficult to manage such a large amount of data when studying global and large-scale phenomena. A particularly significant example is the use by assimilation systems of Level 2 products that represent gas profiles in the atmosphere. The models on which assimilation systems are based are discretized on spatial grids with horizontal dimensions of the order of tens of kilometres, into which tens or hundreds of measurements may fall in the future. A simple way to overcome this problem is to extract a subset of the original measurements, but this involves a loss of information. Another option is to use simple averages of the profiles, but this approach also has limitations that we discuss in the paper. A more advanced solution is to resort to so-called fusion algorithms, which are capable of compressing the size of the dataset while limiting the information loss. A novel data fusion method, the Complete Data Fusion algorithm, was recently developed to merge a set of retrieved products a posteriori into a single product. In the present paper, we apply the Complete Data Fusion method to ozone profile measurements simulated in the thermal infrared and ultraviolet bands in a realistic scenario. The fused products are then compared with the input profiles; the comparisons show that the output products of the data fusion generally have smaller total errors and higher information content. The comparisons of the fused products with the fusing products are presented both at the scale of a single fusion grid box and through a statistical analysis of the results obtained on large sets of fusion grid boxes of the same size. We also evaluate the impact of the grid box size, showing that the Complete Data Fusion method can be used with different grid box sizes, although this possibility is linked to the natural variability of the considered atmospheric molecule.
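As a rough sketch of the kind of a posteriori combination such a method performs, the code below merges several retrieved profiles by weighting each one through its averaging kernel and noise covariance matrix, with a common a priori acting as regularization. The specific formulation, the handling of the a priori, and the toy numbers are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def complete_data_fusion(profiles, avg_kernels, noise_covs, x_a, S_a):
    """Sketch of an a posteriori fusion of retrieved profiles in the spirit of
    the Complete Data Fusion method: each product x_i is weighted through its
    averaging kernel A_i and noise covariance S_i, with a common a priori
    (x_a, S_a) acting as regularization.  The published algorithm may differ
    in notation and in how the a priori is handled.
    """
    n = x_a.size
    S_a_inv = np.linalg.inv(S_a)
    lhs = S_a_inv.copy()                 # accumulates sum_i A_i^T S_i^-1 A_i + S_a^-1
    rhs = S_a_inv @ x_a                  # accumulates sum_i A_i^T S_i^-1 alpha_i + S_a^-1 x_a
    for x_i, A_i, S_i in zip(profiles, avg_kernels, noise_covs):
        S_i_inv = np.linalg.inv(S_i)
        alpha_i = x_i - (np.eye(n) - A_i) @ x_a   # remove the a priori contribution
        lhs += A_i.T @ S_i_inv @ A_i
        rhs += A_i.T @ S_i_inv @ alpha_i
    x_fused = np.linalg.solve(lhs, rhs)
    S_fused = np.linalg.inv(lhs)         # covariance of the fused profile
    A_fused = S_fused @ (lhs - S_a_inv)  # averaging kernel of the fused profile
    return x_fused, S_fused, A_fused

# Toy usage with two fusing products on a 3-level grid (numbers are arbitrary)
n = 3
x_a, S_a = np.zeros(n), np.eye(n)
x1, A1, S1 = np.array([1.0, 0.8, 0.5]), 0.8 * np.eye(n), 0.04 * np.eye(n)
x2, A2, S2 = np.array([0.9, 0.9, 0.6]), 0.7 * np.eye(n), 0.05 * np.eye(n)
x_f, S_f, A_f = complete_data_fusion([x1, x2], [A1, A2], [S1, S2], x_a, S_a)
```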


2020 ◽  
Vol 12 (23) ◽  
pp. 3979
Author(s):  
Shuwei Hou ◽  
Wenfang Sun ◽  
Baolong Guo ◽  
Cheng Li ◽  
Xiaobo Li ◽  
...  

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images with high spatial resolution and images with high temporal resolution, addressing the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes of different pixels affect the performance of the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends the existing fusion method that incorporates sub-pixel class fraction change information in Flexible Spatiotemporal DAta Fusion (SFSDAF) by performing spectral unmixing adaptively rather than for every pixel, which greatly improves the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. The first is the detection of outliers of temporal change in the image between the origin and prediction dates, since these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The second is an adaptive unmixing strategy based on a guided mask map, which effectively eliminates a great number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves an outstanding balance between computational efficiency and the accuracy of the fusion results.
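A minimal sketch of the guided-mask idea is given below: the temporal change of each coarse-resolution pixel between the origin and prediction dates is screened for outliers, and only the flagged pixels would be passed to the costly spectral-unmixing step. The z-score test, its threshold, and the toy data are assumptions, not the paper's actual detection rule.

```python
import numpy as np

def guided_unmixing_mask(coarse_t1, coarse_t2, z_threshold=2.0):
    """Flag pixels whose temporal change between the origin (t1) and
    prediction (t2) dates is an outlier, so that spectral unmixing is run
    only where it matters and skipped for the remaining pixels.
    The z-score rule and threshold are illustrative assumptions.
    Both inputs are assumed to be on the same (coarse) grid.
    """
    change = coarse_t2 - coarse_t1                 # per-pixel temporal change
    z = (change - change.mean()) / change.std()
    return np.abs(z) > z_threshold                 # True -> run unmixing here

# Toy usage: a guided mask over two coarse-resolution reflectance images
coarse_t1 = np.random.rand(60, 60) * 0.3
coarse_t2 = coarse_t1.copy()
coarse_t2[20:30, 20:30] += 0.2                     # a patch with abrupt change
mask = guided_unmixing_mask(coarse_t1, coarse_t2)
print(f"unmixing needed for {mask.mean():.1%} of pixels")
```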


2020 ◽  
Author(s):  
Aojie Shen ◽  
Yanchen Bo ◽  
Duoduo Hu

Scientific research on land surface dynamics in heterogeneous landscapes often requires remote sensing data with high resolution in both space and time. However, no single sensor provides data at both high resolutions, and images are often incomplete because of cloud contamination. Spatiotemporal data fusion methods are a feasible solution to this data problem, but existing fusion methods struggle to construct regular, cloud-free, dense time series of images with high spatial resolution. To address these limitations of current spatiotemporal data fusion methods, in this paper we present a novel data fusion method that fuses multi-source satellite data to generate a high-resolution, regular, and cloud-free time series of satellite images.

We incorporate geostatistical theory into the fusion method and treat the pixel value as a random variable composed of a trend and a zero-mean, second-order stationary residual. To fuse satellite images, we use the frequently observed coarse-resolution images to capture the trend in time, and we use Kriging interpolation to obtain the residual at the fine-resolution scale, which provides the informative spatial detail. To avoid the smoothing effect caused by spatial interpolation, Kriging is performed only in the time dimension. For a given region, the temporal correlation between pixels is fixed once the data become stationary, so the weights of the temporal Kriging interpolation can be obtained from a temporal covariance model constructed from the residuals of the coarse-resolution images. The predicted fine-resolution image is obtained by adding the interpolated residual of each pixel back to its trend value. The advantage of the algorithm is that it accurately predicts fine-resolution images in heterogeneous areas by integrating all the available information in the fine-spatial-resolution time series.

We tested the method by fusing MODIS and Landsat NDVI over Bahia State, which has a heterogeneous landscape, and generated an 8-day time series of NDVI at 30 m resolution for the whole of 2016. By cross-validation, the average R² and RMSE between the NDVI from the fused images and from the observed images reached 95% and 0.0411, respectively. In addition, experiments demonstrated that the method also captures correct texture patterns. These promising results demonstrate that this novel method provides an effective means of constructing regular and cloud-free time series with high spatiotemporal resolution. Theoretically, the method can predict the fine-resolution data required on any given day. Such a capability is helpful for monitoring near-real-time land surface and ecological dynamics at the high-resolution scales most relevant to human activities.
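A minimal sketch of the time-dimension Kriging step is shown below for a single fine-resolution pixel: a temporal covariance model (here an assumed exponential model, in practice fitted to the residuals of the coarse-resolution series) yields simple-kriging weights, and the interpolated residual is added back to the trend. The covariance model, its parameters, and the toy values are illustrative assumptions, not those of the presented method.

```python
import numpy as np

def exponential_cov(dt, sill=1.0, time_range=16.0):
    """Assumed exponential temporal covariance model; parameters are illustrative."""
    return sill * np.exp(-np.asarray(dt, dtype=float) / time_range)

def temporal_kriging_residual(t_query, t_obs, residuals_obs, cov_fn):
    """Simple-kriging interpolation of a zero-mean residual in the time
    dimension only.  cov_fn(dt) is the temporal covariance model, assumed to
    have been fitted to residuals of the coarse-resolution time series.
    """
    K = cov_fn(np.abs(t_obs[:, None] - t_obs[None, :]))   # covariance between observed dates
    k = cov_fn(np.abs(t_query - t_obs))                   # covariance to the query date
    weights = np.linalg.solve(K, k)                       # simple-kriging weights
    return weights @ residuals_obs                        # residual at t_query

# Usage sketch for one fine-resolution pixel: trend from the coarse series
# plus the kriged residual of the clear fine-resolution observations.
t_obs = np.array([1.0, 17.0, 49.0])             # days with clear fine-resolution observations
residuals_obs = np.array([0.03, -0.01, 0.02])   # fine-pixel value minus trend at t_obs
trend_at_query = 0.62                           # assumed trend at day 33 from the coarse series
ndvi_pred = trend_at_query + temporal_kriging_residual(33.0, t_obs, residuals_obs, exponential_cov)
```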


2020 ◽  
Vol 58 (7) ◽  
pp. 5179-5194 ◽  
Author(s):  
Yang Chen ◽  
Ruyin Cao ◽  
Jin Chen ◽  
Xiaolin Zhu ◽  
Ji Zhou ◽  
...  

2019 ◽  
Vol 9 (18) ◽  
pp. 3693 ◽  
Author(s):  
Shi ◽  
Wang ◽  
Zhang ◽  
Liang ◽  
Niu ◽  
...  

Spatiotemporal fusion methods provide an effective way to generate data with both high temporal and high spatial resolution for monitoring dynamic changes of the land surface. However, existing fusion methods face two main challenges: monitoring abrupt change events and accurately preserving the spatial details of objects. The Flexible Spatiotemporal DAta Fusion method (FSDAF) can monitor abrupt change events, but its predicted images lack intra-class variability and spatial detail. To overcome these limitations, this study proposed a comprehensive and automated fusion method, the Enhanced FSDAF (EFSDAF), and tested it for Landsat–MODIS image fusion. Compared with FSDAF, EFSDAF has the following strengths: (1) it considers the mixed-pixel phenomenon in Landsat images, so its predicted images have more intra-class variability and spatial detail; (2) it adjusts the differences between Landsat images and MODIS images; and (3) it improves the fusion accuracy in abrupt change areas by introducing a new residual index (RI). Vegetation phenology and flood events were selected to evaluate the performance of EFSDAF, which was compared with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Spatial and Temporal Reflectance Unmixing Model (STRUM), and FSDAF. The results show that EFSDAF can monitor both vegetation changes (gradual change) and floods (abrupt change), and its fused images are the best in both visual and quantitative evaluations. More importantly, EFSDAF accurately reproduces the spatial details of objects and is highly robust. Given these advantages, EFSDAF has great potential for monitoring long-term dynamic changes of the land surface.
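As an illustration of the sensor-difference adjustment mentioned in point (2), the sketch below fits a per-band linear relationship between MODIS reflectance and Landsat reflectance aggregated to the MODIS grid at the base date, then applies it to MODIS images at other dates. This is a generic normalization sketch under stated assumptions; the adjustment actually used in EFSDAF may differ, and all array names and values here are hypothetical.

```python
import numpy as np

def sensor_adjustment(modis_base, landsat_base_aggregated):
    """Fit a linear gain/offset between MODIS reflectance and Landsat
    reflectance aggregated to the MODIS grid at the base date (one band),
    and return a function that applies it to MODIS images at other dates.
    Illustrative only; EFSDAF's actual adjustment may differ.
    """
    gain, offset = np.polyfit(modis_base.ravel(), landsat_base_aggregated.ravel(), deg=1)
    return lambda modis_img: gain * modis_img + offset

# Toy usage with synthetic single-band reflectance on the MODIS grid
modis_t1 = np.random.rand(40, 40) * 0.3           # MODIS at the base date
landsat_t1_agg = modis_t1 * 1.05 + 0.01           # aggregated Landsat at the base date
modis_t2 = np.random.rand(40, 40) * 0.3           # MODIS at the prediction date
adjust = sensor_adjustment(modis_t1, landsat_t1_agg)
modis_t2_adjusted = adjust(modis_t2)              # MODIS normalized to Landsat-like reflectance
```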


2019 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Bruno Carli ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Abstract. The new platforms for Earth observation from space are characterized by measurements made with high spatial and temporal resolution. While this abundance of information makes it possible to detect and study localized phenomena, it may be difficult to manage such a large amount of data in the study of global and large-scale phenomena. A particularly significant example is the use by assimilation systems of Level 2 products that represent gas profiles in the atmosphere. The models on which assimilation systems are based are discretized on spatial grids with horizontal dimensions of the order of tens of kilometres, into which tens or hundreds of measurements may fall. A simple procedure to overcome this problem is to extract a subset of the original measurements. However, this procedure involves a loss of information and is therefore justifiable only as a temporary solution. A more refined solution is to resort to so-called fusion algorithms, which are capable of compressing the size of the dataset while limiting the information loss. A novel data fusion method, the Complete Data Fusion, was recently developed to merge a set of retrieved products a posteriori into a single product. In the present paper, the Complete Data Fusion method is applied to ozone profile measurements simulated in the thermal infrared and ultraviolet bands, in a realistic scenario, according to the specifications of the Sentinel 4 and 5 missions of the Copernicus programme. The fused products are then compared with the input profiles; the comparisons show that the output products of the data fusion generally have smaller errors and higher information content. The most significant improvement is an increased vertical resolution together with a reduction of the errors. The comparisons of the fused products with the fusing products are presented both at the scale of a single fusion grid box and through a statistical analysis. The impact of the grid-box size was also evaluated, showing that the Complete Data Fusion method can be used with a wide range of grid-box sizes, with the quality of the products improving for larger grid boxes.

