MSNet: A Multi-Stream Fusion Network for Remote Sensing Spatiotemporal Fusion Based on Transformer and Convolution

2021 ◽  
Vol 13 (18) ◽  
pp. 3724
Author(s):  
Weisheng Li ◽  
Dongwen Cao ◽  
Yidong Peng ◽  
Chao Yang

Remote sensing products with both high temporal and high spatial resolution can hardly be obtained under the constraints of existing technology and cost. Therefore, the spatiotemporal fusion of remote sensing images has attracted considerable attention. Spatiotemporal fusion algorithms based on deep learning have developed gradually, but they still face several problems: the amount of available data limits the model's ability to learn, model robustness is low, features extracted through the convolution operation alone are insufficient, and complex fusion strategies introduce noise. To solve these problems, we propose a multi-stream fusion network for remote sensing spatiotemporal fusion based on Transformer and convolution, called MSNet. We introduce the Transformer structure to learn the global temporal correlation of the images, and we also use a convolutional neural network to establish the relationship between input and output and to extract features. Finally, we adopt average weighting as the fusion method, avoiding the noise introduced by more complicated schemes. To test the robustness of MSNet, we conducted experiments on three datasets and compared the results with four representative spatiotemporal fusion algorithms, confirming the superiority of MSNet (Spectral Angle Mapper (SAM) < 0.193 on the CIA dataset, erreur relative globale adimensionnelle de synthèse (ERGAS) < 1.687 on the LGC dataset, and root mean square error (RMSE) < 0.001 on the AHB dataset).
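The three scores quoted above (SAM, ERGAS, and RMSE) are the standard benchmarks for spatiotemporal fusion quality. A minimal NumPy sketch of how they are commonly computed, assuming the fused and reference images are float arrays of shape (H, W, bands); function names and the `ratio` default are illustrative, not taken from the paper:

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between predicted and reference images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def sam(pred, ref, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra."""
    p = pred.reshape(-1, pred.shape[-1])
    r = ref.reshape(-1, ref.shape[-1])
    cos = np.sum(p * r, axis=1) / (
        np.linalg.norm(p, axis=1) * np.linalg.norm(r, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(pred, ref, ratio=16, eps=1e-12):
    """Relative dimensionless global error in synthesis; `ratio` is the
    coarse-to-fine spatial resolution ratio (e.g. MODIS/Landsat ~ 16)."""
    bands = ref.shape[-1]
    acc = 0.0
    for b in range(bands):
        mean_b = np.mean(ref[..., b])
        acc += np.mean((pred[..., b] - ref[..., b]) ** 2) / (mean_b ** 2 + eps)
    return float(100.0 / ratio * np.sqrt(acc / bands))
```

All three metrics are zero for a perfect prediction, so lower is better in each case.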

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Cheng Zhang ◽  
Dan He

Urban data provides a wealth of information that can support people's life and work. In this work, we study object saliency detection in optical remote sensing images, which aids the interpretation of urban scenes. Saliency detection selects the regions of a remote sensing image that carry important information, closely imitating the human visual system, and it plays a powerful supporting role in other image processing tasks, with successful applications in change detection, object tracking, temperature retrieval, and more. Traditional methods suffer from poor robustness and high computational complexity. Therefore, this paper proposes a deep multiscale fusion method via low-rank sparse decomposition for object saliency detection in optical remote sensing images. First, we perform multiscale segmentation of the remote sensing image. Then, we calculate saliency values and generate proposal regions. The superpixel blocks of the remaining proposal regions of the segmentation map are fed into a convolutional neural network; from the extracted depth features, the saliency values are recalculated and the proposal regions are updated. A feature transformation matrix is obtained by gradient descent, and high-level semantic prior knowledge is obtained with the convolutional neural network. This process is iterated to obtain the saliency map at each scale. The transformed matrix is then decomposed into low-rank and sparse parts by robust principal component analysis. Finally, a weighted cellular automata method fuses the multiscale saliency maps with the saliency map computed from the sparse noise obtained by the decomposition. Meanwhile, the object prior knowledge filters out most of the background information, reduces unnecessary depth feature extraction, and meaningfully improves the saliency detection rate.
Experimental results show that the proposed method improves the detection effect compared with other deep learning methods.
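The low-rank sparse decomposition step in the pipeline above is the classic robust principal component analysis (RPCA) problem: split a matrix into a low-rank background part and a sparse salient part. A toy sketch of a fixed-penalty ADMM-style solver, assuming standard default parameters; this is a simple stand-in, not the paper's exact algorithm:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, n_iter=100):
    """Decompose M into low-rank L plus sparse S by alternating
    singular value thresholding and soft-thresholding."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # common default weight
    mu = mu or m * n / (4.0 * np.sum(np.abs(M)) + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                           # dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse update
        Y = Y + mu * (M - L - S)                   # dual ascent
    return L, S
```

In the saliency setting, the recovered sparse component `S` corresponds to the rare, visually salient structures that deviate from the low-rank background.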


2020 ◽  
Author(s):  
Junming Yang ◽  
Yunjun Yao ◽  
Ke Shang ◽  
Xiaozheng Guo ◽  
Xiangyi Bei ◽  
...  

The study of crop water consumption at small scales, such as irrigation districts, requires remote sensing data with both high spatial and high temporal resolution; however, images possessing both cannot be obtained directly for technical reasons. To solve this problem, this paper presents a multisource remote sensing spatiotemporal reflectance fusion method based on a fuzzy C-means clustering model (FCMSTRFM) and a multisource vegetation index (VI) spatiotemporal fusion model (VISTFM). Landsat 8 OLI and MOD09GA data are combined to generate high spatiotemporal resolution reflectance data, and Landsat 8 OLI, MOD09GA, and MOD13Q1 data are combined to generate high spatiotemporal resolution normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI) data.

The rice-planting area is mapped by the spectral correlation similarity (SCS) between a standard EVI time-series curve, based on the EVI generated by VISTFM, and the average value of each EVI class obtained by clustering the multiphase EVI into several classes; the extraction results are verified by two methods, ground samples and Google Earth imagery. High spatiotemporal resolution leaf area index (LAI) covering the main rice growth and development stages is generated by whichever of two methods, an artificial neural network or equation fitting, more precisely establishes the relationship between NDVI, EVI, and LAI. Rice yield at the spatial scale is estimated by relating yield to the LAI of the growth stage most strongly correlated with yield. Daily high spatial resolution evapotranspiration is generated by applying the spatiotemporal reflectance fusion method to the MODIS-like and Landsat-like scale evapotranspiration produced by the Surface Energy Balance Algorithm for Land (SEBAL).
Based on the evapotranspiration, LAI, and yield of rice obtained by these remote sensing methods, rice water production functions are established with the Jensen, Blank, Stewart, and Singh models.
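The NDVI and EVI products generated above have standard definitions in terms of red, near-infrared, and blue surface reflectance. A minimal sketch using the usual MODIS coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1); function names are illustrative:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from surface reflectance."""
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard MODIS coefficients;
    the blue band corrects for aerosol and canopy background effects."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

Both functions work elementwise on scalars or NumPy reflectance arrays, so they apply directly to the fused Landsat-like images.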


Sensors ◽  
2008 ◽  
Vol 8 (1) ◽  
pp. 520-528 ◽  
Author(s):  
Shaohui Chen ◽  
Hongbo Su ◽  
Renhua Zhang ◽  
Jing Tian ◽  
Lihu Yang

2018 ◽  
Vol 13 (1) ◽  
pp. 533-543 ◽  
Author(s):  
Jinqiu Zou ◽  
Yinlan Huang ◽  
Lina Chen ◽  
Shi Chen

Abstract: The aim of this study is to assess the winter wheat planting (WWP) area in Henan Province and investigate its temporal and spatial variations using remote sensing (RS) technology. A spectral angle mapper (SAM) was adopted to identify the WWP area of each district, with districts divided by the hierarchical grades of a land surface drought index, during 2001-2015. The results show the expediency of monitoring WWP areas at the regional scale via drought regionalization, which provides a goodness of fit R² = 0.933, a mean relative error MRE = 49,118 ha, and an overall accuracy of up to 90.24%. The major WWP areas in Henan Province were located in the Zhoukou, Zhumadian, Shangqiu, Nanyang, and Xinxiang prefecture-level cities. Two representative sites are mountainous districts with rich water resources or a high urbanization rate, which have a low probability of WWP. Both sites exhibited a strongly manifested evolution of WWP areas, which could be attributed to extremely cold weather conditions, crop alternation, the popularization of new varieties, and fast expansion of built-up areas. The results of this study are instrumental in analyzing crop planting variation characteristics, which should be taken into account in further decision-making related to crop planting strategies.
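The spectral angle mapper used above classifies each pixel by the angle between its spectrum and a set of reference (endmember) spectra, assigning the pixel to the closest match. A minimal sketch, assuming pixels flattened to an (N, bands) array; the threshold value and function names are illustrative, not from the study:

```python
import numpy as np

def spectral_angle(pixels, reference, eps=1e-12):
    """Angle (radians) between each pixel spectrum and one reference
    spectrum. `pixels` is (N, bands), `reference` is (bands,)."""
    cos = pixels @ reference / (
        np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, endmembers, threshold=0.1):
    """Assign each pixel to the endmember with the smallest spectral
    angle; pixels whose best angle exceeds `threshold` stay unclassified
    and are marked -1."""
    angles = np.stack([spectral_angle(pixels, e) for e in endmembers], axis=1)
    labels = np.argmin(angles, axis=1)
    labels[np.min(angles, axis=1) > threshold] = -1
    return labels
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to illumination differences, which suits multi-year crop mapping.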

