A Distributed Fusion Framework of Multispectral and Panchromatic Images Based on Residual Network

2021 ◽  
Vol 13 (13) ◽  
pp. 2556
Author(s):  
Yuanyuan Wu ◽  
Mengxing Huang ◽  
Yuchun Li ◽  
Siling Feng ◽  
Di Wu

Remote sensing images are widely used across many industries; nevertheless, their resolution is relatively low. Panchromatic sharpening (pan-sharpening) is a research focus in the image fusion domain of remote sensing. Pan-sharpening generates high-resolution multispectral (HRMS) images by making full use of low-resolution multispectral (LRMS) images and panchromatic (PAN) images. Traditional pan-sharpening methods suffer from spectral distortion, ringing artifacts, and low resolution, and convolutional neural networks (CNNs) are gradually being applied to pan-sharpening. To address these problems, we propose a distributed fusion framework based on a residual CNN (RCNN), named RDFNet, which fuses data across three channels and makes the most of the spectral information of LRMS images and the spatial information of PAN images. The proposed network employs a distributed fusion architecture that reuses the fusion result of the previous step in the fusion channel, so that each subsequent fusion step acquires richer spectral and spatial information. Moreover, two feature extraction channels built from residual modules extract the features of the MS and PAN images, respectively, and features of different scales are fed into the fusion channel; in this way, spectral distortion and spatial information loss are reduced. Experiments comparing the proposed RDFNet on data from four different satellites show that it delivers superior performance in improving spatial resolution and preserving spectral information, with good robustness and generalization in improving fusion quality.
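The residual modules in RDFNet's feature-extraction channels follow the standard residual learning pattern, y = x + F(x). A minimal numpy sketch of that pattern is shown below; the single-channel 3x3 convolution and the two-layer block are illustrative stand-ins, not the paper's actual layer configuration.

```python
import numpy as np

def conv3x3(x, w):
    """Valid-mode 2D convolution with a 3x3 kernel (single channel).
    Illustrative stand-in for the learned convolutions in a residual module."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """y = x + F(x): the identity skip connection lets the block learn only
    the residual, which eases optimization in deep fusion networks."""
    f = np.maximum(conv3x3(np.pad(x, 1), w1), 0)   # conv + ReLU
    f = conv3x3(np.pad(f, 1), w2)                  # second conv
    return x + f                                   # identity skip

# with all-zero weights the block reduces to the identity mapping
x = np.arange(16.0).reshape(4, 4)
w0 = np.zeros((3, 3))
assert np.allclose(residual_block(x, w0, w0), x)
```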

2019 ◽  
Vol 11 (5) ◽  
pp. 518 ◽  
Author(s):  
Bao-Di Liu ◽  
Jie Meng ◽  
Wen-Yang Xie ◽  
Shuai Shao ◽  
Ye Li ◽  
...  

At present, nonparametric subspace classifiers, such as collaborative representation-based classification (CRC) and sparse representation-based classification (SRC), are widely used in pattern-classification and -recognition tasks. Meanwhile, the spatial pyramid matching (SPM) scheme, which incorporates spatial information into the image representation, is effective for image classification. However, in SPM the weights used to evaluate the representations of the different subregions are fixed. In this paper, we first introduce the spatial pyramid matching scheme to remote-sensing (RS) image scene-classification tasks to improve performance. Then, we propose a weighted spatial pyramid matching collaborative-representation-based classification method, combining the CRC method with a weighted spatial pyramid matching scheme that is capable of learning the weights of the different subregions in representing an image. Finally, extensive experiments on several benchmark remote-sensing image datasets clearly demonstrate the superior performance of the proposed algorithm compared with state-of-the-art approaches.
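The core of SPM is pooling features over subregions at several pyramid levels and concatenating the results; the weighted variant above replaces SPM's fixed per-level weights with learned per-region weights. A minimal sketch of that pooling step (the histogram features, level layout, and weight vector are illustrative assumptions):

```python
import numpy as np

def weighted_spm(feature_map, weights, levels=(1, 2)):
    """Weighted spatial pyramid pooling: at level L the map is split into
    L x L subregions, each pooled into a normalized histogram and scaled
    by a learned per-region weight instead of SPM's fixed weight."""
    H, W = feature_map.shape
    pooled, k = [], 0
    for L in levels:
        hs, ws = H // L, W // L
        for i in range(L):
            for j in range(L):
                region = feature_map[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                hist, _ = np.histogram(region, bins=4, range=(0, 1))
                pooled.append(weights[k] * hist / hist.sum())
                k += 1
    return np.concatenate(pooled)

fmap = np.random.default_rng(0).random((8, 8))
w = np.ones(5)                      # 1 region at level 1 + 4 at level 2
desc = weighted_spm(fmap, w)
assert desc.shape == (20,)          # 5 subregions x 4 histogram bins
```

A classifier such as CRC then operates on the concatenated descriptor, with the weights learned jointly rather than fixed in advance.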


2020 ◽  
Vol 12 (10) ◽  
pp. 1660 ◽  
Author(s):  
Qiang Li ◽  
Qi Wang ◽  
Xuelong Li

Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, previous works suffer from two main problems. One is the reliance on full three-dimensional convolutions, which inflates the number of network parameters. The other is that the spatial information of the hyperspectral image is not mined sufficiently while the spectral information is being extracted. To address these issues, in this paper we propose a mixed convolutional network (MCNet) for hyperspectral image super-resolution. We design a novel mixed convolutional module (MCM) that extracts potential features by 2D/3D convolution instead of a single type of convolution, enabling the network to mine the spatial features of the hyperspectral image more thoroughly. To exploit effective features from the 2D units, we design a local feature fusion that adaptively aggregates all the hierarchical features in the 2D units. In the 3D unit, we employ spatially and spectrally separable 3D convolutions to extract spatial and spectral information, which reduces otherwise unaffordable memory usage and training time. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance compared to existing state-of-the-art methods.
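The parameter saving from separable 3D convolution can be seen by simple counting: a full k x k x k kernel is replaced by a spatial 1 x k x k kernel followed by a spectral k x 1 x 1 kernel. A sketch of the arithmetic, with illustrative channel counts (the paper's actual layer widths may differ):

```python
def full_3d_params(c_in, c_out, k):
    """Weights in one full 3D convolution layer (bias omitted)."""
    return c_in * c_out * k * k * k

def separable_3d_params(c_in, c_out, k):
    """Spatial (1 x k x k) conv followed by a spectral (k x 1 x 1) conv."""
    spatial = c_in * c_out * k * k     # 1 x k x k kernel
    spectral = c_out * c_out * k       # k x 1 x 1 kernel
    return spatial + spectral

full = full_3d_params(64, 64, 3)       # 64*64*27 = 110592
sep = separable_3d_params(64, 64, 3)   # 36864 + 12288 = 49152
assert sep < full
```

With 64-channel layers and k = 3 the separable form needs fewer than half the weights of the full 3D convolution, which is the memory/training-time saving the abstract refers to.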


Author(s):  
Jakaria Rabbi ◽  
Nilanjan Ray ◽  
Matthias Schubert ◽  
Subir Chowdhury ◽  
Dennis Chao

The detection performance of small objects in remote sensing images is not satisfactory compared to that of large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model called enhanced super-resolution GAN (ESRGAN) shows remarkable image enhancement performance, but the reconstructed images miss high-frequency edge information. Therefore, object detection performance degrades for small objects in recovered noisy and low-resolution remote sensing images. Inspired by the success of edge enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN (EESRGAN) to improve the image quality of remote sensing images and use different detector networks in an end-to-end manner, where the detector loss is backpropagated into the EESRGAN to improve detection performance. We propose an architecture with three components: ESRGAN, an Edge Enhancement Network (EEN), and a detection network. We use residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN, and for the detector network we use the faster region-based convolutional network (FRCNN, a two-stage detector) and the single-shot multi-box detector (SSD, a one-stage detector). Extensive experiments on a public (car overhead with context) dataset and a self-assembled (oil and gas storage tank) satellite dataset show the superior performance of our method compared to standalone state-of-the-art object detectors.
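The EEN's role is to restore the high-frequency edge information that plain super-resolution smooths away. A crude classical analogue is unsharp masking with a Laplacian high-pass filter; the numpy sketch below illustrates that idea only and is not the learned RRDB-based network of the abstract.

```python
import numpy as np

def laplacian(x):
    """4-neighbour Laplacian high-pass response of a 2D image."""
    p = np.pad(x, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4 * x)

def edge_enhance(img, alpha=0.5):
    """Unsharp-mask-style sharpening: subtract a scaled Laplacian so
    edges are amplified while flat regions stay untouched."""
    return img - alpha * laplacian(img)

flat = np.full((5, 5), 0.5)
assert np.allclose(edge_enhance(flat), flat)   # flat regions unchanged
```

In the actual architecture this fixed filter is replaced by a trained network, and the detector loss flowing back through the whole pipeline tunes the enhancement for detection rather than for visual quality alone.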


2020 ◽  
Vol 12 (7) ◽  
pp. 1204
Author(s):  
Xinyu Dou ◽  
Chenyu Li ◽  
Qian Shi ◽  
Mengxi Liu

Hyperspectral remote sensing images (HSIs) have a higher spectral resolution than multispectral remote sensing images, enabling more thorough and effective analysis and processing of spectral data. However, this rich spectral information usually comes at the expense of low spatial resolution, owing to the physical limitations of sensors, which makes it difficult to identify and analyze targets in HSIs. In the super-resolution (SR) field, many methods focus on restoring spatial information while ignoring the spectral aspect. To better restore spectral information in HSI SR, a novel SR method is proposed in this study. First, we innovatively apply three-dimensional (3D) convolution within the SRGAN (Super-Resolution Generative Adversarial Network) structure to exploit spatial features while preserving spectral properties during SR. Moreover, we use an attention mechanism to handle the multiple features from the 3D convolution layers, and we enhance the model's output by improving the content term of the generator's loss function. The experimental results indicate that the 3DASRGAN (3D Attention-based Super-Resolution Generative Adversarial Network) is better both visually and quantitatively than the comparison methods, demonstrating that the 3DASRGAN model can reconstruct high-resolution HSIs with high efficiency.


2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Mengxing Huang ◽  
Shi Liu ◽  
Zhenfeng Li ◽  
Siling Feng ◽  
Di Wu ◽  
...  

A two-stream remote sensing image fusion network (RCAMTFNet) based on the residual channel attention mechanism (RCAM) is proposed in this paper. In RCAMTFNet, the spatial features of the PAN image and the spectral features of the MS image are extracted separately by a two-channel feature extraction layer. Multiple residual connections allow the network to adopt a deeper structure without degradation. The residual channel attention mechanism is introduced to learn the interdependence between channels, and the correlated features among channels are then adapted on the basis of this dependency. In this way, spatial and spectral information are extracted effectively, and the pan-sharpened images are fully reconstructed. Experiments are conducted on two satellite datasets, GaoFen-2 and WorldView-2. The experimental results show that the proposed algorithm is superior to algorithms from the existing literature on both reference and no-reference evaluation indicators.
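Channel attention of the kind RCAM builds on is commonly realized squeeze-and-excitation style: global average pooling produces a per-channel descriptor, a small bottleneck MLP turns it into gates in (0, 1), and each channel is rescaled by its gate. A minimal numpy sketch, with illustrative weight shapes (not the paper's exact formulation):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map:
    pool -> bottleneck MLP -> sigmoid gates -> per-channel rescale."""
    squeeze = features.mean(axis=(1, 2))        # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0)        # bottleneck + ReLU
    scale = 1 / (1 + np.exp(-(w2 @ hidden)))    # (C,) gates in (0, 1)
    return features * scale[:, None, None]

rng = np.random.default_rng(1)
feat = rng.random((4, 8, 8))
w1 = rng.standard_normal((2, 4))    # reduction from 4 to 2 channels
w2 = rng.standard_normal((4, 2))    # expansion back to 4 channels
out = channel_attention(feat, w1, w2)
assert out.shape == feat.shape
```

In a residual variant the rescaled features are added back to the block input, so the attention modulates only the residual path.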



Author(s):  
Y. Zheng ◽  
M. Guo ◽  
Q. Dai ◽  
L. Wang

The GaoFen-2 (GF-2) satellite is a civil optical remote sensing satellite developed independently by China, and the first Chinese satellite with a resolution finer than 1 meter. In this paper, we propose a pan-sharpening method based on guided image filtering, apply it to GF-2 images, and compare its performance to state-of-the-art methods. First, a simulated low-resolution panchromatic (Pan) band is generated; next, the resampled multispectral (MS) image is taken as the guidance image to filter the simulated low-resolution Pan image, and the spatial information is extracted from the original Pan image; finally, the pan-sharpened result is synthesized by injecting the spatial details into each band of the resampled MS image with proper weights. Three groups of GF-2 images acquired over water body, urban, and cropland areas were selected for assessment, and four evaluation metrics were employed for quantitative assessment. The experimental results show that, for GF-2 imagery acquired over different scenes, the proposed method not only achieves high spectral fidelity but also enhances the spatial details.
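The final injection step described above can be sketched in a few lines: spatial detail is the difference between the Pan band and its low-pass version, and that detail is added to each resampled MS band with a per-band gain. The sketch below uses a plain box filter as a simplified stand-in for the guided filter, and the gains are illustrative:

```python
import numpy as np

def box_filter(x, r=1):
    """Mean filter with a (2r+1)^2 window; a simplified stand-in for the
    guided filtering used to obtain the low-pass Pan band."""
    p = np.pad(x, r, mode="edge")
    k = 2 * r + 1
    H, W = x.shape
    out = sum(p[i:i + H, j:j + W] for i in range(k) for j in range(k))
    return out / (k * k)

def inject_details(ms_bands, pan, gains):
    """Detail injection: high-frequency Pan detail is added to each
    resampled MS band with a per-band weight."""
    detail = pan - box_filter(pan)
    return [band + g * detail for band, g in zip(ms_bands, gains)]

pan = np.random.default_rng(2).random((6, 6))
ms = [np.zeros((6, 6)) for _ in range(3)]
sharp = inject_details(ms, pan, gains=[1.0, 0.8, 0.6])
assert all(b.shape == (6, 6) for b in sharp)
```

In the actual method the guidance image (the resampled MS image) steers the filtering so that detail extraction respects MS edges, which is what preserves spectral fidelity.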


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2064 ◽  
Author(s):  
Shuai Wang ◽  
Hui Yang ◽  
Qiangqiang Wu ◽  
Zhiteng Zheng ◽  
Yanlan Wu ◽  
...  

At present, deep-learning methods are widely used for road extraction from remote-sensing images and have effectively improved its accuracy. However, these methods are still affected by the loss of spatial features and the lack of global context information. To solve these problems, we propose a new network for road extraction, the coord-dense-global (CDG) model, built from three parts: a coordconv module that injects coordinate information into the feature maps to reduce the loss of spatial information and strengthen road boundaries; an improved dense convolutional network (DenseNet) that makes full use of multiple features through its dense blocks; and a global attention module designed to highlight high-level information and improve category classification by introducing global information through a pooling operation. When tested on a complex road dataset from Massachusetts, USA, CDG achieved clearly superior performance to contemporary networks such as DeepLabV3+, U-net, and D-LinkNet. For example, its mean IoU (intersection of the prediction and ground-truth regions over their union) and mean F1 score (the harmonic mean of precision and recall) were 61.90% and 76.10%, respectively, 1.19% and 0.95% higher than the results of D-LinkNet (the winner of a road-extraction contest). In addition, CDG was also superior to the other three models in handling tree occlusion. Finally, in a generality study on a Gaofen-2 satellite dataset, the CDG model also performed well at extracting the road network in test maps of Hefei and Tianjin, China.
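The coordconv idea is simple to sketch: before a convolution, append normalized x and y coordinate channels to the feature map so subsequent layers can reason about absolute position. A minimal numpy version (the normalization range is an illustrative choice):

```python
import numpy as np

def add_coord_channels(features):
    """CoordConv-style augmentation of a (C, H, W) feature map: append
    y and x coordinate channels normalized to [-1, 1], so later
    convolutions can use absolute position, e.g. along thin road shapes."""
    C, H, W = features.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    return np.concatenate([features, ys[None], xs[None]], axis=0)

feat = np.zeros((8, 4, 4))
out = add_coord_channels(feat)
assert out.shape == (10, 4, 4)                      # two extra channels
assert out[-1, 0, 0] == -1 and out[-1, 0, -1] == 1  # x spans [-1, 1]
```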



Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3428
Author(s):  
Siya Chen ◽  
Hongyan Zhang ◽  
Tieli Sun ◽  
Jianjun Zhao ◽  
Xiaoyi Guo

Among the many efforts to improve the accuracy of remote sensing image classification, using spatial information is an effective strategy. Classification methods that integrate spatial information with spectral information, called spectral-spatial classification approaches, perform better than traditional classification methods. Constructing a spectral-spatial distance for classification is a common way to combine the spatial and spectral information. To improve the performance of spectral-spatial classification based on such a distance, we introduce the information content (IC) shared by two pixels to measure the spatial relation between them and propose a novel spectral-spatial distance measure. The shared IC of two pixels is computed from the hierarchical tree constructed by statistical region merging (SRM) segmentation. The proposed distance was applied in two distance-based contextual classifiers, the k-nearest neighbors-statistical region merging (k-NN-SRM) and optimum-path forest-statistical region merging (OPF-SRM) classifiers, to obtain two new contextual classifiers, k-NN-SRM-IC and OPF-SRM-IC. The classifiers with the novel distance were evaluated on four land cover images. The classification results of the classifiers based on our spectral-spatial distance outperformed all the other competitive contextual classifiers, demonstrating the validity of the proposed distance measure.
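One simple way such a distance can combine the two cues is to scale the spectral distance down when two pixels share high information content in the segmentation hierarchy (they merge only deep in the tree, so they likely belong to the same region). The toy sketch below uses an exponential weighting, which is an illustrative choice and not the paper's exact formula:

```python
import numpy as np

def spectral_spatial_distance(spec_a, spec_b, shared_ic, beta=1.0):
    """Toy IC-weighted distance: spectral Euclidean distance attenuated by
    the information content the two pixels share in the hierarchical tree.
    High shared IC -> likely same region -> smaller effective distance."""
    d_spec = np.linalg.norm(np.asarray(spec_a) - np.asarray(spec_b))
    return d_spec * np.exp(-beta * shared_ic)

# pixels sharing more IC (deeper common subtree) end up closer
a, b = [0.2, 0.4], [0.3, 0.5]
same_region = spectral_spatial_distance(a, b, shared_ic=2.0)
diff_region = spectral_spatial_distance(a, b, shared_ic=0.1)
assert same_region < diff_region
```

A distance of this shape can be dropped directly into any distance-based classifier, which is how the k-NN-SRM-IC and OPF-SRM-IC variants arise from their base classifiers.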

