Pyramid Information Distillation Attention Network for Super-Resolution Reconstruction of Remote Sensing Images

2021, Vol 13 (24), pp. 5143
Author(s): Bo Huang, Zhiming Guo, Liaoni Wu, Boyong He, Xianjiang Li, ...

Image super-resolution (SR) technology aims to recover high-resolution images from low-resolution originals and is of great significance for the high-quality interpretation of remote sensing images. However, most existing SR reconstruction approaches suffer from network training difficulties and from computational complexity that grows with the number of network layers, which makes them unsuitable for application scenarios with limited computing resources. Furthermore, the complex spatial distributions and rich details of remote sensing images increase the difficulty of their reconstruction. In this paper, we propose the pyramid information distillation attention network (PIDAN) to address these issues. Specifically, we develop the pyramid information distillation attention block (PIDAB) as the building block of the PIDAN. The key components of the PIDAB are the pyramid information distillation (PID) module and the hybrid attention mechanism (HAM) module. First, the PID module uses feature distillation with parallel multi-receptive-field convolutions to extract short- and long-path feature information, which allows the network to obtain more non-redundant image features. Then, the HAM module enhances the sensitivity of the network to high-frequency image information. Extensive validation experiments show that, compared with other advanced CNN-based approaches, the PIDAN achieves a better balance between image SR performance and model size.
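
As an illustration of the block structure described above, the following PyTorch sketch splits the input features into a short path and a long path, processes the long path with parallel convolutions of different receptive fields, and applies a channel-then-spatial hybrid attention step. The class name, split ratio, and layer sizes are assumptions for illustration, not the published PIDAB implementation.

```python
import torch
import torch.nn as nn

class PIDBlockSketch(nn.Module):
    """Rough sketch of an information-distillation block with hybrid attention."""
    def __init__(self, channels=64, distill_ratio=0.5):
        super().__init__()
        self.d = int(channels * distill_ratio)   # distilled (short-path) channels
        r = channels - self.d                    # remaining (long-path) channels
        # parallel convolutions with different receptive fields on the long path
        self.conv3 = nn.Conv2d(r, r, 3, padding=1)
        self.conv5 = nn.Conv2d(r, r, 5, padding=2)
        self.fuse = nn.Conv2d(self.d + 2 * r, channels, 1)
        # hybrid attention: channel attention followed by spatial attention
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        self.sa = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        short, long = torch.split(x, [self.d, x.size(1) - self.d], dim=1)
        long = torch.cat([self.act(self.conv3(long)), self.act(self.conv5(long))], dim=1)
        out = self.fuse(torch.cat([short, long], dim=1))
        out = out * self.ca(out)   # reweight channels
        out = out * self.sa(out)   # reweight spatial positions
        return x + out             # residual connection

# e.g. PIDBlockSketch()(torch.randn(1, 64, 48, 48)).shape -> torch.Size([1, 64, 48, 48])
```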

2021, Vol 13 (14), pp. 2784
Author(s): Bo Huang, Boyong He, Liaoni Wu, Zhiming Guo

Super-resolution (SR) reconstruction of remote sensing images is becoming a highly active area of research. With increasing upscaling factors, richer and more abundant details can progressively be obtained. However, compared with natural images, the complex spatial distribution of remote sensing data increases the difficulty of reconstruction. Furthermore, most SR reconstruction methods suffer from low utilization of feature information and treat all spatial regions of an image equally. To improve the SR reconstruction of remote sensing images, this paper proposes a deep convolutional neural network (DCNN)-based approach, named the deep residual dual-attention network (DRDAN), which fuses global and local information. Specifically, we develop a residual dual-attention block (RDAB) as the building block of the DRDAN. In the RDAB, we first use a local multi-level fusion module to fully extract and deeply fuse the features of different convolution layers, which facilitates the flow of information through the network. After this, a dual-attention mechanism (DAM), comprising both a channel attention mechanism and a spatial attention mechanism, enables the network to adaptively allocate more attention to regions carrying high-frequency information. Extensive experiments indicate that the DRDAN outperforms other comparable DCNN-based approaches in both objective evaluation indexes and subjective visual quality.
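
A minimal PyTorch sketch of a residual block with local multi-level fusion followed by channel and spatial attention is given below; the layer counts, reduction ratios, and names are illustrative assumptions rather than the published RDAB implementation.

```python
import torch
import torch.nn as nn

class RDABSketch(nn.Module):
    """Residual block: multi-level fusion, then channel and spatial attention."""
    def __init__(self, c=64):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, padding=1)
        self.conv2 = nn.Conv2d(c, c, 3, padding=1)
        self.fuse = nn.Conv2d(3 * c, c, 1)   # fuse input + two convolution levels
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // 8, c, 1), nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(nn.Conv2d(c, 1, 3, padding=1), nn.Sigmoid())
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(f1))
        fused = self.fuse(torch.cat([x, f1, f2], dim=1))  # local multi-level fusion
        fused = fused * self.channel_att(fused)           # which channels matter
        fused = fused * self.spatial_att(fused)           # which regions matter
        return x + fused                                  # residual learning
```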


2020, Vol 12 (5), pp. 758
Author(s): Mengjiao Qin, Sébastien Mavromatis, Linshu Hu, Feng Zhang, Renyi Liu, ...

Super-resolution (SR) can improve the spatial resolution of remote sensing images, which is critical for many practical applications such as fine-scale urban monitoring. In this paper, a new single-image SR method, the deep gradient-aware network with image-specific enhancement (DGANet-ISE), was proposed to improve the spatial resolution of remote sensing images. First, DGANet was proposed to model the complex relationship between low- and high-resolution images, and a new gradient-aware loss was designed for the training phase to preserve more gradient details in super-resolved remote sensing images. Then, the ISE approach was applied in the testing phase to further improve SR performance: by exploiting the specific features of each test image, ISE further boosts the generalization capability and adaptability of our method on previously unseen datasets. Finally, three datasets were used to verify the effectiveness of our method. The results indicate that DGANet-ISE outperforms the other 14 methods in remote sensing image SR, and the cross-database test results demonstrate that our method exhibits satisfactory generalization performance when adapting to new data.
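
A gradient-aware loss of the kind described above can be written as a pixel-wise term plus a term on finite-difference image gradients; the exact formulation and weighting used by the authors may differ, so the PyTorch sketch below is only an assumed form.

```python
import torch.nn.functional as F

def gradient_aware_loss(sr, hr, weight=0.1):
    """sr, hr: tensors of shape (N, C, H, W); weight balances the gradient term."""
    pixel_loss = F.l1_loss(sr, hr)

    def grads(img):
        # finite differences along width and height
        gx = img[..., :, 1:] - img[..., :, :-1]
        gy = img[..., 1:, :] - img[..., :-1, :]
        return gx, gy

    sr_gx, sr_gy = grads(sr)
    hr_gx, hr_gy = grads(hr)
    grad_loss = F.l1_loss(sr_gx, hr_gx) + F.l1_loss(sr_gy, hr_gy)
    return pixel_loss + weight * grad_loss
```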


2021, Vol 13 (15), pp. 2966
Author(s): Yunchuan Ma, Pengyuan Lv, Hao Liu, Xuehong Sun, Yanfei Zhong

In recent years, convolutional neural network (CNN)-based super-resolution (SR) methods have been widely used in the field of remote sensing. However, complicated remote sensing images contain abundant high-frequency details, which are difficult to capture and reconstruct effectively. To address this problem, we propose a dense channel attention network (DCAN) to reconstruct high-resolution (HR) remote sensing images. The proposed method learns multi-level feature information and pays more attention to important and useful regions in order to better reconstruct the final image. Specifically, we construct a dense channel attention mechanism (DCAM), which densely reuses the feature maps from the channel attention blocks via skip connections. This mechanism makes better use of the multi-level feature maps, which contain abundant high-frequency information. Furthermore, we add a spatial attention block, which gives the network more flexible discriminative ability. Experimental results demonstrate that the proposed DCAN outperforms several state-of-the-art methods in both quantitative evaluation and visual quality.
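
The dense reuse of channel-attention outputs can be sketched as follows in PyTorch: each block receives the concatenation of the input and all earlier block outputs, and a spatial attention step follows. Block counts, channel widths, and names are assumptions for illustration, not the published DCAN code.

```python
import torch
import torch.nn as nn

class CABlock(nn.Module):
    """Convolution followed by squeeze-and-excitation-style channel attention."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_out, c_out // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out // 8, c_out, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.conv(x)
        return f * self.att(f)

class DenseCAMSketch(nn.Module):
    """Channel attention blocks densely connected via feature concatenation."""
    def __init__(self, c=64, n_blocks=3):
        super().__init__()
        # block i sees the input plus the outputs of all previous blocks
        self.blocks = nn.ModuleList(CABlock(c * (i + 1), c) for i in range(n_blocks))
        self.compress = nn.Conv2d(c * (n_blocks + 1), c, 1)
        self.spatial_att = nn.Sequential(nn.Conv2d(c, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))  # dense skip connections
        out = self.compress(torch.cat(feats, dim=1))
        return out * self.spatial_att(out)
```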


2021, Vol 13 (9), pp. 1854
Author(s): Syed Muhammad Arsalan Bashir, Yi Wang

This paper addresses the detection of small objects in remote sensing images captured by satellites or aerial vehicles, using image super-resolution for resolution enhancement together with a deep-learning-based detection method. It provides a rationale for applying image super-resolution to small objects by improving the current super-resolution (SR) framework with a cyclic generative adversarial network (GAN) and residual feature aggregation (RFA) to improve detection performance. The novelty of the method is threefold: first, the proposed framework is independent of the final object detector, i.e., YOLOv3 could be replaced with Faster R-CNN or any other object detector; second, a residual feature aggregation network is used in the generator, which significantly improves detection performance because the RFA network captures complex features; and third, the whole network is transformed into a cyclic GAN. The image super-resolution cyclic GAN with RFA and YOLO as the detection network is termed SRCGAN-RFA-YOLO, and its detection accuracy is compared with that of other methods. Rigorous experiments on both satellite and aerial images (the ISPRS Potsdam, VAID, and Draper Satellite Image Chronology datasets) show that detection performance increases when super-resolution methods are used for spatial resolution enhancement; for an IoU of 0.10, an AP of 0.7867 was achieved at a scale factor of 16.
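
One way to picture residual feature aggregation in a generator is sketched below in PyTorch: the residual branches of several residual blocks are kept, concatenated, and fused before the skip connection instead of being discarded after each local addition. This is an assumed layout inspired by the RFA idea, not the authors' SRCGAN-RFA-YOLO code; the cyclic GAN and the YOLO detector are omitted.

```python
import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Two-convolution residual branch; the skip connection is added by the caller."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class RFABlockSketch(nn.Module):
    """Aggregates the residual features of several residual blocks."""
    def __init__(self, c=64, n=4):
        super().__init__()
        self.branches = nn.ModuleList(ResidualBranch(c) for _ in range(n))
        self.fuse = nn.Conv2d(n * c, c, 1)

    def forward(self, x):
        residuals, feat = [], x
        for branch in self.branches:
            r = branch(feat)
            residuals.append(r)   # keep every residual for aggregation
            feat = feat + r       # local residual learning
        return x + self.fuse(torch.cat(residuals, dim=1))
```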


2018, Vol 10 (12), pp. 1934
Author(s): Bao-Di Liu, Wen-Yang Xie, Jie Meng, Ye Li, Yanjiang Wang

In recent years, the collaborative representation-based classification (CRC) method has achieved great success in visual recognition by directly using the training images as dictionary bases. However, it represents a test sample with all training samples to extract shared attributes and does not consider representing the test sample with the training samples of a specific class to extract class-specific attributes. For remote-sensing images, both the shared attributes and the class-specific attributes are important for classification. In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method improves the classification of remote-sensing images by embedding class-specific collaborative representation into conventional collaborative representation-based classification. Moreover, we extend the proposed method to an arbitrary kernel space to exploit the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. Extensive experiments on several benchmark remote-sensing image datasets clearly demonstrate the superior performance of the proposed algorithm over state-of-the-art approaches.
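
For context, the conventional CRC baseline that the hybrid method extends can be sketched in a few lines of numpy: the test sample is coded over all training samples with a ridge-regularized least-squares solution and assigned to the class with the smallest class-wise reconstruction residual. The class-specific term and the kernel extension described above are omitted here, and the variable names are illustrative.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """X: (d, n) training features as columns; labels: (n,) class labels; y: (d,) test sample."""
    # ridge-regularized coding of the test sample over all training samples
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        idx = labels == c
        # reconstruct using only the coefficients belonging to class c
        r = y - X[:, idx] @ alpha[idx]
        residuals[c] = np.linalg.norm(r) / (np.linalg.norm(alpha[idx]) + 1e-12)
    return min(residuals, key=residuals.get)
```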


Sensors, 2018, Vol 18 (2), pp. 498
Author(s): Hong Zhu, Xinming Tang, Junfeng Xie, Weidong Song, Fan Mo, ...
