Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks

2020 ◽  
Vol 12 (14) ◽  
pp. 2207 ◽  
Author(s):  
Francesco Salvetti ◽  
Vittorio Mazzia ◽  
Aleem Khaliq ◽  
Marcello Chiaberge

Convolutional Neural Networks (CNNs) have consistently achieved state-of-the-art results in image Super-resolution (SR), representing an exceptional opportunity for the remote sensing field to extract further information and knowledge from captured data. However, most work published in the literature has so far focused on the Single-image Super-resolution problem. At present, satellite-based remote sensing platforms offer huge data availability with high temporal resolution and low spatial resolution. In this context, the presented research proposes a novel residual attention model (RAMS) that efficiently tackles the Multi-image Super-resolution task, simultaneously exploiting spatial and temporal correlations to combine multiple images. We introduce a visual feature attention mechanism with 3D convolutions in order to achieve attention-aware data fusion and information extraction from the multiple low-resolution images, transcending the local-region limitations of convolutional operations. Moreover, since the multiple inputs depict the same scene, our representation learning network makes extensive use of nested residual connections to let redundant low-frequency signals flow through and focus the computation on the more important high-frequency components. Extensive experimentation and evaluation against other available solutions, for either Single or Multi-image Super-resolution, demonstrate that the proposed deep learning-based solution can be considered state-of-the-art for Multi-image Super-resolution in remote sensing applications.
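The feature attention described above can be illustrated with a minimal channel-attention sketch in NumPy (a simplified, hypothetical stand-in for the paper's 3D-convolutional attention branch; the function name and shapes are illustrative):

```python
import numpy as np

def feature_attention(features):
    # Re-weight each channel by a sigmoid gate derived from its global
    # average, so informative channels are emphasized before fusion.
    pooled = features.mean(axis=(1, 2))        # (C,) global descriptor
    gates = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid gates in (0, 1)
    return features * gates[:, None, None]     # channel-wise scaling
```

In the full model, the gates would be produced by learned 3D convolutions over the temporal stack rather than a plain global average.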

2019 ◽  
Vol 11 (23) ◽  
pp. 2857 ◽  
Author(s):  
Xiaoyu Dong ◽  
Zhihong Xi ◽  
Xu Sun ◽  
Lianru Gao

Image super-resolution (SR) reconstruction plays a key role in meeting the growing demand for remote sensing imaging applications with high spatial resolution requirements. Though many SR methods have been proposed over the last few years, further research is needed to improve SR processes with regard to the complex spatial distribution of remote sensing images and the diverse spatial scales of ground objects. In this paper, a novel multi-perception attention network (MPSR) is developed with performance exceeding that of many existing state-of-the-art models. By incorporating the proposed enhanced residual block (ERB) and residual channel attention group (RCAG), MPSR can super-resolve low-resolution remote sensing images via multi-perception learning and multi-level adaptive weighted fusion of information. Moreover, a pre-training and transfer learning strategy is introduced, which improves SR performance and stabilizes the training procedure. Experimental comparisons are conducted against 13 state-of-the-art methods over a remote sensing dataset and benchmark natural image sets. The proposed model proves its excellence in terms of both objective criteria and subjective visual quality.
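The multi-level adaptive weighted fusion can be sketched as a softmax-weighted sum of feature maps (a toy illustration only; the actual RCAG learns its weights end-to-end):

```python
import numpy as np

def adaptive_fusion(feature_maps, scores):
    # Softmax-normalize the (here: given) level scores and return the
    # weighted sum of the multi-level feature maps.
    w = np.exp(scores - np.max(scores))
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feature_maps))
```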


Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations, e.g., segmentation or feature extraction, can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms.

We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
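A common way to build training pairs for such end-to-end learning is to degrade high-resolution patches synthetically; the sketch below uses simple block averaging, since the exact degradation model is not specified here:

```python
import numpy as np

def make_training_pair(hr, scale=2):
    # Degrade a high-resolution patch (C, H, W) by block averaging to
    # obtain the low-resolution network input; the patch is the target.
    c, h, w = hr.shape
    lr = hr.reshape(c, h // scale, scale, w // scale, scale).mean(axis=(2, 4))
    return lr, hr
```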


2019 ◽  
Vol 11 (15) ◽  
pp. 1817 ◽  
Author(s):  
Jun Gu ◽  
Xian Sun ◽  
Yue Zhang ◽  
Kun Fu ◽  
Lei Wang

Recently, deep convolutional neural networks (DCNNs) have obtained promising results in single image super-resolution (SISR) of remote sensing images. Due to the high complexity of remote sensing image distributions, most existing methods are still not good enough for remote sensing image super-resolution. Enhancing the representation ability of the network is one of the critical factors in improving remote sensing image super-resolution performance. To address this problem, we propose a new SISR algorithm called the Deep Residual Squeeze and Excitation Network (DRSEN). Specifically, we propose a residual squeeze and excitation block (RSEB) as the building block of DRSEN. The RSEB fuses the input of the current block with its internal features, and models the interdependencies and relationships between channels to enhance the representation power. At the same time, we improve the up-sampling module and the global residual pathway in the network to reduce its number of parameters. Experiments on two public remote sensing datasets (UC Merced and NWPU-RESISC45) show that our DRSEN achieves better accuracy and visual improvements than most state-of-the-art methods. DRSEN is thus beneficial for progress in the remote sensing image super-resolution field.
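The squeeze-and-excitation idea inside such a block can be sketched as follows (w1 and w2 stand for hypothetical learned excitation weights; the real block operates on convolutional features):

```python
import numpy as np

def rse_block(x, w1, w2):
    # Squeeze: global average pooling to a per-channel descriptor.
    s = x.mean(axis=(1, 2))
    # Excite: two small dense layers produce per-channel gates.
    e = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ s, 0.0))))
    # Scale the features and add the identity shortcut (residual path).
    return x + x * e[:, None, None]
```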



2021 ◽  
Author(s):  
Jiaoyue Li ◽  
Weifeng Liu ◽  
Kai Zhang ◽  
Baodi Liu

Remote sensing image super-resolution (SR) plays an essential role in many remote sensing applications. Recently, remote sensing image super-resolution methods based on deep learning have shown remarkable performance. However, directly applying these deep learning methods often fails to recover remote sensing images containing many complex objects or scenes. We therefore propose an edge-based dense connection generative adversarial network (SREDGAN), which minimizes the edge differences between the generated image and its corresponding ground truth. Experimental results on the NWPU-VHR-10 and UCAS-AOD datasets demonstrate that our method improves PSNR by 1.92 dB and SSIM by 0.045 compared with SRGAN.
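The edge-difference objective can be sketched as a gradient-based loss term (an illustrative stand-in; the paper's exact edge extraction is not specified here):

```python
import numpy as np

def edge_loss(pred, target):
    # Compare horizontal and vertical finite-difference gradients, so the
    # generator is penalized when edges drift from the ground truth.
    def grads(img):
        return img[:, 1:] - img[:, :-1], img[1:, :] - img[:-1, :]
    px, py = grads(pred)
    tx, ty = grads(target)
    return np.abs(px - tx).mean() + np.abs(py - ty).mean()
```

Note that a constant brightness offset leaves this term at zero, which is why such an edge loss is typically combined with pixel-wise and adversarial losses.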


Author(s):  
Feng Li ◽  
Runmin Cong ◽  
Huihui Bai ◽  
Yifan He

Recently, Convolutional Neural Network (CNN) based image super-resolution (SR) methods have shown significant success in the literature. However, these methods are implemented as a single-path stream that enriches feature maps from the input for the final prediction, and so fail to fully incorporate earlier low-level features into later high-level features. In this paper, to tackle this problem, we propose a deep interleaved network (DIN) that learns how information at different states should be combined for image SR, where shallow information guides the prediction of deep representative features. Our DIN follows a multi-branch pattern, allowing multiple interconnected branches to interleave and fuse at different states. In addition, an asymmetric co-attention (AsyCA) module is proposed and attached to the interleaved nodes to adaptively emphasize informative features from different states and improve the discriminative ability of the network. Extensive experiments demonstrate the superiority of the proposed DIN over state-of-the-art SR methods.
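Fusing two branch states with data-dependent weights can be illustrated with a toy softmax gate (a loose analogue of the co-attention idea, not its actual implementation):

```python
import numpy as np

def fuse_states(a, b):
    # Weight the two states by softmax over their mean activations, so
    # the state with stronger response dominates the fused output.
    sa, sb = np.abs(a).mean(), np.abs(b).mean()
    wa = np.exp(sa) / (np.exp(sa) + np.exp(sb))
    return wa * a + (1.0 - wa) * b
```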


Author(s):  
M. U. Müller ◽  
N. Ekhtiari ◽  
R. M. Almeida ◽  
C. Rieke

Abstract. Super-resolution aims at increasing image resolution by algorithmic means and has progressed over the recent years due to advances in the fields of computer vision and deep learning. Convolutional Neural Networks based on a variety of architectures have been applied to the problem, e.g. autoencoders and residual networks. While most research focuses on the processing of photographs consisting only of RGB color channels, little work can be found concentrating on multi-band, analytic satellite imagery. Satellite images often include a panchromatic band, which has higher spatial resolution but lower spectral resolution than the other bands. In the field of remote sensing, there is a long tradition of applying pan-sharpening to satellite images, i.e. bringing the multispectral bands to the higher spatial resolution by merging them with the panchromatic band. To our knowledge there are so far no approaches to super-resolution which take advantage of the panchromatic band. In this paper we propose a method to train state-of-the-art CNNs using pairs of lower-resolution multispectral and high-resolution pan-sharpened image tiles in order to create super-resolved analytic images. The derived quality metrics show that the method improves information content of the processed images. We compare the results created by four CNN architectures, with RedNet30 performing best.
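Pan-sharpening itself can be illustrated with a classic Brovey-style transform (shown only to clarify the concept of merging multispectral bands with the panchromatic band; it is not the CNN training method proposed here):

```python
import numpy as np

def brovey_pansharpen(ms_up, pan):
    # Scale each multispectral band by the ratio of the panchromatic band
    # to the mean intensity of the (upsampled, co-registered) bands.
    intensity = ms_up.mean(axis=0) + 1e-8
    return ms_up * (pan / intensity)[None, :, :]
```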


Sensor Review ◽  
2019 ◽  
Vol 39 (5) ◽  
pp. 629-635 ◽  
Author(s):  
Haiqing He ◽  
Ting Chen ◽  
Minqiang Chen ◽  
Dajun Li ◽  
Penggen Cheng

Purpose
This paper aims to present a novel approach to image super-resolution based on deep-shallow cascaded convolutional neural networks for reconstructing a clear and high-resolution (HR) remote sensing image from a low-resolution (LR) input.

Design/methodology/approach
The proposed approach directly learns the residuals and the mapping between simulated LR images and their corresponding HR remote sensing images based on deep and shallow end-to-end convolutional networks, instead of assuming any specific restoration model. Extra max-pooling and up-sampling are used to achieve a multiscale space by concatenating low- and high-level feature maps, and an HR image is generated by combining the LR input and the residual image. This model ensures a strong response to spatially local input patterns by using a large filter and cascaded small filters. The authors adopt an epoch-based strategy to update the learning rate and boost convergence speed.

Findings
The proposed deep network is trained to reconstruct high-quality images from low-quality inputs through a simulated dataset, which is generated from Set5, Set14, the Berkeley Segmentation Dataset and remote sensing images. Experimental results demonstrate that this model considerably enhances remote sensing images in terms of spatial detail and spectral fidelity, and outperforms state-of-the-art SR methods in terms of peak signal-to-noise ratio, structural similarity and visual assessment.

Originality/value
The proposed method can reconstruct an HR remote sensing image from an LR input and significantly improve the quality of remote sensing images in terms of spatial detail and fidelity.
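The epoch-based learning-rate strategy mentioned above is commonly realized as a step decay; a minimal sketch, with placeholder values since the exact schedule is not given:

```python
def step_decay_lr(base_lr, epoch, drop=0.5, every=20):
    # Multiply the learning rate by `drop` every `every` epochs; the
    # drop factor and interval here are illustrative placeholders.
    return base_lr * (drop ** (epoch // every))
```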


2017 ◽  
Vol 6 (4) ◽  
pp. 15
Author(s):  
Janardhan Chidadala ◽  
Ramanaiah K.V. ◽  
Babulu K
