Single image super-resolution based on compressive sensing and TV minimization sparse recovery for remote sensing images

Author(s):  
S J Sreeja ◽  
M. Wilscy

2019 ◽
Vol 11 (15) ◽  
pp. 1817 ◽  
Author(s):  
Jun Gu ◽  
Xian Sun ◽  
Yue Zhang ◽  
Kun Fu ◽  
Lei Wang

Recently, deep convolutional neural networks (DCNNs) have obtained promising results in single image super-resolution (SISR) of remote sensing images. Due to the high complexity of remote sensing image distributions, however, most existing methods are still inadequate for remote sensing image super-resolution. Enhancing the representation ability of the network is one of the critical factors for improving performance. To address this problem, we propose a new SISR algorithm called the Deep Residual Squeeze and Excitation Network (DRSEN). Specifically, we propose a residual squeeze and excitation block (RSEB) as the building block of DRSEN. The RSEB fuses the input with the internal features of the current block and models the interdependencies and relationships between channels to enhance representation power. At the same time, we improve the up-sampling module and the global residual pathway to reduce the number of network parameters. Experiments on two public remote sensing datasets (UC Merced and NWPU-RESISC45) show that DRSEN achieves better accuracy and visual quality than most state-of-the-art methods, benefiting progress in remote sensing image super-resolution.
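At the core of the RSEB is the squeeze-and-excitation operation, which pools each channel to a single descriptor and learns per-channel gates that model channel interdependencies. A minimal numpy sketch of that operation (not the authors' implementation; the function name, toy weights, and reduction ratio are illustrative assumptions):

```python
import numpy as np

def squeeze_excitation(features, w1, w2):
    """Reweight channels of a (C, H, W) feature map via squeeze-and-excitation.

    Squeeze: global average pooling gives one descriptor per channel.
    Excitation: a bottleneck MLP (ReLU then sigmoid) produces one scale
    factor per channel, modeling channel interdependencies.
    """
    # Squeeze: (C,) per-channel global context
    z = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP -> per-channel gates in (0, 1)
    s = np.maximum(0.0, w1 @ z)          # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # sigmoid
    # Scale each channel by its learned gate
    return features * s[:, None, None]

# Toy example: 4 channels, reduction ratio 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))  # squeeze 4 -> 2
w2 = rng.standard_normal((4, 2))  # expand 2 -> 4
y = squeeze_excitation(x, w1, w2)
print(y.shape)  # (4, 8, 8)
```

Because the gates lie in (0, 1), each channel is attenuated according to its learned importance; in the real network the two weight matrices are trained end-to-end.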


2021 ◽  
Vol 13 (19) ◽  
pp. 3835
Author(s):  
Wenzong Jiang ◽  
Lifei Zhao ◽  
Yanjiang Wang ◽  
Weifeng Liu ◽  
Baodi Liu

In recent years, the application of deep learning has achieved a huge leap in the performance of remote sensing image super-resolution (SR). However, most existing SR methods employ bicubic downsampling of high-resolution (HR) images to obtain low-resolution (LR) images and use the resulting LR and HR images as training pairs. Such supervised training on ideal-kernel (bicubic) downsampled images degrades significantly when applied to realistic LR remote sensing images, usually producing blurry results. The main reason is that the degradation process of real remote sensing images is more complicated, so the training data cannot reflect the SR problem of real remote sensing images. Inspired by self-supervised methods, this paper proposes a cross-dimension attention guided self-supervised remote sensing single-image super-resolution method (CASSISR). It requires no pre-training on a dataset; it exploits only the internal recurrence of information within a single image, using a lower-resolution image downsampled from the input image to train the cross-dimension attention network (CDAN). The cross-dimension attention module (CDAM) selectively captures more useful internal duplicate information by modeling the interdependence of channel and spatial features and jointly learning their weights. The proposed CASSISR adapts well to real remote sensing image SR tasks. Extensive experiments show that CASSISR achieves superior performance to current state-of-the-art methods.
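The self-supervised setup can be illustrated with a toy pair generator: the observed LR input itself serves as the training target, and a further-downsampled copy of it serves as the network input. A minimal sketch, assuming simple average-pooling as the degradation (the paper's actual downsampling and network are not reproduced here):

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an (H, W) image by an integer factor -- a stand-in
    for the degradation applied to the input image."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    crop = img[:h2 * factor, :w2 * factor]
    return crop.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def make_training_pair(lr_input, factor=2):
    """Self-supervised pair: the observed LR image acts as the 'HR'
    target, and a further-downsampled copy acts as the network input."""
    son = downsample(lr_input, factor)
    return son, lr_input  # (network input, training target)

img = np.arange(64, dtype=float).reshape(8, 8)  # pretend this is the real LR input
x, y = make_training_pair(img, 2)
print(x.shape, y.shape)  # (4, 4) (8, 8)
```

After training on such pairs, the network is applied to the original input to hallucinate the next scale up, so no external dataset is ever needed.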


2021 ◽  
Vol 12 (6) ◽  
pp. 1-20
Author(s):  
Fayaz Ali Dharejo ◽  
Farah Deeba ◽  
Yuanchun Zhou ◽  
Bhagwan Das ◽  
Munsif Ali Jatoi ◽  
...  

Single Image Super-resolution (SISR) produces high-resolution images with fine spatial detail from a remotely sensed image with low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs in this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture-feature representation and high-frequency information. We propose a frequency domain-based spatio-temporal remote sensing single image super-resolution technique that reconstructs the HR image with generative adversarial networks operating on various frequency bands (TWIST-GAN). We introduce a new method incorporating Wavelet Transform (WT) characteristics and a transferred generative adversarial network. The LR image is split into various frequency bands using the WT, while the transferred generative adversarial network predicts the high-frequency components via the proposed architecture. Finally, the inverse wavelet transform produces the reconstructed super-resolution image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image sized 256 × 256. Following that, transferred GANs are used to process spatio-temporal remote sensing images in order to minimize computation cost and improve texture information. The findings are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, by eliminating batch normalization layers, our simplified version saves about 43% of GPU memory during training and executes faster.
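The wavelet split into frequency bands can be sketched with a one-level 2-D Haar transform: the forward transform yields an approximation band (LL) and three detail bands (LH, HL, HH), and the inverse transform reconstructs the image exactly. This is a generic numpy illustration of the decomposition, not the TWIST-GAN code:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform, splitting an (H, W) image
    into four half-size frequency bands: LL, LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: perfectly reconstructs the original image."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
rec = haar_idwt2(ll, lh, hl, hh)
print(np.allclose(rec, img))  # True
```

In the TWIST-GAN pipeline the detail bands are where the network's high-frequency predictions are injected before the inverse transform assembles the super-resolved image.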


Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to suffer from a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations (e.g., segmentation or feature extraction) can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, which make use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable by conventional handcrafted algorithms.

We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with high radiometric resolution, as well as to conventional interpolation methods.
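Comparisons between learned super-resolution and interpolation baselines are conventionally reported in PSNR. A minimal sketch of the metric with a toy check that a less-degraded estimate scores higher (illustrative only; the paper's evaluation protocol is not reproduced here):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB, the standard SISR quality metric:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a mildly corrupted estimate scores higher than a heavily
# corrupted one against the same reference image.
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
weak = np.clip(ref + 0.10 * rng.standard_normal(ref.shape), 0, 1)
strong = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0, 1)
print(psnr(ref, strong) > psnr(ref, weak))  # True
```

Note that `peak` must match the data range; radiometrically rich satellite imagery stored at more than 8 bits per channel needs the correct peak value, or the scores are not comparable across methods.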


2015 ◽  
Vol 7 (2) ◽  
pp. 1-11 ◽  
Author(s):  
Yicheng Sun ◽  
Guohua Gu ◽  
Xiubao Sui ◽  
Yuan Liu ◽  
Chengzhang Yang

Author(s):  
Naser Karimi ◽  
Hamidreza Amindavar ◽  
Rodney Lynn Kirlin ◽  
Ahad Rajabi


Author(s):  
L. Wagner ◽  
L. Liebel ◽  
M. Körner

Abstract. Analyzing optical remote sensing imagery depends heavily on its spatial resolution. At the same time, this data is adversely affected by fixed sensor parameters and environmental influences. Methods for increasing the quality of such data, and thereby optimizing its information content, are thus in high demand. In particular, single-image super-resolution (SISR) approaches aim to achieve this goal solely by observing the individual images.

We propose to adapt a generic deep residual neural network architecture for SISR to the special properties of remote sensing satellite imagery, in particular the different spatial resolutions of individual Sentinel-2 bands, i.e., ground sampling distances of 20 m and 10 m. As a result, this method is able to increase the perceived resolution of the 20 m channels and mesh all spectral bands. Experimental evaluation and ablation studies on large datasets show performance superior to the state-of-the-art and that the model is not bound by its capacity.
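Meshing bands of different ground sampling distances first requires bringing the 20 m bands onto the 10 m grid. A naive nearest-neighbor sketch in numpy (the paper instead learns this resampling; the function names and toy band values are illustrative assumptions):

```python
import numpy as np

def upsample2x_nearest(band):
    """Nearest-neighbor 2x upsampling: a naive way to bring a 20 m band
    onto the 10 m grid (the proposed network learns this mapping instead)."""
    return np.repeat(np.repeat(band, 2, axis=0), 2, axis=1)

def stack_sentinel2(bands_10m, bands_20m):
    """Resample all 20 m bands to the 10 m grid and stack every band
    into one (C, H, W) array, as a multi-band network input requires."""
    up = [upsample2x_nearest(b) for b in bands_20m]
    return np.stack(list(bands_10m) + up, axis=0)

b10 = [np.ones((4, 4)), np.zeros((4, 4))]        # two toy 10 m bands
b20 = [np.arange(4, dtype=float).reshape(2, 2)]  # one toy 20 m band
stacked = stack_sentinel2(b10, b20)
print(stacked.shape)  # (3, 4, 4)
```

The point of the learned approach is precisely that this fixed resampling is too crude: the network sharpens the upsampled 20 m channels using detail borrowed from the co-registered 10 m channels.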

