Target Detection Method for Low-Resolution Remote Sensing Image Based on ESRGAN and ReDet

Photonics ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 431
Author(s):  
Yuwu Wang ◽  
Guobing Sun ◽  
Shengwei Guo

With the widespread use of remote sensing images, low-resolution target detection in remote sensing images has become a hot research topic in computer vision. In this paper, we propose a Target Detection on Super-Resolution Reconstruction (TDoSR) method to address the low target recognition rates of low-resolution remote sensing images under foggy conditions. The TDoSR method uses the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) to perform defogging and super-resolution reconstruction of foggy low-resolution remote sensing images. In the target detection stage, the Rotation Equivariant Detector (ReDet) algorithm, which currently offers a high recognition rate, is used to identify and classify various types of targets. Extensive experiments on the remote sensing dataset DOTA-v1.5 suggest that the proposed method achieves good results in target detection on low-resolution foggy remote sensing images. The principal result is that the recognition rate of the TDoSR method increases by roughly 20% compared with detection on the original low-resolution foggy images.
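The two-stage structure described above (restoration first, detection on the restored image) can be sketched as plain function composition. This is an illustration only: a nearest-neighbour x4 upsampler stands in for ESRGAN and a trivial stub stands in for ReDet, and all names are assumptions, not the authors' code.

```python
# Sketch of the two-stage TDoSR pipeline: restoration, then detection.
# upscale_x4 stands in for ESRGAN and detect_stub for ReDet.

def upscale_x4(img):
    """Stand-in for ESRGAN: nearest-neighbour x4 upsampling of a 2D image."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(4)]
        out.extend([wide] * 4)
    return out

def detect_stub(img):
    """Stand-in for ReDet: returns the image size as a trivial 'detection' result."""
    return {"height": len(img), "width": len(img[0])}

def tdosr_pipeline(lr_foggy_img, restore=upscale_x4, detector=detect_stub):
    """Run restoration first, then detection on the restored image."""
    restored = restore(lr_foggy_img)
    return detector(restored)

result = tdosr_pipeline([[0.1, 0.9], [0.5, 0.3]])
```

The point of the staging is that the detector never sees the degraded input, only the restored image, so any restoration gain transfers directly to recognition.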

2021 ◽  
Vol 13 (16) ◽  
pp. 3167
Author(s):  
Lize Zhang ◽  
Wen Lu ◽  
Yuanfei Huang ◽  
Xiaopeng Sun ◽  
Hongyi Zhang

Mainstream image super-resolution (SR) methods are generally based on paired training samples. Because high-resolution (HR) remote sensing images are difficult to collect with limited imaging devices, most existing remote sensing SR methods down-sample the collected original images to generate auxiliary low-resolution (LR) images and form a paired pseudo HR-LR dataset for training. However, the distribution of the generated LR images is generally inconsistent with that of real images due to the limitations of remote sensing imaging devices. In this paper, we propose a perceptually unpaired super-resolution method by constructing a multi-stage aggregation network (MSAN), whose optimization is driven by consistency losses. The first stage preserves the content of the super-resolved results by constraining the content consistency between the down-scaled SR results and the low-quality LR inputs. The second stage minimizes a perceptual feature loss between the current result and the LR input to constrain perceptual-content consistency. The final stage employs a generative adversarial network (GAN) to add photo-realistic textures by constraining perceptual-distribution consistency. Numerous experiments on synthetic remote sensing datasets and real remote sensing images show that our method obtains more plausible results than other SR methods, both quantitatively and qualitatively. The PSNR of our network is 0.06 dB higher than that of the state-of-the-art method HAN on the UC Merced test set with complex degradation.
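The first-stage constraint above can be made concrete with a minimal sketch: the SR output is down-scaled back to the input size and compared against the LR input with an L1 loss. The 2x2 average-pooling choice and all function names are assumptions for illustration, not the authors' implementation.

```python
# Content-consistency sketch: down-scale the SR result and compare to the LR input.

def avg_pool_2x2(img):
    """Down-scale a 2D image by a factor of 2 with average pooling."""
    h, w = len(img), len(img[0])
    return [
        [(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

def l1_loss(a, b):
    """Mean absolute difference between two same-sized 2D images."""
    n = len(a) * len(a[0])
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def content_consistency_loss(sr_img, lr_img):
    """Constrain the down-scaled SR result to match the LR input."""
    return l1_loss(avg_pool_2x2(sr_img), lr_img)

# A perfect SR result down-scales exactly back to the LR input:
lr = [[0.5]]
sr = [[0.5, 0.5], [0.5, 0.5]]
loss = content_consistency_loss(sr, lr)
```

Because the comparison is against the real LR input rather than a synthetic pair, the constraint holds even without paired HR ground truth, which is the unpaired setting the abstract describes.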


2021 ◽  
Vol 13 (9) ◽  
pp. 1858
Author(s):  
Xubin Feng ◽  
Wuxia Zhang ◽  
Xiuqin Su ◽  
Zhengpu Xu

High spatial quality (HQ) optical remote sensing images are very useful for target detection, target recognition and image classification. Due to the accuracy limits of imaging equipment and the influence of the atmospheric environment, HQ images are difficult to acquire, while low spatial quality (LQ) remote sensing images are very easy to acquire. Hence, denoising and super-resolution (SR) reconstruction are effective, low-cost ways to improve the quality of remote sensing images. Most existing methods employ only denoising or only SR to obtain HQ images. However, due to the complex structure and large noise of remote sensing images, the quality obtained by either technique alone cannot meet actual needs. To address these problems, a method of reconstructing HQ remote sensing images based on a Generative Adversarial Network (GAN), named "Restoration Generative Adversarial Network with ResNet and DenseNet" (RRDGAN), is proposed, which acquires better-quality images by incorporating denoising and SR into a unified framework. The generative network fuses a Residual Neural Network (ResNet) and a Dense Convolutional Network (DenseNet) in order to address the denoising and SR problems at the same time. Then, total variation (TV) regularization is used to further enhance the edge details, and the idea of the Relativistic GAN is adopted to make the whole network converge better. Our RRDGAN is implemented in the wavelet transform (WT) domain, since different frequency parts can be handled separately there. The experimental results on three different remote sensing datasets show the feasibility of our proposed method in acquiring HQ remote sensing images.
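The total variation term mentioned above penalises differences between neighbouring pixels, which suppresses noise while keeping edges sharp. The sketch below uses the generic anisotropic TV definition, not necessarily the authors' exact formulation.

```python
# Anisotropic total variation: sum of absolute neighbour differences.

def tv_loss(img):
    """Sum of absolute horizontal and vertical neighbour differences of a 2D image."""
    h, w = len(img), len(img[0])
    horiz = sum(abs(img[i][j + 1] - img[i][j]) for i in range(h) for j in range(w - 1))
    vert = sum(abs(img[i + 1][j] - img[i][j]) for i in range(h - 1) for j in range(w))
    return horiz + vert

flat = [[1.0, 1.0], [1.0, 1.0]]   # constant image: zero variation
noisy = [[0.0, 1.0], [1.0, 0.0]]  # checkerboard: maximal variation
flat_tv, noisy_tv = tv_loss(flat), tv_loss(noisy)
```

Added to the training loss with a small weight, such a term pushes the generator away from noisy, high-frequency artefacts.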


2020 ◽  
Vol 12 (7) ◽  
pp. 1204
Author(s):  
Xinyu Dou ◽  
Chenyu Li ◽  
Qian Shi ◽  
Mengxi Liu

Hyperspectral remote sensing images (HSIs) have a higher spectral resolution than multispectral remote sensing images, enabling more effective analysis and processing of spectral data. However, rich spectral information usually comes at the expense of low spatial resolution owing to the physical limitations of sensors, which makes identifying and analyzing targets in HSIs difficult. In the super-resolution (SR) field, many methods have focused on restoring spatial information while ignoring the spectral aspect. To better restore spectral information in HSI SR, a novel SR method is proposed in this study. Firstly, we use three-dimensional (3D) convolution on top of the SRGAN (Super-Resolution Generative Adversarial Network) structure to exploit spatial features while preserving spectral properties during SR. Moreover, we use an attention mechanism to handle the multiple feature maps from the 3D convolution layers, and we enhance the output of our model by improving the generator's loss function. The experimental results indicate that the 3DASRGAN (3D Attention-based Super-Resolution Generative Adversarial Network) is both visually and quantitatively better than the comparison methods, which shows that the 3DASRGAN model can reconstruct high-resolution HSIs with high efficiency.
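The reason 3D convolution suits HSI SR, as described above, is that a single kernel slides over both the spatial axes and the spectral (band) axis, so neighbouring bands contribute to each output value and spectral structure is preserved. A naive "valid" 3D convolution in pure Python, with illustrative shapes only:

```python
# Naive 'valid' 3D convolution over a [bands][rows][cols] hyperspectral cube.

def conv3d_valid(volume, kernel):
    """volume: [bands][rows][cols]; kernel: [kb][kr][kc]; no padding."""
    B, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kb, kr, kc = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for b in range(B - kb + 1):
        plane = []
        for i in range(H - kr + 1):
            row = []
            for j in range(W - kc + 1):
                acc = 0.0
                for db in range(kb):
                    for di in range(kr):
                        for dj in range(kc):
                            acc += volume[b + db][i + di][j + dj] * kernel[db][di][dj]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# A 2x1x1 averaging kernel mixes adjacent spectral bands at each pixel:
cube = [[[1.0, 2.0]], [[3.0, 4.0]]]  # 2 bands, each 1x2 pixels
kernel = [[[0.5]], [[0.5]]]
mixed = conv3d_valid(cube, kernel)
```

A 2D convolution applied band-by-band could never perform this cross-band mixing, which is exactly the spectral information a 3D kernel retains.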


2021 ◽  
Author(s):  
Jiaoyue Li ◽  
Weifeng Liu ◽  
Kai Zhang ◽  
Baodi Liu

Remote sensing image super-resolution (SR) plays an essential role in many remote sensing applications. Recently, remote sensing image SR methods based on deep learning have shown remarkable performance. However, directly applying deep learning methods often fails to recover remote sensing images that contain a large number of complex objects or scenes. We therefore propose an edge-based dense connection generative adversarial network (SREDGAN), which minimizes the edge differences between the generated image and its corresponding ground truth. Experimental results on the NWPU-VHR-10 and UCAS-AOD datasets demonstrate that our method improves PSNR by 1.92 dB and SSIM by 0.045 compared with SRGAN.
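An edge-consistency loss of the kind described above can be sketched by extracting edge maps of the generated image and the ground truth and comparing them with an L1 loss. The Sobel operator and all function names below are assumptions for illustration, not the authors' code.

```python
# Edge loss sketch: Sobel edge maps of fake and real images, compared with L1.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv2d_valid(img, k):
    """Naive 'valid' 2D convolution."""
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    return [
        [sum(img[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
         for j in range(w - kw + 1)]
        for i in range(h - kh + 1)
    ]

def edge_map(img):
    """|Gx| + |Gy| gradient-magnitude approximation."""
    gx, gy = conv2d_valid(img, SOBEL_X), conv2d_valid(img, SOBEL_Y)
    return [[abs(x) + abs(y) for x, y in zip(rx, ry)] for rx, ry in zip(gx, gy)]

def edge_loss(fake, real):
    """Mean absolute difference between the two edge maps."""
    ef, er = edge_map(fake), edge_map(real)
    n = len(ef) * len(ef[0])
    return sum(abs(a - b) for rf, rr in zip(ef, er) for a, b in zip(rf, rr)) / n

# Identical images incur zero edge loss:
img = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
zero = edge_loss(img, img)
```

Penalising edge-map differences directly, rather than only pixel differences, is what steers the generator toward sharp object boundaries in cluttered scenes.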


Author(s):  
M. Cao ◽  
H. Ji ◽  
Z. Gao ◽  
T. Mei

Abstract. Vehicle detection in remote sensing images has attracted remarkable attention over the past years for its applications in traffic, security, military, and surveillance fields. Motivated by the stunning success of deep learning techniques in the object detection community, we utilize CNNs for the vehicle detection task in remote sensing images. Specifically, we take advantage of a deep residual network, multi-scale feature fusion, hard example mining and homography augmentation to realize vehicle detection, integrating most of the advanced techniques in the deep learning community. Furthermore, we simultaneously address the super-resolution (SR) and detection problems of low-resolution (LR) images in an end-to-end manner. In consideration of the absence of paired low-/high-resolution data, which are generally time-consuming and cumbersome to collect, we leverage a generative adversarial network (GAN) for unsupervised SR. The detection loss is back-propagated to the SR generator to boost detection performance. We conduct experiments on representative benchmark datasets and demonstrate that our model yields significant improvements over state-of-the-art methods in the deep learning and remote sensing areas.


2019 ◽  
Vol 11 (21) ◽  
pp. 2578 ◽  
Author(s):  
Wen Ma ◽  
Zongxu Pan ◽  
Feng Yuan ◽  
Bin Lei

Single image super-resolution (SISR) has been widely studied in recent years as a crucial technique for remote sensing applications. In this paper, a dense residual generative adversarial network (DRGAN)-based SISR method is proposed to improve the resolution of remote sensing images. Different from previous super-resolution (SR) approaches based on generative adversarial networks (GANs), the novelty of our method mainly lies in the following factors. First, we improved performance through the network architecture: we designed a dense residual network as the generative network in the GAN, which makes full use of the hierarchical features of low-resolution (LR) images, and we introduced a contiguous memory mechanism into the network to take advantage of the dense residual blocks. Second, we modified the loss function and altered the model of the discriminative network according to the Wasserstein GAN with a gradient penalty (WGAN-GP) for stable training. Extensive experiments on the NWPU-RESISC45 dataset demonstrate that the proposed method outperforms state-of-the-art methods in terms of both objective evaluation and subjective perspective.
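The WGAN-GP stabilization mentioned above pushes the norm of the critic's gradient, evaluated at points interpolated between real and fake samples, towards 1. The sketch below uses a linear critic f(x) = w . x so the gradient (just w) is available analytically; real frameworks compute it by automatic differentiation, and all values here are illustrative.

```python
# WGAN-GP gradient penalty sketch with an analytically differentiable linear critic.
import random

def gradient_penalty(w, real, fake, n_samples=8, lam=10.0):
    """lam * E[(||grad f(x_hat)||_2 - 1)^2] for a linear critic with weights w."""
    # For f(x) = w . x the gradient is w everywhere, independent of x_hat,
    # so interpolation does not change the norm; it is kept here only to
    # mirror the WGAN-GP recipe of sampling along real-fake lines.
    grad_norm = sum(wi * wi for wi in w) ** 0.5
    total = 0.0
    for _ in range(n_samples):
        eps = random.random()
        x_hat = [eps * r + (1 - eps) * f for r, f in zip(real, fake)]  # unused by a linear critic
        total += (grad_norm - 1.0) ** 2
    return lam * total / n_samples

# A critic whose gradient norm is already 1 incurs (essentially) zero penalty:
penalty = gradient_penalty([0.6, 0.8], real=[1.0, 0.0], fake=[0.0, 1.0])
```

Keeping the critic's gradient norm near 1 enforces the 1-Lipschitz constraint softly, which is what makes WGAN-GP training markedly more stable than weight clipping.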

