Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

Sensors ◽  
2018 ◽  
Vol 18 (2) ◽  
pp. 498 ◽  
Author(s):  
Hong Zhu ◽  
Xinming Tang ◽  
Junfeng Xie ◽  
Weidong Song ◽  
Fan Mo ◽  
...  
2021 ◽  
Vol 13 (4) ◽  
pp. 666
Author(s):  
Hai Huan ◽  
Pengcheng Li ◽  
Nan Zou ◽  
Chao Wang ◽  
Yaqin Xie ◽  
...  

Remote-sensing images are an important means of obtaining geographic information, and image super-resolution reconstruction is an effective way to improve their spatial resolution. Super-resolution reconstruction networks typically improve performance by increasing network depth; however, blindly deepening a network easily leads to vanishing or exploding gradients and makes training more difficult. This paper proposes a new pyramidal multi-scale residual network (PMSRN) that uses hierarchical residual-like connections and dilated convolution to form a multi-scale dilation residual block (MSDRB). The MSDRB strengthens the detection of context information and fuses hierarchical features through a hierarchical feature fusion structure. Finally, a complementary block of global and local features is added to the reconstruction stage to alleviate the loss of useful original information. Experimental results showed that, compared with a basic multi-scale residual network, the PMSRN increased the peak signal-to-noise ratio by up to 0.44 dB and raised the structural similarity to 0.9776.
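To make the block structure concrete, here is a minimal PyTorch sketch of what a multi-scale dilation residual block could look like, assuming Res2Net-style hierarchical splits; the channel width, number of splits, and dilation rates are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a multi-scale dilation residual block (MSDRB).
# Split count, dilation rates, and channel width are assumptions.
import torch
import torch.nn as nn

class MSDRB(nn.Module):
    def __init__(self, channels=64, splits=4):
        super().__init__()
        assert channels % splits == 0
        self.splits = splits
        width = channels // splits
        # One dilated 3x3 conv per split; growing dilation widens the
        # receptive field to capture context at several scales.
        self.convs = nn.ModuleList([
            nn.Conv2d(width, width, 3, padding=2 ** i, dilation=2 ** i)
            for i in range(splits)
        ])
        self.fuse = nn.Conv2d(channels, channels, 1)  # hierarchical feature fusion
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        chunks = torch.chunk(x, self.splits, dim=1)
        outs, prev = [], None
        for chunk, conv in zip(chunks, self.convs):
            # Hierarchical residual-like connection: each split also
            # receives the previous split's output (Res2Net-style).
            inp = chunk if prev is None else chunk + prev
            prev = self.relu(conv(inp))
            outs.append(prev)
        return x + self.fuse(torch.cat(outs, dim=1))  # local residual path
```

Stacking such blocks gives the pyramidal multi-scale behavior the abstract describes without deepening the network blindly, since each block already covers several receptive-field sizes.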


2019 ◽  
Vol 11 (13) ◽  
pp. 1588 ◽  
Author(s):  
Tao Lu ◽  
Jiaming Wang ◽  
Yanduo Zhang ◽  
Zhongyuan Wang ◽  
Junjun Jiang

Recently, the application of satellite remote sensing images has become increasingly popular, but the images observed by satellite sensors are frequently of low resolution (LR) and therefore cannot fully meet the requirements of object identification and analysis. To fully exploit the multi-scale characteristics of objects in remote sensing images, this paper presents a multi-scale residual neural network (MRNN). MRNN uses the multi-scale nature of satellite images to accurately reconstruct high-frequency information for super-resolution (SR) satellite imagery. Patches of different sizes are first extracted from LR satellite images to fit objects of different scales. Large-, middle-, and small-scale deep residual neural networks are designed to simulate receptive fields of different sizes, acquiring relatively global, contextual, and local information for prior representation. A fusion network then refines the information from the different scales. MRNN fuses the complementary high-frequency information from the differently scaled networks to reconstruct the desired high-resolution satellite image, in line with human visual experience ("look in multi-scale to see better"). Experimental results on the SpaceNet satellite image and NWPU-RESISC45 databases show that the proposed approach outperformed several state-of-the-art SR algorithms in terms of both objective and subjective image quality.
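The three-branch idea can be illustrated with a hedged PyTorch sketch; the branch depths, the 3/5/7 kernel sizes used to mimic small/middle/large receptive fields, and the global residual connection are assumptions chosen for illustration, not MRNN's published configuration.

```python
# Illustrative three-branch residual SR network in the spirit of MRNN.
import torch
import torch.nn as nn

def branch(kernel, depth, ch=64):
    # A small stack of convolutions; larger kernels see wider context.
    layers = []
    for _ in range(depth):
        layers += [nn.Conv2d(ch, ch, kernel, padding=kernel // 2),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class MultiScaleSR(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.small = branch(3, 4, ch)   # local detail
        self.middle = branch(5, 4, ch)  # contextual information
        self.large = branch(7, 4, ch)   # relatively global structure
        self.fusion = nn.Sequential(    # refine complementary information
            nn.Conv2d(3 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, lr_up):  # lr_up: bicubically upsampled LR image
        f = self.head(lr_up)
        fused = torch.cat([self.small(f), self.middle(f), self.large(f)], dim=1)
        return lr_up + self.fusion(fused)  # global residual learning
```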


2021 ◽  
Vol 10 (7) ◽  
pp. 488
Author(s):  
Peng Li ◽  
Dezheng Zhang ◽  
Aziguli Wulamu ◽  
Xin Liu ◽  
Peng Chen

A deep understanding of our visual world involves more than the isolated perception of a series of objects; the relationships between them also carry rich semantic information. This is especially true for satellite remote sensing images, whose span is so large that the objects they contain vary widely in size and form complex spatial compositions. Recognizing semantic relations therefore strengthens the understanding of remote sensing scenes. In this paper, we propose a novel multi-scale semantic fusion network (MSFN). In this framework, dilated convolution is introduced into a graph convolutional network (GCN) based on an attention mechanism to fuse and refine multi-scale semantic context, which is crucial to strengthening the cognitive ability of our model. Besides, based on the mapping between visual features and semantic embeddings, we design a sparse relationship extraction module to remove meaningless connections among entities and improve the efficiency of scene graph generation. Meanwhile, to further promote research on scene understanding in the remote sensing field, this paper also proposes a remote sensing scene graph dataset (RSSGD). We carry out extensive experiments, and the results show that our model significantly outperforms previous methods on scene graph generation. In addition, RSSGD effectively bridges the huge semantic gap between low-level perception and high-level cognition of remote sensing images.
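As an illustration of how attention can gate message passing among object nodes, here is a hedged PyTorch sketch; the `AttentiveGCNLayer` and `sparsify` names and the top-k sparsification rule are hypothetical stand-ins for the paper's attention-based GCN and sparse relationship extraction module, whose exact designs the abstract does not specify.

```python
# Toy attention-weighted graph convolution over object-node features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGCNLayer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (N, dim) object features; adj: (N, N) 0/1 candidate edges
        # (assumed to include self-loops so every softmax row is defined).
        scores = self.q(nodes) @ self.k(nodes).T / nodes.size(1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)  # attention refines message weights
        return nodes + attn @ self.v(nodes)

def sparsify(sim, top_k=5):
    # Stand-in for sparse relationship extraction: keep only each
    # entity's top-k most similar neighbours (self-similarity included).
    idx = sim.topk(top_k, dim=-1).indices
    adj = torch.zeros_like(sim)
    adj.scatter_(-1, idx, 1.0)
    return adj
```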


2021 ◽  
Vol 13 (12) ◽  
pp. 2333
Author(s):  
Lilu Zhu ◽  
Xiaolu Su ◽  
Yanfeng Hu ◽  
Xianqing Tai ◽  
Kun Fu

Extracting valuable information from remote sensing data and integrating those data efficiently are extremely important. The multi-source, heterogeneous nature of remote sensing data makes the relationships among datasets increasingly complex, so a processing mode based purely on data ontology can no longer meet requirements. On the other hand, the multi-dimensional features of remote sensing data make querying and analysis harder, especially for datasets with substantial noise. Data quality has therefore become the bottleneck of data value discovery, and a single batch query is not enough to support the optimal combination of global data resources. In this paper, we propose a spatio-temporal local association query algorithm for remote sensing data (STLAQ). First, we design a spatio-temporal data model and a bottom-up spatio-temporal correlation network. Then, we use partition-based clustering and spectral clustering to measure the correlation between spatio-temporal correlation networks. Finally, we construct a spatio-temporal index to provide joint query capabilities. We carry out local association query efficiency experiments to verify the feasibility of STLAQ on multi-scale datasets. The results show that STLAQ weakens the barriers between remote sensing data and effectively improves their application value.
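A toy illustration of joint spatio-temporal indexing follows; the grid-bucket scheme, cell sizes, and class name are assumptions for exposition only and do not reproduce STLAQ's correlation networks or clustering steps.

```python
# Simplified joint spatio-temporal index: scenes are bucketed by a
# coarse space/time grid so a query touches only nearby buckets.
from collections import defaultdict

class SpatioTemporalIndex:
    def __init__(self, cell_deg=1.0, slot_days=30):
        self.cell_deg, self.slot_days = cell_deg, slot_days
        self.buckets = defaultdict(list)

    def _key(self, lon, lat, day):
        return (int(lon // self.cell_deg), int(lat // self.cell_deg),
                int(day // self.slot_days))

    def insert(self, scene_id, lon, lat, day):
        self.buckets[self._key(lon, lat, day)].append((scene_id, lon, lat, day))

    def query(self, lon, lat, day, radius_cells=1, slots=1):
        # Scan the neighbourhood of buckets around the query point,
        # jointly over space and time.
        x, y, t = self._key(lon, lat, day)
        hits = []
        for dx in range(-radius_cells, radius_cells + 1):
            for dy in range(-radius_cells, radius_cells + 1):
                for dt in range(-slots, slots + 1):
                    hits.extend(self.buckets.get((x + dx, y + dy, t + dt), []))
        return hits

idx = SpatioTemporalIndex()
idx.insert("scene_001", lon=116.4, lat=39.9, day=120)
print(idx.query(lon=116.7, lat=39.5, day=130))  # finds scene_001
```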


2021 ◽  
Vol 13 (9) ◽  
pp. 1854
Author(s):  
Syed Muhammad Arsalan Bashir ◽  
Yi Wang

This paper deals with detecting small objects in remote sensing images from satellites or aerial vehicles by using image super-resolution for resolution enhancement together with a deep-learning-based detection method. It provides a rationale for super-resolving small objects by improving the current super-resolution (SR) framework with a cyclic generative adversarial network (GAN) and residual feature aggregation (RFA) to improve detection performance. The novelty of the method is threefold: first, the framework is independent of the final object detector, i.e., YOLOv3 could be replaced by Faster R-CNN or any other detector; second, a residual feature aggregation network is used in the generator, which significantly improves detection performance because the RFA network captures complex features; and third, the whole network is transformed into a cyclic GAN. The image super-resolution cyclic GAN with RFA and YOLO as the detection network is termed SRCGAN-RFA-YOLO, and its detection accuracy is compared with that of other methods. Rigorous experiments on both satellite and aerial images (the ISPRS Potsdam, VAID, and Draper Satellite Image Chronology datasets) show that detection performance increases when super-resolution methods are used for spatial resolution enhancement; for an IoU of 0.10, an AP of 0.7867 was achieved at a scale factor of 16.
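Residual feature aggregation is a known building block for SR generators: instead of only summing residuals along the trunk, the residual branches themselves are collected and fused. The PyTorch sketch below shows that core idea under assumed channel width and block count; it is not the paper's exact generator.

```python
# Hedged sketch of a residual feature aggregation (RFA) block.
import torch
import torch.nn as nn

class RFABlock(nn.Module):
    def __init__(self, ch=64, n_res=4):
        super().__init__()
        self.res_branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1),
            )
            for _ in range(n_res)
        ])
        self.aggregate = nn.Conv2d(n_res * ch, ch, 1)  # fuse all residuals

    def forward(self, x):
        feats, h = [], x
        for branch in self.res_branches:
            r = branch(h)      # local residual features
            feats.append(r)
            h = h + r          # standard residual update
        # Aggregation keeps every intermediate residual, so fine details
        # useful for small objects are not washed out along the trunk.
        return x + self.aggregate(torch.cat(feats, dim=1))
```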


2021 ◽  
Vol 13 (3) ◽  
pp. 433
Author(s):  
Junge Shen ◽  
Tong Zhang ◽  
Yichen Wang ◽  
Ruxin Wang ◽  
Qi Wang ◽  
...  

Remote sensing images contain complex backgrounds and multi-scale objects, which make scene classification a challenging task. Performance depends strongly on the capacity of the scene representation as well as the discriminability of the classifier. Although multiple models possess better properties than a single model in these respects, the fusion strategy for these models is key to maximizing the final accuracy. In this paper, we construct a novel dual-model architecture with a grouping-attention-fusion strategy to improve scene classification performance. Specifically, the model employs two different convolutional neural networks (CNNs) for feature extraction, and the grouping-attention-fusion strategy fuses the CNN features in a fine-grained, multi-scale manner, enhancing the resultant feature representation of the scene. Moreover, to address the issue of similar appearances between different scenes, we develop a loss function that encourages small intra-class diversities and large inter-class distances. Extensive experiments are conducted on four scene classification datasets: the UCM land-use dataset, the WHU-RS19 dataset, the AID dataset, and the OPTIMAL-31 dataset. The experimental results demonstrate the superiority of the proposed method in comparison with state-of-the-art approaches.
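The abstract does not give the loss formulation, so the following PyTorch sketch is only one plausible center-loss-style reading: learnable class centers pull features inward (small intra-class diversity) while a margin pushes centers apart (large inter-class distance). The margin value and the combination weight with cross-entropy are assumptions.

```python
# Hedged sketch of an intra-/inter-class loss for scene classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraInterLoss(nn.Module):
    def __init__(self, n_classes, dim, margin=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, dim))
        self.margin = margin

    def forward(self, feats, labels):
        # Intra-class term: pull each feature toward its class center.
        intra = (feats - self.centers[labels]).pow(2).sum(1).mean()
        # Inter-class term: push class centers at least `margin` apart.
        d = torch.cdist(self.centers, self.centers)
        off_diag = ~torch.eye(len(self.centers), dtype=torch.bool,
                              device=d.device)
        inter = F.relu(self.margin - d[off_diag]).mean()
        return intra + inter
```

In practice such a term would be added to the usual cross-entropy classification loss with a small weighting factor.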

