Remote Sensing Image Retrieval via Symmetric Normal Inverse Gaussian Modeling of Nonsubsampled Shearlet Transform Coefficients

Author(s): Hilly Gohain Baruah, Vijay Kumar Nath, Deepika Hazarika
Sensors, 2021, Vol. 21 (5), pp. 1756
Author(s): Liangliang Li, Hongbing Ma

The rapid development of remote sensing and space technology provides multisource remote sensing image data for earth observation of the same area. The information these images provide is often complementary, yet fusing multisource images remains challenging. This paper proposes a novel multisource remote sensing image fusion algorithm that integrates the contrast saliency map (CSM) and the sum-modified-Laplacian (SML) in the nonsubsampled shearlet transform (NSST) domain. The NSST decomposes the source images into low-frequency and high-frequency sub-bands: the low-frequency sub-bands capture the contrast and brightness of the source images, while the high-frequency sub-bands capture their texture and details. Accordingly, the contrast saliency map and SML fusion rules are applied to the corresponding sub-bands. Finally, the inverse NSST reconstructs the fused image. Experimental results demonstrate that the proposed multisource remote sensing image fusion technique performs well in terms of contrast enhancement and detail preservation.
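The SML fusion rule mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration only: the window size, padding mode, and tie-breaking are assumptions for demonstration, not the paper's exact settings, and the CSM low-frequency rule and the NSST decomposition itself are omitted.

```python
import numpy as np

def modified_laplacian(img):
    """Absolute modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img, 1, mode="edge")
    return (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))

def sml(img, win=3):
    """Sum-modified-Laplacian: the modified Laplacian summed over a local window."""
    ml = modified_laplacian(img)
    p = np.pad(ml, win // 2, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(win):           # accumulate shifted copies = box-window sum
        for dx in range(win):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_highpass(band_a, band_b):
    """High-frequency rule: keep, per pixel, the coefficient with the larger local SML."""
    return np.where(sml(band_a) >= sml(band_b), band_a, band_b)
```

In use, each pair of corresponding high-frequency NSST sub-bands would be passed through `fuse_highpass`, so the sub-band with more local texture activity wins at each coefficient.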


2021, Vol. 13 (5), pp. 869
Author(s): Zheng Zhuo, Zhong Zhou

In recent years, the amount of remote sensing imagery has grown exponentially. The ability to quickly and effectively find the required images in massive remote sensing archives is key to organizing, managing, and sharing remote sensing image information. This paper proposes a high-resolution remote sensing image retrieval method based on a Gabor-CA-ResNet and a split-based deep feature transform network. The main contributions are twofold. (1) To handle the complex textures, diverse scales, and special viewing angles of remote sensing images, a Gabor-CA-ResNet network is proposed that takes ResNet as the backbone, uses Gabor filters to represent the spatial-frequency structure of the images, and applies a channel attention (CA) mechanism to obtain more representative and discriminative deep features. (2) A split-based deep feature transform network is designed to divide the features extracted by the Gabor-CA-ResNet network into several segments and transform them separately, significantly reducing the dimensionality and storage space of the deep features. Experimental results on the UCM, WHU-RS, RSSCN7, and AID datasets show that, compared with state-of-the-art methods, our method achieves competitive performance, especially for remote sensing images with rare targets and complex textures.
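The split-based idea in contribution (2) — divide a long deep feature into segments and compress each one independently — can be sketched as per-segment PCA. This is a hedged approximation: the paper describes a learned transform network, whereas the sketch below substitutes an SVD-based linear projection per segment purely to illustrate the split-then-reduce structure; the segment count and output dimension are arbitrary.

```python
import numpy as np

def fit_split_pca(feats, n_segments=4, out_dim=8):
    """Fit one PCA per contiguous feature segment. feats: (n_samples, d)."""
    segments = np.array_split(np.arange(feats.shape[1]), n_segments)
    models = []
    for idx in segments:
        x = feats[:, idx]
        mean = x.mean(axis=0)
        # principal axes of the centered segment via SVD
        _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
        models.append((idx, mean, vt[:out_dim].T))
    return models

def transform_split_pca(feats, models):
    """Project each segment onto its own axes and concatenate the compressed parts."""
    return np.hstack([(feats[:, idx] - mean) @ w for idx, mean, w in models])
```

A 64-dimensional feature split into 4 segments of 16, each reduced to 8 dimensions, yields a 32-dimensional code — halving storage while each projection only ever sees a small sub-vector.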


2018, Vol. 10 (8), pp. 1243
Author(s): Xu Tang, Xiangrong Zhang, Fang Liu, Licheng Jiao

Due to the specific characteristics and complicated contents of remote sensing (RS) images, remote sensing image retrieval (RSIR) remains an open and difficult research topic in the RS community. RSIR has two basic building blocks: feature learning and similarity matching. In this paper, we focus on developing an effective feature learning method for RSIR. With the help of deep learning, the proposed feature learning method is designed under the bag-of-words (BOW) paradigm; we therefore name the obtained feature deep BOW (DBOW). The learning process consists of two parts: image descriptor learning and feature construction. First, to explore the complex contents within an RS image, we extract the image descriptor at the image-patch level rather than over the whole image. In addition, instead of describing the patches with handcrafted features, we propose a deep convolutional auto-encoder (DCAE) model to learn a discriminative descriptor for the RS image. Second, the k-means algorithm generates a codebook from the obtained deep descriptors, and the final histogram-based DBOW features are acquired by counting the frequency of each code word. Once the DBOW features are extracted, the similarities between RS images are measured with the L1-norm distance, and the retrieval results are obtained by ranking those similarities. Encouraging experimental results on four public RS image archives demonstrate that our DBOW feature is effective for the RSIR task. Compared with existing RS image features, DBOW achieves improved performance on RSIR.
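The back half of the DBOW pipeline — assign patch descriptors to codebook words, build a histogram, rank by L1 distance — can be sketched as follows. This assumes the codebook has already been produced (by k-means, as the abstract states) and that patch descriptors come from the DCAE; both are represented here by plain arrays, and the L1 normalization of the histogram is an illustrative choice.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each patch descriptor to its nearest codeword and count frequencies."""
    # squared Euclidean distance from every descriptor to every codeword: (n_desc, k)
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)   # L1-normalize the word-frequency histogram

def retrieve(query_hist, archive_hists):
    """Rank archive images by L1-norm distance to the query's DBOW histogram."""
    dist = np.abs(archive_hists - query_hist).sum(axis=1)
    return np.argsort(dist)              # indices of archive images, nearest first
```

Given a query image's histogram, `retrieve` returns archive indices ordered by similarity, which is exactly the "similarity order" the abstract describes.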

