Improved WαSH Feature Matching Based on 2D-DWT for Stereo Remote Sensing Images

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3494 ◽  
Author(s):  
Mei Yu ◽  
Kazhong Deng ◽  
Huachao Yang ◽  
Changbiao Qin

Image matching remains an open problem because of the geometric and radiometric distortions present in stereo remote sensing images. Weighted α-shape (WαSH) local invariant features are tolerant to image rotation, scale change, affine deformation, illumination change, and blurring. However, because the number of WαSH features is small, it is difficult to obtain enough matches to estimate a satisfactory homography or fundamental matrix. In addition, the WαSH detector is extremely sensitive to image noise because it is built on sampled edges. To address these shortcomings, this paper improves the WαSH feature matching method using the 2D discrete wavelet transform (2D-DWT). The method first applies the 2D-DWT to the image and then detects WαSH features on the transformed sub-images. Depending on how the WαSH descriptors are constructed and on the character of the sub-images, three matching methods are distinguished: wavelet transform WαSH features (WWF), improved wavelet transform WαSH features (IWWF), and layered IWWF (LIWWF). Experimental results on a dataset containing affine distortion, scale distortion, illumination change, and noisy images showed that the proposed methods produced more matches and better stability than WαSH. On remote sensing images with mild affine distortion and slight noise, the proposed methods achieved correct matching rates greater than 90%. For images containing severe distortion, KAZE obtained a 35.71% correct matching rate, which is insufficient for computing the homography matrix, while IWWF achieved 71.42%. IWWF was the only method to achieve a correct matching rate of at least 50% on all four test stereo remote sensing image pairs, and it was the most stable among MSER, DWT-MSER, WαSH, DWT-WαSH, KAZE, WWF, and LIWWF.
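As a rough illustration of the detect-on-sub-band idea, the sketch below applies a one-level 2D-DWT and runs a feature detector on each sub-image. It assumes Python with pywt and OpenCV; since no public WαSH implementation is readily available, SIFT stands in for the WαSH detector, so this shows only the shape of the pipeline, not the authors' exact method.

```python
import cv2
import numpy as np
import pywt

def detect_on_dwt_subbands(image_gray):
    """One-level 2D-DWT, then feature detection on each sub-image."""
    # pywt returns the approximation and (horizontal, vertical, diagonal) details.
    cA, (cH, cV, cD) = pywt.dwt2(image_gray.astype(np.float32), 'haar')
    detector = cv2.SIFT_create()  # stand-in for the WaSH detector (assumption)
    features = {}
    for name, band in (('A', cA), ('H', cH), ('V', cV), ('D', cD)):
        # Rescale each sub-band to 8-bit before detection.
        band_u8 = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        features[name] = detector.detectAndCompute(band_u8, None)
    return features

# Synthetic tile as a placeholder for a stereo remote sensing image.
img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
print({k: len(kps) for k, (kps, _) in detect_on_dwt_subbands(img).items()})
```

Matching would then proceed per sub-band (e.g., nearest-neighbor descriptor matching with a ratio test); the WWF, IWWF, and LIWWF variants differ in how descriptors from the sub-images are constructed and combined.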

2021 ◽  
Vol 13 (24) ◽  
pp. 4971 ◽
Author(s):  
Congcong Wang ◽  
Wenbin Sun ◽  
Deqin Fan ◽  
Xiaoding Liu ◽  
Zhi Zhang

The wide range of object scales and the complex textures of high-resolution remote sensing images have made deep learning the mainstream approach to change detection. However, existing deep learning methods suffer from spatial information loss and insufficient feature representation, which degrade small-object detection and boundary localization in high-resolution remote sensing image change detection. To address these problems, a network architecture based on the 2-dimensional discrete wavelet transform and adaptive weighted feature fusion is proposed. The network takes a Siamese network and Nested U-Net as its backbone; the 2-dimensional discrete wavelet transform replaces the pooling layers, and the inverse transform replaces upsampling for image reconstruction, reducing the loss of spatial information and fully retaining the original image information. In this way, the network can accurately detect changed objects of different scales and reconstruct change maps with clear boundaries. Furthermore, stage-specific feature fusion methods are proposed to fully integrate multi-scale and multi-level features and improve their joint representation, achieving a more refined change detection result while reducing pseudo-changes. To verify the effectiveness and novelty of the proposed method, it is compared with seven state-of-the-art methods on the Lebedev and SenseTime datasets through quantitative, qualitative, and efficiency analyses, and the proposed modules are validated by an ablation study. The quantitative and efficiency analyses show that, while remaining computationally efficient, our method improves recall without sacrificing precision, raising overall detection performance: it shows average improvements of 37.9% and 12.35% in recall, and 34.76% and 11.88% in F1, on the Lebedev and SenseTime datasets, respectively, compared to the other methods. The qualitative analysis shows that our method outperforms the others in small-object detection and boundary localization and produces more refined change maps.
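The core architectural substitution, pooling replaced by a DWT and upsampling by its inverse, can be sketched in a few lines of PyTorch. The Haar filters and module names below are illustrative assumptions, not the authors' implementation; the point is that the round trip is lossless, so no spatial information is discarded at downsampling.

```python
# Sketch: 2D Haar DWT as a pooling substitute and its exact inverse as
# an upsampling substitute. Names and filters are assumptions for
# illustration, not the paper's exact modules.
import torch
import torch.nn as nn

class HaarDWT(nn.Module):
    """Split a feature map into LL/LH/HL/HH sub-bands at half resolution."""
    def forward(self, x):
        a = x[:, :, 0::2, 0::2]  # even rows, even cols
        b = x[:, :, 1::2, 0::2]  # odd rows,  even cols
        c = x[:, :, 0::2, 1::2]  # even rows, odd cols
        d = x[:, :, 1::2, 1::2]  # odd rows,  odd cols
        ll = (a + b + c + d) / 2
        lh = (-a - b + c + d) / 2
        hl = (-a + b - c + d) / 2
        hh = (a - b - c + d) / 2
        return ll, lh, hl, hh

class HaarIDWT(nn.Module):
    """Invert HaarDWT, reconstructing the full-resolution feature map."""
    def forward(self, ll, lh, hl, hh):
        a = (ll - lh - hl + hh) / 2
        b = (ll - lh + hl - hh) / 2
        c = (ll + lh - hl - hh) / 2
        d = (ll + lh + hl + hh) / 2
        n, ch, h, w = ll.shape
        out = ll.new_zeros(n, ch, h * 2, w * 2)
        out[:, :, 0::2, 0::2] = a
        out[:, :, 1::2, 0::2] = b
        out[:, :, 0::2, 1::2] = c
        out[:, :, 1::2, 1::2] = d
        return out

x = torch.randn(1, 64, 128, 128)
bands = HaarDWT()(x)
assert torch.allclose(HaarIDWT()(*bands), x, atol=1e-5)  # lossless round trip
```

In an encoder-decoder like the Nested U-Net described here, the detail sub-bands produced at each downsampling step would be carried along (e.g., via skip connections) so the inverse transform can reconstruct sharp boundaries on the way back up.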


2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
K. Parvathi ◽  
B. S. Prakasa Rao ◽  
M. Mariya Das ◽  
T. V. Rao

The watershed transform is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation are the key problems of the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is well suited to detecting local intensity variation, so the wavelet transform is used to analyze the image: it is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. A gradient image with selected regional minima is estimated by grey-scale morphology from the approximation image at a suitable resolution, and the watershed transform is then applied to this gradient image to avoid over-segmentation. The segmented image is projected back to full resolution using the inverse wavelet transform. Because the watershed is applied to a small, subsampled image, the computational cost is low. We have applied this approach to remote sensing images; the algorithm was implemented in MATLAB, and experimental results demonstrate its effectiveness.
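A compact sketch of the described pipeline, assuming Python with pywt, scikit-image, and SciPy in place of the authors' MATLAB code; the wavelet choice ('db2'), footprint size, and minima depth h are illustrative parameters only.

```python
import numpy as np
import pywt
from scipy import ndimage as ndi
from skimage import morphology, segmentation

def wavelet_watershed(image_gray, level=2, h=10):
    # Multi-level 2D-DWT; segment only the coarse approximation image.
    approx = pywt.wavedec2(image_gray.astype(float), 'db2', level=level)[0]
    approx_u8 = np.clip(approx * 255 / max(approx.max(), 1), 0, 255).astype(np.uint8)
    # Morphological gradient (dilation minus erosion) of the approximation.
    footprint = morphology.disk(2)
    gradient = (morphology.dilation(approx_u8, footprint)
                - morphology.erosion(approx_u8, footprint))
    # Keep only regional minima deeper than h to curb over-segmentation.
    markers, _ = ndi.label(morphology.h_minima(gradient, h))
    labels = segmentation.watershed(gradient, markers)
    # Nearest-neighbor zoom projects the coarse labels back to full size,
    # standing in for the inverse-wavelet projection described above.
    return ndi.zoom(labels, image_gray.shape[0] / labels.shape[0], order=0)

img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
labels = wavelet_watershed(img)
```

Because the watershed runs on the approximation image, whose pixel count shrinks roughly fourfold per decomposition level, this matches the abstract's claim of reduced computational time.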


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3802 ◽  
Author(s):  
Ahmed F. Fadhil ◽  
Raghuveer Kanneganti ◽  
Lalit Gupta ◽  
Henry Eberle ◽  
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands the fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of the surrounding terrain, is introduced based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations because of signal misalignment; we address this with a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, different aspects of the EVS and SVS images can be emphasized by choosing different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather, and the algorithm provides a basis for rule selection in other signal fusion applications.
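For intuition, the sketch below implements one classic DWT fusion rule: average the approximation sub-bands and take the per-coefficient maximum magnitude in the detail sub-bands. This is an illustrative assumption, not one of the paper's four rules; it assumes pywt and pre-registered, equally sized grey-scale inputs.

```python
import numpy as np
import pywt

def fuse_dwt(evs, svs, wavelet='db2', level=3):
    """Fuse two registered, same-size grey-scale images in the DWT domain."""
    ce = pywt.wavedec2(evs.astype(float), wavelet, level=level)
    cs = pywt.wavedec2(svs.astype(float), wavelet, level=level)
    fused = [(ce[0] + cs[0]) / 2]  # average the approximation sub-band
    for (eH, eV, eD), (sH, sV, sD) in zip(ce[1:], cs[1:]):
        # Per-coefficient max-abs selection keeps the stronger edge response.
        fused.append(tuple(np.where(np.abs(e) >= np.abs(s), e, s)
                           for e, s in ((eH, sH), (eV, sV), (eD, sD))))
    return pywt.waverec2(fused, wavelet)
```

Varying the per-sub-band rule (for instance, favoring EVS details in low light and SVS structure for terrain shape) is exactly the kind of choice the four evaluated fusion rules would expose.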

