A Multitemporal Remote Sensing Image Registration Method Based on Water Bodies for the Lake-Rich Region

Author(s): Zhanfeng Shen, Junli Li, Yongwei Sheng, Timothy A. Warner, Lifang Zhao


2020, Vol. 12 (18), pp. 2937
Author(s): Song Cui, Miaozhong Xu, Ailong Ma, Yanfei Zhong

The nonlinear radiation distortions (NRD) among multimodal remote sensing images pose enormous challenges to image registration. Traditional feature-based registration methods commonly use image intensity or gradient information to detect and describe features, which makes them sensitive to NRD. Moreover, the nonlinear mapping between the corresponding features of multimodal images often causes the feature matching, and hence the image registration, to fail. In this paper, a modality-free registration method (SRIFT) is proposed for multimodal remote sensing images that is invariant to scale, radiation, and rotation. In SRIFT, a nonlinear diffusion scale (NDS) space is first established to construct a multi-scale space. A local orientation and scale phase congruency (LOSPC) algorithm is then used to map the features of images with NRD into a one-to-one correspondence, so that sufficiently stable key points can be obtained. In the feature description stage, a rotation-invariant coordinate (RIC) system is adopted to build the descriptor, without requiring estimation of a main direction. The experiments undertaken in this study included one set of simulated data experiments and nine groups of experiments with different types of real multimodal remote sensing images with rotation and scale differences (including synthetic aperture radar (SAR)/optical, digital surface model (DSM)/optical, light detection and ranging (LiDAR) intensity/optical, near-infrared (NIR)/optical, short-wave infrared (SWIR)/optical, classification/optical, and map/optical image pairs), to test the proposed algorithm from both quantitative and qualitative aspects. The experimental results showed that the proposed method is highly robust to NRD, is invariant to scale, radiation, and rotation, and achieves a registration precision better than that of state-of-the-art methods.
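
As a rough illustration of the nonlinear diffusion scale (NDS) idea mentioned above, the following Python sketch builds a small scale space with Perona-Malik diffusion, which smooths homogeneous regions while preserving edges. This is not the SRIFT implementation; the function names, the conductance choice, and the parameters (kappa, dt, iteration levels) are illustrative assumptions.

```python
import numpy as np

def perona_malik_step(u, kappa=0.1, dt=0.15):
    """One explicit Perona-Malik diffusion step (illustrative, not SRIFT itself)."""
    # finite differences toward the four neighbours
    dN = np.roll(u, -1, axis=0) - u
    dS = np.roll(u, 1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    # edge-stopping conductance: close to 1 in flat areas, small across strong gradients
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)

def nonlinear_scale_space(img, levels=(5, 10, 20, 40)):
    """Return progressively diffused copies of the image as a crude scale space."""
    u = img.astype(np.float64)
    space, done = [], 0
    for target in levels:
        for _ in range(target - done):
            u = perona_malik_step(u)
        done = target
        space.append(u.copy())
    return space
```

Key points would then be detected at each level of such a stack; the LOSPC phase congruency step and the rotation-invariant coordinate descriptor described in the abstract are not reproduced here.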


Author(s): Kun Yang, Anning Pan, Yang Yang, Su Zhang, Sim Heng Ong, ...

Remote sensing image registration plays an important role in military and civilian fields, such as natural disaster damage assessment, military damage assessment, and ground target identification. However, due to ground relief variations and imaging viewpoint changes, non-rigid geometric distortion occurs between remote sensing images acquired from different viewpoints, which further increases the difficulty of remote sensing image registration. To address this problem, we propose a multi-viewpoint remote sensing image registration method with the following contributions. (i) A finite mixture model based on multiple features is constructed to handle different types of image features. (ii) Three features are combined and substituted into the mixture model so that they complement one another: the Euclidean distance and shape context measure the similarity of the geometric structure, while the SIFT (scale-invariant feature transform) distance, which carries the intensity information, measures the scale-space extrema. (iii) To prevent an ill-posed problem, a geometric constraint term is introduced into the L2E-based energy function to better constrain the non-rigid transformation. We evaluated the performance of the proposed method on three series of remote sensing images obtained from an unmanned aerial vehicle (UAV) and Google Earth, and compared it with five state-of-the-art methods; our method achieves the best alignment in most cases.
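
To make the idea of combining geometric and descriptor cues concrete, the sketch below blends a descriptor distance with a Euclidean distance between keypoint coordinates into a single matching cost and keeps mutual nearest neighbours. This is only a simplified stand-in, not the paper's finite mixture model or L2E energy; the weights w_desc and w_geo, the normalization, and the hard mutual-nearest-neighbour rule are illustrative assumptions.

```python
import numpy as np

def combined_cost(desc_a, desc_b, pts_a, pts_b, w_desc=0.6, w_geo=0.4):
    """Blend descriptor distance with geometric distance (illustrative weights)."""
    # pairwise descriptor (e.g. SIFT-like) distances, shape (N, M)
    d_desc = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    # pairwise Euclidean distances between keypoint coordinates
    d_geo = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    # normalise each term to [0, 1] so the weights are comparable
    d_desc /= d_desc.max() + 1e-12
    d_geo /= d_geo.max() + 1e-12
    return w_desc * d_desc + w_geo * d_geo

def mutual_nearest_matches(cost):
    """Keep only pairs (i, j) that are each other's nearest neighbour."""
    fwd = cost.argmin(axis=1)
    bwd = cost.argmin(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

In the paper these cues instead feed a mixture model whose L2E-based energy, with the geometric constraint term, estimates the non-rigid transformation directly.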


2019, Vol. 85 (10), pp. 725-736
Author(s): Ming Hao, Jian Jin, Mengchao Zhou, Yi Tian, Wenzhong Shi

Image registration is an indispensable component of remote sensing applications such as disaster monitoring, change detection, and classification. Grayscale differences and geometric distortions often occur among multisource images due to their different imaging mechanisms, making it difficult to detect feature points and match corresponding points. This article proposes a scene shape similarity feature (SSSF) descriptor based on scene shape features and the shape context algorithm. A new similarity measure called SSSFncc is then defined by computing the normalized correlation coefficient of the SSSF descriptors between multisource remote sensing images. Furthermore, the tie points between the reference and sensed images are extracted via a template matching strategy. A global consistency check is then used to remove mismatched tie points. Finally, a piecewise linear transform model is selected to rectify the remote sensing image. The proposed SSSFncc aims to capture the scene shape similarity between multisource images. Its accuracy is evaluated using five pairs of experimental images from optical, synthetic aperture radar, and map data. Registration results demonstrate that the SSSFncc similarity measure is robust to complex nonlinear grayscale differences among multisource remote sensing images, and the proposed method achieves more reliable registration outcomes than other popular methods.
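
The core of the SSSFncc measure is a normalized correlation coefficient computed over descriptor windows during template matching. The sketch below shows generic NCC template matching over a dense descriptor (or intensity) map; the brute-force search and window handling are illustrative assumptions, and the SSSF descriptor itself is not reproduced.

```python
import numpy as np

def ncc(a, b):
    """Normalised correlation coefficient between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_template(template, search_map):
    """Brute-force NCC search: return the offset with the highest score."""
    th, tw = template.shape
    best_score, best_offset = -1.0, (0, 0)
    for y in range(search_map.shape[0] - th + 1):
        for x in range(search_map.shape[1] - tw + 1):
            score = ncc(template, search_map[y:y + th, x:x + tw])
            if score > best_score:
                best_score, best_offset = score, (y, x)
    return best_offset, best_score
```

Tie points found this way would then pass through the global consistency check and drive the piecewise linear transform described in the abstract.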

