Phase Correlation based Local Illumination-invariant Method for Multi-Temporal Remote Sensing Image Matching

Author(s):  
X. Wan ◽  
J. Liu ◽  
H. Yan

This paper addresses image matching under significantly different illumination conditions, especially changes in illumination angle, without prior knowledge of the lighting conditions. We investigated the impact of illumination on the Phase Correlation (PC) matrix through mathematical derivation, from which we decomposed the PC matrix into the product of an illumination impact matrix and a translation matrix. This proves the robustness to illumination variation of the widely used Absolute Dirichlet Curve-fitting (AD-CF) algorithm for pixel-wise disparity estimation. Furthermore, an improved PC matching algorithm, Absolute Dirichlet SVD (AD-SVD), is proposed to achieve illumination-invariant image alignment. Experiments on matching DEM-simulated terrain shading images under very different illumination angles demonstrated that AD-SVD achieved 1/20-pixel accuracy for image alignment and is almost entirely invariant to daily and seasonal variation in solar position. The AD-CF algorithm was tested for generating disparity maps from stereo pairs with different illumination angles; the results showed high fidelity to the original DEM, with a Normalised Correlation Coefficient (NCC) of 0.96 between the two.
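
For readers unfamiliar with SVD-flavoured phase correlation, the sketch below illustrates the generic idea of estimating a translation from the rank-1 structure of the normalized cross-power spectrum. It is not the authors' AD-SVD algorithm, only a minimal NumPy illustration assuming a clean circular shift between same-sized images.

```python
# A minimal sketch of translation estimation from the rank-1 structure of the
# normalized cross-power spectrum (an SVD/subspace flavour of phase correlation).
# This is NOT the authors' AD-SVD algorithm, only a generic illustration
# assuming a clean circular shift between same-sized images.
import numpy as np

def phase_correlation_svd(img_a, img_b):
    """Estimate the (row, col) shift that maps img_a onto img_b."""
    F_a, F_b = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = F_a * np.conj(F_b)
    Q = cross / (np.abs(cross) + 1e-12)   # unit magnitude, phase encodes the shift

    # For a pure translation Q separates into an outer product of two 1-D phase
    # ramps; the dominant singular vectors recover those ramps.
    U, s, Vh = np.linalg.svd(Q)
    u = U[:, 0]            # row-direction phase ramp
    v = Vh[0, :].conj()    # column-direction phase ramp

    def ramp_to_shift(vec):
        # Unwrap the phase, fit a line, and convert its slope to a pixel shift.
        phase = np.unwrap(np.angle(vec))
        n = len(vec)
        slope = np.polyfit(np.arange(n), phase, 1)[0]
        return slope * n / (2.0 * np.pi)

    return ramp_to_shift(u), ramp_to_shift(v)

# Quick self-check with a synthetic integer shift (hypothetical data).
rng = np.random.default_rng(0)
base = rng.random((128, 128))
shifted = np.roll(base, shift=(3, -5), axis=(0, 1))
print(phase_correlation_svd(base, shifted))   # approximately (3.0, -5.0)
```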

Author(s):  
X. Wan ◽  
J. Liu ◽  
M. Qin ◽  
S. Y. Li

Multi-temporal Earth Observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation in topographic shading and shadows caused by the change in sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to variation in solar illumination for image matching. The IICD method is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching, and ii) change quantization, which combines change-type identification, motion estimation and precise appearance-change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus that the proposed IICD method can effectively detect and precisely segment large-scale changes such as landslides, as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
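
As a rough illustration of how tile-wise phase correlation can feed a change-saliency map, the sketch below flags tiles whose correlation peak is weak (poor match between the two dates). The tile size and threshold are arbitrary assumptions, and this is not the paper's IICD pipeline.

```python
# Tile-wise phase correlation used to build a crude change-saliency map: tiles
# whose correlation peak is weak are flagged as candidate changes. Illustrative
# only; tile size and threshold are assumptions, not the paper's parameters.
import numpy as np

def tile_phase_corr(a, b):
    """Return (peak_value, dy, dx) of the phase-correlation surface of two tiles."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    q = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(q), q.shape)
    # Map peak indices to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return q.max(), dy, dx

def saliency_map(img_t1, img_t2, tile=32, peak_thresh=0.05):
    """Binary map marking tiles where the two acquisitions fail to correlate."""
    H, W = img_t1.shape
    sal = np.zeros((H // tile, W // tile), dtype=bool)
    for i in range(H // tile):
        for j in range(W // tile):
            ys = slice(i * tile, (i + 1) * tile)
            xs = slice(j * tile, (j + 1) * tile)
            peak, _, _ = tile_phase_corr(img_t1[ys, xs], img_t2[ys, xs])
            sal[i, j] = peak < peak_thresh   # weak peak -> likely changed content
    return sal
```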


2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Ning Ma ◽  
Peng-fei Sun ◽  
Yu-bo Men ◽  
Chao-guang Men ◽  
Xiang Li

In this paper, an accurate and efficient image matching method based on phase correlation is proposed to estimate disparity with subpixel precision, for use in the stereovision of narrow-baseline remotely sensed images. A multistep strategy is adopted in our technical framework; disparity estimation is thus divided into two steps: integer-pixel pre-matching and subpixel matching. First, the integer-pixel disparity is estimated by employing a cross-based local matching method. The correspondence between points is then established under the guidance of the integer-pixel disparity, and subimages are extracted, centred on the corresponding points. Finally, the subpixel disparity is obtained by matching the subimages with a novel variant of the phase correlation approach. The experimental results show that the proposed method can match different kinds of large-sized narrow-baseline remotely sensed images and estimate disparity with subpixel precision automatically.
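
The two-step idea described above might be sketched as follows, with simple SAD block matching standing in for the paper's cross-based local method and a plain phase-correlation refinement standing in for its novel variant; window sizes, search range and border handling are illustrative assumptions.

```python
# Simplified two-step disparity sketch: (1) integer-pixel pre-matching by basic
# SAD block matching, (2) subpixel refinement by phase correlation on patches
# centred at the integer match. Assumes the query pixel is far from the borders.
import numpy as np

def integer_disparity(left, right, y, x, win=9, max_d=64):
    """Brute-force SAD block matching along the epipolar line (integer pixels)."""
    r = win // 2
    ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_d, x - r) + 1):
        cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def subpixel_refine(left, right, y, x, d_int, win=16):
    """Refine the integer disparity with a phase-correlation column shift."""
    r = win // 2
    a = left[y - r:y + r, x - r:x + r].astype(float)
    b = right[y - r:y + r, x - d_int - r:x - d_int + r].astype(float)
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    q = np.fft.ifft2(Fa * np.conj(Fb) / (np.abs(Fa * np.conj(Fb)) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(q), q.shape)
    # Parabolic interpolation around the correlation peak (column direction).
    xm, x0, xp = q[dy, (dx - 1) % win], q[dy, dx], q[dy, (dx + 1) % win]
    frac = 0.5 * (xm - xp) / (xm - 2 * x0 + xp + 1e-12)
    dx_signed = dx - win if dx > win // 2 else dx
    return d_int + dx_signed + frac
```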


2021 ◽  
Vol 297 ◽  
pp. 01055
Author(s):  
Mohamed El Ansari ◽  
Ilyas El Jaafari ◽  
Lahcen Koutti

This paper proposes a new edge-based stereo matching approach for road applications. The new approach consists of matching the edge points extracted from the input stereo images using temporal constraints. For the current frame, we propose to estimate a disparity range for each image line based on the disparity map of the preceding frame. The stereo images are divided into multiple parts according to the estimated disparity ranges. The optimal solution for each part is independently approximated via the state-of-the-art graph cuts energy minimization approach. The disparity search space in each image part is very small compared to the global one, which improves the results and reduces the execution time. Furthermore, as a similarity criterion between corresponding edge points, we propose a new cost function based on intensity, gradient magnitude and gradient orientation. The proposed method has been tested on virtual stereo images and compared to a recently proposed method; the results are satisfactory.
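
A hypothetical form of such a combined cost function might look like the sketch below; the weights and the circular treatment of orientation differences are assumptions, not the paper's exact formulation.

```python
# Illustrative dissimilarity cost for a pair of candidate edge points combining
# intensity, gradient magnitude and gradient orientation differences. Weights
# and normalisation are assumed; in practice each term is usually normalised.
import numpy as np

def edge_cost(I_l, I_r, G_l, G_r, O_l, O_r, p, q, w=(0.4, 0.3, 0.3)):
    """Cost between edge pixel p in the left image and q in the right image.

    I_*: intensity images, G_*: gradient magnitudes, O_*: gradient orientations
    (radians); p and q are (row, col) tuples on the same epipolar line.
    """
    d_int = abs(float(I_l[p]) - float(I_r[q]))
    d_mag = abs(float(G_l[p]) - float(G_r[q]))
    # Orientation difference taken on the circle so that -pi and +pi coincide.
    d_ori = np.abs(np.angle(np.exp(1j * (O_l[p] - O_r[q]))))
    return w[0] * d_int + w[1] * d_mag + w[2] * d_ori
```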


2020 ◽  
Vol 12 (4) ◽  
pp. 696 ◽  
Author(s):  
Zhen Ye ◽  
Yusheng Xu ◽  
Hao Chen ◽  
Jingwei Zhu ◽  
Xiaohua Tong ◽  
...  

Dense image matching is a crucial step in many image processing tasks. Subpixel accuracy and fractional measurement are commonly pursued, given image resolution and application requirements, especially in the field of remote sensing. In this study, we conducted a practical analysis and comparative study of area-based dense image matching with subpixel accuracy for remote sensing applications, with a specific focus on subpixel capability and robustness. Twelve representative matching algorithms with two types of correlation-based similarity measures and seven types of subpixel methods were selected. The existing matching algorithms were compared and evaluated in a simulated experiment using synthetic image pairs with varying amounts of aliasing, and in two real applications: attitude jitter detection and disparity estimation. The experimental results indicate that there are two types of systematic errors: displacement-dependent errors, which depend on the fractional value of the displacement, and displacement-independent errors, which appear as unexpected wave artifacts in this study. In addition, the strengths and limitations of the different matching algorithms with respect to their robustness to these two types of systematic errors were investigated and discussed.
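
As background, two textbook subpixel peak estimators of the kind typically compared in such studies, a parabolic and a Gaussian fit around the integer correlation peak, are sketched below; they are generic formulas and not necessarily among the seven subpixel methods evaluated in the paper.

```python
# One-dimensional parabolic and Gaussian subpixel peak estimators applied to
# three correlation samples around the integer peak. Generic textbook formulas.
import numpy as np

def parabolic_peak(c_m, c_0, c_p):
    """Subpixel offset of the peak from samples c(-1), c(0), c(+1)."""
    denom = c_m - 2.0 * c_0 + c_p
    return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

def gaussian_peak(c_m, c_0, c_p):
    """Same, assuming a Gaussian-shaped peak; requires strictly positive samples."""
    lm, l0, lp = np.log(c_m), np.log(c_0), np.log(c_p)
    denom = lm - 2.0 * l0 + lp
    return 0.0 if denom == 0 else 0.5 * (lm - lp) / denom

# Example on a synthetic peak centred at +0.3 of a pixel (hypothetical values).
x = np.array([-1.0, 0.0, 1.0])
c = np.exp(-(x - 0.3) ** 2)                    # Gaussian-shaped correlation samples
print(parabolic_peak(*c), gaussian_peak(*c))   # the Gaussian fit recovers 0.3 exactly
```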


2014 ◽  
Vol 25 (7) ◽  
pp. 1558-1565 ◽  
Author(s):  
Jinchang Ren ◽  
Theodore Vlachos ◽  
Yi Zhang ◽  
Jiangbin Zheng ◽  
Jianmin Jiang

Author(s):  
Alfonso Alba ◽  
Ruth M. Aguilar-Ponce ◽  
Javier Flavio Vigueras-Gómez ◽  
Edgar Arce-Santana

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 63202-63213 ◽  
Author(s):  
Changhui Hu ◽  
Fei Wu ◽  
Jian Yu ◽  
Xiaoyuan Jing ◽  
Xiaobo Lu ◽  
...  

2014 ◽  
Vol 989-994 ◽  
pp. 1555-1560
Author(s):  
Ze Tao Jiang ◽  
Le Zhou ◽  
Lei Zhou

This paper proposes a new method that utilizes Parallel Estimation of Distribution (PED) to obtain the disparity map of two images and fuses a 3D point cloud based on the disparity. Estimation of Distribution (ED) has several advantages, such as low complexity and high efficiency, but it also has the shortcomings of being sensitive to the initial samples and of converging to a local optimum rather than the global optimum. To exploit its merits and overcome these shortcomings, this paper improves ED by parallel sampling, combined with the Adaptive Support-Weight method, to reduce the sensitivity; we call the result Parallel Estimation of Distribution (PED). After the disparity map is obtained, each of the two images is decomposed into high-frequency and low-frequency components by a lifting wavelet, and the corresponding frequency coefficients of the two images are averaged into fused coefficients. Finally, the fused coefficients are transformed by the inverse lifting wavelet into the final fused image. Experiments show that the matching speed is outstanding without loss of precision.
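
The coefficient-averaging fusion step might be sketched as follows, using PyWavelets' standard 2-D DWT as a stand-in for the lifting wavelet mentioned above; the two images are assumed to be already aligned by the preceding disparity stage.

```python
# Compact sketch of coefficient-averaging wavelet fusion: decompose both aligned
# images, average the corresponding sub-band coefficients, invert the transform.
# A standard Haar DWT is used here in place of the paper's lifting wavelet.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="haar"):
    """Fuse two aligned single-channel images by averaging wavelet coefficients."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b.astype(float), wavelet)
    fused = (
        (cA1 + cA2) / 2.0,                       # low-frequency (approximation) band
        ((cH1 + cH2) / 2.0,                      # high-frequency detail bands
         (cV1 + cV2) / 2.0,
         (cD1 + cD2) / 2.0),
    )
    return pywt.idwt2(fused, wavelet)
```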


2020 ◽  
Vol 2020 (1) ◽  
pp. 82-86
Author(s):  
Sorour Mohajerani ◽  
Mark S. Drew ◽  
Parvaneh Saeedi

Removing the effect of illumination variation in images has been shown to be beneficial in many computer vision applications, such as object recognition and semantic segmentation. Although the generation of illumination-invariant images has been studied in the literature, it has not previously been investigated on real 4-channel (4D) data. In this study, we examine the quality of illumination-invariant images generated from red, green, blue, and near-infrared (RGBN) data. Our experiments show that the near-infrared channel contributes substantially to removing illumination effects. As shown in our numerical and visual results, the illumination-invariant image obtained from RGBN data is superior to that obtained from RGB data alone.
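
As a point of comparison, a minimal log-chromaticity style invariant for plain RGB data is sketched below; the projection angle is a placeholder (normally calibrated or found by entropy minimisation), and extending the idea to RGBN data, as studied above, would add a further log-ratio coordinate.

```python
# Minimal log-chromaticity style illumination-invariant greyscale image for RGB
# data: project 2-D log band ratios onto a direction orthogonal to the assumed
# illumination-change direction. The angle theta is a placeholder assumption.
import numpy as np

def illumination_invariant(rgb, theta=2.0):
    """rgb: HxWx3 float array with positive values; theta in radians."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    # Log band ratios against the geometric mean remove overall brightness.
    gm = (r * g * b) ** (1.0 / 3.0)
    chi = np.stack([np.log(r / gm), np.log(b / gm)], axis=-1)
    # Project onto the direction orthogonal to the lighting-change direction.
    e_perp = np.array([np.cos(theta), np.sin(theta)])
    return chi @ e_perp
```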

