In this study, we propose a feature-point matching method for aerial and ground images that is robust to changes in viewpoint, scale, and illumination. First, a 3D rendering strategy is adopted to synthesize ground-view images from a 3D mesh model reconstructed from the aerial images, thereby overcoming the global geometric distortion between the aerial and ground views. Rather than directly matching feature points between the synthesized and ground images, we extract line-segment correspondences with a line-segment matching method designed to tolerate the local geometric deformation, holes, and blurred textures of the synthesized images. Then, on the basis of these line-segment matches, local-region correspondences are constructed, and the local regions on the synthesized images are propagated back to the original aerial images. Finally, feature-point matching is performed between the aerial and ground images under the constraints of the local-region correspondences. Experimental results demonstrate that the proposed method obtains more correct matches and higher matching precision than state-of-the-art methods: compared with the second-best method, it increases the average number of correct matches by more than a factor of five and the average matching precision by more than 40%.
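The four-stage pipeline summarized above can be sketched as follows. Every function name and data structure here is a hypothetical placeholder rather than the authors' implementation; each stage is stubbed out so that only the data flow between the stages, which mirrors the description in the abstract, is shown.

```python
# Hypothetical sketch of the aerial-to-ground matching pipeline.
# Stage bodies are stand-in stubs; only the control and data flow
# reflects the four steps described in the abstract.

def synthesize_ground_views(mesh_model, ground_poses):
    # Stage 1: render ground-view images from the aerial 3D mesh (stubbed).
    return [f"synth_view_{i}" for i, _ in enumerate(ground_poses)]

def match_line_segments(synth_image, ground_image):
    # Stage 2: line-segment correspondences tolerant to local deformation,
    # holes, and blurred textures on the synthesized image (stubbed).
    return [("synth_seg_0", "ground_seg_0"), ("synth_seg_1", "ground_seg_1")]

def build_local_regions(segment_matches):
    # Stage 3a: group line-segment matches into local-region pairs.
    return [{"synth_region": s, "ground_region": g} for s, g in segment_matches]

def propagate_to_aerial(region, mesh_model):
    # Stage 3b: map a synthesized-image region back to the original aerial
    # image via the known rendering geometry (stubbed as a label rewrite).
    region = dict(region)
    region["aerial_region"] = region.pop("synth_region").replace("synth", "aerial")
    return region

def match_feature_points(region_pairs):
    # Stage 4: feature-point matching constrained to corresponding regions.
    return [(r["aerial_region"], r["ground_region"]) for r in region_pairs]

def pipeline(mesh_model, ground_poses, ground_image):
    synth_views = synthesize_ground_views(mesh_model, ground_poses)
    seg_matches = match_line_segments(synth_views[0], ground_image)
    regions = [propagate_to_aerial(r, mesh_model)
               for r in build_local_regions(seg_matches)]
    return match_feature_points(regions)
```

In this sketch the region correspondences act purely as a spatial constraint passed into the final matching stage, which is the role the abstract assigns them.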