Feature points matching method of UAV images based on local RANSAC

Author(s):  
Lei Ye ◽  
Jianhua Gong ◽  
Yi Li ◽  
Zhongjin Shan ◽  
Ming Li


Author(s):  
Youssef Ouadid ◽  
Abderrahmane Elbalaoui ◽  
Mehdi Boutaounte ◽  
Mohamed Fakir ◽  
Brahim Minaoui

In this paper, a graph-based handwritten Tifinagh character recognition system is presented. In preprocessing, the Zhang-Suen thinning algorithm is enhanced. In feature extraction, a novel key point extraction algorithm is introduced. Images are then represented by adjacency matrices defining graphs whose nodes are the extracted feature points. These graphs are classified using a graph matching method. Experimental results on two databases demonstrate the effectiveness of the system, which achieves a good recognition rate.
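The abstract does not specify how nodes are connected or how two graphs are compared, so the sketch below is only illustrative: it connects key points under a hypothetical distance threshold to form the adjacency matrix and classifies with a crude padded-matrix dissimilarity, standing in for the paper's actual graph matching method.

```python
import numpy as np

def build_graph(keypoints, radius=20.0):
    """Adjacency matrix over key points: connect pairs closer than `radius`.
    The paper's connection rule is not given; distance thresholding is an
    assumption used here for illustration."""
    pts = np.asarray(keypoints, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (d < radius).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def graph_distance(a, b):
    """Crude graph dissimilarity: pad the smaller adjacency matrix with
    zeros and compare entrywise. A real graph matching method would also
    search over node correspondences."""
    n = max(len(a), len(b))
    pa, pb = np.zeros((n, n)), np.zeros((n, n))
    pa[:len(a), :len(a)] = a
    pb[:len(b), :len(b)] = b
    return np.abs(pa - pb).sum()

def classify(adj, references):
    """Nearest-reference classification; `references` maps a character
    label to a prototype adjacency matrix."""
    return min(references, key=lambda label: graph_distance(adj, references[label]))
```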


2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In an optical measuring system with a handheld digital camera, image point matching is very important for three-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple base lines: epipolar geometry alone cannot eliminate mistaken matching points, and multi-base-line methods lose many matching points. In this paper, a robust algorithm is presented to eliminate mistaken matching feature points in the process of 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-match the feature points using constraints of epipolar geometry and image topological structure; (2) eliminate the mistaken matching points by the principle of triangulation across multiple images; (3) refine the camera external parameters by bundle adjustment. After the external parameters of every image are refined, steps (1) to (3) are repeated until all feature points have been matched. Comparative experiments with real image data show that mistaken matching feature points are effectively eliminated and almost no matching points are lost, a better performance than that of traditional matching algorithms.
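As a rough illustration of step (1), the following sketch pre-filters candidate matches with the epipolar constraint, assuming the fundamental matrix F between the two images is already known. The image topological structure constraint is omitted, and the pixel threshold `tol` is an assumed value, not one from the paper.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Symmetric distance of a candidate pair (x1, x2) to the epipolar
    lines induced by fundamental matrix F; points are (u, v) pixels."""
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    l2 = F @ p1          # epipolar line of x1 in image 2
    l1 = F.T @ p2        # epipolar line of x2 in image 1
    d2 = abs(p2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(p1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

def prematch(F, pts1, pts2, tol=1.5):
    """Keep, for each point in image 1, the nearest candidate in image 2
    that lies within `tol` pixels of its epipolar line."""
    matches = []
    for i, x1 in enumerate(pts1):
        dists = [epipolar_distance(F, x1, x2) for x2 in pts2]
        j = int(np.argmin(dists))
        if dists[j] < tol:
            matches.append((i, j))
    return matches
```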


2018 ◽  
Vol 55 (4) ◽  
pp. 041005 ◽  
Author(s):  
赵夫群 Zhao Fuqun ◽  
耿国华 Geng Guohua

Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2553 ◽  
Author(s):  
Jingwen Cui ◽  
Jianping Zhang ◽  
Guiling Sun ◽  
Bowen Zheng

Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be successfully grasped during automatic picking. The method combines the YOLOv3 algorithm under the DarkNet framework with a point cloud image coordinate matching method. Firstly, RGB (red, green, blue) images and depth images are obtained using the Kinect v2 depth camera. Secondly, the YOLOv3 algorithm is used to identify the various types of target crops in the RGB images and to determine the feature points of the target crops. Finally, the 3D coordinates of the feature points are obtained from the point cloud images. Compared with other methods, this crop identification method has high accuracy and small positioning error, which lays a good foundation for the subsequent harvesting of crops using mechanical arms.
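A minimal sketch of the final localisation step: mapping a detected feature point plus its depth reading to camera-frame 3D coordinates through the pinhole model. The intrinsic parameters FX, FY, CX, CY below are placeholder values for illustration, not calibrated Kinect v2 constants, and the RGB-to-depth registration is assumed done.

```python
import numpy as np

# Assumed depth-camera intrinsics (placeholder values; calibrate the
# actual device in practice).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point

def pixel_to_3d(u, v, depth_m):
    """Back-project a feature point (u, v) with its depth (metres)
    into camera-frame 3D coordinates via the pinhole model."""
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# e.g. the centre of a YOLOv3 bounding box, after registering the RGB
# detection to the depth image:
point = pixel_to_3d(u=310, v=180, depth_m=0.85)
```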


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2007 ◽  
Author(s):  
Ruizhe Shao ◽  
Chun Du ◽  
Hao Chen ◽  
Jun Li

With the development of unmanned aerial vehicle (UAV) techniques, UAV images are becoming more widely used. However, as an essential step of UAV image application, stitching remains computationally intensive, especially for emergency applications. Addressing this issue, we propose a novel approach, FUIS (fast UAV image stitching), that uses the position and pose information of UAV images to speed up stitching. FUIS stitches images by feature points, but unlike traditional approaches it rapidly finds several anchor matches instead of a large number of feature matches. Firstly, we design a method to select, from a large number of feature points, a small number that are most helpful for stitching as anchor points. Then, a method is proposed to match these anchor points more quickly and accurately using position and pose information. Experiments show that our method significantly reduces time consumption compared with state-of-the-art approaches while guaranteeing accuracy.
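The anchor selection and pose-guided matching are the paper's contribution and are not reproduced here; the sketch below only shows the downstream step of stitching two images from a handful of anchor correspondences, using OpenCV's standard RANSAC homography estimation rather than FUIS code.

```python
import cv2
import numpy as np

def stitch_from_anchors(img_a, img_b, anchors_a, anchors_b):
    """Warp img_b into img_a's frame from a few anchor matches.
    `anchors_a` / `anchors_b` are corresponding (u, v) points; any four
    or more reliable correspondences suffice for a homography."""
    src = np.float32(anchors_b).reshape(-1, 1, 2)
    dst = np.float32(anchors_a).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[:h, :w] = img_a  # naive overlay; real stitching blends seams
    return canvas
```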

