A Robust Feature Matching Method for Robot Localization in a Dynamic Indoor Environment

Author(s):  
Tsung-Yen Tsou ◽  
Shih-Hung Wu
2012 ◽  
Vol 220-223 ◽  
pp. 1356-1361
Author(s):  
Xi Jie Tian ◽  
Jing Yu ◽  
Chang Chun Li

In this paper, a machine-vision method for identifying the hook on an investment casting shell line is proposed. Based on the characteristics of the hook, we perform image acquisition and preprocessing, adopt the Hough transform to narrow the target range, locate the target area by combining horizontal and vertical projections, and apply the SIFT feature matching method for image matching. Finally, we obtain the spatial information of the target area of the hook.
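The projection step described above, summing a binarized image along its rows and columns to bound the target area, can be sketched as follows. This is a minimal illustration under assumed inputs (a toy binary image and threshold), not the authors' implementation.

```python
import numpy as np

def locate_target(binary_img, thresh=1):
    """Locate a target region in a binarized image by combining the
    horizontal (row-sum) and vertical (column-sum) projections."""
    rows = binary_img.sum(axis=1)   # horizontal projection
    cols = binary_img.sum(axis=0)   # vertical projection
    ys = np.where(rows >= thresh)[0]
    xs = np.where(cols >= thresh)[0]
    if ys.size == 0 or xs.size == 0:
        return None                 # no foreground found
    # Bounding box (top, bottom, left, right) of the target area
    return ys[0], ys[-1], xs[0], xs[-1]

# Toy image: a 3x4 block of foreground pixels inside a 10x10 frame
img = np.zeros((10, 10), dtype=np.uint8)
img[2:5, 3:7] = 1
print(locate_target(img))  # (2, 4, 3, 6)
```

In the pipeline sketched by the abstract, this bounding box would be computed inside the region already narrowed by the Hough transform, and SIFT matching would then run only on the cropped target area.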


2021 ◽  
Vol 5 (4) ◽  
pp. 783-793
Author(s):  
Muhammad Muttabi Hudaya ◽  
Siti Saadah ◽  
Hendy Irawan

EKTP card data needs solid validation, with verification and matching of uploaded images. To solve this problem, this paper implements a detection model using Faster R-CNN and a matching method using ORB (Oriented FAST and Rotated BRIEF) with KNN-BFM (K-Nearest Neighbor Brute Force Matcher). The goal of the implementation is to reach 80% accuracy and to show that matching with ORB alone can replace the OCR technique. The detection model reaches a mean average precision (mAP) of 94%, but the matching process achieves an accuracy of only 43.46%. Matching by image features alone underperforms the previous OCR technique, but improves processing time from 4510 ms to 60 ms. Image matching accuracy is shown to increase with a high-quality, high-quantity dataset and by extracting features from the important areas of EKTP card images.
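ORB descriptors are binary strings compared by Hamming distance, and KNN-BFM finds, for each query descriptor, its two nearest neighbors by brute force and keeps the match only if the best clearly beats the runner-up (Lowe's ratio test). A minimal sketch of that matcher in plain Python; the toy 2-byte descriptors below are assumed values, not real ORB output:

```python
def hamming(a, b):
    """Hamming distance between two equal-length byte sequences."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def knn_bf_match(query, train, ratio=0.75):
    """Brute-force 2-NN matching with a ratio test, as in KNN-BFM.
    Returns (query_idx, train_idx) pairs that pass the test."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:            # best match clearly beats runner-up
            matches.append((qi, t1))
    return matches

# Toy 2-byte "descriptors": query[0] is one bit away from train[1]
query = [bytes([0b10101010, 0b11110000]), bytes([0b00000000, 0b00001111])]
train = [bytes([0b11111111, 0b11111111]),
         bytes([0b10101010, 0b11110001]),
         bytes([0b01010101, 0b00001111])]
print(knn_bf_match(query, train))  # [(0, 1), (1, 2)]
```

In practice this is what OpenCV's `BFMatcher` with `NORM_HAMMING` and `knnMatch(k=2)` does for ORB descriptors; the pure-Python version just makes the distance and ratio-test logic explicit.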


Automatic image registration (IR) is very challenging and very important in the field of hyperspectral remote sensing. An efficient, autonomous IR method is needed that is precise, fast, and robust. A key operation of IR is aligning multiple images in a single coordinate system so that variation between the images considered can be extracted and identified. In this paper, a feature descriptor is presented that combines features from both the Features from Accelerated Segment Test (FAST) and Binary Robust Invariant Scalable Keypoints (BRISK). The proposed hybrid invariant local features (HILF) descriptor extracts useful, similar feature sets from the reference and source images. The feature matching method then finds precise correspondences between the two feature sets. An experimental analysis describes the outcomes of BRISK, FAST, and the proposed HILF in terms of the inlier ratio and repeatability evaluation metrics.
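The inlier ratio used for evaluation can be computed by mapping each matched source point through a known ground-truth transform and counting the matches whose reprojection error stays under a pixel tolerance. A minimal sketch, where the homography, points, and tolerance are assumed illustrative values:

```python
import numpy as np

def inlier_ratio(src_pts, dst_pts, H, tol=3.0):
    """Fraction of matches whose source points, mapped through the
    ground-truth homography H, land within tol pixels of the
    matched destination points."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                         # back to 2-D
    err = np.linalg.norm(proj - dst_pts, axis=1)
    return float(np.mean(err < tol))

# Ground truth: pure translation by (5, -2); one deliberately bad match
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
src = np.array([[10.0, 10.0], [20.0, 30.0], [40.0, 50.0], [60.0, 5.0]])
dst = np.array([[15.0, 8.0], [25.0, 28.0], [45.0, 48.0], [0.0, 0.0]])
print(inlier_ratio(src, dst, H))  # 0.75
```

A higher inlier ratio for HILF than for FAST or BRISK alone would indicate that the hybrid descriptor yields more geometrically consistent correspondences.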


Measurement ◽  
2020 ◽  
Vol 156 ◽  
pp. 107581
Author(s):  
Ming Lu ◽  
Duan Liu ◽  
Yupeng Deng ◽  
Lianghong Wu ◽  
Yongfang Xie ◽  
...  

2006 ◽  
Vol 17 (5) ◽  
pp. 1026
Author(s):  
Ya-Qian ZHOU

2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Tianyang Cao ◽  
Haoyuan Cai ◽  
Dongming Fang ◽  
Hui Huang ◽  
Chang Liu

Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization, especially suited for indoor floor-cleaning robots. Common methods such as SLAM can easily be kidnapped by collisions or disturbed by similar objects. Therefore, a keyframe-based global map building method is needed for robot localization in multiple rooms and corridors. Content-based image matching is the core of this method: it is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and deduced, and an image matching solution is presented, consisting of extracting the overlap regions of keyframes and rebuilding the overlap regions through subblock matching. To improve accuracy, ceiling point detection and mismatched-subblock checking are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; these keyframes are widely separated in space yet overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames against the keyframe map. Even with many similar objects or backgrounds in the environment, or when the robot is kidnapped, localization is achieved with a position RMSE < 0.5 m.
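The keyframe selection idea, keeping frames that are far from the previous keyframe yet still overlap it so the map stays connected, can be sketched with a greedy similarity-band rule. The similarity function and thresholds below are illustrative assumptions, not the paper's actual criterion:

```python
def select_keyframes(frames, low=0.25, high=0.75, sim=None):
    """Greedy keyframe selection: a frame becomes a keyframe when its
    similarity to the last keyframe drops below `high` (large spatial
    distance) but stays above `low` (still overlapping, so consecutive
    keyframes remain connected)."""
    if sim is None:
        # Toy similarity for 1-D camera positions: decays with distance
        sim = lambda a, b: max(0.0, 1.0 - abs(a - b) / 10.0)
    keyframes = [0]                       # first frame seeds the map
    for i in range(1, len(frames)):
        s = sim(frames[i], frames[keyframes[-1]])
        if low <= s < high:
            keyframes.append(i)
    return keyframes

# Camera positions along a corridor: 30 dense video frames reduce
# to a sparse set of overlapping keyframes
print(select_keyframes(list(range(30))))
```

At localization time, the robot would match its current frame against this sparse keyframe set instead of the full video, which is what keeps the global map small.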


Author(s):  
Hongmin Liu ◽  
Hongya Zhang ◽  
Zhiheng Wang ◽  
Yiming Zheng

For images with distortions or repetitive patterns, existing matching methods usually work well on only one of the two kinds of images. In this paper, we present a novel triangle guidance and constraints (TGC)-based feature matching method that achieves good results on both kinds. We first extract stably matched feature points and combine them into triangles as the initial matched triangles; triangles built from the remaining feature points serve as the candidates to be matched. Triangle guidance, based on the connection relationship through a shared feature point between a matched triangle and a candidate, is then defined to find potential matching triangles. Triangle constraints, specifically the location of a vertex relative to the triangle's inscribed circle center, the scale represented by the ratio of corresponding side lengths of two matching triangles, and the included angles between the sides of two connected triangles, are subsequently used to verify the potential matches and retain the correct ones. Comparative experiments show that the proposed TGC increases the number of matched points with high accuracy under various image transformations, and is especially effective on images with distortions or repetitive patterns, because the triangular structure is not only stable under image transformations but also provides more geometric constraints.
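Two of the triangle constraints, consistent side-length ratios (uniform scale) and equal corresponding angles, can be verified with elementary geometry. A minimal sketch of such a check, with the tolerance values chosen arbitrarily rather than taken from the paper:

```python
import math

def sides(tri):
    """Side lengths of triangle tri = [(x, y), (x, y), (x, y)],
    with side i opposite vertex i."""
    a, b, c = tri
    return math.dist(b, c), math.dist(c, a), math.dist(a, b)

def triangles_match(t1, t2, scale_tol=0.05, angle_tol=0.05):
    """Verify a candidate triangle pair: the three side-length ratios
    must agree (uniform scale) and corresponding angles must be equal."""
    s1, s2 = sides(t1), sides(t2)
    ratios = [q / p for p, q in zip(s1, s2)]
    if max(ratios) - min(ratios) > scale_tol * min(ratios):
        return False                      # scale is not uniform
    # Law of cosines: angle at vertex i from the three side lengths
    def angles(s):
        a, b, c = s
        return [math.acos((b*b + c*c - a*a) / (2*b*c)),
                math.acos((c*c + a*a - b*b) / (2*c*a)),
                math.acos((a*a + b*b - c*c) / (2*a*b))]
    return all(abs(p - q) < angle_tol
               for p, q in zip(angles(s1), angles(s2)))

t1 = [(0, 0), (4, 0), (0, 3)]
t2 = [(10, 10), (18, 10), (10, 16)]   # t1 scaled by 2 and translated
print(triangles_match(t1, t2))        # True
print(triangles_match(t1, [(0, 0), (9, 0), (0, 3)]))  # False
```

In the full TGC method this check would be applied only to the potential pairs proposed by triangle guidance, with the inscribed-circle vertex-location constraint providing a further test not sketched here.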

