Fast tracking of feature points in full-view image

Author(s):  
Xuebin Qin ◽  
Shigang Li

2013 ◽
Vol 552 ◽  
pp. 542-547
Author(s):  
Ying Li ◽  
Jing Jiang Yu ◽  
Yue Gang Fu ◽  
Zheng Ping Xu ◽  
Xun Zhou

In order to design a moving-target fast-tracking system that remains real-time and stable, especially when the shape of the moving target or its environment changes, a new algorithm named SURF-KMs is proposed. SURF-KMs combines the advantages of the SURF algorithm with K-means cluster analysis. First, feature points are collected and used to generate the matching template vectors based on the SURF algorithm. Second, the feature points and the center of the target are estimated using the K-means method to determine the target's cluster scope and update the tracking window. Finally, a self-adapting updating strategy for the matching template is proposed so that the moving target can be tracked automatically. Experimental results indicate that SURF-KMs achieves stable tracking in most cases, even while the monitored target rotates, its scale changes, and the environment illumination flickers. Moreover, it satisfies the system requirements of tracking stability, high precision, and anti-jamming.
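A minimal sketch of the two core steps, SURF matching followed by K-means clustering of the matched points, is given below in Python with OpenCV. Note the assumptions: SURF_create requires the opencv-contrib package, the Lowe ratio test and keeping the dominant cluster are simplifications of my own, and the bounding rectangle stands in for the paper's self-adapting tracking-window update.

```python
import cv2
import numpy as np

def track_target(template_gray, frame_gray, k=2):
    # 1. SURF feature points and matching-template descriptors
    #    (SURF lives in opencv-contrib-python's xfeatures2d module).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, des_t = surf.detectAndCompute(template_gray, None)
    kp_f, des_f = surf.detectAndCompute(frame_gray, None)

    # 2. Match template descriptors against the current frame,
    #    filtered with Lowe's ratio test (an assumed filtering step).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_t, des_f, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    pts = np.float32([kp_f[m.trainIdx].pt for m in good])

    # 3. K-means over the matched points: the dominant cluster's center
    #    estimates the target center, and its extent gives the updated
    #    tracking window.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pts, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.ravel()
    main = np.bincount(labels).argmax()
    x, y, w, h = cv2.boundingRect(pts[labels == main])
    return centers[main], (x, y, w, h)
```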


2018 ◽  
Vol 8 (11) ◽  
pp. 2268 ◽  
Author(s):  
Jianfeng Li ◽  
Xiaowei Wang ◽  
Shigang Li

SLAM (Simultaneous Localization and Mapping) relies on the surroundings, and a full-view image provides more information to SLAM than a limited-view image. In this paper, we present a spherical-model-based SLAM on full-view images for indoor environments. Unlike traditional limited-view images, the full-view image follows its own specific, nonlinear imaging principle and is accompanied by distortions, so specific techniques are needed to process it. In the proposed method, we first use a spherical model to express the full-view image. The algorithms are then implemented on the spherical model, including feature point extraction, feature point matching, 2D-3D connection, and the projection and back-projection of scene points. Thanks to the full field of view, the experiments show that the proposed method effectively handles sparse-feature or partially featureless environments, and also achieves high accuracy in localization and mapping. An additional experiment demonstrates that the accuracy is affected by the size of the field of view.
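A minimal sketch of the spherical model's projection and back-projection follows, assuming the full-view image is stored as an equirectangular panorama of size (H, W); that layout is an assumption of this sketch, since the paper defines its own full-view imaging principle.

```python
import numpy as np

def pixel_to_sphere(u, v, W, H):
    """Back-project a pixel to a unit bearing vector on the sphere."""
    lon = (u / W) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / H) * np.pi    # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sphere_to_pixel(p, W, H):
    """Project a 3D scene point (camera frame) onto the panorama."""
    d = p / np.linalg.norm(p)              # bearing on the unit sphere
    lon = np.arctan2(d[1], d[0])
    lat = np.arcsin(d[2])
    u = (lon + np.pi) / (2.0 * np.pi) * W
    v = (np.pi / 2.0 - lat) / np.pi * H
    return u, v
```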


2011 ◽  
Vol 23 (6) ◽  
pp. 1012-1023 ◽  
Author(s):  
Tsuyoshi Tasaki ◽  
Seiji Tokura ◽  
Takafumi Sonoura ◽  
Fumio Ozaki ◽  
...  

For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Classifying the obstacles as stable or unstable and localizing quickly with a single sensor, such as an omnidirectional camera, are also important for achieving smooth movements and reducing the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot while localizing it quickly during motion using only one omnidirectional camera. To locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured with an omnidirectional camera. By tracking those points, we can classify obstacles by comparing the movement of each tracked point with odometry data, and our method adapts the point-detection threshold based on the result of this comparison to improve classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks that remain detectable for a long time, combining a fast tracking method with the slower Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly, so fast self-localization is achieved. The classification ratio of our method is 85.0%, four times higher than that of a previous method. Using our method, our robot localizes 2.9 times faster and 4.2 times more accurately than with the SURF method alone.
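The stable/unstable classification idea can be sketched as follows: the ground-plane position of a static obstacle point, re-expressed through odometry, should stay put, so a large residual suggests a moving obstacle. The 2D frame convention and the threshold value are illustrative assumptions; the paper additionally adapts the detection threshold based on this comparison.

```python
import numpy as np

def predict_from_odometry(p_prev, dx, dy, dtheta):
    """Where a static world point should appear after the robot moves
    by (dx, dy, dtheta) in its own ground-plane frame."""
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    q = np.array([p_prev[0] - dx, p_prev[1] - dy])
    return np.array([c * q[0] - s * q[1], s * q[0] + c * q[1]])

def classify(p_prev, p_now, odom, threshold=0.15):
    """Label a tracked floor boundary point as stable or unstable by
    comparing its observed motion with the odometry prediction."""
    residual = np.linalg.norm(p_now - predict_from_odometry(p_prev, *odom))
    return 'stable' if residual < threshold else 'unstable'
```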


2016 ◽  
Vol E99.B (7) ◽  
pp. 1416-1425 ◽  
Author(s):  
Tadao NAKAGAWA ◽  
Takayuki KOBAYASHI ◽  
Koichi ISHIHARA ◽  
Yutaka MIYAMOTO

Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data with image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Next, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the high registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
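A sketch of the first two steps, rasterizing the LiDAR intensity channel into an image and matching feature points against the optical image, is shown below. Two assumptions are mine, not the paper's: a top-down grid projection for the intensity image, and ORB as the feature detector (the abstract does not name the detector used).

```python
import cv2
import numpy as np

def intensity_image(points, intensity, resolution=0.05):
    """Project points (N, 3) onto a horizontal grid, keeping the
    maximum intensity per cell, and normalize to an 8-bit image."""
    xy = points[:, :2]
    ij = ((xy - xy.min(axis=0)) / resolution).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.float32)
    np.maximum.at(img, (ij[:, 1], ij[:, 0]), intensity)
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
    return img.astype(np.uint8)

def match_to_optical(intensity_img, optical_gray):
    """Match feature points between the intensity and optical images."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # These correspondences would feed the collinearity-equation solve
    # for the exterior orientation parameters (not shown here).
    return kp1, kp2, matches
```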


2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature that distinguishes a person, and facial appearance is vital for human recognition. It has features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips, and teeth, which help us humans recognize a particular face among millions of faces, even after a large span of time and despite large changes in appearance due to ageing, expression, viewing conditions, and distractions such as disfigurement of the face, scars, a beard, or hair style. A face is not merely a set of facial features but is rather something meaningful in its form. In this paper, a system is designed to recognize faces based on their various facial features. Different edge detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth, and so on. These features are extracted as distances between important feature points. The resulting feature set is then normalized and fed to artificial neural networks to train them for the recognition of facial images.
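A sketch of the described pipeline follows: edge detection to outline the facial features, pairwise distances between feature points as the feature vector, normalization, and a small neural network. The landmark coordinates are assumed to be given by the paper's feature extraction, and scikit-learn's MLPClassifier stands in for the paper's artificial neural network.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def outline_edges(face_gray):
    """Edge map revealing the outlines of face, eyes, nose, mouth."""
    return cv2.Canny(face_gray, 100, 200)

def distance_features(landmarks):
    """Pairwise distances between important feature points (x, y)."""
    pts = np.asarray(landmarks, dtype=np.float64)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(len(pts))
                     for j in range(i + 1, len(pts))])

def train_recognizer(feature_vectors, labels):
    """Normalize the feature set and train a small ANN on it."""
    scaler = StandardScaler().fit(feature_vectors)
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(scaler.transform(feature_vectors), labels)
    return scaler, net
```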

