Tracking Feature Points of Fisheye Full-View Image by Normalized Image Patch

2012 ◽  
Vol 132 (9) ◽  
pp. 1516-1523 ◽  
Author(s):  
Xuebin Qin ◽  
Shigang Li


2018 ◽  
Vol 8 (11) ◽  
pp. 2268 ◽  
Author(s):  
Jianfeng Li ◽  
Xiaowei Wang ◽  
Shigang Li

SLAM (Simultaneous Localization and Mapping) relies on observations of the surroundings, and a full-view image provides more of them than a limited-view image. In this paper, we present spherical-model-based SLAM on full-view images for indoor environments. Unlike traditional limited-view images, a full-view image follows its own nonlinear imaging principle and is accompanied by distortions, so specific techniques are needed to process it. In the proposed method, we first use a spherical model to express the full-view image. The algorithms are then implemented on the spherical model, including feature point extraction, feature point matching, 2D-3D association, and projection and back-projection of scene points. The experiments show that, thanks to the full field of view, the proposed method effectively handles sparse-feature or partially featureless environments while achieving high accuracy in localization and mapping. A further experiment confirms that the accuracy is affected by the size of the field of view.
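The abstract does not spell out the spherical model's parameterization, so the following is only a minimal sketch, assuming an equirectangular full-view image: it shows the projection and back-projection between image pixels and points on the unit sphere that the described pipeline builds on. All function names and image dimensions are illustrative.

import numpy as np

def pixel_to_sphere(u, v, width, height):
    """Back-project an equirectangular pixel (u, v) to a unit-sphere point."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sphere_to_pixel(p, width, height):
    """Project a unit-sphere point back to equirectangular pixel coordinates."""
    p = p / np.linalg.norm(p)
    lon = np.arctan2(p[1], p[0])
    lat = np.arcsin(p[2])
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v

# Round trip: a pixel maps onto the sphere and back to itself.
u, v = sphere_to_pixel(pixel_to_sphere(400.0, 300.0, 2048, 1024), 2048, 1024)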


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of a UAV's limited carrying capacity, the sensors integrated in a ULS must be small and lightweight, which lowers the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data and image data into matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the higher registration accuracy and faster fusion of the proposed method, demonstrating its accuracy and effectiveness.
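A minimal sketch of the first two steps, not the authors' implementation: LiDAR returns are rasterized into an intensity image, ORB features are matched against the optical image, and exterior orientation is recovered with a robust PnP solve (the standard computer-vision analogue of the collinearity equations). Grid resolution, feature counts, and array names are assumptions.

import numpy as np
import cv2

def intensity_image(points, intensities, res=0.1):
    """Project points (N, 3) onto a horizontal grid, keeping the max intensity."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / res).astype(int)
    img = np.zeros(xy.max(axis=0) + 1, dtype=np.uint8)
    np.maximum.at(img, (xy[:, 0], xy[:, 1]),
                  (255 * intensities / intensities.max()).astype(np.uint8))
    return img

def match_features(img_a, img_b):
    """ORB keypoint matching between the intensity image and the photo."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp_a, kp_b, sorted(matcher.match(des_a, des_b),
                              key=lambda m: m.distance)

# With matched 3D points (object_pts) and photo pixels (image_pts), the
# camera pose follows from a robust PnP solve:
# ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)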


2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature that most distinguishes a person, and facial appearance is vital for human recognition. Features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips, and teeth help us humans recognize a particular face among millions, even after a long span of time and despite large changes in appearance due to ageing, expression, viewing conditions, and distractions such as disfigurement, scars, a beard, or hair style. A face is not merely a set of facial features but rather something meaningful in its form. In this paper, a system is designed to recognize faces based on these various facial features. Different edge detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth, etc. The features are extracted as distances between important feature points. The resulting feature set is then normalized and fed to artificial neural networks, which are trained to recognize facial images.
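A minimal sketch of the pipeline just described; the landmark names, Canny thresholds, distance pairs, and network size are illustrative assumptions, not the authors' settings. Landmark localization on the edge map is assumed to happen upstream.

import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

def edge_map(gray):
    """Edge detection to reveal the outlines of the face, eyes, nose, mouth."""
    return cv2.Canny(gray, 100, 200)

def feature_vector(landmarks):
    """Distances between key feature points, normalized by inter-eye distance."""
    eye_l, eye_r, nose, mouth = (np.asarray(landmarks[k])
                                 for k in ("eye_l", "eye_r", "nose", "mouth"))
    scale = np.linalg.norm(eye_r - eye_l)        # removes image-scale effects
    pairs = [(eye_l, nose), (eye_r, nose), (nose, mouth), (eye_l, mouth)]
    return np.array([np.linalg.norm(a - b) for a, b in pairs]) / scale

# Normalized distance vectors (one row per face) train a small neural network:
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
# clf.fit(X_train, y_train); clf.predict(feature_vector(...).reshape(1, -1))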


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5235
Author(s):  
Jiri Nemecek ◽  
Martin Polasek

Among other things, passive methods based on processing images of feature points or beacons captured by an image sensor are used to measure the relative position of objects. Usually, at least two cameras have to be used to obtain the required information, or the cameras are combined with other sensors working on different physical principles. This paper describes the principle of passively measuring three position coordinates of an optical beacon with a single camera using a simultaneous method, and presents the results of the corresponding experimental tests. The beacon is an artificial geometric structure consisting of several semiconductor light sources, arranged so that the distance and the two position angles, the azimuth and the elevation, of the beacon can all be measured passively from one camera. The mathematical model of the method consists of working equations that relate the measured coordinates, the geometric parameters of the beacon, and the geometric parameters of the beacon image captured by the camera. All the results of these experimental tests are presented.
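The paper's working equations are not reproduced in the abstract, so the following is only a rough sketch under a simple pinhole assumption: with a beacon of known physical width, the distance follows from its image width by similar triangles, and the azimuth and elevation from the offset of the beacon centroid from the principal point. All parameter values are hypothetical.

import numpy as np

def beacon_coordinates(centroid_px, width_px, beacon_width_m, f_px, principal_point):
    """Return (distance, azimuth, elevation) of the beacon from one camera image."""
    distance = f_px * beacon_width_m / width_px     # similar triangles
    dx = centroid_px[0] - principal_point[0]
    dy = principal_point[1] - centroid_px[1]        # image y grows downward
    azimuth = np.arctan2(dx, f_px)
    elevation = np.arctan2(dy, np.hypot(dx, f_px))
    return distance, azimuth, elevation

# Example: a 0.5 m beacon imaged 100 px wide by a camera with a 1200 px
# focal length and principal point (640, 480).
d, az, el = beacon_coordinates((700.0, 420.0), 100.0, 0.5, 1200.0, (640.0, 480.0))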


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3586
Author(s):  
Wenqing Wang ◽  
Han Liu ◽  
Guo Xie

The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and an adaptive coupled dictionary is proposed in this paper. First, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of the WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated with the GRSC algorithm. In addition, an adaptive coupled dictionary pair is constructed for each task to represent the subsets effectively. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix by the corresponding dictionary. A variety of experiments conducted on WorldView-2 data demonstrate that the proposed method outperforms existing pansharpening algorithms in both subjective analysis and objective evaluation.
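A minimal sketch of the coupled-dictionary reconstruction step only: plain LASSO sparse coding stands in here for the paper's graph regularized sparse coding, and D_lo/D_hi are assumed to be the learned coupled dictionary pair for one task, with patches stored as columns.

import numpy as np
from sklearn.decomposition import sparse_encode

def reconstruct_hr_patches(X_lo, D_lo, D_hi, alpha=0.1):
    """Code low-resolution patches over D_lo, then reconstruct with D_hi."""
    # sparse_encode expects samples in rows and dictionary atoms in rows.
    codes = sparse_encode(X_lo.T, D_lo.T, algorithm="lasso_lars", alpha=alpha)
    return D_hi @ codes.T          # high-resolution patches, one per column

# Shapes: D_lo (d_lo, k), D_hi (d_hi, k), X_lo (d_lo, n)  ->  result (d_hi, n)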


2021 ◽  
Vol 11 (4) ◽  
pp. 1373
Author(s):  
Jingyu Zhang ◽  
Zhen Liu ◽  
Guangjun Zhang

Pose measurement is a necessary technology for UAV navigation, and accurate pose measurement is the most important guarantee of stable UAV flight. Existing UAV pose measurement methods mostly rely on matching images against aircraft models or on 2D-3D point correspondences, which leads to pose errors when contours or key feature points are extracted inaccurately. To solve these problems, this paper proposes a pose measurement method based on the structural characteristics of the aircraft's rigid skeleton. Depth information is introduced to guide and label the 2D feature points, eliminating feature mismatches and segmenting the regions. The spatial points obtained from the labeled feature points are fitted to the spatial line equations of the rigid skeleton, and the UAV attitude is calculated by combining these lines with the geometric model. The method requires no cooperative markers or aircraft model and can stably measure the position and attitude of a short-range UAV in various environments. Its effectiveness and reliability are verified by experiments on a visual simulation platform. The proposed method can help prevent aircraft collisions and ensure the safety of UAV navigation during autonomous refueling or formation flight.
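A rough sketch of the line-fitting and attitude step, not the paper's full method: spatial lines are fitted to the depth-labeled skeleton points by PCA, and the attitude angles are then read off the fuselage and wing direction vectors. The point groupings and the roll approximation are my assumptions.

import numpy as np

def fit_line_direction(points):
    """Principal direction of a 3D point set (dominant right singular vector)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def attitude(fuselage_pts, wing_pts):
    """Yaw and pitch from the fuselage axis; roll approximated from the wing axis."""
    f = fit_line_direction(fuselage_pts)
    w = fit_line_direction(wing_pts)
    yaw = np.arctan2(f[1], f[0])
    pitch = np.arcsin(np.clip(-f[2], -1.0, 1.0))
    roll = np.arctan2(w[2], np.hypot(w[0], w[1]))   # wing tilt above horizontal
    return yaw, pitch, roll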


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Uttam U. Deshpande ◽  
V. S. Malemath ◽  
Shivanand M. Patil ◽  
Sushma V. Chaugule
