Self-Localization of Mobile Robots Using a Single Catadioptric Camera with Line Feature Extraction

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4719
Author(s):  
Huei-Yung Lin ◽  
Yuan-Chi Chung ◽  
Ming-Liang Wang

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived by catadioptric camera calibration. The geometric property of the camera projection model is utilized to obtain the intersections of the vertical lines and the ground plane in the scene. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated using consecutive image frames. The motion trajectory is then derived from the computation of rotation and translation between the robot positions. We develop an algorithm for feature correspondence matching based on the invariability of the structure in 3D space. Experimental results obtained using real scene images demonstrate the feasibility of the proposed method for mobile robot localization applications.
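The abstract's core geometric step, back-projecting an image point through the calibrated camera and intersecting the resulting ray with the known ground plane, can be sketched as follows. This is an illustrative sketch only, assuming the common Mei-style unified sphere model with mirror parameter `xi`; the function names and the specific calibration convention are assumptions, not the paper's implementation.

```python
import numpy as np

def backproject_unified_sphere(u, v, K_inv, xi):
    """Back-project a pixel to a unit viewing ray using the unified
    sphere model (Mei-style), with mirror parameter xi."""
    m = K_inv @ np.array([u, v, 1.0])          # normalized image point
    x, y = m[0], m[1]
    r2 = x * x + y * y
    # Lift the normalized point back onto the unit sphere
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    ray = np.array([eta * x, eta * y, eta - xi])
    return ray / np.linalg.norm(ray)

def intersect_ground_plane(cam_center, ray,
                           n=np.array([0.0, 0.0, 1.0]), d=0.0):
    """Intersect a viewing ray with the plane n.X + d = 0 (the ground).
    Returns the 3D intersection point, or None if there is none."""
    denom = n @ ray
    if abs(denom) < 1e-9:
        return None                            # ray parallel to the plane
    t = -(n @ cam_center + d) / denom
    return None if t <= 0 else cam_center + t * ray
```

Once the base points of the vertical lines are lifted to 3D this way in two consecutive frames, the rotation and translation between robot poses can be recovered from the point correspondences.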

Author(s):  
N. Zeller ◽  
C. A. Noury ◽  
F. Quint ◽  
C. Teulière ◽  
U. Stilla ◽  
...  

In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back to 3D space.
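The three-dimensional residual described above, two lateral image components plus a depth component defined on the inverse virtual depth, might look roughly like the following. This is a hedged sketch: the function name, the argument layout, and the scalar weight `w_depth` are illustrative assumptions, not the paper's actual parametrization.

```python
import numpy as np

def residual_3d(observed, predicted, w_depth=1.0):
    """Three-dimensional bundle-adjustment residual: lateral pixel
    differences (du, dv) plus a depth term defined as a scaled
    difference of inverse virtual depths.

    observed, predicted: (u, v, virtual_depth) triples.
    """
    du = observed[0] - predicted[0]
    dv = observed[1] - predicted[1]
    # The inverse virtual depth 1/v is approximately linear in the raw
    # plenoptic measurement, so the residual is formed in that domain.
    dd = w_depth * (1.0 / observed[2] - 1.0 / predicted[2])
    return np.array([du, dv, dd])
```

Forming the depth term on 1/v rather than on v itself keeps the residual roughly homoscedastic, since depth noise grows quadratically with distance while inverse-depth noise stays nearly constant.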


Author(s):  
S. Wei ◽  
B. Li ◽  
Z. Guo ◽  
S. Guo ◽  
L. Cheng

Abstract. With urbanization, building structures have become increasingly complex, and various moving objects, such as human beings, robots, and unmanned aerial vehicles, travel through indoor and outdoor 3D space, which places higher demands on accurate search and localization in such spaces. At present, most spatial localization methods for indoor entities rely on 2D maps. However, the indoor environment is a complex 3D space, and a 2D map cannot accurately display the 3D spatial position of an entity, which makes searching and locating in complex environments difficult. How to quickly and effectively carry out spatial location queries in complex indoor environments has therefore become an urgent problem. Taking the library of the Beijing University of Civil Engineering and Architecture as an example, this paper acquires the indoor 3D information of the library based on SLAM, processes and publishes the acquired 3D information on IndoorViewer, and uses its API in the book retrieval system. Finally, a book retrieval and location system based on real-scene 3D is completed.



2020 ◽  
Vol 10 (21) ◽  
pp. 7814
Author(s):  
Ladislav Karrach ◽  
Elena Pivarčiová ◽  
Pavol Bozek

QR (Quick Response) codes are one of the most famous types of two-dimensional (2D) matrix barcodes, which are the descendants of the well-known 1D barcodes. Mobile robots moving in a given operational space can use information and landmarks from the environment for navigation, and such information may be provided by QR Codes. We propose an algorithm that localizes a QR Code in an image in a few sequential steps. We start with image binarization, then continue with QR Code localization, where we utilize the characteristic Finder Patterns located in three corners of a QR Code, and finally we identify the perspective distortion. The presented algorithm is able to deal with a damaged Finder Pattern, works well for low-resolution images, and is computationally efficient.
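The Finder Pattern search mentioned above typically scans image rows for runs of dark and light pixels matching the 1:1:3:1:1 module ratio that the QR Code standard prescribes for the three corner patterns. A minimal sketch of that ratio test (the function name and the tolerance value are assumptions, not taken from the paper):

```python
def is_finder_pattern(run_lengths, tolerance=0.5):
    """Check whether five consecutive dark/light run lengths along a
    scan line match the 1:1:3:1:1 module ratio of a QR Code Finder
    Pattern, within +/- tolerance modules per run."""
    if len(run_lengths) != 5 or min(run_lengths) <= 0:
        return False
    module = sum(run_lengths) / 7.0            # pattern is 7 modules wide
    expected = [1, 1, 3, 1, 1]
    return all(abs(run - e * module) < tolerance * module
               for run, e in zip(run_lengths, expected))
```

Candidate rows and columns whose runs pass this test are then cross-checked, and the three confirmed corner patterns give the anchor points from which the perspective distortion of the code can be estimated.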


Author(s):  
T. Luhmann

This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed and must be compensated for in high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014), extended with new experimental investigations of the effect on length measurement errors.
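The eccentricity effect can be reproduced numerically with basic conic algebra: a circle in a world plane maps through the plane-to-image homography H to an image conic H^-T C H^-1, whose centre generally differs from the projection of the true circle centre. The following is an illustrative sketch, not the paper's simulation setup; the function name and the example homographies are assumptions.

```python
import numpy as np

def projected_ellipse_center(H, r=1.0):
    """Project a circle of radius r (centred at the origin of its
    plane) through the plane-to-image homography H, and return the
    centre of the resulting image ellipse."""
    # Circle x^2 + y^2 - r^2 = 0 as a 3x3 conic matrix
    C = np.diag([1.0, 1.0, -r * r])
    Hin = np.linalg.inv(H)
    Q = Hin.T @ C @ Hin                        # image conic
    # Conic centre: gradient of the quadratic form vanishes there
    A = Q[:2, :2]
    b = -Q[:2, 2]
    return np.linalg.solve(A, b)
```

For a fronto-parallel circle the ellipse centre and the projected circle centre coincide; as soon as the target plane is tilted, the two separate, which is exactly the eccentricity a best-fit ellipse measurement picks up.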


Perception ◽  
10.1068/p3261 ◽  
2002 ◽  
Vol 31 (9) ◽  
pp. 1047-1059 ◽  
Author(s):  
Craig W Sauer ◽  
Myron L Braunstein ◽  
Asad Saidpour ◽  
George J Andersen

The effects of regions with local linear perspective on judgments of the depth separation between two objects in a scene were investigated for scenes consisting of a ground plane, a quadrilateral region, and two poles separated in depth. The poles were either inside or outside the region. Two types of displays were used: motion-parallax dot displays, and a still photograph of a real scene on which computer-generated regions and objects were superimposed. Judged depth separations were greater for regions with greater linear perspective, both for objects inside and outside the region. In most cases, the effect of the region's shape was reduced for objects outside the region. Some systematic differences were found between the two types of displays. For example, adding a region with any shape increased judged depth in motion-parallax displays, but only high-perspective regions increased judged depth in real-scene displays. We conclude that depth information present in local regions affects perceived depth within the region, and that these effects propagate, to a lesser degree, outside the region.

