POSE ESTIMATION AND MAPPING USING CATADIOPTRIC CAMERAS WITH SPHERICAL MIRRORS

Author(s):  
Grigory Ilizirov ◽  
Sagi Filin

Catadioptric cameras have the advantage of broadening the field of view and revealing otherwise occluded object parts. However, they differ geometrically from standard central-perspective cameras because light reflection from the mirror surface alters the collinearity relation and introduces severe non-linear distortions of the imaged scene. To accommodate these effects, we present in this paper a novel model for pose estimation and reconstruction when imaging through spherical mirrors. We derive a closed-form equivalent of the collinearity principle via which we estimate the system's parameters. Our model yields a resection-like solution which can be developed into a linear one. We show that accurate estimates can be derived with only a small set of control points. Analysis shows that the control configuration in the orientation scheme is rather flexible and that high levels of accuracy can be reached in both pose estimation and mapping. Clearly, the ability to model objects which fall outside the immediate camera field of view offers an appealing means to supplement 3-D reconstruction and modeling.
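The altered collinearity stems from a specular bounce off the spherical mirror. A minimal numerical sketch (our own illustration in Python with NumPy, not the authors' closed-form model) traces a ray to the sphere and applies the reflection law:

```python
import numpy as np

def reflect_off_sphere(origin, direction, center, radius):
    """Intersect a ray with a sphere and return the reflection point
    and the mirrored ray direction (specular reflection law)."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    # Solve |origin + t*d - center|^2 = radius^2 for the nearest hit (a = 1).
    b = 2.0 * d.dot(oc)
    c = oc.dot(oc) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the mirror
    t = (-b - np.sqrt(disc)) / 2.0       # nearest intersection
    hit = origin + t * d
    n = (hit - center) / radius          # outward surface normal
    r = d - 2.0 * d.dot(n) * n           # reflected direction
    return hit, r
```

For example, a ray fired from (0, 0, 5) straight down the axis at a unit sphere centered at the origin hits (0, 0, 1) and is reflected straight back, which is exactly the behavior a single-viewpoint perspective model cannot express for off-axis rays.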



2020 ◽  
Vol 86 (1) ◽  
pp. 33-44
Author(s):  
Sagi Filin ◽  
Grigory Ilizirov ◽  
Bashar Elnashef

Catadioptric cameras broaden the field of view and reveal otherwise occluded object parts. They differ geometrically from central-perspective cameras because of light reflection from the mirror surface. To handle these effects, we present new pose-estimation and reconstruction models for imaging through spherical mirrors. We derive a closed-form equivalent to the collinearity principle via which three methods are established to estimate the system parameters: a resection-based one, a trilateration-based one that introduces novel constraints that enhance accuracy, and a direct and linear transform-based one. The estimated system parameters exhibit improved accuracy compared to the state of the art, and analysis shows intrinsic robustness to the presence of a high fraction of outliers. We then show that 3D point reconstruction can be performed at accurate levels. Thus, we provide an in-depth look into the geometrical modeling of spherical catadioptric systems and practical enhancements of accuracies and requirements to reach them.
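The third, direct-linear-transform-based method can be illustrated generically. The sketch below is our own example for a plain perspective projection, not the authors' mirror-specific formulation: it recovers a 3×4 projection matrix from 3D-2D correspondences by homogeneous least squares (SVD null space).

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 point correspondences
    via the direct linear transform (homogeneous least squares)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two rows of the design matrix.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)   # null-space vector = stacked rows of P
```

The recovered matrix is defined up to scale; reprojecting the input points through it reproduces the observed image coordinates.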


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4008
Author(s):  
Xuanrui Gong ◽  
Yaowen Lv ◽  
Xiping Xu ◽  
Yuxuan Wang ◽  
Mengdi Li

The omnidirectional camera broadens the field of view, realizing 360° imaging in the horizontal direction. Because of light reflection from the mirror surface, the collinearity relation is altered and the imaged scene suffers severe nonlinear distortions, which makes pose estimation for the omnidirectional camera more difficult. To solve this problem, we derive the mapping from the omnidirectional camera to a traditional camera and propose a linear imaging model for the omnidirectional camera. Based on this linear imaging model, we improve the EPnP algorithm to compute the omnidirectional camera pose. To validate the proposed solution, we conducted simulations and physical experiments. The results show that the algorithm performs well in resisting noise.
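The abstract does not reproduce the derived omnidirectional-to-perspective mapping. A common way to express such a mapping is the unified sphere model: project the 3D point onto a unit sphere, then apply a perspective projection from a point shifted by a mirror parameter ξ along the axis. The sketch below is our own illustration of that standard model, not the authors' exact derivation:

```python
import numpy as np

def unified_project(X, xi):
    """Unified central catadioptric projection: normalize the 3D point
    onto the unit sphere, then project perspectively from a center
    shifted by xi along the optical axis."""
    s = X / np.linalg.norm(X)
    return s[:2] / (s[2] + xi)

def unified_unproject(m, xi):
    """Lift a normalized image point back to the unit ray (inverse model)."""
    u, v = m
    r2 = u * u + v * v
    # Closed-form scale factor of the inverse sphere projection.
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    return np.array([eta * u, eta * v, eta - xi])
```

Projecting a point and lifting it back returns exactly the unit direction of the original point, which is the property a linearized perspective-equivalent model relies on.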


2021 ◽  
Vol 19 (1) ◽  
pp. 643-662
Author(s):  
Zhiqiang Wang ◽  
Jinzhu Peng ◽  
Shuai Ding

In this paper, a novel bio-inspired trajectory planning method is proposed for robotic systems based on an improved bacterial foraging optimization algorithm (IBFOA) and an improved intrinsic Tau jerk guidance strategy (named Tau-J*). An adaptive factor and an elite-preservation strategy are employed to facilitate the IBFOA, and the improved Tau-J* with a higher-order intrinsic guidance movement is used to avoid nonzero initial and final jerk, so as to overcome the computational burden and unsmooth-trajectory problems of the optimization algorithm and traditional interpolation algorithms. The IBFOA is utilized to determine a small set of optimal control points, and Tau-J* is then invoked to generate smooth trajectories between the control points. Finally, simulation results demonstrate the stability, optimality, and rapidity of the proposed bio-inspired trajectory planning method.
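For readers unfamiliar with the base algorithm, a chemotaxis-only sketch of plain bacterial foraging optimization is shown below (our own minimal illustration; the authors' adaptive factor, elite preservation, and the reproduction/elimination phases of full BFO are omitted):

```python
import numpy as np

def bfo_minimize(cost, dim, n_bacteria=20, n_chem=50, step=0.1, seed=0):
    """Chemotaxis-only bacterial foraging sketch: each bacterium tumbles
    to a random direction and keeps swimming while the cost improves."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, (n_bacteria, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(n_chem):
        for i in range(n_bacteria):
            d = rng.normal(size=dim)
            d /= np.linalg.norm(d)            # tumble: pick a random direction
            for _ in range(4):                # bounded swim length
                trial = pop[i] + step * d
                f = cost(trial)
                if f >= fit[i]:
                    break                     # stop swimming when no improvement
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a simple quadratic cost the population settles near the optimum to within roughly the fixed step size, which is exactly the resolution limit an adaptive step factor is meant to overcome.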


Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose-estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view and because of rapid camera motion; further, the pose estimate was often biased by incorrect feature matches. This work proposes a solution that improves ViSP's pose-estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Francisco Amorós ◽  
Luis Payá ◽  
Oscar Reinoso ◽  
Walterio Mayol-Cuevas ◽  
Andrew Calway

In this work we present a topological map-building and localization system for mobile robots based on the global appearance of visual information. We include a comparison and analysis of global-appearance techniques applied to wide-angle scenes in retrieval tasks. Next, we define a multiscale analysis, which improves the association between images and permits extracting topological distances. Then, a topological map-building algorithm is proposed. At first, the algorithm has information only of some isolated positions of the navigation area, in the form of nodes. Each node is composed of a collection of images that covers the complete field of view from a certain position. The algorithm solves the node retrieval and estimates the nodes' spatial arrangement, using the visual information captured along routes that cover the navigation area. As a result, the algorithm builds a graph (the map) that reflects the distribution and adjacency relations between nodes. After the map building, we also propose a route-path estimation system that takes advantage of the multiscale analysis: the accuracy of the pose estimation is not limited to the node locations but extends to intermediate positions between them. The algorithms have been tested using two different databases captured in real indoor environments under dynamic conditions.
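The node-retrieval step can be sketched with the simplest possible global-appearance descriptor, a block-averaged, normalized image vector compared by Euclidean distance (our own stand-in for the holistic descriptors the paper compares; function names are hypothetical):

```python
import numpy as np

def global_descriptor(image, size=(8, 8)):
    """Reduce a grayscale image to a coarse global-appearance vector
    by block-averaging, then normalize it to unit length."""
    h, w = image.shape
    bh, bw = h // size[0], w // size[1]
    d = image[:bh * size[0], :bw * size[1]] \
        .reshape(size[0], bh, size[1], bw).mean(axis=(1, 3)).ravel()
    return d / (np.linalg.norm(d) + 1e-12)

def retrieve_node(query, node_images):
    """Return the index of the node image whose descriptor is closest."""
    q = global_descriptor(query)
    dists = [np.linalg.norm(q - global_descriptor(img)) for img in node_images]
    return int(np.argmin(dists))
```

A query image perturbed by mild noise is still matched to its source node, since block-averaging suppresses pixel-level variation while preserving the scene's coarse appearance.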


2003 ◽  
Vol 15 (3) ◽  
pp. 293-303
Author(s):  
Haiquan Yang ◽  
Nobuyuki Kita ◽  
Yasuyo Kita

A method is proposed to correct the initial position and pose estimates of a camera head by aligning a 3D model of its surrounding environment with an observed 2D image captured through a foveated wide-angle lens. Because of the lens's wide field of view, the algorithm can converge even when the initial error is large, and the precision of the result is high because the resolution of the lens's fovea is high.


2015 ◽  
Vol 782 ◽  
pp. 261-270
Author(s):  
Jin Bo Liu ◽  
Gu Can Long ◽  
Xin Li

Pose estimation is a thoroughly studied problem in computer vision, but in some realistic scenarios the reference points cannot lie within the camera's field of view (they are non-intervisible). In this article, a planar mirror is placed in front of the body, allowing the camera to observe the reflections of reference points that characterize the body coordinate frame. From these observations we form an equation system relating the camera and body coordinate frames. We propose an unattached, linear approach to solve and optimize the camera-to-body transformation without any prior information about the planar-mirror configuration. Additionally, we analyze the sensitivity of our algorithm. We present a number of simulations and experiments showing that our formulation significantly improves accuracy and robustness.
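The geometric core of such mirror-based setups is reflection through the mirror plane. A minimal sketch (our own, not the paper's linear solver) builds the 4×4 homogeneous reflection for the plane n·x + d = 0; note that reflecting twice is the identity, which is what makes the observed virtual points consistent with a rigid transformation:

```python
import numpy as np

def mirror_reflection_matrix(n, d):
    """4x4 homogeneous reflection through the plane n.x + d = 0,
    where n is a unit normal. Satisfies M @ M = I."""
    n = np.asarray(n, float)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # Householder rotation part
    M[:3, 3] = -2.0 * d * n                        # translation part
    return M
```

For the plane z = 2 (n = (0, 0, 1), d = -2), the point (1, 1, 5) maps to (1, 1, -1), its mirror image across the plane.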


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Sheng Liu ◽  
Yuan Feng ◽  
Kang Shen ◽  
Yangqing Wang ◽  
Shengyong Chen

Estimating the real-time pose of a free-flight aircraft in a complex wind-tunnel environment is extremely difficult. Because of the highly dynamic testing environment, complicated illumination conditions, and the unpredictable motion of the target, most general pose-estimation methods fail. In this paper, we introduce a cross-field-of-view (FOV) real-time pose-estimation system, which provides high-precision pose estimation of the free-flight aircraft in the wind-tunnel environment. Multiview live RGB-D streams are used as input to ensure the measurement area is fully covered. First, a multimodal initialization method is developed to measure the spatial relationship between the RGB-D cameras and the aircraft. Based on all the input multimodal information, a so-called cross-FOV model is proposed to recognize the dominating sensor and automatically extract the foreground region. Second, we develop an RGB-D-based pose-estimation method for a single target, by which the 3D sparse points and the pose of the target are obtained simultaneously in real time. Many experiments have been conducted, and an RGB-D image simulation based on 3D modeling was implemented to verify the effectiveness of our algorithm. Experimental results from both real and simulated scenes demonstrate the effectiveness of our method.
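Any RGB-D pose pipeline of this kind rests on lifting the depth image into a 3D point cloud with the sensor's pinhole intrinsics. The sketch below is a standard back-projection (our own illustration; the paper's multimodal pipeline is not reproduced):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth image (meters) to an HxWx3 point cloud using
    pinhole intrinsics, with z pointing toward the scene."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

A pixel at the principal point maps straight onto the optical axis at its measured depth; all other pixels fan out according to the focal lengths.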

