catadioptric camera
Recently Published Documents


TOTAL DOCUMENTS

56
(FIVE YEARS 4)

H-INDEX

9
(FIVE YEARS 0)

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4719
Author(s):  
Huei-Yung Lin ◽  
Yuan-Chi Chung ◽  
Ming-Liang Wang

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model of the image projection is derived through catadioptric camera calibration. The geometric properties of this projection model are used to obtain the intersections of vertical lines with the ground plane in the scene. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used to compute depth. The 3D coordinates of the base points on the ground are calculated from consecutive image frames, and the motion trajectory is then derived by computing the rotation and translation between robot positions. We also develop a feature correspondence matching algorithm based on the invariance of structure in 3D space. Experimental results on real-scene images demonstrate the feasibility of the proposed method for mobile robot localization.
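The two geometric steps named in the abstract can be illustrated with a minimal sketch: lifting a pixel to the unit viewing sphere under a unified sphere model and intersecting the resulting ray with a known ground plane. The mirror parameter xi, the intrinsics K, the pixel, and the plane parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative unified-sphere-model parameters (assumed, not the paper's):
xi = 0.96                                  # mirror parameter
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0,   0.0,   1.0]])        # intrinsics of the perspective part

def backproject_to_sphere(pixel, K, xi):
    """Lift an image point to the unit viewing sphere (Geyer/Mei model)."""
    x, y, _ = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])   # point on the sphere

def intersect_ground(ray, n, d):
    """Intersect the viewing ray t*ray with the plane n.X + d = 0."""
    t = -d / (n @ ray)
    return t * ray

# Example: base point of a vertical line, ground plane assumed 0.3 m below the
# single effective viewpoint with the camera y axis pointing down.
ray = backproject_to_sphere((350.0, 400.0), K, xi)
base_point = intersect_ground(ray, n=np.array([0.0, 1.0, 0.0]), d=-0.3)
print(base_point)
```

Once such base points are recovered in two consecutive frames, the rotation and translation between the robot positions can be estimated from the matched 3D coordinates, as the abstract describes.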


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yue Zhao ◽  
Xin Yang

This paper presents an approach for calibrating omnidirectional single-viewpoint sensors using the central catadioptric projection properties of parallel lines. Single-viewpoint sensors are widely used in robot navigation and driverless cars, so a high degree of calibration accuracy is needed. In the unit viewing sphere model of central catadioptric cameras, a line in three-dimensional space projects to a great circle, so the projections of a group of parallel lines intersect only at the endpoints of a diameter of the great circle. Based on this property, when multiple groups of parallel lines are available, a pair of orthogonal directions can be determined from a rectangle constructed by two groups of parallel lines in different directions. When only a single group of parallel lines is available, the diameter and the tangents at its endpoints determine a pair of orthogonal directions in the plane containing the great circle. The intrinsic parameters of the camera are then obtained from the orthogonal vanishing points in the central catadioptric image plane. An optimization algorithm for line-image fitting based on the properties of antipodal points is also proposed. The calibration method was validated through simulations and real experiments with a catadioptric camera.
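As a rough illustration of how orthogonal vanishing points constrain the intrinsics, the sketch below uses the generic perspective-style constraint v1ᵀωv2 = 0 on the image of the absolute conic ω = K⁻ᵀK⁻¹; the paper's own derivation on the unit viewing sphere may differ, and all names and values here are assumed for the example (zero skew, unit aspect ratio, synthetic camera).

```python
import numpy as np

def calibrate_from_orthogonal_vps(vp_pairs):
    """Recover K from vanishing points of orthogonal direction pairs via the
    image of the absolute conic w = K^-T K^-1, using v1^T w v2 = 0.
    Assumes zero skew and unit aspect ratio: w = [[a,0,b],[0,a,c],[b,c,d]]."""
    rows = []
    for v1, v2 in vp_pairs:
        x1, y1, z1 = v1
        x2, y2, z2 = v2
        rows.append([x1 * x2 + y1 * y2,      # coefficient of a
                     x1 * z2 + z1 * x2,      # coefficient of b
                     y1 * z2 + z1 * y2,      # coefficient of c
                     z1 * z2])               # coefficient of d
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    a, b, c, d = vt[-1]                      # null vector = conic parameters
    w = np.array([[a, 0, b], [0, a, c], [b, c, d]])
    if w[0, 0] < 0:                          # fix the overall sign
        w = -w
    L = np.linalg.cholesky(w)                # w = L L^T, hence K^-1 = L^T
    K = np.linalg.inv(L.T)
    return K / K[2, 2]

# Toy check with a synthetic camera and three orthogonal direction pairs
# (all values illustrative, not from the paper).
K_true = np.array([[300.0, 0, 320], [0, 300.0, 240], [0, 0, 1]])
dir_pairs = [((1, 0, 1), (-1, 0, 1)), ((0, 1, 2), (0, -2, 1)), ((1, 1, 1), (1, -2, 1))]
vps = [(K_true @ np.array(d1, float), K_true @ np.array(d2, float))
       for d1, d2 in dir_pairs]
print(calibrate_from_orthogonal_vps(vps))    # should reproduce K_true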


Author(s):  
ZENDI ZAKARIA RAGA PERMANA ◽  
SUSIJANTO TRI RASMANA ◽  
IRA PUSPASARI

ABSTRACT: Artificial intelligence can now be applied in robotics, in particular to control robot movements based on image processing. This research develops a mobile robot equipped with a catadioptric camera providing a 360° field of view. The captured images are converted from RGB to HSV and then refined with morphological operations. The relation between the distance read by the camera (in pixels) and the actual distance (in cm) is computed using the Euclidean distance; this value serves as the distance feature used to train the system. The system built in this study was trained for 1,000,000 iterations, reaching a linearity of R² = 0.9982 and a prediction accuracy of 99.03%.
Keywords: Robot, HSV, Euclidean Distance, Catadioptric Camera, Artificial Neural Network
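The image-processing front end described in the abstract (RGB-to-HSV conversion, morphological clean-up, and a pixel-space Euclidean distance used as the regression feature) might look roughly like the sketch below. The threshold and kernel values, the synthetic target, and the assumed mirror centre are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

# Stand-in omnidirectional frame with a synthetic green target (assumption).
frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (400, 300), 30, (0, 200, 0), -1)

# RGB (BGR in OpenCV) -> HSV, then colour segmentation.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([35, 80, 80]), np.array([85, 255, 255]))

# Morphological clean-up: remove speckle, then fill small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Euclidean distance (pixels) from the assumed mirror centre to the target
# centroid; a trained neural network maps this value to centimetres.
ys, xs = np.nonzero(mask)
centre = np.array([frame.shape[1] / 2.0, frame.shape[0] / 2.0])
target = np.array([xs.mean(), ys.mean()])
pixel_distance = np.linalg.norm(target - centre)
print(pixel_distance)
```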


2021 ◽  
pp. 141-145
Author(s):  
Srikumar Ramalingam
Keyword(s):  

Author(s):  
M. Khurana ◽  
C. Armenakis

This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras are used for navigation and mapping of robotic platforms owing to their wide, potentially 360°, field of view, which allows the system to "see and move" more freely in the navigation space. A catadioptric camera is a low-cost system consisting of a mirror and any perspective camera. A platform was constructed to combine the mirror and camera into a catadioptric system, and a calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived from conditions based on the reflective properties of the mirror, and the obtained platform positions were then used to map the environment via epipolar geometry. Experiments were performed to test the mathematical models and the location and mapping accuracies achieved by the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
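The "reflective properties of the mirror" mentioned above amount to the specular reflection law: a camera viewing ray hitting the mirror is reflected about the local surface normal, and the reflected ray is the line of sight into the scene. A minimal sketch follows; the ray direction, mirror-point normal, and the assumption that the camera-to-mirror pose is already calibrated are all illustrative, not the paper's specific model.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of a ray direction d about the surface normal n:
    d_r = d - 2 (n . d) n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * (n @ d) * n

# Hypothetical viewing ray from the perspective camera and the mirror normal
# at the hit point (both assumed known from the mirror shape and the
# camera-to-mirror calibration). The reflected ray is what the localization
# and epipolar-geometry mapping steps work with.
d_cam = np.array([0.1, 0.2, 1.0])
d_cam /= np.linalg.norm(d_cam)
n_mirror = np.array([0.0, 0.3, -1.0])
d_scene = reflect(d_cam, n_mirror)
print(d_scene)
```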


2017 ◽  
Vol 22 (S1) ◽  
pp. 781-793
Author(s):  
Huixian Duan ◽  
Yihong Wu ◽  
Lei Song ◽  
Jun Wang ◽  
Na Liu
Keyword(s):  

2015 ◽  
Vol 69 (2) ◽  
pp. 391-413 ◽  
Author(s):  
Krzysztof Naus ◽  
Mariusz Waz

This paper summarises research that evaluates the precision of determining a ship's position by comparing an omnidirectional map to a visual image of the coastline. The first part of the paper describes the equipment and associated software employed in obtaining such estimates. The system uses a spherical catadioptric camera to collect positional data that is analysed by comparing it to spherical images from a digital navigational chart. Methods of collecting positional data from a ship are described, and the algorithms used to determine the statistical precision of such position estimates are explained. The second section analyses the results of research to determine the precision of position estimates based on this system. It focuses on average error values and distance fluctuations of position estimates from referential positions, and describes the primary factors influencing the correlation between spherical map images and coastline visual images.
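The paper's own statistical algorithms are not reproduced here, but the kind of summary it reports (average error and fluctuation of position estimates about reference positions) can be sketched as follows; the function name, the choice of statistics, and the sample fixes are assumptions for illustration only.

```python
import numpy as np

def position_error_stats(estimates, references):
    """Per-fix error (same units as the input coordinates) between estimated
    and reference positions given as (x, y) pairs, plus summary statistics."""
    errors = np.linalg.norm(np.asarray(estimates, float) -
                            np.asarray(references, float), axis=1)
    return {
        "mean_error": errors.mean(),
        "std_error": errors.std(ddof=1),
        "rmse": np.sqrt((errors ** 2).mean()),
        "max_error": errors.max(),
    }

# Hypothetical fixes in a local metric frame (values illustrative only).
est = [(10.2, 4.9), (11.1, 5.3), (12.0, 5.8)]
ref = [(10.0, 5.0), (11.0, 5.5), (12.1, 5.6)]
print(position_error_stats(est, ref))
```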


2014 ◽  
Vol 25 (8) ◽  
pp. 085005 ◽  
Author(s):  
Zhiyu Xiang ◽  
Xing Dai ◽  
Yanbing Zhou ◽  
Xiaojin Gong

2014 ◽  
pp. 85-89
Author(s):  
Srikumar Ramalingam
Keyword(s):  
