A Parallel Programming Approach for Estimation of Depth in World Coordinate System Using Single Camera

Author(s):  
C. Rashmi ◽  
G. Hemantha Kumar

2011 ◽
Vol 50-51 ◽  
pp. 468-472
Author(s):  
Chun Feng Liu ◽  
Shan Shan Kong ◽  
Hai Ming Wu

Digital cameras are widely used in road transportation, railway transportation, and security systems. To determine the position of a digital camera in these fields, this paper proposes a geometric calibration method based on feature-point extraction from an arbitrary target. The paper first defines four coordinate systems, among them the world coordinate system and the camera coordinate system, whose origin is the camera's optical center. By transforming the coordinates of the same point between different coordinate systems, the relationship between the world coordinate system and the camera coordinate system is determined, and from it the camera's internal and external parameters, expressed as a transformation matrix and a translation vector. On this basis a single-camera localization model is established. According to the model, the camera's external parameters are used to obtain the image-plane coordinates of the target circle's center point.
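The world-to-camera-to-image chain described above can be sketched as follows. This is a minimal illustration of the standard pinhole model, not the paper's calibration procedure; all numeric values (intrinsics, pose) are assumptions made up for the example.

```python
import numpy as np

# Hypothetical intrinsic matrix K and extrinsics (R, t); the values are
# illustrative assumptions, not parameters estimated in the paper.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # world-to-camera rotation
t = np.array([0.0, 0.0, 5.0])          # camera 5 m in front of the origin

def project(p_world):
    """Map a 3-D world point to pixel coordinates (u, v)."""
    p_cam = R @ p_world + t            # world frame -> camera frame
    uvw = K @ p_cam                    # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]            # perspective divide

u, v = project(np.array([0.0, 0.0, 0.0]))
print(u, v)   # -> 320.0 240.0 (a point on the optical axis maps to (cx, cy))
```

A full calibration would run this model in reverse, solving for K, R, and t from observed correspondences between world points and their pixels.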


Solar Physics ◽  
2009 ◽  
Vol 261 (1) ◽  
pp. 215-222 ◽  
Author(s):  
W. T. Thompson ◽  
K. Wei

1999 ◽  
Author(s):  
Chunhe Gong ◽  
Jingxia Yuan ◽  
Jun Ni

Abstract: Robot calibration plays an increasingly important role in manufacturing. For robot calibration on the manufacturing floor, it is desirable that the calibration technique be easy and convenient to implement. This paper presents a new self-calibration method to calibrate and compensate for robot system kinematic errors. Compared with traditional calibration methods, this method has several unique features. First, it is not necessary to apply an external measurement system to measure the robot end-effector position for kinematic identification, since the robot measurement system has a sensor as its integral part. Second, this self-calibration is based on distance measurement rather than absolute position measurement for kinematic identification; therefore the calibration of the transformation from the world coordinate system to the robot base coordinate system, known as base calibration, is not necessary. These features not only greatly facilitate robot system calibration but also shorten the error propagation chain and therefore increase the accuracy of parameter estimation. An integrated calibration system is designed to validate the effectiveness of this calibration method. Experimental results show that after calibration there is a significant improvement in robot accuracy over a typical robot workspace.
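The reason distance measurements make base calibration unnecessary can be shown in a few lines: distances between end-effector positions are invariant under any rigid world-to-base transform, so that transform drops out of the identification problem. This is a generic numerical illustration, not the paper's algorithm; the poses are random.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((4, 3))                 # end-effector positions, base frame

# Apply an arbitrary rigid world-to-base transform (rotation about z
# plus a translation); the specific values do not matter.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
q = p @ Rz.T + np.array([1.0, -2.0, 0.5])   # same positions, world frame

d_base  = np.linalg.norm(p[1:] - p[0], axis=1)   # distances in base frame
d_world = np.linalg.norm(q[1:] - q[0], axis=1)   # distances in world frame
print(np.allclose(d_base, d_world))   # -> True: distances are frame-invariant
```

Because the distance residuals used for identification are identical in either frame, only the kinematic error parameters remain to be estimated.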


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2265
Author(s):  
Jung Hyun Lee ◽  
Dong-Wook Lee

An around view monitoring (AVM) system acquires the front, rear, left, and right-side information of a vehicle using four cameras and transforms the four images into one image coordinate system to monitor the vehicle's surroundings with a single image. Conventional AVM calibration uses maximum likelihood estimation (MLE) to determine the parameters that transform the four captured images into one AVM image. The MLE requires reference data in both the image coordinate system and the world coordinate system to estimate these parameters. In conventional AVM calibration, many aligned calibration boards are placed around the vehicle and measured to extract the reference sample data. However, accurately placing and measuring calibration boards around a vehicle is an exhaustive procedure. To remedy this problem, we propose a novel AVM calibration method that requires only four randomly placed calibration boards, estimating the location of each board. First, we define the AVM errors and determine the parameters that minimize the error in estimating each board's location. We then evaluate the accuracy of the proposed method through experiments using a real-sized vehicle and an electric vehicle for children, showing that the proposed method can generate an AVM image similar to that of the conventional AVM calibration method regardless of the vehicle's size.
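The core geometric step in any board-based calibration of this kind is fitting the ground-plane homography that maps a board's world coordinates to pixels. The following direct linear transform (DLT) sketch illustrates that step only; the point correspondences are made up, and this is not the paper's MLE pipeline.

```python
import numpy as np

def dlt_homography(world, image):
    """Fit H with image ~ H @ world (homogeneous) from >= 4 point pairs."""
    A = []
    for (X, Y), (u, v) in zip(world, image):
        A.append([-X, -Y, -1,  0,  0,  0, u * X, u * Y, u])
        A.append([ 0,  0,  0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)            # right null vector of A
    return H / H[2, 2]

# Hypothetical correspondences: board corners in metres vs. observed pixels.
world = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(100, 100), (300, 120), (280, 320), (90, 310)]
H = dlt_homography(world, image)

# Reprojection check: H maps the first world corner back to its pixel.
p = H @ np.array([0.0, 0.0, 1.0])
print(np.round(p[:2] / p[2]))   # -> [100. 100.]
```

With four boards, one such homography per camera (composed with each board's estimated pose) is what stitches the four views into one AVM image plane.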


2013 ◽  
Vol 274 ◽  
pp. 336-339
Author(s):  
Y. Chen ◽  
M.Y. He

The lane model, the calculation of lane geometric structure, and the calculation of the vehicle's deviation angle and position are crucial parts of automatic driving for intelligent vehicles. In this paper, we introduce a lane model and calculation equations for the lane geometric structure and for the vehicle's deviation angle and position in the lane. First, we establish the world coordinate system and derive the lane boundary equation based on the actual lane alignment. Second, we derive the lane boundary equation in the camera coordinate system through coordinate transformation. We then build up the image coordinate system and the pixel coordinate system and derive the lane projective parameter model. Finally, we derive the calculation equations for the lane geometric structure and the vehicle's deviation angle and position in the lane. Because we introduce the camera roll angle, the lane curvature change rate, and reasonable assumptions into the derivation, the lane model and the calculations are more general and accurate.
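A common way to model a lane boundary with a curvature change rate, as the abstract mentions, is a clothoid whose curvature varies linearly with arc length, c(l) = c0 + c1·l. The sketch below numerically integrates such a boundary in the world frame; the model form and all coefficients are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

c0, c1 = 0.01, 0.001          # curvature (1/m) and its change rate (1/m^2); assumed
l = np.linspace(0.0, 50.0, 501)        # arc length along the lane (m)
dl = l[1] - l[0]

heading = c0 * l + 0.5 * c1 * l**2     # heading = integral of curvature c(l)
x = np.cumsum(np.cos(heading)) * dl    # world-frame boundary coordinates,
y = np.cumsum(np.sin(heading)) * dl    # integrated numerically

print(round(float(y[-1]), 2))          # lateral offset accumulated over 50 m
```

Projecting these (x, y) points through the camera model (including the roll angle) would yield the boundary in image coordinates, which is the chain of transformations the paper derives analytically.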


2016 ◽  
Vol 42 (6) ◽  
pp. 361-366 ◽  
Author(s):  
E. V. Shal’nov ◽  
A. D. Gringauz ◽  
A. S. Konushin

Author(s):  
Brian Stancil ◽  
Hsiang-Wen Hsieh ◽  
Tsuhan Chen ◽  
Hung-Hsiu Yu

Localization is one of the critical issues in the field of multi-robot navigation. With an accurate estimate of the robot pose, robots are able to navigate their environment autonomously with the aid of flexible path planning. In this paper, the infrastructure of a Distributed Vision System (DVS) for multi-robot localization is presented. The main difference between traditional DVSs and the proposed one is that multiple overhead cameras can simultaneously localize a network of robots. The proposed infrastructure is composed of a Base Process and a Coordinate Transform Process. The Base Process receives images from various cameras mounted in the environment and uses this information to localize multiple robots. The Coordinate Transform Process transforms coordinates from the image reference plane to the world coordinate system. ID tags are used to locate each robot within the overhead image, and the cameras' intrinsic and extrinsic parameters are used to estimate a global pose for each robot. The presented infrastructure was recently implemented on a network of small robot platforms with several overhead cameras mounted in the environment. The results show that the proposed infrastructure can simultaneously localize multiple robots in a global world coordinate system with localization errors within 0.1 meters.
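The image-plane-to-world step such a Coordinate Transform Process performs amounts to intersecting a pixel's viewing ray with the floor plane z = 0, given the camera's intrinsics and extrinsics. The sketch below shows that geometry for a single overhead camera; the camera parameters are assumptions for the example, not the system's actual calibration.

```python
import numpy as np

# Assumed intrinsics and extrinsics: a camera 3 m above the world origin,
# looking straight down (its z-axis points at the floor).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.diag([1.0, -1.0, -1.0])          # world-to-camera rotation
t = -R @ np.array([0.0, 0.0, 3.0])      # from the camera centre at (0, 0, 3)

def pixel_to_world(u, v):
    """Intersect the viewing ray of pixel (u, v) with the floor plane z = 0."""
    ray_cam = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray, camera frame
    ray_world = R.T @ ray_cam            # rotate the ray into the world frame
    origin = -R.T @ t                    # camera centre in world coordinates
    s = -origin[2] / ray_world[2]        # scale that reaches z = 0
    return origin + s * ray_world

# The principal point maps to the floor point directly below the camera.
print(pixel_to_world(320, 240))
```

In the multi-camera setting, each camera applies its own version of this map, and a robot's ID-tag pixel in any overhead view yields its pose in the shared world coordinate system.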


Metals ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1401
Author(s):  
Siyuan Fang ◽  
Xiaowan Zheng ◽  
Gang Zheng ◽  
Boyang Zhang ◽  
Bicheng Guo ◽  
...  

More and more attention has been given in the field of mechanical engineering to a material's R-value, a parameter that characterizes the ability of sheet metal to resist thickness strain. Conventional methods for determining the R-value are based on experiments and an assumption of constant volume. Because the thickness strain cannot be measured directly, the R-value is currently determined from experimentally measured strains in the width and loading directions, combined with the constant-volume assumption, to obtain the thickness strain indirectly. This paper provides an alternative method for determining the R-value without any assumptions. The method is based on a multi-camera DIC system that measures strains in three directions simultaneously. Two sets of stereo-vision DIC measurement systems, each comprising two GigE cameras, are placed on the front and back sides of the sample. A double-sided calibration strategy unifies the world coordinate systems of the front and back DIC measurement systems into one coordinate system, allowing the thickness strain to be measured and the R-value to be calculated directly. The Random Sample Consensus (RANSAC) algorithm is used to eliminate noise in the thickness strain data, resulting in a more accurate R-value measurement.
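The difference between the direct and the conventional route can be written out in a few lines: the R-value is the ratio of width strain to thickness strain, and the conventional approach replaces the unmeasured thickness strain with the constant-volume identity eps_l + eps_w + eps_t = 0. The strain values below are invented for illustration, not measured data from the paper.

```python
# Assumed strains (true logarithmic strains), purely illustrative:
eps_l = 0.20           # loading-direction strain
eps_w = -0.08          # width strain
eps_t_direct = -0.115  # thickness strain, as a double-sided DIC rig could measure

# Direct route: no volume assumption needed.
R_direct = eps_w / eps_t_direct

# Conventional route: infer thickness strain from constant volume.
R_indirect = eps_w / (-(eps_l + eps_w))

print(round(R_direct, 4), round(R_indirect, 4))   # -> 0.6957 0.6667
```

The two estimates differ whenever the material does not deform at exactly constant volume, which is the error source the direct measurement removes.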


2003 ◽  
Vol 36 (5) ◽  
pp. 521-552 ◽  
Author(s):  
Dorit Naishlos ◽  
Joseph Nuzman ◽  
Chau-Wen Tseng ◽  
Uzi Vishkin
