Fast automatic camera network calibration through human mesh recovery

2020 ◽  
Vol 17 (6) ◽  
pp. 1757-1768
Author(s):  
Nicola Garau ◽  
Francesco G. B. De Natale ◽  
Nicola Conci

Abstract Camera calibration is a necessary preliminary step in computer vision for estimating the position of objects in the 3D world. Although the intrinsic camera parameters can easily be computed offline, the extrinsic parameters must be recomputed each time a camera changes its position, preventing fast and dynamic network re-configuration. In this paper we present an unsupervised and automatic framework for the estimation of the extrinsic parameters of a camera network, which leverages optimised 3D human mesh recovery from a single image and does not require additional markers. We show how the real-world positions of the cameras in the network, together with the floor plane, can be retrieved from regular RGB images with only weak prior knowledge of the internal parameters. Our framework can also work with a single camera and in real time, allowing the user to add, re-position, or remove cameras from the network in a dynamic fashion.
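The final step of a framework like this, recovering a camera's extrinsics from 2D-3D correspondences (here, recovered mesh points and their image projections), can be sketched with a plain Direct Linear Transform. This is a generic NumPy illustration, not the authors' optimised mesh-recovery pipeline; the point count and the use of normalized image coordinates are assumptions:

```python
import numpy as np

def estimate_extrinsics(X, x):
    """Estimate [R|t] from n >= 6 non-coplanar 3D points X (n,3) and their
    normalized image projections x (n,2) via the Direct Linear Transform."""
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])          # homogeneous 3D points
    A = np.zeros((2 * n, 12))
    A[0::2, 0:4] = Xh                             # u * (P3.Xh) = P1.Xh
    A[0::2, 8:12] = -x[:, 0:1] * Xh
    A[1::2, 4:8] = Xh                             # v * (P3.Xh) = P2.Xh
    A[1::2, 8:12] = -x[:, 1:2] * Xh
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    P /= np.linalg.norm(P[2, :3])                 # rows of R have unit norm
    if np.linalg.det(P[:, :3]) < 0:               # resolve the sign ambiguity
        P = -P
    U, _, Vt = np.linalg.svd(P[:, :3])            # project onto SO(3)
    return U @ Vt, P[:, 3]
```

The closing SVD step re-orthonormalizes the linear estimate so the returned matrix is a proper rotation.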

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2232
Author(s):  
Antonio Albiol ◽  
Alberto Albiol ◽  
Carlos Sánchez de Merás

Automated fruit inspection using cameras involves the analysis of a collection of views of the same fruit, obtained by rotating the fruit while it is transported. Conventionally, each view is analyzed independently. However, to obtain a global score of the fruit quality, it is necessary to match the defects between adjacent views, both to avoid counting them more than once and to ensure that the whole surface has been examined. To accomplish this goal, this paper estimates the 3D rotation undergone by the fruit using a single camera. A 3D model of the fruit geometry is needed to estimate the rotation; this paper proposes to model the fruit shape as a 3D spheroid. The spheroid size and pose in each view are estimated from the silhouettes of all views. Once the geometric model has been fitted, a single 3D rotation is estimated for each view transition. With all rotations estimated, it is possible to propagate defects to neighboring views or even to build a topographic map of the whole fruit surface, opening the possibility of analyzing a single image (the map) instead of a collection of individual views. A large effort was made to make this method as fast as possible. Execution times are under 0.5 ms per 3D rotation on a standard i7 CPU using a single core.
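The per-transition rotation estimate can be illustrated with the classical Kabsch algorithm, assuming corresponding 3D points on the fitted spheroid surface are available in both views. This NumPy sketch is a generic illustration, not the paper's silhouette-based estimator:

```python
import numpy as np

def estimate_rotation(P, Q):
    """Least-squares rotation R such that Q_i ~= R P_i (Kabsch algorithm).
    P, Q are (n, 3) arrays of corresponding surface points."""
    H = P.T @ Q                                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

The determinant check ensures the result is a proper rotation rather than a reflection, which matters when the point sets are nearly planar.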


2011 ◽  
Vol 50-51 ◽  
pp. 468-472
Author(s):  
Chun Feng Liu ◽  
Shan Shan Kong ◽  
Hai Ming Wu

Digital cameras are widely used in road transportation, railway transportation, and security systems. To determine the position of a digital camera in these applications, this paper proposes a geometric calibration method based on feature-point extraction from an arbitrary target. The method first defines the relevant coordinate systems: the world coordinate system and the camera coordinate system, whose origin is the camera's optical centre. By expressing the same points in both systems, the coordinate transformation relating the world and camera frames is determined; from it, the camera's internal and external parameters (the rotation matrix and the translation vector) are obtained, and a single-camera location model is established. Using this model and the camera's external parameters, the image-plane coordinates of the target's circle centre points are computed.
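The world-to-camera transformation and single-camera projection described above can be sketched with a minimal pinhole model in NumPy; the matrix values in the test below are illustrative:

```python
import numpy as np

def world_to_pixel(Xw, R, t, K):
    """Map a world point into the camera frame, then project it through
    the pinhole model: x_cam = R Xw + t, u ~ K x_cam."""
    Xc = R @ Xw + t           # extrinsic transform (rotation + translation)
    u = K @ Xc                # intrinsic projection
    return u[:2] / u[2]       # perspective division -> pixel coordinates
```

With identity rotation, translation (0, 0, 5), focal length 800 px, and principal point (320, 240), the world origin projects exactly to the principal point.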


ACTA IMEKO ◽  
2018 ◽  
Vol 7 (2) ◽  
pp. 102 ◽  
Author(s):  
Silvio Del Pizzo ◽  
Umberto Papa ◽  
Salvatore Gaglione ◽  
Salvatore Troisi ◽  
Giuseppe Del Core

An autonomous vision-based landing system was designed for a UAS, and its performance was analysed and measured. The system relies on a single camera to determine the UAS position and attitude with respect to a well-defined landing pattern. The developed procedure is based on the photogrammetric space resection solution, which recovers the camera position and attitude from at least three non-collinear reference control points whose coordinates can be measured in the image frame. Five circular coloured targets were placed on a specific landing pattern, and their 2D image coordinates were extracted with a dedicated algorithm. The aim of this work is to compute the precise UAS position and attitude from a single image, in order to achieve a good approach to the landing field. This procedure can be used in addition to, or as a replacement for, GPS tracking, and can be applied when the landing field is movable or located on a moving platform; the UAS follows the landing pattern until the landing phase is complete.
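Space resection can be sketched as Gauss-Newton refinement of the camera pose over the reprojection error of the control points. This NumPy illustration uses an axis-angle pose parametrization and a numeric Jacobian, and assumes a reasonable initial pose; it is a generic sketch, not the paper's specific solution:

```python
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    Km = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * Km + (1 - np.cos(theta)) * Km @ Km

def residuals(p, K, X, uv):
    """Reprojection error of pose p = (axis-angle, translation)."""
    R, t = rodrigues(p[:3]), p[3:]
    Xc = X @ R.T + t
    proj = (K @ Xc.T).T
    return (proj[:, :2] / proj[:, 2:3] - uv).ravel()

def resect(K, X, uv, p0, iters=20, eps=1e-6):
    """Refine the camera pose from >= 3 control points X (n,3) and their
    pixel measurements uv (n,2) by Gauss-Newton with a numeric Jacobian."""
    p = p0.astype(float).copy()
    for _ in range(iters):
        r = residuals(p, K, X, uv)
        J = np.zeros((r.size, 6))
        for j in range(6):                # finite-difference Jacobian
            dp = np.zeros(6)
            dp[j] = eps
            J[:, j] = (residuals(p + dp, K, X, uv) - r) / eps
        p -= np.linalg.lstsq(J, r, rcond=None)[0]
    return p
```

Three points give the minimal 6 equations for 6 unknowns; additional points over-determine the system and improve robustness.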


2019 ◽  
Vol 16 (5) ◽  
pp. 172988141986446
Author(s):  
Xiaojun Wu ◽  
XingCan Tang

Light changes its direction of propagation before entering a camera enclosed in a waterproof housing owing to refraction, which means that perspective imaging models valid in air cannot be used directly underwater. In this article, we propose an accurate binocular stereo measurement system for underwater environments. First, based on the physical underwater imaging model without approximation and on Tsai's calibration method, the proposed system is calibrated to acquire the extrinsic parameters, while the internal parameters can be pre-calibrated in air. Then, based on the calibrated camera parameters, an image correction method is proposed to convert the underwater images into air-equivalent images, so that the epipolar constraint can be used to search for matching points directly. The experimental results show that the proposed method effectively eliminates the effect of refraction in binocular vision and that the measurement accuracy is comparable to that obtained in air.
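The refraction at the flat housing port that the correction method compensates for follows Snell's law, which in vector form can be sketched as below (the choice of refractive indices in the test, air inside the housing into water, is an assumption for illustration):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n
    (pointing back toward the incident medium), going from refractive
    index n1 into n2 (vector form of Snell's law)."""
    cos_i = -d @ n
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n
```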


Author(s):  
Punarjay Chakravarty ◽  
Tom Roussel ◽  
Gaurav Pandey ◽  
Tinne Tuytelaars

Abstract We describe a Deep-Geometric Localizer that is able to estimate the full six degrees-of-freedom (DoF) global pose of the camera from a single image in a previously mapped environment. Our map is a topo-metric one, with discrete topological nodes whose 6DoF poses are known. Each topo-node in our map also comprises a set of points, whose 2D features and 3D locations are stored as part of the mapping process. For the mapping phase, we utilise a stereo camera and a regular stereo visual SLAM pipeline. During the localization phase, we take a single camera image, localize it to a topological node using deep learning, and use a geometric algorithm (PnP) on the matched 2D features (and their 3D positions in the topo map) to determine the full 6DoF globally consistent pose of the camera. Our method decouples the mapping and localization algorithms and sensors (stereo and mono), and allows accurate 6DoF pose estimation in a previously mapped environment using a single camera. With results in simulated and real environments, our hybrid algorithm is particularly useful for autonomous vehicles (AVs) and shuttles that might repeatedly traverse the same route.
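A topo-metric map of this kind might be organised roughly as below. The class layout is illustrative, and cosine similarity over a single global descriptor stands in for the paper's deep-learning retrieval; PnP on the retrieved node's 2D-3D matches would follow:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TopoNode:
    """One node of the topo-metric map: a known 6-DoF pose plus the
    2D features / 3D points stored during the stereo mapping phase."""
    pose: np.ndarray                 # 4x4 homogeneous camera-to-world pose
    descriptor: np.ndarray           # global image descriptor for retrieval
    features_2d: np.ndarray          # (n, 2) keypoint locations
    points_3d: np.ndarray            # (n, 3) triangulated map points

def retrieve_node(nodes, query_desc):
    """Pick the map node whose descriptor best matches the query image
    (cosine similarity as a stand-in for the deep retrieval network)."""
    q = query_desc / np.linalg.norm(query_desc)
    sims = [q @ (n.descriptor / np.linalg.norm(n.descriptor)) for n in nodes]
    return nodes[int(np.argmax(sims))]
```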


2014 ◽  
Vol 496-500 ◽  
pp. 1869-1872
Author(s):  
Ye Tian ◽  
Zhen Wei Wang ◽  
Feng Chen

Human vision is generally regarded as a complicated process from sensation to cognition. In other words, it involves both a projection from a 3D object to a 2D image and the recognition of the real object from that 2D image. The process of modelling a real object from a set of images is called 3D reconstruction. Camera calibration currently attracts many researchers; it covers the internal parameters and the external parameters, such as the coordinates of the principal point and the rotation and translation parameters. Some researchers have pointed out that a parallelepiped has a strict topological structure and geometric constraints, which makes it suitable for camera self-calibration. This paper briefly explains parallelepiped-based methods and applies them to self-calibration. The experiments show that the method is flexible and effective.
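One classic way a parallelepiped's geometric constraints support self-calibration is via the vanishing points of its three mutually orthogonal edge directions: assuming zero skew and square pixels, each orthogonality constraint on the image of the absolute conic is linear in (u0, v0, u0²+v0²+f²). This NumPy sketch illustrates that idea, not necessarily the paper's exact formulation:

```python
import numpy as np

def intrinsics_from_vps(v1, v2, v3):
    """Recover focal length f and principal point (u0, v0) from the
    vanishing points of three mutually orthogonal directions, assuming
    zero skew and square pixels. Each v is (x, y) in pixels.
    Orthogonality gives v_i^T w v_j = 0, with w the image of the
    absolute conic; expanding yields a 3x3 linear system."""
    A, b = [], []
    for (x1, y1), (x2, y2) in [(v1, v2), (v1, v3), (v2, v3)]:
        A.append([x1 + x2, y1 + y2, -1.0])
        b.append(x1 * x2 + y1 * y2)
    u0, v0, c = np.linalg.solve(np.array(A), np.array(b))
    return np.sqrt(c - u0**2 - v0**2), u0, v0
```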


2008 ◽  
Vol 381-382 ◽  
pp. 379-382
Author(s):  
Rong Sheng Lu ◽  
Z.J. Liu ◽  
X.M. Dang ◽  
P.H. Hu

In this paper, we present a method of measuring a freeform surface profile from a single image taken by a vision system consisting of a digital camera and a pattern projector. The measurement can be implemented without calibrating the camera's parameters, provided that the intrinsic and extrinsic parameters of the projector are known. The method gives the camera far greater adaptability for measuring stationary or moving objects with complex surface shapes.
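With the projector's parameters known, a surface point can be recovered by intersecting a viewing ray with the projected light plane. This is a generic ray-plane triangulation sketch under that assumption, not necessarily the paper's exact procedure:

```python
import numpy as np

def intersect_ray_plane(o, d, n, dist):
    """Intersect the ray o + s*d with the projector light plane
    {X : n.X = dist}. Returns the 3D surface point."""
    s = (dist - n @ o) / (n @ d)
    return o + s * d
```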


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4989
Author(s):  
Truong ◽  
Philips ◽  
Deligiannis ◽  
Abrahamyan ◽  
Guan

Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the top and the bottom of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of the person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, it requires less than one minute of observing the walking people to reach this accuracy in controlled environments, and only a few minutes to collect enough data for calibration in uncontrolled environments. Our proposed method performs well in various situations, including multiple persons, occlusions, and even real street intersections.
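The triangulation underlying the reported few-centimeter error can be sketched with standard linear (DLT) two-view triangulation of a stick endpoint (e.g. the top of a pedestrian's head), assuming the two camera projection matrices are known; this is an illustration, not the paper's code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen at pixel x1 in
    camera P1 and x2 in camera P2 (P1, P2 are 3x4 projection matrices)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector of the stacked constraints
    return X[:3] / X[3]           # dehomogenize
```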


2006 ◽  
Vol 03 (02) ◽  
pp. 177-203
Author(s):  
YOUFU WU ◽  
JUN SHEN ◽  
MO DAI

For 3D Euclidean reconstruction, the challenge is that, without additional information, only hypotheses on the intrinsic parameters can be used to retrieve the camera parameters. In this paper, we propose a method to find the intrinsic parameters of a camera using the rank constraint of the relation matrix of the absolute conic Ω. Degenerate configurations and the case of variable internal parameters are also studied. The experimental results show good performance compared with other methods for camera self-calibration.
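Once the image of the absolute conic w = K^{-T} K^{-1} (or its dual w* = K K^T) has been estimated, the calibration matrix follows from a triangular factorization. A sketch of this last step, using a flipped Cholesky factorization to obtain the required upper-triangular factor:

```python
import numpy as np

def intrinsics_from_diac(W):
    """Recover the upper-triangular calibration matrix K from the dual
    image of the absolute conic W = K K^T. Flipping rows and columns
    before/after a Cholesky factorization turns its lower-triangular
    factor into the upper-triangular K."""
    P = np.fliplr(np.eye(3))             # antidiagonal permutation
    L = np.linalg.cholesky(P @ W @ P)    # lower-triangular factor
    K = P @ L @ P                        # flip back -> upper-triangular
    return K / K[2, 2]                   # fix the projective scale
```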


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3860
Author(s):  
Namhoon Kim ◽  
Junsu Bae ◽  
Cheolhwan Kim ◽  
Soyeon Park ◽  
Hong-Gyoo Sohn

This paper proposes a technique to estimate the distance between an object and a rolling shutter camera using a single image. The technique exploits the rolling shutter effect (RSE), a distortion characteristic of rolling-shutter-type cameras. The proposed technique is mathematically grounded, unlike other single-photo distance estimation methods that do not consider the geometric arrangement. The relationship between the distance and the RSE angle was derived using the camera parameters (focal length, shutter speed, image size, etc.), and mathematical equations were derived for three different scenarios. The model was verified through experiments using a Nikon D750 with a Nikkor 50 mm lens mounted on a car with varying speeds, object distances, and camera parameters. The results show that the mathematical model provides an accurate distance estimate of an object. The distance estimation error due to changes in speed remained stable at approximately 10 cm. However, when the distance between the object and the camera exceeded 10 m, the estimated distance became sensitive to the RSE and the error increased dramatically.
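For the simplest scenario, a fronto-parallel object moving laterally at constant speed, the distance/RSE relationship can be sketched as below. The symbols and the shear-per-row parametrization are assumptions for illustration, not the paper's exact equations:

```python
import numpy as np

def distance_from_rse(f_px, speed, t_row, shear_px_per_row):
    """Estimate object distance from the rolling-shutter shear of a
    vertical edge. A point at distance Z moving laterally at `speed`
    (m/s) sweeps f_px*speed/Z pixels per second across the image; each
    row is exposed t_row seconds later, so the edge is sheared by
    s = f_px*speed*t_row/Z pixels per row, i.e. Z = f_px*speed*t_row/s."""
    return f_px * speed * t_row / shear_px_per_row

def rse_angle(shear_px_per_row):
    """Skew angle (degrees) of the distorted edge, as used in RSE models."""
    return np.degrees(np.arctan(shear_px_per_row))
```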

