Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target

Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot capture complete information about an object on its own because of its limited field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but they must first be calibrated so that the complete 3D information can be obtained accurately. Traditional chessboard-based planar targets are poorly suited to calibrating the relative orientations of multiple depth cameras, because the coordinates of the different depth cameras must be unified into a single coordinate system, while cameras arranged at wide angles to one another share only a very small overlapping field of view. In this paper, we propose a multiple depth camera calibration method based on a 3D target. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, so the feature points on the calibration planes also lie in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. We also propose a method for precisely calibrating the target itself using LiDAR. This method applies not only to the 3D target designed for this paper, but to any 3D calibration object composed of planar chessboards, and it significantly reduces the calibration error compared with traditional camera calibration methods. In addition, the calibration process of the depth camera is optimized to reduce the influence of its infrared transmitter and improve calibration accuracy. A series of calibration experiments were carried out, and the results demonstrate the reliability and effectiveness of the proposed method.
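A rough sketch of the geometric core of this approach may help: if each depth camera sees feature points whose coordinates are known in the unified target frame, each camera's pose can be estimated by rigid point-set alignment (Kabsch/Procrustes), and the relative orientation between any two cameras follows by chaining the poses. The function names and placeholder data below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: per-camera pose from target-frame feature points,
# then the relative transform between two cameras.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch/Procrustes).
    src, dst: (N, 3) arrays of corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Corners of two target planes, known in the unified target frame.
target_pts_a = np.random.rand(12, 3)   # placeholder plane-A corners
target_pts_b = np.random.rand(12, 3)   # placeholder plane-B corners
# Placeholder measurements: in practice these come from each depth
# camera's detection of its own target plane.
cam1_pts = target_pts_a + np.array([0.0, 0.0, 1.2])
cam2_pts = target_pts_b + np.array([0.5, 0.0, 1.0])

# Pose of the unified target frame in each camera frame.
R1, t1 = rigid_transform(target_pts_a, cam1_pts)
R2, t2 = rigid_transform(target_pts_b, cam2_pts)

# Relative orientation camera 1 -> camera 2: both poses share the
# unified target frame, so they can be chained directly.
R_12 = R2 @ R1.T
t_12 = t2 - R_12 @ t1
```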

Author(s):  
Brian Burns ◽  
Biswanath Samanta

In co-robotics applications, robots must identify human partners and recognize their status in dynamic interactions to be accepted and effective as socially interactive agents. Using data from depth cameras, a person can be identified from their skeletal information. This paper presents the implementation of a human identification algorithm using a depth camera (Carmine from PrimeSense), an open-source middleware (NITE from OpenNI) with the Java-based Processing language, and an Arduino microcontroller. This implementation sets a framework for future applications in human-robot interaction. Based on the movements of the individual in the depth sensor's field of view, the program can be set to track a human skeleton or the closest pixel in the image. Joint locations in the tracked human can be isolated for specific use by the program; the joints include the head, torso, shoulders, elbows, hands, knees and feet. Logic and calibration techniques were used to create systems such as a face-tracking pan-and-tilt servomotor mechanism. The control system presented here lays the groundwork for future implementation in student-built animatronic figures and mobile robot platforms such as TurtleBot.
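As an illustration of the face-tracking pan-and-tilt logic, the sketch below maps a tracked head joint's 3D position to two servo angles; the 90° centering, clamp range, and coordinate conventions are assumptions rather than details from the paper.

```python
# Hedged sketch: head position (depth-sensor frame) -> servo angles.
import math

def head_to_servo_angles(x, y, z, center=90.0, lo=0.0, hi=180.0):
    """Convert a head position (x right, y up, z forward, in mm) to
    pan/tilt servo angles in degrees; (0, 0, z) maps to 90/90."""
    pan = center + math.degrees(math.atan2(x, z))
    tilt = center + math.degrees(math.atan2(y, z))
    clamp = lambda a: max(lo, min(hi, a))
    return clamp(pan), clamp(tilt)

# Example: head 200 mm right and 100 mm above the camera axis, 1.5 m
# away; the angles would then be sent to the Arduino over serial
# (e.g., with pyserial) to drive the two servomotors.
pan, tilt = head_to_servo_angles(200.0, 100.0, 1500.0)
print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")
```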


2020 ◽  
Vol 17 (1) ◽  
pp. 172988141989671 ◽  
Author(s):  
Luis R Ramírez-Hernández ◽  
Julio C Rodríguez-Quiñonez ◽  
Moises J Castro-Toscano ◽  
Daniel Hernández-Balbuena ◽  
Wendy Flores-Fuentes ◽  
...  

Computer vision systems have proven useful in autonomous navigation applications, especially stereo vision systems for three-dimensional mapping of the environment. This article presents a novel camera calibration method to improve the accuracy of stereo vision systems for three-dimensional point localization. The proposed method uses the least squares method to model the error caused by image digitalization and lens distortion. To obtain the coordinates of a particular three-dimensional point, a stereo vision system uses the information in two images taken by two different cameras. The system locates the two-dimensional pixel coordinates of the three-dimensional point in both images and converts them into angles; with these angles, it finds the three-dimensional point coordinates through a triangulation process. The proposed camera calibration method is applied in the stereo vision system, and a comparative analysis between the real and calibrated three-dimensional data points is performed to validate the improvement. Moreover, the developed method is compared with three classical calibration methods to analyze its advantages in accuracy over the tested methods.
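The pixel-to-angle triangulation pipeline the abstract describes can be sketched as follows; the intrinsics, baseline, and midpoint-intersection choice are illustrative assumptions, and lens distortion is ignored here.

```python
# Minimal sketch: pixels -> viewing rays -> triangulated 3D point.
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unit viewing ray in the camera frame for pixel (u, v),
    via the pinhole model."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def triangulate(ray_l, ray_r, baseline):
    """Midpoint triangulation for a rectified pair: left camera at the
    origin, right camera offset by `baseline` along +x."""
    o_l = np.zeros(3)
    o_r = np.array([baseline, 0.0, 0.0])
    # Closest points on the two rays, solved in least squares:
    # s * ray_l - t * ray_r = o_r - o_l.
    A = np.stack([ray_l, -ray_r], axis=1)            # 3x2 system
    s, t = np.linalg.lstsq(A, o_r - o_l, rcond=None)[0]
    return 0.5 * ((o_l + s * ray_l) + (o_r + t * ray_r))

fx = fy = 800.0; cx, cy = 320.0, 240.0               # assumed intrinsics
p = triangulate(pixel_to_ray(350, 240, fx, fy, cx, cy),
                pixel_to_ray(290, 240, fx, fy, cx, cy),
                baseline=0.12)                        # assumed 12 cm baseline
print(p)  # estimated 3D point in metres, left-camera frame
```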


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3265 ◽  
Author(s):  
Xuan Wang ◽  
Jiro Tanaka

Biometric authentication is popular in authentication systems, and gesture, as a carrier of behavioral characteristics, has the advantages of being difficult to imitate and containing abundant information. This research aims to use the three-dimensional (3D) depth information of gesture movement to perform authentication with less user effort. We propose an approach based on depth cameras that satisfies three requirements: it can authenticate from a single, customized gesture; it achieves high accuracy without an excessive number of training gestures; and it continues learning the gesture while the system is in use. To satisfy these requirements, respectively, we use a sparse autoencoder to memorize the single gesture; we employ data augmentation to solve the problem of insufficient data; and we use incremental learning to let the system memorize the gesture incrementally over time. Experiments on different gestures in different user situations demonstrate the accuracy of the one-class classification (OCC) and prove the effectiveness and reliability of the approach. Gesture authentication based on 3D depth cameras can thus be achieved with reduced user effort.
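A minimal sketch of the one-class scheme: an autoencoder (here with an L1 activation penalty standing in for the sparsity constraint) is trained on the enrolled user's gesture features only, and a new sample is accepted when its reconstruction error falls below a threshold fitted on the training data. The architecture sizes, feature dimension, and threshold rule are assumptions, not the paper's design.

```python
# Hedged sketch: autoencoder-based one-class gesture authentication.
import torch
import torch.nn as nn

dim = 96                                   # assumed gesture feature size
model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                      nn.Linear(32, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

enrolled = torch.randn(200, dim)           # placeholder enrolled gestures
for _ in range(100):
    opt.zero_grad()
    code = model[1](model[0](enrolled))    # hidden activations
    recon = model[2](code)
    # Reconstruction loss plus an L1 sparsity penalty on the code.
    loss = ((recon - enrolled) ** 2).mean() + 1e-3 * code.abs().mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(enrolled) - enrolled) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()  # assumed decision rule

def authenticate(sample):                   # sample: (dim,) tensor
    """Accept when reconstruction error is below the fitted threshold."""
    with torch.no_grad():
        e = ((model(sample) - sample) ** 2).mean()
    return bool(e < threshold)
```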


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1130 ◽  
Author(s):  
Huaiyu Cai ◽  
Weisong Pang ◽  
Xiaodong Chen ◽  
Yi Wang ◽  
Haolin Liang

To address the problems of feature-point calibration methods for 3D light detection and ranging (LiDAR) and camera calibration, namely the wide variety of calibration board designs, incomplete information extraction methods and large calibration errors, a novel calibration board with local gradient depth information and main-plane square corner information (BWDC) was designed. In addition, a "three-step fitting interpolation method" was proposed to select feature points and obtain their corresponding coordinates in the LiDAR coordinate system and the camera pixel coordinate system based on BWDC. Finally, calibration experiments were carried out, and the results were verified by methods such as incremental verification and reprojection error comparison. The results show that using BWDC and the three-step fitting interpolation method yields an accurate coordinate transformation matrix and accurate intrinsic and extrinsic sensor parameters, which vary within 0.2% across repeated experiments. The difference between the experimental and actual values in the incremental verification experiment is about 0.5%. The average reprojection error is 1.8312 pixels and varies by no more than 0.1 pixels across different distances, which also shows that the calibration method is accurate and stable.
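The final extrinsic-estimation and verification stage can be sketched with a generic PnP solve; OpenCV's solver and the placeholder values below stand in for the paper's own estimation from BWDC feature points.

```python
# Minimal sketch: LiDAR-camera extrinsics via PnP plus the
# reprojection-error check used for validation.
import numpy as np
import cv2

lidar_pts = np.random.rand(20, 3).astype(np.float32)   # LiDAR-frame 3D points
pixel_pts = np.random.rand(20, 2).astype(np.float32)   # matched pixel corners
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])                         # assumed intrinsics
dist = np.zeros(5)                                      # assumed no distortion

# Solve for rotation (rvec) and translation (tvec) LiDAR -> camera.
ok, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)

# Reprojection error: project the LiDAR points with the solved
# extrinsics and average the pixel distance to the detections.
proj, _ = cv2.projectPoints(lidar_pts, rvec, tvec, K, dist)
err = np.linalg.norm(proj.reshape(-1, 2) - pixel_pts, axis=1).mean()
print(f"mean reprojection error: {err:.3f} px")
```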


1989 ◽  
Vol 111 (1) ◽  
pp. 31-39 ◽  
Author(s):  
E. K. Antonsson ◽  
R. W. Mann

An optoelectronic photogrammetric system to measure the spatial kinematic histories of linkages is presented. A Body Coordinate System approach produces both three-dimensional position and orientation trajectories with no singularities for any rotation. Bandwidth is in excess of 300 Hz for each of 10 link elements. A detailed analysis and measurement of the intrinsic camera calibration corrections has been performed using 12,000 calibration points per camera. An independent verification of spatial accuracy has been performed.
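A hedged sketch of the body-coordinate-system idea: three non-collinear markers on a link define an orthonormal frame, giving a full rotation matrix (singularity-free, unlike Euler angles) plus an origin. The marker layout and axis conventions are illustrative, not the paper's.

```python
# Minimal sketch: body frame (rotation matrix + origin) from three
# tracked marker positions on a rigid link.
import numpy as np

def body_frame(p0, p1, p2):
    """Orthonormal body axes and origin from three non-collinear
    marker positions (each a length-3 array)."""
    x = p1 - p0
    x /= np.linalg.norm(x)                 # first axis along p0 -> p1
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)                 # normal to the marker plane
    y = np.cross(z, x)                     # completes right-handed frame
    return np.column_stack([x, y, z]), p0  # rotation matrix, origin
```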


Author(s):  
Chuang-Yuan Chiu ◽  
Michael Thelwell ◽  
Terry Senior ◽  
Simon Choppin ◽  
John Hart ◽  
...  

KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and to compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives better reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel RealSense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. This suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
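A comparison like the one described is often implemented by aligning each reconstruction to the reference scan and taking per-point distances as the error metric; the sketch below assumes the Open3D library and placeholder file names, not the study's actual pipeline.

```python
# Hedged sketch: point-cloud accuracy comparison against a reference scan.
import numpy as np
import open3d as o3d

recon = o3d.io.read_point_cloud("kinectfusion_scan.ply")      # placeholder
reference = o3d.io.read_point_cloud("reference_scanner.ply")  # placeholder

# Rigid ICP alignment (assumes a rough initial overlap); 0.02 m is an
# assumed correspondence threshold.
result = o3d.pipelines.registration.registration_icp(recon, reference, 0.02)
recon.transform(result.transformation)

# Cloud-to-cloud distances as the accuracy measure.
d = np.asarray(recon.compute_point_cloud_distance(reference))
print(f"mean error: {d.mean() * 1000:.2f} mm, "
      f"RMS: {np.sqrt((d ** 2).mean()) * 1000:.2f} mm")
```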


Author(s):  
Badrinath Roysam ◽  
Hakan Ancin ◽  
Douglas E. Becker ◽  
Robert W. Mackin ◽  
Matthew M. Chestnut ◽  
...  

This paper summarizes recent advances made by this group in the automated three-dimensional (3-D) image analysis of cytological specimens that are much thicker than the depth of field, and much wider than the field of view, of the microscope. The imaging of thick samples is motivated by the need to sample large volumes of tissue rapidly, to make more accurate measurements than are possible with 2-D sampling, and to perform the analysis in a manner that preserves the relative locations and 3-D structures of the cells. The motivation to study specimens much wider than the field of view arises when measurements and insights are needed at the tissue level rather than the cell level. The term "analysis" denotes activities ranging from cell counting, neuron tracing, cell morphometry and measurement of tracers, through characterization of large populations of cells with regard to higher-level tissue organization, by detecting patterns such as 3-D spatial clustering, the presence of subpopulations, and their relationships to each other. Of even more interest are changes in these parameters as a function of development and as a reaction to external stimuli. There is a widespread need to measure structural changes in tissue caused by toxins, physiologic states, biochemicals, aging, development, and electrochemical or physical stimuli. These agents can affect the number of cells per unit volume of tissue, cell volume and shape, and can cause structural changes in individual cells or their interconnections, or subtle changes in higher-level tissue architecture. It is important to process large intact volumes of tissue to achieve adequate sampling and sensitivity to subtle changes, and it is desirable to perform such studies rapidly, with utmost automation, and at minimal cost. Automated 3-D image analysis methods offer unique advantages and opportunities without making simplifying assumptions of tissue uniformity, unlike random sampling methods such as stereology [1,2]. Although stereological methods are known to be statistically unbiased, they may not be statistically efficient. Another disadvantage of sampling methods is the lack of full visual confirmation, an attractive feature of image-analysis-based methods.
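One of the listed analyses, cell counting, can be sketched for a 3-D volume with connected-component labeling; the threshold and placeholder data below are illustrative, and a real pipeline needs segmentation tuned to the stain and microscope.

```python
# Hedged sketch: 3-D cell counting on an image volume via thresholding
# and connected-component labeling.
import numpy as np
from scipy import ndimage

volume = np.random.rand(64, 256, 256)    # placeholder confocal z-stack
binary = volume > 0.995                  # assumed intensity threshold

# Label connected components in full 3-D (26-connectivity by default
# uses the generated structuring element; the default here is 6-connected).
labels, n_cells = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=range(1, n_cells + 1))
print(f"{n_cells} candidate cells; mean volume {sizes.mean():.1f} voxels")
```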


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1091
Author(s):  
Izaak Van Crombrugge ◽  
Rudi Penne ◽  
Steve Vanlanduit

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions, but for fixed multi-camera setups the extrinsic calibration can only be done in situ. Usually markers such as checkerboards are used, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap: laser lines are projected on a plane (e.g., a floor or wall) using a laser line projector, and the poses of the plane and the cameras are then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of each camera is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
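A simplified sketch of the optimization step: if each camera's observed laser-line points are back-projected into its own frame, bundle adjustment can refine the camera poses so that all observations land on the shared plane (taken here as z = 0). The 6-DoF pose parameterization and plane convention are assumptions; the actual method also matches line correspondences across cameras.

```python
# Hedged sketch: refine per-camera poses so laser-line observations
# satisfy a point-to-plane constraint, via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, obs_per_cam):
    """Point-to-plane distances for all cameras; params holds one
    6-DoF pose (rotation vector + translation) per camera."""
    res = []
    for i, pts in enumerate(obs_per_cam):
        rvec, t = params[6 * i:6 * i + 3], params[6 * i + 3:6 * i + 6]
        world = Rotation.from_rotvec(rvec).apply(pts) + t
        res.append(world[:, 2])   # signed distance to the plane z = 0
    return np.concatenate(res)

# Placeholder observations: laser-line points in each camera's frame.
obs = [np.random.rand(50, 3), np.random.rand(50, 3)]
x0 = np.zeros(6 * len(obs))        # identity initial poses
sol = least_squares(residuals, x0, args=(obs,))
poses = sol.x.reshape(-1, 6)       # refined extrinsics per camera
```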


Author(s):  
Heather Johnston ◽  
Colleen Dewis ◽  
John Kozey

Objective The objectives were to compare cylindrical and spherical coordinate representations of the maximum reach envelope (MRE) and to apply these to a comparison of the effects of age and load on the MRE. Background The MRE is a useful measurement in the design of workstations and in quantifying the functional capability of the upper body. As a dynamic measure, there are human factors that affect the size, shape, and boundaries of the MRE. Method Three-dimensional reach measures were recorded using a computerized potentiometric system for anthropometric measures (CPSAM) on two adult groups (aged 18–25 years and 35–70 years). Reach trials were performed holding 0, 0.5, and 1 kg. Results Three-dimensional Cartesian coordinates were transformed into cylindrical (r, θ, z) and spherical (r, θ, ϕ) coordinates. Median reach distance vectors were calculated for 54 panels within the MRE, created by incremental banding of the respective coordinate systems. Reach distance and reach area were compared between the two groups and across the load conditions using the spherical coordinate system. Both the younger adults and the unloaded condition produced greater reach distances and reach areas. Conclusions Where a cylindrical coordinate system may provide an absolute reference for design, a normalized spherical coordinate system may better reflect the functional range of motion and better compare individual and group differences. Age and load both affect the MRE. Application These findings present measurement considerations for use in human reach investigation and design.
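The coordinate transformations at the heart of the analysis are straightforward; the sketch below converts Cartesian reach points into the cylindrical (r, θ, z) and spherical (r, θ, ϕ) forms from which the banded MRE panels can be built. The angle conventions are assumptions.

```python
# Minimal sketch: Cartesian -> cylindrical and spherical coordinates.
import numpy as np

def to_cylindrical(x, y, z):
    r = np.hypot(x, y)               # radial distance in the x-y plane
    theta = np.arctan2(y, x)         # azimuth
    return r, theta, z

def to_spherical(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)  # radial reach distance
    theta = np.arctan2(y, x)         # azimuth
    phi = np.arccos(z / r)           # polar (inclination) angle
    return r, theta, phi

# Example: a reach point 40 cm forward, 20 cm lateral, 30 cm up.
print(to_cylindrical(0.4, 0.2, 0.3))
print(to_spherical(0.4, 0.2, 0.3))
```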

