Camera Coordinate System: Recently Published Documents

TOTAL DOCUMENTS: 20 (five years: 7)
H-INDEX: 2 (five years: 0)

Author(s):  
Tianyun Yuan ◽  
Yu Song ◽  
Gerald A. Kraan ◽  
Richard HM Goossens

Abstract: Measuring the motions of human hand joints is often a challenge due to the high number of degrees of freedom. In this study, we proposed a hand tracking system utilizing action cameras and ArUco markers to continuously measure the rotation angles of hand joints. Three methods were developed to estimate the joint rotation angles. The pos-based method transforms marker positions to a reference coordinate system (RCS) and extracts a hand skeleton to identify the rotation angles. Similarly, the orient-x-based method calculates the rotation angles from the transformed x-orientations of the detected markers in the RCS. In contrast, the orient-mat-based method first identifies the rotation angles in each camera coordinate system using the detected orientations and then synthesizes the results for each joint. Experimental results indicated that the repeatability errors with one camera for different marker sizes were around 2.64 to 27.56 degrees using the marker positions and 0.60 to 2.36 degrees using the marker orientations. When multiple cameras were employed to measure the joint rotation angles, the angles measured by the three methods were comparable with those measured by a goniometer, although larger deviations occurred when using the pos-based method. Further analysis indicated that the orient-mat-based method can describe more types of joint rotations, and its effectiveness was verified by capturing the hand movements of several participants. It is therefore recommended for measuring joint rotation angles in practical setups.
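
As a rough illustration of the orient-mat-based idea, the sketch below computes a joint rotation angle from the orientations of two markers observed in one camera coordinate system. The rotation matrices would normally come from ArUco pose estimation (e.g., OpenCV's aruco module); the matrices and the 30-degree example below are placeholders, not the authors' data.

```python
# Minimal sketch: joint angle from the relative rotation between two marker
# orientations expressed in the same camera coordinate system.
import numpy as np

def rotation_angle_between(R_a: np.ndarray, R_b: np.ndarray) -> float:
    """Angle (degrees) of the relative rotation taking marker A's frame to marker B's."""
    R_rel = R_a.T @ R_b                                   # relative rotation in A's frame
    # trace(R) = 1 + 2*cos(theta); clip guards against numerical noise
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Example with placeholder orientations: the distal marker is rotated 30 degrees
# about the camera z-axis relative to the proximal marker.
R_proximal = np.eye(3)
a = np.radians(30.0)
R_distal = np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])
print(f"joint rotation angle ~ {rotation_angle_between(R_proximal, R_distal):.1f} deg")
```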


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qin Shi ◽  
Huansheng Song ◽  
Shijie Sun

Calibration of the extrinsic parameters of an RGB-D camera can be applied in many fields, such as 3D scene reconstruction, robotics, and target detection. Many calibration methods employ a specific calibration object (e.g., a chessboard or cuboid) to calibrate the extrinsic parameters of the RGB-D color camera without using the depth map. As a result, it is difficult to simplify the calibration process, and the color sensor is calibrated instead of the depth sensor. To this end, we propose a method that employs the depth map to perform extrinsic calibration automatically. In detail, the depth map is first transformed to a 3D point cloud in the camera coordinate system, and the planes in the 3D point cloud are then detected automatically using the Maximum Likelihood Estimation Sample Consensus (MLESAC) method. After that, according to the constraint relationship between the ground plane and the world coordinate system, all planes are traversed and screened until the ground plane is obtained. Finally, the extrinsic parameters are calculated using the spatial relationship between the ground plane and the camera coordinate system. The results show that the mean roll angle error of the extrinsic parameter calibration was −1.14°, the mean pitch angle error was 4.57°, and the mean camera height error was 3.96 cm. The proposed method can accurately and automatically estimate the extrinsic parameters of a camera and, after parallel optimization, can achieve real-time performance for automatically estimating a robot's attitude.
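
A minimal sketch of this pipeline is given below, assuming pinhole intrinsics fx, fy, cx, cy: the depth map is back-projected into a point cloud in the camera coordinate system, a ground plane is fitted (a plain RANSAC loop stands in for the MLESAC step used in the paper), and roll, pitch, and camera height are derived from the plane. Axis conventions, thresholds, and the angle decomposition are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D points in the camera coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop invalid (zero-depth) pixels

def fit_plane_ransac(pts, iters=200, thresh=0.02, rng=np.random.default_rng(0)):
    """Return (normal, d) of the best plane n.p + d = 0 found by a simple RANSAC loop."""
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.sum(np.abs(pts @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def extrinsics_from_ground(n, d):
    """Roll, pitch (deg) and camera height (m) from the ground plane; conventions illustrative
    (camera x right, y down, z forward; the ground normal points roughly along -y)."""
    n = n if n[1] < 0 else -n                    # orient the normal to point "up"
    height = abs(d)                              # distance from the camera origin to the plane
    pitch = np.degrees(np.arctan2(n[2], -n[1]))  # tilt about the camera x-axis
    roll = np.degrees(np.arctan2(n[0], -n[1]))   # tilt about the camera z-axis
    return roll, pitch, height
```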


Actuators ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 85
Author(s):  
Jiang Hua ◽  
Liangcai Zeng

A robot can identify the position of a target and complete a grasping task based on the hand–eye calibration algorithm, through which the relationship between the robot coordinate system and the camera coordinate system is established. The accuracy of the hand–eye calibration algorithm affects the real-time performance of the visual servo system and the robot manipulation. The traditional calibration technique is based on an idealized mathematical model AX = XB, in which X represents the transformation between the camera coordinate system (A) and the robot coordinate system (B). The traditional solution for the transformation matrix has certain limitations and instability. To solve this problem, an optimized neural-network-based hand–eye calibration method was developed to establish a non-linear relationship between robot coordinates and pixel coordinates that can compensate for the non-linear distortion of the camera lens. The learning process of the hand–eye calibration model can be interpreted as B = f(A), i.e., the coordinate transformation relationship trained by the neural network. An accurate hand–eye calibration model can finally be obtained by continuously optimizing the network structure and parameters during training. Finally, the accuracy and stability of the method were verified by experiments on a robot grasping system.
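
A hedged sketch of the learned mapping B = f(A) follows: a small multilayer perceptron regresses robot-base coordinates from pixel coordinates, which can absorb lens distortion that a single linear transform cannot. This is a generic PyTorch stand-in, not the authors' exact network; the calibration pairs (pixel_xy, robot_xyz) are assumed to be collected from a calibration target observed at known robot poses.

```python
import torch
import torch.nn as nn

class HandEyeMLP(nn.Module):
    """Small MLP mapping pixel coordinates (u, v) to robot-base coordinates (x, y, z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, pixel_xy):
        return self.net(pixel_xy)

def train(model, pixel_xy, robot_xyz, epochs=2000, lr=1e-3):
    """Fit the pixel-to-robot mapping on calibration pairs by MSE regression."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(pixel_xy), robot_xyz)
        loss.backward()
        opt.step()
    return model

# Usage with synthetic placeholder data:
# pixels = torch.rand(200, 2) * 640            # detected calibration points (u, v)
# robots = torch.rand(200, 3)                  # corresponding robot-base coordinates
# model = train(HandEyeMLP(), pixels, robots)
```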


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Le Zhang ◽  
Rui Li ◽  
Zhiqiang Li ◽  
Yuyao Meng ◽  
Jinxin Liang ◽  
...  

In order to improve weeding efficiency and protect farm crops, accurate and fast weed-removal guidance for agricultural mobile robots is a topic of great importance. Based on this motivation, we propose a time-efficient quadratic traversal algorithm for the removal guidance of weeds around recognized corn plants in the field. To recognize the weeds and corn, a Faster R-CNN neural network is implemented for real-time recognition. Then, an excess-green (ExG) characterization is used for grayscale image processing. An improved OTSU (IOTSU) algorithm is proposed to accurately generate and optimize the binary image. Compared to the traditional OTSU algorithm, the improved OTSU algorithm effectively speeds up the search and reduces the processing time by compressing the grayscale search range. Finally, based on the contour of the target plants and the Canny edge detection operator, the shortest weeding path can be calculated by the proposed quadratic traversal algorithm. The experimental results showed that the search success rate can reach 90.0% on the test data. This ensures the accurate selection of the target 2D coordinates in the pixel coordinate system. Transforming the target 2D coordinate point in the pixel coordinate system into a 3D coordinate point in the camera coordinate system using a depth camera achieves multi-target depth ranging and path planning for an optimized weeding path.
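
The sketch below illustrates three of the steps named above under stated assumptions: an excess-green grayscale image, a standard Otsu threshold standing in for the improved OTSU variant, and the back-projection of a target pixel plus its depth into the camera coordinate system. The intrinsics and the example depth value are hypothetical.

```python
import cv2
import numpy as np

def exg_grayscale(bgr: np.ndarray) -> np.ndarray:
    """ExG = 2G - R - B, normalized to an 8-bit grayscale image."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    exg = 2.0 * g - r - b
    return cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def segment_plants(bgr: np.ndarray) -> np.ndarray:
    """Binarize the ExG image with Otsu's method (stand-in for the improved OTSU)."""
    gray = exg_grayscale(bgr)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (m) into 3D camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: a weed detected at pixel (320, 240), 0.45 m from the depth camera
# print(pixel_to_camera(320, 240, 0.45, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```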


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 400
Author(s):  
Sheng Lu ◽  
Zhaojie Luo ◽  
Feng Gao ◽  
Mingjie Liu ◽  
KyungHi Chang ◽  
...  

Lane detection is a significant technology for autonomous driving. In recent years, a number of lane detection methods have been proposed. However, the performance of fast and slim methods is not satisfactory in sophisticated scenarios, and some robust methods are not fast enough. Consequently, we propose a fast and robust lane detection method that combines a semantic segmentation network and an optical flow estimation network. Specifically, the work is divided into three parts: lane segmentation, lane discrimination, and mapping. For lane segmentation, a robust semantic segmentation network is proposed to segment key frames, and a fast and slim optical flow estimation network is used to track non-key frames. In the second part, density-based spatial clustering of applications with noise (DBSCAN) is adopted to discriminate lanes. Finally, we propose a mapping method that maps lane pixels from the pixel coordinate system to the camera coordinate system and fits lane curves in the camera coordinate system that can provide feedback for autonomous driving. Experimental results verified that the proposed method speeds up the robust semantic segmentation network by up to three times while accuracy decreases by at most 2%. In the best case, the fitted lane curve yielded a feedback error of 3%.
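
As a sketch of the mapping step, the code below projects lane pixels onto a flat ground plane in the camera coordinate system and fits a quadratic lane curve there. The flat-road assumption, intrinsics, camera height, and pitch are illustrative; the paper's exact mapping may differ.

```python
import numpy as np

def lane_pixels_to_camera(uv, fx, fy, cx, cy, cam_height, pitch_rad):
    """Intersect pixel rays with the ground plane y = cam_height
    (camera frame: x right, y down, z forward), after removing camera pitch."""
    rays = np.stack([(uv[:, 0] - cx) / fx,
                     (uv[:, 1] - cy) / fy,
                     np.ones(len(uv))], axis=1)
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # pitch about the x-axis
    rays = rays @ R.T
    scale = cam_height / rays[:, 1]                    # where each ray hits the ground
    return rays * scale[:, None]                       # (x, y, z) on the road surface

def fit_lane_curve(points_cam):
    """Fit x = a*z^2 + b*z + c to the lane points in the camera frame."""
    z, x = points_cam[:, 2], points_cam[:, 0]
    return np.polyfit(z, x, 2)

# pixels = np.array([[400, 500], [410, 450], [425, 400]], dtype=float)
# pts = lane_pixels_to_camera(pixels, 1000, 1000, 640, 360, cam_height=1.5, pitch_rad=0.02)
# print(fit_lane_curve(pts))
```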


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 806
Author(s):  
Seong Hyun Kim ◽  
Ju Yong Chang

Although the performance of 3D human shape reconstruction methods has improved considerably in recent years, most methods focus on a single person, reconstruct a root-relative 3D shape, and rely on ground-truth information about the absolute depth to convert the reconstruction result to the camera coordinate system. In this paper, we propose an end-to-end learning-based model for single-shot, 3D, multi-person shape reconstruction in the camera coordinate system from a single RGB image. Our network produces output tensors divided into grid cells to reconstruct the 3D shapes of multiple persons in a single-shot manner, where each grid cell contains information about the subject. Moreover, our network predicts the absolute position of the root joint while reconstructing the root-relative 3D shape, which enables reconstructing the 3D shapes of multiple persons in the camera coordinate system. The proposed network can be learned in an end-to-end manner and processes images at about 37 fps to perform the 3D multi-person shape reconstruction task in real time.
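
The conversion to the camera coordinate system described here reduces, per person, to adding the predicted absolute root position to the root-relative coordinates. A minimal sketch with illustrative array shapes:

```python
import numpy as np

def to_camera_coordinates(root_relative_joints: np.ndarray,
                          absolute_root: np.ndarray) -> np.ndarray:
    """root_relative_joints: (num_persons, num_joints, 3), root joint at the origin.
    absolute_root: (num_persons, 3) predicted root positions in camera space."""
    return root_relative_joints + absolute_root[:, None, :]

# Two persons, 24 joints each (placeholder values):
# joints_cam = to_camera_coordinates(np.zeros((2, 24, 3)),
#                                    np.array([[0.3, 0.1, 2.5], [-0.5, 0.0, 3.8]]))
```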


2019 ◽  
Author(s):  
A. Mancebo ◽  
L. DeMars ◽  
C. T. Ertsgaard ◽  
E. M. Puchner

Abstract: Spatial light modulation using cost-efficient digital mirror arrays (DMA) is finding broad applications in fluorescence microscopy due to the reduction of phototoxicity and bleaching and the ability to manipulate proteins in optogenetic experiments. However, the precise calibration of DMAs and their application to single-molecule localization microscopy (SMLM) has remained a challenge because of non-linear distortions between the DMA and camera coordinate systems caused by optical components. Here we develop a fast and easy-to-implement calibration procedure that determines these distortions by means of an optical feedback and matches the DMA and camera coordinate systems with ~50 nm precision. As a result, a region from a fluorescence image can be selected for illumination with higher precision than by manual alignment of the DMA. We first demonstrate the application of our precisely calibrated light modulation by performing a proof-of-concept fluorescence recovery after photobleaching experiment with the endoplasmic reticulum-localized protein IRE1 fused to GFP. Next, we develop a spatial feedback photoactivation approach for SMLM in which only the regions of the cell that contain photoactivatable fluorescent proteins are selected for photoactivation. The reduced exposure of the cells to 405 nm light increases the possible imaging time by 44% before phototoxic effects cause a dominant fluorescence background and a change in the cell's morphology. As a result, the mean number of reliable single-molecule localizations is also significantly increased, by 28%. Since the localization precision and the ability to track single molecules are not altered compared to traditional photoactivation of the entire field of view, spatial feedback photoactivation significantly improves the quality of SMLM images and the precision of single-molecule tracking. Our calibration method therefore lays the foundation for improved SMLM with active feedback photoactivation far beyond the applications in this work.

Statement of Significance: Actively patterned illumination in fluorescence microscopy can reduce bleaching and phototoxicity as well as actively manipulate proteins in optogenetic applications. Matching the coordinate systems of the camera and the light patterning device, such as a digital mirror array (DMA), remains a challenge. We developed a fast and easy calibration procedure that determines and corrects the transformation between the camera and DMA coordinate systems with ~50 nm precision. Using this approach, we develop spatial feedback photoactivation for single-molecule localization microscopy (SMLM) to photoswitch only the intracellular regions containing photoswitchable fluorophores. Our results show a 44% improvement in the possible data acquisition time before phototoxic effects become detectable and a 28% increase in detected localizations. Spatial feedback photoactivation thus significantly improves SMLM experiments.
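
A minimal sketch of such a coordinate matching is shown below, assuming a second-order polynomial distortion model fitted by least squares to pairs of camera and DMA positions obtained from an optical feedback pattern; the paper's actual calibration procedure may differ.

```python
import numpy as np

def design_matrix(xy: np.ndarray) -> np.ndarray:
    """Second-order polynomial terms [1, x, y, x^2, xy, y^2] per point."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_transform(camera_xy: np.ndarray, dma_xy: np.ndarray) -> np.ndarray:
    """Least-squares coefficients mapping camera coordinates to DMA coordinates."""
    A = design_matrix(camera_xy)
    coeffs, *_ = np.linalg.lstsq(A, dma_xy, rcond=None)
    return coeffs                                  # shape (6, 2): one column per DMA axis

def apply_transform(coeffs: np.ndarray, camera_xy: np.ndarray) -> np.ndarray:
    return design_matrix(camera_xy) @ coeffs

# Calibration spots detected on the camera and their commanded DMA positions
# (synthetic distortion for illustration):
# cam_pts = np.random.rand(50, 2) * 512
# dma_pts = cam_pts * 0.9 + 5.0 + 1e-4 * cam_pts**2
# coeffs = fit_transform(cam_pts, dma_pts)
```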


Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 715 ◽  
Author(s):  
Dong-seok Lee ◽  
Soon-kak Kwon

In this paper, an intra prediction method is proposed for the coding of depth pictures using plane modelling. Each pixel in a depth picture is related to the distance from the camera to an object surface, and pixels corresponding to a flat object surface lie on a 2D plane. The plane surface can be represented by a simple equation in the 3D camera coordinate system, to which the coordinate system of the depth pixels can be transformed. This paper finds the parameters that define the plane surface closest to the given depth pixels. The plane model is then used to predict the depth pixels on the plane surface. A depth prediction method is also devised for efficient intra prediction of depth pictures using variable-size blocks. With variable-size blocks, a plane surface that occupies a large part of the picture can be predicted using a large block size. The simulation results show that the mean squared error is reduced by up to 96.6% for a block size of 4 × 4 pixels and by up to 98% for a block size of 16 × 16 pixels, compared with the intra prediction modes of H.264/AVC and H.265/HEVC.
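
A simplified sketch of the plane-modelling idea: fit z = a*x + b*y + c to a depth block by least squares and predict the block from the fitted plane. Here (x, y) are block-local pixel coordinates, whereas the paper formulates the plane in the camera coordinate system, so this is only an illustration of the fitting and prediction steps.

```python
import numpy as np

def fit_plane_params(depth_block: np.ndarray) -> np.ndarray:
    """Least-squares plane parameters (a, b, c) for z = a*x + b*y + c."""
    h, w = depth_block.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    params, *_ = np.linalg.lstsq(A, depth_block.ravel(), rcond=None)
    return params

def predict_block(params: np.ndarray, h: int, w: int) -> np.ndarray:
    """Reconstruct the block predicted by the plane model."""
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    a, b, c = params
    return a * x + b * y + c

# A 4x4 depth ramp is predicted with near-zero residual:
# depth = 1000 + 2.0 * np.arange(4)[None, :] + 0.5 * np.arange(4)[:, None]
# print(np.abs(predict_block(fit_plane_params(depth), 4, 4) - depth).max())
```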


2018 ◽  
Vol 930 (12) ◽  
pp. 2-8
Author(s):  
A.A. Kluykov

The article presents an algorithm for determining the attitude of the gradiometer coordinate system with respect to inertial space. The problem is solved in two steps. The first step is to determine the transformation matrix from the celestial reference frame (ICRF) to the star camera coordinate system (SSRF) using star observations. The second step is to determine the transformation matrix from the star camera coordinate system (SSRF) to the gradiometer coordinate system (GRF); this problem is solved through the mounting of the sensor systems on board the satellite. In the GOCE mission, three star cameras are mounted on board, and the transformation matrix from the star camera coordinate system (SSRF) to the gradiometer coordinate system (GRF) is determined for each star camera. The values of the transformation matrices are given in the AUX_EGG_DB data file. Processing the star cameras' observations includes the following steps.
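
A minimal sketch of the two-step chain, with placeholder rotation matrices: the ICRF-to-SSRF rotation obtained from star observations is composed with the fixed SSRF-to-GRF mounting rotation (as provided in AUX_EGG_DB) to give the gradiometer attitude with respect to inertial space.

```python
import numpy as np

def gradiometer_attitude(R_ssrf_from_icrf: np.ndarray,
                         R_grf_from_ssrf: np.ndarray) -> np.ndarray:
    """Attitude matrix rotating ICRF vectors into the gradiometer frame (GRF)."""
    return R_grf_from_ssrf @ R_ssrf_from_icrf

# Identity placeholders stand in for the star-camera and mounting rotations:
# print(gradiometer_attitude(np.eye(3), np.eye(3)))
```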

