METRIC CALIBRATION OF A FOCUSED PLENOPTIC CAMERA BASED ON A 3D CALIBRATION TARGET

Author(s):  
N. Zeller
C. A. Noury
F. Quint
C. Teulière
U. Stilla
...

In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined as a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back into 3D space.
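As a rough illustration of the kind of three-dimensional residual described above, the sketch below combines a lateral reprojection error with a depth term expressed on scaled inverse virtual depth. The function name, argument layout, and scale parameter are hypothetical; the paper's actual distortion model and bundle adjustment are not reproduced here.

```python
import numpy as np

def residual_3d(obs_xy, pred_xy, obs_virtual_depth, pred_virtual_depth, depth_scale):
    """Illustrative 3D residual for one observation: 2D lateral error in the
    image plane plus a depth error expressed on the inverse virtual depth,
    scaled so the three components are comparable in the optimization."""
    lateral = np.asarray(obs_xy, dtype=float) - np.asarray(pred_xy, dtype=float)
    depth = depth_scale * (1.0 / obs_virtual_depth - 1.0 / pred_virtual_depth)
    return np.array([lateral[0], lateral[1], depth])
```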


Sensors
2020
Vol 20 (20)
pp. 5765
Author(s):
Seiya Ito
Naoshi Kaneko
Kazuhiko Sumi

This paper proposes a novel 3D representation, namely, a latent 3D volume, for joint depth estimation and semantic segmentation. Most previous studies encoded an input scene (typically given as a 2D image) into a set of feature vectors arranged over a 2D plane. However, considering that the real world is three-dimensional, this 2D arrangement discards one dimension and may limit the capacity of the feature representation. In contrast, we examine the idea of arranging the feature vectors in 3D space rather than in a 2D plane. We refer to this 3D volumetric arrangement as a latent 3D volume. We show that the latent 3D volume is beneficial to the tasks of depth estimation and semantic segmentation because these tasks require an understanding of the 3D structure of the scene. Our network first constructs an initial 3D volume using image features and then generates a latent 3D volume by passing the initial 3D volume through several 3D convolutional layers. We perform depth regression and semantic segmentation by projecting the latent 3D volume onto a 2D plane. The evaluation results show that our method outperforms previous approaches on the NYU Depth v2 dataset.
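A minimal PyTorch-style sketch of the general idea (not the authors' network): 2D image features are lifted into a coarse volume, refined with 3D convolutions into a "latent 3D volume", and then projected back onto the image plane for depth regression and semantic segmentation. The channel sizes, lifting scheme, and head design are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LatentVolumeSketch(nn.Module):
    """Toy stand-in for the latent 3D volume idea: lift 2D features along a
    discretized depth axis, refine with 3D convolutions, project to 2D heads."""
    def __init__(self, feat_ch=32, depth_bins=16, num_classes=13):
        super().__init__()
        self.depth_bins = depth_bins
        self.refine = nn.Sequential(
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_head = nn.Conv2d(feat_ch * depth_bins, 1, 1)          # depth regression
        self.seg_head = nn.Conv2d(feat_ch * depth_bins, num_classes, 1)  # segmentation logits

    def forward(self, feat2d):                     # feat2d: (B, C, H, W) image features
        b, c, h, w = feat2d.shape
        # naive lifting: replicate the 2D features over the depth bins
        vol = feat2d.unsqueeze(2).expand(b, c, self.depth_bins, h, w).contiguous()
        vol = self.refine(vol)                     # "latent 3D volume"
        # project onto the 2D plane by folding the depth axis into the channels
        flat = vol.reshape(b, c * self.depth_bins, h, w)
        return self.depth_head(flat), self.seg_head(flat)
```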


2012
Vol 246-247
pp. 22-27
Author(s):
Zheng Zhang
Xiao Wei Liu
Guang You Yang

A calculation model for 3D space transformation is introduced that is applicable to the monocular vision of a robot manipulator; it solves the plane-mapping problem from the image plane to the actual horizontal plane in monocular vision. The model transforms the imaging coordinate system of a target observed by the monocular camera into the world coordinate system of the manipulator, so that the position of the target relative to the manipulator can be calculated. The accuracy and reliability of the algorithm are confirmed by comparing, on an embedded platform, the object positions obtained through the coordinate transformation with the actual positions of the sampling points.
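A minimal sketch of the kind of image-to-world plane mapping described above, under a standard pinhole model (the paper's exact transformation is not reproduced): a pixel is back-projected to a viewing ray and intersected with the horizontal world plane. The intrinsics `K`, pose `(R, t)`, and plane height are assumed inputs.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, plane_z=0.0):
    """Back-project pixel (u, v) through a pinhole camera with intrinsics K and
    world-to-camera pose (R, t), then intersect the ray with the plane Z = plane_z
    to recover the target position in the manipulator's world frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                           # rotate the ray into the world frame
    cam_center = -R.T @ t                               # camera center in the world frame
    s = (plane_z - cam_center[2]) / ray_world[2]        # scale at which the ray meets the plane
    return cam_center + s * ray_world                   # 3D point on the horizontal plane
```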


Author(s):  
John C. Russ

Three-dimensional (3D) images consisting of arrays of voxels can now be routinely obtained from several different types of microscopes. These include both the transmission and emission modes of the confocal scanning laser microscope (but not its most common reflection mode), the secondary ion mass spectrometer, and computed tomography using electrons, X-rays or other signals. Compared to the traditional use of serial sectioning (which includes sequential polishing of hard materials), these newer techniques eliminate the difficulties of aligning slices and maintain uniform resolution in the depth direction. However, the resolution in the z-direction may differ from that within each image plane, which makes the voxels non-cubic and creates some difficulties for subsequent analysis.
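One common remedy for the non-cubic voxels mentioned above (a general practice, not something prescribed by this abstract) is to resample the volume to isotropic spacing before analysis; a minimal SciPy sketch, with the (z, y, x) spacing convention as an assumption:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing_zyx):
    """Resample a (z, y, x)-indexed voxel array with anisotropic spacing
    (typically coarser along z) to cubic voxels by linear interpolation."""
    spacing = np.asarray(spacing_zyx, dtype=float)
    target = spacing.min()                    # finest spacing becomes the cube edge
    factors = spacing / target                # per-axis zoom factors
    return zoom(volume, factors, order=1)     # order=1: trilinear-style interpolation
```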


Micromachines
2021
Vol 12 (4)
pp. 444
Author(s):
Guoning Si
Liangying Sun
Zhuo Zhang
Xuping Zhang

This paper presents the design, fabrication, and testing of a novel three-dimensional (3D) three-fingered electrothermal microgripper with multiple degrees of freedom (multi-DOF). Each finger of the microgripper is composed of a V-shaped electrothermal actuator providing one DOF and a 3D U-shaped electrothermal actuator offering two DOFs in the plane perpendicular to the movement of the V-shaped actuator. As a result, each finger possesses 3D mobility with three DOFs. Each beam of the actuators is heated externally by a polyimide film. The durability of the polyimide film is tested under different voltages. The static and dynamic properties of the finger are also tested. Experiments show that the microgripper can not only pick and place micro-objects, such as micro balls and even highly deformable zebrafish embryos, but can also rotate them in 3D space.


2021
Vol 7 (1)
Author(s):
Wei Luo
Yuma Nakamura
Jinseon Park
Mina Yoon

Recent experiments identified Co3Sn2S2 as the first magnetic Weyl semimetal (MWSM). Using first-principles calculations with a global optimization approach, we explore the structural stabilities and topological electronic properties of cobalt (Co)-based shandite and alloys, Co3MM’X2 (M/M’ = Ge, Sn, Pb; X = S, Se, Te), and identify stable structures with different Weyl phases. Using a tight-binding model, we reveal for the first time that the physical origin of the nodal lines of a Co-based shandite structure is the interlayer coupling between Co atoms in different Kagome layers, while the number of Weyl points and their types are mainly governed by the interaction between Co and the metal atoms Sn, Ge, and Pb. The Co3SnPbS2 alloy exhibits two distinct topological phases, depending on the relative positions of the Sn and Pb atoms: a three-dimensional quantum anomalous Hall metal, and a MWSM phase with an anomalous Hall conductivity (~1290 Ω⁻¹ cm⁻¹) larger than that of Co3Sn2S2. Our work reveals the physical mechanism behind the origin of Weyl fermions in Co-based shandite structures and proposes topological quantum states with high thermal stability.


Sensors
2021
Vol 21 (14)
pp. 4719
Author(s):
Huei-Yung Lin
Yuan-Chi Chung
Ming-Liang Wang

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived via catadioptric camera calibration. The geometric property of the camera projection model is utilized to obtain the intersections of the vertical lines and the ground plane in the scene. Different from conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated using consecutive image frames. The motion trajectory is then derived by computing the rotation and translation between robot positions. We develop an algorithm for feature correspondence matching based on the invariance of the structure in 3D space. Experimental results obtained from real scene images demonstrate the feasibility of the proposed method for mobile robot localization applications.
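The trajectory step described above (recovering rotation and translation between robot positions from matched 3D ground points) can be illustrated with a standard Kabsch/Procrustes alignment; this is a generic sketch, not the paper's specific pipeline.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Estimate rotation R and translation t with Q ~= R @ P + t from matched
    3D points P and Q, each of shape (3, N), via SVD of the cross-covariance."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                         # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP                                   # translation, shape (3, 1)
    return R, t
```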


Sensor Review
2017
Vol 37 (3)
pp. 312-321
Author(s):
Yixiang Bian
Can He
Kaixuan Sun
Longchao Dai
Hui Shen
...  

Purpose: The purpose of this paper is to design and fabricate a three-dimensional (3D) bionic airflow sensing array made of two multi-electrode piezoelectric metal-core fibers (MPMFs), inspired by the structure of a cricket’s highly sensitive airflow receptor (consisting of two cerci).
Design/methodology/approach: A metal core was positioned at the center of an MPMF and surrounded by a hollow piezoceramic cylinder. Four thin metal films were spray-coated symmetrically on the surface of the fiber to serve as two pairs of sensor electrodes.
Findings: In 3D space, the four output signals of the two MPMF arrays form three “8”-shaped spheres. Similarly, the sensing signals for the same airflow are located on a spherical surface.
Originality/value: Two MPMF arrays are sufficient to detect the speed and direction of airflow in all three dimensions.


Author(s):  
E. Sandgren ◽  
S. Venkataraman

A design optimization approach to robot path planning in a two-dimensional workspace is presented. Obstacles are represented as a series of rectangular regions, and collision detection is performed by an operation similar to clipping in computer graphics. The feasible design space is approximated by a discrete set of robot arm and gripper positions. Control is applied directly through the angular motion of each link. Feasible positions located between the initial and final robot link positions are grouped into stages. A dynamic programming algorithm is applied to locate the best state within each stage, minimizing the overall path length. An example involving a three-link planar manipulator is presented. Extensions to three-dimensional robot path planning and real-time control in a dynamically changing workspace are discussed.
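A minimal sketch of the stage-wise dynamic program described above, with hypothetical inputs: `stages` is a list of lists of feasible configurations between the initial and final positions, and `dist(a, b)` is the cost of moving between two configurations. This illustrates the idea, not the authors' implementation.

```python
import math

def dp_shortest_path(stages, dist):
    """Stage-wise dynamic program: find the minimum-length path that visits one
    feasible configuration per stage, returning the total cost and the chosen
    configuration index for each stage."""
    cost = [[math.inf] * len(s) for s in stages]   # cost[i][j]: best cost to reach stage i, config j
    back = [[None] * len(s) for s in stages]       # back-pointers for path recovery
    cost[0] = [0.0] * len(stages[0])
    for i in range(1, len(stages)):
        for j, cfg in enumerate(stages[i]):
            for k, prev in enumerate(stages[i - 1]):
                c = cost[i - 1][k] + dist(prev, cfg)
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, k
    # pick the best final configuration and trace the path backwards
    j = min(range(len(stages[-1])), key=lambda idx: cost[-1][idx])
    total, path = cost[-1][j], [j]
    for i in range(len(stages) - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return total, list(reversed(path))
```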


2017
Vol 139 (12)
Author(s):  
Chuanfeng Wang

Curve-tracking control, which enables an autonomous agent to follow a desired path, is a challenging and fundamental problem in many robotic applications. In this paper, we consider a particle, representing a fully actuated autonomous robot, moving at unit speed under steering control in three-dimensional (3D) space. We develop a feedback control law that enables the particle to track any smooth curve in 3D space. Representing the 3D curve in the natural Frenet frame, we construct a control law under which the moving direction of the particle becomes aligned with the tangent direction of the desired curve and the distance between the particle and the desired curve converges to zero. We demonstrate the effectiveness of the proposed 3D curve-tracking control law in simulations.
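For intuition, the sketch below integrates one step of a simple unit-speed steering rule that pulls the heading toward a blend of the direction to the nearest curve sample and its tangent. This is an illustrative stand-in, not the Frenet-frame control law constructed in the paper, and the gains are arbitrary.

```python
import numpy as np

def track_curve_step(pos, heading, curve_pts, curve_tans, dt=0.01, k_p=2.0, k_t=4.0):
    """One integration step of a simple 3D steering law: the particle keeps unit
    speed while its heading is nudged toward a blend of the pull-in direction to
    the closest sampled curve point and that point's tangent."""
    i = np.argmin(np.linalg.norm(curve_pts - pos, axis=1))   # closest curve sample
    to_curve = curve_pts[i] - pos
    desired = k_p * to_curve + k_t * curve_tans[i]           # blend pull-in and tangent terms
    desired /= np.linalg.norm(desired)
    new_heading = heading + dt * (desired - heading)         # steer toward desired direction
    new_heading /= np.linalg.norm(new_heading)               # enforce unit speed
    return pos + dt * new_heading, new_heading
```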

