Volumetric Object Reconstruction in Multi-Camera Scenarios

Author(s):  
Enrique Hernández Murillo ◽  
Gonzalo López Nicolás ◽  
Rosario Aragüés

Volumetric reconstruction of unknown objects is essential in robotic manipulation. Because building the 3D model requires a set of views, we consider a multi-camera scenario and study an effective camera-configuration strategy that addresses constraints such as limited fields of view and self-occlusions.

Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) to pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP’s pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and to tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera’s field of view and because of rapid camera motion. Further, the pose estimate was often biased by incorrect feature matches. This work integrates ViSP with RGB-D SLAM to improve ViSP’s pose estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.
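The abstract does not detail how the two estimators are combined; the sketch below is only a schematic of one plausible fusion rule, written with hypothetical numpy-based interfaces rather than ViSP’s or RGB-D SLAM’s actual APIs: keep the model-based tracker’s pose while its reprojection residual stays small, and re-seed the tracker from the SLAM trajectory when tracking is lost or drifts.

import numpy as np

def fuse_pose(tracker_pose, tracker_residual, slam_pose, residual_threshold=2.0):
    """Hypothetical fusion rule (not ViSP's API).

    tracker_pose, slam_pose : 4x4 homogeneous camera poses, or None if unavailable
    tracker_residual        : mean reprojection error of the tracked model features (px)
    Returns the pose to report and whether the tracker should be re-initialised.
    """
    if tracker_pose is None or tracker_residual > residual_threshold:
        # Tracking lost or biased: fall back to, and re-seed from, RGB-D SLAM.
        return slam_pose.copy(), True
    return tracker_pose, False

# Toy usage: a large residual triggers re-initialisation from the SLAM pose.
pose, reseed = fuse_pose(np.eye(4), tracker_residual=5.0, slam_pose=np.eye(4))
print(reseed)  # True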


2003 ◽  
Vol 15 (3) ◽  
pp. 293-303
Author(s):  
Haiquan Yang ◽  
Nobuyuki Kita ◽  
Yasuyo Kita

A method is proposed to correct the initial position and pose estimates of a camera-head by aligning a 3D model of its surrounding environment with a 2D image captured through a foveated wide-angle lens. Because of the lens’s wide field of view, the algorithm converges even when the initial error is large, and the high resolution of the lens’s fovea makes the result precise.
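As a concrete illustration of this kind of model-to-image alignment (a generic sketch, not the paper’s algorithm, and assuming an ideal pinhole camera rather than the foveated wide-angle lens), an initial pose can be refined by minimising the reprojection error of known 3D model points against their observed 2D image locations:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(model_pts, image_pts, K, rvec0, tvec0):
    """Refine an initial camera pose so projected 3D model points match 2D observations.

    model_pts : (N, 3) points of the environment model (world frame)
    image_pts : (N, 2) corresponding observations in the image (pixels)
    K         : 3x3 pinhole intrinsic matrix (assumption; no lens distortion modelled)
    rvec0, tvec0 : initial rotation vector and translation (world -> camera)
    """
    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        cam = model_pts @ R.T + t          # transform into the camera frame
        proj = cam @ K.T                   # pinhole projection (homogeneous)
        proj = proj[:, :2] / proj[:, 2:3]  # perspective divide -> pixels
        return (proj - image_pts).ravel()

    sol = least_squares(residuals, np.concatenate([rvec0, tvec0]))
    return sol.x[:3], sol.x[3:]            # refined rvec, tvec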


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2288
Author(s):  
Rohan Tahir ◽  
Allah Bux Sargano ◽  
Zulfiqar Habib

In recent years, learning-based approaches to 3D reconstruction have gained much popularity due to their encouraging results. However, unlike 2D images, 3D shapes have no canonical representation that is both computationally lean and memory-efficient. Moreover, generating a 3D model directly from a single 2D image is even more challenging because of the limited detail the image provides for 3D reconstruction. Existing learning-based techniques still lack the resolution, efficiency, and smoothness of the 3D models required for many practical applications. In this paper, we propose two models for voxel-based 3D object reconstruction (V3DOR) from a single 2D image, one using an autoencoder (AE) and the other a variational autoencoder (VAE), for better accuracy. The encoder part of both models learns a suitable compressed latent representation from a single 2D image, and the decoder generates the corresponding 3D model. Our contribution is twofold. First, to the best of the authors’ knowledge, this is the first time variational autoencoders have been employed for the 3D reconstruction problem. Second, the proposed models extract a discriminative set of features and generate smoother, higher-resolution 3D models. To evaluate the efficacy of the proposed method, experiments have been conducted on the ShapeNet benchmark dataset. The results confirm that the proposed method outperforms state-of-the-art methods.
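The abstract does not specify the network architecture; the following PyTorch sketch only illustrates the general encoder–decoder idea described above (a 2D convolutional encoder to a latent distribution, a 3D transposed-convolutional decoder to an occupancy grid). The layer sizes, 64×64 grayscale input, and 32³ output resolution are assumptions for illustration, not the authors’ design.

import torch
import torch.nn as nn

class VoxelVAE(nn.Module):
    """Minimal single-image -> 32^3 voxel VAE (illustrative sizes only)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        # 2D encoder: 64x64 grayscale image -> flattened feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # 3D decoder: latent vector -> occupancy grid
        self.fc_dec = nn.Linear(latent_dim, 256 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8^3
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16^3
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32^3
        )

    def forward(self, img):
        h = self.encoder(img)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        vox = self.decoder(self.fc_dec(z).view(-1, 256, 4, 4, 4))
        return vox, mu, logvar

Training would add the usual reconstruction term (e.g., binary cross-entropy against ground-truth voxels) plus the KL divergence for the VAE variant; dropping the sampling step and the KL term turns the same sketch into the plain AE variant.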


2019 ◽  
Vol 632 ◽  
pp. L5 ◽  
Author(s):  
F. Scholten ◽  
F. Preusker ◽  
S. Elgner ◽  
K.-D. Matz ◽  
R. Jaumann ◽  
...  

After its release and a descent and bouncing phase, the Hayabusa2 lander MASCOT came to a final rest and MASCOT’s camera MASCam acquired a set of images of the surface of Ryugu. With MASCam’s instantaneous field of view of about 1 mrad, the images provide pixel scales from 0.2 to 0.5 mm per pixel in the foreground and up to 1 cm per pixel for surface parts in the background. Using a stereo-photogrammetric analysis of the MASCam images taken from slightly different positions due to commanded and unintentional movements of the MASCOT lander, we were able to determine the orientation for the different measurement positions. Furthermore, we derived a 3D surface model of MASCOT’s vicinity. Although the conditions for 3D stereo processing were poor due to very small stereo angles, the derived 3D model has about 0.5 cm accuracy in the foreground at 20 cm distance and about 1.5 cm at a distance of 40–50 cm.
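The quoted pixel scales follow from the small-angle relation pixel scale ≈ IFOV × distance; a quick check with the stated 1 mrad IFOV is shown below, where the 10 m background distance is only an assumed example chosen to reproduce the 1 cm figure.

# Pixel scale ≈ instantaneous field of view (rad) x distance to the surface.
ifov_rad = 1e-3  # MASCam IFOV of about 1 mrad, as stated above
for distance_m in (0.2, 0.5, 10.0):
    print(f"{distance_m:5.1f} m -> {1000 * ifov_rad * distance_m:.1f} mm per pixel")
# 0.2 m -> 0.2 mm, 0.5 m -> 0.5 mm, 10.0 m -> 10.0 mm (1 cm) per pixel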


Author(s):  
Abdulrahman Al-Shanoon ◽  
Haoxiang Lang ◽  
Ying Wang ◽  
Yunfei Zhang ◽  
Wenxin Hong
