Graph-cut-based 3D model segmentation for articulated object reconstruction

Author(s):  
Inkyu Han ◽  
Hyoungnyoun Kim ◽  
Ji-Hyung Park

2021 ◽  
Vol 29 ◽  
pp. 133-140
Author(s):  
Bin Liu ◽  
Shujun Liu ◽  
Guanning Shang ◽  
Yanjie Chen ◽  
Qifeng Wang ◽  
...  

BACKGROUND: There is great demand in clinical diagnosis and treatment for extracting organ models from three-dimensional (3D) medical images. OBJECTIVE: We aimed to help doctors see the true shape of human organs more clearly and vividly. METHODS: The method uses the smallest eigenvectors of the Laplacian matrix to automatically compute a set of basic matting components that properly describe the volume image. These matting components can then be combined into foreground images with the help of a few user marks. RESULTS: We propose a direct 3D model segmentation method for volume images, i.e., a process of extracting foreground objects from volume images and estimating the opacity of the voxels covered by those objects. CONCLUSIONS: Segmentation experiments on different parts of the human body demonstrate the applicability of the method.
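The abstract leaves the implementation unspecified, but the core step it names (taking the smallest eigenvectors of a Laplacian built over the voxel grid and grouping them into matting-like components) can be sketched as follows. This is a minimal illustration under assumed simplifications: a 6-neighbour intensity-affinity graph Laplacian in place of a full matting Laplacian, and k-means to group the eigenvector coordinates into hard components rather than soft mattes; it is not the authors' implementation.

```python
# Hedged sketch: spectral soft-segmentation of a volume image.
# Assumptions (not from the paper): 6-neighbour affinity Laplacian,
# k-means grouping of the smallest eigenvectors into components.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def voxel_laplacian(vol, sigma=0.05):
    """Sparse graph Laplacian with intensity-based 6-neighbour affinities."""
    idx = np.arange(vol.size).reshape(vol.shape)
    flat = vol.ravel()
    rows, cols, vals = [], [], []
    for axis in range(3):
        a = idx.take(np.arange(vol.shape[axis] - 1), axis=axis).ravel()
        b = idx.take(np.arange(1, vol.shape[axis]), axis=axis).ravel()
        w = np.exp(-((flat[a] - flat[b]) ** 2) / sigma ** 2)
        rows += [a, b]; cols += [b, a]; vals += [w, w]
    W = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(vol.size, vol.size)).tocsr()
    D = sparse.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W

def matting_components(vol, n_components=6, n_eigs=10):
    """Group the smallest Laplacian eigenvectors into per-voxel components."""
    L = voxel_laplacian(vol)
    _, vecs = eigsh(L, k=n_eigs, which='SM')   # slow but fine for small volumes
    labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(vecs)
    return labels.reshape(vol.shape)           # component index per voxel

# Usage: the components touched by a few user-marked foreground voxels
# would then be merged into the foreground object / alpha matte.
vol = np.random.rand(16, 16, 16).astype(np.float32)   # toy volume
components = matting_components(vol)
```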


Author(s):  
Y. Wang ◽  
R. Liu ◽  
S. Endo ◽  
Y. Uehara

2014 ◽  
Vol 1049-1050 ◽  
pp. 1417-1420
Author(s):  
Hui Jia ◽  
Guo Hua Geng ◽  
Jian Gang Zhang

3D model segmentation is a new research focus in computer graphics. The algorithm in this paper performs consistent segmentation of a group of 3D models that share shape similarity. A volume-based shape function, the shape diameter function (SDF), is used to represent the characteristics of each model. A Gaussian mixture model (GMM) fits k Gaussians to the SDF values, and the EM algorithm is used to segment the 3D models consistently. Experimental results show that the algorithm can effectively produce consistent segmentations of the 3D models.
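The abstract describes the pipeline only at a high level; a minimal sketch of the per-model step (fitting a GMM with k components to the SDF values via EM and labelling each face by its most likely component) might look like the following. The SDF computation itself is assumed to be available (here it is faked with random values), and the paper's cross-model consistency step is not reproduced.

```python
# Hedged sketch: clustering mesh faces by their SDF values with a k-component GMM.
# sklearn's GaussianMixture runs the EM algorithm internally.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_sdf(sdf_values, k=4, seed=0):
    """Assign each face to one of k parts based on its shape diameter value."""
    x = np.asarray(sdf_values, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=k, random_state=seed).fit(x)  # EM fit
    labels = gmm.predict(x)          # hard part label per face
    probs = gmm.predict_proba(x)     # soft assignments, if needed downstream
    return labels, probs

# Toy usage: real SDF values would come from ray casting inside the mesh.
sdf = np.concatenate([np.random.normal(0.2, 0.02, 500),   # thin parts
                      np.random.normal(0.8, 0.05, 500)])  # thick parts
labels, _ = segment_by_sdf(sdf, k=2)
```

Consistency across a group of similar models would additionally require fitting a single mixture to the pooled SDF values of the whole group (or matching mixture components across models); the abstract does not detail that step.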


2008 ◽  
Vol E91-D (4) ◽  
pp. 1149-1158 ◽  
Author(s):  
B. Zheng ◽  
J. Takamatsu ◽  
K. Ikeuchi

Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2288
Author(s):  
Rohan Tahir ◽  
Allah Bux Sargano ◽  
Zulfiqar Habib

In recent years, learning-based approaches to 3D reconstruction have gained much popularity due to their encouraging results. However, unlike 2D images, 3D data has no canonical representation that is both computationally lean and memory-efficient. Moreover, generating a 3D model directly from a single 2D image is even more challenging because the image provides only limited detail for 3D reconstruction. Existing learning-based techniques still lack the resolution, efficiency, and smoothness of the 3D models required for many practical applications. In this paper, we propose voxel-based 3D object reconstruction (V3DOR) from a single 2D image for better accuracy, with two models: one using autoencoders (AE) and the other using variational autoencoders (VAE). The encoder part of both models learns a suitable compressed latent representation from a single 2D image, and a decoder generates the corresponding 3D model. Our contribution is twofold. First, to the best of the authors' knowledge, this is the first time variational autoencoders (VAE) have been employed for the 3D reconstruction problem. Second, the proposed models extract a discriminative set of features and generate smoother, higher-resolution 3D models. To evaluate the efficacy of the proposed method, experiments were conducted on the ShapeNet benchmark dataset. The results confirm that the proposed method outperforms state-of-the-art methods.
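The abstract does not specify the network architecture. A minimal PyTorch sketch of the general idea (a 2D convolutional encoder compressing the image into a latent vector and a 3D transposed-convolution decoder expanding it into a voxel occupancy grid) is given below; the layer sizes, the 64x64 input, the 32^3 output resolution, and the plain-autoencoder form are assumptions for illustration. The VAE variant would additionally predict a mean and log-variance and sample the latent via the reparameterization trick.

```python
# Hedged sketch (PyTorch): single RGB image -> voxel occupancy grid.
# Layer sizes and the 32^3 output resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Image2Voxel(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # 2D convolutional encoder: 3x64x64 image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),     # 32x32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),   # 128x8x8
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),  # 256x4x4
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, latent_dim),
        )
        # 3D transposed-convolution decoder: latent -> 1x32x32x32 occupancy
        self.fc = nn.Linear(latent_dim, 256 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        z = self.encoder(img)                      # compressed latent code
        vox = self.fc(z).view(-1, 256, 4, 4, 4)
        return self.decoder(vox)                   # occupancy probs in [0, 1]

# Toy usage: one RGB image in, one 32^3 occupancy grid out.
model = Image2Voxel()
occupancy = model(torch.randn(1, 3, 64, 64))       # shape (1, 1, 32, 32, 32)
# Training would typically minimise voxel-wise binary cross-entropy;
# the VAE variant adds a KL-divergence term on the latent distribution.
```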


Author(s):  
Enrique Hernández Murillo ◽  
Gonzalo López Nicolás ◽  
Rosario Aragüés

Volumetric reconstruction of unknown objects is essential in robotic manipulation. Building the 3D model requires a set of views, so we consider a multi-camera scenario. We study an effective configuration strategy that addresses camera constraints such as limited field of view and self-occlusions.
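The abstract gives no algorithmic detail, but the kind of constraint it mentions can be illustrated with a simple coverage test: given each camera's pose and field of view, check which voxel centres of a candidate volume it can see. The geometry below is a generic sketch (self-occlusions are ignored), not the configuration strategy of the paper.

```python
# Hedged sketch: which voxel centres fall inside a camera's viewing cone?
# Generic geometry for illustration; occlusion handling is omitted.
import numpy as np

def visible_voxels(voxels, cam_pos, cam_dir, fov_deg=60.0, max_range=2.0):
    """Boolean mask of voxel centres within the camera's field of view."""
    d = np.asarray(cam_dir, dtype=float)
    d /= np.linalg.norm(d)
    rays = voxels - np.asarray(cam_pos, dtype=float)   # camera -> voxel
    dist = np.linalg.norm(rays, axis=1)
    cos_angle = (rays @ d) / np.maximum(dist, 1e-9)
    half_fov = np.deg2rad(fov_deg) / 2.0
    return (cos_angle >= np.cos(half_fov)) & (dist <= max_range)

# Toy usage: coverage of a unit cube of voxel centres by two cameras.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 8)] * 3), axis=-1).reshape(-1, 3)
cams = [((-1.0, 0.5, 0.5), (1.0, 0.0, 0.0)),
        ((0.5, -1.0, 0.5), (0.0, 1.0, 0.0))]
covered = np.zeros(len(grid), dtype=bool)
for pos, direction in cams:
    covered |= visible_voxels(grid, pos, direction)
print(f"covered {covered.sum()} / {len(grid)} voxel centres")
```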

