Trilateral convolutional neural network for 3D shape reconstruction of objects from a single depth view

2019, Vol. 13(13), pp. 2457-2466
Author(s): Patricio Rivera, Edwin Valarezo Añazco, Mun-Taek Choi, Tae-Seong Kim
Photonics, 2021, Vol. 8(11), pp. 459
Author(s): Hieu Nguyen, Zhaoyang Wang

Accurate three-dimensional (3D) shape reconstruction of objects from a single image is a challenging task, yet it is highly demanded by numerous applications. This paper presents a novel 3D shape reconstruction technique integrating a high-accuracy structured-light method with a deep neural network learning scheme. The proposed approach employs a convolutional neural network (CNN) to transform a color structured-light fringe image into multiple triple-frequency phase-shifted grayscale fringe images, from which the 3D shape can be accurately reconstructed. The robustness of the proposed technique is verified, and it can be a promising 3D imaging tool in future scientific and industrial applications.
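Once the CNN has produced the phase-shifted grayscale fringe images, the phase at each frequency can be recovered with the standard three-step phase-shifting formula. The sketch below shows only that textbook step, not the authors' network or code; the function name and the synthetic fringes are illustrative assumptions.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Recover the wrapped phase map from three fringe images with
    phase shifts of -2*pi/3, 0, +2*pi/3 (standard 3-step formula):
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build fringes from a known phase and recover it.
phi = np.linspace(-1.0, 1.0, 256)          # ground-truth phase (radians)
a, b = 0.5, 0.4                            # background and modulation
shifts = (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)
i1, i2, i3 = (a + b * np.cos(phi + s) for s in shifts)

assert np.allclose(wrapped_phase(i1, i2, i3), phi, atol=1e-6)
```

The recovered phase is wrapped to (-pi, pi]; in practice the triple-frequency design mentioned in the abstract is what allows the wrapped phase to be unwrapped unambiguously before converting it to depth.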


2021, Vol. 2021, pp. 1-9
Author(s): Fei Wang, Yu Yang, Baoquan Zhao, Dazhi Jiang, Siwei Chen, ...

In this paper, we introduce a novel 3D shape reconstruction method from a single-view sketch image based on a deep neural network. The proposed pipeline is mainly composed of three modules. The first module is sketch component segmentation based on multimodal DNN fusion and is used to segment a given sketch into a series of basic units and build a transformation template by the knots between them. The second module is a nonlinear transformation network for multifarious sketch generation with the obtained transformation template. It creates the transformation representation of a sketch by extracting the shape features of an input sketch and transformation template samples. The third module is deep 3D shape reconstruction using multifarious sketches, which takes the obtained sketches as input to reconstruct 3D shapes with a generative model. It fuses and optimizes features of multiple views and thus is more likely to generate high-quality 3D shapes. To evaluate the effectiveness of the proposed method, we conduct extensive experiments on a public 3D reconstruction dataset. The results demonstrate that our model can achieve better reconstruction performance than peer methods. Specifically, compared to the state-of-the-art method, the proposed model achieves a performance gain in terms of the five evaluation metrics by an average of 25.5% on the man-made model dataset and 23.4% on the character object dataset using synthetic sketches and by an average of 31.8% and 29.5% on the two datasets, respectively, using human drawing sketches.
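The three-module pipeline described above can be sketched at the data-flow level. Every name below (`SketchPipeline`, `segment`, `generate_views`, `reconstruct`) is a hypothetical stand-in rather than the authors' code, and the toy callables only trace how each module's output feeds the next; the real modules are deep networks.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SketchPipeline:
    """Hypothetical wrapper mirroring the paper's three-stage structure:
    (1) segment the sketch into basic units, (2) generate multifarious
    sketch views from the transformation template, (3) reconstruct a 3D
    shape from the generated views. All stages here are placeholders."""
    segment: Callable[[str], List[str]]
    generate_views: Callable[[List[str]], List[str]]
    reconstruct: Callable[[List[str]], str]

    def __call__(self, sketch: str) -> str:
        units = self.segment(sketch)          # module 1: component segmentation
        views = self.generate_views(units)    # module 2: multifarious sketch generation
        return self.reconstruct(views)        # module 3: multi-view 3D reconstruction

# Toy stand-ins that just trace the data flow through the three modules.
pipe = SketchPipeline(
    segment=lambda s: [f"{s}:unit{k}" for k in range(3)],
    generate_views=lambda units: [u + ":view" for u in units],
    reconstruct=lambda views: f"mesh({len(views)} views)",
)

print(pipe("chair_sketch"))  # prints "mesh(3 views)"
```

The design point the paper makes is the fusion in module 3: feeding several generated views into the generative model, rather than the single input sketch alone, is what improves reconstruction quality.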


2021, pp. 102228
Author(s): Xiang Chen, Nishant Ravikumar, Yan Xia, Rahman Attar, Andres Diaz-Pinto, ...

Author(s): Riccardo Spezialetti, David Joseph Tan, Alessio Tonioni, Keisuke Tateno, Federico Tombari
