A Triangle Mesh Reconstruction Method Taking into Account Silhouette Images

Author(s):  
Michihiro Mikamo ◽  
Yoshinori Oki ◽  
Marco Visentini-Scarzanella ◽  
Hiroshi Kawasaki ◽  
Ryo Furukawa ◽  
...

Author(s):
Ji Ma ◽  
Hsi-Yung Feng ◽  
Lihui Wang

Automatic and reliable reconstruction of sharp features remains an open research issue in triangle mesh surface reconstruction. This paper presents a new feature-sensitive mesh reconstruction method based on dependable neighborhood geometric information per input point. Such information is derived from the matching result of the local umbrella mesh constructed at each point. Unlike existing post-processing algorithms, the proposed algorithm reconstructs the triangle mesh via an integrated and progressive reconstruction process and features a unified multi-level inheritance priority queuing mechanism to prioritize the inclusion of each candidate triangle. A novel flatness-sensitive filter, referred to as the normal vector cone filter, is introduced in this work and used to reliably reconstruct sharp features. In addition, the proposed algorithm aims to reconstruct a watertight manifold triangle mesh that passes through the complete original point set without point addition or removal. The algorithm has been implemented and validated using publicly available point cloud data sets. Compared to the original object geometry, the reconstructed triangle meshes preserve sharp features well and contain only minor shape deviations.
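The normal vector cone filter is described only at a high level in the abstract; the sketch below is a minimal interpretation of such a flatness-sensitive test, assuming the criterion is whether a candidate triangle's normal stays within a fixed angular cone around the local reference normal. The function names, the cone half-angle, and the mean-normal cone axis are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of the triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n

def within_normal_cone(candidate_normal, reference_normals, half_angle_deg=30.0):
    """Return True if the candidate triangle's normal lies inside the cone
    spanned around the mean of the reference normals (hypothetical criterion)."""
    axis = np.mean(reference_normals, axis=0)
    axis /= np.linalg.norm(axis)
    cos_angle = np.clip(np.dot(candidate_normal, axis), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= half_angle_deg

# Hypothetical usage: defer candidates whose normals leave the cone,
# so flat regions are meshed first and sharp folds are handled with care.
p = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
flat = triangle_normal(p[0], p[1], p[2])
steep = triangle_normal(p[0], p[1], p[3])
print(within_normal_cone(steep, [flat]))   # False for a ~90 degree fold
```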


2003 ◽  
Vol 19 (1) ◽  
pp. 23-37 ◽  
Author(s):  
Yong-Jin Liu ◽  
Matthew Ming-Fai Yuen

Author(s):  
Zhengjie Deng ◽  
Shuqian He ◽  
Chun Shi ◽  
Jianping Feng ◽  
Cuihua Ma

2002 ◽  
Vol 2 (3) ◽  
pp. 160-170 ◽  
Author(s):  
Jianbing Huang ◽  
Chia-Hsiang Menq

In this paper, a systematic scheme is proposed and novel technologies are developed to automatically reconstruct a CAD model from a set of point clouds scanned from the boundary surface of an existing object. The proposed scheme is composed of three major steps. In the first step, multiple input point clouds are incrementally integrated into a watertight triangle mesh to recover the object shape. In the second step, mesh segmentation is applied to the triangle mesh to extract individual geometric feature surfaces. Finally, the manifold topology describing the connectivity between different geometric surfaces is automatically extracted and the mathematical description of each geometric feature is computed. The computed topology and geometry information, represented in the ACIS modeling kernel, form a CAD model that may be used for various downstream applications. Compared with prior work, the proposed approach has the unique advantage that the processes of recognizing geometric features and of reconstructing CAD models are fully automated. Integrated with state-of-the-art scanning devices, the developed model reconstruction method can be used to support reverse engineering of high-precision mechanical components. It has potential applications to many engineering problems with a major impact on rapid design and prototyping, shape analysis, and virtual reality.
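The final step fits a mathematical description to each segmented feature surface. As a hedged illustration of that kind of fit (not the paper's ACIS-based routine), the sketch below fits a plane to a segmented vertex patch by least squares via SVD and reports the residual; analogous fits would apply to cylinders, spheres and other analytic feature surfaces.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    The normal is the right singular vector of the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def plane_rms_error(points, centroid, normal):
    """RMS distance of the points from the fitted plane."""
    d = (points - centroid) @ normal
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical usage on a noisy planar patch extracted by mesh segmentation.
rng = np.random.default_rng(0)
patch = rng.uniform(-1, 1, size=(200, 3))
patch[:, 2] = 0.1 * patch[:, 0] + 0.02 * rng.normal(size=200)  # tilted plane + noise
c, n = fit_plane(patch)
print(plane_rms_error(patch, c, n))
```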


Author(s):  
S. Song ◽  
R. Qin

Abstract. Image-based 3D modelling is rather mature nowadays for well-acquired images processed through a standard photogrammetric pipeline, whereas fusing 3D datasets generated from images with different views for surface reconstruction remains a challenge. Meshing algorithms for image-based 3D datasets require per-surface visibility information, and such information can be difficult to obtain for 3D point clouds generated from images with different views, sources, resolutions and uncertainties. In this paper, we propose a novel multi-source mesh reconstruction and texture mapping pipeline optimized to address this challenge. Our key contributions are: 1) we extend a state-of-the-art image-based surface reconstruction method by incorporating geometric information produced by satellite images to create a wide-area surface model; 2) we extend a texture mapping method to accommodate images acquired from different sensors, i.e. side-view perspective images and satellite images. Experiments show that our method creates a conforming surface model from these two sources, as well as consistent and well-balanced textures from images with drastically different radiometry (satellite images vs. street-view level images). We compared our proposed pipeline with a typical fusion pipeline, Poisson reconstruction, and the results show that our pipeline has distinctive advantages.
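For context on the baseline used in the comparison, the following sketch merges two hypothetical point clouds from different sources and meshes them with Poisson surface reconstruction via Open3D. This is only an illustration of the fusion baseline the authors compare against, not their proposed pipeline; the array names and parameter values are placeholders.

```python
# Baseline fusion sketch (not the authors' pipeline): merge two point clouds
# from different sources and mesh them with Poisson reconstruction in Open3D.
import numpy as np
import open3d as o3d

def to_cloud(xyz):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    return pcd

# Placeholder data standing in for satellite-derived and street-view-derived points.
satellite_xyz = np.random.rand(2000, 3) * [100.0, 100.0, 10.0]
street_xyz = np.random.rand(2000, 3) * [100.0, 100.0, 10.0]

merged = to_cloud(satellite_xyz) + to_cloud(street_xyz)
merged.estimate_normals()  # Poisson needs normals; orientation here is naive
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=8)
print(mesh)
```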


2007 ◽  
Vol 10-12 ◽  
pp. 777-781 ◽  
Author(s):  
Yang Wang ◽  
Han Ming Lv ◽  
Shi Jun Ji

A triangle mesh is reconstructed from an unorganized point cloud through two phases of mesh growing based on different strategies: regions with high point density grow in the first phase and the remaining regions grow later. Within each phase, the smoothest regions always grow first, which largely avoids errors emerging in sharp regions. The presented geometric-integrity test technique, together with the abnormality-handling method, ensures that the reconstructed mesh has a correct geometric structure. Experiments show that the algorithm is efficient and effective.
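The growth ordering can be pictured as a single priority queue keyed first by phase (dense regions before sparse ones) and then by a local smoothness cost. The sketch below is a minimal illustration of that ordering only, with hypothetical candidate tuples; the paper's actual growing criteria, integrity test and abnormality handling are not reproduced.

```python
import heapq

def push_candidate(heap, phase, smoothness_cost, triangle):
    """Order candidates by growth phase first (dense regions = phase 0),
    then by smoothness cost, so the flattest regions are meshed earliest."""
    heapq.heappush(heap, (phase, smoothness_cost, triangle))

def grow(candidates):
    heap = []
    for phase, cost, tri in candidates:
        push_candidate(heap, phase, cost, tri)
    order = []
    while heap:
        _, _, tri = heapq.heappop(heap)
        order.append(tri)
    return order

# Hypothetical candidates: (phase, smoothness cost, vertex-index triple)
cands = [(1, 0.3, (7, 8, 9)), (0, 0.9, (1, 2, 3)), (0, 0.1, (4, 5, 6))]
print(grow(cands))  # dense/flat triangles first: (4,5,6), (1,2,3), then (7,8,9)
```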


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4572
Author(s):  
Do-Yeop Kim ◽  
Ju-Yong Chang

Three-dimensional human mesh reconstruction from a single video has made much progress in recent years due to the advances in deep learning. However, previous methods still often reconstruct temporally noisy pose and mesh sequences given in-the-wild video data. To address this problem, we propose a human pose refinement network (HPR-Net) based on a non-local attention mechanism. The pipeline of the proposed framework consists of a weight-regression module, a weighted-averaging module, and a skinned multi-person linear (SMPL) module. First, the weight-regression module creates pose affinity weights from a 3D human pose sequence represented in a unit quaternion form. Next, the weighted-averaging module generates a refined 3D pose sequence by performing temporal weighted averaging using the generated affinity weights. Finally, the refined pose sequence is converted into a human mesh sequence using the SMPL module. HPR-Net is a simple but effective post-processing network that can substantially improve the accuracy and temporal smoothness of 3D human mesh sequences obtained from an input video by existing human mesh reconstruction methods. Our experiments show that the noisy results of the existing methods are consistently improved using the proposed method on various real datasets. Notably, our proposed method reduces the pose and acceleration errors of VIBE, the existing state-of-the-art human mesh reconstruction method, by 1.4% and 66.5%, respectively, on the 3DPW dataset.
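The weighted-averaging module can be illustrated with a small sketch: given a unit-quaternion pose sequence and a temporal affinity matrix (learned by the weight-regression module in HPR-Net, but taken as a fixed Gaussian here), each frame is replaced by a sign-aligned, renormalized weighted average of its neighbours. The normalized-sum quaternion average is a common approximation for nearby rotations and is an assumption of this sketch, not necessarily the exact operation used in the paper.

```python
import numpy as np

def refine_pose_sequence(quats, affinity):
    """Temporal weighted averaging of a unit-quaternion pose sequence.
    quats: (T, 4) unit quaternions for one joint; affinity: (T, T) weights
    (hypothetical here; in HPR-Net they come from the weight-regression module).
    Quaternions are sign-aligned to the reference frame before averaging,
    then the weighted sum is renormalized."""
    w = affinity / affinity.sum(axis=1, keepdims=True)
    refined = np.empty_like(quats)
    for t in range(quats.shape[0]):
        q = quats.copy()
        flip = (q @ quats[t]) < 0          # align hemispheres with frame t
        q[flip] *= -1.0
        avg = w[t] @ q
        refined[t] = avg / np.linalg.norm(avg)
    return refined

# Hypothetical usage with Gaussian temporal affinities on a noisy sequence.
T = 5
quats = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (T, 1))
quats += 0.05 * np.random.default_rng(0).normal(size=(T, 4))
quats /= np.linalg.norm(quats, axis=1, keepdims=True)
idx = np.arange(T)
affinity = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2)
print(refine_pose_sequence(quats, affinity))
```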

