A Mesh Reconstruction Method Based on View Maps

Author(s):  
Zhengjie Deng ◽  
Shuqian He ◽  
Chun Shi ◽  
Jianping Feng ◽  
Cuihua Ma


Author(s):  
Ji Ma ◽  
Hsi-Yung Feng ◽  
Lihui Wang

Automatic and reliable reconstruction of sharp features remains an open research issue in triangle mesh surface reconstruction. This paper presents a new feature-sensitive mesh reconstruction method based on dependable neighborhood geometric information at each input point. Such information is derived from the matching result of the local umbrella mesh constructed at each point. Unlike existing post-processing algorithms, the proposed algorithm reconstructs the triangle mesh via an integrated and progressive reconstruction process and features a unified multi-level inheritance priority queuing mechanism to prioritize the inclusion of each candidate triangle. A novel flatness-sensitive filter, referred to as the normal vector cone filter, is introduced in this work and used to reliably reconstruct sharp features. In addition, the proposed algorithm aims to reconstruct a watertight manifold triangle mesh that passes through the complete original point set without point addition or removal. The algorithm has been implemented and validated on publicly available point cloud data sets. Compared to the original object geometry, the reconstructed triangle meshes preserve the sharp features well and contain only minor shape deviations.
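The abstract describes the normal vector cone filter only at a high level. Below is a minimal sketch of the kind of flatness test such a filter could perform: a neighborhood is treated as flat when all of its normals fit inside a cone around their mean direction, so a sharp edge (where normals spread wide) fails the test. The function name `within_normal_cone` and the 15° half-angle are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def within_normal_cone(normals, half_angle_deg=15.0):
    """Flatness test: True when all neighborhood normals fit inside a
    cone of the given half-angle around their mean direction.

    At a sharp feature the normals spread wider than the cone, so the
    test fails and the region is treated as non-flat.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)   # unit normals
    axis = n.mean(axis=0)
    axis = axis / np.linalg.norm(axis)                 # cone axis = mean normal
    cos_half = np.cos(np.radians(half_angle_deg))
    return bool(np.all(n @ axis >= cos_half))

# Flat patch: nearly parallel normals pass the test.
flat = [[0.0, 0.0, 1.0], [0.05, 0.0, 1.0], [0.0, 0.05, 1.0]]
# Sharp edge: normals of two faces meeting at ~90 degrees fail it.
edge = [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]
```

In a reconstruction loop, a candidate triangle whose neighborhood fails this test would be handled by the feature-preserving path rather than the flat-surface path.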


Author(s):  
S. Song ◽  
R. Qin

Abstract. Image-based 3D modelling is now rather mature for well-acquired images processed through the standard photogrammetric pipeline, but fusing 3D datasets generated from images with different views for surface reconstruction remains a challenge. Meshing algorithms for image-based 3D datasets require visibility information for surfaces, and such information can be difficult to obtain for 3D point clouds generated from images with different views, sources, resolutions and uncertainties. In this paper, we propose a novel multi-source mesh reconstruction and texture mapping pipeline optimized to address this challenge. Our key contributions are: 1) we extend a state-of-the-art image-based surface reconstruction method by incorporating geometric information produced from satellite images to create a wide-area surface model; 2) we extend a texture mapping method to accommodate images acquired from different sensors, i.e. side-view perspective images and satellite images. Experiments show that our method creates a conforming surface model from these two sources, as well as consistent and well-balanced textures from images with drastically different radiometry (satellite images vs. street-view level images). We compared the proposed pipeline with a typical fusion pipeline, Poisson reconstruction, and the results show that our pipeline has distinctive advantages.
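The abstract does not specify how the drastically different radiometry of satellite and street-view images is balanced. One common stand-in for that step is Reinhard-style mean/variance colour transfer, sketched below; `match_radiometry` is a hypothetical helper and this is a generic technique, not necessarily the paper's own balancing method.

```python
import numpy as np

def match_radiometry(src, ref):
    """Mean/variance colour transfer: shift each channel of `src` to the
    per-channel mean and standard deviation of `ref`.

    A simple stand-in for radiometric balancing when texturing one mesh
    from sensors with very different radiometry (e.g. satellite vs.
    street-level images).
    """
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        scale = r_sd / s_sd if s_sd > 0 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + r_mu
    return np.clip(out, 0, 255)
```

In practice such a transfer would be applied per texture patch before blending, so seams between patches from different sensors remain consistent.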


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4572
Author(s):  
Do-Yeop Kim ◽  
Ju-Yong Chang

Three-dimensional human mesh reconstruction from a single video has made much progress in recent years due to the advances in deep learning. However, previous methods still often reconstruct temporally noisy pose and mesh sequences given in-the-wild video data. To address this problem, we propose a human pose refinement network (HPR-Net) based on a non-local attention mechanism. The pipeline of the proposed framework consists of a weight-regression module, a weighted-averaging module, and a skinned multi-person linear (SMPL) module. First, the weight-regression module creates pose affinity weights from a 3D human pose sequence represented in a unit quaternion form. Next, the weighted-averaging module generates a refined 3D pose sequence by performing temporal weighted averaging using the generated affinity weights. Finally, the refined pose sequence is converted into a human mesh sequence using the SMPL module. HPR-Net is a simple but effective post-processing network that can substantially improve the accuracy and temporal smoothness of 3D human mesh sequences obtained from an input video by existing human mesh reconstruction methods. Our experiments show that the noisy results of the existing methods are consistently improved using the proposed method on various real datasets. Notably, our proposed method reduces the pose and acceleration errors of VIBE, the existing state-of-the-art human mesh reconstruction method, by 1.4% and 66.5%, respectively, on the 3DPW dataset.
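The weighted-averaging module described above can be sketched as a temporal weighted average over a unit-quaternion pose sequence. The function name `refine_pose_sequence` is illustrative, and the hemisphere alignment step is a standard precaution when averaging quaternions (q and -q encode the same rotation), not necessarily HPR-Net's exact formulation.

```python
import numpy as np

def refine_pose_sequence(quats, weights):
    """Temporal weighted averaging of a unit-quaternion pose sequence.

    quats:   (T, 4) unit quaternions, one rotation per frame
    weights: (T, T) affinity weights; row t weights every frame's
             contribution to the refined pose at frame t
    Returns the (T, 4) refined, re-normalized quaternion sequence.
    """
    q = np.asarray(quats, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum(axis=1, keepdims=True)              # rows sum to 1
    # Hemisphere alignment: flip each frame's quaternion to agree in
    # sign with the first frame before averaging.
    signs = np.where(q @ q[0] < 0.0, -1.0, 1.0)
    q = q * signs[:, None]
    refined = w @ q                                   # temporal averaging
    return refined / np.linalg.norm(refined, axis=1, keepdims=True)
```

In the described pipeline, the affinity weights would come from the weight-regression module, and the refined pose sequence would then be fed to the SMPL module to produce the mesh sequence.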


Author(s):  
Michihiro Mikamo ◽  
Yoshinori Oki ◽  
Marco Visentini-Scarzanella ◽  
Hiroshi Kawasaki ◽  
Ryo Furukawa ◽  
...  

2019 ◽  
Vol 9 (6) ◽  
pp. 1086-1094 ◽  
Author(s):  
Zhong Chen ◽  
Zhiwei Hou ◽  
Mujian Xia ◽  
Yuegang Xing

In medical applications, it is important to reconstruct surface meshes from Computed Tomography (CT) images. Surface mesh reconstruction of biological tissues suffers from staircase artifacts due to anisotropic CT data. To solve this problem, this paper proposes an adaptive surface mesh reconstruction method. We convert the contour pixels of the medical images into contour points and exploit an adaptive spherical cover to produce an approximating surface based on the contour points. Because the reconstruction quality depends on accurate normal estimation, we compute the normal vectors from the negative gradient of the 3D binary volume data instead of using classical principal component analysis (PCA); we then cover the contour points with adaptive spheres and link the auxiliary points in the spheres to reconstruct adaptive triangular meshes. The presented method has been applied to CT images of the first cervical vertebra (C1), the scapula, and the third lumbar vertebra (L3), and the results are analyzed regarding smoothness, accuracy and mesh quality. The results show that our method can reconstruct smooth, accurate and high-quality adaptive surface meshes.
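The negative-gradient normal estimation mentioned above can be sketched directly: with inside voxels set to 1 and outside to 0, the gradient of the volume points inward, so its negation points outward along the surface normal. The helper name `gradient_normals` is illustrative, and any smoothing of the binary volume before differentiation is omitted for brevity.

```python
import numpy as np

def gradient_normals(volume, points):
    """Estimate outward surface normals at voxel coordinates from the
    negative gradient of a 3D binary volume (inside=1, outside=0),
    as an alternative to PCA-based normal estimation.

    volume: (Z, Y, X) binary occupancy array
    points: (N, 3) integer voxel indices (z, y, x) on the contour
    """
    # Central-difference gradients along each axis, in (z, y, x) order.
    gz, gy, gx = np.gradient(volume.astype(float))
    idx = tuple(np.asarray(points, dtype=int).T)       # fancy (z, y, x) indexing
    g = np.stack([gz[idx], gy[idx], gx[idx]], axis=1)
    n = -g                                             # negate: point outward
    norms = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.maximum(norms, 1e-12)                # normalize safely

# Example: a solid half-space occupying z < 4 in an 8^3 grid; the
# outward normal on its top surface should point along +z.
vol = np.zeros((8, 8, 8))
vol[:4] = 1.0
```

These per-point normals then orient the adaptive spheres and the triangles linking the auxiliary points inside them.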

