Texture Reconstruction
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 6)
H-INDEX: 8 (five years: 0)

2021, pp. 720-724. Author(s): Chen Qijing, Peng Fang, Zhang Meng


2021, Vol 13 (21), pp. 4254. Author(s): Mingyun Wen, Jisun Park, Kyungeun Cho

This study focuses on reconstructing accurate meshes with high-resolution textures from single images. The reconstruction process involves two networks: a mesh-reconstruction network and a texture-reconstruction network. The mesh-reconstruction network estimates a deformation map, which is used to deform a template mesh to the shape of the target object in the input image, together with a low-resolution texture. We propose reconstructing a mesh with a high-resolution texture by enhancing the low-resolution texture with a super-resolution method. The architecture of the texture-reconstruction network is similar to that of a generative adversarial network (GAN), comprising a generator and a discriminator. During the training of the texture-reconstruction network, the discriminator must focus on the quality of the predicted textures and ignore the difference between the generated mesh and the actual mesh. To achieve this, we used meshes reconstructed by the mesh-reconstruction network, together with textures generated through inverse rendering, to produce pseudo-ground-truth images. We conducted experiments on the 3D-Future dataset, and the results show that our approach generates better three-dimensional (3D) textured meshes than existing methods, both quantitatively and qualitatively. In addition, the texture of the output image is significantly improved.
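The texture super-resolution step described above can be sketched with a stand-in: plain bilinear upsampling of the low-resolution texture. In the paper this step is a learned GAN generator, so the function below only illustrates the data flow (low-resolution texture in, enlarged texture out); the function name and `factor` parameter are our own assumptions, not the paper's API.

```python
import numpy as np

def upsample_texture(tex: np.ndarray, factor: int) -> np.ndarray:
    """Bilinearly upsample an HxWxC texture by an integer factor.

    A stand-in for the learned super-resolution generator: in the
    paper's pipeline a trained network predicts the high-resolution
    texture instead of interpolating.
    """
    h, w, _ = tex.shape
    out_h, out_w = h * factor, w * factor
    # Source-texture coordinates for each output pixel (pixel centers).
    ys = np.clip((np.arange(out_h) + 0.5) / factor - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) / factor - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # vertical blend weights
    wx = (xs - x0)[None, :, None]   # horizontal blend weights
    top = tex[y0][:, x0] * (1 - wx) + tex[y0][:, x1] * wx
    bot = tex[y1][:, x0] * (1 - wx) + tex[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A real super-resolution generator would additionally hallucinate high-frequency detail that interpolation cannot recover, which is what the adversarial loss encourages.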



2021, pp. 104311. Author(s): Xiaoxing Zeng, Ruyun Hu, Wu Shi, Yu Qiao


2021, Vol 40 (2), pp. 523-535. Author(s): Hyomin Kim, Jungeon Kim, Hyeonseo Nam, Jaesik Park, Seungyong Lee




2020, Vol 12 (23), pp. 3908. Author(s): Shenhong Li, Xiongwu Xiao, Bingxuan Guo, Lin Zhang

The Markov Random Field (MRF) energy function constructed by existing OpenMVS-based 3D texture reconstruction algorithms considers only the image labels of adjacent triangle faces for the smoothness term and ignores the planar-structure information of the model. As a result, the generated texture charts contain too many fragments, leading to serious local miscuts and color discontinuities between texture charts. This paper fully utilizes the planar-structure information of the mesh model and the visual information of each 3D triangle face on the images, and proposes an improved, faster, high-quality texture chart generation method based on the texture chart generation algorithm of OpenMVS. The methodology is as follows: (1) The visual quality of each triangle face on its different visible images is scored using the visual information of the triangle face on each image of the mesh model. (2) A fully automatic Variational Shape Approximation (VSA) plane segmentation algorithm segments the blocked 3D mesh model. The algorithm is suitable for multi-threaded parallel processing, which removes the original VSA framework's need to manually set the number of planes and addresses its low computational efficiency on large scene models. (3) The visual quality of each triangle face on its different visible images is used as the data term, and the image labels of adjacent triangles together with the plane segmentation result are used as the smoothness term to construct the MRF energy function. (4) An image label is assigned to each triangle by minimizing the energy function. A texture chart is generated by clustering topologically adjacent triangle faces with the same image label, and the jagged boundaries of the texture chart are smoothed. Three datasets of different types were used for quantitative and qualitative evaluation.
Compared with the original OpenMVS texture chart generation method, the experiments show that the proposed approach significantly reduces the number of texture charts, significantly reduces miscuts and color differences between texture charts, and substantially improves the efficiency of the VSA plane segmentation algorithm and of OpenMVS texture reconstruction.
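The shape of the MRF energy in steps (3) and (4) can be illustrated with a toy sketch: a data term from per-face visual-quality scores plus a smoothness term that penalizes label changes between adjacent faces, with an extra penalty when the seam cuts through a single VSA plane. All quality scores, adjacency, plane ids, weights, and function names below are hypothetical illustrations, and brute-force search stands in for a real MRF solver.

```python
from itertools import product

# Hypothetical toy inputs: quality[face][image] is how well the face is
# seen in that image, adjacency lists face pairs sharing an edge, and
# plane_of_face comes from the VSA plane segmentation.
quality = {0: {0: 0.9, 1: 0.2},
           1: {0: 0.8, 1: 0.4},
           2: {0: 0.1, 1: 0.9}}
adjacency = [(0, 1), (1, 2)]
plane_of_face = {0: 0, 1: 0, 2: 1}

def mrf_energy(labels, smooth_w=0.5, plane_w=0.25):
    # Data term: penalize assigning a face to an image where it is seen poorly.
    data = sum(1.0 - quality[f][labels[f]] for f in labels)
    # Smoothness term: penalize label changes across adjacent faces, with an
    # extra penalty when the seam lies inside one segmented plane.
    smooth = 0.0
    for f, g in adjacency:
        if labels[f] != labels[g]:
            smooth += smooth_w
            if plane_of_face[f] == plane_of_face[g]:
                smooth += plane_w
    return data + smooth

def best_labeling(images=(0, 1)):
    # Brute force over all labelings: fine for a toy graph; the paper
    # minimizes the energy with a proper MRF optimizer.
    faces = sorted(quality)
    return min((dict(zip(faces, combo))
                for combo in product(images, repeat=len(faces))),
               key=mrf_energy)
```

In this toy example the optimum places the seam between faces 1 and 2, where the plane boundary already is, which is exactly the behavior the plane-aware smoothness term is meant to encourage.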



2020, Vol 42 (10), pp. 2508-2522. Author(s): Lan Xu, Zhuo Su, Lei Han, Tao Yu, Yebin Liu, ...


Sensors, 2020, Vol 20 (15), pp. 4330. Author(s): Xinqi Liu, Jituo Li, Guodong Lu

High-quality 3D reconstruction results are very important in many application fields. However, current texture generation methods based on point sampling and fusion often produce blur. To solve this problem, we propose a new volumetric fusion strategy that can be embedded in current online and offline reconstruction frameworks as a basic module to achieve excellent geometry and texture. The improvement comes from two aspects. First, we establish an adaptive weight field to evaluate and adjust the reliability of data from RGB-D images using a probabilistic and heuristic method. By using this adaptive weight field to guide the voxel fusion process, we can effectively preserve the local texture structure of the mesh, avoid incorrect texture assignments, and suppress the influence of outlier noise on the geometric surface. Second, we use a new texture fusion strategy that combines replacement, integration, and fixedness operations to fuse and update voxel textures, reducing blur. Experimental results demonstrate that, compared with the classical KinectFusion, our approach significantly improves geometric accuracy and texture clarity, and achieves in real time texture reconstruction quality equivalent to that of offline methods such as intrinsic3d, with even better results in relief scenes.
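The weighted voxel fusion described above can be sketched as a KinectFusion-style weighted running average, with a toy replace-or-integrate rule standing in for the paper's replacement/integration/fixedness strategy. The per-observation weight `w_obs` is just a number here; in the paper it would come from the adaptive weight field. All names and thresholds below are illustrative assumptions.

```python
def fuse_voxel(voxel, sdf_obs, color_obs, w_obs):
    """Update one voxel with a new observation, KinectFusion-style.

    `voxel` is a dict with keys "sdf", "weight", "color"; `w_obs` is the
    reliability of the new RGB-D sample (from the adaptive weight field
    in the paper; here supplied by the caller).
    """
    w = voxel["weight"]
    total = w + w_obs
    # Geometry: standard weighted running average of the signed distance.
    voxel["sdf"] = (w * voxel["sdf"] + w_obs * sdf_obs) / total
    # Texture: replace when the new observation is far more reliable than
    # the accumulated one, integrate otherwise (simplified stand-in for
    # the replacement/integration/fixedness strategy).
    if w_obs > 2.0 * w:
        voxel["color"] = list(color_obs)                       # replace
    else:
        voxel["color"] = [(w * c + w_obs * o) / total          # integrate
                          for c, o in zip(voxel["color"], color_obs)]
    voxel["weight"] = min(total, 64.0)  # cap so old data stays updatable
    return voxel
```

Plain running-average color fusion is exactly what causes the blur the paper targets: averaging slightly misaligned samples smears texture detail, which is why a reliability-gated replace path helps.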




