texture generation
Recently Published Documents


TOTAL DOCUMENTS: 121 (FIVE YEARS: 29)
H-INDEX: 13 (FIVE YEARS: 3)

2021
Author(s): Stefan Hormann, Arka Bhowmick, Michael Weiher, Karl Leiss, Gerhard Rigoll
Keyword(s):

2021, pp. 103748
Author(s): Peng Wang, Heng Sun, Xiangzhi Bai, Sheng Guo, Darui Jin

Author(s): Ying Gao, Xiaohan Feng, Tiange Zhang, Eric Rigall, Huiyu Zhou, ...

Author(s): Xuejing Lei, Ganning Zhao, Kaitai Zhang, C.-C. Jay Kuo

An explainable, efficient, and lightweight texture generation method, called TGHop (an acronym for Texture Generation PixelHop), is proposed in this work. Although deep neural networks can synthesize visually pleasing textures, the associated models are large, difficult to explain theoretically, and computationally expensive to train. In contrast, TGHop has a small model size, is mathematically transparent, is efficient in both training and inference, and generates high-quality textures. Given an exemplar texture, TGHop first crops many patches out of it to form a collection of sample patches called the source. It then analyzes the pixel statistics of samples from the source and, using the PixelHop++ framework, obtains a sequence of fine-to-coarse subspaces for these patches. To generate texture patches, TGHop begins with the coarsest subspace, called the core, and generates samples in each subspace following the distribution of the real samples. Finally, the generated patches are stitched to form large texture images. Experimental results demonstrate that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
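The patch-subspace pipeline described in the abstract can be sketched in a toy form. Here ordinary PCA stands in for the PixelHop++ transform and an independent-Gaussian model stands in for the learned core distribution; both are simplifying assumptions for illustration, not the paper's exact method, and the stitching step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_patches(texture, patch, n):
    """Crop n random square patches from an exemplar texture (the 'source')."""
    H, W = texture.shape[:2]
    ys = rng.integers(0, H - patch, n)
    xs = rng.integers(0, W - patch, n)
    return np.stack([texture[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])

def fit_subspace(patches, dim):
    """PCA stand-in for one transform stage: a data-driven linear subspace."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:dim]                     # basis of the dim-D subspace

def sample_core(patches, mean, basis, n):
    """Draw new coefficients that follow the empirical core statistics."""
    coeffs = (patches.reshape(len(patches), -1) - mean) @ basis.T
    mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0)
    new = rng.normal(mu, sigma, size=(n, len(mu)))   # independent-Gaussian model
    return (new @ basis + mean).reshape(n, *patches.shape[1:])

exemplar = rng.random((64, 64))               # toy exemplar texture
source = crop_patches(exemplar, 16, 200)      # the "source" patch collection
mean, basis = fit_subspace(source, 8)         # coarse 8-D "core" subspace
generated = sample_core(source, mean, basis, 4)
print(generated.shape)                        # (4, 16, 16)
```

In the actual method the subspace sequence is multi-stage and fine-to-coarse, and generated core samples are mapped back up through each finer subspace before stitching.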


2021, Vol 171, pp. 266-277
Author(s): Hanwen Xu, Xinming Tang, Bo Ai, Xiaoming Gao, Fanlin Yang, ...

Author(s): Zihan Liu, Guanghong Gong, Ni Li, Zihao Yu

Three-dimensional (3D) reconstruction of the human head with high precision has promising applications in scientific research, product design, and other fields. However, it is still hindered by two factors: inaccurate registration caused by the symmetrical distribution of head feature points, and the high cost of high-accuracy sensors. Research on 3D reconstruction with portable consumer RGB-D sensors such as the Microsoft Kinect has therefore attracted much attention in recent years. Based on our multi-Kinect system, this paper introduces a precise, low-cost 3D modeling method and its system implementation. A registration method for multi-source point clouds is provided, which reduces fusion differences and reconstructs the head model accurately. In addition, a template-based texture generation algorithm is presented to produce a fine texture. Comparison and analysis of our experiments show that our method can reconstruct a head model in acceptable time, with less memory and better visual quality.
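The multi-source point-cloud registration step can be illustrated with a minimal rigid-alignment sketch. This uses the standard Kabsch (orthogonal Procrustes) solution for points with known correspondences; it is a generic stand-in for one alignment step between two Kinect views, not the authors' registration method, and the toy data below are invented for the check:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch: best rotation R and translation t with dst ~= src @ R.T + t,
    given corresponding 3-D points (one step of multi-view registration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
cloud = rng.random((100, 3))                  # synthetic "head" points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
moved = cloud @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(cloud, moved)
print(np.allclose(R, R_true))                 # True
```

Real multi-Kinect data lacks known correspondences, so this closed-form step would sit inside an iterative scheme such as ICP, with the symmetry of head features making the initial alignment the hard part, as the abstract notes.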


Author(s): Vadim Sanzharov, Vladimir Frolov, Alexey Voloboy

Photorealistic rendering systems have recently found new applications in artificial intelligence, specifically in computer vision, for generating image and video datasets. The challenge in this application is producing a large number of photorealistic images with high variability of 3D models and their appearance. In this work, we propose an approach that combines existing procedural texture generation techniques with domain randomization to generate a large number of highly varied digital assets during the rendering process. This eliminates the need for a large pre-existing database of digital assets (only a small set of 3D models is required) and produces objects with unique appearance at the rendering stage, reducing post-processing and storage requirements. Our approach uses procedural texturing and material substitution to rapidly produce many variations of digital assets. The proposed solution can be used to produce training datasets for artificial intelligence applications and can be combined with most state-of-the-art scene generation methods.
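The combination of procedural texturing and domain randomization can be sketched in a toy form: a procedural value-noise texture plus per-asset random material parameters. The `value_noise` and `randomized_material` helpers and all parameter ranges are hypothetical illustrations, not the renderer integration the authors describe:

```python
import numpy as np

rng = np.random.default_rng(42)

def value_noise(size, cells):
    """Procedural value-noise texture: random grid values, smooth upsampling."""
    grid = rng.random((cells + 1, cells + 1))
    xs = np.linspace(0, cells, size, endpoint=False)
    i = xs.astype(int)
    f = xs - i
    f = f * f * (3 - 2 * f)                   # smoothstep interpolation weights
    fx, fy = f[None, :], f[:, None]
    ix, iy = i[None, :], i[:, None]
    top = grid[iy, ix] * (1 - fx) + grid[iy, ix + 1] * fx
    bot = grid[iy + 1, ix] * (1 - fx) + grid[iy + 1, ix + 1] * fx
    return top * (1 - fy) + bot * fy          # values stay in [0, 1]

def randomized_material():
    """Domain randomization: draw texture and material parameters per asset."""
    return {
        "albedo": value_noise(size=128, cells=int(rng.integers(2, 9))),
        "roughness": float(rng.uniform(0.1, 0.9)),   # illustrative range
        "tint": rng.random(3),                        # random RGB multiplier
    }

materials = [randomized_material() for _ in range(3)]
print(materials[0]["albedo"].shape)           # (128, 128)
```

In a real pipeline each generated material would be substituted onto one of the few base 3D models at render time, so every rendered instance gets a unique appearance without any stored texture database.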

