texture mapping
Recently Published Documents

TOTAL DOCUMENTS: 590 (FIVE YEARS: 72)
H-INDEX: 32 (FIVE YEARS: 4)

Author(s): Xingquan Cai, Dingwei Feng, Mohan Cai, Chen Sun, Haiyan Sun

To address the low efficiency and severe mapping distortion of current mesh parameterization methods, we present a low-distortion mesh parameterization mapping method based on a proxy function and a combined Newton's method. First, the proposed method calculates the visual blind areas and distortion-prone areas of a 3D mesh model and generates a model slit. The method then performs Tutte mapping on the cut three-dimensional mesh model, measures the mapping distortion of the model, and outputs a distortion metric function and distortion values. Finally, the method sets iteration parameters, establishes a reference mesh, and finds the optimal coordinate points to obtain a convergent mesh model. When measuring mapping distortion, the Dirichlet energy function is used for isometric mapping distortion and the MIPS energy function for conformal mapping distortion. To find the minimum of the mapping distortion metric function, we use an optimization method that combines proxy functions with a combined Newton's method. Experimental data show that the proposed method offers high execution efficiency, rapid descent of the mapping-distortion energy, and stable convergence to the optimal value. When texture mapping is performed, the texture is evenly coloured, closely laid, and uniformly lined, meeting the standards of practical applications.
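The Dirichlet energy named above has a simple per-triangle form: flatten each 3D triangle into a local 2D frame, compute the Jacobian of the map to UV space, and weight its squared Frobenius norm by the triangle area (an isometric map scores 2 per unit area). A minimal NumPy sketch of that measure, independent of the paper's proxy-function and combined-Newton solver:

```python
import numpy as np

def local_frame(p0, p1, p2):
    """Flatten a 3D triangle into an isometric local 2D frame."""
    e1, e2 = p1 - p0, p2 - p0
    x = e1 / np.linalg.norm(e1)
    n = np.cross(e1, e2)
    y = np.cross(n / np.linalg.norm(n), x)
    return np.array([[0.0, 0.0],
                     [e1 @ x, e1 @ y],
                     [e2 @ x, e2 @ y]])

def dirichlet_energy(tri3d, uv):
    """Per-triangle Dirichlet energy A * ||J||_F^2 of the 3D -> UV map."""
    P = local_frame(*tri3d)                              # flattened 3D coords
    E3 = np.column_stack([P[1] - P[0], P[2] - P[0]])     # 2x2 edge matrix (3D side)
    E2 = np.column_stack([uv[1] - uv[0], uv[2] - uv[0]]) # 2x2 edge matrix (UV side)
    J = E2 @ np.linalg.inv(E3)                           # Jacobian of the map
    area = 0.5 * abs(np.linalg.det(E3))
    return area * np.sum(J * J)
```

Summing this quantity over all triangles gives the isometric-distortion term that the optimizer drives down; an undistorted map attains the lower bound of twice the total area.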


2021, Vol 14 (1), pp. 50
Author(s): Haiqing He, Jing Yu, Penggen Cheng, Yuqian Wang, Yufeng Zhu, ...

Most 3D CityGML building models in street-view maps (e.g., Google, Baidu) lack texture information, which is generally used to reconstruct real-scene 3D models with photogrammetric techniques such as unmanned aerial vehicle (UAV) mapping. However, owing to simplified building geometry and inaccurate location information, the commonly used photogrammetric method with a single data source cannot satisfy the texture-mapping requirements of CityGML building models. Furthermore, a single data source usually suffers from problems such as object occlusion. To alleviate these problems, we propose a novel approach that achieves CityGML building model texture mapping by multiview coplanar extraction from UAV remotely sensed or terrestrial images. We utilize a deep convolutional neural network to filter out occluding objects (e.g., pedestrians, vehicles, and trees) and obtain the building-texture distribution. Point-line-based features are extracted to characterize multiview coplanar textures in 2D space under the constraint of a homography matrix, and geometric topology is subsequently applied to optimize texture boundaries using a strategy that combines the Hough transform with iterative least squares. Experimental results show that the proposed approach enables texture mapping of building façades from 2D terrestrial images without exterior orientation information; that is, unlike the photogrammetric method, a collinearity equation is not essential to capture texture information. In addition, the proposed approach significantly reduces blurred and distorted textures in building models, making it suitable for automatic and rapid texture updates.
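The homography constraint mentioned above is what ties coplanar façade textures together across views: points on a common plane transfer between two images through a single 3×3 matrix. A minimal sketch estimating that matrix from four coplanar correspondences with the standard direct linear transform (not the paper's full point-line pipeline):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def homography_from_points(src, dst):
    """Direct linear transform (DLT) from >= 4 coplanar correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)   # null vector = H up to scale
```

With the homography in hand, any texture patch on the façade plane can be transferred to another view for coplanar-consistency checks.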


2021, Vol 13 (24), pp. 5135
Author(s): Yahya Alshawabkeh, Ahmad Baik, Ahmad Fallatah

The work described in this paper emphasizes the importance of integrating imagery and terrestrial laser scanning (TLS) techniques to optimize the geometry and visual quality of Heritage BIM (HBIM). The fusion-based workflow was applied during the recording of Zee Ain Historical Village in Saudi Arabia. The village is a unique example of traditional human settlement and represents a complex natural and cultural heritage site. The proposed workflow divides data integration into two levels. At the basic level, UAV photogrammetry, with its enhanced mobility and visibility, is used to map the rugged terrain and supplement TLS point data in upper and inaccessible building zones where scan shadows leave data gaps. Merging the point clouds ensures that the building's overall geometry is correctly rebuilt and that data interpretation is improved during HBIM digitization. Beyond correct geometry, texture mapping is particularly important in the cultural heritage field. Constructing realistic textures remains a challenge in HBIM: because the standard textures and materials provided in BIM libraries do not allow reliable representation of heritage structures, mapping and sharing information are not always truthful. Thereby, at the second level, the workflow proposes a true-orthophoto texturing method for HBIM models that combines close-range imagery and laser data. True orthophotos have a uniform scale that depicts all objects in their correct planimetric positions, providing reliable and realistic mapping. The process begins with the development of a digital surface model (DSM) by sampling TLS 3D points into a regular grid, with each cell uniquely associated with a model point. Each DSM cell is then projected into the corresponding perspective imagery to map the relevant spectral information. The method allows flexible data fusion and image capture using either a TLS-mounted camera or a separate camera at the optimal time and viewpoint for radiometric data.
The developed workflow demonstrated adequate results in terms of complete, realistically textured HBIM, allowing a better understanding of complex heritage structures.
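The DSM-to-image projection step can be sketched with a plain pinhole model: each grid cell becomes a 3D point, which is projected through the camera intrinsics and pose to fetch its colour. The camera parameters below are illustrative assumptions, and the occlusion test a true orthophoto also needs is omitted:

```python
import numpy as np

def project_dsm_cell(X, K, R, t):
    """Project a 3D DSM point into a perspective image (pinhole model)."""
    x_cam = R @ X + t          # world -> camera coordinates
    x_img = K @ x_cam          # camera -> pixel (homogeneous)
    return x_img[:2] / x_img[2]

def sample_true_ortho(dsm, origin, cell, K, R, t, image):
    """For each DSM cell, fetch the colour of its projected pixel
    (nearest-neighbour sampling; visibility testing omitted here)."""
    h, w = dsm.shape
    ortho = np.zeros((h, w), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            X = np.array([origin[0] + j * cell,
                          origin[1] + i * cell,
                          dsm[i, j]])
            u, v = project_dsm_cell(X, K, R, t)
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
                ortho[i, j] = image[vi, ui]
    return ortho
```

Because each orthophoto cell is sampled at its planimetric position, the result has the uniform scale the abstract describes, regardless of the camera's perspective.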


2021, Vol 12 (4), pp. 39-61
Author(s): Adnane Ouazzani Chahdi, Anouar Ragragui, Akram Halli, Khalid Satori, ...

Per-pixel displacement mapping is a texture mapping technique that adds a microrelief effect to 3D surfaces without increasing the density of their meshes. The technique relies on ray tracing to find the intersection between the viewing ray and the microrelief stored in a 2D texture called a depth map. This intersection determines the corresponding pixel, producing an illusion of surface displacement instead of a real one. Cone tracing is a per-pixel displacement mapping technique for real-time rendering that relies on encoding the empty space around each pixel of the depth map. During the preprocessing stage, this space is encoded as top-opened cones and stored in a 2D texture; during the rendering stage, it is used to converge more quickly to the intersection point. The cone tracing technique produces satisfactory results on flat surfaces, but on curved surfaces it does not handle the silhouette at the edges of the 3D mesh; that is, the relief merges with the surface of the object and is not rendered correctly. To overcome this limitation, we present two new cone tracing algorithms that take the curvature of the 3D surface into account to determine which fragments belong to the silhouette. Both algorithms are based on a quadratic approximation of the object geometry at each vertex of the 3D mesh. The main objective of this paper is to achieve texture mapping with a realistic appearance at low cost, so that rendered objects exhibit real, complex details visible over their entire surface without modifying their geometry. Being based on ray tracing, our contribution is well suited to the current generation of graphics cards, whose programmable units and associated frameworks now integrate ray-tracing technology.


2021, Vol 10 (12), pp. 798
Author(s): Xuequan Zhang, Wei Liu, Bing Liu, Xin Zhao, Zihe Hu

A high-fidelity 3D urban building model requires large quantities of detailed textures, which may be non-tiled or tiled. Fast loading and rendering of these models remain challenges in web-based large-scale 3D city visualization. Traditional texture atlas methods compress all the textures of a model into one atlas, which requires extra blank space, and the size of the atlas is uncontrollable. This paper introduces a size-adaptive texture atlas method that packs all the textures of a model without losing accuracy or adding storage space. Our method comprises two major steps: texture atlas generation and texture atlas remapping. First, all the textures of a model are classified as non-tiled or tiled. The maximum supported texture size is queried from the graphics hardware, and all the textures are packed into one or more atlases. Then, the texture atlases are remapped onto the geometric meshes. For a triangle with an original non-tiled texture, new texture coordinates in the atlas can be calculated directly. A triangle with an original tiled texture, however, is clipped into many unit triangles before texture mapping is applied. Although this increases the mesh vertex count, the added geometric vertices have much less impact on rendering efficiency than increasing the texture space would. Experimental results show that our method significantly improves building model rendering efficiency for large-scale 3D city visualization.
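The atlas-generation step can be sketched with a simple shelf-packing scheme: rectangles are placed left to right on shelves, a new shelf opens when a row fills, and a new atlas opens when the maximum supported size would be exceeded. This is a generic illustration, not the paper's exact packing algorithm:

```python
def shelf_pack(sizes, atlas_size):
    """Pack (w, h) rectangles into one or more square atlases with a
    shelf algorithm; returns (atlas_index, x, y) per input rectangle."""
    atlases = []                     # per-atlas cursor state

    def new_atlas():
        atlases.append({"x": 0, "y": 0, "shelf_h": 0})
        return len(atlases) - 1

    current = new_atlas()
    # Sort by height (descending) for tighter shelves, keep original order
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i][1])
    out = [None] * len(sizes)
    for i in order:
        w, h = sizes[i]
        if w > atlas_size or h > atlas_size:
            raise ValueError("texture exceeds maximum atlas size")
        a = atlases[current]
        if a["x"] + w > atlas_size:            # row full: open a new shelf
            a["x"], a["y"] = 0, a["y"] + a["shelf_h"]
            a["shelf_h"] = 0
        if a["y"] + h > atlas_size:            # atlas full: open a new one
            current = new_atlas()
            a = atlases[current]
        out[i] = (current, a["x"], a["y"])
        a["x"] += w
        a["shelf_h"] = max(a["shelf_h"], h)
    return out
```

The remapping step would then rescale each triangle's UV coordinates by the returned atlas offsets; querying the hardware limit (e.g., `GL_MAX_TEXTURE_SIZE` in OpenGL) supplies `atlas_size`.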


2021, Vol 12 (1), pp. 206-218
Author(s): Victor Gouveia de M. Lyra, Adam H. M. Pinto, Gustavo C. R. Lima, João Paulo Lima, Veronica Teichrieb, ...

With growing access to faster computers and more powerful cameras, 3D reconstruction of objects has become a major topic of research and demand. The task is widely applied in creating virtual environments, building object models, and other activities. One technique for obtaining 3D features is photogrammetry, which maps objects and scenarios using only images. However, this process is costly and can be quite time-consuming for large datasets. This paper proposes a robust, efficient reconstruction pipeline with low batch-processing runtime and permissively licensed code, which can even be commercialized without keeping the source open. We combine an improved structure-from-motion algorithm with a recurrent multi-view stereo reconstruction, and use the Point Cloud Library for normal estimation, surface reconstruction, and texture mapping. We compare our results with state-of-the-art techniques using benchmarks and our own datasets. The results show a 69.4% decrease in average execution time with high quality, although more images are needed to achieve a complete reconstruction.
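The normal-estimation step that the pipeline delegates to the Point Cloud Library is classically done by PCA on each point's neighbourhood: the eigenvector of the covariance matrix with the smallest eigenvalue approximates the surface normal. A minimal NumPy sketch of that idea (PCL itself is a C++ library):

```python
import numpy as np

def estimate_normal(points):
    """Estimate a surface normal for a point neighbourhood via PCA:
    the covariance eigenvector with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance
```

The sign of the normal is ambiguous; real pipelines orient it consistently, e.g., towards the viewpoint that observed the neighbourhood.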


2021, Vol 2066 (1), pp. 012042
Author(s): Xiaoxue Yang

Abstract. With the rapid development of computer and measurement technology, three-dimensional point cloud data, an important data form in computer graphics, is widely used in reverse engineering, surveying, robotics, virtual reality, stereo 3D imaging, indoor scene reconstruction, and many other fields. This paper studies the key technology of seam fusion in multi-view image texture mapping for 3D point cloud data and proposes a joint coding and compression scheme for multi-view image textures, replacing the previous independent coding scheme that applies MVC-standard compression to each view's texture. Experimental studies show that joint multi-view texture-depth coding improves performance to different degrees compared with the other two current 3D MVD data coding schemes. Especially for the Ballet and Dancer sequences, which have better depth video quality, the gain of JMVDC is pronounced: compared with the KS_IBP structure, it reaches up to 1.34 dB at the same bit rate.
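Coding gains quoted in dB at matched bit rates are conventionally measured as the peak signal-to-noise ratio between the reference and reconstructed views. A minimal sketch of that metric (the joint coding scheme itself is not reproduced here):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 1.34 dB gain at the same bit rate means the jointly coded textures land 1.34 dB higher on this scale than the KS_IBP-coded ones.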


2021, pp. 1-28
Author(s): Bob Davis, Phil Harding, Matt Leivers

Newly discovered and previously documented Late Neolithic chalk plaques from the Stonehenge locality have been subjected to new, non-invasive techniques which allow access to previously unseen elements of archaeological evidence. The application of these methods – involving Reflectance Transformation Imaging (RTI) and Polynomial Texture Mapping (PTM) – has revealed detail of the surface preparation and allowed methods and sequence of the compositions to be unpicked, clarifying their complexities. The results reveal a range of approaches to the compositions, some of which demonstrate planning, order, and intention while others include less systematic, rapidly executed sketches. Investigations of lines and surfaces have been made, supplemented by preliminary studies of replicated test pieces, to examine potential implements used in their creation and remark on plaque biographies and surface attrition following manufacture. Furthermore, detail revealed by RTI provides indications of the orientations in which some of the plaques should be viewed and – in one instance – suggests a ‘reflected’ element that may not be entirely abstract. Results from improved radiocarbon determinations place the plaques in the early part of the 3rd millennium bc which, together with identification of individual motifs, allows the plaques and the designs to be reconsidered within the corpus of Neolithic art in the British Isles.
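Polynomial Texture Mapping, mentioned above, models each pixel's brightness as a biquadratic function of the projected light direction: six coefficients are fitted by least squares from images captured under varying lights, and relighting then just evaluates the polynomial. A minimal sketch of that fit for a single pixel:

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit per-pixel PTM coefficients by least squares.
    light_dirs: (n, 2) projected light directions (lu, lv).
    intensities: (n,) observed intensities of one pixel."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic basis: lu^2, lv^2, lu*lv, lu, lv, 1
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def eval_ptm(coeffs, lu, lv):
    """Relight a pixel under a new projected light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5
```

It is this per-pixel model, evaluated under interactively moved lights, that makes the shallow incised lines on the plaques legible.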


2021, Vol 13 (17), pp. 3458
Author(s): Chong Yang, Fan Zhang, Yunlong Gao, Zhu Mao, Liang Li, ...

With progress in photogrammetry and computer vision technology, three-dimensional (3D) reconstruction from aerial oblique images has been widely applied in urban modelling and smart city applications. However, state-of-the-art image-based automatic 3D reconstruction methods cannot effectively handle the unavoidable geometric deformation and incorrect texture mapping caused by moving cars in a city. This paper proposes a method to prevent the influence of moving cars on 3D modelling by recognizing them and combining the recognition results with the photogrammetric 3D modelling procedure. Through car detection using deep learning and multiview geometry constraints, we analyse the movement state of each car and apply suitable preprocessing to the geometric model generation and texture mapping steps of the 3D reconstruction pipeline. First, we apply the Mask R-CNN object detection method to detect cars in oblique images. Then, a detected car and its corresponding image patch, located in the other view images via the geometry constraints, are used to identify the car's movement state. Finally, the geometry and texture information corresponding to a moving car is processed according to that state. Experiments on three urban datasets demonstrate that the proposed method effectively recognizes and removes moving cars and can repair the geometric deformation and erroneous texture mapping they cause. The method can also be applied to eliminate other moving objects in 3D modelling applications.
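The multiview consistency idea behind the moving-car check can be illustrated simply: if a detected car's image patch does not match the patch at the geometrically corresponding location in another view, the car has likely moved between exposures. A toy sketch using normalized cross-correlation (the threshold is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def is_moving(patch_view1, patch_view2, threshold=0.7):
    """Flag a detected car as moving when its appearance is inconsistent
    across views at the geometrically corresponding location."""
    return ncc(patch_view1, patch_view2) < threshold
```

In the full pipeline, the corresponding patch would come from the geometry constraints between oblique views; here both patches are simply assumed given.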

