Per-pixel displacement mapping using cone tracing with correct silhouette

2021 ◽  
Vol 12 (4) ◽  
pp. 39-61
Author(s):  
Adnane Ouazzani Chahdi ◽  
Anouar Ragragui ◽  
Akram Halli ◽  
Khalid Satori ◽  
...  

Per-pixel displacement mapping is a texture mapping technique that adds a microrelief effect to 3D surfaces without increasing the density of their corresponding meshes. This technique relies on ray tracing algorithms to find the intersection point between the viewing ray and the microrelief stored in a 2D texture called a depth map. This intersection makes it possible to determine the corresponding pixel and thus produce an illusion of surface displacement instead of a real one. Cone tracing is one of the per-pixel displacement mapping techniques for real-time rendering; it relies on encoding the empty space around each pixel of the depth map. During the preprocessing stage, this space is encoded in the form of top-opened cones and stored in a 2D texture; during the rendering stage, it is used to converge more quickly to the intersection point. The cone tracing technique produces satisfactory results on flat surfaces, but on curved surfaces it does not support the silhouette at the edges of the 3D mesh: the relief merges with the surface of the object and is not rendered correctly. To overcome this limitation, we present two new cone tracing algorithms that take the curvature of the 3D surface into account to determine the fragments belonging to the silhouette. Both algorithms are based on a quadratic approximation of the object geometry at each vertex of the 3D mesh. The main objective of this paper is to achieve texture mapping with a realistic appearance at low cost, so that rendered objects exhibit real and complex details visible over their entire surface without any modification of their geometry. Being based on the ray tracing algorithm, our contribution can be useful on the current generation of graphics cards, whose programmable units and associated frameworks now integrate ray tracing technology.
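As a rough illustration of the cone-stepping idea described in this abstract, the following Python sketch marches a ray through a 1D height field; the texel layout, the `cone_ratio` encoding, and the step formula are simplified assumptions for exposition, not the authors' shader code.

```python
import numpy as np

def cone_step_intersect(depth, cone_ratio, origin, direction, max_iters=64):
    """Find where a ray hits a 1D height field using cone stepping (sketch).

    depth[i]      -- relief depth below the surface top at texel i
    cone_ratio[i] -- tangent of the half-angle of the widest empty cone
                     opening upward above texel i (precomputed offline)
    origin        -- (x, d) start point, d measured downward from the top
    direction     -- (dx, dd) with dd > 0 (ray travels into the relief)
    """
    x, d = origin
    dx, dd = direction
    for _ in range(max_iters):
        i = int(np.clip(round(x), 0, len(depth) - 1))
        if d >= depth[i]:                 # at or below the stored relief: hit
            return x, d
        # Largest safe step: advance to the boundary of the empty cone,
        # instead of taking many small fixed-size steps.
        t = (depth[i] - d) / (dd + abs(dx) / max(cone_ratio[i], 1e-6))
        x, d = x + t * dx, d + t * dd
    return x, d
```

On a flat relief at depth 0.5, a vertical ray converges to the surface in one cone step, which is exactly the speed-up the encoded empty space buys over linear stepping.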

Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 82
Author(s):  
Geonwoo Kim ◽  
Deokwoo Lee

Recovery of three-dimensional (3D) coordinates from a set of images, with texture mapping to generate a 3D mesh, has been of great interest in computer graphics and 3D imaging applications. This work proposes an approach to adaptive view selection (AVS) that determines the optimal number of images for generating the synthesis result from the 3D mesh and textures, in terms of computational complexity and image quality (peak signal-to-noise ratio, PSNR). All 25 images were acquired by a set of cameras in a 5×5 array, and rectification had already been performed. To generate the mesh, depth map extraction was carried out by calculating the disparity between matched feature points. Synthesis was performed by fully exploiting the content of the images, followed by texture mapping. Both the 2D color images and the grey-scale depth images were synthesized based on the geometric relationship between the images; to this end, three-dimensional synthesis was performed with fewer than the full 25 images. This work determines the optimal number of images that suffices to provide a reliable 3D extended view by generating a mesh and image textures. The optimal number of images yields an efficient system for 3D view generation that reduces computational complexity while preserving the quality of the result in terms of PSNR. Experimental results are provided to substantiate the proposed approach.
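The depth-map-from-disparity step mentioned in this abstract follows the standard rectified-stereo relation Z = f·B/d. The minimal sketch below, with hypothetical focal length and baseline values, illustrates that relation only; it is not the authors' pipeline.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth (metres) from disparity for a rectified camera pair.

    Z = f * B / d, with focal length f in pixels, baseline B in metres,
    and disparity d in pixels. Zero disparity maps to infinite depth.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)
```

For example, with an assumed 500 px focal length and a 0.1 m baseline, a 10 px disparity corresponds to a point 5 m away.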


2006 ◽  
Vol 128 (9) ◽  
pp. 945-952 ◽  
Author(s):  
Sandip Mazumder

Two different algorithms to accelerate ray tracing in surface-to-surface radiation Monte Carlo calculations are investigated. The first is the well-known binary spatial partitioning (BSP) algorithm, which recursively bisects the computational domain into a set of hierarchically linked boxes that are then used to narrow down the number of ray-surface intersection calculations. The second is the volume-by-volume advancement (VVA) algorithm. This algorithm is new and employs the volumetric mesh to advance the ray through the computational domain until a legitimate intersection point is found. The algorithms are tested on two classical problems, namely an open box and a box within a box, in both two-dimensional (2D) and three-dimensional (3D) geometries with various mesh sizes. Both algorithms are found to yield orders-of-magnitude gains in computational efficiency over direct calculations that employ no acceleration strategy. For three-dimensional geometries, the VVA algorithm is clearly superior to BSP, particularly for cases with obstructions within the computational domain. For two-dimensional geometries, the VVA algorithm is superior to the BSP algorithm only when obstructions are present and densely packed.
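A BSP traversal of hierarchically linked boxes ultimately reduces to cheap ray/axis-aligned-box tests. The slab method is the usual way to perform that test; the Python sketch below is an illustrative helper, not code from the paper.

```python
import numpy as np

def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method ray / axis-aligned-box test.

    Returns (hit, t_near): whether the ray hits the box, and the parametric
    distance to the entry point (0 if the origin is already inside).
    """
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    with np.errstate(divide="ignore"):
        inv = 1.0 / direction            # +/-inf on axis-parallel components
    t1 = (np.asarray(box_min, float) - origin) * inv
    t2 = (np.asarray(box_max, float) - origin) * inv
    t_near = np.minimum(t1, t2).max()    # latest entry across the three slabs
    t_far = np.maximum(t1, t2).min()     # earliest exit across the three slabs
    hit = t_near <= t_far and t_far >= 0.0
    return hit, max(t_near, 0.0)
```

Because each box test is this cheap, culling candidate surfaces with a box hierarchy pays off whenever the direct all-surfaces intersection loop is the bottleneck.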


2014 ◽  
Vol 41 (10) ◽  
pp. 102302 ◽  
Author(s):  
Xiaoyao Fan ◽  
Songbai Ji ◽  
Alex Hartov ◽  
David W. Roberts ◽  
Keith D. Paulsen

Robotica ◽  
2018 ◽  
Vol 36 (10) ◽  
pp. 1493-1509
Author(s):  
Diego Mercado ◽  
Pedro Castillo ◽  
Rogelio Lozano

SUMMARY: Safe and accurate navigation for autonomous trajectory tracking of quadrotors using monocular vision is addressed in this paper. A second-order sliding mode (2-SM) control algorithm is used to track desired trajectories, providing robustness against model uncertainties and external perturbations. The time-scale separation of the translational and rotational dynamics makes it possible to design position controllers that issue desired references in roll and pitch angles, which is suitable for practical validation on quadrotors equipped with an internal attitude controller. A Lyapunov-based analysis proves the closed-loop stability of the system despite the presence of unknown external perturbations. Monocular vision fused with inertial measurements is used to estimate the vehicle's pose with respect to unstructured scenes. In addition, the distance to potential collisions is detected and computed using the sparse depth map that also comes from the vision algorithm. The proposed strategy is successfully tested in real-time experiments using a low-cost commercial quadrotor.
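A widely used second-order sliding mode law is the super-twisting algorithm; the sketch below is a generic illustration of one discrete step of such a controller, with hypothetical gains and sliding variable, and is not the exact law designed in the paper.

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One integration step of a super-twisting 2-SM controller (sketch).

    s  -- sliding variable (e.g. tracking error plus a velocity term)
    v  -- integral state of the discontinuous term
    k1, k2 -- positive gains; dt -- integration step
    Returns the control input u and the updated integral state v.
    """
    sgn = math.copysign(1.0, s) if s != 0.0 else 0.0
    u = -k1 * math.sqrt(abs(s)) * sgn + v   # continuous sqrt term
    v = v - k2 * sgn * dt                    # integrated discontinuous term
    return u, v
```

The sqrt term gives finite-time convergence to the sliding surface, while the integrated sign term absorbs bounded matched perturbations without the chattering of a first-order sliding mode law.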


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 748
Author(s):  
Yulong An ◽  
Yanmei Zhang ◽  
Haichao Guo ◽  
Jing Wang

Low-cost Laser Detection and Ranging (LiDAR) is crucial to three-dimensional (3D) imaging in applications such as remote sensing, target detection, and machine vision. In conventional nonscanning time-of-flight (TOF) LiDAR, the intensity map is obtained by a detector array and the depth map is measured in the time domain, which requires costly sensors and short laser pulses. To overcome these limitations, this paper presents a nonscanning 3D laser imaging method that combines compressive sensing (CS) techniques with electro-optic modulation. In this scheme, electro-optic modulation maps the range information into the intensity of the echo pulses symmetrically, and the measurements of symmetrically structured pattern projections are received by a low-bandwidth detector. The 3D image can be extracted from two gain-modulated images recovered by solving underdetermined inverse problems. An integrated regularization model is proposed for the recovery problems, and the minimization functional is solved by a proposed algorithm applying the alternating direction method of multipliers (ADMM). Simulation results at various subrates indicate that the proposed method is feasible and outperforms conventional methods in systems with hardware limitations. The method is highly valuable for practical applications, with the advantages of low cost and a flexible structure at wavelengths beyond the visible spectrum.
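To illustrate how ADMM handles an ℓ1-regularized recovery problem of this kind, the sketch below solves a plain lasso, min 0.5‖Ax−b‖² + λ‖x‖₁; this is a generic stand-in, not the paper's integrated regularization model.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 (illustrative)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)   # split variable carrying the sparsity
    u = np.zeros(n)   # scaled dual variable
    # Factor (A^T A + rho I) once and reuse it every iteration.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```

Each iteration alternates a quadratic solve (the data term) with a cheap shrinkage (the ℓ1 term), which is why ADMM suits underdetermined CS recovery problems.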


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Saša R. Pavlovic ◽  
Velimir P. Stefanovic

This study presents the geometric aspects of the focal image of a solar parabolic concentrator (SPC), using the ray tracing technique to establish the parameters that allow designation of the most suitable geometry for coupling the SPC to an absorber-receiver. Efficient conversion of solar radiation into heat at these temperature levels requires the use of concentrating solar collectors. In this paper, a detailed optical design of the solar parabolic dish concentrator is presented. The system has a diameter D = 3800 mm and a focal distance f = 2260 mm. The parabolic dish of the solar system consists of 11 curvilinear trapezoidal reflective petals. For the construction of the solar collector, mild steel sheet and square pipe were used as the shell support for the reflecting surfaces. This paper presents optical simulations of the parabolic solar concentrator unit using the ray tracing software TracePro. The total flux on the receiver and the distribution of irradiance at the center and periphery of the receiver are given. The goal of this paper is to present the optical design of a low-tech solar concentrator that can serve as a potentially low-cost tool for laboratory-scale research on medium-temperature thermal processes, cooling, industrial processes, polygeneration systems, and so forth.
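From the stated aperture diameter D = 3800 mm and focal distance f = 2260 mm, basic paraboloid geometry gives the dish depth and rim angle directly. The short sketch below computes these derived quantities as an illustration of the textbook relations, not of the TracePro model itself.

```python
import math

def dish_geometry(D_mm, f_mm):
    """Depth and rim angle of a paraboloidal dish z = r^2 / (4 f).

    Depth at the rim:  h = D^2 / (16 f).
    Rim angle (axis to rim, seen from the focus):  psi = 2 * atan(D / (4 f)).
    """
    h = D_mm ** 2 / (16.0 * f_mm)
    psi_deg = math.degrees(2.0 * math.atan(D_mm / (4.0 * f_mm)))
    return h, psi_deg
```

For the paper's dimensions this gives a dish about 399 mm deep with a rim angle near 45.6°, which is the kind of geometric input the ray tracing simulation starts from.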


2013 ◽  
Vol 57 (6) ◽  
pp. 605011-605017
Author(s):  
Pablo Revuelta Sanz ◽  
Belén Ruiz Mezcua ◽  
José M. Sánchez Pena
Keyword(s):  
Low Cost ◽  

Author(s):  
R. Ravanelli ◽  
A. Nascetti ◽  
M. Crespi

Today, range cameras are widespread low-cost sensors based on two different principles of operation: Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at a high frame rate. However, the depth maps obtained are often noisy and not accurate enough, so it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to this issue. The aim of this paper is therefore to evaluate the feasibility of integrating these two different 3D modelling techniques, which have complementary features and are based on standard low-cost sensors.

For this purpose, a 3D model of a DUPLO™ brick construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon EOS 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness than using the two techniques separately.
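Retrieving the scale of a photogrammetric model from metric coordinates amounts to estimating a single similarity-scale factor between corresponding point sets. The helper below is an assumed, simplified procedure for illustration, not the authors' exact one.

```python
import numpy as np

def retrieve_scale(model_pts, metric_pts):
    """Scale s such that s * model ~ metric, up to rotation and translation.

    Uses the ratio of RMS distances from each point set's centroid, which is
    invariant to the unknown rigid transform between the two frames.
    Inputs are (N, 3) arrays of corresponding points.
    """
    m = np.asarray(model_pts, float)
    g = np.asarray(metric_pts, float)
    m = m - m.mean(axis=0)   # remove the arbitrary model origin
    g = g - g.mean(axis=0)   # remove the Kinect frame origin
    return np.sqrt((g ** 2).sum() / (m ** 2).sum())
```

Multiplying the photogrammetric model by this factor puts it in the metric frame of the range-camera coordinates, after which the two point clouds can be compared or fused.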

